Do you remember Wall-E and the Axiom? If not, then you may want to go and watch the movie before reading on …
What comes to mind when thinking about the humans on the Axiom? They are harmonious, homogeneous in lifestyle, well-functioning and polite. They follow their daily routines and somehow enjoy their time on the ship. The ship has taken control of their lives and organizes everything for them. Everything is predictable, under control and automatically managed 24×7.
An interesting question is whether such a comfortable state is really desirable. Intelligence is the ability to acquire and apply knowledge and skills in unexpected situations. But what happens if intelligence is no longer required in daily life? The Axiom could stimulate the humans by creating surprises and challenges, but the people would soon figure out that this is only a game and does not really matter. If typical human skills are no longer required, what motivates people? What is the value and position of humans in such an environment?
The situation on the Axiom is an extreme state. But the question of whether we are progressing in such a direction is an interesting one.
It feels as if the civilised world wants a “constant” that makes everything predictable and eliminates negative surprises. But such events are often not positive or negative on their own; it is more a matter of how people react to them. If a system like the Axiom takes care of everything, will human intelligence, cognitive processing, or even human value degenerate?
When people are in such a comfortable state, will they still attempt anything new, fearing to lose what they already have? Could the evolution of our society bring, as a consequence, a complete loss of agility? Are we already on the way to such a state, caught in daily routines defined by others?
“The greatest danger for most of us is not that our aim is too high and we miss it, but that it is too low and we reach it.”
Interaction with the environment is an essential ingredient of success in the mesh economy of the age of information. Many companies seem to struggle with this because they cannot let go of the past. Legacy defines what is possible and also what success means. Activities are oriented inside-out, dominated by inherent constraints.
One reason might be that many organizations have lost the sovereignty to move forward. For quite some time, organizations were optimized by means of business process outsourcing. Near- and offshoring, it was believed, would bring efficiency. That was perhaps true in the age of industrialization, but it is no longer true in the age of information, where lean and agile structures, skills and technology make the difference while repetitive tasks are automated. The approach may have provided a bit of short-term profit polishing, but unfortunately resulted in a longer-term strategic disadvantage as skills and know-how moved to the sourcing partner.
Another reason might be the way business cases are used today. Any change requires a business case. This was relatively simple in the slow-changing age of individualization, and it still works well for small changes today. It does not work well in the world of VUCA.
In addition, the dynamics inside organizations do not help. One's own business case has to be better than most other cases to have a chance in the internal competition for funding, so it gets optimized a bit. Since everyone knows this, all the cases end up being tuned. Management sees the number of excellent business cases as a luxury problem; it feels good to have such a choice.
On the flip side, the decision process becomes difficult and tedious. During all these processes and procedures, a lot of time passes. By the time decisions are finally made, the environmental conditions have changed, and everything should actually start over. But that would be too painful, so the execution of the initiatives starts anyway. At some point after the project start, management realizes that the projects are a little off the mark; the organization then re-prioritizes, and the internal bureaucracy starts again. The ego of the typical manager also amplifies the effects: a great manager cannot have done anything wrong, which brings the whole theater to the next round.
Bigger changes are hard to get through: the assumptions feel less intuitive and the risks are perceived to be higher. Hence small incremental changes are preferred, and the core fundamental ones are continuously pushed down in priority. The company's capacity for change slows down, and the cultural legacy is soon joined by technical legacy. Some people like this stage, as it offers the opportunity to launch simplification programs, which of course suffer from the same effects as all other projects.
I think there is only one way to resolve such a situation: push ‘refresh’. Rethink the business model and decompose the organization into self-sufficient units. Give them the competence to decide and also the responsibility to deal with the outcomes.
And this leads to the most challenging point: the moment when “the turkeys are asked to vote for Christmas” and to disrupt themselves.
Technological progress has radically transformed our concept of privacy. The way we share information and display our identities has changed as we have migrated to the digital world. We carry and interact with devices that give us access to all information, but they can also offer the world vast quantities of information about us. We leave digital footprints as we navigate the digital world. While this information can sometimes be harmless, it is often valuable to various stakeholders, including governments, corporations, marketers, and criminals.
The ethical debate around privacy is complex. Have our definitions and standards of privacy evolved over time? How will they continue to do so in the decades to come?
Emerging technologies will only further challenge the notion of privacy as we extend the integration of technologies such as virtual reality, the Internet of Things, and brain-machine interfaces.
Companies such as Neuralink are attempting to merge the human brain with machines, with potential implications for privacy. Brain-machine interfaces by their nature operate by extracting information from the brain and manipulating it in order to accomplish goals. Many parties could benefit from, and take advantage of, the information from such an interface. Marketing companies, for instance, would take an interest in better understanding how consumers think, and consequently in having those thoughts modified. Employers could use the information to find new ways to improve productivity, or even to monitor their employees. There will notably be risks of “brain hacking”, against which we must take extreme precautions. On the other hand, what if such an approach were used for medical treatment?
Technological advances are increasingly becoming part of who we are. The digital world has become an extension of our identities and of the physical world. We will, to a certain extent, need a new definition of privacy, at least until the notion itself becomes obsolete in the near future. As a society we have started to be more open in the digital world, voluntarily sharing identities, interests, views, and personalities. The question is the trade-off and balance between transparency and privacy.
Will there be a new definition of privacy, and if so, what will it be? Will ethical standards evolve, and will people agree to them?
Can technology and intelligent machines replace the human touch? While a machine can perform a given task, often more efficiently than we can, what it lacks is creativity in the activity, a uniquely human ability to cater to the needs of the individual. What is creativity? What are the needs of the individual? Even if a machine could determine an appropriate plan, humans will still want to interact with another person who has the creative expertise and experience to talk us through it, one who understands what creativity means in that context.
How much of this effect is real, and how much of it is specific to our generation? Will the next generation, or the generations after, have the same distrust and the same longing for such a “human” touch? This applies especially to digital natives, who grew up with the internet and social media as part of their everyday world.
The technologies of the industrial revolution (steam power and machinery) largely complemented human capabilities. The great question of our time is whether digital technology will complement or instead replace human capabilities. Can digital technology replace the understanding and judgement, let alone the empathy, required to successfully deliver services such as social care, or the qualities that lead us to enjoy and value interacting with each other rather than with machines?
Faster isn’t wiser: intelligence is defined in terms of the ability to acquire and apply knowledge and skills, but what is often missing is the act of taking decisions based on the ability to choose objectives, or to hold values that shape them.
Values are experience, not data: as we use increasingly powerful computers to create ever more sophisticated logical systems, we may succeed in making systems that resemble human thinking. But there will always be situations that can only be resolved by humans employing judgement based on values we can empathise with, grounded in turn in experiences we can relate to.
Artificial life, experience, values:
Intelligent machines can make choices based on the data available to them, but this is very different from a judgement based on values that emerge from our experience of life.