Ongoing loss of agility

Do you remember Wall-E and the Axiom? If not, then you may want to go and watch the movie before reading on …

What comes to mind when thinking about the humans on the Axiom? They are harmonious, homogeneous in lifestyle, well-functioning and polite. They follow their daily routines and somehow enjoy their time on the ship. The ship has taken control of their lives and organizes everything for them. Everything is predictable, under control and automatically managed 24×7.

An interesting question is whether such a comfortable state is really desirable. Intelligence is the ability to acquire and apply knowledge and skills in unexpected situations. But what happens if intelligence is no longer required in daily life? The Axiom could stimulate the humans by creating surprises and challenges, but the people would soon figure out that this is only a game and does not really matter. If typical human skills are no longer required, what motivates people? What is the value and position of humans in such an environment?

The situation on the Axiom is an extreme state. But the question of whether we are progressing in that direction is an interesting one.

It feels as if the civilised world wants a “constant” – to make everything predictable and eliminate negative surprises. But such events are often not positive or negative on their own; it is more about how people react to them. If a system like the Axiom takes care of everything, will human intelligence, skills, or even human value degenerate?

When people are in such a comfortable state, will they try anything new – fearing to lose what they already have? Could the evolution of our society bring, as a consequence, a complete loss of agility? Are we already on the way to such a state, caught in daily routines defined by others?

Need to break free from legacy ….

Interaction with the environment is an essential ingredient for success in the mesh economy of the age of information. Many companies seem to struggle with this because they cannot let go of the past. The legacy defines what is possible, and also what success means. Activities are oriented inside-out, dominated by the inherent constraints.

One reason might be that many organizations have lost the sovereignty to move forward. For quite some time, organizations were optimized by means of business process outsourcing. Nearshoring and offshoring, it was believed, would bring efficiency. That may have been true in the age of industrialization. But it is no longer true in the age of information, where lean and agile structures, skills and technology make the difference while repetitive tasks are automated. The approach may have provided a bit of short-term profit polishing, but unfortunately it resulted in a long-term strategic disadvantage as skills and know-how moved to the sourcing partner.

Another reason might be the way business cases are used today. Any change requires a business case. This was relatively simple in the slow-changing age of industrialization, and it still works well for small changes today. It does not work well in a world of VUCA.

In addition, the internal dynamics of organizations do not help. One’s own business case has to be better than most others to have a chance in the internal competition for funding, so it gets optimized a bit. Since everyone knows that, all the cases are tuned. Management sees the number of excellent business cases as a luxury problem – it feels good to have such a choice.

On the flip side, the decision process becomes difficult and tedious. While all these processes and procedures run, a lot of time passes. By the time decisions are finally made, the environmental conditions have changed and everything should actually start over. But that would be too painful, so execution of the initiatives starts anyway. At some point after the project start, management realizes that the projects are a little off the mark; the organization re-prioritizes, and the internal bureaucracy starts again. The ego of the typical manager then amplifies the effect – a great manager cannot have done anything wrong – which takes the whole theater to the next round.

Bigger changes are hard to get through – the assumptions feel less intuitive and the risks are perceived to be higher. Hence small incremental changes are preferred, and the fundamental ones are continuously pushed down in priority. The company’s capacity for change shrinks, and the cultural legacy is soon joined by technical legacy. Some people like this stage, as it offers the opportunity to launch simplification programs – which, of course, suffer from the same effects as all other projects.

I think there is only one way to resolve such a situation – push ‘refresh’. Rethink the business model and decompose the organization into self-sufficient units. Give them the competence to decide and also the responsibility to deal with the outcomes.

And this leads to the most challenging point – the moment when “the turkeys are asked to vote for Christmas” and to disrupt themselves.

Data Ethics: Privacy in the future?

Technological progress has radically transformed our concept of privacy. The way we share information and present our identities has changed as we have migrated to the digital world. We carry and interact with devices that give us access to all information, but they can also offer the world vast quantities of information about us. We leave digital footprints as we navigate the digital world. While this information can sometimes be harmless, it is often valuable to various stakeholders, including governments, corporations, marketers, and criminals.
The ethical debate around privacy is complex. How have our definition of and standards for privacy evolved over time? How will they continue to do so in the decades to come?
Emerging technologies will only further challenge the notion of privacy as we extend the integration of technologies such as virtual reality, the internet of things, and brain-machine interfaces.
Companies such as Neuralink attempt to merge the human brain with machines, with potential implications for privacy. Brain-machine interfaces by nature operate by extracting information from the brain and manipulating it in order to accomplish goals. Many parties could benefit from, and take advantage of, the information from such an interface. Marketing companies, for instance, would take an interest in better understanding how consumers think – and, consequently, in modifying their thoughts. Employers could use the information to find new ways to improve productivity, or even to monitor their employees. There will notably be risks of “brain hacking” against which we must take extreme precautions. On the other hand, what if such an approach were used for medical treatment?
Technological advances are increasingly becoming part of who we are. The digital world has become an extension of our identities and of the physical world. We will, to a certain extent, need a new definition of privacy – until such a notion becomes obsolete in the near future. As a society we have started to be more open in the digital world, voluntarily sharing identities, interests, views, and personalities. The question is the trade-off and balance between transparency and privacy.
Will there be a new definition of privacy, and if so, what will it be? Will ethical standards evolve, and will people agree to them?

Digital to replace the human touch?

Can technology or intelligent machines replace the human touch? While a machine can perform a given task, often more efficiently than we can, what it lacks is creativity in the activity – a uniquely human ability to cater to the needs of the individual. What is creativity? What are the needs of the individual? Even if a machine could determine an appropriate plan, humans will still want to interact with another person who has the creative expertise and experience to talk us through it – someone who understands creativity in that context.
How much of this effect is real, and how much of it is specific to our generation? Will the next generation, or the ones after it, have the same distrust and the same longing for such “human” touch – especially digital natives, who grew up with the internet and social media as part of their everyday world?
The technologies of the industrial revolution – steam power and machinery – largely complemented human capabilities. The great question of our time is whether digital technology will complement or instead replace human capabilities. Can digital technology replace the understanding and judgement – let alone the empathy – required to successfully deliver services such as social care, or replicate whatever it is that leads us to enjoy and value interacting with each other rather than with machines?
Faster isn’t wiser: intelligence is defined as the ability to acquire and apply knowledge and skills, but what is often missing is the act of taking decisions based on the ability to choose objectives, or to hold values that shape them.
Values are experience, not data: as we use increasingly powerful computers to create ever more sophisticated logical systems, we may succeed in making systems resemble human thinking. But there will always be situations that can only be resolved by humans employing judgement based on values we can empathise with – based, in turn, on experiences we can relate to.
Artificial life, experience, values:
Intelligent machines can make choices based on the data available to them, but this is very different from a judgement based on values that emerge from our experience of life.

Need for ethical data sharing standards ….

Currently there is an intense discussion about personal data and its ownership. Although exposing data is assumed to be inherently dangerous, the opposite may be true as well. The key question focuses on access control – managing data exposure in order to avoid the risks while still getting the benefits. Or maybe it is more about establishing broad ethical standards, as data availability is neither good nor bad on its own – the way it is used makes the difference.

Let’s look at some use cases that illustrate the challenge. Assume you feel an unusual pressure in your chest and decide to visit a doctor, who takes an electrocardiogram (ECG) that looks normal. What does this mean? The reading may be normal relative to a statistical sample yet very unusual for you – and the doctor does not have the necessary data to determine this.

Now assume that you have a wearable which collects data on key body functions on an ongoing basis. Many others do this as well, and the gathered data can be analysed, leading to algorithms that detect anomalies – such as an indication of an increased risk of a heart attack – on a personal level. Your data is now used in two ways: to build the algorithm and to detect health issues for you. Note that gathering data only when you feel bad does not have the same effect; the power comes from the availability of long-term data.
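The personal-baseline idea can be sketched in a few lines. This is a deliberately minimal illustration, not how a real wearable works: it assumes a simple statistical model (flag readings more than a few standard deviations from your own long-term history), and the function name and threshold are made up for the example.

```python
import statistics

def detect_anomalies(baseline, new_readings, threshold=3.0):
    """Flag readings that deviate strongly from this person's own baseline.

    baseline: long-term history of a body metric (e.g. resting heart rate)
    new_readings: recent measurements to check
    threshold: number of standard deviations considered anomalous
    """
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [x for x in new_readings if abs(x - mean) > threshold * stdev]

# A resting heart rate of 85 may be unremarkable for the population at large,
# but it is anomalous for a person whose own baseline hovers around 55.
baseline = [54, 55, 56, 55, 54, 56, 55, 54, 55, 56]
print(detect_anomalies(baseline, [55, 85]))  # only 85 is flagged
```

The point of the sketch is the one made above: without the long-term baseline, the function has nothing to compare against – a single measurement taken when you feel bad cannot tell “normal for the population” apart from “unusual for you”.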

Now assume that the new way to identify potential heart attacks substantially reduces the prospective costs. Especially in the long term, trends and indicators can be used to influence behavior, e.g. towards more physical activity.

The data gathered could also be used to derive further information, which could in turn be used for unethical purposes.

You might say that it is always possible to switch data gathering on and off. But depending on the type of data, such ‘white spots’ may themselves be problematic and will not help to better protect privacy. If gathering data is the norm, then switching it off could easily be seen as an indicator that somebody is trying to hide something – and may actually attract attention.

Such data could be pseudonymized – but it still needs some sort of tag to allow the correlations that power the analytics, and it may even be identifying in itself, like the ECG, which is also used for authentication.
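One common way to produce such a tag – sketched here as an assumption, since the essay does not prescribe a technique – is a keyed hash (HMAC): the identity is replaced by a stable opaque token, so records from the same person can still be correlated, while the identity itself stays hidden from anyone without the key. The key and identifiers below are hypothetical.

```python
import hmac
import hashlib

# Hypothetical secret held only by the data processor; without it
# a tag cannot be linked back to the underlying identity.
SECRET_KEY = b"processor-secret"

def pseudonymize(user_id: str) -> str:
    """Replace an identity with a stable tag via HMAC-SHA256.

    The same user always maps to the same tag, so longitudinal
    correlation for analytics still works, but the tag itself
    reveals nothing about who the user is.
    """
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()[:16]

tag_a = pseudonymize("alice@example.com")
tag_b = pseudonymize("alice@example.com")
tag_c = pseudonymize("bob@example.com")
assert tag_a == tag_b   # stable: records of one person stay linkable
assert tag_a != tag_c   # distinct users get distinct tags
```

Note that this only hides the identifier, not the data: as the paragraph above points out, a signal as individual as an ECG can re-identify a person regardless of how the tag is constructed.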

There are many such examples – if you share your GPS position, somebody may take advantage of the fact that you are far away from home, or use the data to rescue you after an accident.

I think that collecting data will become the norm. Protecting data will be key – but even more important is the need to establish ethical standards for how to deal with such data and the information derived from it.