People like simplicity. We prefer linear systems, in which an input translates in a clearly understandable way into an output or result.
The world, however, is a mesh: a complex network of elements that influence each other. A small effect on one side may trigger a chain reaction through the network and cause a big change.
Many organizations struggle to deal with the increasing complexity inherent in the emerging mesh economy. Established rules and patterns become less effective or even counterproductive. It has become difficult to stay lean while things change fast: organizations and their supporting infrastructures typically grow continuously, amalgamating approaches and technologies over many years.
The typical reaction is to launch some form of simplification program. Such programs require good KPIs to understand the benefits of the measures taken, but meaningful KPIs are hard to find.
Organizations typically start by counting elements and aim to reduce complexity by reducing the number of similar elements. Reducing, for example, the number of applications in use feels right, but it may come with side effects. It may decrease agility, which is exactly what is needed to deal with complexity. It may also reduce productivity, since broadly used general-purpose applications can make it harder to handle complicated situations where specialized tools provide better support.
All problems are different, and a best practice only fits a specific set of problems. One approach I find useful for measuring complexity in a network is based on a heuristic that goes back to Robert Glass:
“For every 25 percent increase in problem complexity, there is a 100 percent increase in solution complexity”
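Read as a compounding rule, the heuristic implies that solution complexity grows as a power law of problem size, with an exponent of log 2 / log 1.25 ≈ 3.1. Here is a minimal Python sketch of that reading; the function name and the compounding interpretation are mine, not Glass's:

```python
import math

# A 1.25x increase in problem size maps to a 2x increase in solution
# complexity, so complexity grows roughly as size**K with
# K = log(2) / log(1.25) ~= 3.1.
K = math.log(2) / math.log(1.25)

def solution_complexity(size, baseline=1.0):
    """Relative solution complexity, normalized to 1.0 at `baseline` size."""
    return (size / baseline) ** K

# Sanity check: a 25 percent larger problem is twice as complex.
assert abs(solution_complexity(5, baseline=4) - 2.0) < 1e-9
```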
Let’s run a thought experiment. Assume we want to measure the complexity of a system consisting of a set of functions that must interact with each other in a defined way.
- One extreme is to put all functions into a single block: we now have one complicated block that hides all interdependencies between functions from the outside world.
- The other extreme is to have a block for each function and connect the blocks according to the functional dependencies. Now each block is simple, but all interdependencies have been externalized, which makes the network complicated.
Applied to the thought experiment, the heuristic means:
- If we have 4 functions in a block and add a fifth (a 25 percent increase), the solution complexity of this block doubles.
- If we add a fifth connection to 4 existing connections between blocks, the complexity of the network likewise doubles, as the sketch below illustrates.
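Under this reading we can put rough numbers on the two extremes from the thought experiment. The sketch below is purely illustrative: the additive scoring, the function count, and the dependency counts are all assumptions of mine, not part of the heuristic itself.

```python
import math

K = math.log(2) / math.log(1.25)  # exponent implied by Glass's heuristic

def landscape_score(block_sizes, n_connections):
    """Toy score: each block contributes size**K, and the connection
    network contributes n_connections**K. The additive combination is
    an assumption, not part of the original heuristic."""
    return sum(size ** K for size in block_sizes) + n_connections ** K

n = 16  # hypothetical number of functions in the system

# One big block, nothing externalized.
monolith = landscape_score([n], n_connections=0)
# One block per function; assume 24 dependencies become visible connections.
fully_split = landscape_score([1] * n, n_connections=24)
# Four mid-sized blocks with a handful of connections between them.
balanced = landscape_score([4, 4, 4, 4], n_connections=6)

print(f"monolith:    {monolith:10.0f}")     # ~5500
print(f"fully split: {fully_split:10.0f}")  # ~19000
print(f"balanced:    {balanced:10.0f}")     # ~560
```

In this toy model, both extremes score far worse than a moderate bundling, which is exactly the balance the heuristic points to.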
The heuristic makes it possible to find a setup where the bundling of functions into nodes and the network connections are in good balance. Given this heuristic, it becomes possible to start measuring and optimizing the complexity of networks. Obviously, not all functions are equal, and neither are all connections; such factors can be included in a more detailed approach that allows an application landscape to be measured and optimized towards low complexity.
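As a sketch of what such an optimization could look like, the snippet below sweeps the number of blocks and picks the split with the lowest total score. The even split and the link model (each extra block externalizing roughly 1.5 connections) are deliberate simplifications of mine; a real landscape would weight functions and connections individually, as noted above.

```python
import math

K = math.log(2) / math.log(1.25)  # exponent implied by Glass's heuristic

def landscape_complexity(n_functions, n_blocks, links_per_block=1.5):
    """Total complexity when n_functions are split evenly into n_blocks.
    Assumes every block beyond the first externalizes ~1.5 connections;
    both the even split and the link model are simplifying assumptions."""
    block_size = n_functions / n_blocks
    internal = n_blocks * block_size ** K
    external = (links_per_block * (n_blocks - 1)) ** K
    return internal + external

# Sweep the number of blocks and pick the lowest total complexity.
best = min(range(1, 17), key=lambda b: landscape_complexity(16, b))
print(f"lowest-complexity split of 16 functions: {best} blocks")  # -> 4
```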
In the end, however, the most decisive factor may be the solution approach and the clarity of the design. If there is a simpler way to solve the problem, that is the first step to take. And this leads back to the starting point: in order to truly simplify, the problem space and the solution approach need to be challenged first, because both change continuously.