Organisations are complex systems that encapsulate interactions between people, processes and enabling infrastructure. It might seem reasonable, therefore, to manage innovation within organisations using some of the key principles that emerge from systems thinking and other disciplines associated with multi-disciplinary design. However, this is not as commonplace as might be expected. I suggest that integrating systems thinking with other change and project management techniques is key to ensuring a well-rounded approach that covers all the bases critical to successful innovation.

© franckito / 123RF Stock Photo

When a complex change is being envisaged it may initially seem overwhelming. The natural reaction is to decompose it into manageable, bite-sized chunks. Whilst that is indeed to be recommended, there are good ways and bad ways to do it, and ignoring the good ones leads to a muddled exercise at best. Assigning objectives without due consideration of the emerging solution can lead to ineffective utilisation of resources and conflicting initiatives. This is as true in the service space as it is in the product space.

Innovation begins in the problem domain by asking the right questions of the right people so that the requirements are properly developed and understood. Do this until the results cease to materially influence the emergent architecture. To ensure adequate coverage, consider the system holistically and drill down from the Enterprise, through the Business and into the Operations layers. Also think about the full life-cycle (cradle to grave) and the associated supply-chain elements that interact with the system. An important point to note is that system boundaries must be defined as early as possible so that the scope can be properly constrained.
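To make that concrete, here is a minimal sketch (in Python, with purely illustrative names and example data, not drawn from any real project) of how requirements might be tagged by layer and life-cycle phase so that gaps in coverage show up early:

```python
from dataclasses import dataclass
from enum import Enum


class Layer(Enum):
    ENTERPRISE = "Enterprise"
    BUSINESS = "Business"
    OPERATIONS = "Operations"


@dataclass
class Requirement:
    ident: str
    statement: str
    layer: Layer
    phase: str  # life-cycle phase, e.g. "design", "production", "operation", "disposal"


def coverage_gaps(requirements):
    """Report layers and life-cycle phases with no requirements captured yet."""
    expected_phases = {"design", "production", "operation", "disposal"}
    missing_layers = set(Layer) - {r.layer for r in requirements}
    missing_phases = expected_phases - {r.phase for r in requirements}
    return missing_layers, missing_phases


reqs = [
    Requirement("R1", "Comply with corporate data policy", Layer.ENTERPRISE, "design"),
    Requirement("R2", "Support 10,000 concurrent users", Layer.BUSINESS, "operation"),
]
print(coverage_gaps(reqs))  # flags the Operations layer and the phases not yet considered
```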

Thereafter, requirements can be grouped into a modular architecture of coherent, interconnected blocks that have minimal lateral interdependencies. Ideally, this should be a process of discovery, ideation and validation that sits apart from the final solution and does not concern itself with how the solution will actually be realised.
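As a rough illustration of what minimal lateral interdependencies means in practice, the sketch below (hypothetical block names and dependency data, purely for illustration) counts the dependencies in a candidate grouping that cross block boundaries; groupings that score high are worth revisiting:

```python
from collections import defaultdict

# requirement -> block it has been grouped into (a candidate grouping)
assignment = {"R1": "Billing", "R2": "Billing", "R3": "Reporting", "R4": "Reporting"}

# requirement -> other requirements it depends on
dependencies = {"R1": ["R2"], "R3": ["R1"], "R4": ["R3"]}


def lateral_coupling(assignment, dependencies):
    """Count dependencies that cross block boundaries, per pair of blocks."""
    coupling = defaultdict(int)
    for req, deps in dependencies.items():
        for dep in deps:
            blocks = tuple(sorted((assignment[req], assignment[dep])))
            if blocks[0] != blocks[1]:
                coupling[blocks] += 1
    return dict(coupling)


print(lateral_coupling(assignment, dependencies))
# {('Billing', 'Reporting'): 1} -- one lateral interdependency to manage or design out
```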

Critical within this process is the need to fully define the interfaces between the blocks, as these will be a major source of risk if not managed effectively. Conceptually, each block performs a service for, or receives a service from, other blocks. These services need to be captured in terms of functional, performance and qualitative measures so that the objectives are clear and success can be verified. Some services will exist within a block and some will exist across the interfaces between blocks; both are equally important. The aim is to design blocks that are as independent from one another as possible so that they are resilient to changes or issues elsewhere in the system. At the same time, each block should aggregate only those services that sensibly belong together and do not benefit from further decomposition.
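One lightweight way to capture such a service is as a small contract record holding the functional description alongside measurable performance targets and qualitative expectations. The sketch below is just one possible shape for this; the field names, blocks and thresholds are assumptions for illustration only:

```python
from dataclasses import dataclass, field


@dataclass
class ServiceContract:
    provider: str                 # block providing the service
    consumer: str                 # block consuming the service
    function: str                 # what the service does, in plain language
    performance: dict = field(default_factory=dict)  # measurable targets
    qualities: dict = field(default_factory=dict)    # qualitative expectations

    def is_verifiable(self) -> bool:
        """A contract is only useful if success can be checked against something."""
        return bool(self.function and (self.performance or self.qualities))


contract = ServiceContract(
    provider="Order Management",
    consumer="Fulfilment",
    function="Notify fulfilment of confirmed orders",
    performance={"latency_seconds": 5, "availability_percent": 99.5},
    qualities={"security": "authenticated channel only"},
)
print(contract.is_verifiable())  # True
```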

Once the problem-domain architecture has been defined, one or more candidate solution architectures can be developed and evaluated against it. Each coherent block of needs (expressed as requirements) must be answered by one or more chunks of the solution. Often this is an iterative and evolutionary process in which requirements and emergent solutions are assessed and refined in parallel. Once the chosen solution architecture and its interfaces have been sufficiently well defined, the architecture will be robust enough to allow each block to be assigned to the group or entity best placed to deliver it effectively. The integration activities at points of convergence should not be overlooked, as these often represent high-risk areas.
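The core check at this stage is simple to state: every block of requirements must be answered by at least one chunk of the candidate solution. A toy version of that check, with made-up block and chunk names, might look like this:

```python
requirement_blocks = {"Identity", "Billing", "Reporting", "Archival"}

# candidate solution chunk -> requirement blocks it claims to answer
solution_chunks = {
    "Auth Service": {"Identity"},
    "Ledger Service": {"Billing", "Reporting"},
}

covered = set().union(*solution_chunks.values())
print("Blocks with no answering chunk:", requirement_blocks - covered)  # {'Archival'}
```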

The level of analysis and subsequent convergence necessary to reach this point will vary and should be considered on a case-by-case basis. Ideally, you can consider the job done when every block can be mapped to a package of work that can be assigned to a project or subcontract with clearly defined objectives, dependencies and success criteria. In practice, there will likely be a need to defer portions of the breakdown; just be sure that they are adequately isolated, low risk and will not derail key architectural decisions already made.
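Expressed as a crude readiness test (again, purely illustrative names, not a prescribed tool), the job-done criterion might be that each block maps to a work package whose objectives and success criteria are actually stated:

```python
from dataclasses import dataclass, field


@dataclass
class WorkPackage:
    block: str
    objectives: list = field(default_factory=list)
    dependencies: list = field(default_factory=list)   # blocks this one relies on
    success_criteria: list = field(default_factory=list)


def not_ready(packages):
    """Blocks whose work packages cannot yet be assigned with confidence."""
    return [p.block for p in packages if not (p.objectives and p.success_criteria)]


packages = [
    WorkPackage("Billing", objectives=["Issue invoices"], success_criteria=["UAT passed"]),
    WorkPackage("Reporting", objectives=["Produce monthly MI"]),  # success criteria still missing
]
print(not_ready(packages))  # ['Reporting']
```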

If you are not sure what I mean here then consider this: if you’re initially thinking desktop application and then discover that a self-hosted client-server approach is a better solution, that is probably not a show-stopper. However, if you actually need a military-grade, multi-level secure, fault-tolerant, distributed solution operating over mobile battlefield radio infrastructure, then you have probably made a number of premature choices...

Anyway, once the assignment has been worked out, management can focus on the key integration activities of risk management, communication (including plans, updates, and the exchange of mock-ups, prototypes, templates etc.), change control and the removal of blockages.

The key benefits of adopting this approach start with ensuring the solution can be delivered against a resilient, modular architecture that simplifies the management of change and other issues. Moreover, this modularity allows the overall complexity to be tamed by compartmentalising management focus and enabling a “manage by exception” philosophy based around targeted reviews and other delivery-based milestones. This property stems from the relative independence of the work packages. Finally, a properly designed modular architecture also facilitates focused risk management and simplified financial control.

Humans naturally think in terms of systems but do not always apply this pattern to managing innovation. With a bit of practice and a proper grasp of the underlying goals you should be able to apply it to just about anything.
