Watching the tragic events in Japan and the struggle to prevent a meltdown at multiple nuclear reactors, I am reminded that those who lead and manage the large, complex programs our clients run need to accept that not everything can be planned for. This January 2011 Financial Times story on "What We Can Learn from a Nuclear Reactor" is prescient.
Dmitriy Samovskiy's post on "normal accidents" in complex systems provides a wealth of links and insights that any leader of a large program should consider. Charles Perrow's books Normal Accidents: Living with High-Risk Technologies and The Next Catastrophe: Reducing Our Vulnerabilities to Natural, Industrial, and Terrorist Disasters offer examples of what can go wrong:
- No matter how much thought is put into the system design, or how many safeguards are implemented, a sufficiently complex system will sooner or later experience a significant breakdown that was impossible to foresee. This is principally due to unexpected interactions between components, tight coupling, or bizarre coincidence.
- A big failure was usually the result of multiple smaller failures, which were often unrelated to one another.
- Operators (people or systems) were frequently misled by inaccurate monitoring data.
- In many cases, human operators were accustomed to a given set of circumstances, and their thinking and analysis were misled by their habits and expectations ("when X happens, we always do Y and it comes back" - except for the one time it didn't).
Implication for program managers and leaders: programs are themselves complex systems, and you may be creating complex systems as well. Plan for failure, expect it, and be prepared to deal with it.