Complex Systems in Perspective

My March column in Mechanical Engineering magazine. 

The good news is that complex systems rarely fail. The bad news is that, inevitably, things happen. The unenviable challenge lies in figuring out a way to mitigate the risks of occurrence and the effects of the aftermath.

The consequences of large complex-system failures are undeniably catastrophic, yet we must remind ourselves that most complex systems have a strong record of reliability.

It is not only the celebrated systems like aircraft, energy plants, and large infrastructures that are complex; even seemingly modest designs can prove very complex. But it is when large systems fail that the public rightly pays the most attention, because these spectacular failures bring dire consequences.

One of the complexities of these large systems—beyond the design itself—is that if they fail, the reasons are as multifaceted as the system itself is complicated. Although many things can go wrong with a complex system, usually most things go consistently right. Failure can be traced to engineering, operations, outdated infrastructure, human error, or, often, a combination of causes.

The mere thought of a complex system failure keeps engineers up at night because engineering is exact, but failure, not so much. Clearing the hurdle to build effective complex systems is an engineering challenge. Designers design for systems to work, not fail. But by nature most complex systems are hazardous.

So how do we make sense of failure? The answers lie as much in engineering books as they do in the annals of philosophy. After all, how does one predict what is unpredictable? Risk and volatility are not linearly tied to engineering performance, and this month we delve deeply into the whys and why-nots.

We’re still learning the lessons from recent examples where complex systems failed or where natural disasters jeopardized their performance. One of the lessons learned is clear: Engineers must remain vigilant over the design, development, and operations of large-scale, complex, dynamic human-engineered systems. This includes assessing the ethical responsibilities associated with process management and maintenance.

ASME has been at the forefront of collaborating with stakeholders to keep engineers actively vigilant in assessing critical factors related to risk from engineered systems. Next month, for example, ASME, along with China’s State Administration of Foreign Experts Affairs and the Chinese Academy of Engineering, will hold a forum on disaster prevention and mitigation in Beijing.

Despite technological developments, we remain very much at the whim of Mother Nature, as recent disasters in Japan, Haiti, and Chile remind us. Managing the impact of natural disasters on infrastructure is where engineers come in.

As layers of technological defenses are developed as safeguards, equal thought must be given to the cognitive issues associated with human factors. Actively training and refining the skill sets of those who work on complex systems is an equally important piece of the puzzle. A meaningful conversation about complex-system failures cannot occur unless both areas are given the same priority.

The Editor

John G. Falcioni is Editor-in-Chief of Mechanical Engineering magazine, the flagship publication of the American Society of Mechanical Engineers.

February 2012
