Consider a world with no technology standards: no codes like the ones ASME publishes to govern pressure vessels, no rules to ensure the safety of cranes, elevators, or any other technology. In the 1880s, a group of engineers in the newly formed American Society of Mechanical Engineers began developing standards to stem the epidemic of boiler explosions that was killing thousands of people every year. The standards ASME developed would eventually govern how boilers were made, tested, and maintained, preventing needless damage and saving countless lives.
Creating technology without such rules may seem downright foolish today, yet we could be on our way to exposing ourselves to just such risks.
In my column last month I wrote about robots that work peacefully, side by side with humans in assembly plants, and others designed as novelty items that mix spirits into cocktails. Underlying all the good news about robotics and other developments, and the explosion of research into artificial intelligence, is the decades-old fear of what happens if our own creations, ungoverned, get away from us.
Hollywood has capitalized on this prospect since the days of D.W. Griffith, titillating us with plots of androids gone bad. This year’s crop of new films is no different, but two films take the special-effects nature of the genre and mix in a dose of real-world ethical quandary to make us pause.
Ex Machina and Tomorrowland, albeit very different films, raise existential themes tied to the development of powerful technologies and their implications for our lives and for the future of the world.
Film critics will talk about the political undertones of these films, but they also raise hard-hitting questions about the governance of artificial intelligence research and of technologies such as nanoengineering and bioengineering, self-driving cars, tracking technologies, and smart homes, among others.
Many share the view that tech companies are moving too fast without adding sufficient safeguards to their innovations, and that the potential implications of breakthroughs are lost on technologists emboldened to create without regard for consequences. Count innovators such as Tesla founder Elon Musk among those who worry. He’s given $10 million to the Future of Life Institute, one of several organizations that “support research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges.”
Conversations about the oversight of technology are deep, far more complex than when ASME was founded, increasingly political, and unavoidably divisive. But they are well worth taking on.
The steady march of technological progress may well serve as a barometer by which we measure the growth of our species. But even if we never live in the apocalyptic worlds Hollywood creates, it is prudent to be mindful of the benefits of codes and standards, of rules, and of self-control. That is why it’s important for engineers and other technologists to engage politicians in the difficult dialogue over the rules governing innovation.