


Imagine flipping on the light switch at home and wondering: Will the lights come on? Those of us lucky enough to live in parts of the world where the electric grid is robust rarely consider that question unless a strong storm or unusual circumstances cause a blackout.

But we can’t take the grid for granted. “It’s the world’s largest supply chain with zero inventory,” says Don Sadoway, the professor of materials science and engineering at the Massachusetts Institute of Technology who has been called the Socrates of Batteries.

I met the dapper Sadoway a few weeks back at the MIT Technology Review EmTech conference in Cambridge, Mass., but he’s no newcomer to the energy space (you can view both his EmTech presentation and his 2012 TED talk online). His lab invented a liquid metal battery that some—including investor Bill Gates—think will revolutionize the way energy is stored and pave the way to broadening the use of renewable energy. Sadoway’s company, Ambri, promises to deliver electricity where and when it’s needed at low cost.

Storage is one of the hurdles renewables such as wind and solar have to overcome in order to become mainstream.

Just as energy storage may be the key enabler of a more diverse mix of energy sources, technologies that strengthen the connection between electricity producers and end users are at the heart of the smart grid: a combination of sensors and controllers, plus processes that use information and communication technologies to integrate components across the electric system.

Those technological advances will contribute to what is expected to be the most fundamental change to the U.S. power system since its inception a century ago. Engineers will be at the forefront of developing the new products to improve efficiency and resiliency in the evolving grid.

Some of the products that make the grid more interconnected and responsive include advanced meters, automated feeder switches, voltage regulators, and other control technologies intended to give the grid stability and resilience.

“By increasing the analytic data available to grid operators and energy users, smart technologies create an information bridge linking generation, transmission, and distribution with consumers,” concluded a report this year from the Pew Charitable Trusts, an independent, non-partisan organization. “These capabilities allow grid managers and end users to make more informed decisions about how and when to use energy, based on grid requirements and price signals. And the additional information helps utilities manage their increasingly diverse generation portfolios.”

Improving the efficiency and robustness of the grid—and enhancing the capabilities of renewable energy sources that connect to it—is important, but even more critical is safeguarding it. Grid and security experts agree that the grid is becoming increasingly and dangerously susceptible to cyber and physical threats.

A few months ago, Senior Editor Dan Ferber took on the challenge to coordinate and serve as lead editor for a package of related articles addressing these important energy topics. This month’s comprehensive special focus on the grid is the culmination of Ferber’s hard work.

Our coverage provides a glimpse of what the electric grid of tomorrow might look like, even if we haven’t yet fully flipped on the switch on renewable energy.



A Machine That Thinks For You

I was visiting a friend a few weeks ago when he started bragging about how he had set up an Amazon Echo in his home office. “Alexa, what is the weather outside?” he offered, unprompted—even as I could see the sun shining brightly out his window. In a few seconds, a rather pleasant computerized woman’s voice filled the room, confirming my observation.

“Listen to this,” he continued. “Alexa, play Elton John’s ‘Candle in the Wind’.” A few moments later, the song came on.

It was getting irritating, so I decided to have a little fun. Before my friend could stop me, I commanded Alexa to place an order for a brown, four-shelf bookshelf. “Your order has been placed,” Alexa responded.

The next five minutes were frantic. My friend desperately pounded at his keyboard trying to find customer support, but the answer was obvious. “Alexa,” I said sternly, “cancel the bookshelf order.” She confirmed.

Google’s co-founder, Larry Page, once described the perfect search engine as a machine that “understands exactly what you mean and gives you back exactly what you want.”

If he’s right, then the intersection of artificial intelligence and voice recognition is the pivot point. Google, the largest purveyor of search results on the Internet, has invested heavily—both dollars and engineering prowess—in data mining and artificial intelligence. The result is a technology likened to the talking computer on Star Trek, or a souped-up Siri, Apple’s voice-controlled virtual assistant. But Google claims its Google Assistant will be even more powerful than Apple’s Siri, Microsoft’s Cortana, or Amazon’s Alexa.

Sundar Pichai, Google’s chief executive, says that machine learning is at a point where a virtual assistant can handle all our information-related needs. Google Assistant will learn our habits, our likes and dislikes, and have access to just about all our confidential information. It will have the processing strength to understand and contextualize what we want and how we want it. It will book a trip, buy a coat, order a pizza, and make an appointment with a favorite hairdresser.

Building something better than Alexa, Siri, and Cortana is ambitious, but as Henry Lieberman, a pioneer of human-computer interaction at MIT’s Media Lab, told Associate Editor Alan Brown in this month’s cover story, “Language will become a means—not to help users understand a product more easily, but to have the product understand its users.”

The impact of harnessing the power of voice and cognitive recognition on product and systems design is still unclear. But we’ve seen significant strides in deep neural networks, the software constructs behind deep learning that enable machines to teach themselves to recognize complex patterns. These networks have also greatly improved speech recognition.

Responding to public concern over the impact of machine learning on robots and intelligent systems, including factory automation and self-driving cars, a consortium of technology companies, including Amazon, Facebook, Google, IBM, and Microsoft, recently formed the Partnership on Artificial Intelligence to Benefit People and Society. Its focus is on ways to protect humans in the face of rapid advances in AI, and the potential for government regulation of the technology.

Sure, Alexa understood my command to cancel my joke order for the bookcase—that was trivial. But it’s critical that the engineering community recognize the importance of building AI into the design of technologies in a way that doesn’t violate ethical mores. That’s no laughing matter.



It was last November when those of us who still subscribe to the print edition of The New York Times received a relatively uninspiring cardboard insert with our Sunday papers. The instructions provided—“fold here, bend there”—were hardly different from those printed on a U-Haul cardboard packing box. But The Times promised the reward would be worth the effort.

After assembling it, downloading the smartphone app, and inserting my phone in the box, the payoff was unexpected. Before my eyes, the box and smartphone were transformed into a 21st century View-Master. But this wasn’t my mother’s stereoscope; it was an addictive, immersive experience.

The virtual-reality initiative is a collaboration between The Times and Google on a project called NYT VR. More than one million Google Cardboard viewers were shipped to Times readers last year, showcasing a unique way to experience powerful storytelling.

The first story The Times delivered, “The Displaced,” captured the plight of children from South Sudan, eastern Ukraine, and Syria who were caught in the global refugee crisis. It immersed the viewer virtually inside the striking video images. You could look up to see the sky on the video or look down to see the soil. You could look back behind you or to the sides.

Other films followed, including a visual account of the candlelight vigils following the November 2015 terrorist attacks in Paris. Today, NYT VR is also being used in many classrooms to help students learn about the world in visually powerful ways.

By collaborating with Google and other virtual-reality developers on this unique project, the 165-year-old newspaper, often referred to as The Gray Lady, leapfrogged online and digital storytellers. This is one of the ways, outside of the electronic gaming industry, in which the immersive power of virtual reality has reached consumers.

For years, engineers have saved time and money using simulation and optimization software tools. These tools have brought virtual models to the screen and have fostered powerful multidisciplinary and collaborative product-development processes for designers.

Harnessing virtual-reality technologies in design, however, has long been an elusive goal. Now there is a clearer understanding of what the technology can deliver. Major backing from NASA, Autodesk, and Microsoft on industry research—combined with support from Apple, Facebook, and Sony on the development of lower-cost mixed-reality systems—is helping to bring design and visualization closer together.

The technologies that make up these advanced computing platforms, ranging from virtual reality to augmented or mixed reality, are beginning to rewrite the rules of product development. As our cover promises this month, we are sharing with you some of the leading developments of this transformative trend. And even if we’re not providing you with a do-it-yourself VR viewer with the magazine, the word pictures that our writers and editors have painted are sure to stimulate all your senses.



Even if politics isn’t your cup of tea, this year’s presidential election has been hard to ignore. Its twists and turns wickedly resemble the new J.K. Rowling fantasy novel more than a noble competition to lead the most powerful country in the world.

One of the salvos being fired from one camp to the other involves opinions on the value of the North American Free Trade Agreement (NAFTA). At its core, the debate centers on whether NAFTA is bad for American manufacturers and workers because it enables cheap-labor countries like Mexico to take manufacturing jobs away from the United States.

Putting politics aside—and that’s no small feat given the current climate—explanations for middle-income job losses in the United States include both economic forces and, some argue, technological advances.

A recent column in The Wall Street Journal points to two interesting perspectives worth considering. One is an essay in Foreign Affairs by Dartmouth economist Douglas A. Irwin, who says that between 2007 and 2009 the United States lost nearly nine million jobs, pushing the unemployment rate up to 10 percent; seven years later, the economy is still recovering. Even as trade commands broad public support, a significant minority of the electorate—about a third—opposes it. These critics come from both sides of the political divide, but they tend to be lower-income, blue-collar workers who are the most vulnerable to economic change. For these workers, “neither political party has taken their concerns seriously, and both parties have struck trade deals that the workers think have cost jobs,” says Irwin.

He argues that trade is but one reason some blue-collar workers have lost their jobs. Another is technological advance, which affects millions and occurs without enough formal retraining of displaced workers.

Still, “Technological change is far from the only factor affecting U.S. labor markets in the last 15 years,” argues MIT economist David H. Autor in a paper published last year in the Journal of Economic Perspectives. He points to the deceleration of wage growth, changes in occupational patterns, and dislocations in the U.S. labor market brought on by rapid globalization, but admits that in various ways these are linked with the spread of automation and technology. “Advances in information and communications technologies have changed job demands in U.S. workplaces directly and also indirectly … altering competitive conditions for U.S. manufacturers and workers.”

But “jobs are made up of many tasks,” Autor says, and while automation and computerization can substitute for some of them, understanding the interaction between technology and employment requires thinking about more than just substitution. In the end, technology has replaced some traditionally middle-education jobs, but this is the group that is also easiest to retrain.

Engineers have radically simplified manufacturing environments, allowing for more autonomous and streamlined operations. But as Autor puts it, “human capital investment must be at the heart of any long-term strategy for producing skills that are ‘complemented by’ rather than ‘substituted for’ by technological change.”



With the trepidation of an old dog in a new home, I strapped a Fitbit on my wrist a few months ago hoping I’d find its religion. I haven’t looked back since.

Mind you, it’s not like the activity-monitoring device has turned me into a triathlete. I’m no more an avid runner or cyclist today than I was at the beginning of the year, but I’ve certainly become more aware of my activity. My Fitbit tells me how many steps I take, how many miles I walk, how many stairs I climb, how often my heart beats, and how long and how well I sleep. It also counts the calories I burn and tells me when I’m slacking off from my daily routine so I can get back to my personal peak performance level.

With the sensing device on my wrist, I’m more motivated to take the stairs instead of the escalator; I go for more frequent and longer walks than I used to; and I try to get up from behind my desk to do a little stretching every hour or so.

I won’t say that the goal of 10,000 steps daily, recommended by the American Heart Association, has become an obsession, but it’s now an objective I care about.

My Fitbit is essentially my personal Internet of Things.

Like a Fitbit for the factory floor, the industrial IoT, with its network of Internet sensors and tracking technologies, monitors the health of machines and manufacturing equipment. It detects malfunctions, deviations, and malnutrition when supplies are low.

But unlike personal devices, which will track a person’s activity regardless of age or fitness level, plant equipment isn’t always easy to connect or retrofit for the IoT. Connecting a Fitbit or similar health-tracking device to its enabling software is a lot easier than connecting a milling machine to the cloud.

In some cases, it isn’t even that the equipment is too old to connect to sensors. Some equipment as young as 20 or even 10 years old can’t easily be hooked up to monitoring sensors and connected to the Internet. Some manufacturers also fear that sensors can occasionally be finicky and make plant equipment difficult to troubleshoot.

That said, a recent IC Market Drivers report projected that worldwide systems revenues for applications connecting to the IoT will nearly double between 2015 and 2019, and could be more than $124 billion by 2020.

The report, which is published by IC Insights, a semiconductor market research company, said that during that same time period, new connections to the IoT could grow from about 1.7 billion in 2015 to nearly 3.1 billion in 2019.

Ultimately, the business case for the IoT is there: reduce manufacturing costs and improve ROI. That’s true even when investments are necessary to retrofit equipment.

I’ve lost 10 pounds since I’ve been wearing my tracking device, so I’ve seen the ROI of being connected. But like some manufacturing equipment, I too get a little finicky, especially on those days when my Fitbit is telling me something I don’t want to know.


Where’s the Beef?

The last time I remember my son wanting to stop at a McDonald’s, he was mostly interested in the Happy Meal toy—he just graduated college, so it’s been a while. But we were in the car together a few weeks ago when we got hungry and pulled up to the first restaurant we saw, the one with the golden arches.

To our surprise, that McDonald’s had gone high-tech. I’m late to the party on this, but I subsequently learned that McDonald’s Create-Your-Taste has been around for a couple of years, mainly in Southern California and before that in global test markets Australia and New Zealand. About 2,000 U.S. locations have kiosks that give customers the option to create their own burger by selecting the kind of beef patties they want, and then choosing among toppings: the trademark special sauce, lettuce, cheese, pickles, and onions, plus freshly roasted tomatoes, avocado, grilled mushrooms, and more.

Creating a made-to-order burger from a kiosk in a McDonald’s, then having it delivered to your table by a friendly server, isn’t just a novelty. It is part of the giant fast-food chain’s surge to capitalize on a growing global food culture that includes fresher ingredients and healthier options.

The most recent change in how we grow what we eat and how we consume it evolved with the trend toward organic products and through television food shows and chefs who helped celebritize the art of cooking and eating.

The evolution of food, well before Emeril Lagasse and Rachael Ray, goes back to the development of the first commercially successful steel plow by John Deere in 1837, and to the invention of pasteurization in 1864. What has been described as the second food epoch, or Food 2.0, occurred in the 1900s, when the agricultural revolution ushered in mechanization, chemical fertilizers, plant breeding, and hybrid crops.

Today’s wave of agricultural advancements, some of which are described in Senior Editor Dan Ferber’s article, “Watching the Crops Grow,” on page 28, may be the bellwether of Food 3.0. The use of sophisticated robotics and drones for certain crop-breeding processes is helping the farming industry pave the way to serve a growing population on Earth, expected to reach 9 billion by 2050.

But farmers are not the only ones concerned with whether there will be enough food to go around.

In her captivating article, “Re-Engineering What We Eat,” on page 34, contributor Sara Goudarzi reports that scientists and other researchers fear that Earth itself may prove incapable of sourcing all the food we’ll need to feed ourselves, especially as the population grows in the next 35 years. Without sufficient land and water to produce beef, the alternative may be to engineer in vitro meat in the lab from precursor cells. Researchers are also working to grow other meats and fish, as well as plants, in laboratory environments, and to develop food printing, a process similar to the burgeoning 3-D printing we have become familiar with.

Even as I fancy myself a foodie, McDonald’s—high-tech or not—remains a guilty pleasure, even when there is no one around hankering for a Happy Meal. But as the notion of ordering a “high-tech burger” grows, I can’t help but feel nostalgic over the old McDonald’s jingle and fearful of what one featuring a synthetic meat burger and fries might sound like.



The April cover didn’t turn out quite as we intended. In fact, for some of you, it had a connotation quite the opposite from what we envisioned—that’s on us.

In hindsight, our headline should have read: “Robots at Work—Automation Helps Break Old Stereotypes.” That’s what we intended with our provocative cover.

Some readers, and even others who saw the cover but—by their admission—did not read the full story, wrote to me. Another 1,000 signed a letter of complaint, which appears in our Letters to the Editor section in this issue. One of those who signed the petition, Kim Allen, the chief executive officer of Engineers Canada, also wrote to me directly. He said, “As much as we try to avoid ‘judging a book by its cover,’ it does still happen, and I find it unfortunate that the cover image projects a gendered view of the engineering profession that distracts from the important message of the article.”

No one in publishing likes to offend, unless perhaps you publish a New York City tabloid. That’s especially true in this instance, since we regularly focus on women in engineering. So it was good to see that the three doctoral candidates from Stanford University who started the petition protesting old stereotypes were able to galvanize so many influential technologists, students, and proponents to sign the letter.

The conversation over women and other underrepresented minorities in engineering is essential. So much so that three years ago the magazine, in cooperation with the ASME Foundation, developed and hosted the first program in the ASME Decision Point Dialogues series. The program was called “Will Engineers Be True Global Problem Solvers?” That discussion was an important Socratic dialogue among thought leaders, in part, on the need for more diversity in the profession. Our second program, “Critical Thinking, Critical Choices: What Really Matters in STEM,” was another deep-dive exploration into the fundamental issues related to underrepresented groups in engineering. You can view both programs online.

Women and minority engineers contribute greatly to the fabric of the profession. One of our feature articles in this issue, for example, was coauthored by ASME Fellow Karen A. Thole. We regularly highlight engineers who are women or minorities, and will continue to do so. To determine strictly by gender or ethnicity who leads important engineering projects, or whom a magazine highlights, would be offensive. It is critical, therefore, that the profession reach a point where representation is equal enough that successful engineers are lauded for the quality of their work, regardless of gender or ethnicity.

Until then, we have to lead the conversation to ensure that every student, regardless of who they are, has the opportunities to pursue a fulfilling and successful engineering career.

ASME is a leader on many fronts. The most recent is working as part of the 50K Coalition, an alliance of the Society of Women Engineers, the National Society of Black Engineers, the Society of Hispanic Professional Engineers, and the American Indian Science and Engineering Society. The goal is to graduate 50,000 engineering students who are women and underrepresented minorities by 2025.

Engineers of all races and genders are making technology breakthroughs and helping to reshape the way we live and work. Associate Editor Alan Brown brought that point home in the April cover story on the implications of automation. The article is insightful and forward-looking.

The expectations you, the reader, have of this magazine are high, but no higher than those we have of ourselves. I invite you to continue this conversation with us in the pages of this magazine.

The Editor

John G. Falcioni is Editor-in-Chief of Mechanical Engineering magazine, the flagship publication of the American Society of Mechanical Engineers.

May 2020
