Engineering Inc. magazine
As the engineering industry adopts artificial intelligence at a rapid pace, firms must give urgent attention to its ethical and legal ramifications.
Engineering has always been hands-on. Whether blueprints and project designs have originated on drafting tables or on computers running sophisticated CAD or BIM software, humans have been at the controls.
Now, artificial intelligence (AI) is flipping the script. Over the last few years, AI has appeared in tools, applications, and processes that touch nearly every aspect of engineering. These systems—including generative AI models that can chat, write content, summarize complex documents, and automate processes—increasingly take humans out of the driver’s seat.
“AI is exploding onto the scene. It is redefining the fundamental way architects and engineers work,” says Mark Blankenship, director of risk management at Willis Towers Watson and a member of ACEC’s Risk Management Committee. “There are enormous benefits associated with the technology, and it is something that firms in the A/E/C space cannot ignore. But it’s also critical to address the ethical and legal risks related to AI.”
To be sure, as AI moves into the mainstream of engineering, firms must adjust and adapt. There’s a need for policy updates, new technology controls, employee training, and various other guardrails that allow a firm to tap into the power of the technology while mitigating the risks that AI introduces.
“Firms should have checks and balances in place, especially since AI is changing so rapidly,” says Lillian Minix, marketing communications manager at Timmons Group, a full-service engineering firm headquartered in Richmond, Virginia. “Things can go astray without the right understanding of what AI does and doesn’t do well.”