Ethical AI and the Need for a More Values-Driven Sociotechnical Future

By Ariel Robinson

I come from a family with a pretty dry sense of humor. We’re big fans of Despair, Inc., home of the wonderful “demotivation” posters. One says: “Consistency. It’s only a virtue if you’re not a screwup.” My mom has “Compromise,” which shows a man extending a handshake above one of her favorite sayings: “Let’s agree to respect each other’s views, no matter how wrong yours might be.”

Image Source: Despair, Inc.

My parents gave me “Motivation” before I left for college. Below a beautiful picture of a rocky shore at sunset it reads, “If a pretty poster and a cute saying are all it takes to motivate you, you probably have a very easy job. The kind robots will be doing soon.” Thanks, Mom and Dad.

From Eli Whitney to Henry Ford, Americans have long been inventing better ways of doing more for less. Revolutionary innovations often bring growing pains, but at the end of the day we adapt and move the world forward. In that sense, the social, legal and ethical challenges posed by artificial intelligence (AI) and autonomous systems (AS) aren’t new. But the complexity, scale and pace of change make AI/AS different. As systems become increasingly autonomous, capable of learning about and acting on their environments without human input, how do we maximize their benefit to society while minimizing their risks? And who is liable for their actions?

All members of our society have a stake in the AI revolution – as well as a right and a responsibility to actively shape its outcome. But to do so, we must bridge existing sociotechnical divides. Without improved transparency, accountability, education and empowerment of all stakeholders – manufacturers and developers, buyers and users, and the public writ large – our society will not reap the full benefits of autonomous intelligent systems (AIS).

These themes are central to a new draft guidance, Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems, released last month by IEEE (The Institute of Electrical and Electronics Engineers, the world’s largest technical professional association dedicated to advancing technology for the benefit of humanity). More than one hundred experts across multiple industries, sectors and disciplines are currently working on the document, which is also open for public comment through March 2017.

Image Source: IEEE

It is essential that [AI and AIS] be designed to adopt, learn, and follow the norms and values of the community they serve, and to communicate and explain their actions in as transparent and trustworthy manner possible.
— IEEE

 

In its guidance, IEEE advocates for an inclusive, multi-stakeholder approach to values-based design. Similar AIS may be employed in a variety of contexts, which may have competing values. Involving community members throughout system design, development and deployment makes them more likely to accept AIS outcomes.

There’s a long way to go before this becomes a regular practice, however. The IEEE guidance states: “Technologists often struggle with the imprecision and ambiguity inherent in ethical language, which cannot be readily articulated and translated into the formal languages of mathematics, and computer programming associated with algorithms and machine learning.”

That is certainly what Davi Ottenheimer, an historian, ethicist and information security expert, found. He teaches ethics to master’s students in the Department of Computer Science and Security at St. Pölten University of Applied Sciences in Austria. His first few years teaching ethics were difficult: too much philosophy, not enough technical applicability, his students said. I spoke with Ottenheimer, who said, “Overcoming my students’ inability to listen was easy by framing ethics into situations they could relate to in their daily lives or what they cared about.” Overcoming their decision-tree choices was harder. His students made assumptions about how people would interact with technology and felt that there was plausible deniability for outcomes based on someone else’s actions. But Ottenheimer challenged those assumptions, and has made a difference. “Students this year told me I fundamentally changed their minds,” he said. “It was the best compliment ever.”

IEEE suggests that lack of ethics education and leadership for technologists is one of the biggest barriers to a more values-driven sociotechnical future. But as more universities begin to offer courses on ethics and technology, the real question will be: will students choose to take them? Ottenheimer said for his course, “There was less than no demand.”

But demand is a simpler problem to solve. There are three ways of generating it:

  • The first and most obvious source of demand is consumers. However, consumers don’t always have their own best interests in mind. Consider the recent challenges of securing Internet of Things (IoT) devices, which are themselves part of the AI/AS ecosystem. Market forces and poor consumer education about the benefits and risks of IoT devices drove innovation in price and features, not safety. Thus, the industry and its products are now plagued with potentially life-threatening vulnerabilities – many of which could have been avoided had manufacturers and developers prioritized security from the beginning.

  • The second way to create demand is to require it by law. There are plenty of precedents for government-created demand that offer viable models for shaping market behavior: federally managed licenses for ethically trained AI engineers, accreditation of manufacturers as is done for educational institutions or banks, or case-by-case approval of products as the Food and Drug Administration does. It is critical that government not codify values themselves; rather, it must codify the need for, and inclusion of, values in AI/AS product design, development and deployment. This kind of proactive regulation helps industry, too: not only does it make companies more trustworthy by having an independent authority validate them as such, it also shifts some of their liability to the government.

  • Unfortunately, history has shown that it often takes a large-scale disaster and the loss of many human lives before government takes regulatory action. This kind of post-hoc accountability is the final – and, unfortunately, most likely – source of demand. But while the risks of many innovations may not become apparent until harm occurs, the possible dangers of AIS are painfully predictable. If a self-driving car has to decide between running into a wall and injuring its passenger (i.e., its owner) or hitting a child, which should it choose? Who should be held responsible for that choice?

Legal scholar Jeffrey Vagle is a participant in an NSF-funded project at the University of Pennsylvania on the Security and Privacy of Cyber-Physical Systems, which recently held a workshop on tort liability for advanced algorithms. The consensus, though not unanimous, he told me in an interview, was that existing tort law will adapt to handle these novel technologies. “The adaptations are difficult to predict, since they depend so much on what issues courts eventually end up facing. There are, of course, statutory and regulatory provisions, but those too are based on political, social, and economic inputs that I don't think we've seen.”

Ethical AI is wholly achievable if we decide we want it, but users, technologists and policymakers alike must choose to actively pursue it together.

Consumers have a right to know how their lives are being affected by AIS – and a responsibility to make informed decisions about the technology they purchase and the data they allow to be collected. Government owes it to consumers to provide unbiased information and education about the benefits and costs, or to incentivize industry to do so itself.

And manufacturers and developers must be held to a reasonable standard of liability (“reasonable” not only because it is not the hammer maker’s fault if a person kills someone with their product, but also because we have shared expectations of how humans should use hammers). At this early stage of AI maturity, manufacturers and developers have the power – and the responsibility – to inform society’s expectations of AIS, and to explain the capabilities and limitations of their tools so that they are used appropriately.

It is an exciting time for AI, with boundless possibilities for growth and innovation. But we must be intentional about what we create and recognize the benefits and risks that are inherent in new technologies. Ethics-by-Design involves all of us, and IEEE gives us a good place to start. It won’t be easy, but it is imperative to our moral future as individuals, institutions and communities. And, as Henry Ford said, “Whether you think you can or think you can’t – you’re right.”


About Ariel Robinson

Ariel Robinson is a national security and tech policy analyst who graduated from Wellesley College with a degree in cognitive science and linguistics, which is the foundation for her work bridging the technical and nontechnical worlds in the Washington, D.C. metro area.
