
Facing Tough Ethical Questions in AI

Edward_Dixon
Employee

Chances are you know “the trolley problem,” a famous thought experiment in which a person must choose whether to switch the direction of an out-of-control trolley, thereby deciding whether one person or many will die. On its face, the answer seems simply utilitarian: choose the track that harms the fewest people. But the dilemma is in the details. What if you know the lone person who will be killed? What if the larger group of people is somehow morally objectionable? What responsibility, if any, do you have to change the direction of the trolley in the first place?


These ethical questions, as MIT professor Bernhardt Trout notes in a recent episode of Intel on AI, have come up throughout the ages. But as artificial intelligence (AI) acts as an amplifier for human decisions, they matter more than ever before. Bernhardt recognizes these questions are uncomfortable; there’s an underlying tension between engineering, a largely utilitarian endeavor, and actual lived human values. (I’m a utilitarian when I’m writing code, but in my personal life, my family weighs much more heavily in my moral calculus than any number of worthy folk unknown to me.)




“I would say that certainly the technology is a major issue. But, perhaps a more important issue is how well we face that technology.”


–Bernhardt Trout



Ethical Questions in Autonomous Vehicles


The trolley problem is top of mind for many today because of the advances being made in autonomous vehicles. In the podcast, Bernhardt points out that the decisions self-driving cars will make on the road are being programmed by people. While that can make some people suspicious or even angry, these kinds of decisions, which amount to selecting acceptable levels of risk, are made all the time in society. For example: airbag deployment models, highway off-ramp placement, emergency exits in buildings, and speed limits.


As podcast host Abigail Hing Wen notes, these decisions will be cultural to some extent, just as existing EU car safety standards differ from those of the US in their expectations around pedestrian safety, and individual countries have different rules for things like speed limits. I’m compelled to add that, mighty though these wider cultural forces might be, they are no match for my spouse, who absolutely will not buy a vehicle that does not ruthlessly prioritize the safety of her children.


Engineers asked to consider the trolley problem will reflexively challenge the premise: "This trolley design is terrible! Let’s get over to that whiteboard and design in some brakes!” And as a company, we are driving towards safer roads. Our acquisition of Mobileye was a tremendous leap forward in the company’s commitment to developing advanced driver assistance systems (ADAS) that can lead to fewer crashes, as was the development of the Responsibility-Sensitive Safety (RSS) model, which sets specific, measurable parameters that define a “Safe State” for vehicles. As pilots and partnerships roll out across the globe in places like China, Germany, Japan, the UAE, the US, and elsewhere, perhaps some of these standards will become universal.
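To give a flavor of what “specific and measurable” means here, below is a minimal sketch of the RSS longitudinal safe-distance rule as described in the published RSS paper (Shalev-Shwartz et al., 2017). The parameter values are illustrative assumptions on my part, not Mobileye’s implementation.

```python
def rss_safe_longitudinal_distance(
    v_rear: float,             # rear vehicle speed (m/s)
    v_front: float,            # front vehicle speed (m/s)
    rho: float = 0.5,          # rear vehicle's response time (s); illustrative
    a_max_accel: float = 2.0,  # worst-case acceleration during response (m/s^2)
    a_min_brake: float = 4.0,  # braking the rear vehicle guarantees (m/s^2)
    a_max_brake: float = 8.0,  # hardest braking the front vehicle might apply (m/s^2)
) -> float:
    """Minimum gap such that the rear vehicle can always stop in time,
    even if the front vehicle brakes as hard as physically possible."""
    # Speed the rear vehicle may reach before its braking even begins.
    v_after_response = v_rear + rho * a_max_accel
    d = (
        v_rear * rho
        + 0.5 * a_max_accel * rho ** 2
        + v_after_response ** 2 / (2 * a_min_brake)
        - v_front ** 2 / (2 * a_max_brake)
    )
    return max(d, 0.0)  # a non-positive result means any gap is safe

# Example: following a car doing 25 m/s while doing 30 m/s yourself.
print(rss_safe_longitudinal_distance(v_rear=30.0, v_front=25.0))
```

The point is less the arithmetic than the framing: rather than asking whom the car should hit, RSS asks what margins guarantee the car never causes the collision in the first place, which is exactly the engineer’s instinct to redesign the trolley.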



Defining True Artificial Intelligence


Another famous philosophical question relating to intelligence is whether machines can be said to possess it at all. In his seminal 1950 paper "Computing Machinery and Intelligence," Alan Turing begins by saying that the question of “What is thinking?” is “too meaningless to deserve discussion” and proposes instead a test that he calls “The Imitation Game.” In the first variant he introduces, a man who is pretending to be a woman and an actual woman communicate with a judge via typewriter, with the judge attempting to decide which correspondent really is a woman. Turing then suggests that if a machine could take the place of the man and successfully fool the judge, we would have “functional proof” of human-like intelligence, sidestepping more nebulous questions like “what is thinking?” or “what is intelligence?”.
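To make the protocol concrete, here is a minimal sketch of the game’s structure. Both respondent functions are hypothetical stand-ins (in Turing’s setup, one side is a machine and the other a human, both answering over a text-only channel), and the judge is whatever questioning strategy you care to plug in.

```python
import random

def machine_respondent(question: str) -> str:
    # Hypothetical stand-in: a real system would generate a human-like answer.
    return "I'd rather not say."

def human_respondent(question: str) -> str:
    # Hypothetical stand-in for the human correspondent.
    return "I spent the weekend gardening, actually."

def imitation_game(questions, judge) -> bool:
    """Run one round: the judge sees only text and must pick the human."""
    respondents = [("A", machine_respondent), ("B", human_respondent)]
    random.shuffle(respondents)  # hide identities behind anonymous labels
    transcript = {label: [fn(q) for q in questions] for label, fn in respondents}
    guess = judge(questions, transcript)  # judge returns "A" or "B"
    truth = next(label for label, fn in respondents if fn is human_respondent)
    return guess == truth

# A judge that guesses at random does no better than chance; the machine
# "passes" when even careful judges cannot beat chance over many rounds.
naive_judge = lambda questions, transcript: random.choice(["A", "B"])
wins = sum(imitation_game(["How was your weekend?"], naive_judge) for _ in range(1000))
print(f"Naive judge accuracy: {wins / 1000:.2f}")  # roughly 0.50
```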


The fact that the “Turing test” is still discussed as a benchmark for intelligence 70 years later testifies to the difficulty of devising really satisfactory definitions of intelligence. For example, chess was a popular focus for AI research until machines far outstripped the best human players, at which point we started to think of chess-playing machines as simply “better software.” At the time of writing, machines have yet to pass the Turing test, and when they do, we are likely to speak of it as an interesting moment for “natural language processing” rather than declaring machines to be intelligent.



The Ethical Significance of the Mind


It’s fair to say that scientists, myself included, tend towards materialism, thinking of the universe as computable, and on this basis readily accept the conclusions of Turing’s paper. In the same paper, Turing wrote that resistance to his ideas was rooted in a desire to believe in human exceptionalism: that we have special qualities not shared by non-humans, animal or otherwise. Bernhardt seems to be of this school, citing Michelangelo’s work as evidence of a creative spark he feels is innately human. Bernhardt sees ethical significance in this, as a basis for claiming that humans are superior to machines. Unfortunately, not even my mother would claim that my artistic efforts are superior to the output of even a badly trained deep neural network, but I can console myself that she thinks of my human value as intrinsic, not dependent on my outperforming a machine on any task whatsoever.


Bernhardt has an additional concern with respect to Turing’s test: if humans come to think of themselves as essentially biological calculators, we may embark on transhumanist projects, trying to integrate silicon chips with our minds. There are already startups working on brain-machine interfaces, so this concern is not as theoretical as it might sound. On a more day-to-day level, Bernhardt is also concerned about the increasing role of AI in mediating our interactions with other humans: AI chooses the posts that appear in my Facebook feed, and it very likely plays a similar role in my Twitter account. It’s a curious thing that the many engineers, scientists, VCs, and so on with whom I’ve connected have been selected by a machine. We should be cognizant of how AI impacts society, especially where it mediates our interactions with other humans, and this is true of technology more generally. Films like Blade Runner, Ex Machina, and The Matrix reflect this concern, as do stories like “The Veldt” and Brave New World.



Technology & Existential Risk


For all the fears of machines becoming smarter than humans and rising up to destroy us, Bernhardt sees a bigger existential threat: that we forget what is valuable in reality and lose ourselves, using technology to separate ourselves from nature. In a world where machines are trained specifically to feed us the most stimulating and engaging content possible, this doesn’t seem an entirely academic concern; we know that many species are in some sense vulnerable to “hyperstimuli,” inputs that effectively hack their reward systems. A common example is the cuckoo’s egg, which so successfully appeals to the unfortunate foster parent’s maternal instincts as to win her attention away from her own brood. Could we be creating our own AI-powered cuckoo eggs?



Theory & Practice


Solving a moral puzzle—which track should the trolley take?—is often easier than actually acting on the conclusion. I’m totally convinced of the benefits of exercise, but I keep choosing my sofa over my bicycle! Living up to principles, personal or corporate, can be extremely challenging.


Intel, as a semiconductor manufacturer, relies on inputs that include tantalum, a mineral mined in only a few regions globally, many of which include conflict zones where serious human rights abuses are known to be associated with the mining industry. Since minerals are commodities discovered, extracted, and refined through a complex supply chain that often spans difficult-to-reach areas, Intel executives could easily have chosen to remain ignorant of the precise origins of the tantalum used in its facilities. Instead, Intel began a survey of its supply chain, visiting more than one hundred smelters and refineries across twenty-three countries to ensure that only conflict-free minerals would be used in our processors. This wasn’t easy or cheap, and we’ll never be finished; over a decade later, we need to keep auditing. Slogans are cheap. What really matters is what you do when your principles collide with your profits. I’m proud that Intel spent heavily to do the right thing.


Very publicly living your principles in this way makes it easier to keep living up to them. When I had ethical concerns about some AI projects we were considering, bringing those to my VP felt like the natural thing to do because of the willingness senior leadership had shown to prioritize ethics over expediency. You can see this commitment in other places, too: publishing gender pay parity data, as mentioned in the episode with Sandra Rivera, is something very few companies have done so far.


Beyond “first do no harm,” we are very aware of the positive ethical imperative to use our company’s very special capabilities to improve lives through #AIForGood projects. I had the exceptional privilege to use my skills to help NCMEC, and some colleagues did amazing work automating the generation of maps from satellite images to help aid workers in disaster zones.


There’s a lot to unpack in this episode, and I highly recommend Turing’s surprisingly accessible paper as reading prep!


To learn more about Intel’s work in AI, visit: https://intel.com/ai


To hear more Intel on AI episodes with some of the world’s most prominent technology guests, visit: intel.com/aipodcast








The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.