Ethics of Artificial Intelligence

Why we won't be bowing down to our robot overlords yet.

By Ellis Booker

Once the stuff of science fiction, autonomous, “thinking” robots are increasingly ubiquitous. They explore the surfaces of Mars and comets, wheel medications up and down hospital corridors, and, more recently, even drive themselves around our freeways. So how did robots become so capable so quickly in the 21st century? Experts say a confluence of core technological advances—in processors, sensors and materials, as well as control algorithms and machine learning—is making robotic systems and other forms of artificial intelligence (AI) both more reliable and better able to navigate the world on their own.

With such advances, however, comes an almost ageless concern: Are we on the brink of creating an artificial intelligence that will pose a threat to humanity?

The faculty and researchers at Georgia Tech, one of the top centers of research on the topic of human/robot interaction, take this concern seriously. But they stress that the emergence of “strong” AI—in which machine intelligence becomes functionally equal or superior to human intelligence—is unlikely in the foreseeable future, no matter what you see in movies or read in books. (See “‘Strong’ vs. ‘Weak’ AI” below.)

“People are worried about super-intelligences, and their profound potential impact on the human race,” says Ronald Arkin, Regents Professor and director of the Mobile Robotics Laboratory in Tech’s College of Computing. As one of the nation’s most respected roboticists and roboethicists, he’s personally more worried about the “questions that are confronting us in the here and now,” rather than those that might affect us somewhere far down the road.

Arkin presented his views this summer in Washington, D.C., at an Information Technology and Innovation Foundation panel titled “Are Super Intelligent Computers Really A Threat to Humanity?” As Arkin sees it, human-robot interactions are already surfacing ethical quandaries. Examples include lethal autonomous systems on the battlefield and machines designed to mimic human qualities and elicit emotional reactions from us.

The ethical questions prompted by such systems are worthy of immediate attention, “perhaps more than the potential extinction of the human race,” he says.

There are more practical questions, too, that will soon be relevant.

Take a self-driving car skidding on an icy street. Will the AI choose to crash the vehicle into a crowded school bus, hit a couple of adults in the street, or drive itself into a wall, potentially killing its owner?

“Someone will have to design what the system chooses to do, under those different types of circumstances, if it is indeed perceptually able to recognize those situations,” says Arkin, noting this dilemma is a version of the classic Trolley Problem, in which we’re given the option of redirecting a runaway trolley to kill one person and so save five others on the tracks.

Yes, the autonomous car may have to be programmed with strategies for when it is confronted by a no-win crash. “Just don’t expect universal agreement,” Arkin cautions, citing the lack of consensus about many life-and-death questions, including smoking in public, capital punishment and abortion. “Part of the problem with ethics is, quite often, there are no universally agreed upon answers.”
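No such logic ships in any real vehicle, but a toy sketch makes Arkin’s point concrete: someone has to write the function that ranks outcomes. Here is a minimal, purely hypothetical harm-minimizing policy in Python, with invented outcome estimates:

```python
# A hypothetical, greatly simplified crash-decision policy for an
# autonomous vehicle. The outcome estimates and the harm-minimizing
# rule are illustrative assumptions, not any manufacturer's logic.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    expected_casualties: float  # estimate a perception system might supply
    risk_to_occupant: float     # 0.0 (safe) to 1.0 (likely fatal)

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Pick the maneuver minimizing expected casualties, breaking ties
    by lowest risk to the occupant. This encodes one contested ethical
    stance among many; another designer might weight the occupant more."""
    return min(options, key=lambda m: (m.expected_casualties, m.risk_to_occupant))

options = [
    Maneuver("hit school bus", expected_casualties=5.0, risk_to_occupant=0.2),
    Maneuver("hit pedestrians", expected_casualties=2.0, risk_to_occupant=0.1),
    Maneuver("swerve into wall", expected_casualties=0.9, risk_to_occupant=0.9),
]
print(choose_maneuver(options).name)  # "swerve into wall"
```

Every choice in that sketch, from the casualty estimates to the tie-breaking rule to how much weight the occupant gets, is exactly the kind of contested design decision Arkin says resists universal agreement.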

Paradoxically, Arkin has argued that lethal robots on the battlefield, not self-driving cars, are better positioned to be ethical agents because of the long-negotiated agreements around the rules of war.

“Nations have come together and encoded laws of war in the Geneva Convention and the like to say, ‘If we’re going to kill each other on the battlefield, this is what’s legal and this is what’s not legal,’” he says. Robots will follow these rules strictly and dependably, while human soldiers may not.

Banning ‘Killer Robots’?

Regardless, concern over the militarization of robotics has been growing. In July, more than 1,000 AI and robotics experts signed an open letter calling for a ban on “offensive autonomous weapons.”

“The endpoint of this technological trajectory is obvious: Autonomous weapons will become the Kalashnikovs of tomorrow,” say the authors, who presented the letter at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, on July 28. “The key question for humanity today is whether to start a global AI arms race or to prevent it from starting.”

The letter, presented by the Future of Life Institute, was signed by luminaries including Tesla’s Elon Musk, Apple co-founder Steve Wozniak, Google DeepMind chief executive Demis Hassabis and British physicist Stephen Hawking.

Not among the signatories was Arkin. “I am not a signatory and I believe that a pre-emptive ban is premature,” he says. “Rather I support a moratorium to determine via research whether reductions in noncombatant casualties are achievable.”

Last month, Arkin wrote an opinion piece for IEEE Spectrum titled, “Warfighting Robots Could Reduce Civilian Casualties, So Calling for a Ban Now Is Premature.” In it, he expresses optimism that autonomous robotic military systems could lead to a reduction in noncombatant deaths and casualties. For starters, robots won’t need to have self-preservation as a foremost drive, if at all. Their lack of emotional responses like anger, frustration, fear and the desire for revenge—things that can cloud human judgment in the fog of war—is another advantage, as is the robot’s ability to “integrate more information from more sources far faster before responding with lethal force than a human possibly could in real-time.”

Such systems on the battlefield, serving as impartial monitors of the actions of human beings on all sides, might be a positive influence too, Arkin speculates. “This presence alone might possibly lead to a reduction in human ethical infractions,” he writes. Technical hurdles notwithstanding, if such battlefield systems are shown to be better at adhering to international humanitarian law than human beings, “there may even exist a moral imperative” to use them, he argues.

“Let us not stifle research in the area or accede to the fears that Hollywood and science fiction in general foist upon us,” Arkin writes, adding in conclusion: “It’s not clear how one can bring the necessary people to the table for discussion starting from a position for a ban derived from pure fear and pathos.”

Will future AI be more The Terminator or more WALL-E?


Tech’s Ron Arkin with a few of his robot pals.

As any moviegoer can tell you, ambivalence about robotics and autonomous systems is strong within popular culture. One needn’t look hard to find a movie—cue The Terminator, The Matrix and this year’s Avengers: Age of Ultron—about a dystopian future in which humanity and its machines face off in a war for survival. But there are an equal number of positive portrayals, where robots choose to help mankind, including the droids of Star Wars, Short Circuit, The Iron Giant and WALL-E.

Meanwhile, consumers continue to delight in the ever-improving generations of Siri, Apple’s intelligent personal assistant, which uses a natural language user interface that gets more accurate and personalized over time.

Recently, Stephen Hawking and Microsoft co-founder Bill Gates wandered into the fray with statements about the potential dangers of unchecked artificial intelligence.

Ominous warnings about AI’s risks irk Arkin, but he welcomes calm discussion about the impact of these systems on humanity now and in the future. “To me, that’s a reasonable approach,” he says. And he believes that this discussion shouldn’t be limited to just computer scientists and engineers.

“The important thing is you cannot leave this up to the AI researchers,” The Washington Post quoted Arkin as saying at the ITIF event in D.C. “You cannot leave this up to the roboticists. We are an arrogant crew, and we think we know what’s best and the right way to do it, but we need help.”

Indeed, funding is flowing to the subject. Earlier this year, after billionaire entrepreneur Elon Musk donated $10 million to the Future of Life Institute (FLI) to finance studies aiming to keep AI safe and beneficial, almost 300 teams submitted their research proposals. The FLI has since decided to grant $7 million from Musk and the Open Philanthropy Project to 37 different projects over the next three years.

While some futurists fret, robots have already shown great promise as helpers and rescuers.

One of the leading lights in the so-called rescue robotics field is a former student of Arkin’s, Robin Murphy, ME 80, MS CS 89, PhD CS 92, currently Raytheon Professor of Computer Science and Engineering, and Engineering Faculty Fellow for Innovation in High-Impact Learning Experiences at Texas A&M University.

Murphy’s specialized robots work on land, underwater and in the air, and include novel physical designs, such as a robot that moves like a snake to burrow in granular materials (a collaboration with Georgia Tech and Carnegie Mellon University). To date, her robots have been deployed in 20 disasters, including urban search and rescue, structural inspection, hurricanes, flooding, mudslides, mine disasters, radiological events, and wilderness search and rescue.

But Murphy is forever refining these platforms.

“Our research focuses on human-robot interaction because a little bit over 50 percent of the documented terminal failures of robots during a disaster are ‘human error,’” Murphy says, clarifying that these are often errors made by the human designer of the system, not the robot’s operator.

One recent design improvement has been to give the robot and its operator the same visual view. “If both [the robot and the robot operator] are looking at the same thing, generally what the robot camera view is, there is less confusion,” she says. The Skywriter project, which allows a user with no experience to exploit a visual common ground with the robot, also adds the ability to sketch on or highlight the camera view, obviating the need for verbal instructions and simplifying the process of sending directions to a robot in the field. Skywriter is now being commercialized.

Advances in image processing are going to be a huge enabler for disaster robotics, Murphy says. “During the recent floods in Texas, small UAVs (unmanned aerial vehicles), including ours, flew missing persons missions,” she says, explaining that a single, 20-minute flight could generate upward of 800 high-resolution images. Rather than manually inspecting these images for signs of a person, Murphy’s students programmed, in a single day, a set of anomaly detectors to help triage the images.
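The article doesn’t detail the students’ detectors, but a minimal sketch in the same spirit might flag frames whose color statistics stand out from the rest of the batch, so searchers review those first. Everything here, from the six-number color signature to the z-score threshold, is an illustrative assumption:

```python
# A minimal, hypothetical image-triage sketch: flag aerial frames whose
# color statistics are unusual for the batch, so humans review those first.
# This is illustrative only, not the Texas A&M team's actual detectors.
import numpy as np
from PIL import Image

def color_signature(path: str) -> np.ndarray:
    """Per-channel mean and std of the RGB pixels: a crude 6-number summary."""
    pixels = np.asarray(Image.open(path).convert("RGB"), dtype=float)
    return np.concatenate([pixels.mean(axis=(0, 1)), pixels.std(axis=(0, 1))])

def triage(paths: list[str], z_threshold: float = 2.5) -> list[str]:
    """Return the paths whose signature deviates most from the batch average."""
    sigs = np.array([color_signature(p) for p in paths])
    z = np.abs((sigs - sigs.mean(axis=0)) / (sigs.std(axis=0) + 1e-9))
    return [p for p, score in zip(paths, z.max(axis=1)) if score > z_threshold]

# Usage, hypothetically: review = triage(sorted(glob.glob("flight_042/*.jpg")))
```

A real detector would look for person-sized shapes and colors unusual for the terrain, but even a crude statistical filter can shrink 800 images to a reviewable handful.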

Like her colleagues, Murphy is enthusiastic about the future of robotics. But just as the personal computer didn’t make its profound impact on mass culture until compelling applications arrived, robotics will take off only when its own compelling applications come to market. “Just having the computer power wasn’t sufficient,” she says.

Tech faculty and students have also built several AI-driven robots that are working to help humanity on multiple fronts, from teaching autistic children how to play games to specialized, “smart” prosthetic devices.

Physical and Psychological Safety

Working out the different ways in which humans will work and play with intelligent machines, and the potential risks of embedding them in the community, can take interesting paths. It turns out that physical safety is just one part of the puzzle.

As AI and robotics increase in sophistication, so will their potential interactions with people. In theaters this year, the sci-fi movie Ex Machina plays with these themes. The thriller asks: How can we tell if an artificial system is intelligent, and can we trust it? Georgia Tech computer scientists like Mark Riedl are working on this question, too—well, at least the first half.

What Arkin calls “intimate robots” tap into the human tendency to anthropomorphize inanimate objects, “to make people care about these plastic, metal and electronic artifacts.” Arkin should know: Years ago, he worked with Sony on its now-discontinued robot dog, AIBO.

There are deep ethical considerations around these products. “When we take it to the next level, which is where we incorporate more physical interactions, where we foster the illusion of life in these systems, there are questions about the psychological and sociological impacts,” says Arkin, who explores these questions in his class “Robots and Society.”

In other words, what if a robot or AI looks as human as it acts? The character testing the robotic AI in Ex Machina, for instance, has a decidedly difficult time because the robot takes the form of a very attractive woman.

Such sci-fi is not far off base. Consider Realbotix, a new project by Matt McMullen, who developed the RealDoll, a customizable, life-size sex doll that has sold more than 5,000 units since 1996. Within the next two years, McMullen plans to produce an interactive head that can be attached to an existing RealDoll body. That product, selling for around $10,000, will be followed by a full-body Realbotix, which will reportedly range in price from $30,000 to $60,000.

Closer to commercialization is Han, a robotic head that can engage in basic conversations and go through a range of complex facial expressions, thanks to about 40 motors controlling his artificial facial muscles. The lifelike quality of the face is made possible by a patented elastic polymer called “Frubber,” short for “Flesh Rubber,” according to Hanson Robotics, which showcased Han at the Global Sources electronics fair in Hong Kong in April. Hanson executives suggest Han will be useful in settings where face-to-face communication is important, such as hotel registration, entertainment and hospitals.

The ethics of such intimate systems is “No. 2 on my list, actually, in the things we should be worried about,” Arkin notes, pointing out that there currently are no restraints on their commercial development.

While humanoid robots have come a long way in just the last few years, Tech scientists are pursuing non-humanoid robotic designs too, taking their cues from other living creatures.

Finally, some economists remain worried about the economic repercussions of autonomous systems, with more advanced robots and computers potentially taking more jobs than they create. “It’s a topic we discuss, too,” Arkin says. “Right now we have cheap labor that’s gone overseas and created a middle class in China and India. If we create robots that are cheaper than them, what will this mean to geopolitical stability?”

What happens to people during this transition concerns Arkin. “It’s a question of how societies and political systems manage that change,” he says.

‘Strong’ vs. ‘Weak’ AI

Are scientists close to creating a self-aware artificial intelligence, one that thinks, plots and acts of its own accord?

The simple answer is “No,” according to Georgia Tech College of Computing Regents Professor Ronald Arkin. Despite science-fiction novels and movies in which that kind of AI is a familiar character, such a powerful (potentially malevolent) entity isn’t likely in the foreseeable future, he and his colleagues say.

For decades, there was optimism and investment around “artificial general intelligence,” or machines able to perform general-purpose cognitive tasks as well as or better than human beings. But when the pursuit of so-called “strong AI” hit innumerable obstacles, computer researchers turned their attention to a plethora of narrower problems, such as algorithms able to perform classification, language translation, speech and facial recognition.

These “weak” AI systems “mimic what people do” in constrained settings, explains Mark Riedl, associate professor in the School of Interactive Computing. Today, weak AI is all around us: in smartphones, online shopping sites and customer-support telephone “agents.” Moreover, it’s improving rapidly. Platforms like Apple’s Siri, used by millions of people daily, “learn” from their mistakes and so become incrementally more accurate all the time.

Increasingly, we simply expect these automated systems to understand what we say and what we want, just like a human being would. “It’s what we call the ever-vanishing definition of AI,” Arkin says. “Meaning, as soon as you can do it, it’s no longer AI, and it vanishes into the background noise of technology.” —ELLIS BOOKER

 

BIG HERO 6

Programmed with advanced AI like the fictional Baymax, these robots’ mission is simple: Make human lives better.

1. MABU: The Mabu Personal Healthcare Companion, the brainchild of Tech alumnus Cory Kidd, CS 00, and the cornerstone of his company, Catalia Health Inc., is an in-home robot that has daily conversations with patients, asking them important questions about their health and making sure they’re on track with their prescribed medication, treatments and exercises. The combination of AI with behavioral psychology creates strong relationships between the health care companion and the patient. This encourages patients to adhere to treatment while providing the capability to collect daily data about the patient’s treatment progress and health. This data can be shared with caregivers—kept to the highest HIPAA standards, of course—to optimize treatment outcomes.

2. DARWIN-OP: It turns out that children with cognitive disabilities would rather play tablet video games with a robot than with an adult. DARwIn-OP (short for Dynamic Anthropomorphic Robot with Intelligence-Open Platform) is a small, perky humanoid robot that closely watches the kids play a game like Angry Birds, then takes its own turn, mimicking their gameplay movements and celebrating if it succeeds (or shaking its head if it does not). Tech researchers Ayanna Howard and Hae Won Park, an Electrical and Computer Engineering PhD candidate and member of the Human-Automation Systems (HumAnS) Lab, found that children played the game longer, and with more interaction, with DARwIn than with an adult—a huge boost to the kids’ perhaps otherwise dull rehabilitation exercises.

 

3 & 4. Simon and CuriCURI & SIMON: This dynamic duo does duty as personal service robots, working alongside humans to help out with everyday tasks such as operating a microwave or washing the dirty dishes. Curi and Simon, equipped with arms, fingers, eyes and even voices—adapt their skillsets in dynamic environments by interacting with humans and learning from them, what Tech researchers call socially guided machine learning. Andrea Thomaz, professor of interactive computing, continues to work with both robots to train them for advanced domestic duty. Curi, for one, has learned to cook a mean pasta dinner.

5. ROBOT DRUMMER: Unlike the other robots in this bunch, Robot Drummer doesn’t have a cool name. But music lovers will agree that what it does is definitely beyond cool. Developed by Gil Weinberg in Tech’s Center for Music Technology, the high-tech prosthesis can take the place of a drummer’s amputated arm and power two drumsticks, essentially making the musician a cyborg. The first stick is controlled physically by the drummer, as well as electronically using muscle sensors. The second stick’s AI “listens” to the music and improvises, employing anticipation algorithms to predict what the drummer will do next. It also allows the drummer to play faster than humanly possible.

6. A HERO WITH NO NAME: The PR2 is a commercially available robot model that has served as an in-home personal assistant for Henry Evans, a quadriplegic. Associate Professor of Biomedical Engineering Charlie Kemp and his team have programmed the PR2 and modified its hardware to intelligently adjust to its surroundings and operate in typical cluttered domestic settings. But what’s most impressive is that the robot uses custom whole-arm tactile sensors, made of a soft, stretchable fabric, designed to help it interact physically with humans in a gentle way. Most robots can’t do that, and it’s especially important when caring for a person with disabilities like Evans. —ROGER SLAVENS

SHALL WE PLAY A (STORYTELLING) GAME?

Tech Associate Professor Mark Riedl rethinks the Turing Test while exploring the role of creativity in artificial intelligence.

For centuries, philosophers and scientists have struggled to create definitions for consciousness and intelligence. The problem took on new dimensions when mathematician, philosopher and cryptanalyst Alan Turing wondered how we’d even recognize artificial intelligence in a man-made machine.

In his 1950 paper, “Computing Machinery and Intelligence,” Turing proposed a thought experiment, what he called an “imitation game.” In Turing’s game, a human being and a computer would converse with a human interrogator who wouldn’t know which was which. If the interrogator couldn’t distinguish between the two participants, it would be unreasonable not to call the computer “intelligent,” Turing argued. His insight was the necessity of a subjective judgment from external observation, thereby avoiding an explicit test of consciousness.

But there are problems with what’s now popularly known as the Turing Test, starting with its dependence on deception. In fact, programmers have learned to make computers more “human-like” and deceive less-sophisticated judges by, for example, having them respond to questions they don’t understand with vague answers or deflections. (Apple’s Siri, chat bots and automated voice response systems use these tricks today.)

Enter Mark Riedl, an associate professor in Georgia Tech’s School of Interactive Computing, who has conceptualized a creative update to the 65-year-old Turing Test.

For starters, Riedl’s thought experiment takes out the need for deception. “You know you’re talking to a computer,” he says. “Either it will be able to do the activity or it won’t.”

Furthermore, the activity isn’t a conversation. Instead, Riedl’s approach takes as its guide the “creative problem-solving and artistic skills” we associate with humans.

In Riedl’s thought experiment, dubbed the Lovelace 2.0 Test, one might start with a simple query, such as asking the system to draw a picture of a plant. Next, the system would be asked to draw a plant with blue leaves, for which there are no real-world examples. Finally, it’d be asked to draw a plant with blue leaves and four eyes. As constraints are added, the level of difficulty—the creative challenge—increases, he explains.
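As a protocol, the test is simple enough to sketch. In the hypothetical harness below, system.create is a placeholder for whatever generative AI is under test, and the judge stands in for the human evaluator Riedl envisions; only the escalating-constraints loop is the point:

```python
# A sketch of the Lovelace 2.0 protocol as described: keep adding
# constraints to a creative task until the system fails a round.
# `system.create` and `judge` are hypothetical placeholders; in practice
# a human evaluator decides whether each artifact meets the description.

def lovelace_2_test(system, judge, base_task: str, constraints: list[str]) -> int:
    """Return the number of rounds survived. Each round appends one more
    constraint to the accumulated task description."""
    survived = 0
    active: list[str] = []
    for constraint in constraints:
        active.append(constraint)
        artifact = system.create(base_task, active)  # e.g. draws a picture
        if not judge(artifact, base_task, active):   # human judgment call
            break
        survived += 1
    return survived

# e.g. lovelace_2_test(ai, human_judge, "draw a plant",
#                      ["with blue leaves", "and four eyes"])
```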

Lovelace 2.0 builds on earlier work by Selmer Bringsjord, Paul Bello and David Ferrucci, who in 2001 proposed a test that judges whether an artificial agent possesses intelligence by whether its creative output can “take us by surprise.”

Riedl’s Lovelace 2.0 has never been run, because “there is no AI that could succeed more than one or two rounds,” Riedl explains, adding that even his own state-of-the-art storytelling program probably couldn’t last more than two rounds.

Storytelling and Game Design

That program is a software system developed by Riedl and his colleagues that generates original stories about everyday events.

Aptly named “Scheherazade,” for the legendary queen and storyteller of One Thousand and One Nights, the system writes original stories about going to the airport, a movie date or a bank robbery.

Story generators dealing with the real world present a harder problem than some earlier systems, which output fairy tales, he explains. That’s because if the details about a mundane activity, such as going to a restaurant, are wrong, out of order or just plain weird, “it becomes very apparent to the human” that something’s wrong, that the system isn’t working right, Riedl says.

Scheherazade augments its knowledge base by using crowdsourcing platforms like Amazon’s “Mechanical Turk” to ask questions about everyday scenarios. Consider a trip to the airport. As frequent flyers know, there’s a lot involved: standing at the ticket desk, taking off your shoes in a TSA line, dealing with screaming infants and much more. By looking for patterns among these crowdsourced stories, Scheherazade synthesizes elements to use in its own, original story.
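The real system’s models are beyond the scope of this article, but a toy sketch conveys the flavor of mining crowdsourced outlines for a consensus ordering of events. The three “accounts” and the precedence-counting heuristic below are invented for illustration:

```python
# Toy sketch of the crowdsourcing idea behind Scheherazade: gather many
# step-by-step accounts of an activity, then order the events by how
# often each precedes the others. The real system is far richer.
from collections import Counter
from itertools import combinations

accounts = [  # hypothetical crowdsourced "trip to the airport" outlines
    ["check in", "security line", "remove shoes", "board plane"],
    ["check in", "security line", "board plane"],
    ["security line", "remove shoes", "board plane"],
]

precedes = Counter()  # counts of (earlier_event, later_event) pairs
for account in accounts:
    for earlier, later in combinations(account, 2):
        precedes[(earlier, later)] += 1

events = {event for account in accounts for event in account}

def net_precedence(event: str) -> int:
    """Times this event precedes others, minus times it follows them."""
    wins = sum(n for (a, _), n in precedes.items() if a == event)
    losses = sum(n for (_, b), n in precedes.items() if b == event)
    return wins - losses

script = sorted(events, key=net_precedence, reverse=True)
print(" -> ".join(script))
# check in -> security line -> remove shoes -> board plane
```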

Scheherazade wrote the following story about a bank robbery. It was one of a variety of different bank robbery stories, in which parameters were changed to try to control the “colorfulness” of the language.

“Bank Robbery Situation”

John drove to the bank, with a nervous look on his face.

John opened the bank door while his heart was beating fast.

John put on sunglasses.

John walked into the bank with a handgun underneath his jacket.

John looked around the bank, scoping out security cameras or guards.

John noticed one of the tellers named Sally seemed bored and distracted.

John stood in line.

John approached Sally naturally as to not raise alarm.

John pulled out a gun.

John leveled the gun at Sally and kept it on her.

Sally let out a bone-chilling scream.

John barked his orders at Sally, demanding she put the money in the bag.

John forced the bag into Sallys hands.

Sallys hands were trembling as she put the money in the bag.

John then grabbed the bag of money out of Sallys nervous hands.

Sally felt tears streaming down her face as she let out sorrowful sobs.

Still shaken, Sally reached for the phone and in a panicked manner called the police.

John quickly fled the bank and entered into his car.

John escaped in the car.

AI on a Whole New Level

Another project that Riedl is working on is a system that designs new maps for the Super Mario Brothers video game.

In operation, the as-yet-unnamed program studies YouTube videos of people playing the Super Mario Brothers video game. It uses machine vision to look for patterns in the gameplay footage, watching where players spend their time. Its goal is to identify high-interaction areas—those spots where players linger to collect bonus items or master a challenge.

The system then tries to create new levels. It doesn’t duplicate exactly what’s out there, Riedl explains, but creates a map that a human would assess and realize wasn’t completely random.
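A toy sketch of that pipeline might look like the following, with hypothetical pre-extracted play traces standing in for the machine-vision step, and engagement-weighted sampling standing in for the full level-design model:

```python
# Toy sketch: estimate where players linger, then bias a generated level
# toward those high-interaction segment types. Hypothetical pre-extracted
# (segment_type, seconds_spent) traces replace the machine-vision step.
import random

traces = [
    ("flat_run", 2.0), ("coin_arc", 9.5), ("pipe_jump", 6.0),
    ("flat_run", 1.5), ("coin_arc", 8.0), ("pit_gap", 4.0),
]

# Aggregate dwell time per segment type as an "interaction" score.
scores: dict[str, float] = {}
for segment, seconds in traces:
    scores[segment] = scores.get(segment, 0.0) + seconds

def generate_level(length: int, seed: int = 0) -> list[str]:
    """Sample segments with probability proportional to interaction score,
    echoing (without copying) the patterns players engaged with most."""
    rng = random.Random(seed)
    segments, weights = zip(*scores.items())
    return rng.choices(segments, weights=weights, k=length)

print(generate_level(8))  # e.g. more coin arcs and pipe jumps than flat runs
```

The generated map isn’t a copy of anything the system watched, but a human playing it would recognize the sensibility, which is the creativity question Riedl is probing.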

“What we’re really interested in is the role of human expertise in creativity,” he says, noting that this is similar to the strategy of the Scheherazade storytelling program.

“This is a creativity problem, too,” Riedl explains. “We want to see if a system can learn to mimic these skills.” —ELLIS BOOKER

 
