Elon Musk has something to tell the world: The robots are coming.

Recently, the Tesla CEO joined Mustafa Suleyman of DeepMind (the artificial intelligence lab owned by Google's parent company, Alphabet) and 114 other founders of robotics and AI companies to call for a ban on killer robots.

No, this isn't science fiction. As the specialists wrote in a joint statement:

"Once developed, lethal autonomous weapons will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways."

The statement ends on a grim note.

"We do not have long to act. Once this Pandora's box is opened, it will be hard to close."

This isn't the first time that Elon Musk has spoken out about robots and artificial intelligence. But are his warnings really necessary? Is artificial intelligence really an existential threat to humanity?

We reached out to a robotics expert to find out. Dr. Ross Mead is the CEO of Semio, a startup developing an operating system and app ecosystem for robots. Mead holds an MS and a PhD in computer science from the University of Southern California, so we figured that he'd shut down some of Elon Musk's hyperbolic statements pretty quickly.

After all, artificial intelligence isn't really coming to get us, right?

"I actually agree with Elon on a lot of this," Mead says. Strap in, because this gets pretty terrifying. Here are some of Musk's most frightening—and possible—warnings:

1. AI poses an "existential threat" to humanity.

Is artificial intelligence really that dangerous? Musk certainly thinks so, and he hasn't minced words on the topic. At the National Governors Association meeting in Rhode Island, the Tesla executive addressed a bipartisan group of governors.

"AI is a fundamental existential risk for human civilization, and I don't think people fully appreciate that," Musk told them. We ran that quote by Dr. Mead.

"He might be taking it to an extreme to drive home the point," Mead said. "The threats of war and stuff like that—they make sense. There are autonomous weaponized systems out there."


Still, Mead says that artificial intelligence carries some immediate risks. One of the big ones involves machine learning—a computer's ability to learn without being explicitly programmed. Basically, a computer will take in data, analyze it, make decisions, then act on those decisions. That drives many of the practical uses of artificial intelligence, but there's a problem.

"A lot of my concerns actually come from where the data comes from, on the machine-learning side of things," Mead says.

Feed an artificial intelligence biased data, and it interprets the bias as objective fact. To put it another way, if a computer only has access to information from a certain demographic, it might make racist, sexist, or otherwise problematic decisions.
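To make that concrete, here's a minimal sketch in Python of how a skewed historical record becomes a skewed decision rule. The loan scenario, the groups, and every number below are invented for illustration; this isn't any real lending system.

```python
# A toy illustration: historical loan decisions that were skewed
# against group "B" for cultural reasons, not financial ones.
history = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def train(records):
    """'Learn' an approval rate per group. The model never asks whether
    the historical decisions were fair to begin with."""
    totals, approvals = {}, {}
    for r in records:
        totals[r["group"]] = totals.get(r["group"], 0) + 1
        approvals[r["group"]] = approvals.get(r["group"], 0) + int(r["approved"])
    return {g: approvals[g] / totals[g] for g in totals}

model = train(history)  # {"A": 0.67, "B": 0.33}

def decide(applicant, threshold=0.5):
    # The cultural bias is now "math": group B is denied by default.
    return model[applicant["group"]] >= threshold

print(decide({"group": "A"}))  # True
print(decide({"group": "B"}))  # False
```

Nothing in that code is malicious; the bias lives entirely in the data it was handed.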


Police in Durham, England, recently announced plans to use an artificial intelligence tool called HART (the Harm Assessment Risk Tool) to assess the risk that suspects will reoffend. Suspects may be held in custody or released based in part on the AI's risk assessment—a frightening prospect if there's any sort of bias in the data. Unfortunately, people are less likely to notice biases in computer data, since they assume that the computer is working with objective information.

Mead didn't single out race or gender as biases, but he did point to targeted ads on social media, which use artificial intelligence to decide which ads each user sees.

"If the data you're training on doesn't encapsulate certain factors, the system's not going to learn it....My concern is that what might end up happening is we train these models, then a naive person will say, 'Obviously, the system can't be biased,' when in reality, they don't understand that their bias altered the outcome of the computer. We've taken something that's cultural and turned it into math, and that, to me, is dangerous."

Musk has repeatedly warned about that type of threat.

The irony, of course, is that Musk's companies are playing a crucial role in stretching the ethical boundaries of AI.

2. Self-driving cars will become "normal, like an elevator."

"I don't think we have to worry about autonomous cars, because that's sort of a narrow form of AI, and not something I think is very difficult to do actually," Musk told Gizmodo. "To do autonomous driving to the degree that's much safer than a person is much easier than people think."

For the most part, it's true that the cars are more perceptive than human drivers—current autonomous vehicles have 360-degree ultrasonic sensors that far exceed the sensory capabilities of humans—but these vehicles have their own set of ethical challenges. Consider a situation in which a self-driving car must either drive into an obstacle, harming its driver, or drive through a crowd of people, harming numerous others.

"All seem like bad options," Mead says. "The system might have to choose one of the evils. It would have to choose something bad."


That comes back to bias: is the self-driving car biased in favor of its driver, or of total strangers? Current technology essentially assigns a point value to each outcome, and the vehicle chooses the action with the best overall score based on all of the available data. If some of the data is erroneous—or biased—the machine might make an ethically problematic decision.
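A rough sketch of that kind of scoring might look like the following. The maneuvers, harm scores, and weights here are all hypothetical, invented for this example rather than taken from any manufacturer's actual system.

```python
# Hypothetical cost-based action selection of the kind Mead describes.
candidate_actions = {
    # action: estimated harm per affected party (higher is worse)
    "swerve_into_barrier": {"occupants": 8, "pedestrians": 0},
    "brake_hard":          {"occupants": 2, "pedestrians": 5},
    "continue_straight":   {"occupants": 0, "pedestrians": 9},
}

# The weights encode whose safety the system favors. This is exactly
# where a hidden bias, toward occupants or toward bystanders, lives.
weights = {"occupants": 1.0, "pedestrians": 1.0}

def total_cost(harms):
    return sum(weights[party] * harm for party, harm in harms.items())

best = min(candidate_actions, key=lambda a: total_cost(candidate_actions[a]))
print(best)  # "brake_hard" with equal weights; reweight and the choice flips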

Mead also notes that other ethical issues could pop up when autonomous vehicles begin "talking" to one another to coordinate routes. Drivers who pay a premium rate might be able to get faster routes than other drivers—a disturbing concept for anyone who commutes.
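Part of what makes that scheme unsettling is how easy it would be to build. Here's a hypothetical sketch of a coordinator that assigns the fastest routes to premium subscribers first; the tiers, cars, and route times are all invented, and this is not a real vehicle-to-vehicle protocol.

```python
# Hypothetical route coordinator that serves premium subscribers first.
routes = [12, 15, 22, 30]  # available route times in minutes, fastest first

requests = [
    {"car": "commuter_1", "tier": "standard"},
    {"car": "premium_1",  "tier": "premium"},
    {"car": "commuter_2", "tier": "standard"},
    {"car": "premium_2",  "tier": "premium"},
]

# Premium cars are assigned routes first; everyone else gets what's left.
for req in sorted(requests, key=lambda r: r["tier"] != "premium"):
    req["route_minutes"] = routes.pop(0)

for req in requests:
    print(req["car"], req["route_minutes"])
# commuter_1 22, premium_1 12, commuter_2 30, premium_2 15
```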

Still, well-constructed autonomous vehicles could save thousands of lives each year, and Mead agrees that they will be largely beneficial. He says that as programmers structure AI to address these types of problems, they'll have to be extremely careful.

"We need to have a very strategic way in which we structure the rules of the game," Mead says.

3. Governments need to regulate AI before it's too late.

"AI is a rare case where we need to be proactive about regulation instead of reactive," Musk told the National Governors Association. "Because I think by the time we are reactive in AI regulation, it’s too late."

Before we had a chance to read that quote to Mead, he'd said almost exactly the same thing. Mead also pointed out that, as the CEO of a major artificial intelligence company, Elon Musk has a strong incentive to fight regulation, which makes his call for it all the more striking.


"Either he's a terrible CEO, which I don't think prior evidence backs up, or he really genuinely feels that way," Mead says.

In the aforementioned statement on "killer" robots, Musk's group calls for lethal autonomous weapons to be added to the list of weapons banned under the United Nations' Convention on Certain Conventional Weapons (CCW). Dr. Mead agreed with that assessment, but noted that weaponized AI is already in use.


That's sort of a nightmare scenario, and here's why: With self-driving cars, the AI acts to preserve human life. In war, the goals are different, and AI might be tasked with finding new and inventive ways to maximize destruction. What's more, it won't face the moral dilemmas that a human would face.

"Sometimes the only way to win is not to play," Mead says, citing the movie War Games. "I think that we should not allow machines to make direct decisions about ending human lives."

4. "If you're not concerned about AI safety, you should be."

In one tweet, Musk mentioned that artificial intelligence carries "vastly more risk than North Korea."

Musk often points to AI capabilities that don't yet exist as potential threats, but the risks aren't all hypothetical. Machine-learning systems already drive trading at many of the world's most successful investment banks, and an error in their data can have catastrophic consequences for the world economy.

Musk presented another hypothetical: robotically maintained strawberry farms.

"Let's say you create a self-improving A.I. to pick strawberries," Musk told Vanity Fair, "and it gets better and better at picking strawberries and picks more and more and it is self-improving, so all it really wants to do is pick strawberries. So then it would have all the world be strawberry fields. Strawberry fields forever." And that, Vanity Fair assumes, implies there would be no room for humans.


The good news is that the potential benefits of AI match the potential drawbacks. In the near future, Mead expects AI to greatly change personal security (for instance, your credit card might not work without your fingerprint) and advertising (if you see someone wearing a shirt that you like, you could order it on the spot without even opening your phone's web browser).

He notes that devices like Amazon's Echo personal assistant are breaking new ground, and he's optimistic about the changes that AI and robotics will bring over the next few years. Eventually, AI could cure diseases, fight famine, and create a fairer, more equitable world.

Still, Mead acknowledges that there's cause for concern, and he says that Musk isn't being alarmist.

"Musk is very forward thinking," Mead says. "At the very least, he's creating the opportunity for conversation. And once you've gone too far with AI, it's hard to pull back."
