The Concord Monitor (N.H.), April 14:

On Tuesday, “Granite Geek” columnist David Brooks wrote about the rather shocking artificial intelligence of Google DeepMind’s AlphaGo, which mastered Go, a complex Asian board game. Even Lee Sedol of South Korea, one of the world’s best players, was no match for the “deep learning” program.

For those who grew up with the TRS-80, AlphaGo’s ability to “think” its way to decisive victory is yet another example of science fiction becoming science fact. Our inner nerd can’t help but smile, even as our inner philosopher shudders at the implications of software that functions like the human brain.

And we’re not alone. As Brooks mentioned, big-brained people such as Stephen Hawking, Bill Gates and Steve Wozniak are worried, too. In fact, Elon Musk is donating millions toward research into the creation of “friendly” artificial intelligence, an effort destined to be mind-bogglingly difficult.

In an interview with the alumni magazine of the University of California, Berkeley, Professor Colin Allen of Indiana University posed the problem of “ethical sensitivity” this way: While it may seem like a good idea to program automated cars so they never break the speed limit, what happens if a passenger is bleeding to death in the back seat?

Society needs smart machines to understand that sometimes breaking the rules is the right moral decision, Allen said. The question then is, should robots be programmed with a specific moral code or should they, like humans, learn morality organically?

The first option is loaded with pitfalls. Even if humanity could agree on a single moral code, how could programmers make sure there were no loopholes to exploit?

Imagine, for example, robots interpreting the code to justify killing 75 percent of the world’s population because doing so represented the best way to ensure the survival of the human race on an overburdened planet. And what about computing the “butterfly effect”? A robot trying to make the best choice for its human creators could fry its brain trying to calculate all of the possible ramifications of a single decision.

The second option may well hold the best chance for the peaceful coexistence of man and machine. David Gunkel, author of The Machine Question: Critical Perspectives on AI, Robots, and Ethics, told ChicagoInno.com that “we need to tap centuries of human understanding” to create a symbiotic relationship with machines. Gunkel said that means drawing on the “best intelligence from philosophy, literature, sociology, anthropology, music, etc.” to help robots understand what it means to be human.

Gunkel’s bigger concern, however, has more to do with human weakness, specifically that people will be vulnerable to manipulation by increasingly human machines. Suppose, he said, that a robot pleads with you not to shut it off because the experience is painful. To ignore the robot’s plea requires resistance to a “natural inclination for empathy.” But to show mercy for the robot is to give it “emotional capability” – and that’s where the line between man and machine can become hopelessly, and dangerously, blurred.

This is an amazing age for technology, and AlphaGo offers another glimpse of the possible paths for artificial intelligence. But humanity can’t afford to sit back and watch with wonder and amusement as the future becomes the present. “The time to start thinking about potential consequences of machine learning is now,” Gunkel said, “while it is still just a game.”

Our human experience tells us the alarm will not reach all of the right ears, especially in Washington, until it’s too late. But the very real concern among some very smart people gives us reason for optimism.

