Artificial intelligence won't destroy the human race anytime soon

The Allen Institute for Artificial Intelligence (AI2), funded by billionaire Paul Allen, is developing projects like an AI-based search engine for scientific papers and a system to extract “visual knowledge” from images and videos. According to Scientific American, another goal of AI2 is “to counter messages perpetuated by Hollywood and even other researchers that AI could menace the human race.” SciAm’s Larry Greenemeier interviewed AI2 CEO and computer scientist Oren Etzioni:

Why do so many well-respected scientists and engineers warn that AI is out to get us?

It’s hard for me to speculate about what motivates somebody like Stephen Hawking or Elon Musk to talk so extensively about AI. I’d have to guess that talking about black holes gets boring after a while—it’s a slowly developing topic. The one thing that I would say is that when they and Bill Gates—someone I respect enormously—talk about AI turning evil or potential cataclysmic consequences, they always insert a qualifier that says “eventually” or this “could” happen. And I agree with that. If we talk about a thousand-year horizon or the indefinite future, is it possible that AI could spell doom for the human race? Absolutely it’s possible, but I don’t think this long-term discussion should distract us from the real issues like AI and jobs and AI and weapons systems. And that qualifier about “eventually” or “conceptually” is what gets lost in translation…

How do you ensure that an AI program will behave legally and ethically?

If you’re a bank and you have a software program that’s processing loans, for example, you can’t hide behind it. Saying that my computer did it is not an excuse. A computer program could be engaged in discriminatory behavior even if it doesn’t use race or gender as an explicit variable. Because a program has access to a lot of variables and a lot of statistics, it may find correlations between zip codes and other variables that come to constitute a surrogate race or gender variable. If it’s using the surrogate variable to affect decisions, that’s really problematic and would be very, very hard for a person to detect or track. So the approach that we suggest is this idea of AI guardians—AI systems that monitor and analyze the behavior of, say, an AI-based loan-processing program to make sure that it’s obeying the law and to make sure it’s being ethical as it evolves over time.
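The surrogate-variable problem Etzioni describes is concrete enough to sketch. Below is a minimal, hypothetical example of the kind of check an "AI guardian" might run over a loan model's audit data: measure how strongly each input feature is associated with a protected attribute the model itself never sees, and flag features that may be acting as a surrogate for it. This is not AI2's actual system, and the dataset and column names are invented for illustration.

```python
# A minimal sketch of a "guardian"-style proxy check, not AI2's actual system.
# Assumes audit data where the protected attribute is known to the auditor
# even though the loan model never receives it. All names are hypothetical.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def cramers_v(x: pd.Series, y: pd.Series) -> float:
    """Cramer's V: association strength between two categorical variables (0 to 1)."""
    table = pd.crosstab(x, y)
    chi2, _, _, _ = chi2_contingency(table)
    n = table.to_numpy().sum()
    min_dim = min(table.shape) - 1
    return float(np.sqrt(chi2 / (n * min_dim)))

def flag_surrogates(df, protected, features, threshold=0.5):
    """Return features so strongly associated with the protected attribute
    that they may be acting as a surrogate variable for it."""
    scores = {f: cramers_v(df[f], df[protected]) for f in features}
    return {f: round(v, 2) for f, v in scores.items() if v >= threshold}

# Toy audit log: the model sees zip_code and income_band, never race.
audit = pd.DataFrame({
    "zip_code":    ["98101", "98101", "98052", "98052", "98101", "98052"],
    "income_band": ["low",   "low",   "high",  "high",  "low",   "high"],
    "race":        ["A",     "A",     "B",     "B",     "A",     "B"],
    "approved":    [0,       0,       1,       1,       0,       1],
})
print(flag_surrogates(audit, "race", ["zip_code", "income_band"]))
# {'zip_code': 0.67, 'income_band': 0.67} -- both features track race in this
# toy data, so a guardian would flag them for human review.
```

A real guardian would presumably watch a stream of live decisions rather than a static table, but the shape of the check is the same: compare what the model actually uses against what the law says it must not use, even indirectly.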

Do AI guardians exist today?

We issued a call to the community to start researching and building these things. I think there might be some trivial ones out there but this is very much a vision at this point. We want the idea of AI guardians out there to counter the pervasive image of AI—promulgated in Hollywood movies like The Terminator—that the technology is an evil and monolithic force.

AI Is Not out to Get Us (SciAm)
