Sunday, February 08, 2015

What do Stephen Hawking, Elon Musk and Bill Gates all have in common?

They are concerned about the dangers posed by artificial intelligence:

Stephen Hawking warns artificial intelligence could end mankind
[...] He told the BBC: "The development of full artificial intelligence could spell the end of the human race."

His warning came in response to a question about a revamp of the technology he uses to communicate, which involves a basic form of AI.

But others are less gloomy about AI's prospects.

The theoretical physicist, who has the motor neurone disease amyotrophic lateral sclerosis (ALS), is using a new system developed by Intel to speak.

Machine learning experts from the British company SwiftKey were also involved in its creation. Their technology, already employed as a smartphone keyboard app, learns how the professor thinks and suggests the words he might want to use next.
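The core idea behind that kind of word suggestion is simple to sketch: learn from past text which words tend to follow which, then rank candidates by frequency. Here's a minimal bigram-counting illustration in Python; the sample text and function names are my own for illustration, not SwiftKey's actual model, which is far more sophisticated.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Map each word to a Counter of the words that followed it."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        model[current][following] += 1
    return model

def suggest(model, word, k=3):
    """Return up to k of the most frequent continuations of `word`."""
    return [w for w, _ in model[word.lower()].most_common(k)]

sample = ("the universe is expanding and the universe is vast "
          "and the universe began with a big bang")
model = train_bigrams(sample)
print(suggest(model, "universe"))  # prints ['is', 'began']
```

A real predictive keyboard would also personalize these counts to the individual user, which is presumably what "learns how the professor thinks" refers to.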

Prof Hawking says the primitive forms of artificial intelligence developed so far have already proved very useful, but he fears the consequences of creating something that can match or surpass humans.

"It would take off on its own, and re-design itself at an ever increasing rate," he said.

"Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded." [...]
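The arithmetic behind Hawking's worry is worth making explicit: multiplicative self-improvement compounds exponentially, while additive improvement grows only linearly. This toy calculation (the growth rates are arbitrary assumptions, not a claim about real AI or biology) shows how quickly the gap opens.

```python
# Toy comparison: compounding self-redesign vs. slow additive change.
# The 50% and 1% figures are illustrative assumptions only.
machine, human = 1.0, 1.0
for generation in range(10):
    machine *= 1.5   # each redesign multiplies capability by 1.5
    human += 0.01    # slow, additive "biological" improvement

print(f"after 10 generations: machine {machine:.1f}x, human {human:.2f}x")
```

After ten rounds the multiplicative process is already more than fifty times its starting point, while the additive one has barely moved; that divergence, not any particular rate, is the point of the quote.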

Elon Musk Thinks Sci-Fi Nightmare Scenarios About Artificial Intelligence Could Really Happen
[...] Musk, who called for some regulatory oversight of AI to ensure "we don't do something very foolish," warned of the dangers.

"If I were to guess what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence," he said. "With artificial intelligence we are summoning the demon."

Artificial intelligence (AI) is an area of research with the goal of creating intelligent machines which can reason, problem-solve, and think like, or better than, human beings can. While many researchers wish to ensure AI has a positive impact, a nightmare scenario has played out often in science fiction books and movies — from 2001 to Terminator to Blade Runner — where intelligent computers or machines end up turning on their human creators.

"In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out," Musk said. [...]

Bill Gates: Elon Musk Is Right, We Should All Be Scared Of Artificial Intelligence Wiping Out Humanity
Like Elon Musk and Stephen Hawking, Bill Gates thinks we should be concerned about the future of artificial intelligence.

In his most recent Ask Me Anything thread on Reddit, Gates was asked whether we should feel threatened by machine superintelligence.

Although Gates doesn't think it will bring trouble in the near future, that could all change in a few decades. Here's Gates' full reply:

I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned.

Google CEO Larry Page has also previously talked on the subject, but didn't seem to express any explicit fear or concern.

"You can't wish these things away from happening," Page told The Financial Times when asked whether computers would take over more jobs as they become more intelligent. But he added that this could be positive for the economy.

At the MIT Aeronautics and Astronautics' Centennial Symposium in October, Musk called artificial intelligence our "biggest existential threat."

Louis Del Monte, a physicist and entrepreneur, believes that machines could eventually surpass humans and become the most dominant species since there's no legislation regarding how much intelligence a machine can have. Stephen Hawking has shared a similar view, writing that machines could eventually "outsmart financial markets" and "out-invent human researchers."

At the same time, Microsoft Research's chief Eric Horvitz just told the BBC that he believes AI systems could achieve consciousness, but that they won't pose a threat to humans. He also added that more than a quarter of Microsoft Research's attention and resources are focused on artificial intelligence.

They all seem to agree that any threat is not immediate, and probably far off in the future. As far as I can see, machines so far merely mimic intelligence. They certainly have no consciousness.

I found the remark by the Microsoft researcher interesting, that he believes "AI systems could achieve consciousness". I don't see how that could be possible, which is what makes the remark... interesting. It's interesting, too, that Microsoft is focusing such a large percentage of its attention and resources on AI. What would an "artificial consciousness" created by Microsoft be like? Hopefully, nothing like Windows 98. ;-)

Read the complete original articles for embedded links and more.