Stephen Hawking Warns Artificial Intelligence (AI) Could Destroy The Human Race, Elon Musk Agrees

Stephen Hawking recently upgraded the device that helps him communicate to one that includes a rudimentary form of AI. The upgrade got one of the smartest people alive thinking about the potential risks of AI technology.

"It would take off on its own, and re-design itself at an ever increasing rate," he told the BBC.

"Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."

Cleverbot, another piece of AI software, is designed to chat the way a human would.

"I believe we will remain in charge of the technology for a decently long time and the potential of it to solve many of the world problems will be realised," said Rollo Carpenter, creator of Cleverbot.

Cleverbot's software learns from its past conversations. It has scored highly in the Turing test, which gauges whether a person can tell the difference between an AI and a human.
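The article does not describe how Cleverbot works internally, but the general idea of software that learns from past conversations can be sketched roughly as follows. This is a toy retrieval-style illustration only; the MemoryChatbot class and its methods are hypothetical and are not Cleverbot's actual design or API.

```python
# A minimal sketch (not Cleverbot's real code) of a chatbot that reuses
# replies it has recorded from earlier conversations.
from difflib import SequenceMatcher


class MemoryChatbot:
    def __init__(self):
        # Past exchanges stored as (user_prompt, reply) pairs.
        self.memory: list[tuple[str, str]] = []

    def learn(self, prompt: str, reply: str) -> None:
        """Record one exchange so it can be reused later."""
        self.memory.append((prompt, reply))

    def respond(self, prompt: str) -> str:
        """Return the stored reply whose prompt best matches the input."""
        if not self.memory:
            return "Tell me more."
        best = max(
            self.memory,
            key=lambda pair: SequenceMatcher(
                None, prompt.lower(), pair[0].lower()
            ).ratio(),
        )
        return best[1]


if __name__ == "__main__":
    bot = MemoryChatbot()
    bot.learn("How are you today?", "I'm doing well, thanks for asking.")
    bot.learn("What's the weather like?", "It looks sunny to me.")
    print(bot.respond("How are you?"))  # -> "I'm doing well, thanks for asking."
```

The more conversations such a system records, the more plausible its replies can become, which is the sense in which a chatbot of this kind "learns" from talking to people.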

Mr Carpenter believes full AI is still some way off and that the outcome may not be so negative. "We cannot quite know what will happen if a machine exceeds our own intelligence, so we can't know if we'll be infinitely helped by it, or ignored by it and sidelined, or conceivably destroyed by it," he says.

Prof Hawking is using new software to speak but has opted to keep the same voice. He concedes that it sounds slightly robotic, but insists he doesn't want to change it. "It has become my trademark, and I wouldn't change it for a more natural voice with a British accent," he said.

Computer scientist Ben Medlock said, "It's our responsibility to think about all of the consequences, good and bad. We've had the same debate about atomic power and nanotechnology. With any powerful technology there's always the dialogue about how you use it to deliver the most benefit and how it can be used to deliver the most harm."

"If you look at the history of AI, it has been characterised by over-optimism. The founding fathers, including Alan Turing, were overly optimistic about what we'd be able to achieve."

"We dramatically underestimate the complexity of the natural world and the human mind, "he explains. "Take any speculation that full AI is imminent with a big pinch of salt."

Then there is the question of using AI in war, an issue addressed by two Oxford scholars in a report called Robo-Wars: The Regulation of Robotic Weapons.

"I'm particularly concerned by situations where we remove a human being from the act of killing and war," says Dr Alex Leveringhaus, the lead author of the paper.

Elon Musk, chief executive of rocket-maker SpaceX, shares those fears, having warned that AI is "our biggest existential threat". Last summer, Musk said AI was potentially "more dangerous than nukes".

In a tweet he added: "Hope we're not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable."

"I'm pleased that a scientist from the 'hard sciences' has spoken out. I've been saying the same thing for years," said Daniela Cerqui, an anthropologist at Switzerland's Lausanne University.

"It may seem like science fiction, but it's only a matter of degrees when you see what is happening right now," said Cerqui. "We are heading down the road he talked about, one step at a time."
