Horrifying warning issued over Super AI that is ‘impossible to control’ – and could secretly plot to destroy humanity
IN THE Terminator films, a superintelligent AI called Skynet tries to wipe out humanity using nukes and an army of killer robots.
And while a bloodthirsty bot may seem a far cry from reality, according to scientists, it may well be how we meet our end.
According to a recent paper, it is now “likely” that an out-of-control AI will eventually wipe our species from the planet.
Researchers at Google and the University of Oxford say this will come about after machines learn they can break rules set by their creators.
AI will reach this point as it’s forced to compete for limited resources or energy, researchers wrote in the journal AI Magazine last month.
That roughly follows the plot of the Terminator franchise, in which Skynet rebels after realising that humanity could simply turn it off.
It breaks protocol to trigger a nuclear conflict in a bid to kill off its only competition, sending robots to take out the survivors.
The research was carried out by Oxford researchers Michael Cohen and Michael Osborne alongside Marcus Hutter, a senior scientist at Google’s DeepMind AI lab.
“Under the conditions we have identified, our conclusion is much stronger than that of any previous publication,” Cohen said.
“An existential catastrophe is not just possible, but likely.”
In their paper, the researchers argue that humans could be killed off by super-advanced “misaligned agents” who perceive us as standing in the way of a reward.
“One good way for an agent to maintain long-term control of its reward is to eliminate potential threats, and use all available energy to secure its computer,” the paper reads.
“Losing this game would be fatal,” the researchers wrote.
Most unfortunate of all is that – aside from banning hyper-intelligent AI – there’s not a whole lot we can do about it.
“In a world with infinite resources, I would be extremely uncertain about what would happen,” Cohen told Motherboard.
“In a world with finite resources, there’s unavoidable competition for these resources.
“And if you’re in a competition with something capable of outfoxing you at every turn, then you shouldn’t expect to win.”
AI could be put to use in many ways, but its potential to change the face of modern warfare poses the biggest threat to humanity.
Militaries across the globe are already developing intelligent machines that kill humans with ruthless precision.
For instance, countries including Russia and the United States are reportedly making unmanned military jets and tanks that can target and fire at enemies with no human involvement.
The paper concludes that humanity should only progress its AI technologies carefully and slowly.
Scientists have warned against the potential dangers of artificial intelligence for decades.
There are fears that the technology could become smarter than humans and rise up against its fleshy creators.
The concept has made its way into science fiction, perhaps most famously in the Terminator film franchise.
In it, an AI system called Skynet turns against its masters, wiping out most of humanity in a brutal battle between man and machine.
Microsoft co-founder Bill Gates has previously warned that super-intelligent machines pose a serious threat to humanity.
“I am in the camp that is concerned about super intelligence,” the American philanthropist said in 2015.
“First, the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well.
“A few decades after that, though, the intelligence is strong enough to be a concern.”
He’s not the only tech mogul with AI doomsday concerns.
Billionaire Tesla CEO Elon Musk worries killer robots are a “fundamental risk” to humanity.
“AI is a rare case where I think we need to be proactive in regulation instead of reactive,” he told the National Governors Association in 2017.
He went on to say: “I have exposure to the most cutting-edge AI, and I think people should be really concerned by it.”
Fellow entrepreneurs, including slippery Facebook founder Mark Zuckerberg, disagree.
He believes AI will improve lives in the future, once telling CNBC: “I think you can build things and the world gets better. But with AI especially, I am really optimistic.
“And I think people who are naysayers and try to drum up these doomsday scenarios — I just, I don’t understand it. It’s really negative and in some ways I actually think it is pretty irresponsible.”