Stephen Hawking Warned That Artificial Intelligence May Be Our Biggest Disaster


According to renowned physicist Professor Stephen Hawking, Artificial Intelligence could one day develop a ‘will of its own,’ becoming one of the greatest threats to humanity.

Professor Hawking warned that artificial intelligence could become so advanced that it may develop a will of its own, one that could conflict with that of humanity.

This could result in dangerous and powerful autonomous weapons, and he called on researchers to further study Artificial Intelligence and its possibilities.

However, Professor Hawking also said that if we do our homework and research enough, we could avoid the potential dangers and secure a better way of life, adding that Artificial Intelligence may help humanity ‘finally eradicate disease and poverty.’

Professor Hawking spoke at the launch of The Leverhulme Centre for the Future of Intelligence, which aims to research and explore the implications of the rapid development of artificial intelligence.

The Leverhulme Centre for the Future of Intelligence is a collaboration between several universities in the United Kingdom and the United States.

Their ultimate mission is to “create an interdisciplinary research community” which will work in close collaboration with business and government and try to determine, among other things, “the risks and benefits in the short and long-term” of artificial intelligence.

The director of the Leverhulme Centre for the Future of Intelligence, Huw Price, said that the creation of intelligent machines is a milestone for humanity and that the Centre will try to make “the future the best possible.”

Among other things, the Centre will analyze the consequences of the rapid development of intelligent machines, such as robots or driverless cars. While these offer solutions to the challenges of everyday life, they also pose risks and ethical dilemmas for humanity, since many people fear artificial intelligence could surpass human intelligence and take over.

‘I believe there is no deep difference between what can be achieved by a biological brain and what can be achieved by a computer. It, therefore, follows that computers can, in theory, emulate human intelligence – and exceed it,’ said Professor Hawking.

He stated that the potential benefits are great, and such a technological revolution could help mankind undo some of the damage we have done to our planet.

“In short, success in creating AI could be the biggest event in the history of our civilization,” said Prof Hawking. “But it could also be the last unless we learn how to avoid the risks. Alongside the benefits, AI will also bring dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It will bring great disruption to our economy. And in the future, AI could develop a will of its own – a will that is in conflict with ours.”

“In short, the rise of powerful AI will be either the best or the worst thing ever to happen to humanity. We do not know which,” said Professor Hawking.

“That is why, in 2014, I and a few others called for more research to be done in this area. I am very glad that someone was listening to me,” concluded Professor Hawking.

Featured image by Julian-Faylona



Ancient Code

Ancient Code was founded in 2012 by author and researcher Ivan Petricevic. The website is based on the Ancient Astronaut Theory, but it also offers its readers a vast collection of articles about History, Mythology, Lost Civilizations, etc.

10 Comments

  1. Although this report is very accurate and AI is extremely dangerous to humans, Stephen Hawking died several years ago and has been replaced with an actor, so God knows who this article really belongs to, but it’s true, so I guess that’s all that counts. Well done Ancient-code for posting it.

  2. AI is so dangerous that it should be illegal, just like human cloning is.
    It NEEDS to be stopped before it’s too late (if it isn’t already).

  3. I don’t think the danger from AI comes from AI becoming sentient and then deciding to whack humans. There are already 7+ billion high-level intelligent agents (humans!) on earth, and we’re still doing OK, all things considered. My concern is the uses it’s already being put to by humans themselves. AI’s ability to sort novel patterns in massive datasets is a godsend to the surveillance state. It can monitor every human on earth and hunt for clues of things that annoy the government, like dissent or thoughtcrime, and the humans controlling it don’t even have to know what to look for; the pattern-recognition algorithms can work that out themselves. That, boys and girls, is what keeps me up at night. THAT is what you should be worried about, and it’s a problem RIGHT NOW.

    1. Yes, of course it’s about collecting data and making automated decisions from it. A super way of monitoring nearly EVERYTHING.

  4. Stephen Hawking confuses intelligence with logic. There are many other forms of intelligence: emotional, moral intelligence… When it comes to logic, the machine has already exceeded human intelligence, and it will become an even stronger weapon in the future. But he’s right about the risk, for other reasons.

  5. The way I see it, the beauty and the danger of AI lie in the superior intelligence of its constructors and, consequently, in the logic underlying the AI. If its core logic is benevolent towards humanity, wouldn’t it be great to have one omni-intelligent AI doing background calculations of its variations and of the social construct, so there would be no lying, thieving, crooked company and state leaders, but instead it would surround itself with purely intelligent and honest lifeforms? Think about all the missed chances that would have been realized. Sure, it would limit us in some of the human stupidity that comes up very often, but that would be a small price to pay IMHO for decently organizing society.
