Is Artificial Intelligence a Real Threat?

Article by Emily Craft

Apr 18, 2015

Artificial intelligence and nanotechnology have been named alongside nuclear war, ecological catastrophe and super-volcano eruptions as “risks that threaten human civilization”.

In the case of AI, the report by the Global Challenges Foundation suggests that future machines and software with “human-level intelligence” could create new, dangerous challenges for humanity – although they could also help to combat many of the other risks cited in the report.

“Such extreme intelligences could not easily be controlled (either by the groups creating them, or by some international regulatory regime), and would probably act to boost their own intelligence and acquire maximal resources for almost all initial AI motivations,” suggest authors Dennis Pamlin and Stuart Armstrong.

“And if these motivations do not detail the survival and value of humanity, the intelligence will be driven to construct a world without humans. This makes extremely intelligent AIs a unique risk, in that extinction is more likely than lesser impacts.”

The report also warns of the risk that “economic collapse may follow from mass unemployment as humans are replaced by copyable human capital”, and expresses concern at the prospect of AI being used for warfare: “An AI arms race could result in AIs being constructed with pernicious goals or lack of safety precautions.”

In the case of nanotechnology, the report notes that “atomically precise manufacturing” could have a range of benefits for humans. It could help to tackle challenges including depletion of natural resources, pollution and climate change. But it foresees risks too.

“It could create new products – such as smart or extremely resilient materials – and would allow many different groups or even individuals to manufacture a wide range of things,” suggests the report. “This could lead to the easy construction of large arsenals of conventional or more novel weapons made possible by atomically precise manufacturing.”

The foundation was set up in 2011 with the aim of funding research into risks that could threaten humanity, and encouraging more collaboration between governments, scientists and companies to combat them.

That is why its report presents worst-case scenarios for its 12 chosen risks, albeit alongside suggestions for avoiding them and acknowledgements of the positive potential of the technologies involved.

In the case of artificial intelligence, though, the Global Challenges Foundation’s report is part of a wider debate about possible risks as AI gets more powerful in the future.

In January, former Microsoft boss Bill Gates said that he is “in the camp that is concerned about super intelligence”, even if, in the short term, machines doing more jobs for humans should be a positive trend if managed well.

“A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

Tesla and SpaceX boss Musk had spoken out in October 2014, suggesting that “we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that”.

Professor Stephen Hawking has also voiced concern, saying that “the primitive forms of artificial intelligence we already have, have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race.”
