
Yuval Noah Harari says AI has potential to cause 'catastrophic' financial crisis

Technology, Robotics, and AI
10 November 2023

Yuval Noah Harari, the author and historian, says that the sophistication of artificial intelligence makes predicting future dangers difficult, as there is not one "big, dangerous scenario" that everyone understands, unlike with nuclear weapons.

The Sapiens author warns that AI could cause a financial crisis with "catastrophic consequences".

Last week, the global AI safety summit was held at Bletchley Park, where leading governments gathered and issued a multilateral declaration committing to act on their concerns about AI.

"I think that was a very positive sign," says Harari. "Without global cooperation, it will be extremely difficult, if not impossible, to rein in the most dangerous potential of AI."

An agreement to test future AI models before and after their release was made by 10 governments (including the UK and US, but not China), the EU and major AI companies (including Google and the ChatGPT developer OpenAI).

Even with safety testing, foreseeing all the problems that could arise would be close to impossible. "AI is different from every previous technology in human history," says Harari,

“Because it’s the first technology that can make decisions by itself, that can create new ideas by itself and that can learn and develop by itself.

“Almost by definition, it’s extremely difficult for humans, even the humans who created the technology, to foresee all the potential dangers and problems.”

Harari recognises that the finance sector is well suited to the adoption of AI systems because "it's only data", but he is nervous about giving the technology greater control over the world's financial systems, and about its potential to create financial instruments that only AI can understand.

Take the 2007-08 financial crisis as an example. It was caused by debt instruments such as collateralised debt obligations (CDOs) that few people understood and that were therefore inadequately regulated.

Harari has added his name to those calling for a six-month pause in the development of advanced AI and for tech companies to be made liable for any damage their products cause.

He says the focus should shift from creating specific regulations and laws, which may be outdated by the time they pass through parliament (or Congress), to understanding the technology and knowing how to react quickly to new breakthroughs.

"We need to create, as fast as possible, powerful regulatory institutions that are able to identify and react to the dangers as they arise, based on the understanding that we cannot predict all the dangers and problems in advance and legislate against them in advance."

Last month, both the UK and the White House announced plans to establish AI Safety Institutes to play a key role in testing advanced AI models.

Rishi Sunak told the summit last week that the UK needed to understand the capabilities of advanced models before introducing legislation to deal with them.

A recent government white paper on AI identified the Financial Conduct Authority and the Prudential Regulation Authority as the watchdogs for AI in the finance sector.

A spokesperson for the Department for Science, Innovation and Technology said these bodies “understand the risks in their sectors and are best placed to take a proportionate approach to regulating AI.”
