Most people worry far more about AI’s immediate harms—like bias, misinformation, or job losses—than they do about distant, existential threats, a University of Zurich study of over 10,000 UK and US respondents finds. Even when exposed to apocalyptic warnings, participants remained highly alert to present-day problems, suggesting that public debate should tackle both short-term and long-term AI risks in parallel. Credit: SciTechDaily.com
A new University of Zurich study shows that people are more concerned about the immediate risks of AI, like bias and misinformation, than about distant existential threats.
Most people are more concerned about the immediate risks of artificial intelligence than about distant, theoretical threats to humanity’s survival. A new study from the University of Zurich highlights that respondents clearly distinguish between abstract future scenarios and specific, tangible problems and tend to take the latter much more seriously.
While there is broad agreement that AI poses significant risks, people differ in how they interpret and prioritize those dangers. One view stresses long-term, speculative risks, such as the possibility of AI endangering human existence. Another focuses on pressing, real-world issues, including the amplification of social biases and the spread of disinformation by AI systems.
Some experts warn that an excessive focus on dramatic “existential risks” could divert attention from the urgent and concrete problems AI is already creating today.
Present and future AI risks
To examine those views, a team of political scientists at the University of Zurich conducted three large-scale online experiments involving more than 10,000 participants in the USA and the UK. Some subjects were shown a variety of headlines that portrayed AI as a catastrophic risk.
Others read about present threats such as discrimination or misinformation, while a third group read about potential benefits of AI. The objective was to examine whether warnings about a far-off, AI-caused catastrophe diminish alertness to actual present problems.
Greater concern about present problems
“Our findings show that the respondents are much more worried about present risks posed by AI than about potential future catastrophes,” says Professor Fabrizio Gilardi from the Department of Political Science at UZH.
Even though texts about existential threats did amplify fears about such scenarios, participants remained far more concerned about present problems, such as systematic bias in AI decisions and job losses due to AI. The study also shows that people are capable of distinguishing between theoretical dangers and specific, tangible problems, and that they take both seriously.
Conduct broad dialogue on AI risks
The study thus fills a significant gap in knowledge. In public discussion, fears are often voiced that focusing on sensational future scenarios distracts attention from pressing present problems. The study is the first to deliver systematic data showing that awareness of actual present threats persists even when people are confronted with apocalyptic warnings.
“Our study shows that the discussion about long-term risks is not automatically occurring at the expense of alertness to present problems,” co-author Emma Hoes says. Gilardi adds that “the public discourse shouldn’t be ‘either-or.’ A concurrent understanding and appreciation of both the immediate and potential future challenges is needed.”
Reference: “Existential risk narratives about AI do not distract from its immediate harms” by Emma Hoes and Fabrizio Gilardi, 17 April 2025, Proceedings of the National Academy of Sciences.
DOI: 10.1073/pnas.2419055122