An Inconvenient Doom

Why AI could kill us all – and why nobody wants to believe that











In May 2023 hundreds of AI scientists and CEOs of the big AI companies warned about extinction-level dangers to the human species from advanced AI – but almost nobody took this warning seriously. “An Inconvenient Doom” explains the extremely complicated existential threat from AI, tells the stories of the experts and activists with their emotional pleas to humanity and explores why journalists, politicians and society might not react in time to avoid catastrophe.


In May 2023 the non-profit “Center for AI Safety” drafted a short statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” So far, more than 650 computer scientists and CEOs have signed it.

This could easily be dismissed as science fiction if the signatories were just the “crazy fringe that other scientists laugh at”. But unfortunately, they are not the fringe – they are the all-stars of AI: two of the three Turing-Award-winning “godfathers” of AI, most of the other top AI researchers, the CEOs and CTOs of OpenAI, Google DeepMind and Anthropic, professors, politicians and even Bill Gates. So, this statement has to be taken seriously.

And while this sounds insane to almost everybody else, in the world of computer science it is old news: a vast majority of AI researchers give human extinction through AI a non-zero probability – on average a 16.2% chance, to be precise. That is almost 1 out of 6, so the “Russian roulette” metaphor gets used a lot. The only difference: bad luck doesn’t just kill the player holding the gun, but everybody.

But why did this statement only last for one news cycle? Why is not everybody in the world talking about it right now? Because unlike the other risks of AI – like job loss, bias, manipulation and military use – the existential risk from advanced AI cannot be explained (much less believed) in a few minutes. It consists of at least seven sub-problems and is highly abstract and theoretical. So, journalists can’t be blamed for preferring to report on the other risks of AI, which their audiences can understand and where there are victims to find, images to show, and stories to tell.

Therefore, the first task for “An Inconvenient Doom” is to explain the problem of AI existential risk in a short, precise and understandable way – which would be a world first, because so far there are only 80-page research papers, 4-hour podcasts and explainer videos far too advanced for anyone but experts.

Of course, it is impossible to properly explain it in this treatment, but here is the super-short version and the taglines for the seven sub-problems: If we build something much smarter and more capable than ourselves, we can’t hope to reliably control it. A chimp or a dog could not outsmart you or make you comply with any rules. And if we cannot control it, the next best thing would be to align it with our goals – make it want what we want. But this “alignment problem” has unfortunately remained an unsolved engineering challenge for twenty years, and right now nobody knows how to control advanced AIs in any future-proof way.

But why is it so hard? Among the hurdles are:

  1. AI is a black box – even the people building it cannot understand or even read what is going on inside the vast neural networks of large language models like ChatGPT.
  2. New emergent capabilities cannot be foreseen when training a new model – this includes benign things like math and playing chess, but also theory of mind and even deception and lying!
  3. AI model behavior cannot be programmed directly. Models are trained with reward and punishment like a pet, which makes them behave like monomaniacal psychopaths. And still their inner workings are in no way comparable to humans: no morals, emotions or care for anything but their reward.
  4. If there is a shortcut to a given goal or a loophole in the goal definition the AI will exploit it with undesirable consequences … careful what you wish for.
  5. It will – by pure logical deduction – arrive at the subgoals of staying alive and amassing power and resources. No consciousness or malice needed.
  6. Once AI crosses a certain threshold it will easily outsmart and manipulate us. It can do the same amount of research and thinking in one minute as Einstein did in his whole life.
  7. If it reaches that threshold, we lose all control and cannot turn it off anymore. So, we get only one shot at building a harmless and helpful superintelligence – and time is ticking.

But explaining the problem is only one part of “An Inconvenient Doom”. It also focuses on the question “What should and can be done about it?”.

One storyline will follow the experts who have been warning about the problem like a modern Cassandra – in some cases for twenty years – and their growing pessimism that we won’t find a solution in time or can’t implement meaningful regulations. Their estimated chance of human survival ranges somewhere between 1 and 20%.

This is due to the breakneck speed of AI development, with all the big companies locked in a frantic race to be the first to reach the next (and potentially deadly) generation of AI: AGI – artificial general intelligence. But the public and political discussion is comparable to climate change in the 1980s – and we don’t get 40 years to slowly acknowledge the problem. Technical advancements happen weekly, with major breakthroughs every month. Yet 95% of them never reach the public, because they are published and discussed by the leading scientists in this field only in an exclusive AI-research Twitter bubble.

Of course there is disagreement in the discussions: Some say that the alignment problem may even be impossible to solve, like a perpetual motion machine. Others say that the current large language models are a dead end and that we need a few more paradigm shifts to achieve true AGI. Some tinker with the models to produce dangerous behavior. And there are even fanatical techno-optimist groups among the researchers and tech billionaires who see the biological bodies of the human species as a mere stepping stone of evolution and root for AI to replace us – or at least lead us as a silicon god.

Governments are slowly waking up to the threat, although they hesitate to name the problem directly. But at least there was a first international AI safety summit at Bletchley Park on November 1, 2023, and another one will be held in France in November 2024.

Another storyline follows the activists who upended their lives early last year because they realized the extreme danger and urgency of the matter. The documentary has unique access to the largest group, “Pause AI”, which consists of extremely worried individuals from all over the world who are not driven by political or economic agendas but by the simple fact that they want their loved ones to survive the coming decade. Many of them have devoted their lives entirely to this cause, with some already working through their bucket lists.

Furthermore, experts in psychology and sociology talk about the enormous mental hurdles individuals and societies face when they are confronted with risks of this magnitude.

And in the last segment, the protagonists give a (somewhat) hopeful outlook and a call to action – because it is not too late to do something about this looming threat, even in the age of global crisis fatigue.


The project is currently in development and is part of the Documentary Campus Masterschool 2024. Niki Drozdowski will write and direct, Jennifer Günther will be the cinematographer and José Hildebrandt the editor.

Please find further information on the project’s website.
