
AI Won’t Kill Us All
Mar 14, 2025
But fear might hold us back
In recent years, the fear of an AI-driven apocalypse has surged from niche internet forums to mainstream discourse. Headlines warn of artificial superintelligences plotting humanity’s demise, and public figures — from tech entrepreneurs to researchers — sound alarms about the existential threats posed by rapidly advancing AI systems. Even Geoffrey Hinton, a founding father of modern machine learning who was awarded the Nobel Prize in Physics for his contributions to the field, recently voiced concerns that AI could eventually surpass human intelligence and potentially turn hostile. The narrative of AI as an imminent and uncontrollable force has never been more widespread or more powerful.
This fear-driven discourse is built on a series of assumptions: that superintelligent AI will emerge suddenly, as a single unstoppable event; that such systems would inherently seek to dominate; and that we are powerless to guide their development safely, among other assumptions. At Codika, we are driven by a different vision — one where AI empowers developers, accelerates progress, and democratizes technology without compromising safety. As an innovative startup focused on helping developers build production-ready apps with unprecedented speed, we see firsthand how fear can overshadow opportunity. Our mission is to leverage AI responsibly and transparently, proving that it is possible to innovate without courting catastrophe.
A Disclaimer Before We Proceed: It’s important to acknowledge that the future of AI is inherently uncertain, and no one — myself included — can claim to have all the answers. This is a complex and controversial topic, with a wide range of possible outcomes and perspectives, many of which come from experts far more knowledgeable than I am. My intention here is not to dismiss these concerns outright — some of which I believe are legitimate — but to offer a reasoned argument for why I believe the AI doomsday scenario is highly improbable. What follows is my perspective on why the fear of AI apocalypse is not only misguided but also a barrier to meaningful progress.
The Rise of AI Doomers
The notion that artificial intelligence could one day rise up and eradicate humanity has rapidly shifted from science fiction to mainstream concern. What was once a plot device in movies like The Terminator or Ex Machina has now found its way into the mouths of respected scientists, industry leaders, and even policymakers. In May 2023, the Center for AI Safety released a statement signed by hundreds of experts, including Geoffrey Hinton, Yoshua Bengio, and Demis Hassabis, warning that AI poses an existential risk to humanity. This growing chorus of AI doomers has had a profound impact on public perception. A 2023 Pew Research survey found that 52% of Americans are more concerned than excited about artificial intelligence (a 14-percentage-point increase from 2022), while only 10% are more excited than concerned.
The AI doomer narrative is typically built around three core assumptions — while there are certainly other concerns, these are the ones we hear most often and will focus on today: first, that we will inevitably develop a superintelligent AI capable of surpassing humans in every domain; second, that such an AI would inherently seek to take over the world; and third, that we would be powerless to stop it. In the following sections, we’ll examine each of these arguments in detail to better understand what the most likely outcomes truly are.
Superintelligence: Evolution, Not Revolution
It’s reasonable to believe that, at some point in the future, we will develop machine learning models and intelligent systems that surpass human capabilities across all domains. What remains uncertain is the form this will take — whether through entirely new architectures and training methods yet to be discovered or as an extension of current techniques scaled with more data and computational power. That debate is for another time. What is clear, however, is that this evolution will not happen overnight. The development of new models and innovations will occur gradually, just as it always has.
Large Language Models (LLMs), for example, are built on technologies that have existed for nearly a decade — the “Attention Is All You Need” paper, which introduced the transformer architecture, was published in 2017. Every breakthrough since then has been the result of incremental improvements and increased computational resources rather than sudden, unprecedented discoveries. While it may seem to some that AI has advanced rapidly out of nowhere, the reality is that this progress has been a series of steady, foreseeable steps.
Therefore, while I am convinced that superintelligent AI will eventually emerge, we will have time to observe its evolution and prepare accordingly.
The Myth of AI’s Will to Power
A common misconception is that superintelligence naturally leads to a desire for dominance. People often assume that because intelligence contributed to the rise of powerful historical figures like Napoleon or Caesar, an intelligent AI would follow a similar trajectory. But this flips the cause-and-effect relationship — these figures didn’t become dominant because they were intelligent; rather, they were dominant individuals who leveraged their intelligence to achieve their goals. In nature, intelligence and dominance don’t always go hand in hand — chimpanzees, for example, are aggressive and hierarchical, while orangutans, despite being just as intelligent, are far more solitary and non-dominant. One study even found that the tendency for social submission predicts superior cognitive performance in mice.
AI, however, is fundamentally different from both humans and animals — it has no intrinsic drives, instincts, or goals beyond those explicitly programmed into it. There is no reason to assume that intelligence alone leads to power-seeking behavior, and more importantly, there is little incentive for us to build AI that way. In fact, ensuring AI remains safe and aligned with human interests is a top priority for researchers, which leads us to the next section: how AI safety measures and iterative improvements make a doomsday scenario even less likely.
AI Safety Is in Our Hands
A common fear is that once AI reaches a certain level of intelligence, it will begin improving itself at an accelerating rate, eventually surpassing human control. This idea, often referred to as an “intelligence explosion” or a “technological singularity”, suggests that AI could rewrite its own code, optimize itself beyond our comprehension, and quickly outpace human intervention. While I believe that, at some point, AI will be capable of self-improvement, we must remember that until we reach that stage, humans will still be the ones designing these systems — and we have every incentive to make them safe.
Machine learning models are fundamentally mathematical systems, optimized through objective functions that we define. While alignment challenges are real and require careful study, the likelihood that we would create an AI whose objectives are completely misaligned with ours to the point of existential catastrophe is extremely low. Moreover, AI does not operate in a vacuum — it requires resources, infrastructure, and access to data, all of which remain under human control. Even in the unlikely event that an AI system behaves unpredictably, we have the ability to impose constraints, refine future models, and iteratively improve safety measures before reaching any point of no return.
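To make that point concrete, here is a minimal, purely illustrative sketch (not Codika code or any real training pipeline): the “objective function” a learning system optimizes is simply something a human wrote down, and changing that code changes what the system pursues.

```python
# Illustrative only: a toy "model" with one parameter, trained by gradient
# descent on an objective that a human explicitly defined.

def objective(w, data):
    # Human-defined loss: mean squared error between predictions and targets.
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def gradient(w, data):
    # Derivative of the objective with respect to the parameter w.
    return sum(2 * x * (w * x - y) for x, y in data) / len(data)

def train(data, lr=0.05, steps=200):
    w = 0.0
    for _ in range(steps):
        w -= lr * gradient(w, data)  # follow the objective we specified
    return w

if __name__ == "__main__":
    # Toy dataset where the true relationship is y = 3x.
    data = [(x, 3 * x) for x in range(1, 6)]
    w = train(data)
    print(f"learned weight: {w:.3f}, final loss: {objective(w, data):.6f}")
```

Real models are vastly larger and their objectives far more elaborate, but the principle is the same: the optimization target is specified by people, not chosen by the model.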
Ultimately, AI development is an ongoing process, not a singular moment of irreversible change. The idea that we will suddenly find ourselves powerless against an all-controlling AI overlooks the very mechanisms that make AI development possible in the first place: human oversight, iterative improvements, and the ability to adjust course when needed.
The Fear of the New: AI as the Latest Tech Panic
Throughout history, every major technological breakthrough has been met with fear and skepticism. The idea that AI poses an existential threat to humanity follows a well-worn pattern: a new, disruptive technology emerges, people struggle to grasp its implications, and worst-case scenarios dominate the public discourse. This cycle of fear has played out repeatedly, from the Industrial Revolution to the rise of the internet, and AI is simply the latest chapter.
A clear historical parallel is the fear surrounding electricity in the late 19th century. When electrical power grids were first introduced, many people believed they were inherently dangerous. Newspapers ran sensationalized stories about people being electrocuted, and critics argued that widespread electrification would lead to catastrophic fires or even societal collapse. Yet, rather than destroying civilization, electricity became one of the most transformative innovations in human history, powering everything from modern medicine to global communication.
AI panic follows a similar trajectory. Instead of recognizing AI as a tool that can be shaped, regulated, and improved over time, many view it as an uncontrollable force destined to spiral beyond our control. But just like electricity, the printing press, and the internet before it, AI will likely integrate into our world in ways that improve human life — so long as we approach it with rational optimism rather than fear-driven paralysis. Technological progress always comes with challenges, but history has shown that these challenges are best addressed through iteration, adaptation, and responsible innovation, not by resisting progress altogether.
The Cost of AI Doomerism: Are We Misallocating Resources?
While AI safety is undeniably important, the over-representation of the AI doomer movement risks diverting attention — and resources — away from real, immediate challenges. It’s reasonable to dedicate some effort to ensuring AI remains aligned with human values, but should we be pouring ever-growing amounts of funding, talent, and compute into preventing a hypothetical doomsday scenario when there are pressing, tangible AI-related issues to address today?
Take OpenAI’s Superalignment team as an example. They committed 20% of the compute the company had secured to date — a massive investment — to solving the AI alignment problem. Yet, some argue that even this isn’t enough. Really? One-fifth of their secured compute dedicated solely to alignment, and it’s still considered insufficient? If we keep pushing this logic further, should all AI research be focused on preventing an unlikely catastrophe at the expense of everything else?
Meanwhile, AI is already affecting society in ways that need urgent attention: bias in decision-making systems, misinformation, privacy risks, job displacement, and misuse of AI in cybercrime. These are not hypothetical — they are real, happening now, and require immediate solutions. The more resources we pour into preventing a theoretical AI apocalypse, the fewer we have to address the actual challenges AI is posing today. Prioritization matters, and an obsession with worst-case scenarios could hinder meaningful progress where it’s truly needed. This misplaced focus ties into a broader confusion about what AI actually is, something I explored in my previous article.
The Real AI Challenges We Should Focus On
While AI doomsday scenarios dominate discussions, there are pressing real-world challenges that demand urgent attention. For example, bias in AI systems can reinforce societal inequalities, leading to unfair outcomes in hiring, criminal justice, and lending. Another major concern is the immense computational power required to train and run advanced machine learning models, leading to high costs and significant carbon footprints. This inefficiency must be reduced to make AI development more accessible and sustainable. Additionally, AI-driven ethical dilemmas — such as how self-driving cars make life-or-death decisions or whether autonomous weapons should have lethal authority — pose serious moral and regulatory questions. These are just a few examples from a much broader list of real issues that need resources and solutions today.
If we want AI to be a force for good, we must ensure that resources are allocated to the problems that truly matter. While alignment research is valuable, it should not come at the cost of neglecting these very real and immediate challenges. But beyond just addressing risks, we also need to focus on AI’s potential to drive meaningful progress. Every resource spent on speculative fears is a resource not spent on AI systems that could accelerate scientific discovery, revolutionize healthcare, or develop new medicines to cure life-threatening diseases. Instead of being paralyzed by doomsday scenarios, we should be working to build AI that is fair, sustainable, and actively contributing to human well-being.
The Importance of Open-Sourcing Models
One of the best safeguards against AI risks — both real and speculative — is openness. Open-sourcing machine learning models ensures that development remains transparent, accountable, and distributed across a diverse set of researchers, engineers, and organizations. When research is locked behind closed doors, controlled by a handful of corporations or governments, the risk of misuse, bias, and unchecked power increases.
By making models open-source, we allow experts worldwide to audit their behavior, identify vulnerabilities, and improve safety mechanisms collectively. This collaborative approach reduces the risk of any single entity monopolizing AI development in ways that may not align with public interest. Furthermore, open ecosystems foster innovation, accelerating advancements in medicine, sustainability, and other crucial fields where AI has the potential to drive meaningful progress.

Of course, openness must be balanced with security — certain models or capabilities may require restrictions to prevent misuse. However, the alternative — placing them entirely in the hands of a few powerful entities — poses far greater risks. The more people working on AI safety and ethical considerations in an open environment, the better equipped we are to develop systems that serve humanity rather than a select few.
Conclusion: Moving Beyond Fear Toward Responsible Innovation
Of course, no one can say with certainty what the future of AI holds, and it would be arrogant of me to claim otherwise. There are many possible outcomes, and AI safety is a complex issue that deserves careful attention. However, as I’ve outlined in this article, I strongly believe that the scenario where AI takes over and eradicates humanity is far from the most likely. That doesn’t mean we shouldn’t work on AI alignment — we absolutely should — but we need to do so with rationality rather than fear and panic, treating it as one of many challenges we must address rather than the singular existential crisis some make it out to be.
While some level of caution is warranted, the over-representation of AI doomerism risks becoming a distraction. AI already presents urgent challenges — bias, the cost and energy footprint of large models, ethical decision-making — that need resources and solutions today. And beyond risk mitigation, AI holds immense potential to drive positive change, from scientific breakthroughs to life-saving medical advancements. We should be allocating our attention accordingly.
This conversation also matters deeply to us at Codika, where we use AI to empower developers and automate professional mobile app development. We believe AI is a tool that can help people build faster, innovate more freely, and unlock new creative possibilities — without replacing human ingenuity. The more we get caught up in fear-based narratives, the harder it becomes to push forward with meaningful, practical applications of AI systems that benefit businesses and developers alike. Our focus is on using machine learning responsibly to create real value, rather than obsessing over doomsday scenarios that may never materialize.
The best path forward is not fear-driven paralysis but responsible innovation. Open-sourcing models, promoting transparency, and iteratively improving AI safety mechanisms will ensure that AI develops in a way that benefits humanity rather than harms it. Instead of asking, “How do we prevent AI from destroying us?” we should be asking, “How do we ensure AI is used for the greatest good?” The future of AI is in our hands — it’s time to focus on building it responsibly rather than fearing the worst.
A Couple of Side Notes
Geoffrey Hinton — The Paradox of AI Doomerism
Geoffrey Hinton is arguably the most prominent figure in AI doomerism. He played a foundational role in the field — co-authoring the 1986 paper that popularized back-propagation, the algorithm that made deep learning practical — and was awarded both the Turing Award (in 2018) and the Nobel Prize in Physics (in 2024) for his contributions to deep neural networks. Given his deep expertise, it has always struck me as odd that someone so knowledgeable could hold such a radical view on AI’s existential risk.

I sometimes wonder: Am I missing something?
In an interview with the BBC a few months ago, Hinton stated:
My guess is that in between 5 and 20 years from now, there is a probability of about a half that we will have to confront the problem of them trying to take over.
When I first heard this, I found it astonishing. Not only because of the claim itself, but also because it means absolutely nothing in practical terms — how does one even calculate a “50% probability” of that? A statement like this, especially coming from someone of his stature, can be deeply unsettling to those who don’t realize how arbitrary it is. And I struggle to see what good it does. But again, maybe I’m missing something.
While discussing this with my manager at a previous company, he introduced me to the concept of “Nobel disease” — an informal term referring to how some Nobel Prize winners embrace fringe or scientifically unsound ideas later in their careers. Interestingly, Hinton’s public warnings about AI’s existential risk began surfacing after he won the Turing Award, often called the “Nobel Prize of Computer Science.”
I found this to be a fascinating observation. Whether it’s a case of “Nobel disease” or simply the burden of extraordinary foresight, it’s a paradox worth pondering.
Yann LeCun — A Counterpoint to AI Doomerism
Yann LeCun, along with Geoffrey Hinton and Yoshua Bengio, won the Turing Award for his contributions to deep learning. But unlike Hinton, LeCun stands on the opposite end of the AI risk spectrum.

Most of the arguments I make in this article align closely with what LeCun often discusses in his interviews. He has consistently pushed back against the AI doomer narrative, arguing that fears of superintelligent AI going rogue are largely unfounded and based on flawed assumptions about how intelligence works.
If you’re interested in a more grounded perspective on AI’s future, I highly recommend listening to some of his interviews. They offer deeper insights into why the apocalyptic view of AI is often more science fiction than science.