Artificial Intelligence Is Not What You Think It Is
Mar 3, 2025
How machine learning does the work while AI takes the credit
Over the past few years, I’ve watched the rise of Artificial Intelligence from the inside. As a Machine Learning engineer, I’ve worked with these models, built these systems, and seen firsthand what this technology is truly capable of — and, more importantly, what it isn’t. Yet, as AI continues to dominate headlines, product pitches, and investor decks, I can’t help but notice a growing disconnect between the reality of the technology and the way it’s being marketed.

Now, as the head of ML research at Codika, a startup pushing the boundaries of automated mobile application development, I find myself at a crossroads. On one hand, there’s what I know to be true: AI, as it’s commonly described, is often just a collection of Machine Learning models. On the other hand, there’s the challenge of communicating our work in a world where “AI” sells, even when the term is misleading.

This article isn’t just a technical clarification — it’s an exploration of how the broad and often misleading use of the term “Artificial Intelligence” impacts public understanding, industry expectations, and technological progress. By distinguishing AI from Machine Learning, I hope to shed light on what these technologies truly are, why the distinction matters, and how the misrepresentation of AI influences funding, research priorities, and business decisions.
What Is Artificial Intelligence, Really?
The term Artificial Intelligence (AI) was coined in 1955 by Professor John McCarthy. It was introduced as part of the Dartmouth Summer Research Project on Artificial Intelligence, a proposal for a two-month study at Dartmouth College in 1956. The introduction of the proposal stated:

We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.

We can already see that the proposal refers to “learning” and then to “any other feature of intelligence”. This ambiguity in defining intelligence itself has remained a fundamental challenge in AI research. What exactly constitutes intelligence? Is a simple algorithm like Depth-First Search (DFS) demonstrating intelligent behavior? Is a calculator, which can solve complex equations instantly, an intelligent machine? In my view, they are, but that does not align with what most people imagine when they hear the term “AI.”
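For concreteness, here is what DFS amounts to: a standard iterative traversal in Python, a few lines of fully deterministic bookkeeping with no learning involved (the toy graph is invented for illustration):

```python
def dfs(graph, start):
    """Iterative depth-first search over an adjacency-list graph."""
    visited, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        stack.extend(graph.get(node, []))  # neighbours get explored later
    return visited

# Toy directed graph as a dict of adjacency lists.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(dfs(graph, "A"))  # {'A', 'B', 'C', 'D'} (set order may vary)
```

Whether such a mechanical procedure “exhibits intelligent behaviour” is precisely the definitional question left open above.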
Today, the Oxford English Dictionary defines Artificial Intelligence as:
The capacity of computers or other machines to exhibit or simulate intelligent behaviour.
Again, this is a very vague definition. How do we assess whether a machine is exhibiting intelligent behaviour? As far as I know, there is no clear answer to this question.
What Is Machine Learning?
The term Machine Learning (ML) was coined a little later than AI, by Arthur Samuel in his 1959 paper “Some Studies in Machine Learning Using the Game of Checkers”. However, Machine Learning techniques, as we define them today, were already being developed before this.

One of the most widely accepted definitions comes from Tom Mitchell’s 1997 textbook:

A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.

In mathematical terms, we can define Machine Learning as a class of algorithms that improve their performance on a given task by iteratively adjusting their internal parameters to minimize a loss function or maximize an objective function based on data.

This definition is far more precise than the broader and more ambiguous concept of AI. Unlike AI, which is often loosely applied to any form of automation or data-driven decision-making, Machine Learning provides a clear framework that allows us to determine whether an algorithm belongs to this category. If an algorithm can be formally described in terms of experience, tasks, and performance measures, it qualifies as ML. This level of clarity is crucial in distinguishing ML from other computational approaches.

I previously explored this concept in my article “What Does Learning Mean for a Machine?”, where I examined the fundamental aspects of learning in computational systems. As discussed in that article, the key aspect of ML is that it is not just about performing a task, but about systematically improving performance based on data-driven learning. This improvement is measurable and objective, making ML a well-defined and rigorous field compared to the more loosely defined notion of AI.
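To make Mitchell’s definition concrete, here is a minimal sketch in Python: the task T is predicting y from x with a line, the experience E is a synthetic dataset (invented purely for illustration), and the performance measure P is mean squared error, which measurably improves as gradient descent adjusts the parameters:

```python
import random

# Synthetic experience E: noisy samples of y = 3x + 2.
random.seed(0)
data = [(x, 3 * x + 2 + random.gauss(0, 0.1)) for x in [i / 10 for i in range(50)]]

def mse(w, b):
    """Performance measure P: mean squared error over the dataset."""
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

# Task T: predict y from x with a line. "Learning" = adjusting the
# internal parameters (w, b) to reduce the loss, via its gradients.
w, b, lr = 0.0, 0.0, 0.05
for step in range(2001):
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b
    if step % 500 == 0:
        print(f"step {step:4d}  mse {mse(w, b):.4f}")
```

The printed loss falls at each checkpoint: performance at T, as measured by P, improves with experience E, which is exactly what qualifies this simple routine as Machine Learning.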
Why Calling Everything “AI” Is a Problem
Albert Camus once said,
Mal nommer les choses, c’est ajouter au malheur du monde
which translates from French as: “To misname things is to add to the world’s misfortune.” This sentiment perfectly applies to the way the term Artificial Intelligence is used today. The phrase has become a catch-all label, applied to everything from sophisticated deep learning models that power large language models (LLMs) to simple linear regression models or even basic rule-based automation. This lack of precision is not just misleading — it actively distorts our understanding of technological progress.

Today, when someone says they are working on AI, it provides almost no information about what they are actually doing. Are they building a cutting-edge neural network with billions of parameters, capable of generating human-like text and reasoning? Or are they applying a straightforward statistical model to predict sales based on last year’s data? Both of these vastly different approaches are frequently marketed under the same AI umbrella, despite the fundamental differences in complexity and capability.

This broad and indiscriminate use of the term AI is particularly frustrating when companies developing truly groundbreaking deep learning models are forced to share the same label with those performing rudimentary data analysis. When a company advertises that it uses AI, it could mean anything from using a logistic regression to classify emails as spam to deploying a sophisticated transformer-based model trained on massive datasets. For the public, investors, and even policymakers, this lack of distinction creates unrealistic expectations and hinders meaningful discussions about the actual state of AI/ML research.

The ambiguity also has real consequences. Companies that do little more than apply well-known ML techniques can exaggerate their innovations, securing funding and attention they might not deserve. Meanwhile, genuine advances in ML research risk being drowned in the noise, making it harder for the public to differentiate between hype and substance. This contributes to cycles of overpromising and underdelivering, which could eventually lead to disillusionment and setbacks in the field.

If we want to have serious conversations about the future of AI, we must first agree on what AI actually means.

A relevant side note: Back in 2023, when AI hype was already in full swing, Apple managed to go through its entire WWDC keynote without mentioning “AI” once. While every other tech giant was slapping “AI-powered” onto everything from search engines to spreadsheets, Apple simply showcased new features without ever branding them as artificial intelligence. They did mention Machine Learning several times, however. The reasons for this deliberate omission remain open to interpretation — maybe they wanted to avoid the buzzword fatigue, or maybe it was just classic Apple marketing strategy at play. Regardless, it was an interesting contrast to the rest of the industry, and a reminder that not everyone was eager to jump on the AI branding bandwagon — at least not yet.
The Dilemma: Marketing vs. Truth

A quick look at Google Trends reveals a stark reality: while search interest in Artificial Intelligence has skyrocketed over time, interest in Machine Learning has remained comparatively flat. This divergence reflects a broader trend in the tech industry — one where branding and hype often overshadow the reality of technological development.
The vast majority of the breakthroughs over the past few years that have captured global attention — Large Language Models like ChatGPT, Claude, and DeepSeek; diffusion models such as DALL·E, MidJourney, and Stable Diffusion; and scientific advancements like AlphaFold — are all powered by Machine Learning (or, more precisely, by Deep Learning). Yet, the public and media largely refer to them under the vague and overly broad term “AI.”
For startups, and companies in general, this creates a fundamental dilemma. In a world where funding, media attention, and public perception are disproportionately drawn toward anything labeled as “AI,” there is enormous pressure to adopt the term, even when it may not accurately describe the underlying technology. Companies developing complex deep learning models — like large language models (LLMs) — often market them as AI, but so do companies using nothing more than basic linear regression or rule-based automation. This blurring of definitions creates confusion and dilutes the meaning of true advancements in ML.
At Codika, where we develop tools powered by Machine Learning, we face this challenge as well. Do we market our work as AI, knowing it will draw more attention? Or do we stay truthful to the fact that we are building ML-driven automation, even if it lacks the same hype factor? This is a dilemma faced by countless startups, and it highlights a fundamental issue in the way AI is perceived and communicated in the industry.
Ultimately, while branding plays a crucial role in business strategy, misrepresenting ML as AI does a disservice to both the industry and the public. If we want meaningful discussions about the future of AI, we need more precise terminology and a commitment to transparency.
Beyond Machine Learning: The Rise of Autonomous Agents
The concept of an agent has long been part of the field’s vocabulary, particularly in reinforcement learning, where an agent interacts with an environment to optimize decision-making through trial and error. However, recent developments have introduced a new kind of system — one that goes beyond the traditional ML definition. These autonomous agents are not just learning from data; they are executing tasks, using tools, and communicating with other agents to achieve goals in ways that aren’t strictly machine learning.
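In that traditional sense, an agent is simply one half of a feedback loop with its environment. A minimal sketch, using a toy two-action bandit environment and an epsilon-greedy agent (both invented here for illustration):

```python
import random

# Toy environment: two actions with hidden expected rewards.
REWARD_MEANS = [0.3, 0.7]

def env_step(action):
    """Return a stochastic reward for the chosen action."""
    return 1.0 if random.random() < REWARD_MEANS[action] else 0.0

# Epsilon-greedy agent: estimate each action's value from experience.
values, counts, epsilon = [0.0, 0.0], [0, 0], 0.1
for t in range(1000):
    if random.random() < epsilon:
        action = random.randrange(2)                    # explore
    else:
        action = max((0, 1), key=lambda a: values[a])   # exploit
    reward = env_step(action)
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]  # running mean

print([round(v, 2) for v in values])  # estimates drift toward [0.3, 0.7]
```

This trial-and-error loop fits squarely inside the ML definition above; the newer autonomous agents described next do not.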

While many of these agents incorporate ML models — especially large language models (LLMs) or reinforcement learning techniques — their functionality extends beyond Machine Learning itself. They integrate multiple capabilities (a code sketch follows the list), such as:
Tool use — Agents don’t just predict; they can call APIs, query databases, execute code, and manipulate external systems.
Multi-agent collaboration — Systems like LangGraph or OpenAI Swarm orchestrate multiple models or sub-agents to complete tasks more efficiently than a single model.
Optimized communication — Unlike traditional ML, which optimizes a single model for a given task, autonomous agents can distribute workloads, share knowledge, and refine strategies through interaction.
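As a minimal sketch of such a loop in Python, where the model’s output is interpreted as actions rather than mere text. Everything here, including call_model and the tool registry, is hypothetical and stands in for whatever LLM API and tools a real system would use:

```python
# Toy tool registry: the agent can act, not just predict.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # a real agent would call an external API here

TOOLS = {"get_weather": get_weather}

def call_model(messages):
    """Hypothetical model call. A real system would query an LLM API;
    here we hard-code one tool request followed by a final answer."""
    if len(messages) == 1:
        return {"tool": "get_weather", "args": {"city": "Paris"}}
    return {"answer": "It looks sunny in Paris today."}

# The agent loop: run the model, execute any requested tool,
# feed the result back, and repeat until the model answers.
messages = [{"role": "user", "content": "What's the weather in Paris?"}]
while True:
    reply = call_model(messages)
    if "answer" in reply:
        print(reply["answer"])
        break
    result = TOOLS[reply["tool"]](**reply["args"])
    messages.append({"role": "tool", "content": result})
```

The point is structural: the loop around the model (deciding, acting on external systems, observing results) is not itself a learned function, which is why these systems sit uneasily under the ML label.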
This shift represents something fundamentally new — not just more advanced Machine Learning, but a new category of systems. We may need a new term for this, or maybe we already have it. Some call this “agentic workflows,” while others refer to it as “intelligent agents.” But neither term fully captures the depth of what is happening: these agents are evolving into interconnected ecosystems that leverage ML but are not defined by it.
As autonomous agents become more powerful and widespread, we must ask: Are we witnessing the rise of a new computational paradigm — one that moves beyond Machine Learning as we know it? If so, what should we call it?
The Future: Where Do We Go From Here?
At this point, it may be too late to fully correct the misuse of the term Artificial Intelligence. It has become so ingrained in public discourse, marketing strategies, and investment trends that redefining it in a precise manner seems almost impossible. And who am I to judge? The last article I published was titled “Building an AI Chatbot”. Nevertheless, this does not mean we should give up on advocating for better clarity.
Conclusion
The widespread misuse of the term Artificial Intelligence has real consequences. It creates confusion, inflates expectations, and distorts public perception of technological progress. While AI remains a compelling vision of the future, much of what is labeled as AI today is, in reality, Machine Learning.

For companies like Codika, and countless others developing ML-driven technologies, this presents a difficult choice — chase the hype or stay true to the science? For me, as both a scientist and an entrepreneur, it is a deep and ongoing conflict. I fully understand why companies choose to brand their work as AI — it’s what attracts funding, media coverage, and user interest. However, I also believe in the importance of clear and honest communication about technology.

Navigating this dilemma is not easy, and there is no simple solution. The reality is that we are in a situation where the misuse of AI as a term is widely accepted, making it incredibly difficult to reverse. All we can do is push for more transparency, educate where possible, and strive for a future where technological progress is understood for what it truly is.