The debate over artificial intelligence has moved from research labs into boardrooms and living rooms. Luc Julia, a French-American computer scientist who has held senior technical roles at Apple, Samsung and Hewlett-Packard and today serves as chief scientific officer at Renault Group, takes up this debate in his book The AI Illusion (Wiley, 2026). Drawing on decades of work, including early contributions to natural language processing, he argues that much of the public's fear and fascination stems from language that mischaracterizes engineered systems as if they possessed humanlike minds. This introduction lays out why that distinction matters for engineers, policymakers and everyday users.
At the heart of the argument is a semantic and technical point: terms like "intelligence" are used in two very different ways. One meaning refers to raw information processing, the predictable manipulation of symbols and numbers. The other denotes the rich, context-sensitive abilities of humans: creativity, subjective experience and moral judgment. Conflating these senses produces what Julia calls the "AI illusion", a persistent tendency to anthropomorphize systems that, in reality, run on algorithms and datasets rather than inner lives.
Why the “AI illusion” persists
The illusion is amplified by storytelling and by incentives inside industry. Popular culture offers vivid images of autonomous minds, while some companies benefit commercially from portraying products as more capable than they are. Investors and marketing teams have reasons to emphasize breakthroughs; the result is exaggerated claims that translate into public anxiety and unrealistic expectations. Meanwhile, many researchers outside the commercial race are clear-eyed: modern systems are ensembles of specialized components (the product of machine learning, statistical optimization and curated data), not agents with continuous, creative thought.
Anthropomorphism, hype and responsibility
When complex outputs mimic human expression, whether fluent prose, art-like images or plausible dialogue, observers can be misled into thinking a machine understands meaning the way a person does. A good analogy is a highly polished music box: it can reproduce a tune with startling fidelity but has no appreciation of melody. Likewise, a generative AI model can assemble text that looks creative without experiencing inspiration. Recognizing this gap is essential because it defines where human oversight must be applied and where regulatory guardrails are necessary to counter misuse, bias and overreach.
What AI systems can and cannot do
Modern systems shine in tasks that involve processing vast quantities of data with repeatable rules. They can detect patterns in medical images, recommend products, optimize logistics and accelerate research through rapid hypothesis generation. These strengths come from disciplined engineering: better models, larger datasets and faster compute. However, there are clear limits. Machines lack consciousness, subjective intentionality and the kind of transferable creativity that lets humans invent entirely new frameworks of thought. A chess engine may outperform grandmasters by calculating moves; it will not write a meaningful poem about loss unless prompted and guided by human values and context.
Concrete examples: chess engines, poetry and medicine
Consider three contrasting use cases. A specialized game-playing model demonstrates superhuman optimization within a defined rule set. A text-generation model can produce evocative language but does not know what the words mean beyond statistical association. In medicine, AI-assisted surgery or diagnostic tools can improve accuracy and speed, but their reliability depends on training data quality and on clinicians validating outputs. In each case, the technology is a powerful tool, not a replacement for human judgment.
Industry, governance and the path forward
Accepting the difference between powerful tools and sentient agents reshapes how society should respond. Transparency about capabilities, rigorous testing, and clear labeling of automated outputs help reduce confusion. Companies should avoid rhetoric that suggests autonomy where there is none, and governments should focus policy on data governance, safety testing and training for human oversight. The stakes include not only technical failures but also the social harms that arise when systems reproduce biases or when their limitations are misunderstood.
Data, bias and human oversight
Ultimately, the quality and context of training data determine how well models perform in real settings. Poor data creates systematic errors; narrow objectives create brittle systems. Effective deployment therefore combines robust engineering with ethical design: independent audits, continuous monitoring and mechanisms for human review. By reframing the conversation away from mythical intelligence and toward practical competence, stakeholders can harness the benefits of machine learning while guarding against the harms of hype.