The debate about the military uses of artificial intelligence often centers on weapons, autonomy, and legal rules. Yet the recent string of court decisions in the United States has drawn public attention to a different set of influences: the design practices of large technology firms and the way those practices create dependencies that can migrate into defence settings. These rulings, combined with corporate decisions such as Anthropic’s public stance in February 2026, underscore that private-sector product choices are not neutral technical details but core governance questions for military AI.
Two jury verdicts in the final week of March gave unusually explicit insight into corporate design intent. Plaintiffs argued that features like infinite scroll, autoplay, and algorithmic recommendations were deliberately crafted to increase user engagement. The juries accepted claims that these features functioned as addictive mechanisms and that the companies knew about the harms yet failed to mitigate them. For those following the integration of AI into defence practice, the takeaway is blunt: if commercial platforms intentionally engineer certain behaviours, similar design imperatives could be replicated in, or baked into, military systems unless governance explicitly addresses them.
Why platform companies matter for military AI
States pursuing advanced AI-enabled military capabilities often lack the internal capacity to build every component themselves, so they rely on external suppliers. This creates what analysts call a military-tech complex — close ties between armed forces, governments, and commercial tech firms. Many of the largest platform companies possess not just software products but the underlying cloud, hardware, and data infrastructure that modern machine learning applications depend on. Examples include commercial involvement with systems like Project Maven and the 2026 availability of Meta’s Llama models for defence-related uses. That infrastructural reach means platform design choices can shape how military tools are built, deployed, and understood.
What the court decisions teach us about design and responsibility
The legal strategy in the Los Angeles and New Mexico trials shifted blame away from user content toward product design. By treating social platforms as defective products, plaintiffs were able to navigate around traditional shields such as Section 230 of the Communications Decency Act. Internal documents presented in court, including memos describing efforts to optimize for continuous viewing and comparisons of engagement effects to addictive behaviours, illustrated corporate awareness of harm. For the military domain, the crucial point is that accountability cannot rest on the assumption that the technology provider is a neutral toolmaker: product architectures and monetization logics matter for operational outcomes.
Implications for governance and oversight
These developments reveal two linked governance challenges. First, achieving genuine accountability requires visibility into the decisions and trade-offs made throughout a system’s lifecycle. Accountability is not a mere label: it is a relationship of answerability that presupposes documentation, interrogation, and the ability to trace design choices back to actors. Second, the human–machine interaction problems often described as cognitive biases or over-reliance can sometimes be the result of deliberate product engineering. We therefore need governance frameworks that can differentiate between operator error and engineered dependency created by vendors.
Engineered dependency and military risk
When commercial platforms optimize relentlessly for user engagement, they institutionalize a vocabulary and a set of practices that prize frictionless interaction and continuous attention. Those design habits can carry over into military software procurement and configuration. Once an organizational practice proves effective — whether for advertising revenue or operator efficiency — it tends to become entrenched. The result is a risk that commanders will interpret and act on information shaped by opaque design choices, making it harder to ensure compliance with legal and ethical obligations.
Toward practical remedies
Realistic governance must confront the motivations of commercial actors and the technical avenues through which influence is exerted. That includes demands for access to design documentation, procurement clauses that condition use on transparency, and audit mechanisms that reveal the socio-technical contours of systems. The lessons of the recent trials are not only legal: they are procedural. If regulators and military planners want accountability to mean anything substantive, they must build tools to expose the traces of human decision-making embedded in AI systems supplied by private firms, rather than assuming those traces are neutral or invisible.
In short, the courtroom setbacks for Big Tech send a signal: design matters. As militaries continue to integrate AI, governance debates must move beyond abstract exhortations and focus on the concrete choices that shape human–machine interaction, the commercial incentives behind those choices, and the mechanisms that will make vendors answerable when those choices produce harm.

