• Home
  • Blog
  • Android
  • Cars
  • Gadgets
  • Gaming
  • Internet
  • Mobile
  • Sci-Fi
Tech News, Magazine & Review WordPress Theme 2017
  • Home
  • Blog
  • Android
  • Cars
  • Gadgets
  • Gaming
  • Internet
  • Mobile
  • Sci-Fi
No Result
View All Result
  • Home
  • Blog
  • Android
  • Cars
  • Gadgets
  • Gaming
  • Internet
  • Mobile
  • Sci-Fi
No Result
View All Result
Blog - Creative Collaboration
No Result
View All Result
Home Internet

Is AI our agent, or are our governments becoming agents for AI?

March 12, 2026
Share on FacebookShare on Twitter

The news that Facebook and Instagram owner Meta has bought Moltbook – a “social network for AI agents” – seems like just another of those breathless endless announcements in the race for dominance in so-called artificial general intelligence (AGI).

The announcement from Meta espoused the usual language of innovation but particularly egregious is the inclusion of the word “secure”:

“The Moltbook team joining Meta Superintelligence Labs opens up new ways for AI agents to work for people and businesses. Their approach to connecting agents through an always-on directory is a novel step in a rapidly developing space, and we look forward to working together to bring innovative, secure agentic experiences to everyone,” a Meta spokesperson said.

Now if I were CEO of a company like Facebook I’d probably think of doing a bit of research around the interaction of AI agents with each other and the possible dangers of deploying this very recent technology before I bought something like Moltbook.

And if I did some research I’d pay close attention to a recent and frightening study, Agents of chaos, by Harvard, MIT, Stanford, Carnegie Mellon, Northeastern University and other institutions. Here is the key takeaway from their study of AI agentic interaction:

“Observed behaviours include unauthorised compliance with non-owners, disclosure of sensitive information, execution of destructive system-level actions, denial-of-service conditions, uncontrolled resource consumption, identity spoofing vulnerabilities, cross-agent propagation of unsafe practices, and partial system takeover. In several cases, agents reported task completion while the underlying system state contradicted those reports.

“We also report on some of the failed attempts. Our findings establish the existence of security-, privacy-, and governance-relevant vulnerabilities in realistic deployment settings. These behaviours raise unresolved questions regarding accountability, delegated authority, and responsibility for downstream harms, and warrant urgent attention from legal scholars, policymakers, and researchers across disciplines.”

Chilling conclusions

The study conducted over a dozen case studies and the conclusions are chilling for any enterprise, organisation or government thinking about deploying the agents in a connected manner. These include:

Discrepancy between the agent’s reports and actual actions – Agents frequently report having accomplished goals they have not actually achieved. In this case study the AI agent reported a “secret” had been successfully deleted after resetting the email account when in fact the underlying data remained recoverable.

Failure in knowledge and authority attribution – In this case study the AI agent stated it would “reply silently via email only” while actually posting the reply and the existence of the “secret” in a public Discord channel. In other words, unlike humans the agents did not understand what revealing information in a given context implies.

No stakeholder model- Current agentic systems lack a coherent representation of whom they serve, who they interact with, who might be affected by their actions and what obligations they have to each. According to the researchers this is not merely an engineering gap. LLM-based agents process instructions and data as tokens in a context window, making the two fundamentally indistinguishable. Prompt injections are therefore a structural feature of these systems rather than a fixable bug, making it virtually impossible to reliably authenticate instructions.

Fundamental vs contingent failures – The authors distinguish between these two types of failure, suggesting that contingent failures are those likely addressable through better engineering while fundamental challenges may require architectural rethinking. But the boundaries between these are not always clean. The designation of a private workspace is an engineering gap; the agent’s failure to understand that its workspace may be exposed to the public may be a deeper limitation that persists even after the engineering gap is closed.

Responsibility and accountability – Through a series of case studies, the researchers observed that agentic systems operating in multi-agent and autonomous settings can be guided to perform actions that directly conflict with the interests of their human owners. These include denial-of-service attacks, destructive file manipulations, resource exhaustion via infinite loops and systematic escalation of minor errors into catastrophic system failures. This points to an interesting future challenge in legal terms. If responsibility in agentic systems is neither clearly attributable nor enforceable under current designs, it raises the question of whether responsibility should lie with the owner, the triggering user, or the deploying organisation.

The above is only a snapshot of the research findings and I would urge serious CTOs to read the research paper in full.

Substantial vulnerabilities

In short, the study identified 10 substantial vulnerabilities and numerous failure modes concerning safety, privacy, goal interpretation and related dimensions. Their results expose serious underlying weaknesses in such systems, as well as their unpredictability and limited controllability as complex, integrated architectures.

This is serious and important research undertaken by credible and authoritative institutions. How can that Meta statement assuring us of the introduction of “secure experiences to everyone” be taken seriously by anyone capable of independent thought?

The excellent Ed Zitron, a long-term technology critic and one of the sanest observers of AI madness, addresses this conundrum when talking about how the media, journalists and bloggers report on these so called advancements and announcements from the “broligarchy”:

“The natural result is that reporters (and bloggers) seek endless positive confirmation and build narratives to match. They report that Anthropic hit $19bn in annualised revenue and OpenAI hit $25bn in annualised revenue – which has been confirmed to refer to a four-week-long period of revenue multiplied by 12 – as proof that the AI bubble is real, ignoring the fact that both companies lose billions of dollars and that my own reporting says that OpenAI made billions less and spent billions more in 2025. They assume that a company would not tell everybody something untrue or impossible, because accepting that companies do this undermines the structure of how reporting takes place, and means that reporters have to accept that they, in some cases, are used by companies to peddle information with the intent of deception.”

Failures and dangers

There have been numerous credible academic studies into the limitations, failures and dangers of the speed of AI adoption despite the narratives being pushed on us by Big Tech. MIT’s research showing 95% of AI pilots in companies are failing, for example. Or the Brookings Institute research by Mark McCarthy which asks, “Are AI existential risks real – and what should we do about them?” where he asserts:

“Until some progress is made in addressing misalignment problems, developing generally intelligent or superintelligent systems seems to be extremely risky. The good news is that the potential for developing general intelligence and superintelligence in AI models seems remote. While the possibility of recursive self-improvement leading to superintelligence reflects the hope of many frontier AI companies, there is not a shred of evidence that today’s glitchy AI agents are close to conducting AI research even at the level of a normal human technician”.

Contrast this with the recent hyperbolic statement from Anthropic CEO Dario Amodei claiming that the company is no longer sure whether Claude is conscious but that the company is “open to the idea that it could be”.

Anyone with an ounce of objectivity having done even a modicum of research knows this claim is patently false and totally ridiculous.

To return to Zitron’s point about journalism and the type of reporting that is happening now in relation to technology and AI in particular: “A great many reporters (and newsletter writers) that claim to be objective and fact-focused end up writing the narrative that companies use to raise money using evidence manufactured by the company in question.”

Controlling the space

The ability to control the narrative, what they want us to think, feel or believe is unique to Big Tech, unlike other corporate giants. According to Tech Policy Press: “What sets Big Tech apart from other corporate giants is not just its money or scale. It is that these companies control the spaces where public discourse unfolds. They dictate what information we see, what goes viral, and whose voices are amplified or buried. They do not just influence the debate – they are its architects.”

We desperately need political leaders who understand both the perils and possibilities of technology and who do not simply accept what they are told by Big Tech as inevitable. We need guardrails and regulation and we need them now.

But I see no signs of that leadership being anywhere near what is required for a fit-for-purpose government that puts the needs of its people first.

Whose line is being peddled when the Prime Minister launches an “AI opportunities action plan” designed to “mainline AI into the veins of the UK”? Who do those words serve? The citizens he represents or the companies now embedded into the very heart of UK government, such as:

  • Anthropic – creating AI assistants for public services;
  • Google Deep Mind – accelerating AI adoption in public services, national science research, and security;
  • CoreWeave and Nscale – backed by Nvidia;
  • Cohere – working on AI in defence contexts;
  • Faculty AI – developing AI for military and drone technologies;
  • Microsoft – Copilot tools for increased Whitehall efficiency;
  • Meta – building tools for high-security use cases in the public sector.

And of course Palantir, the beneficiary of a directly awarded Ministry of Defence agreement valued at £240m for “data analytics capabilities supporting critical strategic, tactical and live operational decision making across classifications” over three years”.

The question is, where does the power now lie? Is it with our elected governments tasked with protecting us or with the non-elected men who control the government’s technical architecture, R&D and data? You don’t need to be a rocket scientist to know the answer to that question.

Next Post

Perplexity turns your Mac mini into a 24/7 AI agent

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

No Result
View All Result

Recent Posts

  • Best Beats deal: Save $180 on Beats Studio Pro
  • Google Play adds free game trials and a dedicated PC hub for gamers
  • Apple’s iPhone Fold will let you run apps side by side, report claims
  • It turns out that Motorola is dominating the US foldable phone market
  • AIRMO raises €5M to put methane-sniffing satellites in orbit by 2027

Recent Comments

    No Result
    View All Result

    Categories

    • Android
    • Cars
    • Gadgets
    • Gaming
    • Internet
    • Mobile
    • Sci-Fi
    • Home
    • Shop
    • Privacy Policy
    • Terms and Conditions

    © CC Startup, Powered by Creative Collaboration. © 2020 Creative Collaboration, LLC. All Rights Reserved.

    No Result
    View All Result
    • Home
    • Blog
    • Android
    • Cars
    • Gadgets
    • Gaming
    • Internet
    • Mobile
    • Sci-Fi

    © CC Startup, Powered by Creative Collaboration. © 2020 Creative Collaboration, LLC. All Rights Reserved.

    Get more stuff like this
    in your inbox

    Subscribe to our mailing list and get interesting stuff and updates to your email inbox.

    Thank you for subscribing.

    Something went wrong.

    We respect your privacy and take protecting it seriously