Italy’s antitrust authority closes probes into DeepSeek, Mistral, and Nova AI

April 30, 2026

The AGCM accepted binding commitments from all three chatbot providers, establishing a concrete benchmark for what ‘adequate’ hallucination transparency must look like in practice and setting a 120-day compliance window before potential fines.


Italy’s competition and consumer protection authority, the AGCM, has closed its investigations into three AI chatbot providers: China’s DeepSeek, France’s Mistral AI, and Turkey’s Scaleup Yazilim (operator of Nova AI). Each company agreed to binding commitments designed to improve how users are warned about the risk of AI hallucinations.

The closures were published in the AGCM’s official bulletin.

The three cases (PS12942 for DeepSeek, PS12968 for Mistral’s Le Chat, and PS12973 for Nova AI) were each opened on the basis that the companies’ chatbots had failed to inform users clearly, immediately, and intelligibly that they could generate inaccurate, misleading, or entirely fabricated content.

That failure, in the AGCM’s view, constituted a potentially unfair commercial practice under Articles 20, 21, and 22 of Italy’s Consumer Code, because it prevented users from making informed decisions about whether to use the services, particularly in high-stakes areas such as health, finance, and law, where overreliance on AI outputs could cause direct harm.


None of the three cases resulted in a formal finding of infringement or a fine. All three were resolved through the commitment mechanism available under Article 27(7) of the Consumer Code, under which companies propose remedies the authority deems sufficient to address its concerns.

The AGCM accepted those proposals. Failure to implement the commitments within the 120-day window, however, would reopen the cases and expose each company to fines of up to approximately $11.6 million.

What each company agreed to do

The commitments differ by company, reflecting the specific transparency failures identified in each case.

DeepSeek, operated by Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence, agreed to the broadest package: prominent warnings about hallucination risk added directly to its chat interfaces and website in Italian, a full Italian-language translation of relevant disclosures, internal compliance training workshops, and, unusually, an active technical commitment to invest in reducing hallucination rates.

The AGCM explicitly acknowledged that current technology cannot eliminate hallucinations entirely, making DeepSeek’s technical commitment a forward-looking obligation rather than a present-state claim. DeepSeek also agreed to submit a full compliance report to the AGCM within the 120-day deadline.

For Mistral’s Le Chat, the French AI company’s commitments, set out in AGCM decision No. 31864, run along four lines: in-chat disclaimers (specifically phrasing to the effect of ‘Le Chat may make mistakes. Please check responses’); a strengthened, Italian-localised version of its terms of service with explicit reference to the potential unreliability of outputs; improved accessibility of those terms throughout the user journey, including the homepage, login, registration, app store pages, and the chat interface itself; and a full Italian translation of its website and help centre.

The AGCM’s emphasis was on what it called ‘contextual’ transparency: users must be warned at the moment and place where risk materialises, not merely in terms and conditions buried at the end of a sign-up flow.

For Nova AI, operated by Scaleup Yazilim Hizmetleri, the commitments addressed two distinct transparency failures. The first was the same as in the other cases: warnings about hallucination risk had been absent from the chat interface.

The second was specific to Nova AI’s product architecture: the service is a cross-platform aggregator that provides a single interface for accessing multiple underlying AI models including ChatGPT, Gemini, Claude, and DeepSeek, but this was not made clear to users, who may have believed they were interacting with a single, proprietary AI.

Scaleup committed to making the service’s nature as an aggregator explicit, including disclosing that it does not itself generate or process the responses from the underlying models, alongside the standard hallucination disclosure requirements.

The AGCM’s three-case sweep is the first time a European regulator has extracted binding, specific commitments from AI companies on hallucination disclosure as a consumer protection obligation, and the first to do so simultaneously across companies from three different jurisdictions (China, France, Turkey), applying the same standard to all.

The conceptual framework Italy has established is transferable. The argument is simple: if a consumer product can cause harm through user overreliance on its outputs, then informing users of that risk at the point of use is a basic consumer protection obligation, not optional transparency.

The AGCM has been among Europe’s most aggressive regulators in the AI consumer protection space. Alongside the hallucination probes, the authority launched a separate abuse of dominance investigation in July 2025 into Meta’s integration of Meta AI into WhatsApp (case A576), imposing interim measures in December 2025 to suspend WhatsApp Business Solution terms that blocked rival AI assistants from the platform.

The European Commission opened its own antitrust case into Meta’s WhatsApp AI integration in December 2025. Italy is consistently moving faster than Brussels.

Through these commitments, Italy has articulated a practical standard: hallucination warnings must be contextual, meaning present in the chat interface at the moment of use rather than buried in terms of service. That standard is likely to inform how other EU regulators, and eventually the European Commission under the AI Act’s transparency obligations for general-purpose AI, approach the same question.

Article 53 of the AI Act requires providers of general-purpose AI models to document and provide adequate information about their models’ capabilities and limitations. The AGCM’s Consumer Code enforcement arrives first and sets a concrete precedent for what ‘adequate’ means in practice.

For AI companies operating in Europe, the message is clear: a disclaimer in the terms of service no longer satisfies the obligation. The warning must be where the user is, at the moment the risk is live.
