Google, the search engine, is having one of its weakest years since 2015. Which is to say, its market share has dipped just below 90% for almost every month in 2025, according to Statcounter. Of course, that still puts Google very much on top of the search world. And it is seeking to cement that dominance with more AI Overviews.
If you’ve managed to miss them since they arrived in May 2024, Google AI Overviews are the roughly 250-word AI-written summaries atop some Google search results. Just how many Google results get AI Overviews is a closely guarded secret. Research from SE Ranking found them on as many as 18% of all queries in 2024 (with a dip to 7% soon after release, amid a wave of weird results including the infamous “glue on pizza” AI Overview).
In March 2025, Pew Research also found 18% of searches came with AI Overviews. Then in May 2025, SE Ranking saw a sudden jump: AI Overviews on nearly 30% of Google searches.
This result may yet be an outlier. Google declined to say how many results get the AI Overview treatment, but remains confident that the product is helping. “As people use AI Overviews, we see they’re happier with their results,” a Google spokesperson said in a statement, “and they come to Google to even ask more of their questions.”
Google offered no evidence for that claim. Still, the Pew study also suggested users trust Google AI Overviews — or at least, users are 50 percent less likely to click on links in search results with an AI Overview attached. The crash in clicks from search results has already produced what many online publishers are calling a traffic apocalypse.
But should they be trusting AI Overviews more than articles that cite experts? In a year when the most hallucinatory of OpenAI’s new models get it wrong 40% of the time, this question has never been more important.
Mashable has tested AI Overviews for hallucinations twice before (in May 2024 and December 2024), endeavoring to focus on genuine searches (as opposed to glue-on-pizza-style gotcha questions). Here’s what we found in 2025.
How accurate are Google AI Overviews? Don’t ask Google
First of all, here’s what that Google spokesperson had to say about AI Overview accuracy: “The vast majority of AI Overviews are highly factual and we’ve continued to make improvements to both the helpfulness and quality of responses.”
But don’t tell that to the AI Overview that pops up when you ask Google the same question: the feature wildly overestimates its own hallucination rate.
Credit: Google screenshot
Take a moment to chef’s-kiss the irony here. A Google AI Overview is saying AI Overviews are wrong as much as 60% of the time, and it’s wrong, according to Google.
Here’s a Mashable story that the AI Overview cites, and appears to have misunderstood. It was reporting on a journalism experiment: various AI chatbots were given an article excerpt and asked to name the article and its author. The chatbots made up answers 60% of the time: troubling in itself, but that has nothing to do with hallucinations in Google AI Overviews — which Google says are supposed to be more accurate than other AI because they’re “rooted in search results.”
A Google spokesperson says that AI Overview accuracy is roughly equivalent to that of a similar, older search feature called featured snippets — which isn’t exactly the best argument, given that featured snippets have produced their share of dumb results too, and once promoted a bizarre conspiracy theory about President Obama.
Still, in my tests, which were anecdotal rather than a rigorous study, only 1 in 5 AI Overviews returned an inaccurate or misleading answer. Don’t be so down on yourself, Google AI Overviews!
Google AI Overviews can get it wrong when AI Mode gets it right
When AI Overviews do get it wrong, however, it can be a doozy. Case in point: the hottest nerd question on the internet right now. Or at least, the one question every Doctor Who fan wants answered.

Fake fan!
Credit: Google screenshot
To explain: in the finale of the long-running sci-fi show’s most recent season, the 15th Doctor (Ncuti Gatwa) regenerated into … well, someone who looks like the Doctor’s former companion Rose Tyler (Billie Piper). The BBC, meanwhile, has conspicuously not announced Piper as the 16th Doctor. Fans suspected some kind of timey-wimey fakeout — until Doctor Who showrunner Russell T Davies revealed that even he didn’t know if Piper was the 16th or not.
In other words, it’s a hotly debated open question, served with a healthy dose of nuance. But not according to the Google AI Overview, which confidently announced that Piper was not the 16th Doctor. For an encore, it went on to claim that Gatwa was still the Doctor, despite Gatwa and the BBC clearly stating otherwise. (We hear you, AI Overview: moving on from your favorite Doctor is hard.)
But here’s the kicker: clicking on Google AI Mode returns a completely different answer. AI Mode natters on at length without giving a straight yes or no, pretty much as a Doctor Who fan would:

Credit: Google screenshot
OK, good job AI Mode! Next question: why the heck didn’t you share any of this with your hardworking buddy in the next cubicle, AI Overviews?
Google declined to make a search executive available for interview (after doing so for our December 2024 checkup). But it’s hardly hallucination to suggest that an in-depth AI Mode query requires more processing power than an AI Overview. A Google spokesperson said that AI Mode is a cutting-edge tool based on the latest Gemini models, and that more of its results would wind up in AI Overviews over time.
Which is fair enough, but this does lead to the uncomfortable conclusion that Google is knowingly pushing less accurate results our way, while placing a more likely correct answer one click away.
AI Overview shoots for the moon, misses
So much for science fiction. Surely science fact is easier?
Not if the science fact question is “when is the next mission to the Moon,” as much of a gimme as that sounds. Google AI Overview claimed that NASA’s Artemis II mission is aiming to launch in September 2025, citing its source as a NASA press release from January 2024.
All well and good — except there was another NASA press release in December 2024, moving Artemis II’s target date to September 2026.
For a service that appears to be replacing a lot of news clicks, the AI Overview seems sometimes uninterested in things that are, y’know, new.
AI Overview doesn’t know DOGE
Another odd stuck-in-2024 moment came when I searched for “Elon Musk DOGE controversy.” This yielded an AI Overview on … a lawsuit over alleged price manipulation of Dogecoin, which was settled in November 2024.
Wonder if Google users who type those keywords might be thinking of more recent DOGE-related events, perhaps involving the entire U.S. government rather than a memecoin?
It would be a DOGE-like level of craziness to suggest that these results were entirely representative, however.
In my tests, as in our previous two check-ups, most AI Overviews yielded decent summaries; as a newly minted vegan I was pleased to see it knew about the existence of vegan pesto (replace the parmesan in regular pesto with nutritional yeast or cashew cheese), and I found novel suggestions for popping small dents out of my car trunk door (hair dryer followed by plunger).
So if it’s mostly useful, is it bad that AI Overview sometimes hallucinates? Your mileage may vary. But when I put it as a Google query, AI Overviews recused themselves. Instead, Google offered a featured snippet from Abeba Birhane, AI Accountability advisor to the Mozilla Foundation, responding in the negative.
“Google’s AI overviews hallucinate too much to be reliable,” Birhane wrote in 2024. “There is no clear evidence showing users even want this AI Overview.”
So whether AI Overviews or snippets are the future of Google search, it seems we can look forward to either one contradicting official Google statements.