Eating Rocks and Making Excuses: Google’s Latest Gen-AI Fail

As Alphabet’s (NASDAQ:GOOGL) (NASDAQ:GOOG) Google desperately strives to play catch-up to Microsoft (NASDAQ:MSFT) in the generative-AI wars, it’s somewhat understandable if haste leads to a few missteps. However, Google’s latest blunder with its gen-AI offerings has moved from embarrassing to inexcusable.

As you may recall, OpenAI’s ChatGPT chatbot kicked off the whole gen-AI craze a year and a half ago. With that, AI became top-of-mind on Wall Street and a must-mention in every tech firm’s quarterly conference calls.

Amid the emerging AI wars, Microsoft leapfrogged Google and other rivals when it invested heavily in OpenAI’s gen-AI technology. Since then, Google’s mishaps have elicited mockery and hilarity on social media; this time around though, the errors aren’t a laughing matter anymore.

A pattern of mistakes

First and foremost, let me acknowledge that in the U.S., Google is still the search-engine king and Microsoft’s Bing still isn’t much of a threat. Despite all the chatter about Microsoft and other competitors toppling the king of the hill, Google still commands 90% or more of the search-engine market in 2024.

In January, Harvard Business School economist and professor Shane Greenstein warned, “Google has to be careful not to hurt its brand and product when it comes to testing new AI tools.”

Yet, Google is now doing damage control after a series of gen-AI blunders.

The pattern of mistakes started off with a spectacular failure when the company debuted Bard, the gen-AI chatbot that was supposed to be its answer to ChatGPT. In a widely shared demonstration of Bard, the chatbot delivered a factual error.

In response, Alphabet’s market capitalization took an immediate $100 billion haircut.

After that blunder, the Gemini AI engine was supposed to be Google’s new and improved successor to Bard. However, even Google CEO Sundar Pichai and co-founder Sergey Brin had to offer up mea culpas after Gemini delivered racially controversial and inaccurate images.

Now, the unfortunate pattern continues. Google’s AI Overview, which uses generative AI to respond to search queries, recently delivered results that have variously been described as “wild,” “odd,” “incorrect” and “outlandish.” To those descriptors, I’d add “offensive” and “dangerous.”

I’ll let Yahoo! Finance say what I’d rather not repeat in digital print:

[O]ver the last week… [Google’s AI Overview] told users they can use nontoxic glue to keep cheese from sliding off their pizza, that they can eat one rock a day, and claimed Barack Obama was the first Muslim president.

Here come the excuses

As I alluded to earlier, Google’s executives basically acknowledged that the Gemini fiasco was unacceptable. In contrast, a Google spokesperson offered excuses for AI Overview’s suggestion that it’s okay to eat rocks:

[It] seems a website about geology was syndicating articles from other sources on that topic onto their site, and that happened to include an article that originally appeared on the Onion. AI Overviews linked out to that source.

Does this mean it’s a satire publication’s fault? Or a geology website’s fault? Or just mere happenstance?

None of this passes the common-sense test. Google’s AI Overview should filter out unacceptable results and remain credible even when its source material isn’t.

Evidently, Google VP and Head of Search Liz Reid had the unenviable task of spearheading the company’s damage control.

In a blog post, she secured the Understatement of the Year award when she wrote that “some odd, inaccurate or unhelpful AI Overviews certainly did show up.”

They didn’t just “show up.” AI Overviews delivered them to users. Reid then dug herself deeper into the hole of excuses, stating that AI Overviews may be “misinterpreting queries, misinterpreting a nuance of language on the web, or not having a lot of great information available.”

Here’s a left-field proposal, if I may. If an AI engine doesn’t have “a lot of great information available,” then it’s perfectly okay for it to say that it doesn’t know the answer.

Admitting you don’t know something doesn’t detract from your credibility; actually, it can enhance Google’s credibility while it beta-tests its products in private rather than in public (if I may suggest this to a $2 trillion company).

Even after this rant, I’m not quite ready to suggest that Alphabet’s stockholders dump their shares. However, if the company doesn’t implement better product-quality oversight, then I might start to feel uncomfortable using Google, not to mention investing in it.

David Moadel
Financial Writer
