AI Search Engines: The Problem with Made-Up Answers
AI chatbots are confidently incorrect 60% of the time.

A Columbia Journalism Review study published March 6th found that AI search engines often fabricate citations and answers, with chatbots producing incorrect responses 60% of the time. Grok 3, Elon Musk's AI tool, answered 94% of queries incorrectly. Google's Gemini gave a fully correct response in only one of ten attempts. Perplexity had the lowest error rate.
The issue here is fundamental. When Google Search became popular, it acted as a middleman, directing users to credible sources. AI search engines, by contrast, behave like a used car salesman: you're never entirely sure whether they're telling you the truth. And instead of admitting they don't know an answer, chatbots simply make one up. Imagine a job candidate fabricating answers in an interview just to sound competent. That's exactly what's happening here, except from a supposedly trustworthy source of information.
If AI search engines are going to replace traditional search, they need to be more accurate than the systems they're replacing. Right now, they won't even admit when they're wrong. That's a big problem.