I Tested 8 Search Engines So You Don’t Have To

Some were great, some were broken, and one might change how you search forever.


Search engines are broken. Google is bloated with ads. ChatGPT makes stuff up. So I ran 240 queries across 8 platforms to find out what actually works in 2025. The results? Surprising. Two underdogs crushed it, and some big names flopped hard. If you're tired of digging for answers, this one's for you.


What else is happening....


Tired of cookie-cutter marketing advice that doesn’t work for your business? I built a growing library of done-for-you organic marketing plans tailored specifically for small businesses — from local services to niche online brands. Each plan is based on real-world results, Reddit insights, case studies, and proven SEO strategies. Whether you’re just starting or looking to grow, this resource gives you actionable steps to attract more customers without paying for ads.

👉 Explore the Marketing Plans Directory and get inspired with ideas built to actually work in the real world.


I Tested 8 Search Engines So You Don’t Have To

Before anyone could say a word, the world changed. 

ChatGPT entered the scene and hit 1 million users in just 5 days, far surpassing OpenAI’s expectations and generating massive buzz.

Kevin Roose called ChatGPT “the best AI chatbot ever released,” while Paul Graham of Y Combinator noted that even typically skeptical experts were blown away, saying “something big was happening.”

The Google Problem: Years in the Making

The biggest surprise was that ChatGPT caught Google off guard and showed everyone what they already felt: Google search has not been good for years.

A Reddit thread from six years ago called “GOOGLE SUCKS NOW” said old Google gave better results, but now it shows less helpful ones. Even Bing was seen as better.

A Hacker News thread from six years ago asked if people were adding “reddit” to their searches to get better results. Many said yes because they were unhappy with Google.

Maybe that is why Google now shows Reddit and Quora posts in the top five results for almost every keyword.

A 2022 Fast Company article talked about the same issue and cited a post from the DKB Blog. It asked why people search Reddit so often.

The answer was that Google search is getting worse, and much of the web feels too fake to trust.

Frankly, not much has changed, as evidenced by this meme about John Wick 5:

Reality Check: Chatbots Aren’t Perfect

Hopes that ChatGPT or similar tools could fix bad search results took a hit after a recent CJR study.

It showed that chatbots often struggled to find original sources and sometimes even made up fake links.

A February 2024 AP News article showed that chatbots failed to give basic info about voting as the primaries got closer.

Over half of the answers from five major AI models were wrong, and 40% were seen as possibly harmful. Google's Gemini gave wrong answers almost two-thirds of the time.

So are there any potential solutions to the current limitations of both traditional search engines and AI chatbots?

The Study: How I Compared 8 Search Platforms

To find out for myself, I did a mini study that took almost a full day to finish.

I tested 8 platforms by running 30 different questions on each, making 240 total queries. It was a small start, but a solid one.

This will be the first of many studies I plan to do.

It was fun, though a bit annoying to wait for results before writing. But I needed the data to give an honest opinion.

Let’s dive into how I did the study.

The questions I used ranged from simple ones like “What is the capital of France?” to complex ones like “What's the plot of the latest Marvel movie?”

I rated each response from 1 to 10 based on three things: accuracy, usefulness, and citations.
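To make the tallying concrete, here is a minimal sketch of how per-platform averages like mine can be computed. The scores below are illustrative placeholders, not my actual data: each query gets a 1–10 rating for accuracy, usefulness, and citations, and a platform's score is the mean across all of its ratings.

```python
# Illustrative scoring tally -- the numbers here are made up, not the study's data.
from statistics import mean

# ratings[platform] = one (accuracy, usefulness, citations) tuple per query
ratings = {
    "Brave Search": [(10, 9, 10), (10, 10, 9)],
    "Bing":         [(8, 7, 6), (9, 8, 7)],
}

def platform_score(rows):
    """Average every individual 1-10 rating across all queries."""
    return round(mean(score for row in rows for score in row), 1)

# Print platforms best-first, the way the rankings table is ordered.
for platform, rows in sorted(ratings.items(), key=lambda kv: -platform_score(kv[1])):
    print(f"{platform}: {platform_score(rows)}")
```

With 30 queries per platform, each average summarizes 90 individual ratings.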

For accuracy, I checked if the answer was correct or partly correct. Here's an example where Google Search missed the mark:

The query was “What's the plot of the latest Marvel movie?”

The correct answer is Captain America: Brave New World, and ChatGPT got it right:

That is the difference between an accurate and an inaccurate result.

The next factor is usefulness. Very simply: how useful was the result?

Here is an example of a not-so-useful result from ChatGPT:

ChatGPT respected copyright here, but links to the lyrics would’ve helped.

In comparison, Perplexity gave a much more useful result:

Perplexity gave lyrics, source links, and videos — very useful.

The third thing I checked was citations: were the links real, trusted, and recent?

Here is an example of bad citations from Bing. Some links worked, but others did not. 

Starting off, you get a really nice recipe carousel on Bing. Next, let's click on the first option, BBC.co:

This looks like a good experience. However, if we click the second “Read full directions,” we get a redirected link instead:

This happened repeatedly across links from the recipe carousel, and it is where Bing lost points.
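A quick way to spot-check citations like this is to compare the publisher a result claims to cite against where its link actually resolves. The sketch below assumes you have already collected those pairs; the example URLs are placeholders I made up, not Bing's actual links.

```python
# Spot-check whether a citation link really lands on the claimed publisher.
# The (domain, url) pairs below are illustrative placeholders.
from urllib.parse import urlparse

def same_publisher(cited_domain, resolved_url):
    """True when the resolved URL's host is the cited domain or a subdomain of it."""
    host = urlparse(resolved_url).netloc.lower()
    return host == cited_domain or host.endswith("." + cited_domain)

checks = [
    ("bbc.co.uk", "https://www.bbc.co.uk/food/recipes/example"),
    ("bbc.co.uk", "https://www.bing.com/ck/a"),  # resolved back to Bing instead
]
for domain, url in checks:
    print(domain, "ok" if same_publisher(domain, url) else "REDIRECTED/BROKEN")
```

In practice you would follow each link first (e.g. with an HTTP client that records the final URL) and then run this domain comparison on the result.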

Now, let's look at some amazing citations from DuckDuckGo:

This recipe carousel links directly to the publisher. That’s all you need!

The Rankings

The final scores surprised me as well:

| Platform | Average Score |
| --- | --- |
| Brave Search | 9.8 |
| Google AI Mode | 9.7 |
| Perplexity | 9.7 |
| Kagi | 9.5 |
| Google Search | 9.3 |
| DuckDuckGo | 9.2 |
| Bing | 8.4 |
| ChatGPT | 6.9 |

This study was fully subjective and not scientific. People value different things, but here is what I looked for:

  • correct answers
  • links to the source
  • clean design
  • clear responses
  • no confusing extras

First, I’ll cover the lowest scorers: Bing and ChatGPT. Then I’ll review Brave Search and Kagi.

Highlights and Lowlights

Bing

Bing lost points for poor design and broken links. Some results were confusing or had too much happening.

Here is an example from the search “What are the top-rated restaurants in Chicago?”

There is a carousel at the top and a blue box showing info from tastingtable.com and Tripadvisor.

These sections do not connect well. They should match, support each other, or one should be removed.

Clicking a carousel option opens a new Bing search instead of the restaurant’s website, which I found frustrating.

It felt like poor design, though that is just my personal opinion.

ChatGPT

ChatGPT’s score was lower than expected, which was disappointing since I use it often.

It lost points due to network issues and slow replies. OpenAI even showed an alert about high error rates during testing.

The answers were just okay, not great, especially compared to other platforms.

For example, here is what it gave for the question “How do I reset my iPhone?”

Perplexity gave steps for when your iPhone is locked, and Brave explained what to do if you forgot your password.

Using them felt like trying fresh, hand-rolled bagels after basic deli ones. I did not want to go back.

The Standouts: Brave Search and Kagi

Brave Search

I had not used Brave Search before this study, but it really impressed me. It scored 9.8 for accuracy, 9.7 for usefulness, and 9.8 for citations.

Its accuracy matched Google AI Mode and Perplexity, but it was more useful because of its simple, clean results.

Kagi

Kagi was new to me before this study, but it stood out quickly. It’s a paid search engine with no ads at all.

The results were accurate, useful, and easy to read. I also liked the clean layout. Here’s an example from the search “How do I reset my iPhone?”

I loved the “references” section at the bottom. The citations were clear, real, and made sense—three came from apple.com, and one from asurion.com.

Kagi quickly became my favorite. I had imagined a paid, ad-free search engine like this, and Kagi does a great job.

It also has an interesting feature: you can click the little shield icon on each search result and give feedback that changes its ranking. The instructions look like this:

Kagi has a great idea—let users give direct feedback and help shape the results. Google could really learn from that.

Search intent can be tricky, but Kagi handles it better than anything else I’ve seen so far.

Kagi focuses on quality, not popularity or SEO tricks. Users help judge results, and since it’s paid, there’s no data tracking.

Anyway, I’ve talked about this enough for now.

Key Takeaways

The main takeaway is that Brave Search and Kagi often scored perfect 10s and matched or beat top platforms like Google and ChatGPT, especially in accuracy and citations.

While Google was accurate, it lacked strong citations and had more ads. Brave and Kagi gave clearer, well-sourced answers, making them great options for quality and privacy.

ChatGPT gave helpful answers but lost points on citations, making it weaker for source-based questions. Kagi was the most balanced, scoring high in all areas.

With no ads, strong citations, and user control, Kagi stands out. This shows you no longer need to rely on Google because Brave and Kagi are strong options that sometimes perform even better.

Here's the summary of the summary:

  • ChatGPT: Poor answers, lack of consistent citations.
  • Bing: UX flaws, broken links, messy results.
  • Google (Search & AI Mode): Decent enough, but in some cases, completely missing the answer.
  • DuckDuckGo: Clean design, accurate answers, but difficult to differentiate between sponsored and organic listings.
  • Perplexity: Surprisingly useful and citation-rich.
  • Brave Search: Simple UX, high usefulness.
  • Kagi: Paid, ad-free, top marks in everything.

If you have questions or want to share your experience with different search platforms, feel free to leave a comment. I’m happy to help and offer guidance.