AI knows it all, but what happens when it makes it up?
I remember research analysts being the most frustrated group back in November 2022 when ChatGPT exploded onto the tech scene. They were being asked to experiment with and use AI in their workflows, but it didn’t take long for them to hit a major stumbling block. After all, would you risk your career and credibility on a new technology fad?
While content creators like myself, data scientists, and engineers were thriving with AI adoption, we could only empathize with our research analyst peers as we partnered with them to find new ways to make OpenAI, Gemini, LangChain, and Perplexity cater to their requirements. Everyone tried building trust in AI as we put on our researcher hats.
But soon, the consensus was that AI hallucinations were a problem for knowledge workers, whether you were a researcher, content creator, developer, or business leader.
Fast forward to 2025, and despite all the advancements in AI, hallucinations haven’t disappeared. While companies like Anthropic, OpenAI, and NVIDIA are pushing the boundaries of AI reasoning models, the ghost of hallucinations still lingers. Our latest G2 LinkedIn poll reveals that nearly 75% of professionals have experienced AI hallucinations, with over half (52%) saying they’ve experienced them multiple times.
These new developments might promise smarter, faster, and more reliable AI, but the question remains: are they strong enough to keep hallucinations at bay?
Let’s take a closer look at the latest AI LLM updates shaping the industry:
A timeline of key AI LLM model updates in 2025
- February 24, 2025: Anthropic released Claude 3.7 Sonnet, the world’s first hybrid reasoning AI model, to enhance and expand output limits
- February 27, 2025: OpenAI unveiled GPT-4.5 Orion, integrating various technologies into a unified model for streamlined AI applications
- March 18, 2025: NVIDIA announced the open Llama Nemotron family of models with reasoning capabilities to empower enterprises
- March 20, 2025: At GTC 2025, NVIDIA launched NVIDIA Dynamo, open-source software designed to accelerate and scale AI reasoning models in AI factories
Hallucinations, the ‘Answer Economy’, and real-world challenges
As AI models evolve with new capabilities, the way we interact with information is also transforming. We’re witnessing the rise of a mega-trend that our very own Tim Sanders calls the “Answer Economy.” People are transitioning from search-based research to an answer-driven style of learning, buying, and working.
But there’s a catch. AI chatbots deliver instant, confident responses, even when they’re wrong. And despite accuracy concerns, these AI-generated answers are influencing decisions across industries. This shift poses a critical question: are we too quick to accept AI’s responses as truth, especially when the stakes are high? How strong is our trust in AI?
While AI chatbots are shaking up search and AI companies are leaping toward agentic AI, how strong are their roots when hallucinations haunt them? AI hallucinations can be as trivial as Gemini telling people to eat rocks and glue pizza, or as big as fabricating claims like the ones below.
AI hallucinations: a timeline of legal challenges
- January 6, 2025: An AI expert’s testimony was challenged in court for relying on AI-hallucinated citations in a deepfake-related lawsuit, raising concerns about the credibility of AI-generated evidence
- February 11, 2025: Lawyers in Wyoming faced potential sanctions for using AI-generated fictitious citations in a lawsuit against Walmart, highlighting the risks of relying on hallucinated data in legal filings
- March 20, 2025: OpenAI faced a privacy complaint in Europe after ChatGPT falsely accused a Norwegian individual of murder, raising concerns about reputational damage and GDPR violations
There were several other notable AI hallucination mishaps in 2024, involving brands like Air Canada, Zillow, Microsoft, Groq, and McDonald’s.
So, are AI chatbots making life easier or just adding another layer of complexity for businesses? We combed through G2 reviews to uncover what’s working, what’s not, and where the hallucinations hit hardest.
The G2 take
A quick comparison of ChatGPT, Gemini, Claude, and Perplexity shows ChatGPT as the leader at a glance, with an 8.7/10 score. However, a closer look reveals that Gemini leads on reliability, by a slim margin.
Source: G2.com
While ChatGPT is better at learning from user interactions to reduce errors and understand context, Perplexity and Gemini beat it on content accuracy with an 8.5 score.
Source: G2.com
Nearly 35% of reviews highlight the accuracy gap
These AI chatbots are being used in small businesses, SMEs, and enterprises by all kinds of professionals: research analysts, marketing leaders, software engineers, tutors, and more. And a deep dive into G2 review data reveals a glaring trend: inaccuracy remains a shared concern across the board.
We can’t help but notice that, right off the bat, an average of ~34.98% of reviews raise concerns about inaccuracy, context understanding, and outdated information.

Source: Original G2 Data
Users aren’t shy about flagging their frustrations. Out of the hundreds of reviews, accuracy concerns topped the list of cons:
- ChatGPT: 101 mentions of inaccuracy, with outdated information adding to the frustration
- Gemini: 33 instances of inaccurate responses, compounded by 26 complaints about context understanding
- Claude: Fewer reports, but with seven accuracy issues and five concerns about recognition
- Perplexity: While boasting quick insights, it wasn’t immune; users pointed out seven limitations related to AI accuracy
While China’s DeepSeek has turned heads and wreaked stock market havoc thanks to its speed and cost-saving go-to-market (GTM) product, it doesn’t have a distinct (and dare we say legal enough) presence in the USA, owing to valid concerns over safety and potential data siphoning. Speculation about its reliability outweighs the allure of affordability.
Our VP of Insights, Tim Sanders, called it out for its hallucination rate in a recent interview.
“DeepSeek’s R1 has an 83% hallucination rate for research and writing, which is far higher than the 10% hallucination rate of other AI platforms.”
Tim Sanders
VP of Research Insights at G2
Gemini: The ironic productivity booster for research analysts
We noted several research analysts use Gemini. Some particularly prefer the research mode and use it for academic and market research.
“Daily use, particularly in love with research mode. Gemini’s speed enhances the browsing experience overall, especially for those who use the internet for intensive research and work tasks or who multitask.”
Elmoury T.
Research Analyst
But here’s the twist: research analysts aren’t raving about Gemini for its research reliability. Instead, it’s the seamless connectivity to Google’s suite of tools and the customizable user experience that steal the spotlight. Productivity boosts, streamlined workflows, and smoother task management? Absolutely. Trusting it for rigorous research? Not so much.
While Gemini’s research mode aggregates information from the web, accuracy and fact-checking aren’t making the headlines. Memory management issues and sluggish performance also keep it from being a true research powerhouse.
Source: G2.com Reviews
ChatGPT: power player with precision pitfalls
From code generation to market research, ChatGPT has become a daily go-to for professionals to brainstorm, generate content quickly, and answer complex questions. Yet accuracy concerns persist.
Geopolitical topics and nuanced research often lead to misleading results. Context understanding is solid, but misinformation and hallucinations still plague users.
User reviews praise ChatGPT’s polished tone and contextual understanding, but this confidence often masks the occasional hallucination. Users highlighted its tendency to give plausible-sounding but inaccurate information, especially in complex or nuanced scenarios like geopolitics. It’s a textbook case of “sounding smart but not always being right.”
Paid account users are impressed with its new multimodal inputs, voice interactions, and memory retention, but also highlight its limitations in data analysis, image creation, and overall accuracy.
Overall, paid users find the product costly compared to the free alternatives available in the market, owing to ChatGPT’s server downtime and accuracy issues.
Source: G2.com Reviews
Source: G2.com Reviews
G2 reviews also surfaced how users go back and forth with ChatGPT to get their desired results. At times, users ran out of allotted tokens quickly, leaving their queries unsatisfied.
Source: G2.com Reviews
But for some users, the benefits far outweigh the pitfalls. For instance, in industries where speed and efficiency are essential, ChatGPT is proving to be a game-changer.
G2 Icon use case
Peter Gill, a G2 Icon and freight broker, has embraced AI for industry-specific research. He uses ChatGPT to analyze regional produce trends across the U.S., identifying where seasonal peaks create opportunities for his trucking services. By cutting his weekly research time by up to 80%, AI has become an essential tool in optimizing his business strategy.
“Traditionally, my weekly research could take me over an hour of manual work, scouring data and reports. ChatGPT has slashed this process to just 10-15 minutes. That’s time I can now invest in other important areas of my business.”
Peter Gill
G2 Icon and Freight Broker
Peter advocates that AI’s benefits extend far beyond the logistics sector, proving to be a powerful ally in today’s data-driven world.
Perplexity: speed meets smarts, with a side of stumbles
Perplexity’s external web search capability and rapid updates have earned it a solid fanbase among researchers. Users praise its ability to provide comprehensive, context-aware insights. The frequent integration of the latest AI models ensures it stays a step ahead.
But it’s not all sunshine and summaries. Users flagged issues with data export, making it harder to translate insights into actionable reports. Minor UX improvements could also significantly elevate the user experience.
Michael N., a G2 reviewer and head of customer intelligence, stated that Perplexity Pro has transformed how he builds knowledge.
Source: G2.com Reviews
“Easiest way of conducting small and complex research with accurate prompting.”
Business leaders and CMOs like Andrea L. are using different AI chatbots to either supplement, complement, or complete their research.
Source: G2.com Reviews
G2 Icon use case
Luca Piccinotti, a G2 Icon and CTO at Studio Piccinotti, uses AI to navigate complex market dynamics. His team uses AI to process vast amounts of data from surveys, social media, and customer feedback for sentiment analysis, helping them gauge public opinion and spot emerging trends. AI also streamlines their survey workflows by automating question generation, data collection, and analysis, making their research more efficient.
To translate insights into actionable strategies, Luca relies on predictive analytics to forecast consumer behavior, monitor competitors, and personalize marketing campaigns. His preferred AI tools? Perplexity for research and ChatGPT for managing and refining the data.
“Perplexity is our trusted companion for research purposes, while we use ChatGPT for managing the obtained data. We also use additional tools and wrappers, APIs, local models, etc. But the unbeatable ones are Perplexity and ChatGPT at this moment.”
Luca Piccinotti
G2 Icon and CTO at Studio Piccinotti
Claude: a fairly honest, human-like, data-deficient counterpart
Claude’s conversational tone and contextual understanding shine through in reviews. Users appreciate its willingness to admit when it doesn’t know something rather than hallucinating a response. That level of transparency builds trust.
Still, limited training data and capability gaps compared to competitors like ChatGPT remain areas for improvement. And while its strengths lie in conversational accuracy, its structured data analysis is still a work in progress.
Unlike most AI chatbots that confidently provide incorrect answers, Claude earns appreciation from users for its transparency when it doesn’t know something. This “honesty over hallucination” approach is a unique selling point, making it a preferred choice for users who value reliable feedback over speculative responses.
Source: G2.com Reviews
However, users also expressed frustrations around Claude’s professional mode, citing its usage bandwidth and lack of customer service.
Source: G2.com Reviews
Verdict: AI for research, yay or nay?
It’s a cautious yay, which is still better than the classic “it depends.”
AI chatbots are undeniably valuable research tools, especially for speeding up information gathering and summarizing. But they’re not flawless.
Four key takeaways
Hallucinations, accuracy issues, and inconsistent reliability remain challenges.
- Gemini can be your productivity sidekick, just not your research fact-checker, if you’re a research analyst who values integration and productivity over pinpoint accuracy.
- ChatGPT is a productivity booster for quick research tasks, but fact-checking remains a must, even if you’re paying a bomb for the paid subscription.
- Perplexity is a reliable information companion for researchers who value speed and cutting-edge AI.
- Claude is the choice for those seeking honest, human-like responses, but don’t expect it to crunch complex datasets.
My tried-and-tested prompting hacks to avoid AI hallucinations
- Prompt structure = Be precise + give context + specify the desired outcome + warn it about what its output shouldn’t contain + share an example if possible
- Use a prompt that calls on the AI’s chain-of-thought reasoning to check accuracy and identify hallucinations. Ask the AI chatbot: “Break down the steps you followed to produce this output. Also, can you explain your rationale for doing so?”
- Use templatization and follow organization-wide guidelines on using AI chatbots and LLMs for work
- Humans in the loop remain critical, especially in high-stakes environments like legal research, market research, medical research, financial research, etc.
- Always verify and cross-check sources. We know life gets busy, but a quick check is always cheaper than a lawsuit!
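The prompt-structure formula above can be sketched as a reusable template. This is a minimal illustration under my own assumptions; the `build_prompt` helper and its field names are hypothetical, not part of any LLM vendor’s SDK:

```python
# Illustrative sketch of the prompt structure described above:
# precision + context + desired outcome + exclusions + optional example,
# plus a chain-of-thought follow-up to audit the answer.
# All names here are hypothetical, not tied to any specific LLM SDK.

def build_prompt(task, context, outcome, avoid, example=None):
    """Compose a hallucination-resistant prompt from the listed ingredients."""
    parts = [
        f"Task: {task}",
        f"Context: {context}",
        f"Desired outcome: {outcome}",
        f"Do NOT include: {avoid}",
    ]
    if example:
        parts.append(f"Example of a good answer: {example}")
    # Chain-of-thought follow-up used to surface shaky reasoning.
    parts.append(
        "After answering, break down the steps you followed to produce "
        "this output and explain your rationale. Cite your sources."
    )
    return "\n".join(parts)

prompt = build_prompt(
    task="Summarize Q1 2025 trends in AI reasoning models",
    context="Audience: research analysts preparing a market brief",
    outcome="Five bullet points, each with a dated source",
    avoid="Speculation, undated claims, or invented statistics",
)
print(prompt)
```

The same template can be pasted into any chatbot by hand; the point is that each ingredient is stated explicitly instead of being left for the model to guess.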
Hallucinate less, verify more: avoid the AI tunnel vision trap
Expect AI models to double down on accuracy and transparency. Advances in multimodal AI and retrieval-augmented generation (RAG) may reduce hallucinations. Perplexity, OpenAI, Google, and Anthropic now have their own AI search capabilities, which can plug into real-time user data to sharpen the accuracy and relevance of outputs.
Although newer models like DeepSeek R1 are being built at one-tenth the cost of leading competitors, their trustworthiness will determine their fate in the global market.
In the end, AI chatbots and LLMs are your research sidekick, not your fact-checker. Use them wisely, question relentlessly, and let the data, not the chatbot, lead the way.
Enjoyed this deep-dive analysis? Subscribe to the G2 Tea newsletter today for the hottest takes in your inbox.
Edited by Supanna Das
