AI visibility platforms like Radix and PromptWatch have found G2 to be the most cited software review platform.
Radix analyzed 10,000+ searches on ChatGPT, Perplexity, and Google's AI Overviews and found that G2 has the highest influence for software-related queries, at 22.4%.
Similarly, PromptWatch found G2 to be the most visible B2B software review platform across 100 million+ clicks, citations, and mentions from AI search engines like ChatGPT, tracked across 3,000+ websites.
The data suggests that G2 has a meaningful influence on software searches in LLMs (e.g., ChatGPT, Perplexity, Gemini, Claude). As an independent researcher, I wanted to see whether I could detect this relationship in our data and validate these claims.
To get there, I analyzed 30,000 AI citations and share of voice (SoV) data from Profound, spanning 500 software categories on G2.
- Citations: A domain, G2 in this case, is cited in an LLM response with a link back to it.
- SoV: The number of citations a domain gets divided by the total number of available citations (written out below).
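Written as a formula (taking "total available citations" to mean all citations tracked across answers for a category's prompts, which is my reading of the definition above):

$$
\mathrm{SoV}_{\mathrm{G2}} = \frac{\text{citations received by G2}}{\text{total citations available in the category}}
$$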
What the data revealed
Categories with more G2 reviews get more AI citations and a higher SoV. When ChatGPT, Perplexity, or Claude needs to recommend software, G2 is among the first sources it cites. Here's what I found.
1. More reviews are linked with more citations
The data shows a small but reliable relationship between LLM citations and G2 software reviews (regression coefficient: 0.097, 95% CI: 0.004 to 0.191, R-squared: 0.009).
Categories with 10% more reviews have roughly 2% more citations. That is after removing outliers, controlling for category size, and using conservative statistical methods. The relationship is clear.
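For readers who want to see the mechanics, here is a minimal sketch of how a relationship like this can be estimated, assuming a log-log OLS specification (in line with the log transformations mentioned in the methodology) and a hypothetical per-category table with columns reviews_12m and citations_4w. It is an illustration, not the exact pipeline used in the study, which also removed outliers and controlled for category size.

```python
# Minimal sketch: log-log OLS of AI citations on G2 review counts.
# Assumes a hypothetical CSV with one row per G2 category and the columns
# "reviews_12m" and "citations_4w"; file and column names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("g2_category_metrics.csv")  # hypothetical export

# Log both sides so the slope reads as an elasticity:
# a 10% rise in reviews maps to roughly (1.10 ** beta - 1) more citations.
df["log_reviews"] = np.log(df["reviews_12m"])
df["log_citations"] = np.log(df["citations_4w"])

model = smf.ols("log_citations ~ log_reviews", data=df).fit()

print(model.params["log_reviews"])          # reported estimate: ~0.097
print(model.conf_int().loc["log_reviews"])  # reported 95% CI: 0.004 to 0.191
print(model.rsquared)                       # reported R-squared: ~0.009
```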

2. Categories with more reviews have a higher SoV
I also found a small but reliable relationship between G2 reviews and SoV (regression coefficient: 0.113, 95% CI: 0.016 to 0.210, R-squared: 0.012).
If reviews rise by 10%, SoV increases by roughly 0.2-2.0%.
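Under the same assumed log-log specification, that range follows directly from the confidence interval for the SoV coefficient: the effect of a 10% rise in reviews is $1.10^{\beta} - 1$, evaluated at the interval's endpoints and at the point estimate.

$$
1.10^{0.016} - 1 \approx 0.2\%, \qquad 1.10^{0.113} - 1 \approx 1.1\%, \qquad 1.10^{0.210} - 1 \approx 2.0\%
$$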

What does all this mean?
The number of citations and the SoV are primarily determined by factors outside this analysis: brand authority, content quality, model training data, organic search visibility, and cross-web mentions. Reviews explain less than 2% of the variance, which means they are a small piece of a larger puzzle.
But why G2 specifically?
AI models face a verification problem. They need scalable, structured signals to assess software quality. G2 offers three attributes that matter: verified buyers (which reduce noise), standardized schema (machine-readable), and review velocity (a signal of current market activity). With more than 3 million verified reviews and the highest organic traffic in software categories, G2 provides a signal density that other platforms can't match.
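To make "standardized schema (machine-readable)" concrete, here is a generic schema.org-style review snippet built in Python. It is purely illustrative of the kind of structured markup review platforms expose; it is not G2's actual markup, and the product and values are invented.

```python
# Illustrative schema.org-style JSON-LD for a software product with reviews.
# Generic example only; not G2's actual markup. Product and values are invented.
import json

product_snippet = {
    "@context": "https://schema.org",
    "@type": "SoftwareApplication",
    "name": "ExampleCRM",  # hypothetical product
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.5",
        "reviewCount": "1280",
    },
    "review": [
        {
            "@type": "Review",
            "author": {"@type": "Person", "name": "Verified Buyer"},
            "reviewRating": {"@type": "Rating", "ratingValue": "5"},
            "reviewBody": "Straightforward rollout for a 50-person sales team.",
        }
    ],
}

# Structured data like this is what makes review signals easy for machines to parse.
print(json.dumps(product_snippet, indent=2))
```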
A 10% increase in reviews correlating with a 2% increase in citations sounds modest. But consider the baseline: most categories receive limited AI citations. A 2% lift on a low base may be practically negligible. Still, in high-volume categories where hundreds of citations occur monthly, a 2% shift could meaningfully alter competitive positioning. In winner-take-most categories, where the top three results capture disproportionate attention, small citation advantages compound.
What matters is not your raw review count, but your position relative to competitors in your category. A category with 500 reviews where you hold 200 of them (a 40% share) has a different impact than a category with 5,000 reviews where you hold 200 (a 4% share).
Why this matters now
The buying journey is transforming. In G2's August 2025 survey of 1,000+ B2B software buyers, 87% reported that AI chatbots are changing how they research products. Half now begin their buying journey in an AI chatbot instead of Google, a 71% jump in just four months.
The real disruption is in shortlist creation. AI chat is now the top source buyers use to build software shortlists, ahead of review sites, vendor websites, and salespeople. Buyers are one-shotting decisions that used to take hours. A prompt like "give me three CRM solutions for a hospital that work on iPads" instantly creates a shortlist.
When we asked buyers which sources they trust to research software solutions, AI chat ranked first. Above vendor websites. Above salespeople.
When a procurement director asks Claude for the best CRM for 50-person teams today, they are getting a synthesized answer from sources the AI model trusts. G2 is one of those sources. The software industry treats G2 as a customer-success box to check, but the data suggests it has become a distribution channel: not the only one, but a measurable one.
What actions you can take based on these research insights
The best way to apply the data is to invest in reviews and your G2 Profile:
- Write a profile description (250+ characters) that clearly highlights your unique positioning and value props.
- Add detailed pricing information to your G2 Profile.
- Drive more reviews to your G2 Profile, for example by linking to your G2 Profile page from other channels.
- Initiate and engage with discussions about your product and market.
Methodology
To conduct this research, we used the following methodology and approach:
We took 500 random G2 categories and assessed:
- Approved reviews in the last 12 months
- Citations and SoV in the last 4 weeks
We removed rows where:
- Citations in the last 4 weeks were below 10
- The visibility score was 0 percent
- Approved reviews in the last 12 months were below 100
- Reviews were significant outliers
For the outcome variable, the median was unchanged after pruning, which supports that the pruning did not bias the center of the distribution.
We then analyzed the regression coefficient, 95% confidence interval, sample size, and R-squared, as sketched below.
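As a sketch, the row filtering above could look like the following, assuming the same kind of hypothetical per-category table (column names, the input file, and the exact outlier rule are illustrative; the study's own cut-off for "significant outliers" is not specified here):

```python
# Sketch of the category-level filters described in the methodology.
# Column names, the input file, and the outlier rule are assumptions.
import pandas as pd

df = pd.read_csv("g2_category_sample.csv")  # hypothetical 500-category sample

filtered = df[
    (df["citations_4w"] >= 10)             # drop rows with <10 citations in 4 weeks
    & (df["visibility_score"] > 0)         # drop rows with a 0% visibility score
    & (df["approved_reviews_12m"] >= 100)  # drop rows with <100 approved reviews
]

# Remove extreme review outliers; a simple 1st/99th percentile trim stands in
# for the study's unspecified outlier rule.
lo, hi = filtered["approved_reviews_12m"].quantile([0.01, 0.99])
filtered = filtered[filtered["approved_reviews_12m"].between(lo, hi)]

# Check from the write-up: pruning should leave the median roughly unchanged.
print(df["approved_reviews_12m"].median(), filtered["approved_reviews_12m"].median())
```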
Limitations include the following:
- Cross-sectional design limits causal inference: This analysis examines associations at a single point in time (reviews from the prior 12 months, citations from a 4-week window). We cannot distinguish whether reviews drive citations, citations drive reviews, or both are jointly determined by unobserved factors such as brand strength or market positioning. Time-series or panel data would be required to establish temporal precedence.
- Omitted variable bias: The low R² values (0.009-0.012) indicate that review volume explains less than 2% of the variation in citations and SoV. The remaining 98%+ is attributable to factors outside the model, including brand authority, content quality, model training data, organic search visibility, and market maturity. Without controls for these confounders, our coefficients may be biased.
- Aggregation at the category level: We analyze categories rather than individual products, which obscures within-category heterogeneity. Categories with identical review counts but different distributions across products may exhibit different AI citation patterns. Product-level analysis would provide more granular insights but would require different data collection.
- Sample restrictions affect generalizability: We excluded categories with fewer than 100 reviews, fewer than 10 citations, or extreme outlier values. While this improves statistical properties, it limits our ability to generalize to small categories, emerging markets, or products with atypical review patterns. The pruning preserved the median, suggesting the central tendency is intact, but tail behavior remains unexamined.
- Single-platform analysis: This study focuses solely on G2. Other review platforms (such as Capterra and TrustRadius) and data sources (such as Reddit and industry blogs) also influence AI model outputs. G2's dominance in software categories may not extend to other verticals, and multi-platform effects remain unquantified.
- Model specification assumptions: We use log transformations to handle skewness and assume linear relationships on the transformed scale. Alternative functional forms (such as polynomial and interaction terms) or modeling approaches (such as generalized linear models or quantile regression) might reveal non-linearities or heterogeneous effects across the distribution.
- Measurement considerations: Citations and SoV depend on Profound's tracking methodology and query selection. Different tracking tools, query sets, or AI models may produce different citation patterns. Review counts depend on G2's verification process, which may introduce selection effects.
These limitations suggest our estimates should be interpreted as suggestive associations rather than causal effects. The relationship between reviews and AI citations is statistically detectable, but it operates within a complex system of multiple influencing factors.
