After Share of Search and Share of Voice comes SERP Market Research

The SERP (for the boardroom people reading this: the 'SERP' is the 'Search Engine Results Page', a.k.a. 'Google') shows your full market: who offers what, how they position themselves, how visibly, and what buyers are really asking.

You, as an SEO specialist, can use it to demonstrate the true value of SEO. That is, if you're willing to look beyond rankings.

Beyond rankings, there is a market. At scale. For everyone to see. For those who look closely. #

The SERP offers us quality data. We only need to learn to look, and look closely. Not at keywords, and not at rankings, but at markets. And within those markets: your competitors, with their market share, the value of that share, and their positioning. And also in that market: people, with emotions, biases, jobs to be done, journey stages.

SERP Market Research starts where Share of Search and Share of Voice end. Because what you want to know is:

  • What actually IS the market?

  • Which players are visible in it? Not in terms of positions, but literally visible; and not based on a handful of researched keywords, but on ALL of them?

  • What is each player's share? Real share?

  • What is that share worth?

  • How are they positioning themselves? How successful are they at it?

  • And separately from the competitors, there is the audience: what are they searching for?

  • Why? Based on which emotions?

  • How far are they in their journey?

  • Which jobs do they want to get done?

No need to be fancy: a simple table can already show real value

Market share and value per competitor for a single category, total market value €29.3M across 61,672 queries (top 80% analysed across 17,310 competing domains). The columns translate the measurement: Keywords as presence, Avg. Pos. and W. Pos. as positioning, Value and Share as economic translation. Domains and category anonymised.
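A table like that can be built from raw SERP data with little more than grouping and weighting. The sketch below is illustrative, not the article's actual pipeline: the CTR-by-position curve and the example rows are assumptions I am introducing for the demonstration, and real analyses would use an observed click model.

```python
# Illustrative sketch: deriving the table columns (Keywords, Avg. Pos.,
# W. Pos., Value, Share) from per-query SERP rows.
# Assumption: a simplified CTR-by-position curve (hypothetical numbers).

from collections import defaultdict

CTR = {1: 0.28, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05}  # est. clicks per position

def market_table(serp_rows):
    """serp_rows: iterable of (query, domain, position, monthly_volume, cpc)."""
    stats = defaultdict(lambda: {"keywords": 0, "pos_sum": 0.0,
                                 "w_pos_num": 0.0, "w_pos_den": 0.0,
                                 "value": 0.0})
    for query, domain, pos, volume, cpc in serp_rows:
        s = stats[domain]
        query_value = volume * cpc              # what the market prices this query at
        s["keywords"] += 1                      # presence
        s["pos_sum"] += pos                     # for plain average position
        s["w_pos_num"] += pos * query_value     # position weighted by query value
        s["w_pos_den"] += query_value
        s["value"] += volume * CTR.get(pos, 0.01) * cpc  # captured attention, priced

    total = sum(s["value"] for s in stats.values()) or 1.0
    return {domain: {"keywords": s["keywords"],
                     "avg_pos": round(s["pos_sum"] / s["keywords"], 1),
                     "w_pos": round(s["w_pos_num"] / s["w_pos_den"], 1),
                     "value": round(s["value"], 2),
                     "share": round(s["value"] / total, 3)}
            for domain, s in stats.items()}

rows = [  # invented example data
    ("best crm", "a.com", 1, 10_000, 4.0),
    ("best crm", "b.com", 3, 10_000, 4.0),
    ("crm pricing", "a.com", 2, 3_000, 6.0),
]
print(market_table(rows))
```

The weighted position is the detail that matters: a domain ranking first on one high-value query and tenth on everything else looks very different in W. Pos. than in a plain average.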

This is what SEO research should be. What it can be. And what lands far better in the boardroom than talk of traffic, canonicals, and 307 redirects. We know this, but the main question remains the one I always try to answer: that's fine, but how, exactly?

This 'how' has been the arc of my work for quite some time already1. And now, finally, I can say I have built it.

The SERP as a storefront #

I have worked in SEO for twenty years, long enough to notice the same conversation repeating itself. The work delivers measurable value. The reports show it. The traffic shows it. The revenue shows it. And then the conversation reaches the boardroom, and the value evaporates.

This is definitely not because the value is not there, but because the language to defend it does not exist. That gap is not a problem of skill or technique within the discipline. It is a problem of translation.

This is where I bring in a metaphor.

I see the SERP as a window to the outer online world: like a storefront.

A storefront of every category at once: the place where buyers go to see what is available, what others recommend, which alternatives exist. But the storefront metaphor is only an entry point. The real argument is about what lives inside it, and what you can do with that information once you stop reading it as a ranking ladder.

What the SERP shows that ranking analysis misses #

Three things become visible when you stop reading the SERP as a leaderboard and start reading it as a market:

  1.

    Category structure. A market is not the keyword list you built in 2019. It is the semantic space that emerges from how real searchers behave: the variations, refinements, and intent-shifts they actually use. Reading the SERP as a market means asking: what does aggregated search behaviour reveal about how this category is bounded? Which queries cluster together? Which intents are dominant, which are emerging? That structure is visible in the SERP if you read for it. It is invisible if you read for rankings.

  2.

    Competitive position at the category level. Ranking dashboards show position per keyword. Market reading shows presence per category. The first tells you where you appear in a list. The second tells you what share of the category's total attention you currently hold, and which competitors hold the rest. The first is operational data. The second is competitive intelligence.

  3.

    Buyer behaviour. Every refined query, every "people also ask" expansion, every related search is a trace of how buyers actually think about a category before they make a decision. Aggregated across thousands of queries, those traces reveal jobs to be done, decision criteria, anxieties, comparison frameworks, and the sequence in which questions arise. Most of this is in plain sight in the SERP. It is read as keyword opportunity by SEO tools. It can be read as buyer research by anyone willing to interpret it that way.

Read together, the three readings turn the SERP into a positioning map. You see not just who is present, but how each player is making its bid for the category.

Positioning per competitor

Same market share, different positioning. Where each player concentrates says more than where they rank.

A shout out, and what I've learned from other SEOs about this #

I've learned from others.

Dorron Shapow has consistently framed search user optimisation around buyer psychology, most visibly through systems like Persona-X, with the premise that every refined query is an unresolved need and every related search a trace of how buyers actually think before they make a decision. Arnout Hellemans approaches the SERP as a window into audience research and is developing this direction, which he will soon share more about at conferences near you. Emina Demiri-Watson has argued, consistently, that brand perception signals are an underappreciated component of SERP quality and that much of what is sold as new (GEO, AEO) is a repackaging of what serious SEO already does. Mark Williams-Cook has spent more than a decade building AlsoAsked and continues to publish on intent shift detection, click model analysis, and the patent-level mechanics of how Google constructs SERP features. On the methodological side, Daniel Foley Carter has consistently advocated for counting queries rather than (or at least alongside) ranking on them, applied at the page level. SERP Market Research takes the same principle and applies it at the market level: thousands of pages, across all the queries that define a category.

(I am absolutely 100% sure I'm forgetting loads of smart people who have said similar things. Sorry! Reach out to me; let's exchange ideas.)

Like them, I look at the SERP not (just) as a ranking ladder to climb, but as something far richer if you actually look closely. And I even see in it a way to validate our budgets, to be honest.

So, this is SERP Market Research #

What I have been describing is what I call SERP Market Research. It sits where three existing practices meet: share of search and share of voice (which together measure aggregate brand presence in organic and paid contexts), competitive intelligence, and classic SEO analysis. Together they map aggregate presence, who the players are (both competitors and people searching), and where you appear on individual queries. SERP Market Research combines these into a reading of search data as a category map: who holds how much attention, where, how, at what value.

Surface element, and the underlying signal it carries:

  • Query: a position statement from a human, the expression of a job to be done, a translation of an emotion
  • Competitor at position X, with title and snippet: a player positioning itself, saving ad costs, earning visibility, succeeding at it to some degree
  • Related searches: a fragment of collective intent, the next steps people take (fan-outs, anyone?)
  • People Also Ask: audience research, zero cost, in human language, at scale
PAA, at market scale

36,019 PAA questions, classified by behavioural pattern. Audience research that traditional methods cannot deliver at this scale.
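Classifying PAA questions at that scale can start very simply. The sketch below uses hand-written prefix and keyword rules; the pattern labels and cue lists are my own hypothetical examples, not the article's actual taxonomy, and a real classification at 36,019 questions would likely be richer and model-assisted.

```python
# Illustrative sketch: rule-based classification of PAA questions into
# behavioural patterns. Labels and cue words are hypothetical examples.

PATTERNS = [
    ("comparison", ("vs", "versus", "better than", "difference between")),
    ("cost",       ("cost", "price", "how much", "cheap", "expensive")),
    ("trust",      ("safe", "legit", "reliable", "scam", "worth it")),
    ("how-to",     ("how do", "how to", "how can")),
    ("definition", ("what is", "what are", "what does")),
]

def classify_paa(question):
    """Return the first behavioural pattern whose cues appear in the question."""
    q = question.lower()
    for label, cues in PATTERNS:
        if any(cue in q for cue in cues):
            return label
    return "other"

questions = [
    "What is a CRM system?",
    "How much does a CRM cost?",
    "Is HubSpot better than Salesforce?",
    "Is CRM software worth it?",
]
print([classify_paa(q) for q in questions])
```

Even this crude version surfaces the distribution that matters strategically: how much of a category's question space is anxiety ("worth it", "safe") versus comparison versus pure definition.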

The central metric: Opportunity Cost #

Reading the SERP as a market makes competitive position visible, but visibility alone does not translate into the language strategy actually uses. To make the comparison legible at the boardroom level, you need a unit of measurement that is identical across competitors and recognisable as a value indicator. That unit is opportunity cost.

Each query in a category has a market price, expressed as the cost-per-click that advertisers collectively are willing to pay for one click on that query. Multiplied across the queries you appear on organically, that price represents what it would have cost to buy the same presence through paid search. It is not revenue. It says nothing about conversion, margin, or customer value, all of which depend on factors specific to your funnel. What it does say is what the market collectively values that attention at, in a currency that is identical for you and for every competitor in the category.

That makes it useful precisely as a benchmarking metric, not as a revenue figure. "We capture 23% of an attention pool the market values at €2.1M annually, competitor Y captures 31%" is a comparison on equal terms, because the same CPC is applied to both. It allows competitive positioning to be expressed in monetary units without claiming a return that has not been earned. Boardrooms understand the distinction between market value and realised revenue. SEO reporting historically has not made it explicit enough to be defended.

Used alongside revenue metrics, never as a substitute for them, opportunity cost is the bridge that makes the rest of SERP Market Research strategically usable.
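The benchmark framing reduces to a small calculation: price the category's attention pool once at market CPC, then express every competitor's captured value as a share of that same pool. The numbers below are taken from the example sentence in the text; the function itself is a minimal sketch, not the actual tooling.

```python
# Minimal sketch of the opportunity-cost benchmark: the same CPC-priced
# attention pool applied to every competitor, reported as shares, not revenue.

def benchmark(pool_value, captured):
    """pool_value: market price of the category's attention (e.g. annual EUR).
    captured: {domain: value captured, priced at the same CPCs}."""
    return {domain: round(value / pool_value, 2)
            for domain, value in captured.items()}

# Example figures matching the text: a €2.1M pool, 23% vs 31% capture.
shares = benchmark(2_100_000, {"us": 483_000, "competitor_y": 651_000})
print(shares)  # → {'us': 0.23, 'competitor_y': 0.31}
```

Because the denominator and the CPC pricing are identical for every domain, the output is a comparison on equal terms, which is exactly what makes it defensible in a boardroom where realised revenue lives in a different spreadsheet.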

Where the gaps are

Gaps expressed in market value: €1.0M here, €809K there. Not missing keywords; missing positions in the category.

This data is actually pretty reliable #

A reasonable question at this point: why would the SERP be a trustworthy source of market intelligence?

The answer is that the SERP is a continuous, automated experiment. Google tests constantly: which titles are clicked, which snippets satisfy, which layouts perform, which brands accumulate trust for which queries. This is not a claim the SEO industry makes. It is a claim Google makes, publicly. In 2023 alone Google reported running over 700,000 search experiments, resulting in more than 4,000 implemented changes to search, evaluated in part by roughly 16,000 external quality raters running side-by-side comparisons of proposed versus existing results. What you see in the SERP today is not a passive output. It is an optimised output, tuned against observed behaviour in real time.

The result is the publicly visible output of an A/B test the size of the entire searching population, paid for by Google and run at a scale no individual organisation could afford. That is why reading the SERP as a market signal is not speculation. The signal has already been tested against real behaviour.

It is a proxy, like all observational data is. But it is the only one that shows the whole searching population, live, unfiltered.

What remains is the interpretation.

Why it matters now even more than before #

The AI hype we're in makes all of this more valuable than it has ever been.

AI systems retrieve from existing indices, and they cite a small number of sources per query. Up to 90% of ChatGPT Search retrieval traces back to Google's index2. Google's AI Overviews draw on Google's own organic results. Whoever is broadly present across a category in search is, by mechanism, more likely to be retrieved and cited.

The empirical signal here is suggestive rather than settled. Commercial datasets from SEO research outlets report a Spearman correlation of approximately r=0.41 between topical breadth and AI citation frequency, more than twice as strong as domain authority3. Pages that rank across both a main query and its fan-out variants are 161% more likely to be cited4. These figures originate from commercial tooling and have not yet been independently replicated in peer-reviewed work. The pattern is consistent enough to act on cautiously, not strong enough to treat as a settled mechanism.

What is settled is that AI retrieval depends on the index, and that breadth across a category increases the probability of appearing in any given retrieval. The mechanistic detail of how that probability resolves into citation will keep changing as AI systems evolve. Gemini 3 changed citation patterns substantially in early 2026. The next model upgrade will change them again. Treating any specific correlation as a stable predictor is naive. Treating broad market presence as a generally favourable position regardless of which retrieval mechanism dominates is reasonable, because it survives changes in the underlying mechanism in a way that mechanism-specific optimisation does not.

The same logic extends to the interface itself. Whatever the SERP looks like in three years (ten blue links, an AI Overview, a fully generated answer surface like Google's Web Guide experiment, a conversational interface entirely), it will still be presenting some aggregation of what the market currently signals. The interface is the layer that changes. The aggregation underneath is where market reading operates. For as long as some entity is aggregating search-driven attention into a representation of a category, the question "who is present in that aggregation, and how broadly" will be answerable, and the answer will be strategically meaningful. That is an implication of how aggregation works, not a forecast about which interface will win.

What this is, and what it is not #

One note on the form this work takes. The tooling I have built (Skåut) is not a SaaS product. It runs as part of my own consultancy work with enterprise clients. I apply it in engagements where category-level competitive intelligence is what a client actually needs, and where my analysis of what the data shows adds more than the data would on its own. Whether Skåut becomes a broader product at some point is an open question. For now, it is how I work with clients, not how clients work with a tool.

References #

  • 1 The opportunity cost framing for SEO investment (reading organic presence through the lens of what the same attention would cost in paid search) is a line I have been developing in public for some time. I discussed it in depth on the Search with Candour podcast with Jack Chambers-Ward (episode "How to demonstrate the value of SEO", 7 July 2025: [link]) and in a talk at SEObenelux. This article extends that framing from individual SEO-investment justification to category-level competitive benchmarking.
  • 2 The Information, August 2025 (Efrati, Palazzolo, Mascarenhas). OpenAI uses SerpAPI to scrape Google results. Independent analysis by SEO consultant Alexis Rylko (July 2025) found up to 90% of ChatGPT Search URLs correspond to Google, with only ~30% matching Bing. The Bing Search API was retired by Microsoft on August 11, 2025.
  • 3 Reported correlation between organic keyword breadth and AI citation frequency: r=0.41, vs r=0.18 for domain authority. Sources: ZipTie.dev / SearchAtlas 2025, with confirming analysis from Wellows (February 2026). These figures derive from commercial datasets and aggregated tool research; they have not been independently replicated in peer-reviewed work and should be treated as suggestive rather than established.
  • 4 Search Engine Land analysis of 10,000 keywords. Pages ranking for both main query and fan-out queries are 161% more likely to appear in AI Overview citations. Confirmed by Gemini Digital Hub, December 2025 (173,902 URLs). Note that AI citation patterns changed substantially following the Gemini 3 upgrade in January 2026, and any specific correlation should be expected to shift as retrieval mechanisms evolve.
Published: April 24, 2026 ~ 19 min.

Eikhart - Mad Scientist