Here is a thing that happens with some regularity in businesses that have been around long enough to have pricing models: someone decides to actually check.

Not estimate. Not infer from win/loss rates or sales rep anecdotes. Actually go to a competitor's website, look at the prices, and write them down.

And the number they find doesn't match anything in the model.

Tracking competitor prices manually? SiteScoop extracts them into a spreadsheet in seconds - no code, no uploads, nothing leaves your browser.

Try SiteScoop free →

Not by a rounding error. By eighteen percent. On a whole product category. Not a sale. Not a promotion. Just: that's the price. Has been for six months. Possibly longer. Nobody knew because nobody had looked, and nobody had looked because in most organizations, it's nobody's primary job - it's everyone's secondary one.

This is what competitor price analysis actually is in practice. It's not a strategic exercise. It's a reckoning.

The gap between "we're roughly right" and what the market is doing

Pricing teams work from models. Good ones, usually - built on years of sales data, customer feedback, regular internal review. The models feel solid. They're internally consistent, stress-tested, developed by experienced people who take pricing seriously.

What they're not is checked against what's actually live on competitor websites right now.

Research by pricing software companies consistently finds that the gap between a team's estimate of competitor pricing and the actual market data runs between 15 and 30 percent for teams that consider themselves well-informed. For teams that haven't done a systematic collection in six months or more, the gap routinely exceeds 40 percent.

Forty percent. On teams who thought they knew.

The models aren't wrong because the people building them are careless. They're wrong because the data feeding them is old, or inferred, or based on impressions. Prices move without announcements. A competitor responding to cost pressure doesn't send a press release. They update their website. The only way to know what changed is to look.

What the four-hour task nobody assigned actually costs

Manual competitor price collection is how most small and mid-sized businesses do this, when they do it at all. Someone visits each competitor's site on a set schedule, navigates to the relevant products, records the current prices. It's not complicated. It's just relentless.

A survey of 200 procurement managers found the median time spent on manual competitive price research was 4.2 hours per week per analyst. At the high end - complex product categories, lots of SKUs, several competitors - analysts reported 12 hours or more.

Twelve hours a week. On copy-pasting numbers into a spreadsheet.

And what those hours produce is a snapshot. A photograph of prices at a specific moment in time. By the time the spreadsheet is finished, some of those prices have already changed. The analysts doing this work know it. They're not confused about the economics of what they're spending their afternoon on. They just have a task that needs doing and the tools they have are a browser and a spreadsheet.

Three things that show up when someone finally looks systematically

There are patterns that appear reliably when businesses do a real, structured price collection for the first time. Not a spot-check. A proper sweep.

The range between cheapest and most expensive across major competitors is almost always wider than pricing teams expect going in. The mental model of a "market price" turns out to be an average of impressions rather than an actual distribution. When someone maps the distribution properly, it tends to be more spread out, and less symmetrical, than anyone assumed. Some competitors are running much higher than expected. Some much lower.
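For what it's worth, the mapping itself is trivial once the numbers exist. A minimal sketch, with invented prices standing in for a real collection of one product across eight competitors:

```python
from statistics import mean, median

# Hypothetical prices for the same product across eight competitors.
# Every number here is invented for illustration.
prices = [84.00, 89.50, 92.00, 94.99, 95.00, 99.00, 112.00, 131.00]

print(f"range:  {min(prices):.2f} to {max(prices):.2f} "
      f"(spread {max(prices) - min(prices):.2f})")
print(f"mean:   {mean(prices):.2f}")
print(f"median: {median(prices):.2f}")
# A mean sitting above the median hints at a long upper tail: a few
# competitors priced well above the pack, not a tidy bell curve
# around some single "market price".
```

The hard part was never this arithmetic. It was getting eight real numbers into the list.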

The price gaps also tend to cluster - not spread evenly across a catalogue. A competitor isn't uniformly cheaper. They're deliberate: aggressive in specific categories, at or above market everywhere else. Finding which categories is the genuinely useful intelligence, and that pattern is invisible until you have the data laid out in front of you. Then it's obvious. Then there's a moment of wondering how it wasn't obvious before.
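Finding the clusters is equally mundane once the collection exists - a grouping exercise, not a modeling one. Another minimal sketch; the categories and prices below are placeholders, not real data:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical observations: (category, our price, competitor price).
rows = [
    ("fasteners",    12.50,   9.80),
    ("fasteners",     8.00,   6.40),
    ("adhesives",    22.00,  21.50),
    ("adhesives",    15.00,  15.20),
    ("power tools", 199.00, 204.00),
    ("power tools", 349.00, 355.00),
]

# Positive gap = the competitor undercuts us, as a fraction of our price.
gaps = defaultdict(list)
for category, ours, theirs in rows:
    gaps[category].append((ours - theirs) / ours)

for category, g in sorted(gaps.items(), key=lambda kv: -mean(kv[1])):
    print(f"{category:12s} avg undercut {mean(g):+.1%}")
# Output shape: one category deeply undercut, the rest near parity -
# exactly the "aggressive here, normal everywhere else" pattern.
```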

And then there's promotional pricing - which, looked at over time, often isn't promotional at all. A product "on sale" on a competitor site 80 percent of the time is, functionally, priced at the sale price. The original figure is the fiction. A single visit to the site misses this completely. It only becomes legible once someone has been collecting long enough to see the pattern.
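That pattern, too, falls out of the data once the same product has been observed enough times. A sketch with made-up weekly checks of one competitor product - ten observations, eight of them at a "sale" price:

```python
from statistics import median

# Hypothetical weekly checks: (list price, price actually shown).
observations = [
    (49.99, 39.99), (49.99, 39.99), (49.99, 49.99), (49.99, 39.99),
    (49.99, 39.99), (49.99, 39.99), (49.99, 49.99), (49.99, 39.99),
    (49.99, 39.99), (49.99, 39.99),
]

share_on_sale = sum(1 for lst, p in observations if p < lst) / len(observations)
effective = median(p for _, p in observations)

print(f"on sale in {share_on_sale:.0%} of checks")
print(f"effective price {effective:.2f} vs list {observations[0][0]:.2f}")
# "On sale" 80% of the time: the sale price is the real price,
# and the list price is the fiction.
```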

Why one check is almost the same as none

A one-time price collection is a photograph. Markets are films.

The useful intelligence isn't a number at a point in time. It's what the number is doing - rising, falling, holding, drifting. A competitor at price parity six months ago who is now running 12 percent below is a different situation from a competitor who has been 12 percent below for two years. The current number is identical. The implication is completely different.

That kind of trend data requires consistent collection: same products, same methodology, regular cadence. Weekly in volatile categories. Monthly in more stable ones. The interval matters less than the consistency. What you're building, over time, is something a single check can never produce - the ability to see direction, not just position.
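Once those repeated collections exist, the trend question from above - long-standing gap versus gap that's opening up - takes a few lines to answer. A sketch with invented monthly numbers:

```python
from datetime import date

# Hypothetical monthly collections: (date, our price, their price).
series = [
    (date(2024, 1, 1), 100.00, 100.00),
    (date(2024, 2, 1), 100.00,  99.00),
    (date(2024, 3, 1), 100.00,  97.00),
    (date(2024, 4, 1), 100.00,  95.00),
    (date(2024, 5, 1), 100.00,  92.00),
    (date(2024, 6, 1), 100.00,  88.00),
]

# Negative gap = they are priced below us.
gaps = [(d, (theirs - ours) / ours) for d, ours, theirs in series]
for d, gap in gaps:
    print(f"{d}  gap {gap:+.1%}")

drift = gaps[-1][1] - gaps[0][1]
direction = ("falling below us" if drift < 0
             else "rising above us" if drift > 0 else "holding")
print(f"drift over period: {drift:+.1%} ({direction})")
# Same ending number as a competitor who has sat at -12% for years.
# Completely different story - and only the series shows it.
```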

The constraint isn't analysis, by the way. Most teams can interpret pricing data once they have it. The constraint is collection. Building a view of competitor pricing that reflects current reality, refreshes on a useful cadence, and covers the scope you actually need - that's the part that breaks. That's where the 4.2 hours a week goes. That's why it gets skipped when things get busy, and then done in a rush when a lost deal forces the question.

The lost deal is an expensive way to discover a competitor moved on price three months ago.

What happens after someone finally looks

Someone has spent the better part of a week collecting competitor prices. They're about to share what they found.

The surprise almost always runs in the same direction: further from market than expected. Sometimes advantageously - room to raise prices in categories where margin has been left on the floor, a gap that's been there so long it feels like an accident. More often the opposite: a competitor running below for long enough that some customers have already noticed, even if the internal model hasn't.

The SiteScoop extension handles the collection side of this: visit the page, extract the product and pricing data, export to a spreadsheet. No coding, no infrastructure, no server-side crawler to set up and maintain. Just the browser you already have, pointed at the pages you'd otherwise be visiting and transcribing by hand.

The method matters less than what it replaces. What it replaces is the work that doesn't get done because it's too slow, or gets done badly because there wasn't enough time, or gets done once and then abandoned because nobody wanted to spend another afternoon doing it.

Teams that track competitor prices consistently tend to hold their pricing beliefs a bit more loosely than teams that don't. They've seen the gap between assumption and market reality often enough to know that "we're roughly right" is a hypothesis, not a fact. It might be confirmed. It might not be. Either way, they know which one it is.

That's the thing that actually changes when someone starts looking properly. Not just the numbers. The relationship with the numbers.