
Rules-Based Pricing for Retailers Who Need to Explain Every Price

Rules-based pricing run by a deterministic agent: explainable conflict resolution, full audit trail, and a five-step loop that beats spreadsheets and black-box ML.
Retailgrid Team
9 min read

Every pricing team eventually hits the same wall. The spreadsheet that worked at 500 SKUs starts giving wrong answers at 5,000. The category manager who used to rerun prices every Tuesday now gets to it once a fortnight, and the lag shows up as margin leak. The CFO asks why a particular SKU was priced where it was, and the only honest answer is "because Maria changed it last March."

This is the gap rules-based pricing was supposed to close. And in most software it didn't, because most pricing engines are too rigid (one-size-fits-all formulas), too opaque (a model says the price is €4.79 — why? "the model said so"), or too brittle (one rule conflict and the whole batch errors out). What works is rules-based pricing run by an agent — a deterministic engine that takes the team's actual business rules, resolves the inevitable conflicts in a way humans can trace, and explains every decision so it can be audited, overridden, or copied into next quarter's playbook.

This is what Retailgrid's Rules-Based Pricing Agent does. Below is what that means in practice, how it differs from the two approaches it replaces, and what to expect when you put it on real data.

What rules-based pricing actually is

A rule in this context is not a formula in a cell. It's a structured object with four parts:

  • Type — the kind of constraint or transformation. Fixed price, price change limit, price match (vs. competitor or vs. sister SKU), margin floor, rounding, sync prices across a group, price relations between products.
  • Scope — which rows it applies to, defined as Column + Operator + Value filters combined with AND. "Cost > €100 AND Is Promo IS FALSE" is a scope. No scope means "all rows."
  • Parameters — the rule-specific settings. For a price-match rule, that's how close (exact, ±2%, within €0.50). For a margin floor, that's the minimum percentage and how it's calculated.
  • Priority and behavior — an integer priority (lower number = higher priority), a conflict behavior (follow priority by default; return an error for compliance-critical rules), and a missing-value behavior (skip silently or fail loudly).

That four-part structure is what separates a rule from a hand-typed override. It carries enough metadata that a machine can apply thousands of them deterministically — and a human can read the output and trace exactly why each decision happened.
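To make the shape concrete, here is a minimal sketch of that structure in Python. The class and field names are illustrative, not Retailgrid's actual schema; the point is that every part of a rule is data a machine can evaluate.

```python
from dataclasses import dataclass, field
from enum import Enum

class ConflictBehavior(Enum):
    FOLLOW_PRIORITY = "follow_priority"  # commercial default: log the violation, keep going
    RETURN_ERROR = "return_error"        # compliance rules: stop this SKU, flag for review

class MissingValueBehavior(Enum):
    SKIP = "skip"   # silently skip rows missing a required input
    FAIL = "fail"   # fail loudly so the data gap gets fixed upstream

@dataclass
class ScopeFilter:
    column: str    # e.g. "cost"
    operator: str  # e.g. ">", "IS"
    value: object  # e.g. 100, False

@dataclass
class PricingRule:
    rule_type: str                   # "margin_floor", "price_match", "rounding", ...
    scope: list[ScopeFilter] = field(default_factory=list)  # filters ANDed; empty = all rows
    parameters: dict = field(default_factory=dict)          # e.g. {"min_margin_pct": 10}
    priority: int = 100              # lower number = higher priority
    on_conflict: ConflictBehavior = ConflictBehavior.FOLLOW_PRIORITY
    on_missing: MissingValueBehavior = MissingValueBehavior.SKIP

# "Cost > €100 AND Is Promo IS FALSE" as a scope, attached to a margin floor:
floor_rule = PricingRule(
    rule_type="margin_floor",
    scope=[ScopeFilter("cost", ">", 100), ScopeFilter("is_promo", "IS", False)],
    parameters={"min_margin_pct": 10},
    priority=10,
)
```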

The traditional approach: spreadsheets and accumulated heroics

Most retailers in the €10M–€500M range still run pricing in Excel. There's a master sheet, a dozen lookup tables, a few VBA macros that nobody wants to touch, and a Slack channel where pricing analysts ask each other "wait, did we already mark this one down for the Easter promo?"

Two things break this setup as the assortment grows. First, rules conflict, and Excel has no native concept of priority. "Maintain margin > 10%" and "be cheaper than the named competitor" can't both be true on every SKU; somebody has to decide what to do, and in practice that decision happens one cell at a time, in someone's head, and never gets written down. Second, scale punishes consistency. A team of three can keep 1,500 active SKUs honest. The same team at 15,000 SKUs is doing triage, not pricing.

The McKinsey work on retail pricing puts numbers on this gap. Margin uplift from moving away from manual pricing toward data-driven systems is consistently in the range of 2–6 percentage points across categories, and most of that comes not from "smarter prices" but from more consistent application of the rules the business already had. The ceiling on a manual setup isn't strategy — it's coverage.

The black-box ML overcorrection

The reaction to spreadsheet sprawl has often been "let's throw machine learning at it." Demand-elasticity models, neural net pricing, reinforcement-learning loops. There's real value in some of that work, but as a replacement for rules-based pricing it hits a different wall.

A category manager who can't answer "why is this price €4.79?" can't defend it to a buyer, a regulator, or a CFO. A model that says "this price maximises expected profit given my training data" is, in practice, a model that gets switched off the first time it produces a result that surprises the team. Industry surveys consistently report 70–80% of enterprise AI projects fail to reach production, and a substantial share of those failures aren't accuracy problems — they're trust problems. The pricing team didn't push back because the model was wrong. They pushed back because they couldn't explain its decision to anyone above them.

So rules-based pricing isn't a step backward from ML. It's the layer on which ML eventually sits — once an organisation has the rule structure, the priority logic, and the audit trail, it can sensibly layer demand forecasts and elasticity-aware optimisation on top. Skipping that layer is what produces the failure rate.

What an "agentic" pricing engine adds

Calling something an agent is fashionable. It's also overloaded. In this context, the agentic part is precise:

  • It takes a goal-shaped input (the business rules, the dataset, the execution scope) rather than a step-by-step procedure.
  • It plans the resolution (compile rules, narrow corridors, resolve conflicts) without the human writing the order of operations.
  • It produces an explainable result (a final price and the trace of how it got there) rather than just a number.
  • It runs as a service, not as a script someone has to remember to kick off — the same job from the same inputs always produces the same output, and the result is written back to the workbook the team already uses.

In Retailgrid, that loop runs in five steps, and the team doesn't have to think about most of them. The agent loads the job, validates the dataset and column mappings, compiles the rules into vectorised constraint generators, runs a progressive corridor-narrowing algorithm to resolve conflicts in priority order, and writes the result back into the Grid as a set of explainability columns: the rule that triggered the price, the rules that influenced it, the rules that got overridden, the new margin, and the absolute and percentage delta.
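As a toy reconstruction of steps three and four, here is what compiling rules into vectorised constraint generators can look like in pandas and NumPy. Column names, rule functions, and values are invented for illustration, and conflict handling is deferred to the next section.

```python
import numpy as np
import pandas as pd

# A three-row stand-in for the Grid workbook (column names invented).
df = pd.DataFrame({
    "sku": ["A1", "A2", "A3"],
    "current_price": [19.99, 4.79, 120.00],
    "cost": [12.00, 3.10, 95.00],
    "competitor_price": [21.50, 4.60, 118.00],
})

# Step 3: compile each rule into a vectorised constraint generator, i.e. a
# function that maps the whole dataset to a (low, high) corridor per row.
def margin_floor(min_pct):
    def corridor(d):
        low = (d["cost"] / (1 - min_pct / 100)).to_numpy()  # lowest price hitting min_pct margin
        return low, np.full(len(d), np.inf)
    return corridor

def price_match(tolerance_pct):
    def corridor(d):
        comp = d["competitor_price"].to_numpy()
        return comp * (1 - tolerance_pct / 100), comp * (1 + tolerance_pct / 100)
    return corridor

# Step 4: progressive corridor narrowing, in priority order (already sorted here).
low, high = np.zeros(len(df)), np.full(len(df), np.inf)
for corridor in [margin_floor(10), price_match(2)]:
    lo, hi = corridor(df)
    low, high = np.maximum(low, lo), np.minimum(high, hi)

# Step 5: write results back as new columns; here, just the final price.
df["rule_based_price"] = df["current_price"].clip(low, high).round(2)
print(df[["sku", "current_price", "rule_based_price"]])
```

On this toy data, A1 gets pulled up to the bottom of the competitor corridor and A3 stays where it is. Nothing about the walk depends on row count, which is what makes the same loop viable at 15,000 SKUs.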

Conflict resolution, the part that matters most

The interesting decisions in rules-based pricing happen when two rules pull in opposite directions. The agent's job there isn't to be clever. It's to be predictable.

Rules are evaluated in priority order, lowest integer first. Each rule produces a corridor — a range of acceptable prices for the SKU. The agent narrows the feasible range as it walks down the priority list. When the next rule's corridor still intersects the current feasible range, fine — narrow further. When it doesn't intersect, two things can happen, and the rule itself decides which:

  • Follow priority (the default, used for commercial rules). The lower-priority rule is logged as violated, the engine stays inside the previously feasible range, and the final price moves to the boundary closest to where the violated rule wanted it. The team sees the violation in the output, with the rule name and the gap.
  • Return error (used for compliance and regulatory rules). The SKU stops processing. It comes back with a status of error and the name of the conflict, and the team handles it manually. Other SKUs continue normally — one stop doesn't take the batch down.
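A minimal sketch of that walk, per SKU. The tuple shape and behavior strings are invented; the point is that the resolution is a plain loop with no judgment calls left to runtime.

```python
import math

def resolve_sku(corridors):
    """Narrow one SKU's feasible price range in priority order.

    `corridors`: (rule_name, low, high, on_conflict) tuples, already sorted
    by priority (lowest integer, i.e. highest priority, first).
    """
    low, high, violated = 0.0, math.inf, []
    for name, lo, hi, on_conflict in corridors:
        new_low, new_high = max(low, lo), min(high, hi)
        if new_low <= new_high:               # corridors intersect: narrow further
            low, high = new_low, new_high
        elif on_conflict == "return_error":   # compliance rule: stop this SKU only
            return {"status": "error", "conflict": name}
        else:                                 # follow priority: log it and stay in
            violated.append(name)             # the previously feasible range; the
                                              # price later snaps to the boundary
                                              # nearest the violated rule's corridor
    return {"status": "ok", "low": low, "high": high, "violated": violated}

# A margin floor (higher priority) that cannot coexist with a competitor match:
print(resolve_sku([
    ("margin_floor_10pct", 13.30, math.inf, "follow_priority"),
    ("match_competitor",   11.90, 12.40,    "follow_priority"),
]))
# {'status': 'ok', 'low': 13.3, 'high': inf, 'violated': ['match_competitor']}
```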

This is the "deterministic" promise the agent makes. Same dataset plus same rules equals same prices, every time, regardless of the order rules were typed in or which server happened to run the job. That property is what unlocks the audit trail. A pricing decision that can be re-derived is a pricing decision a team can defend.

What you actually see on the Grid

When the agent finishes, every SKU in the workbook gets a set of new columns the team can sort, filter, and override on:

  • rule_based_price — the final number.
  • rbp_trigger_rule — the single rule that determined the price (often the one whose corridor was narrowest at the deciding moment).
  • rbp_applied_rules — every rule that influenced the calculation.
  • rbp_violated_rules — lower-priority rules whose corridors didn't fit.
  • rbp_skipped_rules — rules that didn't apply because the SKU didn't match the scope or had a missing value.
  • rbp_delta_pct and rbp_delta_abs — change from current price.
  • rbp_new_margin — recomputed margin at the new price.

A category manager doesn't read every row. The point is that when the buyer flags a single SKU and asks "why is this €19.99 and not €21.99?", the team has the answer in three clicks: trigger rule, applied rules, violated rules. No spreadsheet archaeology. No "the model said so."
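For a single flagged SKU, that three-click answer might read like this (every value below is invented for illustration):

```python
trace = {
    "sku": "B-10442",
    "rule_based_price": 19.99,
    "rbp_trigger_rule": "match_competitor_top_decile",  # the corridor that decided it
    "rbp_applied_rules": ["margin_floor_10pct", "match_competitor_top_decile", "round_to_99"],
    "rbp_violated_rules": ["category_uplift_3pct"],     # wanted 21.99; didn't fit the corridor
    "rbp_skipped_rules": ["promo_price_lock"],          # scope didn't match this SKU
    "rbp_delta_abs": -2.00,                             # down from 21.99
    "rbp_delta_pct": -9.1,
    "rbp_new_margin": 14.2,
}
```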

What this approach explicitly doesn't do

Three things to be honest about, all of which are encoded in how the engine is built.

It does not predict demand. It works on the signals it has — current price, cost, competitor price, KVI flag — and resolves rules over those. If a team wants elasticity-aware optimisation, that's a separate model layered on top, with its own assumptions and its own validation. Mixing the two upfront is what produces the black-box trust problem.

It does not run on opaque ML for the price decision itself. The optimisation rules use deterministic gradient descent inside a sealed corridor, and the post-processing rules (rounding, sync, hard bounds) are mechanical. There's no neural net guessing what the "best" price is. This is a deliberate choice; without it, explainability collapses.
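The article names the technique but not the objective, so here is a generic sketch of projected gradient descent inside a fixed corridor. The objective, learning rate, and step count below are invented for illustration.

```python
def optimise_in_corridor(price0, grad, low, high, lr=0.05, steps=200):
    """Every step is clamped back into [low, high], so the result can never
    leave the corridor the rules established. No randomness anywhere: the
    same inputs always return the same price."""
    p = min(max(price0, low), high)                  # start inside the sealed corridor
    for _ in range(steps):
        p = min(max(p - lr * grad(p), low), high)    # step, then project back in
    return round(p, 2)

# Minimise (p - 22.0)^2, i.e. pull toward a target of 22, inside [19.00, 21.00]:
print(optimise_in_corridor(20.0, lambda p: 2 * (p - 22.0), 19.00, 21.00))  # 21.0
```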

It does not replace the team. Override rates between 5% and 20% are healthy on a working configuration. Zero overrides means nobody is reading the recommendations. Persistent overrides above 30% on the same rule means that rule is wrong and the configuration needs revisiting. The agent surfaces this; the team decides.
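Those bands are simple enough to monitor mechanically. A toy check using the thresholds above; the function and its messages are mine, not the product's.

```python
def override_health(overridden: int, total: int) -> str:
    """Bands from the text: 5-20% is healthy, zero means nobody is reading,
    persistently above 30% on one rule means the rule is wrong. The gaps
    between bands are left as a judgment call."""
    rate = overridden / total
    if rate == 0:
        return "zero overrides: is anyone reading the recommendations?"
    if rate > 0.30:
        return "above 30%: revisit this rule's definition"
    if 0.05 <= rate <= 0.20:
        return "healthy"
    return "borderline: watch the trend"

print(override_health(18, 120))  # 15% of rows overridden -> healthy
```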

How a Practitioner team starts on it

Three things make the difference between a rules-based pricing rollout that lands and one that gets quietly turned off.

Start with a small rule set. A set of five to seven rules covers most working categories: a margin floor, a cost-plus minimum, a competitive corridor for the top decile of SKUs, a rounding rule, and one or two category-specific overrides. Adding the next thirty rules is what breaks rollouts; the first seven produce 80% of the value.
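Reusing the PricingRule sketch from earlier, that starter set might look like the following; every type name, parameter, and priority is illustrative rather than Retailgrid's schema.

```python
starter_rules = [
    PricingRule("margin_floor", parameters={"min_margin_pct": 12}, priority=1,
                on_conflict=ConflictBehavior.RETURN_ERROR),        # the one hard stop
    PricingRule("cost_plus_minimum", parameters={"markup_pct": 8}, priority=2),
    PricingRule("price_match",                                     # competitive corridor,
                scope=[ScopeFilter("is_top_decile", "IS", True)],  # top decile only
                parameters={"tolerance_pct": 2}, priority=3),
    PricingRule("fixed_price",                                     # category-specific override
                scope=[ScopeFilter("category", "=", "clearance")],
                parameters={"price_column": "clearance_price"}, priority=4),
    PricingRule("rounding", parameters={"ending": 0.99}, priority=5),
]
```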

Watch the violated-rules column for the first two cycles. Those rows are where the rule set is fighting itself. They're the most informative output the engine produces in the first month — better than the prices themselves, because they tell the team where their stated rules disagree with their actual practice.
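If the output columns land in a pandas DataFrame, surfacing those rows takes a few lines. This assumes rbp_violated_rules holds a list of rule names per row, as in the trace example above; the DataFrame here is a stand-in for the exported Grid output.

```python
import pandas as pd

# Stand-in for the exported run (only the column we need here):
df = pd.DataFrame({
    "sku": ["A1", "A2", "A3"],
    "rbp_violated_rules": [["match_competitor"], [],
                           ["match_competitor", "category_uplift_3pct"]],
})

violation_counts = (
    df["rbp_violated_rules"]
      .explode()          # one row per (SKU, violated rule) pair
      .value_counts()     # the rules that lose most often across the assortment
)
print(violation_counts.head(10))
```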

Set explicit priorities once, then leave them. The instinct on day three is to keep nudging priorities until the output looks nice. That instinct is what kills determinism. If the output is wrong, fix the rule definitions; the priority list should change rarely and on purpose.

The agent gives the team something it didn't have before: a tool that turns "we have pricing rules" from a sentence in a deck into something that actually runs, every cycle, on every SKU, with the receipts attached. The receipts are the part most retailers have been missing.

If you want to see what the agent does on a real workbook, the Retailgrid demo walks through a full job from rule definition to explainability output in about six minutes. The faster path is to drop in a 10,000-SKU dataset and a handful of rules and watch the trace columns fill in — that's the thing the team needs to see before the framework lands.

Want to talk through what the rule set on your assortment would actually look like? Get in touch with the Retailgrid team — bring the rules you already use; the agent will run them.
