RICE Scoring
Prioritise objectively with a four-factor formula
RICE Scoring is a prioritisation framework that scores each initiative on four factors: Reach (how many users affected per period), Impact (how much it moves the needle for each user), Confidence (how certain you are of your estimates), and Effort (person-months of work). Dividing Reach × Impact × Confidence by Effort produces a single comparable score, reducing the role of gut feel in prioritisation debates.
RICE was developed by Sean McBride at Intercom around 2015 and first published on the Intercom blog. It was rapidly adopted across the product management community as a practical, lightweight prioritisation framework.
Use RICE Scoring when
- ✓ Backlog grooming sessions where multiple competing initiatives need objective comparison
- ✓ Teams whose prioritisation debates are driven by HiPPO (Highest Paid Person's Opinion) rather than data
- ✓ Communicating prioritisation rationale to stakeholders in a defensible format
- ✓ Quarterly planning, to rank a large candidate list before committing to a roadmap
Avoid it when
- ✗ Situations requiring strategic, non-numeric judgment (market positioning, partnerships, platform bets)
- ✗ Teams where estimation is so uncertain that any RICE score would be fiction
- ✗ Decisions that are already obvious, where the framework adds bureaucracy without insight
Key Concepts
- Reach: How many users or customers will be affected by this initiative in a given period? Use real data where possible.
- Impact: How much will this move the needle per user affected? Typically scored 0.25 (minimal) to 3 (massive).
- Confidence: How confident are you in your Reach and Impact estimates? 100% = data-backed; 50% = gut feel; 20% = pure speculation.
- Effort: Total person-months required to design, build, and ship. Includes all roles (PM, engineering, design).
- RICE score: (Reach × Impact × Confidence) ÷ Effort. Higher scores indicate higher expected ROI per unit of effort.
- Confidence is the mechanism that prevents overconfident estimates from dominating: low confidence dramatically reduces a feature's score, incentivising validation before building.
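The score formula can be sketched in a couple of lines of Python; the initiative's numbers below are hypothetical:

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE = (Reach × Impact × Confidence) ÷ Effort."""
    return (reach * impact * confidence) / effort

# Hypothetical initiative: reaches 2,000 users per quarter, high impact (2),
# 80% confidence in the estimates, 4 person-months of effort.
print(rice_score(2000, 2, 0.8, 4))  # → 800.0
```

Note how halving Confidence to 0.4 would halve the score to 400 — this is the penalty that nudges teams to validate assumptions before building.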
How it works
1. Collect all initiatives to be prioritised. The framework works best with 10–50 items — too few and it adds no value; too many and effort estimation becomes a bottleneck.
2. Estimate Reach, Impact, and Confidence for each initiative. Use a structured template to ensure consistency, and calibrate scores across the team.
3. Get rough effort estimates from engineering and design. T-shirt sizing (S/M/L/XL) mapped to person-months is sufficient for an initial ranking.
4. Calculate RICE scores and rank from highest to lowest. Use the ranking as a starting point for roadmap planning, not a final mandate.
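The steps above can be sketched end to end. The backlog items and the t-shirt-size mapping below are illustrative assumptions, not prescribed values:

```python
# Step 3: map t-shirt sizes to rough person-months (illustrative values).
TSHIRT_MONTHS = {"S": 0.5, "M": 2, "L": 4, "XL": 8}

# Steps 1–2: a hypothetical backlog of (name, reach/quarter, impact, confidence, size).
backlog = [
    ("Onboarding revamp", 5000, 1.0, 0.8, "L"),
    ("CSV export",         800, 2.0, 1.0, "S"),
    ("Dark mode",         3000, 0.5, 0.5, "M"),
]

def rice(reach, impact, confidence, size):
    return reach * impact * confidence / TSHIRT_MONTHS[size]

# Step 4: rank highest to lowest — a starting point, not a mandate.
ranked = sorted(backlog, key=lambda item: rice(*item[1:]), reverse=True)
for name, *factors in ranked:
    print(f"{name}: {rice(*factors):.0f}")
```

Here "CSV export" tops the ranking despite the smallest reach, because its effort is tiny — exactly the trade-off the formula is designed to surface.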
Tools that support RICE Scoring
- Industry standard for software development teams — most PMs will encounter Jira in their career
- Exceptionally intuitive and visually clean interface — one of the lowest onboarding friction tools for non-technical teams
- Highly visual and intuitive interface with color-coded boards — one of the easiest PM tools for non-technical teams to adopt
- All-in-one platform replacing multiple tools — docs, whiteboards, goals, time tracking, chat, and project management in a single workspace
- Unmatched flexibility as an all-in-one workspace — combines docs, wikis, databases, and project management in a single tool
- Spreadsheet-familiar interface makes adoption easy for teams transitioning from Excel — minimal training needed for basic use
- Extremely intuitive drag-and-drop Kanban interface — virtually zero learning curve, new users productive within minutes
- Browser-based with no installation required — runs on any OS and enables instant sharing via URL, removing friction for cross-functional collaboration with PMs, engineers, and stakeholders
Frequently Asked Questions
What counts as a good RICE score?
RICE scores are only meaningful relative to other items in the same exercise. The goal is stack ranking, not absolute targets. A feature with RICE 100 is prioritised over one with RICE 50 — but only within the context of consistent estimates across the same backlog.
How do I estimate Reach without usage data?
Use proxies: similar features in comparable products, the size of the affected user segment, or the proportion of users who encounter the relevant use case. Document your proxy assumption so it can be challenged, and pair high-uncertainty Reach estimates with a correspondingly low Confidence score.
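A proxy-based Reach paired with a deliberately low Confidence might look like this minimal sketch (the segment size and factor values are hypothetical):

```python
# Hypothetical proxy: roughly 30% of a 10,000-user base hits this use case.
proxy_reach = 0.30 * 10_000  # 3,000 users per quarter

# Confidence is set low (50%) precisely because Reach is a proxy, not measured data.
score = proxy_reach * 1.5 * 0.5 / 3  # impact 1.5, effort 3 person-months
print(score)  # → 750.0
```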
How does RICE differ from ICE?
RICE adds Reach to ICE's Impact, Confidence, and Effort. Reach is valuable when your initiatives affect different segments of your user base — it prevents you from over-investing in features for a small but vocal minority. If all your initiatives affect all users equally, ICE is simpler and produces equivalent rankings.
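The equivalence claim is easy to check: with identical Reach, RICE scales every ICE score by the same constant, so the ranking cannot change. A small sketch with made-up items:

```python
def ice(impact, confidence, effort):
    return impact * confidence / effort

def rice(reach, impact, confidence, effort):
    return reach * impact * confidence / effort

# Hypothetical initiatives: (name, impact, confidence, effort).
items = [("A", 2.0, 0.8, 4), ("B", 1.0, 1.0, 1), ("C", 3.0, 0.5, 2)]
REACH = 1000  # every initiative affects the same number of users

ice_rank = sorted(items, key=lambda i: ice(*i[1:]), reverse=True)
rice_rank = sorted(items, key=lambda i: rice(REACH, *i[1:]), reverse=True)
print([i[0] for i in ice_rank] == [i[0] for i in rice_rank])  # → True
```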