
RICE Scoring

Prioritise objectively with a four-factor formula

RICE Scoring is a prioritisation framework that scores each initiative on four factors: Reach (how many users affected per period), Impact (how much it moves the needle for each user), Confidence (how certain you are of your estimates), and Effort (person-months of work). Dividing Reach × Impact × Confidence by Effort produces a comparable score that removes gut-feel from prioritisation debates.
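In code, the formula is a one-liner (the function name and example numbers below are illustrative, not part of the framework itself):

```python
# Minimal RICE calculator; inputs follow the four factors described above.
def rice_score(reach, impact, confidence, effort):
    """reach: users per period; impact: 0.25-3; confidence: 0-1; effort: person-months."""
    return (reach * impact * confidence) / effort

# e.g. 2,000 users/quarter, medium impact (1), 80% confidence, 2 person-months:
print(rice_score(reach=2000, impact=1, confidence=0.8, effort=2))  # 800.0
```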

RICE was developed by Sean McBride at Intercom around 2015 and first published on the Intercom blog. It was rapidly adopted across the product management community as a practical, lightweight prioritisation framework.

Use RICE Scoring when

  • Backlog grooming sessions where multiple competing initiatives need objective comparison
  • Teams whose prioritisation debates are driven by HiPPO (Highest Paid Person's Opinion) rather than data
  • Communicating prioritisation rationale to stakeholders in a defensible format
  • Quarterly planning, to rank a large candidate list before committing to a roadmap

Avoid it when

  • Decisions requiring strategic, non-numeric judgment (market positioning, partnerships, platform bets)
  • Teams where estimation is so uncertain that any RICE score would be fiction
  • Cases where the priority is already obvious and the framework adds bureaucracy without insight

Key Concepts

Reach

How many users or customers will be affected by this initiative in a given period? Use real data where possible.

Impact

How much will this move the needle per user affected? Typically scored 0.25 (minimal) to 3 (massive).

Confidence

How confident are you in your Reach and Impact estimates? 100% = data-backed; 50% = gut feel; 20% = pure speculation.

Effort

Total person-months required to design, build, and ship. Includes all roles (PM, engineering, design).

RICE Score

(Reach × Impact × Confidence) ÷ Effort. Higher scores indicate higher expected ROI per unit of effort.

Confidence Penalty

The mechanism that prevents overconfident estimates from dominating. Low confidence dramatically reduces a feature's score, incentivising validation before building.
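A quick numeric sketch of the penalty, using invented estimates:

```python
# Two identical estimates that differ only in Confidence (all numbers invented).
def rice_score(reach, impact, confidence, effort):
    return (reach * impact * confidence) / effort

data_backed = rice_score(5000, 2, 1.0, 4)   # 2500.0
speculation = rice_score(5000, 2, 0.2, 4)   # 500.0
# Dropping from 100% to 20% confidence cuts the score fivefold, pushing the
# item down the ranking until its assumptions are validated.
```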

How it works

1. Gather Candidates

Collect all initiatives to be prioritised. Works best with 10–50 items — too few and it adds no value; too many and the effort estimation becomes a bottleneck.

2. Score Each Factor

Estimate Reach, Impact, and Confidence for each initiative. Use a structured template to ensure consistency. Calibrate scores across the team.

3. Estimate Effort

Get rough effort estimates from engineering and design. T-shirt sizing (S/M/L/XL) mapped to person-months is sufficient for initial ranking.
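One possible mapping, sketched in Python (the person-month values are an assumption; calibrate them against your own team's delivery history):

```python
# Sample t-shirt-to-person-months mapping. The values here are illustrative,
# not a standard; adjust them to match your team's historical throughput.
TSHIRT_TO_PERSON_MONTHS = {"S": 0.5, "M": 1, "L": 3, "XL": 6}

effort = TSHIRT_TO_PERSON_MONTHS["L"]  # 3 person-months for a Large initiative
```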

4. Calculate and Stack-Rank

Calculate RICE scores. Rank from highest to lowest. Use the ranking as a starting point for roadmap planning, not a final mandate.
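The full loop can be sketched as follows (the initiatives and numbers are invented for illustration):

```python
# Score and stack-rank a small backlog.
def rice_score(item):
    return (item["reach"] * item["impact"] * item["confidence"]) / item["effort"]

backlog = [
    {"name": "Onboarding checklist", "reach": 4000, "impact": 1,   "confidence": 0.8, "effort": 2},
    {"name": "SSO integration",      "reach": 500,  "impact": 3,   "confidence": 1.0, "effort": 4},
    {"name": "Dark mode",            "reach": 6000, "impact": 0.5, "confidence": 0.5, "effort": 3},
]

for item in sorted(backlog, key=rice_score, reverse=True):
    print(f"{item['name']}: {rice_score(item):.0f}")
# Onboarding checklist: 1600
# Dark mode: 500
# SSO integration: 375
```

Note that Dark mode outranks SSO despite its lower Confidence: its much larger Reach compensates, which is exactly the trade-off the formula is designed to surface.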

Tools that support RICE Scoring

#1 Jira · 4.3 · Free tier

Industry standard for software development teams — most PMs will encounter Jira in their career

#2 Asana · 4.4 · Free tier

Exceptionally intuitive and visually clean interface — one of the lowest onboarding friction tools for non-technical teams

#3 Monday.com · 4.5 · Free tier

Highly visual and intuitive interface with color-coded boards — one of the easiest PM tools for non-technical teams to adopt

#4 ClickUp · 4.7 · Free tier

All-in-one platform replacing multiple tools — docs, whiteboards, goals, time tracking, chat, and project management in a single workspace

#5 Notion · 4.7 · Free tier

Unmatched flexibility as an all-in-one workspace — combines docs, wikis, databases, and project management in a single tool

#6 Smartsheet · 4.4 · Free tier

Spreadsheet-familiar interface makes adoption easy for teams transitioning from Excel — minimal training needed for basic use

#7 Trello · 4.4 · Free tier

Extremely intuitive drag-and-drop Kanban interface — virtually zero learning curve, new users productive within minutes

#8 Figma · 4.7 · Free tier

Browser-based with no installation required — runs on any OS and enables instant sharing via URL, removing friction for cross-functional collaboration with PMs, engineers, and stakeholders

Frequently Asked Questions

What's a good RICE score?

RICE scores are only meaningful relative to other items in the same exercise. The goal is stack ranking, not absolute targets. A feature with RICE 100 is prioritised over one with RICE 50 — but only within the context of consistent estimates across the same backlog.

How do you score Reach for new features with no data?

Use proxies: similar features in comparable products, size of the affected user segment, or the proportion of users who encounter the relevant use case. Document your proxy assumption so it can be challenged. Pair a low Confidence score with high-uncertainty Reach estimates.

Is RICE better than ICE (Impact, Confidence, Ease)?

RICE adds Reach to ICE. Reach is valuable when your initiatives affect different segments of your user base — it prevents you from over-investing in features for a small but vocal minority. If all your initiatives affect all users equally, ICE is simpler and produces equivalent rankings.
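A small sketch of where the two diverge (the numbers are hypothetical, and Ease is approximated as 1/Effort purely for illustration):

```python
# Two hypothetical initiatives: one broad and modest, one narrow and deep.
a = {"reach": 10000, "impact": 1, "confidence": 0.8, "effort": 2}
b = {"reach": 200,   "impact": 3, "confidence": 0.8, "effort": 1}

def ice(x):
    # Ease approximated as 1/Effort for this comparison.
    return x["impact"] * x["confidence"] / x["effort"]

def rice(x):
    return x["reach"] * ice(x)

# ICE prefers b (≈2.4 vs 0.4); RICE prefers a (≈4000 vs 480), because
# only RICE accounts for how many users each initiative touches.
```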

Related frameworks

  • MoSCoW Prioritisation
  • Kano Model
  • OKRs