# Opportunity Scoring & Prioritization
## Purpose
This is where judgment matters most.
Scoring translates raw, interview-backed observations into a ranked list of opportunities using a simple, high-level framework.
This step is subjective by design but disciplined in execution.
The goal is directionally correct prioritization that leadership can trust.
## What Gets Scored
Only items that already exist in the Opportunity Identification log.
Rules:
- Must be traceable to interviews or kickoff
- Must be written as a concrete opportunity
- No new ideas introduced here
If it was not heard, it is not scored.
## Scoring Dimensions
### 1. Effort (T-Shirt Size)
Effort is a time-to-meaningful-impact estimate.
| Size | Rough Range (Months) | Interpretation |
|---|---|---|
| XS | 0-1 | Configuration or small workflow |
| S | 1-3 | Clearly bounded, low dependency |
| M | 3-6 | Moderate integration or change |
| L | 6-12 | Cross-team, multi-system |
| XL | 12-18 | Structural or transformational |
Effort considers:
- Systems involved
- Data readiness
- Change management
- Security / governance hurdles
### 2. Value (1-5, 5 is good)
Value must be anchored to interview signal.
| Score | Meaning |
|---|---|
| 1 | Marginal improvement |
| 2 | Nice-to-have |
| 3 | Meaningful local impact |
| 4 | Clear functional leverage |
| 5 | Broad, cross-functional or strategic impact |
Ask:
- How often was this pain mentioned?
- How emotional was the language?
- How many teams benefit?
- Does this unlock downstream opportunities?
### 3. Risk (1-5, 5 is bad)
Risk captures execution and organizational risk.
| Score | Meaning |
|---|---|
| 1 | Very low risk |
| 2 | Manageable |
| 3 | Some uncertainty |
| 4 | Significant coordination or trust risk |
| 5 | High likelihood of stall or failure |
Risk drivers:
- Data sensitivity
- Write-back into core systems
- Policy or compliance friction
- Cultural resistance
## Priority Score Formula
A simple composite score is used to rank opportunities.
Example structure:
Priority = (Value × Weight) ÷ (Effort + Risk)
Effort enters the formula as a number, converted from its t-shirt size; Value and Risk use their 1-5 scores.
Exact weights can vary by engagement, but must be consistent within a sprint.
The goal is simple: high value, low effort, and low risk yield a higher score.
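As a minimal sketch of the calculation in Python, assuming a hypothetical effort-to-number mapping and a weight of 1.0 (neither is mandated by the template, so outputs will not match the sample rows below exactly):

```python
# Hypothetical constants: tune per engagement, but keep fixed within a sprint.
EFFORT_POINTS = {"XS": 0.5, "S": 1.5, "M": 3.0, "L": 4.5, "XL": 6.0}
VALUE_WEIGHT = 1.0

def priority(effort_size: str, value: float, risk: float) -> float:
    """Priority = (Value x Weight) / (Effort + Risk), with effort as a number."""
    return round(value * VALUE_WEIGHT / (EFFORT_POINTS[effort_size] + risk), 2)

print(priority("M", 5, 1))  # 1.25 under these assumed constants
```

Whatever constants are chosen, keep them visible in the workbook so the ranking stays auditable.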
## Canonical Scoring Sheet (Tab 1: Raw Scores)
Required columns:
| Column | Description |
|---|---|
| Rank | Calculated from Priority |
| Function | IT, Finance, Ops, etc. |
| Opportunity | What was observed |
| Impact | Why it matters |
| Potential improvement | Proposed solution |
| Effort | XS / S / M / L / XL |
| Value | 1–5 (5 is good) |
| Risk | 1–5 (5 is bad) |
| Priority | Calculated score |
| DupCount_Opportunity | How often this issue appears |
| DupCount_Solution | How often this solution recurs |
Example rows:
| Rank | Function | Opportunity | Impact | Potential improvement | Effort | Value | Risk | Priority |
|---|---|---|---|---|---|---|---|---|
| 12 | IT | Evaluating Microsoft Copilot and Snowflake Cortex | Could deliver quick wins via existing AI platforms | Ground MS Copilot in Snowflake data | M | 5 | 1 | 1.09 |
| 28 | Procurement | Lack of automated PO acknowledgments | Missed confirmations, risk of delays | Automate supplier/PO touchpoint | S | 3.5 | 1 | 0.85 |
| 69 | IT | Three separate ERP systems maintained in parallel | Data duplication, integration complexity | Migrate into a single ERP | XL | 2 | 5 | 0.32 |
## Ranking Sheet (Tab 2: Sorted by Priority)
Copy rows from Tab 1, sorted descending by Priority score.
Use a dynamic ranking formula that references the Priority column. If Priority is blank, show nothing; otherwise rank it against all other Priority values.
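The template implements this in Excel; as an illustrative pandas equivalent (the file and sheet names here are assumptions, not template requirements):

```python
import pandas as pd

# Assumed workbook layout: Tab 1 exported with the columns listed above.
df = pd.read_excel("opportunity_scoring.xlsx", sheet_name="Raw Scores")

# Blank Priority -> blank Rank; otherwise rank descending, with ties
# sharing the same rank, like a spreadsheet RANK over the Priority column.
df["Rank"] = df["Priority"].rank(method="min", ascending=False).astype("Int64")

# Tab 2 view: rows sorted by Priority, highest first.
ranked = df.sort_values("Priority", ascending=False)
```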
This enables:
- Easy re-sorting as scores change
- Sensitivity testing
- Transparent logic for leadership review
Example output:
| Rank | Function | Opportunity | Potential improvement | Effort | Value | Risk | Priority |
|---|---|---|---|---|---|---|---|
| 1 | Procurement | Tableau under-used for Procurement data | Create shared data scientist function | S | 4 | 1 | 1.30 |
| 1 | Operations | Tableau views limited and managed by IT | Create shared data scientist function | S | 4 | 1 | 1.30 |
| 1 | Finance | Tableau underutilized for Finance reporting | Create shared data scientist function | S | 4 | 1 | 1.30 |
| 7 | Operations | Manual hourly production boards at plants | Real-time Snowflake/Tableau refresh | S | 4 | 1.5 | 1.23 |
| 12 | IT | Evaluating MS Copilot and Snowflake Cortex | Ground Copilot in Snowflake data | M | 5 | 1 | 1.09 |
Ties are expected. They reveal opportunities that share a common solution.
## Deduplication Counts
| Column | Purpose |
|---|---|
| DupCount_Opportunity | How many times this issue appears across functions |
| DupCount_Solution | How many opportunities point to the same fix |
High DupCount_Solution = platform-level fix disguised as local issues.
This is a critical signal for theming.
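A sketch of how both counts could be derived in pandas, assuming the Opportunity and Potential improvement text has already been normalized (real interview notes usually need fuzzy matching or manual grouping first):

```python
import pandas as pd

df = pd.read_excel("opportunity_scoring.xlsx", sheet_name="Raw Scores")

# Count exact-text recurrences of each issue and each proposed fix across
# the full log; high solution counts flag platform-level fixes.
df["DupCount_Opportunity"] = df.groupby("Opportunity")["Opportunity"].transform("count")
df["DupCount_Solution"] = (
    df.groupby("Potential improvement")["Potential improvement"].transform("count")
)
```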
## Judgment Guidelines
When scoring:
- Err on the side of simplicity
- Be consistent across functions
- Document disagreements
- Revisit after seeing full distribution
If two senior people disagree on a score, that disagreement is signal worth capturing.
## Template
Use the standard Opportunity Scoring Template (Excel) for all sprints.
The template includes:
- Tab 1: Raw scoring with formulas
- Tab 2: Ranked output
- Conditional formatting for effort/value/risk
- Pre-built Priority formula
Do not modify the structure mid-sprint.
## What This Enables
This scoring model feeds directly into:
- Deduplication & theming
- Roadmap sequencing
- Executive narratives
- Pilot selection
Leadership will challenge this table more than any other artifact.
## Success Signal
This step is successful if:
- Top 10 items feel obvious in hindsight
- Low-ranked items are defensible to deprioritize
- IT agrees effort and risk are realistic
- Executives can say "yes, this matches what we heard"
Next step: Theming and Roadmap Construction, where ranked opportunities collapse into a small number of strategic tracks.