Confirmation Bias in Performance Reviews: How to Avoid Rating Errors


“She’s a star, always has been.”

That was the opening remark from a divisional VP in a performance calibration session. We were reviewing a director who had launched two major initiatives, both of which went over budget and fell behind schedule. When someone raised concerns, the VP brushed them off: “Let’s not forget she saved our Q3 numbers last year. She’s earned our trust.”

The room nodded. No one challenged it further. The rating stayed high.

And just like that, confirmation bias won again.

[Infographic: Confirmation Bias, the Hidden Star-Maker. Does reputation override real results? Challenge your first impression: what evidence would flip your rating?]

Why this bias matters in business

Performance reviews are supposed to reflect reality. But confirmation bias - our tendency to seek, interpret, and remember information in ways that confirm pre-existing beliefs - quietly undermines this goal. Once we label someone as high-performing (or not), our brains begin to filter evidence to fit that narrative.

This matters because inaccurate reviews corrode accountability, distort development plans, and quietly demotivate top contributors.

In a meta-analysis published in Personnel Psychology, researchers found that even trained managers exhibited strong confirmation bias during evaluations, often ignoring disconfirming data once an initial impression was formed. The cost isn’t just reputational - it’s organisational. Promotions, raises, and retention all hinge on flawed signals.

We’ve seen it repeatedly: a high-potential leader leaves, frustrated that despite delivering results, they were “still seen as the guy from that project two years ago.” Or a struggling team member continues to coast on old praise while their performance flatlines.

It’s not about bad intent. It’s about blind spots.

So how do we fix this?

[Infographic: Why Bias Warps Business Reality. Distorted reviews erode trust and accountability. Audit your reviews: are you rewarding reputation or real results?]

The 4-Part Model: Bias-Proofing Your Review Process

Let’s be clear - we’re not aiming for utopia. No system will be completely bias-free. But we can make it fairer, more disciplined, and anchored in evidence. Here’s the model we use with executive teams:

1. Interrupt the Narrative

Before reviews begin, we ask: “What might we be wrong about?” This question interrupts the brain’s autopilot.

  • For each employee, write down your first gut-level rating.

  • Then challenge it: “What would I need to see to justify the opposite rating?”

This matters most for long-tenured team members, rising stars, and chronic underperformers. The longer the history, the stronger the story we’ve built.

Reflection Prompt:

Who on your team might be getting too much benefit of the doubt - or too little?

Micro-action: Ask one skip-level stakeholder for input before forming your rating. It disrupts your mental loop.

2. Separate Behaviour from Perception

Comments like “She’s not strategic” or “He lacks initiative” are judgments, not data. Instead, we ask:

  • What did they do or not do?

  • When and where did this happen?

  • What was the impact?

This subtle shift forces reviewers to cite observable facts rather than character traits. It also arms reviewees with concrete feedback they can actually act on.

Mini-exercise:

Take a piece of feedback you’ve written this cycle. Is it behaviour-based or perception-based? Rewrite it with specifics.

Micro-action: In your review form, add a column: “Observation or interpretation?” Tag each comment.

3. Triangulate Your Sources

No one sees the full picture. Managers see effort; peers see collaboration; customers see outcomes. Use all three.

  • Use upward feedback for behavioural checks.

  • Use cross-functional inputs for results validation.

  • Use 1:1 notes or project retros for time-stamped context.

By triangulating, you weaken the influence of any one viewpoint - especially your own.

Pro Tip: Ask each reviewer to cite at least two non-managerial data points.

Micro-action: Build a “feedback dossier” for each person, updated quarterly. It beats a last-minute memory scrape.

4. Normalise Calibration Conflict

If everyone agrees too easily in calibration meetings, it’s not consensus - it’s avoidance. The goal isn’t harmony. It’s intellectual honesty.

Great calibration sessions sound like this:

  • “I saw improvement on X, but a drop in Y. Which should weigh more this cycle?”

  • “We’re anchoring on last year’s rating. Do we need to reset?”

  • “Why are we scoring her higher than someone with stronger delivery metrics?”

Psychological safety plays a role here. But so does structured dissent.

Micro-action: Assign a “bias challenger” role in each calibration meeting. Their only job: poke holes.

[Infographic: Embedding Fairness in Reviews. Operationalise discipline for equitable outcomes: pre-review checklists, language audits, and timeboxed calibration.]

Embedding fairness into performance reviews

The model works best when it’s operationalised, not just aspirational. Here’s how teams can start:

  1. Create a pre-review checklist

    • Have I reviewed recent 1:1 notes and project feedback?

    • Am I relying too heavily on one event - good or bad?

  2. Audit your language

    • Replace traits with actions (“proactive” becomes “raised client issues in advance 3 times this quarter”).

  3. Timebox your calibration

    • Use a “1-minute challenge” rule: anyone can contest a rating for one minute, with no cross-talk. It forces clarity.

Pro Tip: Create “rating guides” with clear definitions and examples. It aligns expectations and reduces subjectivity.

Common traps even experienced leaders fall into

  • Halo/Horns Effect: One great (or poor) result colours the entire evaluation.

  • Recency Bias: Overweighting what happened last month instead of the full cycle.

  • Affinity Bias: Higher ratings for people who “feel like us” in background, communication, or personality.

  • Status Bias: Giving senior individuals more leeway than junior staff for similar missteps.

Fix: Build a bias radar - a short reminder slide before any review process begins. Keep it visible.

Leadership reflection corner

Prompt 1: When was the last time someone surprised you in a review - positively or negatively? What allowed that to happen?

Prompt 2: How might your past labels be distorting present assessments? Take 5 minutes to jot down three people and reassess.

What better reviews unlock

Getting this right pays off. Teams see:

  • Stronger accountability and ownership.

  • More accurate succession planning.

  • Fairer promotions and retention.

  • Greater trust in leadership decisions.

Importantly, it tells your people: “We see you clearly. And we’re willing to adjust when the data tells us to.”

Because the opposite - sticking to outdated stories - costs more than we admit.

Your next strategic move

This week, choose one high-visibility review and put it through the 4-part model. Don’t aim for perfection. Just test the discipline.

And if you’re up for a deeper dive, we’re happy to share templates or workshop this with your leadership team.


Team SHIFT


