Amazon Leadership Principles
Signal: "I noticed" -> "challenged assumptions" -> "used data" -> "quantified impact"

Describe a Situation Where You Used Data to Make a Counterintuitive Decision - Amazon LP Competency

Data-driven counterintuitive decisions with measurable impact

📌
Definition

Are Right a Lot means consistently making sound decisions based on data and judgment, even when those decisions go against conventional wisdom. The core test is whether the candidate can identify the right problem, challenge assumptions, and use evidence to support a counterintuitive conclusion.

Core Signal
Can the candidate demonstrate they used data and critical thinking to make a decision others initially disagreed with, and that decision was correct?
🏢
Company Framing

Amazon expects leaders to be owners who fix root causes by challenging assumptions and using data rigorously, not just implementers who patch symptoms or follow consensus.

🚫
What It Is NOT
  • Making decisions based solely on gut feeling without data
  • Completing assigned tasks well - that is execution, not judgment
  • Being right occasionally by luck rather than consistent rigor
  • Deferring decisions to others instead of owning the judgment
  • Simply following existing processes without questioning them
Candidate explicitly states they identified a gap or anomaly in data that others overlooked.
  • "I noticed the data didn’t match the expected trend"
  • "Nobody had flagged this discrepancy before"
  • "The metrics were counterintuitive compared to our assumptions"

Shows curiosity and skepticism, key to being right a lot by questioning the status quo.

Common Miss: "My manager mentioned it might be worth looking into"
Candidate describes gathering additional data or triangulating multiple sources to validate their hypothesis.
  • "I pulled logs from multiple systems"
  • "I ran a statistical analysis to confirm"
  • "I compared historical trends with current data"

Demonstrates rigor and depth in decision-making rather than superficial conclusions.

Common Miss: "I just looked at the dashboard and made a call"
Candidate explains making a decision that contradicted popular opinion or leadership direction based on their data analysis.
  • "Despite initial pushback, I recommended a different approach"
  • "I challenged the team’s assumptions with evidence"
  • "I proposed a counterintuitive solution supported by data"

Shows courage and conviction to be right a lot, not just agree with consensus.

Common Miss: "We all agreed it was the right call"
Candidate quantifies the impact of their decision with metrics and business outcomes.
  • "This change reduced errors by 30%"
  • "We saved $50K per quarter due to this fix"
  • "Customer complaints dropped by 15% after implementation"

Concrete impact proves the decision was right and valuable to the business.

Common Miss: "It improved things, but I don’t have exact numbers"
Candidate acknowledges uncertainty and trade-offs, explaining how they managed risk when data was incomplete.
  • "I had 70% of the data I wanted but decided to act"
  • "I balanced the risk of delay against potential gains"
  • "I monitored closely after launch to catch any issues"

Shows mature judgment and bias for action, critical for being right a lot in ambiguous situations.

Common Miss: "I waited until I had all the data before deciding"
Candidate describes learning from the outcome and adjusting their approach if needed.
  • "After initial results, I refined the model"
  • "I incorporated feedback to improve accuracy"
  • "We iterated based on data post-launch"

Demonstrates intellectual humility and continuous improvement, reinforcing being right a lot over time.

Common Miss: "Once I made the decision, I considered it final"
💡
Depth Tip

Spend about 50 seconds total on Situation and Task combined, then devote 70% of your answer time to detailed Actions showing your data analysis, decision-making process, and how you managed trade-offs.

Manager-Assigned Initiation
"My manager suggested I look into this since I had bandwidth"
Ownership is binary - self-initiated or not. Manager-assigned = execution. No excellent execution recovers an assigned story.
Detection: Ask yourself: would I have done this if my manager had said nothing? If no, find a different story.
Fix: "I noticed X while doing Y. Nobody had filed a ticket. I decided to act because..."
No Data or Unsupported Decision
"I felt this was the right approach based on my experience"
Are Right a Lot requires data or evidence backing the decision. Gut feeling alone fails the competency.
Detection: Check whether the story includes concrete data points or analysis supporting the decision.
Fix: "I analyzed the error rates and found a 25% spike that led me to recommend..."
Consensus Following
"We all agreed this was the best solution"
Being right a lot means challenging consensus when data contradicts it. Following consensus shows no independent judgment.
Detection: Look for phrases indicating the candidate deferred to group opinion rather than their own analysis.
Fix: "Despite initial disagreement, I presented data that convinced the team to change direction."
No Quantified Impact
"It improved the system and made things better"
Without measurable impact, the decision’s correctness is unverifiable and weakens the signal.
Detection: Verify whether the candidate provides metrics or business outcomes linked to their decision.
Fix: "This change reduced latency by 40%, improving customer satisfaction scores by 10%."
No Ownership of Decision
"The team decided to implement my suggestion"
Passing ownership to the team dilutes individual accountability and weakens the Are Right a Lot signal.
Detection: Check whether the candidate uses passive voice or defers credit to others.
Fix: "I led the initiative, drove the analysis, and convinced stakeholders to adopt the solution."
🚩 Passive Voice Throughout
"The problem was identified and a fix was implemented"
Candidate was a spectator, not an actor. Passive voice strips agency from every action.
Fix: Use active voice: "I identified the problem and implemented the fix."
🚩 Vague Language
"We improved the system somehow"
Lack of specificity makes it impossible to verify impact or ownership.
Fix: Specify exact improvements and your role: "I improved system throughput by 20% by optimizing queries."
🚩 Overuse of 'We' Without Individual Contribution
"We did it together as a team"
"We did it" hides individual contribution, costing the candidate ownership credit.
Fix: Highlight your specific actions: "I designed the algorithm that enabled the team’s success."
🚩 Hedging or Lack of Conviction
"I think this might have helped"
Weak language undermines confidence in both the decision and the candidate's judgment.
Fix: State conclusions confidently with data support: "The data showed a 30% reduction in errors."
🚩 Skipping Data Details
"I looked at some numbers and decided"
Omitting data analysis details weakens the Are Right a Lot signal.
Fix: Describe the specific data points and analysis steps you performed.
🎯
Direct Triggers
  • Tell me about a time you used data to make a counterintuitive decision.
  • Describe a situation where you challenged the consensus and were right.
  • Give an example of when you made a decision others disagreed with but it was correct.
  • How have you used data to influence a difficult decision?
🔍
Indirect Triggers
  • Describe a time you had to make a decision with incomplete information.
  • Tell me about a time you identified a problem others missed.
  • Give an example of when you had to convince others to change their mind.
  • Describe a situation where you improved a process based on your analysis.
👁
How to Recognize

Keywords: counterintuitive, data-driven, challenged assumptions, disproved consensus, quantified impact, trade-offs, risk management.

⚠️
Do Not Confuse With
Ownership: Ownership is about self-initiating and owning end-to-end results; Are Right a Lot focuses on sound judgment and decision quality.
Bias for Action: Bias for Action emphasizes speed and decisiveness; Are Right a Lot emphasizes correctness and data-backed decisions.
Dive Deep: Dive Deep is about thorough investigation; Are Right a Lot is about making the right call based on that investigation.
How did you validate your data before making the decision?
Probes: Checks rigor of data analysis and confidence in evidence.
❌ Weak

I just trusted the dashboard numbers without further checks.

Blind trust in data without validation risks wrong conclusions; shows lack of rigor.

✅ Strong

I cross-checked dashboard data with raw logs and ran statistical tests to confirm anomalies were real.

"I validated data from multiple sources before deciding."
What risks did you consider before acting on incomplete data?
Probes: Assesses judgment in balancing risk and speed under uncertainty.
❌ Weak

I didn’t think much about risks; I just acted quickly.

Ignoring risks shows poor judgment and can lead to costly mistakes.

✅ Strong

I identified potential false positives and planned rollback steps to mitigate impact if my decision was wrong.

"I balanced risk and speed by planning contingencies."
How did you convince others to accept your counterintuitive decision?
Probes: Evaluates communication skills and influence backed by data.
❌ Weak

I told them my idea and they eventually agreed.

Passive description lacks evidence of persuasion or data-driven influence.

✅ Strong

I presented detailed analysis and simulations showing expected benefits, addressing concerns with data-driven answers.

"I used data to persuade skeptics and gain buy-in."
What did you learn from the outcome, and would you do anything differently?
Probes: Tests intellectual humility and continuous improvement mindset.
❌ Weak

The decision was perfect; no changes needed.

Lack of reflection suggests closed mindset and weakens Are Right a Lot signal.

✅ Strong

Post-launch data showed some edge cases I hadn’t anticipated, so I refined the model accordingly.

"I iterated based on data to improve results."
Amazon
Are Right a Lot

Amazon looks for long-term thinking - fix root cause not just symptom. Leaders must challenge assumptions and use data rigorously to be right consistently.

Signal: Candidate names trade-offs explicitly: "I pushed the sprint item back 2 days because the cost of inaction ($8K/week) exceeded the delay cost."
Example Q: Describe a time you used data to make a counterintuitive decision that impacted multiple teams.
What Elevates

Name the trade-off explicitly: "I delayed a sprint item by 2 days to fix a root cause that would have cost $8K/week if left unaddressed." Amazon credits candidates who articulate business impact and risk management clearly.

Google
Good Judgment

Google values data-driven decisions but also emphasizes collaboration and consensus-building. Being right a lot includes persuading others with evidence.

Signal: Candidate describes how they used data to influence cross-functional stakeholders and iterated based on feedback.
Example Q: Tell me about a time you made a decision that was initially unpopular but proved correct.
What Elevates

Explain how you presented data compellingly to gain buy-in and how you incorporated feedback to refine your approach, showing both judgment and collaboration.

Meta
Be Bold

Meta encourages bold decisions even with incomplete data, valuing speed and learning from failure. Being right a lot includes managing risk while moving fast.

Signal: Candidate explains how they acted decisively with partial data and monitored outcomes to adjust quickly.
Example Q: Describe a time you made a bold decision based on limited data and what happened next.
What Elevates

Highlight your bias for action balanced with risk mitigation plans and how you learned from the results to improve future decisions.

Flipkart
Customer Obsession

Flipkart expects decisions to be grounded in customer impact. Being right a lot means using data to prioritize customer benefit even if it contradicts internal preferences.

Signal: Candidate links data-driven decisions directly to improved customer experience metrics.
Example Q: Give an example of a data-driven decision that improved customer satisfaction despite internal resistance.
What Elevates

Focus on how you used customer data to challenge assumptions and drove changes that measurably enhanced customer outcomes.

SDE 1

At this level, candidates handle tasks or bugs outside their assigned scope, demonstrating individual contributions that have measurable impact within their team. Cross-team influence is not required but initiative beyond assigned work is expected.

Anti-pattern: Story confined to assigned tasks with no initiative; no data or impact metrics; passive language.
SDE 2

Candidates own moderately complex problems involving multiple components. They use data to challenge assumptions, show clear individual ownership, and quantify the impact of their decisions, demonstrating growing judgment skills.

Anti-pattern: Story lacks complexity or cross-team scope; decision follows consensus without challenge; no quantified impact.
Senior SDE

Senior engineers lead cross-team initiatives, make counterintuitive decisions with significant business impact, explicitly balance risk and trade-offs, and influence others through data-driven arguments.

Anti-pattern: Story confined to the candidate's own team codebase. Senior candidates must show cross-team scope; single-team ownership reads as SDE 1 behavior and results in a no-hire at the senior level.
Staff / Principal

Staff and Principal engineers drive organization-wide decisions, anticipate long-term consequences, integrate multiple data sources and perspectives, and mentor others on judgment and decision-making, reflecting strategic leadership.

Anti-pattern: Story lacks strategic scope or long-term thinking; no evidence of mentoring or influencing leadership; narrow technical focus only.
📖
Cross-Team Data Anomaly Detection

Shows candidate’s ability to notice subtle data issues outside their immediate scope, take initiative, and influence multiple teams with evidence-backed decisions.

Webhook delivery (owned by the Platform team) silently dropping 0.3% of payments - no alert, no owner watching, not your sprint, and a quantifiable impact.
Also covers: Ownership · Dive Deep · Bias for Action
📖
Counterintuitive Root Cause Identification

Demonstrates deep analysis overturning common assumptions, leading to a better solution with measurable business impact.

Identified that increased latency was due to a rarely used feature, not network issues as commonly believed.
Also covers: Dive Deep · Learn and Be Curious · Customer Obsession
📖
Risk-Balanced Decision Under Uncertainty

Shows mature judgment balancing incomplete data, risk, and speed, with clear trade-offs and contingency planning.

Decided to roll out a new algorithm with 70% confidence, planned monitoring and rollback to mitigate risk.
Also covers: Bias for Action · Ownership · Invent and Simplify
🚫
Stories Not Recommended
  • Assigned Bug Fix Within Own Team - Fixing a bug assigned to you within your own team is execution, not judgment or ownership. No cross-team impact or counterintuitive decision.
  • Working Late to Meet Deadline - Working late is effort, not being right a lot. The deadline was assigned; effort is execution, not judgment or data-driven decision-making.
🎯
Prep Action
Prepare stories where you self-initiated data analysis that contradicted common beliefs, quantify impact clearly, and explain trade-offs and risk management.
Data-driven counterintuitive decisions with measurable impact
Key Signal
"I noticed" -> "challenged assumptions" -> "used data" -> "quantified impact"
Top Disqualifier
"My manager suggested I look into this since I had bandwidth"
Delivery Red Flag
"We did it"
Prep Action
Prepare self-initiated stories with clear data analysis, counterintuitive decisions, quantified impact, and explicit trade-offs.