Harvard Business Review article: reducing behavioural problems in decision-making
Kahneman, Lovallo and Sibony
The underlying premise is that it is very difficult for individuals to recognise their own subconscious biases – simply put, we cannot see our own blind spots – but we are far better at identifying those of other people or of groups.
They put forward a check-list of questions to ask when a group presents a decision recommendation. We describe that list below, but first some commentary:
What is good about the article?
- Anything that helps us reduce the negative impact of behaviour we are unaware of is welcome. As the article notes, advice up to now has amounted to “forewarned is forearmed”, which has been shown to do little to change our actual behaviour.
- Using an external person to question a group is also a great idea (albeit not entirely original – Genesis have been suggesting this in one form or another for years), as it is definitely easier to see someone else’s blind spots than our own.
- Having a check-list is a good idea: it imposes the discipline of systematically asking and challenging.
What would we change?
- We believe that the challenging of behaviours is better built into the entire decision-making process – or at least applied at a number of check-points along the way. Waiting until the recommendation phase is fraught with obvious problems: by then the team has already committed to its answer.
- The check-list itself should be tailored to best suit the requirements of the situation and the team involved. The generic list put forward is fine, but the emphases may well be in the wrong areas.
- Agreeing on who should play the role of “decision-auditor” is critical. A combination of appropriate experience (in the role), expertise in the subject matter (both behaviours and the decision context), independence and authority are all important. Just as we believe the person who ultimately takes the decision should not be the person who drives the decision process (a role better suited to a “decision coach”), we believe the decision leader should also not play this behaviour-challenge role. After all, they have their own set of behaviours, perspectives and biases to consider.
The check-list
- Is there any reason to suspect motivated errors – errors driven by the self-interest of the recommending team?
- Have the people making the recommendation fallen in love with it?
- Were there dissenting opinions within the recommending team?
- Could the diagnosis of the situation be overly influenced by salient analogies?
- Have credible alternatives been considered?
- If you had to make this decision again in a year, what information would you want, and can you get more of it now?
- Do you know where the numbers came from?
- Can you see a halo effect?
- Are the people making the recommendation overly attached to past decisions?
- Is the base case overly optimistic?
- Is the worst case bad enough?
- Is the recommending team overly cautious?