They say the road to hell is paved with good intentions. Our goals may be virtuous, but shortsighted solutions can end up making things far worse. Cobra Effects perfectly illustrate what happens when good intentions come in the guise of perverse incentives. Three fascinating anecdotes from history show the phenomenon in action and teach us valuable lessons about how to design reward systems that avoid unintended negative consequences.
What Are Cobra Effects?
We can think of Cobra Effects as a special case of the Law of Unintended Consequences. The concept goes back to research pioneered by American sociologist Robert K. Merton, who studied how our attempts to make changes within a complex social system can have effects we neither anticipated nor intended.
If we’re lucky, the results may be unexpected yet positive. If we’re a bit less fortunate, our intervention has drawbacks nobody foresaw. In the worst-case scenario, however, a well-meaning policy leads to perverse results: consequences that run counter to what decision-makers were trying to achieve.
Cobra Effects fall into this last category. Ironically, it is the very rewards offered to solve a problem that fuel the failure of these kinds of decisions. The term itself was coined by German economist Horst Siebert in his 2001 book about economic policy-making gone astray. Let’s take a look at three stories about unintended negative consequences triggered by perverse incentives.
Cases of Cobra Effects
Stories about Cobra Effects are numerous. Here are three of the most striking examples, each with a brief reflection on the factors that played into the fateful consequences.
The Original Cobra Effect
The anecdote that gave the cobra effect its name takes us back to India during British rule and was famously told by Horst Siebert. It goes something like this.
When the population of venomous cobras rose to worrying levels in Delhi, the authorities offered a reward for dead cobras. People tracked the snakes down, killed them and turned them in. It worked. Until it didn’t.
Some inventive locals began to breed cobras so they could make a profit by killing them and turning them in. Since that was not in the spirit of the incentive and didn’t solve the problem at hand, the British government ended the program. It worked. Only it didn’t.
The cobras had suddenly become useless to the breeders. So they set them free, once again causing a cobra plague in Delhi. It’s even said that it was worse than before the government intervention.
What may seem obvious in hindsight wasn’t at the time. From the government’s perspective, the reward proposed to solve a serious societal problem inadvertently became a reward for making it more severe. From the cobra breeders’ perspective, the incentive looked suspiciously like a business opportunity, even though their actions had a detrimental effect on the city they lived in.
The population’s reaction speaks to one of the causes Merton identified for unintended consequences: people choosing short-term gains while neglecting the long-term consequences. The spirit of the incentive, making the city a safer place to live, must’ve been clear. Yet people’s more immediate interests, such as providing for their families, seem to have overridden their concern for fewer snakes in the streets of Delhi.
There were certainly other factors at play. But it goes to show how potent a driver of human behaviour incentives are, even when that behaviour seems unpredictable. Legendary investor Charlie Munger once said: “Show me the incentives, and I will show you the outcome.” The experienced businessman knew how rewards drive people’s decisions. The only question is: in what direction?
Hanoi Rat Bounty
With our first anecdote in mind, see if you can predict the outcome of the next story. It took place in Vietnam during French colonial rule. The incident was recorded by historian Michael Vann and goes something like this:
At the end of the 19th century, Hanoi was plagued by rats. Driven by the desire to modernise the city, the Governor-General first hired professional rat hunters. When they proved unable to make a dent in the vast rat population, the government instituted a bounty program: citizens were paid a small amount of money for each rat they killed.
Given the health risks, the colonial government didn’t want piles of rat corpses to be handed over to officials. So they opted to pay locals for every rat tail they handed over instead.
It worked. Until the rat hunters realised they didn’t have to kill the rodents. All they needed to do was catch a rat, cut off its tail, release the rodent and cash in. This way, the rats could even breed again, producing more valuable rat tails. Needless to say, the rat bounty failed to achieve the desired outcome, worsening the rat plague rather than solving it.
The French government’s initiative failed for reasons similar to the cobra effect. But their approach also shines a light on an important aspect of incentives: how success is measured. This brings us to Goodhart’s Law, an adage named after British economist Charles Goodhart and commonly summarised as: “When a measure becomes a target, it ceases to be a good measure.”
In our example, the goal was to reduce the rat population by paying citizens to kill rats. Rat tails served as the measure of rats killed and thus as the basis for the reward. As a result, the measure became the people’s goal: rat tails turned into an object of value overnight, and citizens optimised for rewards rather than for fewer rats roaming the streets of Hanoi.
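To make Goodhart’s Law concrete, here’s a minimal toy model of the bounty in Python. Everything in it is an assumption made up for illustration: the starting population, the breeding and catch rates, the time horizon. It only demonstrates the mechanism: once hunters release de-tailed rats, the rewarded metric keeps climbing while the actual goal moves in the opposite direction.

```python
# Toy model of the Hanoi rat bounty. All figures are invented;
# the point is the divergence between the metric and the goal.

GROWTH_RATE = 0.10  # assumed breeding rate per period
CATCH_RATE = 0.30   # assumed share of rats caught per period

def simulate(periods: int, release_after_detailing: bool) -> tuple[int, int]:
    """Return (tails paid for, final rat population)."""
    rats, tails = 1_000, 0
    for _ in range(periods):
        caught = int(rats * CATCH_RATE)
        tails += caught                       # the metric the bounty rewards
        if not release_after_detailing:
            rats -= caught                    # rats actually killed
        rats = int(rats * (1 + GROWTH_RATE))  # remaining rats keep breeding
    return tails, rats

print(simulate(10, release_after_detailing=False))  # honest hunters: rats decline
print(simulate(10, release_after_detailing=True))   # gamed bounty: tails AND rats rise
```

The second run pays out more tails than the first, yet leaves the city with more rats than it started with. The metric improves; the goal deteriorates.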
In hindsight, the consequences seem obvious again. As a matter of fact, this anecdote relates to a second cause studied by Merton: policy-makers committing analytic errors when drafting an intervention. Their analysis of the likely consequences may have been too process-focused rather than outcome-focused. Perhaps similar initiatives had worked in the past, but they failed this time. People adapt, after all.
Soviet Nail Factory
Did you predict that outcome? Let’s try it one more time with our last anecdote as related by David R. Henderson in The Joy of Freedom: An Economist’s Odyssey. I paraphrase:
In the Soviet Union of the early 20th century, nails were in high demand. The Soviet Union didn’t have a market economy, though. All production was centrally planned. In order to increase production, the government tried to incentivise nail factory workers.
First, the government rewarded workers by quantity. The more nails they produced, the more they earned. Unfortunately, this led workers to use the limited steel resources to make as many tiny nails as possible.
So the government shifted their incentive structure, now measuring the output by weight and rewarding workers accordingly. Unfortunately, this also proved counterproductive as the factory started producing fewer but insanely large and heavy nails.
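The nail problem can be sketched the same way. Here’s a stylised Python model, with made-up steel and labour budgets and two hypothetical nail types, showing how a rational worker lands on tiny nails when paid per piece and on giant ones when paid per kilogram.

```python
# Stylised nail factory: a worker maximises whatever metric pays.
# Budgets and nail types are invented for illustration.

STEEL_KG = 100.0      # assumed steel available per month
LABOUR_NAILS = 9_000  # assumed nails a crew can form per month

NAIL_TYPES = {
    "small nail": 0.001,   # kg of steel per nail
    "railroad spike": 0.5,
}

def best_plan(metric: str) -> tuple[str, float]:
    """Pick the nail type that maximises the rewarded metric."""
    scores = {}
    for name, kg_per_nail in NAIL_TYPES.items():
        nails = min(LABOUR_NAILS, STEEL_KG / kg_per_nail)  # binding constraint
        scores[name] = nails if metric == "count" else nails * kg_per_nail
    winner = max(scores, key=scores.get)
    return winner, scores[winner]

print(best_plan("count"))   # paid per nail -> ('small nail', 9000)
print(best_plan("weight"))  # paid per kilo -> ('railroad spike', 100.0)
```

Neither metric captures what the planners actually wanted, which was nails people could use. That gap between the measurable and the desirable is the whole problem in miniature.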
At first glance, the Soviet approach seems like an improvement over our previous examples. The incentive structure is much more sophisticated and targeted. Though, to be fair, this time we’re dealing with a small group of people as opposed to a large society. Still, the factory workers adapted their behaviour to increase their reward rather than act in the spirit of the incentive structure.
Henderson attributes this outcome to central planning and the absence of a free market. We could also see it as the result of a third common cause of unintended consequences: sometimes, we’re simply too ignorant to understand or predict complex social systems, partly because they’re dynamic entities populated by thinking humans who adapt and react to whatever measures we implement.
What all three stories have in common is a lack of skin in the game, a phrase popularised by Nassim Nicholas Taleb. Much like banks privatising their gains and socialising their losses, the Soviet workers incurred no measurable risk by wilfully misinterpreting the incentive. As Taleb implies, people tend to game the system if they don’t have to suffer the consequences of their actions.
How to Avoid Cobra Effects?
That leaves us with the question of how to avoid Cobra Effects and prevent people from gaming the system. The problem is not that incentives don’t work. It’s that they’re often not calibrated carefully enough to achieve the desired effect. So we might summarise our task as follows.
Set attainable goals for incentives and devise measures of their achievement that don’t become targets themselves. While you’re at it, increase your analytical rigour, factor in your own ignorance and get people to see and act on the long-term benefits of your little intervention. Let me elaborate.
Goals & Measures
Think your incentive structure through to the very end before you implement it. What are all the possible outcomes of the incentive? What new “market” might the reward unwittingly create? What previously worthless commodities, such as rat tails, are we putting a price tag on? Testing the incentive structure on a small scale to calibrate the measures won’t hurt either.
Perhaps rewarding Soviet workers by quantity while specifying the type of nails they had to produce would’ve helped. Maybe obfuscating the way rewards are calculated would’ve prevented the Hanoi rat bounty from failing. Of course, this creates the risk of a needlessly complicated and bureaucratised reward system nobody comprehends. But it’s more sophisticated than assuming people share our goals.
Analytic Humility
Speaking of assumptions: improving the rigour of our analysis, of the problem as well as its solutions, is another way to avoid unintended negative consequences. Challenging our key assumptions is one of the five habits of a master thinker I wrote about in a newsletter. This practice is all about considering alternative scenarios, even if they seem unlikely.
We can’t predict people’s reactions with certainty. But we can try to anticipate certain patterns based on past experience. A Premortem Analysis can be used to explore how an incentive might fail before the fateful decision is made. The structured analytic technique could even utilise the above stories to inspire participants to find and close loopholes and make success a bit more likely.
Culture
None of the above measures, however, seems to beat the importance of culture. Imagine everyone in a group shared a long-term goal and carried a fair share of the risk. Incentives would be interpreted in the most charitable way out of sheer self-interest. To channel Taleb once again, forcing players to have skin in the game leads the system to self-correct.
The alternative is a situation in which the individual’s interests are disconnected from the group’s, and incentive structures have to account for a myriad of loopholes. If you find yourself in such a cynical position, what else is there to do but outgame your own reward system? As Mark Twain put it: “The best way to increase wolves in America, rabbits in Australia, and snakes in India, is to pay a bounty on their scalps. Then every patriot goes to raising them.”
Closing Thoughts
Incentives work. Not necessarily the way we want them to. But they work, which is why the road to hell is sometimes paved with bad incentives. In hindsight, it’s easy to dismiss rewards for dead cobras, bounties on rat tails, and bonuses for clownishly large nails as obviously ill-fated solutions. I’m sure the criticism is well-intended. But you can’t learn how to improve a complex social system without taking some risks.