How Can We Learn If Decisions Are Disconnected from Impact?

Two years ago, there was a ballot initiative in California, Proposition 10, which would have enabled city governments to enact rent control on any building. I had no idea whether or not this was a good idea, and I was going to go to my default in situations like this, which is to vote no. But I decided to check with my friend, Steph, who works in affordable housing. She was thoughtful, knowledgeable, and even-handed, and after my conversation with her, I decided to vote Yes.

This year, there was a similar ballot initiative, Proposition 21, which would enable city governments to enact rent control on buildings first occupied more than 15 years ago. Once again, I had no idea whether this was a good idea or not. Once again, I turned to Steph. Once again, I voted Yes.

Here’s what troubled me about finding myself in almost the exact same situation two years later: I had learned absolutely nothing. I didn’t even remember whether or not Proposition 10 had passed. (It didn’t. Neither did Proposition 21.) Even if it had, I would have had no idea what the impact of that measure was.

I believe strongly in collaboration, democracy, and the wisdom of crowds. It’s why I do what I do. And I understand why folks don’t have faith in a population’s ability to govern itself. Our track record, especially recently, is terrible.

Here’s the thing: In order to act in intelligent ways, people need to be responsible for the impact of their decisions. If we don’t know the impact of our decisions, we are not going to make good decisions.

In his book, The Signal and the Noise, Nate Silver explains that pundits are terrible at predicting financial markets, but meteorologists are exceptional at predicting the weather. Why? For starters, we are more likely to remember a meteorologist’s track record, because the feedback loop is tighter. Last night, the weatherperson said it would be sunny. Today, it rained. You’re going to remember that.

If we could figure out better ways to tie decisions with impact, I think we would find society generally doing the right things. This is obviously incredibly hard, but I don’t think it’s impossible.

A Quick Note on Probability and Strategy

As folks continue to monitor the presidential elections, I think it’s worth revisiting an important truth about probability and strategy.

If someone is given a 90 percent chance of winning something, that means if you replay the same scenario 100 times, that person wins on average 90 times.

Ninety is not 100. When you only get to play that scenario once, and the person favored to win 90 percent of the time loses, that doesn’t necessarily mean that the probabilities were wrong, and it doesn’t necessarily mean that the strategies that resulted in a loss were wrong either. They all could have been wrong, but it requires much deeper analysis to evaluate that.
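The "replay the scenario 100 times" framing is easy to check for yourself. Here's a minimal sketch in Python (the function name and parameters are my own, for illustration) that replays a matchup where one side is a 90 percent favorite:

```python
import random

def simulate_favorite(win_prob=0.9, trials=100_000, seed=42):
    """Replay the same matchup many times and count how often
    the favorite actually wins."""
    rng = random.Random(seed)
    wins = sum(rng.random() < win_prob for _ in range(trials))
    return wins / trials

rate = simulate_favorite()
# Over many replays, the favorite wins about 90 percent of the time,
# which also means the upset happens in roughly 1 of every 10 plays.
```

The point of the simulation is the other 10 percent: a single upset is entirely consistent with the probability having been right all along.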

If there’s a silver lining to the last two presidential elections, it’s that we’re collectively learning this in a very visceral way. The better we understand this, the better we’re able to make strategic decisions and prepare ourselves for different outcomes. It’s not just true in presidential elections, it’s true in business, and it’s true in life.

Professional poker players understand this. (I highly recommend Maria Konnikova’s The Biggest Bluff for more on what we could learn from professional poker players about strategy and decision-making.) Bill Belichick understands this. Nate Silver and the good folks at FiveThirtyEight are constantly trying to explain this to the rest of us. It’s very human not to understand this, but it’s helpful to keep trying.

The Not-So-Mystifying Power of Groupthink and Habits

My friend, Greg, recently sent me this excellent and troubling Nate Silver article, “There Really Was A Liberal Media Bubble,” on the 2016 presidential election. Silver references James Surowiecki’s book, The Wisdom of Crowds, and suggests that political journalists fail the first three of Surowiecki’s four conditions for wise crowds.

  1. Diversity of opinion (fail)
  2. Independence (fail)
  3. Decentralization (fail)
  4. Aggregation (succeed)

Many of Silver’s points hit very close to home, not just because I believe very strongly in the importance of a strong, independent media, but because improving our collective wisdom is my business, and many fields I work with — including my own — suffer from these exact same problems.

On diversity, for example, Silver points out that newsrooms are not diverse — not only along racial, gender, or political lines, but also in how people think:

Although it’s harder to measure, I’d also argue that there’s a lack of diversity when it comes to skill sets and methods of thinking in political journalism. Publications such as Buzzfeed or (the now defunct) Gawker.com get a lot of shade from traditional journalists when they do things that challenge conventional journalistic paradigms. But a lot of traditional journalistic practices are done by rote or out of habit, such as routinely granting anonymity to staffers to discuss campaign strategy even when there isn’t much journalistic merit in it. Meanwhile, speaking from personal experience, I’ve found the reception of “data journalists” by traditional journalists to be unfriendly, although there have been exceptions.

On independence, Silver describes how the way journalism is practiced — particularly in this social media age — ends up acting as a massive echo chamber:

Crowds can be wise when people do a lot of thinking for themselves before coming together to exchange their views. But since at least the days of “The Boys on the Bus,” political journalism has suffered from a pack mentality. Events such as conventions and debates literally gather thousands of journalists together in the same room; attend one of these events, and you can almost smell the conventional wisdom being manufactured in real time. (Consider how a consensus formed that Romney won the first debate in 2012 when it had barely even started, for instance.) Social media — Twitter in particular — can amplify these information cascades, with a single tweet receiving hundreds of thousands of impressions and shaping the way entire issues are framed. As a result, it can be largely arbitrary which storylines gain traction and which ones don’t. What seems like a multiplicity of perspectives might just be one or two, duplicated many times over.

Of the three conditions where political journalism falls short, Silver thinks that independence may be the best starting point for improvement:

In some ways the best hope for a short-term fix might come from an attitudinal adjustment: Journalists should recalibrate themselves to be more skeptical of the consensus of their peers. That’s because a position that seems to have deep backing from the evidence may really just be a reflection from the echo chamber. You should be looking toward how much evidence there is for a particular position as opposed to how many people hold that position: Having 20 independent pieces of evidence that mostly point in the same direction might indeed reflect a powerful consensus, while having 20 like-minded people citing the same warmed-over evidence is much less powerful. Obviously this can be taken too far and in most fields, it’s foolish (and annoying) to constantly doubt the market or consensus view. But in a case like politics where the conventional wisdom can congeal so quickly — and yet has so often been wrong — a certain amount of contrarianism can go a long way.
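Silver’s distinction between 20 independent pieces of evidence and 20 people citing the same evidence can be made concrete with a small simulation. The sketch below is my own illustration (the function names and parameters are invented for the example): each “newsroom” averages 20 noisy readings of some true value, either drawn independently or copied from a single shared source.

```python
import random
import statistics

def consensus_estimate(true_value=1.0, n_sources=20, noise=1.0,
                       independent=True, seed=0):
    """Average 20 'pieces of evidence' about a true value.
    If independent, each source takes its own noisy reading;
    if not, all 20 sources repeat one shared noisy reading."""
    rng = random.Random(seed)
    if independent:
        readings = [true_value + rng.gauss(0, noise)
                    for _ in range(n_sources)]
    else:
        shared = true_value + rng.gauss(0, noise)
        readings = [shared] * n_sources
    return statistics.mean(readings)

def spread(independent, runs=2000):
    """How much the consensus varies across many 'newsrooms'."""
    estimates = [consensus_estimate(independent=independent, seed=s)
                 for s in range(runs)]
    return statistics.stdev(estimates)
```

With independent sources, averaging 20 readings shrinks the error of the consensus considerably; with one warmed-over reading repeated 20 times, the consensus is no more reliable than a single source — it just looks more confident.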

Maybe he’s right. All I know is that “attitudinal adjustments” — shifting mindsets — are really hard. I was reminded of this by an article about Paul DePodesta and the ongoing challenge of getting professional sports teams to take data seriously.

Basis of what DePodesta and Browns are attempting not new. Majority of NFL teams begrudgingly use analytics without fully embracing concept. Besides scouting and drafting, teams employ analytics to weigh trades, allot practice time, call plays (example: evolving mindset regarding fourth downs) and manage clock. What will differentiate DePodesta and Cleveland is extent to which Browns use data science to influence decision-making. DePodesta would like decisions to be informed by 60 percent data, 40 percent scouting. Present-day NFL is more 70 percent scouting and 30 percent data. DePodesta won’t just ponder scouts’ performance but question their very existence. Will likewise flip burden of proof on all football processes, models and systems. Objective data regarding, say, a player’s size and his performance metrics — example: Defensive ends must have arm length of at least 33 inches — will dictate decision-making. Football staff will then have to produce overwhelming subjective argument to overrule or disprove analytics. “It’s usually the other way around,” states member of AFC team’s analytics staff.

On the one hand, it’s incredible that this is still an issue in professional sports, 14 years after Moneyball was first published and several championships were won by analytics-driven franchises (including two “cursed” franchises, the Boston Red Sox and the Chicago Cubs, both led by data nerd Theo Epstein).

On the other hand, it’s a vivid reminder of how hard habits and groupthink are to break, even in a field where the incentives to be smarter than everyone else come in the form of hundreds of millions of dollars. If it’s this hard to shift mindsets in professional sports, I don’t even want to imagine how long it might take in journalism. It’s definitely helping me recalibrate my perspective about the mindsets I’m trying to shift in my own field.