Collaboration Lessons from Disaster Documentaries and Other “Unexpected” Sources

This post is for anyone who has ever asked me for book recommendations on collaboration.

Yesterday, Carmen Medina wrote a wonderful blog post on the television documentary series Air Disasters. (It’s known as Mayday in Canada.) She shares a number of insights she gathered from the show on human performance, systems design, and leadership in general. For example:

The Importance of Sleep and Good Rest:

Commercial airlines have strict rules about how many hours flight crews can work before they must rest…. These rules reflect hard lessons learned about how poor rest and lack of sleep can degrade the cognitive performance and judgment of pilots…. The aviation industry learned long ago that “people just have to tough it out” is not a useful strategy.

Hierarchy Can Kill You:

Traditionally the captain and the first officer in commercial aviation were in a command and obey-orders relationship. But captains are not infallible and there are several fatal accidents that could have been avoided if the first officer had been listened to. Oftentimes the captain would have had a hard time “hearing” the other view because the first officer actually never verbalized his concern. The respect for hierarchy was so paralyzing that first officers have deferred to wrongheaded captains even when it led to certain death. These types of accidents became so concerning for the aviation industry that airlines instituted mandatory crew resource management procedures that emphasize the importance of collaboration and teamwork in the cockpit.

Who’s Accountable?

As airline crash investigators know, many airplane accidents involve a chain of unlikely events, any one of which would rarely occur. A supervisor decides to pitch in and help his overworked maintenance team by removing a set of screws. The maintenance team isn’t able to finish the job but don’t know to replace the screws. Nevertheless, the plane makes many safe landings and takeoffs until a pilot decides to make an unusually fast descent. The pilot and all the passengers die.

Who exactly is accountable here? Is it the supervisor who tried to be helpful? Or the airline management that under-resourced its maintenance operations? Or the pilot? In many organizations, holding someone “accountable” is the signature move of “strong leaders”. But what often happens is that some unfortunate individual is held to blame for what was a systemic failure of an organization — often driven by complacency, expediency, and/or greed.

These are all really good lessons, all from watching a television show! Of course, it’s a little disingenuous to say that. Carmen was a long-time leader at the CIA with a lifetime of hard-earned experiences. Most of us would not be able to recognize the deeper lessons that Carmen did, much less articulate them so clearly. This is what the best authors do — pull good insights from all kinds of places, some of them unexpected — and package them in a clear and compelling way. Not surprisingly, Carmen is one of those authors.

Still, when it comes to collaboration at least, I find that many people seem to eschew sources like television documentaries or — more dishearteningly — their own experiences for books written by “experts.” You don’t have to have been a CEO at a Fortune 500 company or a business school professor to have had amazing insights and experiences on collaboration. (Honestly, I don’t think many CEOs or business school professors would even make my list of top collaboration practitioners.)

Do you have a family? Friends? Classmates? A partner of any sort, business or life? Are you in a band? Do you play pickup sports or volunteer in your community? If so, I promise you, you already have a lifetime of experiences on which to draw. It’s only a matter of being intentional about sifting through your experiences for insights and trying to practice what you learn. Once you start doing this, you’ll start to recognize deeper lessons from all sorts of places, some of them unexpected. I use television and movie clips all the time to help groups learn how to recognize and navigate power dynamics. For those of you who follow this blog, I obviously write a lot about basketball, and athlete podcasts have been a particularly rich source of insights for me for a while now. (My favorites are All the Smoke and The Old Man and the Three.) Over the past few years, I’ve been learning a ton about systems design and collaboration from watching birds and experimenting with bird feeders.

Deeply examining your own experiences and drawing from unexpected sources are much more effective for learning about collaboration than reading a book, and they’re far more fun.

Lessons Learned from 30 Days of Blogging

Last month, I decided to blog every day. As I explained earlier:

For whatever reason, I’ve found writing hard to do the past few years, and this year has been the hardest. I’ve also been disinclined to think out loud, even though I’ve had a lot I’ve wanted to say and share, both personally and professionally.

Mid-way through the experiment, I reported:

What it’s been doing is helping unlock whatever has been inside of me. I’ve been precious about sharing what I’ve been thinking, not wanting to say them unless I can say them well and feeling paralyzed as a result. I’ve also found it overwhelming at times to try to blog. I guess things are crazy in the world right now, and it’s not only affecting my mental health, it’s hard for me to make sense of it all.

Blogging as a practice has reminded me not to be too precious. The less I try to say, the less overwhelmed I feel. The more frequently I share, the less I have to worry about saying it all in one piece, which makes it much easier to write. Plus, even though I don’t think I’ve shown it yet, I’m starting to remember what it feels like to write well. I’m rounding into shape again, which always feels good.

The biggest surprise has been that sharing regularly has helped me re-engage with my broader community. I didn’t think anyone really followed this blog anymore, and because I’m rarely on social media, the algorithms seem to have decided I’m not worthy of most people’s feeds. Still, some people are paying attention to what I’m saying, and getting to hear from them has been a treat and is also motivating me to write more.

After having finished the experiment, I’m not sure I have anything different to report, other than to say that I don’t think I had any breakthroughs after 30 days, and I want to keep exercising this muscle. I thought seriously about extending my project through the end of the year, but I opted against it for a few reasons. Even though it wasn’t particularly stressful, it wasn’t stress-free either, and I don’t need the added pressure this month. It also tires out muscles that I’m using for work right now. I can focus on developing these muscles more when work settles down.

In the meantime, I think the exercise is still helping me share more than I was before. This is my third blog post in December. I think a good pace for me is about once a week, especially when those posts are more or less organic.

Maybe the most interesting thing for me was seeing what I chose to blog about. This wasn’t just a writing exercise, it was a sharing exercise. I aggregated all of the tags from those 30 days of blog posts and ran them through WordClouds.com to see if I could detect any patterns.

Not surprisingly, I wrote a lot about COVID-19 and the elections. It was nice to see that I wrote quite a bit about collaboration. This wasn’t my goal, but I admit I was curious to see how often I felt compelled to write about “work stuff” — the original purpose of this blog — especially when I had so many other things on my mind. I loved that I wrote a lot about making — food and art and photography and stories in general.

Finally, I was curious about the people and places I wrote about. Here were people I knew whom I mentioned in various posts (not including my partner and sister, whom I mentioned often and didn’t bother tagging):

I loved seeing this list. My interactions with others play such a huge role in what I think about and how I feel, and I love being able to share this space with the people in my life.

People I mentioned whom I don’t know:

Places I mentioned:

  • Africa
    • Nigeria
  • Alaska
  • California
    • Bay Area
      • Colma
      • Oakland
        • Joaquin Miller Park
        • Mountain View Cemetery
      • San Francisco
        • Fort Point
        • Golden Gate Bridge
    • Los Angeles
      • Forest Lawn
  • Cincinnati
  • Santa Fe
    • Ghost Ranch

On Markets, Government, and American Exceptionalism

On Election Day, Carmen Medina outlined ten beliefs underlying her views on the world and on politics. Read her whole post. It’s short, sharp, and thought-provoking.

Here’s what she wrote about regulations:

1. More often than not government (all) regulations do not entirely achieve their intended effects. Their unintended effects can be positive or negative. This is due to the world’s and society’s infinite complexity. Thus, I am skeptical of most grand efforts to “fix a problem”.

and a few points later on climate change:

4. Climate change is real and it is currently driven by humans. Given that regulatory approaches are often flawed, solutions should be emergent and market and locally-based. (See point 1) Thirty years ago I was debating pollution and energy with a friend in an English pub. He was advocating a large government program. I asserted that the first successful electric car would be created by a private company.

I don’t know enough about public policy to know whether her first point — specifically, “more often than not” — is true, although Carmen, as a long-time civil servant, would know infinitely more about this than me. I’m curious, however, what she means by “market-based solutions” in this light.

All markets are regulated, in the sense that someone gets to define the rules by which a market plays. Those rules impact how those markets work and whom they benefit. We saw this play out on Tuesday. Elections are a kind of market that serves as the cornerstone of our democracies. All elections are also regulated. Someone decides who gets to vote, the mechanisms by which they vote, and how those votes are counted. Subtle differences in those rules can have massive effects on their outcomes. This is true of all markets.

This complexity plays out in her electric car example. I assume she’s talking here about Tesla, whose CEO, Elon Musk, has loudly endorsed market-based solutions to climate change (such as a carbon tax) and opposed government subsidies. However, he also happily accepted a $465 million loan from the federal government in 2010, which enabled him to scale up production of Tesla’s Model S (and which Tesla paid back with interest three years later). I’m also willing to bet that a good portion of the scientific and technological foundations on which Tesla and other electric cars are based was funded by the government. One might argue that these are all examples of market-based interventions rather than regulations. I’m not sure that the distinction is that clean or that it matters at all.

I think the more important point is that there’s no such thing as the perfect structure. Whatever you put into place will have unintended consequences (a point that Carmen makes right from the start). Without alignment around the desired consequences and a fair, equitable system for making adjustments (i.e. regulations), that structure will fail. Therein lies the rub, especially when it comes to elections. Elections are supposed to be that fair, equitable system for making adjustments, but if they start off flawed (the way all intentionally-designed systems in a complex world do), we are now relying on a flawed system to fix a flawed system. Messy, right?

(This is also what galls me about the current capitalism / socialism rhetoric. Most of the time, when I hear someone railing about one or the other, I have no idea what they’re talking about. Is the U.S. capitalist or socialist? It’s both, and it always has been, although the degrees have shifted over the years. The challenge is in finding the right mix, whatever you want to call it in the end, not in replacing one with a more “pure” version of the other and calling it a day.)

Earlier this week, Stephen Bates published a piece in Lawfare on Reinhold Niebuhr, where he wrote:

For Niebuhr, [Charles] Merriam-style complacency is all too common in the United States. Americans like to ascribe their success to moral virtue rather than good luck. Thanksgiving, he once remarked, is a time for “congratulating the Almighty upon his most excellent co-workers, ourselves.” Americans smugly presume that they have the gold-standard democracy against which all others must be measured. The framers, they think, fashioned stable, incorruptible, self-correcting institutions. Whenever part of the system goes haywire, the other parts compensate, and constitutional homeostasis prevails.

Not so, according to Niebuhr. “There are no such natural harmonies and balances …[,]” he wrote in a Hutchins Commission memo. “Whatever harmony exists at a particular moment may be disturbed by the emergence of new factors and vitalities.” In his view, the price of liberty isn’t merely eternal vigilance; it’s also eternal trial and error. New solutions create new problems. Virtues in one situation become vices in another. Measures to suppress abuses of freedom can end up suppressing freedom. Reason advances justice in some circumstances and camouflages injustice in others. The expansion of knowledge sometimes fuels global understanding and other times fuels imperialism. A free society, Niebuhr believed, demands ceaseless recalibration of unity and diversity, freedom and order, mores and mandates, state power and corporate power. The challenge is “a perpetual one,” he told [Henry] Luce, “for which no single solution is ever found but upon which each generation must work afresh.”

In this vein, I enjoyed how Carmen reframed American Exceptionalism:

10. America is the world’s most multicultural nation. That is its only true exceptionalism. We will prove to be either a successful example or a tragic one.

COVID-19 Sensemaking Journal: April 4, 2020

As I suggested might happen, I’ve stopped updating my spreadsheet, and I’ve started relying on two of the great dashboards that have emerged in recent weeks — Wade Fagen-Ulmschneider’s dashboard (which I mentioned last week) for international and state-wide comparisons and this dashboard (hat tip to Yangsze Choo).

The regular attempts at sensemaking, however, continue. Here’s what I’m learning this week. The usual disclaimer applies: I’m just an average citizen with above average (but very, very rusty) math skills trying to make sense of what’s going on. Don’t trust anything I say! I welcome corrections and pushback!

From the beginning, the main thing I’ve been tracking has been daily new cases country-by-country. Here’s Fagen-Ulmschneider’s latest log graph:

This week’s trend is essentially a continuation of last week’s, which is good news for Italy (whose growth rate is slowing) and bad news for the U.S. (whose growth rate seems more or less consistent).

Early on, I started using a log graph, because it showed the growth rate more clearly, especially in the early days of growth, when curves can look deceptively flat and linear. Now that some time has passed, one of the challenges of the log graph is becoming apparent: it dulls your sensitivity to how bad things are as you move up the graph (where the scale increases by orders of magnitude). You could conceivably look at the above graph and say to yourself, “Well, our curve isn’t flattening, but we’re not that much worse than Italy is,” but that would be a mistake, because you have to pay attention to the scale markers. You don’t have this problem with a linear graph:

Yeah, that looks (and is) a lot worse. The other challenge with these graphs is that the daily points create a spikiness that’s not helpful at best and misleading at worst. If you’re checking this daily (which I’m doing), you can see a drop one day and think to yourself, “Yay! We’re flattening!”, only to see the curve rise rapidly over the next two days. That is, in fact, what happened over the last three days with the national numbers, and it’s an even worse problem as you look at regional data. It would probably be better to show averages over the previous week, or even weekly aggregates instead of daily numbers (which might make more sense after a few more weeks).
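As a rough illustration of what I mean (this is just a sketch with made-up numbers, assuming pandas is available; it’s not anything the dashboards actually do), a seven-day rolling average takes most of that spikiness out:

```python
import pandas as pd

# Made-up daily new-case counts, purely for illustration.
daily_new_cases = pd.Series(
    [310, 420, 280, 510, 390, 620, 450, 700, 530, 810, 640, 900, 760, 1020],
    index=pd.date_range("2020-03-20", periods=14),
)

# A 7-day rolling mean smooths the one-day dips that can make a curve
# look like it's flattening when it isn't.
smoothed = daily_new_cases.rolling(window=7).mean()

# Weekly totals are coarser still, but even less noisy.
weekly_totals = daily_new_cases.resample("W").sum()

print(smoothed.round(1))
print(weekly_totals)
```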

In addition to the nice interface, one of the main reasons I started using Fagen-Ulmschneider’s dashboard is that he’s tracking state-by-state data as well. He’s even normalizing the data by population. My original impetus for doing my own tracking was that I couldn’t find anyone else normalizing by population. What I quickly realized was that normalizing by population at a national level doesn’t tell you much, for two reasons. First, I was mainly interested in the slope of the curve, and normalizing by population doesn’t impact that. (Dividing every point by a constant just shifts the curve; it doesn’t change its slope.) Second, outbreaks are regional in nature, and so normalizing by a country’s population (which encompasses many regions) can be misleading. However, I think it starts to become useful if you’re normalizing by a region’s population. Doing this by state, while not as granular as I would like, is better than nothing. Here’s the state-by-state log graph tracking daily new cases normalized by population:

California (my state) was one of the first in the U.S. to confirm a COVID-19 case. It was also the first to institute a state-wide shelter-in-place directive. And, you can see that the curve seems to have flattened over the past five days. If you play with the dashboard itself, you’ll notice that if you hover over any datapoint, you can see growth data. In the past week, California’s growth rate has gone down from 15% daily (the growth rate over the previous 24 days) to 7% daily. Yesterday, there were 30 new confirmed cases of novel coronavirus per million people. (There are 40 million people in California.)
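For what it’s worth, here’s the arithmetic behind a “per million” figure like that one (the raw count below is just what the numbers above imply, not something pulled from the dashboard):

```python
# Per-capita normalization, using the figures quoted above as an example.
ca_population = 40_000_000     # roughly 40 million people in California
new_cases_yesterday = 1_200    # implied raw count, assumed for illustration

cases_per_million = new_cases_yesterday / ca_population * 1_000_000
print(f"{cases_per_million:.0f} new cases per million people")  # -> 30
```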

An aside on growth rates. One of the things that’s hard about all these different graphs is that they use different measures for growth rates. Fagen-Ulmschneider chooses to use daily growth percentage, and he shows a 35% growth curve as his baseline, because that was the initial growth curve for most European countries. (Yikes!) Other folks, including the regional dashboard I started following this past week, show doubling rate — the number of days it takes to double.

Finance folks use a relatively straightforward way of estimating the conversion between doubling rate and growth rate. I have a computer, so there’s no reason to estimate. The formula is ln 2 / ln(1 + r), where r is the daily growth rate (so 35% growth means r = 0.35). (The base of the log doesn’t matter, but I use a natural log, because that’s how the Rule of 72 is derived.) However, what I really wanted was a more intuitive sense of how those two rates are related, so I graphed the function:

You can see that the 35% growth rate baseline is equivalent to a doubling of cases every 2.2ish days. (Yikes!) Over the past 24 days, California’s growth rate was 15%, which means there was a doubling of cases every five days. Over the past week, the growth rate was 7%, which is the equivalent of doubling approximately every 10 days. (Good job, California!)
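If you want to play with the conversion yourself, here’s a minimal sketch of how you could compute it and reproduce a graph like the one above (assuming Python with numpy and matplotlib installed):

```python
import math

import matplotlib.pyplot as plt
import numpy as np

def doubling_time(daily_growth_rate):
    """Days for cases to double at a constant daily growth rate.

    daily_growth_rate is a fraction (0.35 means 35% per day). The base of
    the logarithm cancels out, so natural log is as good as any.
    """
    return math.log(2) / math.log(1 + daily_growth_rate)

for rate in (0.35, 0.15, 0.07):
    print(f"{rate:.0%} daily growth -> doubling every {doubling_time(rate):.1f} days")

# Plot doubling time as a function of the daily growth rate.
rates = np.linspace(0.01, 0.40, 200)
plt.plot(rates * 100, np.log(2) / np.log(1 + rates))
plt.xlabel("daily growth rate (%)")
plt.ylabel("doubling time (days)")
plt.show()
```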

Which brings me to the regional dashboard I’ve been using. I love that this dashboard has county data. I also like the overall interface. It’s very fast to find data, browse nearby data, and configure the graph in relatively clean ways. I don’t like how it normalizes the Y-axis based on each region’s curve, which makes it very hard to get a sense of how different counties compare. You really need to pay attention to the growth rate, which it shows as doubling rate. Unlike the above dashboard, it doesn’t show you how the growth rate over the previous seven days compares to the overall growth curve, so it’s hard to detect flattening. My biggest pet peeve is that it doesn’t say who made the dashboard, which makes it harder to assess whether or not to trust it (although it does attribute its data sources), and it doesn’t let me share feedback or suggestions. (Maybe the latter is by design.)

Here’s the California data for comparison:

Another nice thing about this dashboard is that it shows confirmed cases (orange), daily new cases (green), and daily deaths (black). I keep hearing folks say that the reported case data is useless because of underreporting due to lack of tests. These graphs should help dispel that, because — as you browse through counties — the slopes of the different curves (which indicate growth rates) consistently match. Also, the overall growth rate shown here (doubling every 5.1 days) is consistent with the data in the other dashboard, so that’s nice validation.
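That consistency is easy to check with the inverse of the formula from earlier (again, just a sketch; the 5.1-day figure is the one quoted above):

```python
# Convert a doubling time back into an implied daily growth rate.
doubling_days = 5.1                      # the doubling time this dashboard reports
daily_growth = 2 ** (1 / doubling_days) - 1
print(f"doubling every {doubling_days} days -> {daily_growth:.1%} daily growth")
# -> roughly 14.6%, in line with the ~15% shown in the other dashboard
```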

Here’s what the Bay Area looks like:

You can see what I meant above about it being hard to compare. This graph looks mostly the same as the California graph, but if you look at the scale of the Y-axis and the doubling rate, it’s very different. The Bay Area (which declared shelter-in-place even before the state did) is doing even better, curve-wise. (Good job, Bay Area!)

My next project is to try and get a better sense of what all the death numbers mean. More on that in a future blog post, perhaps. In the meantime, here are some other COVID-19 things I’m paying attention to.

First and foremost, I’m interested in how quickly we create an alternative to shelter-in-place, most likely some variation on test-and-trace. Until we have this in place, lifting shelter-in-place doesn’t make sense, even if we get our curve under control, because the growth rate will just shoot up again. This is nicely explained in Tomas Pueyo’s essay, “Coronavirus: The Hammer and the Dance.” My favorite systems explainer, Nicky Case, has partnered with an epidemiologist to create a dashboard that lets regular folks play with different scenarios. They haven’t released it yet, but this video nicely gives us the gist:

Unfortunately, the media isn’t really talking about what’s happening in this regard (other than the complete clusterfuck that our national response has been), so I have no idea where things stand. Hang tight, I suppose.

On the other hand, there are some things we can learn from past pandemics. This National Geographic article shares these lessons (and visualizations) from the 1918 flu pandemic, a good warning about lifting shelter-in-place prematurely. (Hat tip to Kevin Cheng.) Similarly, Dave Pollard shares some lessons learned from SARS, several of which are very sobering.

In the meantime, the most pressing concern is hospital capacity. Last week, I mentioned the Institute for Health Metrics and Evaluation’s dashboard, which got some national play too and apparently had a role in waking up our national leadership. Carl Bergstrom, an epidemiologist who also happens to study how disinformation spreads, tweeted some useful commentary on how to (and how not to) interpret this data.

Speaking of disinformation, these are interesting times, not just because of the horrific role that disinformation campaigns are playing in our inability to respond, but also because this moment is surfacing, in a more nuanced way, the complicated nature of expertise. FiveThirtyEight published an excellent piece explaining why it’s so hard to build a COVID-19 model. Zeynep Tufekci’s article, “Don’t Believe the COVID-19 Models,” complements the FiveThirtyEight piece nicely. Ed Yong demonstrates how this complexity plays out in his excellent piece on masks. And Philippe Lemoine nicely explains where common sense fits into all of this. (Hat tip to Carmen Medina.)