From The Atlantic, “The Like Button Ruined the Internet”:
In the Google Reader days, when RSS ruled the web, online publications — including blogs, which thrived because of it — kept an eye on how many subscribers they had. That was the key metric. They paid less attention to individual posts. In that sense their content was bundled: It was like a magazine, where a collection of articles is literally bound together and it’s the collection that you’re paying for, and that you’re consuming. But, as the journalist Alexis Madrigal pointed out to me, media on the web has come increasingly un-bundled—and we haven’t yet fully appreciated the consequences.
When content is bundled, the burden is taken off of any one piece to make a splash; the idea is for the bundle—in an accretive way—to make the splash. I think this has real consequences. I think creators of content bundles don’t have as much pressure on them to sex up individual stories. They can let stories be somewhat unattractive on their face, knowing that readers will find them anyway because they’re part of the bundle. There is room for narrative messiness, and for variety—for stuff, for instance, that’s not always of the moment.
Madrigal suggested that the newest successful media bundle is the podcast. Perhaps that’s why podcasts have surged in popularity and why you find such a refreshing mixture of breadth and depth in that form: Individual episodes don’t matter; what matters is getting subscribers. You can occasionally whiff, or do something weird, and still be successful.
Imagine if podcasts were Twitterized in the sense that people cut up and reacted to individual segments, say a few minutes long. The content marketplace might shift away from the bundle—shows that you subscribe to—and toward individual fragments. The incentives would evolve toward producing fragments that get Likes. If that model came to dominate, such that the default was no longer to subscribe to any podcast in particular, it seems obvious that long-running shows devoted to niches would starve.
My friend Greg recently sent me this excellent and troubling Nate Silver article, “There Really Was A Liberal Media Bubble,” on the 2016 presidential election. Silver references James Surowiecki’s book The Wisdom of Crowds and suggests that political journalists fail the first three of Surowiecki’s four conditions for wise crowds.
- Diversity of opinion (fail)
- Independence (fail)
- Decentralization (fail)
- Aggregation (succeed)
Many of Silver’s points hit very close to home, not just because I believe very strongly in the importance of a strong, independent media, but because improving our collective wisdom is my business, and many fields I work with — including my own — suffer from these exact same problems.
On diversity, for example, Silver points out that newsrooms lack diversity not only along race, gender, and political lines, but also in how people think:
Although it’s harder to measure, I’d also argue that there’s a lack of diversity when it comes to skill sets and methods of thinking in political journalism. Publications such as Buzzfeed or (the now defunct) Gawker.com get a lot of shade from traditional journalists when they do things that challenge conventional journalistic paradigms. But a lot of traditional journalistic practices are done by rote or out of habit, such as routinely granting anonymity to staffers to discuss campaign strategy even when there isn’t much journalistic merit in it. Meanwhile, speaking from personal experience, I’ve found the reception of “data journalists” by traditional journalists to be unfriendly, although there have been exceptions.
On independence, Silver describes how the way journalism is practiced — particularly in this social media age — ends up acting as a massive echo chamber:
Crowds can be wise when people do a lot of thinking for themselves before coming together to exchange their views. But since at least the days of “The Boys on the Bus,” political journalism has suffered from a pack mentality. Events such as conventions and debates literally gather thousands of journalists together in the same room; attend one of these events, and you can almost smell the conventional wisdom being manufactured in real time. (Consider how a consensus formed that Romney won the first debate in 2012 when it had barely even started, for instance.) Social media — Twitter in particular — can amplify these information cascades, with a single tweet receiving hundreds of thousands of impressions and shaping the way entire issues are framed. As a result, it can be largely arbitrary which storylines gain traction and which ones don’t. What seems like a multiplicity of perspectives might just be one or two, duplicated many times over.
Of the three conditions where political journalism falls short, Silver thinks that independence may be the best starting point for improvement:
In some ways the best hope for a short-term fix might come from an attitudinal adjustment: Journalists should recalibrate themselves to be more skeptical of the consensus of their peers. That’s because a position that seems to have deep backing from the evidence may really just be a reflection from the echo chamber. You should be looking toward how much evidence there is for a particular position as opposed to how many people hold that position: Having 20 independent pieces of evidence that mostly point in the same direction might indeed reflect a powerful consensus, while having 20 like-minded people citing the same warmed-over evidence is much less powerful. Obviously this can be taken too far and in most fields, it’s foolish (and annoying) to constantly doubt the market or consensus view. But in a case like politics where the conventional wisdom can congeal so quickly — and yet has so often been wrong — a certain amount of contrarianism can go a long way.
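Silver’s distinction between independent evidence and a shared echo can be made concrete with a toy simulation (mine, not his): twenty observers who each take their own noisy reading of the truth, versus twenty who all repeat a single reading.

```python
import random

# A toy illustration of Silver's point (my construction, not his):
# each "journalist" observes a true value of 0.0 plus Gaussian noise.
# Independent observers each draw their own noise; an echo chamber
# shares one noise draw, duplicated twenty times.
random.seed(42)

def trial(independent: bool, n: int = 20, sigma: float = 1.0) -> float:
    """Return the error of the crowd's average estimate in one trial."""
    if independent:
        observations = [random.gauss(0.0, sigma) for _ in range(n)]
    else:
        shared = random.gauss(0.0, sigma)      # one take, echoed n times
        observations = [shared] * n
    return abs(sum(observations) / n)          # distance from the truth (0.0)

def mean_error(independent: bool, trials: int = 10_000) -> float:
    return sum(trial(independent) for _ in range(trials)) / trials

# Averaging 20 independent takes shrinks the error by roughly sqrt(20);
# averaging 20 echoes of the same take shrinks it not at all.
print(f"independent crowd: {mean_error(True):.3f}")
print(f"echo chamber:      {mean_error(False):.3f}")
```

Twenty like-minded people citing the same warmed-over evidence are, statistically, one person.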
Maybe he’s right. All I know is that “attitudinal adjustments” — shifting mindsets — are really hard. I was reminded of this by an article about Paul DePodesta and the ongoing challenge of getting professional sports teams to take data seriously.
Basis of what DePodesta and Browns are attempting not new. Majority of NFL teams begrudgingly use analytics without fully embracing concept. Besides scouting and drafting, teams employ analytics to weigh trades, allot practice time, call plays (example: evolving mindset regarding fourth downs) and manage clock. What will differentiate DePodesta and Cleveland is extent to which Browns use data science to influence decision-making. DePodesta would like decisions to be informed by 60 percent data, 40 percent scouting. Present-day NFL is more 70 percent scouting and 30 percent data. DePodesta won’t just ponder scouts’ performance but question their very existence. Will likewise flip burden of proof on all football processes, models and systems. Objective data regarding, say, a player’s size and his performance metrics — example: Defensive ends must have arm length of at least 33 inches — will dictate decision-making. Football staff will then have to produce overwhelming subjective argument to overrule or disprove analytics. “It’s usually the other way around,” states member of AFC team’s analytics staff.
On the one hand, it’s incredible that this is still an issue in professional sports, 14 years after Moneyball was first published and several championships were won by analytics-driven franchises (including two “cursed” franchises, the Boston Red Sox and the Chicago Cubs, both led by data nerd Theo Epstein).
On the other hand, it’s a vivid reminder of how hard habits and groupthink are to break, even in a field where the incentives to be smarter than everyone else come in the form of hundreds of millions of dollars. If it’s this hard to shift mindsets in professional sports, I don’t even want to imagine how long it might take in journalism. It’s definitely helping me recalibrate my perspective about the mindsets I’m trying to shift in my own field.
In honor of International Women’s Day, I’d like to honor two women, and I’d like to share a strange story about misogyny. The first is Ada Lovelace, the remarkable daughter of Lord Byron, who published the very first computer program in 1843. Because I have many friends and colleagues in technology, I often see Ada’s name crop up in various social media channels on this day. I also occasionally get surprised emails from friends, who start reading about her work, see references to some guy named, “Eugene Eric Kim,” and wonder if that person could possibly be me.
Yup, it’s me. In 1999, I co-authored a piece about Ada in Scientific American with the brilliant Betty Alexandra Toole, who is the second person I’d like to honor. I am exceptionally proud of the piece, because Betty and I cleared up several myths about Ada there. I also found a bug in Ada’s program, and am the first person to have identified and published it, 156 years after the fact!
I studied history of science in college, and while I was primarily interested in understanding scientific revolutions, I was irresistibly drawn to the history of computers and computer science. Shortly after graduating, I came across Betty’s extensive and definitive scholarship on Ada and emailed her about it. Betty was warm and delightful, and it turned out she was friends with my colleague, the inimitable Michael Swaine, and his partner, Nancy Groth, and that she lived in the Bay Area. We became friends, and we talked often about her work.
One of the curious things I learned about Betty was how many people seemed to want to discredit Ada’s contributions to computer science. I found it interesting and weird, but I didn’t delve too much into it myself. I more or less trusted Betty’s scholarship, but I still reserved the right to be skeptical myself. If other historians were discounting Ada’s contributions, there was probably a reason for it, right?
Then, in 1998, Betty forced me to confront my skepticism. She wanted to write a popular article about Ada’s work, and she asked me if I would co-author it with her. I said I would, but that I needed to look at the data myself before I could do it with her. Betty not only agreed; it turned out this had been her plan all along. She had copies of Ada’s original notes, and she knew that I could understand the technical parts.
So I spent a few months carefully reading through Ada’s notes, double-checking her math, and also reading the scholarly work that disputed her contributions. I concluded that those claims were baseless.
Ada first met Charles Babbage (the inventor of the first computer) when she was 17. She published the first computer program when she was 27. I was 23 when I first read her notes, so I tried to remember what I was like when I first started learning algebra, then I tried to adjust my expectations based on the state of education at the time. All of that was largely unnecessary. Ada was a competent mathematician and a skillful learner. At minimum, she would have been in the same advanced classes as me in high school, and she likely would have been ahead of me. Moreover, she would have argued with me often and put me in my place. She was clearly proud, independent, and stubborn.
But she wasn’t a mathematical prodigy, either, or a technical genius like Babbage. Her genius was in recognizing the brilliance and potential of Babbage’s invention, not just from a pie-in-the-sky perspective (Babbage himself was skilled in spinning yarns), but from a down-and-dirty, here’s-how-it-actually-works perspective. She was both a visionary and a hacker, a century ahead of her time.
Here’s what Betty and I wrote in our article:
We cannot know for certain the extent to which Babbage helped Ada on the Bernoulli numbers program. She was certainly capable of writing the program herself given the proper formula; this is clear from her depth of understanding regarding the process of programming and from her improvements on Babbage’s programming notation. Additionally, letters between Babbage and Ada at the time seem to indicate that Babbage’s contributions were limited to the mathematical formula and that Ada created the program herself. While she was working on the program, Ada wrote to Babbage, “I have worked incessantly, & most successfully, all day. You will admire the Table & Diagram extremely. They have been made out with extreme care, & all the indices most minutely & scrupulously attended to.”
The importance of Ada’s choosing to write this program cannot be overstated. Babbage had written several small programs for the Analytical Engine in his notebook in 1836 and 1837, but none of them approached the complexity of the Bernoulli numbers program. Because of her earlier tutelage, Ada was at least familiar with the numbers’ properties. It is possible that Ada recognized that a Bernoulli numbers program would nicely demonstrate some of the Analytical Engine’s key features, such as conditional branching. Also, because Menabrea had alluded to the Bernoulli numbers in his article, Ada’s program tied in nicely with her translation of Menabrea.
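For readers curious about what Ada’s program actually computed: here is a minimal modern sketch of a Bernoulli numbers calculation, using the standard recurrence and today’s B₁ = −1/2 convention. It is a reference point for the output, not a reconstruction of Ada’s table, notation, or method.

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Bernoulli numbers B_0..B_n, computed from the recurrence
    sum_{j=0}^{m} C(m+1, j) * B_j = 0 for m >= 1, with B_0 = 1."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        acc = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(-acc / (m + 1))  # solve the recurrence for B_m
    return B

print(bernoulli(8))  # B_2 = 1/6, B_4 = -1/30; odd terms past B_1 are 0
```

Each number depends on all the earlier ones, which is part of why the program was so much more intricate than Babbage’s earlier examples: it demands loops and conditional branching rather than a fixed straight-line sequence of operations.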
What strikes me, almost 20 years after I wrote that article with Betty, is how often people continue to question Ada’s contributions. Why would they do so? The scholarship is actually not extensive. Only a few historians have actually looked at the source material, and the same ones (including Betty and me) are cited over and over again. There is no smoking gun suggesting she is not responsible for her own work, and the evidence in her favor is considerable. Why is there any doubt? More importantly, why is the doubt so often propagated?
I have two hypotheses. The first is that people are lazy. Most people never bother looking at the historical pieces, such as the one I co-wrote with Betty, much less the source material.
Second, most people are biased (consciously or not) against women. If Ada had been a man, and if the historical record were exactly the same, I have no doubt that far fewer people would be questioning Ada’s contributions. Remember, I fell into this category too. When Betty first told me about the controversy, even though I trusted her, I assumed the source data would be far less conclusive than it was. Truthfully, if I weren’t already friends with Betty, I probably would have just assumed that the skeptics were correct.
Betty, of course, understood all of this far more viscerally than I did. We talked about this extensively back then, and while I thought I understood, my understanding is far deeper and more nuanced today. I know that my mere presence on the byline of the article we co-wrote (and that Betty deserves far more credit for) automatically boosted the credibility of our work, and that this had nothing to do with my reputation in the field (which was and deservedly continues to be non-existent).
It’s sobering, but I think there’s something heartening about these current times, despite all of the very real challenges we continue to face. People are far more conscious of unconscious bias — whether it’s about gender, race, sexuality, height, weight, age, fashion, whatever — than ever before. If we can acknowledge it, we can also do something about it.
So here’s to Ada Lovelace, badass and worthy hero to women and men everywhere! And here’s to my friend, Betty Alexandra Toole! Happy International Women’s Day!
My dad is a physicist who idolized Richard Feynman, which meant that, growing up, I idolized Feynman as well. One thing that made a huge impression on me was how simply and clearly he explained things. Feynman was a genius. If he could explain physics clearly to anyone, then no one had any excuse to put on airs.
The Long Now Foundation recently republished one of my favorite essays about Feynman, Danny Hillis’s “Richard Feynman and the Connection Machine.” (Hat tip to William Barnhill.) It not only shares many great anecdotes about Feynman; it is also an oral history of the early days at Hillis’s groundbreaking parallel computing company, Thinking Machines.
The whole essay is fantastic, but I like this excerpt about Feynman’s explaining prowess in particular:
In the meantime, we were having a lot of trouble explaining to people what we were doing with cellular automata. Eyes tended to glaze over when we started talking about state transition diagrams and finite state machines. Finally Feynman told us to explain it like this,
“We have noticed in nature that the behavior of a fluid depends very little on the nature of the individual particles in that fluid. For example, the flow of sand is very similar to the flow of water or the flow of a pile of ball bearings. We have therefore taken advantage of this fact to invent a type of imaginary particle that is especially simple for us to simulate. This particle is a perfect ball bearing that can move at a single speed in one of six directions. The flow of these particles on a large enough scale is very similar to the flow of natural fluids.”
This was a typical Richard Feynman explanation. On the one hand, it infuriated the experts who had worked on the problem because it neglected to even mention all of the clever problems that they had solved. On the other hand, it delighted the listeners since they could walk away from it with a real understanding of the phenomenon and how it was connected to physical reality.
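The model Feynman is describing became known as a lattice gas. Here is a minimal sketch in its spirit — not Thinking Machines’ actual code — with single-speed particles moving in one of six directions on a hexagonal lattice. (Real FHP-style rules also randomize the rotation direction and handle three-body collisions; this toy version uses one deterministic two-body rule.)

```python
# Six unit-speed directions on a hexagonal lattice, in axial coordinates.
OFFSETS = [(1, 0), (0, 1), (-1, 1), (-1, 0), (0, -1), (1, -1)]

def step(cells):
    """One update: head-on collisions, then streaming.
    `cells` maps a site (q, r) to the set of occupied direction indices."""
    # Collision: exactly two particles meeting head-on rotate to a new axis.
    collided = {}
    for site, dirs in cells.items():
        new = set(dirs)
        for d in range(3):
            if len(dirs) == 2 and {d, d + 3} <= dirs:
                new = {(d + 1) % 6, (d + 4) % 6}
        collided[site] = new
    # Streaming: every particle moves one site along its direction.
    streamed = {}
    for (q, r), dirs in collided.items():
        for d in dirs:
            dq, dr = OFFSETS[d]
            streamed.setdefault((q + dq, r + dr), set()).add(d)
    return streamed

# Two particles approach head-on along one axis, meet, and scatter onto
# another axis; the particle count is conserved throughout.
gas = {(0, 0): {0}, (2, 0): {3}}
for _ in range(2):
    gas = step(gas)
print(gas)
```

Run enough of these perfect ball bearings on a large enough lattice and, as Feynman said, their flow looks very much like the flow of a natural fluid — which is exactly why the experts’ clever implementation problems never needed to appear in the explanation.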
We tried to take advantage of Richard’s talent for clarity by getting him to critique the technical presentations that we made in our product introductions. Before the commercial announcement of the Connection Machine CM-1 and all of our future products, Richard would give a sentence-by-sentence critique of the planned presentation. “Don’t say ‘reflected acoustic wave.’ Say [echo].” Or, “Forget all that ‘local minima’ stuff. Just say there’s a bubble caught in the crystal and you have to shake it out.” Nothing made him angrier than making something simple sound complicated.
But what Richard hated, or at least pretended to hate, was being asked to give advice. So why were people always asking him for it? Because even when Richard didn’t understand, he always seemed to understand better than the rest of us. And whatever he understood, he could make others understand as well. Richard made people feel like a child does, when a grown-up first treats him as an adult. He was never afraid of telling the truth, and however foolish your question was, he never made you feel like a fool.