David Perkins on Organizational Intelligence

David Perkins, a professor at the Harvard Graduate School of Education, just published a new book on organizational intelligence: King Arthur’s Round Table: How Collaborative Conversations Create Smart Organizations (John Wiley & Sons, 2003). See Harvard Gazette’s review of the book and of Perkins’s work.    (37)

E-mail as a Synchronous Collaboration Tool

I just got back from dinner with a friend in Berkeley. (Wow! Details about my life! Not just more boring thoughts about collaboration! Whoops, spoke too soon. Here it comes.) I spent most of the car ride back thinking about Leaping the Abyss and the inherent advantages that synchronous collaboration has over asynchronous collaboration.    (2T)

I then started thinking about my own experience with asynchronous collaboration, and also my observations of various open source communities. I realized something important: E-mail is technically an asynchronous collaborative tool (different time, different place), but it can also be used synchronously (same time, even same place). Some of my best experiences working with people over e-mail have occurred when we were all online at the same time, responding almost in real-time to messages. One of us would receive an e-mail, do some work in response, and send out another e-mail. Momentum would build, and a chain reaction would ensue.    (2U)

Many open source communities work in similar fashion. In fact, it’s common among open source developers to follow both e-mail and IRC as they code.    (2V)

Earlier today, I polled the Collaboration Collaboratory about e-mail usage. Dorai Thodla said something interesting in response:    (2W)

Mostly I read the posts in the evenings or over weekends. Sometimes I respond but most of the time I don’t since I look at these responses a bit late. [emphasis added]    (2X)

If e-mail is asynchronous, then how can you look at responses “late”? And yet, you can. There’s a feeling of active inclusion if you’re participating in a conversation as messages are coming in. Following a fairly complete thread after the fact can feel more like reading than collaborating.    (2Y)

There are several potential metrics we can use to study this phenomenon:    (2Z)

  • Average response time in a thread.    (30)
  • Each participant’s average time-of-response.    (31)
  • Average time span of threads.    (32)
  • Correlation between depth of thread and average response time?    (33)
  • Correlation between number of participants in a thread and average response time?    (34)
  • Other possible correlations?    (35)

Using e-mail synchronously may be one effective way to use it, but it may not be the most effective way. One downside is that it excludes people who are unable to participate synchronously, such as people living on the opposite side of the world. Another possible downside is that it encourages superficial thinking at the expense of deeper, more rational discussion. A third possible downside is that it facilitates emotional conflicts (i.e. flames) while impeding resolution of these conflicts.    (36)

Stable URLs in Blosxom

One of the problems with blosxom is that it does not implement truly stable URLs. (Of course, one could argue that no URLs on the Web are truly stable, but that’s a topic for another day.)    (2D)

Blosxom uses the filesystem for data storage and categories. This is one of its distinguishing features. It makes it easy to set up and install, it makes it easy to edit entries (use the editor of your choice, or build your own Web entry system, like wikieditish), and you can easily preprocess entries either statically or dynamically. My blosxom_purple plugin takes advantage of this latter fact, as do many other plugins.    (2E)

Its dependence on the filesystem also has its drawbacks. It’s hard to keep the date of each entry stable, because blosxom uses the timestamp of the file for the date. If you modify the file at all, the date changes. (Plugins like entries_index fix this problem.) Multiple categories are possible via symbolic links, but this is hackish at best. And worst of all, it prevents you from having truly stable URLs.    (2F)

The blog term for such an animal is Perma Link, and the question of the moment is, what’s the proper Perma Link for blosxom? The default flavour suggests that the link should be the date and an anchor link to the entry itself (in case of multiple entries per date). This only works if you never modify your blog entries; if you do, the date will change. And if you rename the file, the anchor name will change.    (2G)

I use the category followed by the filename as the Perma Link, because I want a single entry per Perma Link page. The problem here is that if I recategorize the page (by moving it into a different directory), the Perma Link once again breaks.    (2H)

These issues will not be a big deal for most folks, but they are a big deal to me. They mean that I have to be extremely careful about my categorization scheme, and that I have limited flexibility in changing categories.    (2I)

I want Permalinks that are less fragile. The solution is to separate blosxom’s date and category system from the filesystem itself. This has already been done for dates, and could easily be done for categories as well. The problem is that you then move away from blosxom’s filesystem-centric design — the very feature that makes it unique and desirable.    (2J)
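The idea is simple to sketch: keep a small index that maps a permanent entry ID to the entry’s current path, and resolve Perma Links through the index rather than through the filesystem. (This is a hypothetical illustration in Python, not an actual blosxom plugin — a real one would be written in Perl, along the lines of entries_index.)

```python
import json
from pathlib import Path

INDEX_FILE = Path("entry_index.json")  # hypothetical location

def load_index():
    """Load the entry-ID -> relative-path mapping, if it exists."""
    if INDEX_FILE.exists():
        return json.loads(INDEX_FILE.read_text())
    return {}

def save_index(index):
    """Persist the mapping so IDs survive restarts and file moves."""
    INDEX_FILE.write_text(json.dumps(index, indent=2))

def register(index, entry_id, path):
    """Record (or update) where an entry currently lives on disk."""
    index[entry_id] = path

def resolve(index, entry_id):
    """A Perma Link like /id/my-entry resolves through the index,
    so moving the file to a different category directory doesn't
    break the link."""
    return index.get(entry_id)
```

Recategorizing an entry then means updating one index record; the public URL, keyed on the stable ID, never changes.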


One of my pet peeves/interests is e-mail: patterns of e-mail usage, ways to improve the tool, how e-mail fits into the collaborative tools world. We’re using SmartList at Blue Oxen Associates. It’s very hackable, but it’s also very user-unfriendly. For archiving e-mail, we’re using MHonArc and mharc. I’d like to eventually replace all three tools with something better.    (27)

Some series of random events got me thinking about syndicating our archives as RSS, a topic that’s come up before. I did a quick Google search and discovered Kellan Elliott-McCrea’s recent posting on this topic.    (28)

Reviewing his blog, LaughingMeme, was an absolute joy. There’s lots of good stuff there on tools, and he’s hacked his site in interesting ways as well. For example, he’s implemented a “Similar Entries” feature on his blog using a Latent Semantic Indexing Perl module. Definitely a blog to watch.    (29)

Some mailing list archiving tools mentioned on Laughing Meme:    (2A)


I had many reasons for reading Michael Lewis’s latest book, Moneyball: The Art of Winning an Unfair Game. The book is about baseball, which I love, and more specifically, about the Oakland A’s recent run of success. I’ve been living in the Bay Area for about seven years now, and have adopted the A’s as my American League team. (Becoming a Giants fan was not an option, as I remain loyal to my hometown Dodgers.) The book also talks a lot about Paul De Podesta, a fellow alumnus who graduated one year before me.    (1K)

After finishing the book, I discovered another reason for reading Moneyball: It offers insight that’s relevant to my interest in understanding collaboration and a compelling case study of one particular community.    (1L)

Metrics    (1M)

Moneyball is about how economics have changed the game of baseball. Up until the late 1990s, evaluating ballplayers had been an exercise in hand-waving, gut instincts, and misleading measurements of things like speed, strength, and even the structure of a player’s face. These practices had long been institutionalized, and there was little incentive to change.    (1N)

Market forces created that incentive. First, player salaries skyrocketed. It was more acceptable to risk $10,000 on a prospect than it was to risk $10 million. Second, market imbalances created a league of haves and have-nots, where the richest teams could spend three times as much on salaries as the poorest. For small market teams — like the A’s — to compete, they had no choice but to identify undervalued (i.e. cheap), overachieving players.    (1O)

This new need led baseball executives like Oakland’s Billy Beane to the work of Bill James, a longtime baseball writer with a cult following. James castigated the professional baseball community for its fundamental lack of understanding of the game, and set about to reform the way it was measured. One problem was an overreliance on subjective observation. James wrote:    (1P)

Think about it. One absolutely cannot tell, by watching, the difference between a .300 hitter and a .275 hitter. The difference is one hit every two weeks. It might be that a reporter, seeing every game that the team plays, could sense that difference over the course of the year if no records were kept, but I doubt it. Certainly, the average fan, seeing perhaps a tenth of the team’s games, could never gauge two performances that accurately — in fact if you see both 15 games a year, there is a 40% chance that the .275 hitter will have more hits than the .300 hitter in the games that you see. The difference between a good hitter and an average hitter is simply not visible — it is a matter of record.    (1Q)
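James’s 40 percent figure is easy to check with a quick simulation. (This is a sketch; the assumption of roughly four at-bats per game is mine, and the exact probability depends on it.)

```python
import random

def prob_worse_hitter_looks_better(avg_a=0.300, avg_b=0.275,
                                   games=15, at_bats_per_game=4,
                                   trials=100_000, seed=1):
    """Estimate, by Monte Carlo simulation, the chance that a .275
    hitter out-hits a .300 hitter over a 15-game sample."""
    rng = random.Random(seed)
    at_bats = games * at_bats_per_game
    wins = 0
    for _ in range(trials):
        # Each at-bat is an independent hit/no-hit trial
        hits_a = sum(rng.random() < avg_a for _ in range(at_bats))
        hits_b = sum(rng.random() < avg_b for _ in range(at_bats))
        if hits_b > hits_a:
            wins += 1
    return wins / trials
```

Under these assumptions the estimate lands in the same rough neighborhood as James’s figure — the difference between the two hitters really is invisible in a 15-game sample.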

James was not simply a lover of numbers. Baseball was already replete with those. He was an advocate for numbers with meaning. The game had far fewer of these. Errors, for example, are a subjective measure, determined by a statistician watching the game, of whether or not a player should have made a defensive play. They do not account for a player’s ability to get in position to make a play in the first place. As a result, errors are a misleading measure of defensive ability.    (1R)

Lewis makes an important point about James’s work:    (1S)

James’s first proper essay was the preview to an astonishing literary career. There was but one question he left unasked, and it vibrated between his lines: if gross miscalculations of a person’s value could occur on a baseball field, before a live audience of thirty thousand, and a television audience of millions more, what did that say about the measurement of performance in other lines of work? If professional baseball players could be over- or under-valued, who couldn’t? Bad as they may have been, the statistics used to evaluate baseball players were probably far more accurate than anything used to measure the value of people who didn’t play baseball for a living.    (1T)

Extending this line of thinking further, how do we measure the effectiveness of collaboration? If we can’t measure this accurately, then how do we know if we’re getting better or worse at it? Baseball has the advantage of having well-defined rules and objectives. The same does not hold with most other areas, including collaboration. Is it even possible to measure anything in these areas in a meaningful way?    (1U)

The Sabermetrics Community    (1V)

James called his search for objective measures in baseball “sabermetrics,” named after the Society for American Baseball Research. Out of his work emerged a community of hard-core fans dedicated to studying sabermetrics. Lewis writes:    (1W)

James’s literary powers combined with his willingness to answer his mail to create a movement. Research scientists at big companies, university professors of physics and economics and life sciences, professional statisticians, Wall Street analysts, bored lawyers, math wizards unable to hold down regular jobs — all these people were soon mailing James their ideas, criticisms, models, and questions.    (1X)

…    (1Y)

Four years into his experiment James was still self-publishing his Baseball Abstract but he was overwhelmed by reader mail. What began as an internal monologue became, first, a discussion among dozens of resourceful people, and then, finally, a series of arguments in which fools were not tolerated.    (1Z)

…    (20)

The swelling crowd of disciples and correspondents made James’s movement more potent in a couple of ways. One was that it now had a form of peer review: by the early 1980s all the statistical work was being vetted by people, unlike James, who had a deep interest in, and understanding of, statistical theory. Baseball studies, previously an eccentric hobby, became formalized along the lines of an academic discipline. In one way it was an even more effective instrument of progress: all these exquisitely trained, brilliantly successful scientists and mathematicians were working for love, not money. And for a certain kind of hyperkinetic analytical, usually male mind there was no greater pleasure than searching for new truths about baseball.    (21)

Bill James had three important effects on the community. First, his writing created a Shared Language that enabled the community to grow and thrive. Second, he Led By Example, which gave him added credibility and created a constant flow of ideas and activity. Third, he answered his mail, and in so doing transformed his monologue into a community dialogue.    (22)

The story of the sabermetrics community’s constant battle with baseball insiders provides a lesson on the dissemination of new ideas. Lewis describes how Major League Baseball largely ignored sabermetricians, even in the face of indisputable evidence, and explains how even today, the impediments continue. Sadly, the forces of institutionalization have prevented progress in many organizations and communities, not just baseball.    (23)

A Brief Review    (24)

Moneyball isn’t just a thought-provoking treatise on the objective nature of human affairs. At its core, it’s simply a great baseball book. Lewis is an excellent storyteller, and his retelling of interactions with players such as David Justice, Scott Hatteberg, and Chad Bradford makes you feel like you’re in the clubhouse. More than anything, Moneyball is a fascinating portrayal of Billy Beane, the can’t-miss prospect who missed, and who then turned the Oakland A’s into baseball’s biggest success story.    (25)