Doug Engelbart, Human Systems, Tribes, and Collective Wisdom

Sunday, December 9 was the 50th anniversary of Doug Engelbart’s The Mother of All Demos. There was a symposium in his honor at The Computer History Museum and much media and Twitter activity throughout.

Among the many things said and written that caught my eye that weekend was a Twitter exchange between Greg Lloyd and Mark Szpakowski. Greg tweeted a quote from this Los Angeles Review of Books article:

“At the very heart of Engelbart’s vision was a recognition of the fact that it is ultimately humans who have to evolve, who have to change, not technology.”

Mark responded:

And yet 99% of the Engelbart tribe work has been on the techie Tool System. … used to say “coming soon”; now it has disappeared. Time to join up with recent progress on Social Technologies for Complex Adaptive Anticipatory Human Systems?

I agree with Mark, with one caveat: It depends on how you define the “Engelbart tribe.” Let’s explore this caveat first.

Tribes and Movements

There are many folks specializing in process design (what Doug would have categorized as “Human Systems”) who consider Doug a mentor or, at worst, an inspiration. I’m one of them, although I didn’t start (exclusively) from this place when I started working with him in 2000.

Three others in this group have been direct mentors to me: Jeff Conklin, who spent a good amount of time with Doug, and Gail and Matt Taylor, who didn’t, but who knew of him and his work. David Sibbet, the graphic facilitation pioneer, came across Doug’s work in 1972 and worked some with Geoff Ball, who was on Doug’s SRI team doing research on facilitating groups with a shared display. Those four people alone make for an impressive, accomplished, world-changing group.

There are also many, many more folks doing important work in human systems who aren’t familiar with Doug’s work at all or who don’t identify with him for whatever reason. Doug himself thought that lots of what was happening in both open source software development communities and in the Agile Movement was highly relevant, although he had nothing to do with either. At the Symposium celebrating Doug, Christina Engelbart, Doug’s daughter and the keeper of his intellectual legacy, connected the Lean movement to her dad’s work and invited Brant Cooper, the author of The Lean Entrepreneur, to speak.

An effective movement is an inclusive one. What matters more: Seeing Doug’s vision through, or establishing tribal boundaries? If the former, then it’s important to acknowledge and embrace the work of those who may not have the same heroes or conceptual frames of reference.

I don’t think many of us who loved Doug and were inspired by his vision have been very good at this, and unfortunately, our tribalism has extended to technologists too. After the Symposium, I had drinks with my friend, James Cham, who is a long-time fan of Doug’s, but who wasn’t lucky enough to spend much time with him. James told me that Dylan Field (co-founder of Figma Design) was inspired by Doug and that he had hosted his own celebration of the Demo that same Sunday, which 300 people attended. Amjad Masad (founder of Replit, a tool that Doug would have loved) gave a thoughtful toast about Doug’s work there.

I didn’t know either Dylan or Amjad, and I certainly didn’t know that they tracked Doug’s work and were inspired by it. I’m fairly certain that the organizers of the official celebration didn’t either. That’s pretty remarkable, given how small a place Silicon Valley is. Now that we know, I hope we can start making some fruitful connections.

Capabilities and Collective Wisdom

The movement of folks committed to Doug’s larger vision is much larger than the “official” tribe to which Mark referred in his tweet. But even if we take into account this larger group, I think Mark’s criticism still holds.

Doug sought to make the world collectively smarter. He believed the path to achieving this would be a co-evolutionary process involving both tool and human systems. In other words, new tools would give us new capabilities, assuming we learned how to master them. Those new capabilities would inspire us to create even better tools. Rinse and repeat.

As my friend, Travis Kriplean, pointed out to me this morning, we can already test this hypothesis. Technology has already evolved exponentially. Have our collective capabilities — or even more importantly, our collective wisdom — evolved with it?

Let’s narrow the question. Our ability to capture, store, and share information has improved by leaps and bounds since Doug’s Demo in 1968. Has our collective memory increased as a result of that?

If you were pinning me down, I would guess, “no.” The mere existence of those tools doesn’t guarantee that we remember more. Furthermore, the tools have a nasty side effect: overwhelm. But these tools certainly create the potential for us to remember more — we just have to figure out how.

Right now, my eight- and 14-year-old nephews have access to this blog, where they can read many of my innermost thoughts, including stories I wrote about them when they were younger. Right now, they can explore my Flickr, Instagram, and YouTube accounts without even having to ask for permission. If they asked for permission, I would probably let them go through my Google Maps Timeline, which is automatically harvested from my cell phone’s location data and which contains a comprehensive journal of my everyday travels over the past few years. They already have access to lots of information about me, including my efforts to distill little bits and pieces of my experience. Most of this is purely the result of technology, with a little bit coming from my occasional discipline of sharing thoughts here and there.

But does any of this help them become wiser? If not, is it because our technology has not evolved enough, or is it because our human practices have not evolved with the technology?

The best example I know of a human system that evolved with the technology is the wiki in general and Wikipedia in particular. Not enough people realize that wikis and Wikipedia aren’t just tools. They are a wonderful marriage of human and tool systems that created fundamentally new collective capabilities, exactly the type of thing that Doug envisioned. They are also 20-year-old examples. I think this speaks very much to Mark’s critique.

10 replies to “Doug Engelbart, Human Systems, Tribes, and Collective Wisdom”

  1. Thanks for sharing! I’ve been increasingly aware that I’m fuzzy on what, exactly, technology means. In this case, shouldn’t technology refer to new techniques, as opposed to tools? Of course, how we use the tools (or how we make them) depends on other techniques. And, in that case, is the problem that we haven’t developed better techniques, or that we have tools that don’t lend themselves to wiser techniques?

    1. Yes, “technology” should refer to techniques, not just tools. That’s generally how I use the term, just not in this case. 🙂 And yes to your second question as well — we need both!

  2. Thanks for this, Eugene! Maybe “karass” would have been a better word: perhaps “tribe” is too strong. Nevertheless, what seems important is that Douglas Engelbart was inspired to create a self-augmenting process to improve our capacity to deal with critical issues, and that there are other people and groups sharing that intention, who have advanced aspects of the praxis (“reflection and action directed at the structures to be transformed.” – from Paulo Freire via George Por) of the human system: individuals and groups collaborating on transformational change. I think that would include you, the U.Lab/Theory U network, the Art of Hosting people, etc. Joining self-reflective human system capabilities with tool-system enhancement, in sympathy with our supporting life system, seems essential at this time. I’ve written a bit about this at (I’m working on a longer version of that to unpack some condensed points there).

    I think some sense of mobilization (rousing, in face of a challenge, spirit and energy that cuts to the essence of what and how needs to be done), as perhaps exemplified by Greta Thunberg, is called for.

  3. I think the claim of the Los Angeles Review of Books is wrong, or at least misleading. I interpret Engelbart’s work as an attempt to demonstrate that it has to be both sides together, so it’s not useful to look at them as separate categories. A tool without a human to use it is obviously useless, but a human without a tool can’t do much either (in fact, some would argue that there wouldn’t be any humans if not for tools; I don’t necessarily subscribe to that, but you get the idea). If one claims that technology improves a lot and fast while human systems don’t, OK, in some sense, but a biological system like the human body and man-made systems and machines are fundamentally different. Installing new software on a computer doesn’t take very long, but rewiring the firmware in the brain can take generations or even ages, for legitimate reasons and with different results. But none of this means the two can be viewed separately just because they start at different positions on the scale and improve along different curves over time, not tracking each other even remotely. Every improvement in the tool system, like writing, the printing press, integrated circuits, and the Internet (to use the hugely misleading single narrative of history, as if it were one exclusive, harmonious, consecutive line of improvement on the chart), boosted many improvements on the human system side. And improvements in the human system did the same for tools: the human-system counterparts to the tool improvements I’ve listed, like accounting, public education, interface and component standardization, and decentralized networking, may sound boring in a similarly reductive, single-aspect narrative, but they mattered just as much.
To say that our technology doesn’t need to improve is utterly confused. Quite a lot of it in the young digital sector (see, the human system is on a different timescale again; how do we change that?) is more or less crap for stupid reasons, and true, in no small part due to immense deficiencies in the human system. So if the idea is that we face no difficulty building awesome technology, but lack the human system models and support to actually get it going, fine — that still leaves you without the tools to help improve the human system that would allow and support the creation of the tool system that’s wanted and needed. Or the other way around: if the tools were there and good, human system improvement would be much less needed, as focus could finally shift to the problems and topics we actually care about as knowledge workers. I can imagine that good technology would be flexible and generic enough to incorporate or support almost any human system, the creation of such systems, and the handling of any situation, but we have absolutely no idea how to build technology that allows just that, which is a serious problem I’m not willing to dismiss. And if it’s a question of human system design to come up with a concept for it, please work on it and don’t fail to implement. I don’t want to claim that Engelbart provided us with such a design, nor that he didn’t, but it’s a pity that his tools aren’t available to us any more, while his human systems are much more accessible, because they can travel from printed pages and electronic screens to be consumed by eyes and processed by brains. It doesn’t seem like those brains alone are doing a particularly good job of solving our complex, urgent problems, so something is obviously missing here.
I’m fine with whatever the solution might be or wherever it might come from, but for that reason I hesitate to artificially restrict the types of potential solutions or the directions they may come from, for no better reason than a narrative that sounds good and familiar (not to dismiss the value of good narratives, but to point out that there are other stories as well).

      1. Thanks a lot for replying! And apologies for the fast + sloppy brain dump without much structure…

        If there’s an overemphasis on the technical side, the importance of the human system should be pointed out, and the other way around. I think we can easily agree that, conceptually, both are intended to support each other. Now, I’m not very aware of where we stand on the human and tool system sides: what’s readily available today to the knowledge worker, and what’s “missing”.

        50 years after the demo, let’s see what activities the Engelbart Institute is planning, or if something can be done for HyperScope.

        1. Please, no apologies! There was clearly a lot of thought underlying your comment, and I appreciated your sharing it. I also enjoyed learning about the work that you’re doing.

          I don’t know where we are in the grand scheme of things around finding the balance between human and tool systems. From my limited perspective, I think what we lack is an acknowledgement of that balance, especially when it comes to tools. I think this is rampant among the knowledge repository crowd. Collaborative editing with interoperable hypermedia capabilities and smart indexing are great, but meaningless without the very human practices of knowledge capture, revisiting, refining, and refactoring (what Doug called CoDIAK). I want to see careful attention paid to both sides of this equation.

          1. Totally agree. I think it could easily become consensus that exclusive focus on a single aspect isn’t of any help, and that over-emphasis on one of the sides is risky, for reasons like running into chicken-and-egg confusion, preventing bootstrapping, or missing augmentation and improvement potential. Now, being more of a tool builder, I have great difficulty finding any decent tools readily available, or even a capability infrastructure in place. At the same time, I would love to learn what knowledge there is, held by whom and where, to be captured/curated/consumed; where the knowledge workers are; what topics are of interest; which procedures need to be created or improved; etc. I mean, I can make these things up myself, but I have failed to get serious, real answers from the Engelbart movement to at least look into. Maybe I need to study it in more detail. In my mind, any and all sides and aspects need way more emphasis than they have now! 🙂

          2. Lacking a way to contact you directly, I’ll do it here. Some of the time, I work on hypertext system infrastructure/architecture, but approaching from the other direction, there needs to be bootstrapping and prototyping of practical applications as well. Not only does this require human system design, it’s also an opportunity to prototype and develop human systems and reflect them in the technical implementation. Now, the issue with this is: where are the human system designers who have a need or want for augmentation and are serious, committed, and caring enough to engage in such an effort? This is an open invitation to those who are interested in investigating and exploring their human system needs and wants, and a request to please point me to existing human system development projects I can look at. One has to wonder where the collaboration can be found. There are plenty of siloed projects, walled gardens for reasons I can well imagine, but no single one of them can possibly solve it all, the complex, urgent problems. I could try to force some improvements on these islands from the outside, potentially face resistance, and dismiss such a group if that fails, but it would be much smarter to develop meta-improvement from within, so we end up with a kit of generic tools that can be applied to any project we encounter and that connects them all together. And that needs some major human system tools development, of course.
