Web Site Redesigned!

My personal web site has been badly in need of a refresher for years now. I played with elements of a redesign a few years ago, but the bigger challenge was migrating the content, and the longer I waited, the more technical debt I accumulated. This was a problem, as I’m a notorious yak shaver.

Well, I finally did it. Welcome to the new eekim.com!

Not all of the data has been migrated to the new design, but the gist of it is all here. Thank you, WordPress and MediaWiki. You’ve made my life much easier.

The background image is from my trip to Kano, Nigeria in 2008.

Peering Out Over Kano

I stole elements of the design from many places, including Zak Greant’s blog, which I enjoy quite a bit.

Sadly, the Purple Numbers are gone. So is the blog/Wiki integration via Link As You Think. I hope they (or something with equivalent functionality) make their return soon.

Let me know how you like the new site!

Spreadsheets 2.0 and Transclusions

A few weeks ago, I had dinner with my old HyperScope buddies, Brad Neuberg and Jonathan Cheyer. We talked a bit about this Office 2.0 madness, and how a lot of these Web-based applications were disappointingly uninteresting. Don’t get me wrong. There’s a lot of really nifty hacking going on behind the scenes to make this all work. But in the end, all you have is a Web-based office application. Most of these applications do little to take advantage of the network paradigm.    (M2P)

A simple and extremely cool way for Web-based spreadsheets to exploit the medium would be to support Transclusions across multiple web sites. As I’ve observed before, spreadsheets were the first applications to popularize the notion of a Transclusion, even though they didn’t call them that. When I type =E27 in a cell, it displays the content of cell E27. This, in a nutshell, is a Transclusion, and oh, is it useful.    (M2Q)
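The cell-reference mechanic can be sketched in a few lines of Python. This is a toy model, not any real spreadsheet engine: a cell whose formula is `=E27` displays whatever cell E27 currently holds.

```python
def resolve(cells, address):
    """Return the displayed value of a cell, following =REF formulas."""
    value = cells[address]
    while isinstance(value, str) and value.startswith("="):
        value = cells[value[1:]]  # follow the reference, e.g. "=E27" -> E27
    return value

cells = {"E27": 42, "A1": "=E27"}
print(resolve(cells, "A1"))  # -> 42: A1 transcludes E27's content
```

The point is that the referencing cell never copies the value; it always displays the current content of the referenced cell, which is exactly the Transclusion behavior described above.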

With Web-based spreadsheets, if you made cell addresses universally resolvable, you could easily support Transclusions across web sites. In other words, I could transclude the content of cell =E27 from a spreadsheet hosted on my web site into a cell on a spreadsheet hosted on another web site.    (M2R)

Why would this be useful? Well, why is it useful to link to other web sites? Today’s Web-based spreadsheets are no more collaborative than desktop spreadsheets. In theory, they’re more convenient than emailing spreadsheets back-and-forth, but they’re no different in capability. Cross-spreadsheet Transclusions would break down silos and encourage collaboration.    (M2S)

I would start with spreadsheet-to-spreadsheet Transclusions with an eye toward supporting Transclusion of non-spreadsheet content using Purple Numbers or something similar. The main technical barrier is coming up with the right addressability scheme. Seems to me that the Simplest Thing That Works would be to use fragment identifiers (which is what we did for the HyperScope). In other words, cell =E27 on a spreadsheet at http://foo/bar would have the address:    (M2T)

  http://foo/bar#E27    (M2U)

Eventually, you’d want persistent, non-URL-based identifiers, but first things first.    (M2V)
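Python’s standard library already splits such an address cleanly, which hints at how little machinery a client needs to resolve a fragment-based cell address. An illustrative sketch, using the example address above:

```python
from urllib.parse import urldefrag

# Split a HyperScope-style address into the spreadsheet's URL and the
# cell identifier carried in the fragment identifier.
url, cell = urldefrag("http://foo/bar#E27")
print(url)   # -> http://foo/bar
print(cell)  # -> E27
```

A transcluding spreadsheet would fetch the document at `url`, then look up `cell` within it, all on the client side.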

The Blue Oxen Way

Back when Chris Dent and I started Blue Oxen Associates, we often referred to something called The Blue Oxen Way. It was something that we both understood and recognized, but that we never actually articulated. Over the years, I tried to rectify this, and I generated pages and pages of notes (including three years’ worth of rambling blog posts) in the process, to no avail.    (LVU)

Recently, Chris articulated his vision for “Wiki Everywhere,” where he referenced some of our early conversations. As I read it, I relived many of these discussions, and suddenly, it all clicked for me.    (LVV)

The essence of The Blue Oxen Way can be boiled down into three ideas, which together form the framework for our entire philosophy about collaboration:    (LVW)

The Squirm Test    (LW0)

The Squirm Test is a thought experiment for measuring the amount of Shared Understanding in a group by observing the amount of squirming in a room. Shared Understanding (which is not the same as “same understanding”) manifests itself in the formation of Shared Language. Shared Language is a prerequisite for collaboration.    (LW1)

Much of the messiness of the collaborative process can actually be attributed to lack of Shared Language. Great collaborative design accounts for this rather than wishing it away, which is how most of us deal with it.    (LW2)

Shared Language is The Red Thread that binds all of the crazy things I’m involved with, from Pattern Languages to Wikis, from face-to-face facilitation to organizational strategy. The Squirm Test is a wonderful embodiment of Shared Language.    (LW3)

Be Less Dumb    (LW4)

If Shared Language is the tie that binds, then being Less Dumb is the state that we are all striving to reach. Why are we playing this game in the first place? To be Less Dumb, of course! As you go to bed every night, if you can’t look in the mirror and say, “Today, I became Less Dumb,” then you’re not doing your job.    (LW5)

Less Dumb is the negative framing of “augmentation,” but it sounds a heckuva lot better, and it embodies the same philosophy. Tools should make people Less Dumb. Processes should make people Less Dumb. How do we measure collaboration? One way is to see if we’re Less Dumb in the process.    (LW6)

That’s obvious, you say? If it’s so obvious, why do most tools and processes make us More Dumb rather than Less Dumb? And why are we so often willing to live with that? It may sound obvious, but are we really paying enough attention to this?    (LW7)

Bootstrapping    (LW8)

With Less Dumb and Shared Language (as embodied by the Squirm Test), we have our target and the glue that keeps us together. Our process — the way we get to our target — is bootstrapping. Bootstrapping is building on top of things that already exist, then building on top of that. (The notion of bootstrapping is also the reason why we called ourselves Blue Oxen Associates.)    (LW9)

The most vivid images of my best experiences collaborating have to do with movement — my actions resulting in other people’s actions, which result in even more actions, which inspire me to act further. This is bootstrapping at its best.    (LWA)

Purple Numbers are ultimately about building ideas on top of pre-existing ideas — knowledge synthesis (i.e. becoming Less Dumb) by reusing existing ideas. Also known as bootstrapping.    (LWB)

Visualizing Wiki Life Cycles

On the first day of WikiSym in Denmark last August, I spotted Alex Schroeder before the workshop began and went over to say hello. Pleasantries naturally evolved into a discussion about Purple Numbers. (Yes, I’ve got problems.) Alex suggested that unique node identifiers were more trouble than they were worth, because in practice, nodes that you wanted to link to were static. Me being me, my response was, “Let’s look at the numbers.” Alex being Alex, he went off and did the measurements right away for Community Wiki, and he did some followup measurements based on further discussions after the conference.    (LSP)

As it turned out, the numbers didn’t tell us anything useful, but our discussions firmly implanted some ideas in my head about Wiki decay rates — the time it takes for information in a Wiki page to stop being useful.    (LSQ)

I had toyed with this concept before. A few years ago, I came up with the idea of changing the background color of a page to correspond to the age of the page. A stale page would be yellowed; an active page would be bright white. I had originally envisioned the color to be based on number of edits. However, I realized this past week that I was mixing up my metaphors. There have been a few studies indicating a strong correlation between frequent edits and content quality, so it makes sense to indicate edit frequencies ambiently. However, just because content has not been edited recently does not mean the information itself is stale. You need to account for how often the page is accessed as well.    (LSR)
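The yellowing idea can be sketched as a simple interpolation from bright white toward yellow. The 90-day staleness horizon and the access-based metric are my own assumptions for illustration, not anything from an actual implementation:

```python
def page_color(days_since_last_access, horizon=90):
    """Return an (r, g, b) background: white when fresh, yellowed when stale."""
    staleness = min(days_since_last_access / horizon, 1.0)
    blue = round(255 * (1 - staleness))  # dropping blue shifts white toward yellow
    return (255, 255, blue)

print(page_color(0))   # -> (255, 255, 255): freshly accessed, bright white
print(page_color(90))  # -> (255, 255, 0): fully yellowed
```

A real metric would presumably blend access recency with edit frequency, as discussed above, but the ambient-display mechanic is this simple.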

(At the Wikithon last week, Kirsten Jones implemented the page coloring idea. She came up with a metric that combined edits and accesses, which she will hopefully document on the Wiki soon! It’s cool, and it should be easy to deploy and study. Ingy dot Net suggested that the page should become moldy, a suggestion I fully endorse.)    (LSS)

This past Sunday, I had brunch with the Socialtext Bloomington Boys. Naturally, pleasantries evolved into Matthew and me continuing along our Wiki Analytics track, this time with help from Shawn Devlin and Matt Liggett. We broke Wiki behavior into a number of different archetypes, then brainstormed ways to visually represent the behavior of each of these types. We came up with this:    (LST)

https://i2.wp.com/farm1.static.flickr.com/149/388587151_3f730b0a5c_m.jpg?w=700    (LSU)

The x-axis represents time. The blue line is accesses; the green line is edits. Edits are normalized (edits per view) so that, under normal circumstances, the green line will always be below the blue (because users will usually access a page before editing it). The exception is when software is interacting with the Wiki more than people. The whole graph should consist of a representative time-slice in that Wiki’s lifespan.    (LSV)
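The normalization described above can be sketched directly: the green line plots edits per access for each time bucket, so it stays below 1.0 unless software is editing the Wiki more often than people are reading it. The sample numbers here are made up:

```python
def edits_per_view(edits, accesses):
    """Normalize edit counts by access counts, bucket by bucket."""
    return [e / a if a else 0.0 for e, a in zip(edits, accesses)]

accesses = [120, 90, 60, 30]
edits    = [12,  9,  3,  45]   # last bucket: bot activity outpaces reads
print(edits_per_view(edits, accesses))  # -> [0.1, 0.1, 0.05, 1.5]
```

A value above 1.0 in the last bucket is the signature of software interacting with the Wiki more than people.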

The red line indicates the median “death” rate of Wiki pages. After much haggling, we decided that the way to measure page death was to determine the amount of time it takes for a page to reach some zero-level of accesses. We’ll need to look at actual data to see what the baseline should be and whether this is a useful measurement.    (LSW)
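The death measurement sketched in code, under the same caveat that the zero-level threshold and the sample access histories are invented for illustration:

```python
from statistics import median

def death_time(weekly_accesses, threshold=0):
    """Weeks until a page's accesses first drop to the threshold; None if still alive."""
    for week, count in enumerate(weekly_accesses):
        if count <= threshold:
            return week
    return None

pages = [[30, 10, 0, 0], [50, 40, 30, 20], [8, 0, 0, 0]]
deaths = [d for d in (death_time(p) for p in pages) if d is not None]
print(median(deaths))  # -> 1.5, the red line's position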

The red line helps distinguish between archetypes that may have the same access/edit ratio and curve. For example, on the upper left, you see idealized Wiki behavior. The number of edits is close to the number of accesses, both of which are relatively constant across the entire Wiki over time. Because it’s a healthy Wiki, you’ve got a healthy page death rate.    (LSX)

On the upper right, you see a Wiki that is used for process support. A good example of this is a Wiki used to support a software development process. At the beginning of the process, people might be capturing user stories and requirements. Later in the process, they might be capturing bugs. Once a cycle is complete, those pages rapidly become stale as the team creates new pages to support a new cycle. The death line in this case is much shorter than it is for the idealized Wiki.    (LSY)

Again, one use of the Wiki isn’t better than the other. They’re both good in that they’re both augmenting human processes. The purpose of the visualization is to help identify the archetypes so that you can adjust your facilitation practices and tools to best support these behaviors.    (LSZ)

This is all theory at this point. We need to crunch on some real data. I’d love to see others take these ideas and run with them as well.    (LT0)

URI Syntax for the HyperScope

At last Tuesday’s HyperScope meeting, Jonathan Cheyer and I spent an inordinate amount of time debating the syntax for HyperScope URIs, much to the amusement and chagrin of our peers. Although the topic may seem insignificant, it is actually quite layered with no easy answers. I’m going to summarize the issues here. Most of you will probably not care about the intricacies of the argument itself, but at minimum, it should reveal a bit more about the project itself.    (KI2)

HyperScope is meant to be a transitional tool that will enable people to play with Augment‘s more sophisticated capabilities within their existing environments. In the immediate future, that means the Firefox web browser (and probably Internet Explorer as well). In the not-too-distant future, that could extend to a range of applications, from Eclipse to OpenOffice and beyond.    (KI3)

This intent strongly informs our requirements. In particular, we need to make sure we are bootstrapping the system on top of existing technologies (such as URIs) effectively and correctly, and we need to make sure the system is evolvable. Both of these requirements play a big role in our debate.    (KI4)

So what’s all the fuss about? One of Augment‘s coolest (and most fundamental) features is its sophisticated addressability. The example most folks know about manifests itself as Purple Numbers, namely the ability to reference a specific node in a document in a standard way. But Augment can do much, much more. It can do path expressions, similar in spirit to XPath, which allow you to reference some subset of nodes in a document. (See my notes on transcluding a subset of nodes via Purple Numbers).    (KI5)

You can also embed View Specs in an address. For example, suppose you decided that the best way to view a page was the first level of nodes only. You could specify that as a View Spec in the link itself, so that when someone followed that link, they would see only the first level of nodes rather than the entire document.    (KI6)

With the HyperScope, we’re bringing these capabilities to the plain ol’ World Wide Web — that is, assuming your client knows how to interpret these addresses properly. With our initial release (due September 2006), this will require loading a JavaScript library. All document addressability will happen entirely on the client-side. This is a good thing for a lot of reasons, the most important being adoptability. It will be easy for people to play with the HyperScope. All they’ll have to do is click on a link in Firefox (and probably Internet Explorer).    (KI7)

However, the fact that we’re doing a client-side only version of the HyperScope does not preclude the creation of a joint client/server version or even a mostly server-side version where the client is essentially a dumb web browser. In fact, we’d encourage the creation of both. We don’t care so much about implementation as we do capabilities and interoperability.    (KI8)

Here’s the question: How should we include these extended addressing capabilities in real-life URIs?    (KI9)

There are three possible solutions:    (KIA)

  • Embed them as a fragment address (i.e. following a hash mark at the end of the URI).    (KIB)
  • Embed them either as part of the path or query string parameters (i.e. following a question mark at the end of the URI).    (KIC)
  • All of the above and more.    (KID)

I side with the first and third solutions. Jonathan thinks it should be the second.    (KIE)

Fragment Address    (KIF)

Pros:    (KIG)

  • These extended capabilities seem to belong here semantically. Purple Numbers are an obvious example of this. XPointer is another.    (KIH)
  • The URIs will be bookmarkable as the user manipulates the document from the HyperScope. We can do this, because we can change the fragment identifier from within JavaScript. We can’t do the same with any other part of the URI.    (KII)

Cons:    (KIJ)

  • These URIs cannot be used for a server-side only solution, because HTTP does not pass the fragment identifier to web servers.    (KIK)
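This con is easy to demonstrate. When a browser requests a URI, the request line is built from the path and query only; the fragment never leaves the client. In this sketch, `vs=outline` is an invented View Spec parameter name, used purely for illustration:

```python
from urllib.parse import urlsplit

# A server-side HyperScope would see only the path and query string;
# the fragment (here, the node address "E27") stays on the client.
parts = urlsplit("http://foo/bar?vs=outline#E27")
request_target = parts.path + ("?" + parts.query if parts.query else "")
print(request_target)   # -> /bar?vs=outline   (what the server receives)
print(parts.fragment)   # -> E27               (available only client-side)
```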

Path or Query String    (KIL)

Pros:    (KIM)

  • Can be used for client-only or server-only solutions, or anywhere in between.    (KIN)

Cons:    (KIO)

  • You’re still left with the problem of a standard addressing syntax that doesn’t interfere with any other kind of addressing. For example, if you’re going to use a query parameter, what do you call it? Granted, if you namespace it correctly, the likelihood of namespace clash is tiny, but it’s still there.    (KIP)
  • No one’s building a server-side only solution right now.    (KIQ)

Why Limit Yourself?    (KIR)

This is ultimately my argument: Go with the first syntax for now, because it best suits our current needs, and don’t worry about the fact that it won’t satisfy all potential future needs, because nothing will. What’s important is that we standardize the conceptual semantics, and then standardize the syntax to the extent possible. In all likelihood, most people will be passing these links around by copying and pasting them anyway, so the actual link syntax isn’t completely critical.    (KIS)