FLOSS Usability Sprint III

This weekend, Aspiration and Blue Oxen Associates are once again co-hosting FLOSS Usability Sprint III, the pre-eminent event for bringing usability to Open Source projects. This year, we’ve moved venues from San Francisco to Mountain View. Thanks to Google and especially Rick Boardman for sponsoring the event!    (LET)

As always, we have a great set of participants, and it should be both productive and fun. What’s even cooler for me this year is that one of my projects, HyperScope, will be a participant. I love it when things converge! The other projects are Drupal, Social Source Commons, Socialtext Open, and Sustainable Civil Society. Really looking forward to it!    (LEU)

Scanning Large Texts with HyperScope

Speaking of capabilities, while digging up relevant passages in 1984, I really wished I had a digital version of the book so that I could use HyperScope on it. Specifically, I wanted to find the passages that contained the word “dictionary.”    (LEH)

With a standard Web browser and an HTML version of the text, I would have hit Ctrl-F, typed “dictionary,” and scanned the surrounding paragraphs of each found instance, most likely scrolling up and down to examine the context.    (LEI)

With HyperScope, I would have changed the View Spec to wi;"dictionary";. This would show me only the paragraphs containing the word “dictionary.” I would then scan each paragraph, and if I found something that piqued my interest, I would Jump to that Item with the View Spec lj, which would show me only that paragraph and its sibling paragraphs (a “plex” in Augment terminology) with the previous content filter turned off.    (LEJ)
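The content-filter half of that workflow is easy to sketch. This is just an illustration of the idea behind wi;"dictionary"; — filtering a document down to the paragraphs matching a word — and not HyperScope’s actual implementation:

```python
import re

def filter_paragraphs(text, word):
    """Return only the paragraphs containing the given word,
    mimicking the effect of a wi;"word"; content filter."""
    paragraphs = text.split("\n\n")
    pattern = re.compile(r"\b" + re.escape(word) + r"\b", re.IGNORECASE)
    return [p for p in paragraphs if pattern.search(p)]

# A toy three-paragraph "document":
sample = ("Winston opened the diary.\n\n"
          "The Eleventh Edition of the Newspeak dictionary was nearly complete.\n\n"
          "He thought about the dictionary again.")

for p in filter_paragraphs(sample, "dictionary"):
    print(p)  # prints only the two paragraphs that mention "dictionary"
```

Jumping to an Item with lj would then be the inverse move: drop the filter and display the selected paragraph along with its siblings.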

If HyperScope had multiwindow capabilities (which it will eventually), I could do each of these operations in its own window, which would allow me to jump back and forth easily between different views of the same document.    (LEK)

This is a great example of how View Specs and Granular Addressability allow you to navigate through a large text easily and effectively in ways that aren’t possible with today’s everyday tools.    (LEL)

Reversing 1984: Augmenting Language

Brad Neuberg and I were having a conversation about HyperScope earlier today, and Brad said something interesting about language. He said that HyperScope expands people’s vocabulary. He contrasted that with George Orwell’s 1984, where the vocabulary of Newspeak is constantly shrinking.    (LEA)

I loved the Orwell reference, and thought it would be worth rereading the relevant passages. (I ended up rereading much more. It’s such a well-written and engaging book, it was hard to put down.)    (LEB)

“Don’t you see that the whole aim of Newspeak is to narrow the range of thought? In the end we shall make thoughtcrime literally impossible, because there will be no words in which to express it. Every concept that can ever be needed will be expressed by exactly one word, with its meaning rigidly defined and all its subsidiary meanings rubbed out and forgotten. Already, in the Eleventh Edition, we’re not far from that point. But the process will still be continuing long after you and I are dead. Every year fewer and fewer words, and the range of consciousness always a little smaller. Even now, of course, there’s no reason or excuse for committing thoughtcrime. It’s merely a question of self-discipline, reality-control. But in the end there won’t be any need even for that. The Revolution will be complete when the language is perfect.” (46)    (LEC)

If reducing vocabulary and narrowing language are prerequisites for diminishing our ability to think, then it follows that augmenting our ability to think results in an increase in our vocabulary. Doug Engelbart always talks about augmenting existing capabilities while adding new ones. You could replace the word “capabilities” with “vocabulary,” and you would essentially be saying the same thing.    (LED)

Multi-Touch Screen

Tristan Naramore referred me to a video clip of Jeff Han’s talk at TED earlier this year. Jeff demonstrated his multi-touch screen, which is incredibly cool. Unlike today’s touch screens, it lets you interact with multiple fingers simultaneously. It’s also pressure sensitive. More importantly, according to Jeff, the technology is affordable.    (L7M)

One of my passions is integrating online tools into face-to-face environments. The problem is that computer interfaces obstruct physical collaboration. They discourage engagement with others who are present. The design challenge is understanding the tradeoffs. But the tradeoffs could disappear if only we had better physical interfaces. The multi-touch screen is a step in the right direction.    (L7N)

Watching the demo reminded me of another important HyperScope design point: Views of a document should have an address. With HyperScope, any view of a document is addressable. Similarly, with Google Maps, I can always generate a URL to a particular view of a map. Jeff Han’s geospatial demo was cool, but when the app is actually deployed, I hope the developers don’t forget to make every view addressable.    (L7O)
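The addressable-views idea boils down to serializing view state into a URL, the way Google Maps does with its query parameters. Here’s a minimal sketch — the parameter names (lat, lng, zoom) and the domain are illustrative, not any particular app’s API:

```python
from urllib.parse import urlencode, parse_qs, urlparse

def view_url(base, **view_state):
    """Encode a particular view of a document or map as a shareable URL.
    Sorting the items makes the URL stable for the same view state."""
    return base + "?" + urlencode(sorted(view_state.items()))

# A hypothetical map view: center coordinates and zoom level.
url = view_url("http://maps.example.com/", lat=37.39, lng=-122.08, zoom=12)
print(url)  # http://maps.example.com/?lat=37.39&lng=-122.08&zoom=12

# Anyone holding the URL can reconstruct exactly the same view:
state = {k: v[0] for k, v in parse_qs(urlparse(url).query).items()}
print(state)  # {'lat': '37.39', 'lng': '-122.08', 'zoom': '12'}
```

Once every view can round-trip through a URL like this, views become first-class things you can bookmark, email, and link to — which is the whole point.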