Socialtext 2.0 Released

Congratulations to Ross Mayfield, Peter Kaminski, Adina Levin, and all the excellent folks at Socialtext for the release of Socialtext 2.0. Even bigger props for slipping “Purple Consulting” into the screencast. I’ve been cranking so hard over the past six months, I didn’t have a chance to congratulate them on their Open Source release last July, so now I get to combine my commentary here. (In fact, I’m sitting on a bunch of Wiki-related posts right now that I need to push out; a lot of really cool stuff has been happening.) That’s good, because I have plenty to say.    (L72)

Socialtext 2.0 is an important release for three reasons. First, it doesn’t just look good, it’s highly usable. Adina and Pete deserve big-time credit for this. They’ve spent months painstakingly experimenting with and testing the design. More importantly, they haven’t just focused on making it easy to use; they’ve also agonized over how to accommodate expert usage.    (L73)

Have they succeeded? I think the personal home base concept is great. I love the fact that Backlinks are visible on the page and get lots of love. I love their new Recent Changes interface (and I hope to see a Tag Cloud view of the all-pages index in the next release). I hate the fact that a Recent Changes link is not on every Wiki page. Both Pete and Adina are well aware of this beef, and I’m also well aware of their reason for not including it. Testing and user observation will tell which approach is better.    (L74)

Second, Socialtext 2.0 has a really cool REST interface. Chris Dent has been boasting about it for months, but I didn’t look at it myself until Kirsten Jones walked me through it last week. (Her WikiWednesday presentation from earlier this month is online.) It really is cool, and it’s also useful. Congrats to Chris, Kirsten, Matthew O’Connor, and Matt Liggett for their excellent work!    (L75)

What’s great about this API is that it could very well serve as a standard URI scheme for all Wikis. This would obviate the need for a separate SOAP or Atom API. You just have a regular Web app, and you get the API behavior for free.    (L76)
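
To make that concrete, here’s a minimal sketch (in Perl, since that’s the language these Wikis speak) of what a client talking to such a URI scheme might look like. The host, workspace name, URI layout, and MIME type below are illustrative assumptions on my part, not a transcription of the actual Socialtext API:

    #!/usr/bin/perl
    # Illustrative only: the base URI, workspace, and MIME type are
    # assumptions, not the documented Socialtext API.
    use strict;
    use warnings;
    use LWP::UserAgent;
    use HTTP::Request;

    my $base = 'http://wiki.example.com/data/workspaces/sandbox';
    my $ua   = LWP::UserAgent->new;

    # Read a page: a plain GET against a predictable URI.
    my $get = $ua->get("$base/pages/HomePage",
                       'Accept' => 'text/x.socialtext-wiki');
    print $get->decoded_content if $get->is_success;

    # Write a page: PUT new wikitext back to the same URI.
    my $put = HTTP::Request->new(PUT => "$base/pages/HomePage");
    $put->content_type('text/x.socialtext-wiki');
    $put->content("Hello from the REST interface.\n");
    print $ua->request($put)->status_line, "\n";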

For example, Alex Schroeder’s currently going through the same process that Chris went through a year ago with Atom and OddMuse. An easier way around this problem would be to implement these REST APIs.    (L77)

(This is also a great opportunity for me to mention WikiOhana again, which gained great traction at WikiSym last month and which now has a lively Wiki of its own. PBWiki recently announced its own Wiki API, which is a good thing. We are all part of the same Wiki family. Socialtext and PBWiki need to talk about how their two efforts can work together. That’s the WikiOhana Way.)    (L78)

The third important thing about Socialtext 2.0 is that it’s Open Source. (Big props to Jonas Luster and Andy Lester for finally making this happen.) Here’s the thing. I think the announcement a few months back was overblown by a lot of blogosphere hype. The reality of all corporate Open Source releases is that — in and of themselves — they’re mostly meaningless. Mostly, but not completely. The fact that Socialtext 2.0 is Open Source means that other Wiki implementations can benefit from the great work that the Socialtext developers have done, from the APIs to the user interface. That makes for a healthier ecosystem, which is good for everybody.    (L79)

That said, the reason the actual open sourcing of Socialtext 2.0 (or of any proprietary software project) is mostly meaningless is that the license is a critical but tiny part of what makes Open Source software interesting and important. The big part is the community and collaborative process, and a lot of other things besides an open license are required to make that successful.    (L7A)

Before Socialtext went Open Source, I spent many hours talking to a bunch of people there about the impending release. I wanted to know how committed they were to making this a truly open and collaborative software project, because I felt the potential impact on the Wiki community was enormous. The answer I got was complex. The fact that everyone was willing to talk to me with no strings attached, in and of itself, demonstrated a commitment to openness, and I’m still grateful for that. The code itself will be a short-term bottleneck, as it needs a lot of work before outside developers will find it compelling. I also think the licensing terms are weaker than they need to be, although I also understand the outside pressures that make it so.    (L7B)

In short, I think the spirit is strong within Socialtext to fully realize the potential of this Open Source project, but there are also roadblocks. Hopefully, external pressures won’t squash that spirit. If Socialtext ever fulfills its potential as an Open Source company, it will not only help the ecosystem, but it will also tremendously benefit Socialtext as a business.    (L7C)

Creole Wiki Markup, WikiOhana, and More Ward Wisdom

Several folks asked me on Saturday what I thought about the conference, and I kept saying, “It’s been great, except I haven’t had any interesting conversations about Wikis.” That changed in an unexpectedly generative way on Saturday afternoon, resulting in a new little cooperative effort we’re calling WikiOhana.    (KZ1)

At Thursday night’s conference party, I got reacquainted with Chuck Smith, whom I had met at Wikimania last year. Chuck is the founder of Esperanto Wikipedia and was the one who first introduced Brion Vibber — now the lead developer of Mediawiki — to the Wikipedia community. At the conference last year, Chuck met Christoph Sauer, a German Wiki researcher, and the two of them started working together.    (KZ2)

At the party, Chuck told me that he and Christoph had been working on a proposed WikiMarkup standard, and I loudly grimaced in response. At WikiSym last October, I made it clear to several people that I thought trying to standardize WikiMarkup was a noble waste of time.    (KZ3)

First, people are really attached to their own markup. Previous discussions about standardization usually started and ended with, “Great idea. Let’s just standardize on mine.”    (KZ4)

Second, WYSIWYG editing in theory obviates the need for a standard markup from the user’s perspective. It seemed more valuable to work out a standard interchange language, which WYSIWYG widgets could use to interact with different Wiki engines and which would make data migration between Wiki engines easier. It also seemed like it would be easier to come to agreement on an interchange language. In fact, folks have already worked out a good proposal for an XHTML interchange language on the Interwiki mailing list (now migrated to the wiki-standards list) and on CommunityWiki.    (KZ5)

What made all this talk worse for me was that I thought that people were trying to go about this the wrong way. We were not going to come to agreement by group authoring a spec, then getting everyone to agree on it. The only way this was going to work was if a few Wikis came to some agreement and actually implemented the changes. This wouldn’t necessarily result in a standard, but it could act as a catalyst that could lead to a standard. Sunir Shah has been advocating this approach for a long time, to no avail. I would have organized such an effort myself, except that I didn’t think it was important enough to spend any time on it.    (KZ6)

Chuck and Christoph changed my mind. Chuck explained that he and Christoph had extensively analyzed existing Wiki markup, and they had taken pains to come up with a small subset of things that would be as noncontroversial as possible. (There had been a similar effort by others to do such an analysis before, but it hadn’t gotten very far.) They were going to present the results at WikiSym in a few weeks, but they had prepared a poster for Wikimania. Still clucking my disapproval, I promised Chuck that I would check out his poster.    (KZ7)

On Saturday, I finally made some time to check out the poster. I studied it and found myself thinking, “Hmm. This isn’t bad.” Chuck came over later, and we started going over it in depth. I still have some issues with it (discussed below), but I told him, “You know what, this is almost good enough for us to adopt in PurpleWiki. But you really should find one or two other Wikis who might say the same.” We kicked a few ideas around, and then I said, “Let’s go see what Ward thinks.”    (KZ8)

So we rounded up Ward Cunningham, and he took a look and started asking questions. He seemed to be okay with the answers, because he said that if someone were to write a Perl function that converted from his markup to the proposed markup and vice versa, he would consider incorporating it into his Wiki.    (KZ9)

Creole Markup    (KZA)

Well, you know me. Low-hanging fruit, a spare hour before the next party. The long and the short of it is, half of the converter now exists. It’s called Creole, and it’s written in Perl. Currently, it consists of a brain-dead simple script that converts Ward’s markup into Creole Markup (Chuck and Christoph’s proposal) and some tests.    (KZB)
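
For the curious, here is the general shape of such a converter. This is not the actual Creole script, and the substitution rules below are simplified assumptions about the two markups rather than a faithful rendering of either; it’s just meant to show how brain-dead simple the approach is:

    #!/usr/bin/perl
    # Sketch of a line-oriented markup converter. The rules below are
    # simplified assumptions, not the real Ward-to-Creole mapping.
    use strict;
    use warnings;

    sub convert_line {
        my ($line) = @_;
        $line =~ s{'''(.*?)'''}{**$1**}g;   # assumed rule: bold
        $line =~ s{''(.*?)''}{//$1//}g;     # assumed rule: italics
        return $line;
    }

    while (my $line = <STDIN>) {
        chomp $line;
        print convert_line($line), "\n";
    }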

The name was Ward’s suggestion. Rather than call it “pidgin,” which is what it actually is (a non-native language cobbled together from two or three other languages), we chose to optimistically call it “creole” (a native language that emerges from a pidgin), which is what we hope it will become.    (KZC)

I have three issues with the current proposal. First, I don’t like the exclamation point syntax for headers, not because of the character itself, but because of the numbering:    (KZD)

    !!!Heading 1
    !!Heading 2
    !Heading 3    (KZE)

As you can see, you have to remember that a top-level heading gets three exclamation points, not one, and you can only have three levels of headings. The advantage of the equals syntax:    (KZF)

    = Heading 1 =
    == Heading 2 ==
    === Heading 3 ===    (KZP)

is that the number of equals signs maps directly to the heading level, so there is nothing backwards to remember.    (KZQ)
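
To illustrate, here’s a throwaway sketch (mine, not PurpleWiki or Creole code) that recovers the heading level simply by counting the leading equals signs:

    #!/usr/bin/perl
    # Throwaway illustration: the heading level is just the count of
    # leading equals signs.
    use strict;
    use warnings;

    sub heading_level {
        my ($line) = @_;
        return 0 unless $line =~ /^(=+)\s*\S/;   # not a heading
        return length $1;                        # "=" -> 1, "==" -> 2, ...
    }

    for my $line ('= Heading 1 =', '== Heading 2 ==', '=== Heading 3 ===') {
        printf "%-20s => level %d\n", $line, heading_level($line);
    }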

Second, there needs to be a proposal for preformatted blocks.    (KZR)

Third, the spec isn’t rigorous enough. You have to answer questions like, “Will italics work across multiple lines or blocks?” The reason I didn’t write a Creole-to-Ward converter was that these questions were not yet answered.    (KZS)

Most of the time developing the Creole converter was actually spent reverse engineering Ward’s markup. Ward pointed me to a Literate Programming version of his source code — implemented in a Wiki, of course — which helped a lot. As it turns out, Ward’s been thinking a bit about how to write a good markup parser recently. Parsers are often on my mind as well, as we’ve been talking about rewriting the PurpleWiki parser forever. Plus, the topic came up at Hacking Days among the Mediawiki developers before the conference. We’ll probably noodle on this problem a bit both before and during WikiSym. Fortunately, Ward knows a lot more about writing parsers than I, as anyone who’s seen the PurpleWiki code can attest.    (KZT)

WikiOhana    (KZJ)

When we were discussing names, Christoph proposed “ohana,” which means “family” in Hawaiian. It has a wonderful connotation — respecting our individuality while emphasizing the bonds that keep us together in a positive, welcoming fashion. It’s also consistent with Ward’s original naming convention, as “WikiWiki” means “quick” in Hawaiian.    (KZU)

We decided to call our little cooperative effort WikiOhana, of which Creole Markup is the first project. We hope the family grows over time.    (KZV)

More Ward Wisdom    (KZK)

While retelling an experience he had writing a parser, Ward said, “That’s when I learned how to write beautiful code: Get it working, then spend the same amount of time making it better.”    (KZW)