August 3, 2003 » 7:33 am

Do We Need the Semantic Web?

The Semantic Web, by Michael DaConta, Leo Obrst, and Kevin Smith (Wiley 2003), is a good book. I’ve worked with Michael a bit in an editorial context, and I’ve enjoyed some of his other writing. He thinks and explains things clearly, and this book is no exception. I especially enjoyed how crisply The Semantic Web defined a number of hairy concepts — ontologies, taxonomies, semantics, etc. With some restructuring and condensing — there is some technical detail that isn’t that important, and the sections on ontologies could be more cohesive and should come earlier — this book could go from good to great.    (4V)

My goal here, however, is not to review The Semantic Web. My goal here is to complain about its premise.    (4W)

The authors say that the Semantic Web is about making data smarter. If we expend some extra effort making our data machine-understandable, then machines can do a better job of helping us with that data. By “machine-understandable,” the authors mean making the machines understand the data the same way we humans do. However, the authors make a point early in the book of separating their claims from those of AI researchers in the 1960s and 1970s. They are not promising to make machines as smart as humans. They are claiming that we can exploit machine capabilities more fully, presumably so that machines can better augment human capabilities.    (4X)

The authors believe that the Semantic Web will have an enormous positive effect on society, just as soon as it catches on. There’s the rub. It hasn’t. The question is why.    (4Y)

The answer lies with two related questions: What’s the cost, and what’s the return?    (4Z)

Consider the return first. Near the end of the book, the authors say:    (50)

With the widespread development and adoption of ontologies, which explicitly represent domain and cross-domain knowledge, we will have enabled our information technology to move upward — if not a quantum leap, then at least a major step — toward having our machines interact with us at our human conceptual level, not forcing us human beings to interact at the machine level. We predict that the rise in productivity at exchanging meaning with our machines, rather than semantically uninterpreted data, will be no less revolutionary for information technology as a whole. (238)    (51)

The key phrase above is, “having our machines interact with us at our human conceptual level, not forcing us human beings to interact at the machine level.” There are two problems with this conclusion. First, machines interacting with humans at a human conceptual level sounds an awful lot like artificial intelligence. Second, the latter part of this phrase contradicts the premise of the book. To make the Semantic Web happen, humans have to make their data “smarter” by interacting at the machine level.    (52)

That leads to the cost question: How much effort is required to make data smarter? The answer depends on how you read the book, but it seems to require quite a bit. Put aside the difficulties with RDF syntax — those can be addressed with better tools. I’m concerned about the human problem of constructing semantic models. This is a hard problem, and tools aren’t going to solve it. Who’s going to be building ontologies? I don’t think regular folks will, and if I’m right, then that makes it very difficult to expect a network effect on the order of the World Wide Web.    (53)

Human-Understandable Ontologies    (54)

There were three paragraphs in the book that really struck me:    (55)

Semantic interpretation is the mapping between some structured subset of data and a model of some set of objects in a domain with respect to the intended meaning of those objects and the relationships between those objects.    (56)

Typically, the model lies in the mind of the human. We as humans “understand” the semantics, which means we symbolically represent in some fashion the world, the objects of the world, and the relationships among those objects. We have the semantics of (some part of) the world in our minds; it is very structured and interpreted. When we view a textual document, we see symbols on a page and interpret those with respect to what they mean in our mental model; that is, we supply the semantics (meaning). If we wish to assist in the dissemination of the knowledge embedded in a document, we make that document available to other human beings, expecting that they will provide their own semantic interpreter (their mental models) and will make sense out of the symbols on the document pages. So, there is no knowledge in that document without someone or something interpreting the semantics of that document. Semantic interpretation makes knowledge out of otherwise meaningless symbols on a page.    (57)

If we wish, however, to have the computer assist in the dissemination of the knowledge embedded in a document — truly realize the Semantic Web — we need to at least partially automate the semantic interpretation process. We need to describe and represent in a computer-usable way a portion of our mental models about specific domains. Ontologies provide us with that capability. This is a large part of what the Semantic Web is all about. The software of the future (including intelligent agents, Web services, and so on) will be able to use the knowledge encoded in ontologies to at least partially understand, to semantically interpret, our Web documents and objects. (195-197)    (58)

To me, these paragraphs beautifully explain semantics and describe the motivation for the Semantic Web. I absolutely agree with what is being said and how. My concerns are with scope — the cost and benefit questions — and with priority.    (59)

The Semantic Web is only important insofar as it helps humans with our problems. The problem that the Semantic Web is tackling is information overload. In order to tackle that problem, the Semantic Web has to solve the problem of getting machines to understand human semantics. This is related to the problem of getting humans to understand human semantics. To me, solving the problem of humans understanding each other is far more important than getting machines to understand humans.    (5A)

Ontologies are crucial for solving both problems. Explicit awareness of ontologies helps humans communicate. Explicit expression of ontologies helps machines interpret humans. The difference between the two boils down, once again, to costs and returns. The latter costs much more, but the return does not seem to be proportionately greater. I think it would be significantly cheaper and more valuable to develop better ways of expressing human-understandable ontologies.    (5B)

I’m not saying that the Semantic Web is a waste of time. Far from it. I think it’s a valuable pursuit, and I hope that we achieve what the authors claim we will achieve. Truth be told, my inner gearhead is totally taken by some of the work in this area. My concern is that our collective inner gearhead is causing us to lose sight of the original goal. To paraphrase Doug Engelbart, we’re trying to make machines smarter. How about trying to make humans smarter?    (5C)


5 Responses to “Do We Need the Semantic Web?”

  1. Good ideas (as ever) Eugene.

    The only point I’d take issue with is to be found around these bits:

    “How much effort is required to make data smarter? … Who’s going to be building ontologies? … I think it would be significantly cheaper and more valuable to develop better ways of expressing human-understandable ontologies.”

    The “by whom” point is pretty critical. But if you view the ontologies in a similar light to XML formats, it clearly isn’t always necessary for every end user to be able to create them. To some extent the publication of RSS data by tools like Movable Type shows how ontologically marked up data can be produced on a wide scale. OK, it’s only basic RDF Schema stuff, but a similar system could pump out tight little OWL pellets.

    So in answer to “How much effort is required to make data smarter?” I’d say “Very little.”

    For “Who’s going to be building ontologies?” I’d say *for the most part* regular application developers.

    But…”I think it would be significantly cheaper and more valuable to develop better ways of expressing human-understandable ontologies”. I agree with your underlying point here, but I think it’s just the kind of thing Semantic Web technologies can help with.
    The model used is basically very human-friendly – just saying stuff about things, using (triple) statements. The part of the expression that is ugly is encoding this in RDF/OWL or whatever. I personally think this is primarily a UI issue, and eminently doable without any new invention.
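    To make the “just saying stuff about things” model concrete, here’s a minimal sketch of triple statements in plain Python — no real RDF library, and the URIs below are made up for illustration, not actual published vocabularies:

```python
# Hand-rolled RDF-style triples: (subject, predicate, object) statements.
# The namespace and resource URIs are illustrative only.
RSS = "http://purl.org/rss/1.0/"

# A publishing tool could emit statements like these automatically:
triples = [
    ("http://example.org/blog", RSS + "title", "My Weblog"),
    ("http://example.org/blog/post1", RSS + "title", "Do We Need the Semantic Web?"),
    ("http://example.org/blog", RSS + "items", "http://example.org/blog/post1"),
]

def objects(subject, predicate):
    """Query the triple store: everything said about (subject, predicate)."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(objects("http://example.org/blog", RSS + "title"))  # → prints ['My Weblog']
```

    The ugly part — serializing these statements as RDF/XML — can be hidden entirely behind the tool, which is the UI point above.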

    I’ve been working on supporting custom user vocabularies in IdeaGraph, the idea being that they use whichever (human) words they want, but that these personal ontologies will actually be stored in an RDF form. Creating a custom schema isn’t really any harder than creating instance data, and if you give the user/project/whatever a namespace of its own, it’s really straightforward. I’m pretty sure the term constraints etc can be added using fairly standard UI components, it’s just a matter of shielding the end user from the ugliness of formal/code language/syntax.
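    A rough sketch of what that shielding might look like — the user types ordinary words, and the tool quietly mints terms in a personal namespace behind the scenes. Everything here (the namespace URL, the class, its API) is invented for illustration, not IdeaGraph’s actual design:

```python
from urllib.parse import quote

class PersonalVocabulary:
    """Maps a user's plain-language words to URIs in a personal namespace."""

    def __init__(self, namespace):
        self.namespace = namespace
        self.terms = {}  # human word -> minted URI

    def term(self, word):
        # Mint a URI the first time a word is used; reuse it afterwards,
        # so the user's vocabulary stays stable without them ever seeing a URI.
        if word not in self.terms:
            self.terms[word] = self.namespace + quote(word.lower())
        return self.terms[word]

vocab = PersonalVocabulary("http://example.org/danny/terms#")
statement = (vocab.term("IdeaGraph"),
             vocab.term("supports"),
             vocab.term("Custom Vocabularies"))
print(statement)
```

    The user only ever sees “IdeaGraph supports Custom Vocabularies”; the RDF form exists solely for the machine.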

    btw, I’m also now shamed into finishing Mike’s book and doing the review I promised weeks ago…

  2. Good post Eugene, and good comments Danny. Coincidentally, I started writing a post for my blog (should be up later today) called “Semantic web 2003: not unlike making music on a TRS-80 in the 1970’s” basically about interaction design and UI in relation to the semantic web.

    What I think both of you are pointing out is that the semantic web development to date has included very few developments to serve creative human interaction and understanding.

    Making music is a good analogy for me personally, and the semantic web to date seems to be more about getting richer and richer data about sound synthesis than about making the human interfaces of musical instruments that might creatively use those sounds for music.

  3. I think that in order to get the network effects, you’ve got to have “ordinary” people creating their own unique subjective ontologies, and communicating them to other humans. Only when we have spoken this new language of ontology with one another on a broad scale over a period of time will we begin to discover a “common” ontology that the “regular application developers” Danny mentions will then be able to encode.

    Doug Engelbart is spot on: we’ve got to make the humans smarter, but fortunately, the machines can help us with that. For example, Google’s repository, sliced up by word count, is one half of a Bayesian classification system. As Google crawls the web, why not build up an accompanying RDF file for every web page it encounters, assigning the page to various taxonomies based on word count? That’s the machine learning side of the equation.

    The human learning side comes in this way: develop a web-based aggregation service, where each user has a personal RDF aggregator. The personal aggregator at first has an empty ontology, but as the user interacts with the aggregator, he has an opportunity to “score” each piece of data in various ways: is it categorized correctly, is it personally relevant or interesting, etc. In so doing, the user is slowly, over time, building up a personal, subjective ontology (the closest present-day examples of this are people’s OPML-based blogrolls). The news aggregator web service could then find a way to synergize these personal, subjective ontologies, ultimately using them to alter the weights used in its automated algorithm for generating RDF metadata for ordinary web content. In so doing, each of our subjective ontologies serves to inform the “common” ontology.
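    A toy version of the word-count scoring idea — a naive Bayes classifier trained on a user’s own relevance judgments. The training data here is invented; a real aggregator would learn from the user’s feedback clicks:

```python
import math
from collections import Counter

def train(labeled_docs):
    """Count word occurrences per label from (words, label) training pairs."""
    counts = {}  # label -> Counter of words
    for words, label in labeled_docs:
        counts.setdefault(label, Counter()).update(words)
    return counts

def classify(counts, words):
    """Pick the label whose word distribution best explains the document."""
    best, best_score = None, -math.inf
    for label, c in counts.items():
        total = sum(c.values())
        # Sum of log-probabilities with add-one smoothing for unseen words.
        score = sum(math.log((c[w] + 1) / (total + len(c))) for w in words)
        if score > best_score:
            best, best_score = label, score
    return best

model = train([
    ("rdf owl ontology triples".split(), "relevant"),
    ("celebrity gossip recipes".split(), "not relevant"),
])
print(classify(model, "ontology triples".split()))  # → prints "relevant"
```

    Each “score” the user gives adds another labeled document, so the personal ontology sharpens with use.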

    People are already working on introducing machine learning to news aggregators. In addition to Classifier4J, AI::Categorizer and Weka are good candidates for supporting the machine learning side of things.

  4. Nice post, a few random thoughts.

    One of the biggest reasons the SW hasn’t taken off is that the W3C hasn’t finished standardizing that last mile, the connection between the web of semantics and the user. The recent XForms spec is a step in the right direction.

    Another gap to be bridged is between us XML geeks and regular people. Getting people to write their own ontologies or even understand what an ontology is will be a tricky task. Ontology is the cornerstone of the semantic web and XML, and should be the first thing that is explained.

    Modeling the physical world is a better place to start than modeling the informational world. It’s easier for people to connect with because there’s a clearer distinction between the two. The ontologies of chairs and apples and legos and so forth. I wrote a short intro to the SW where I tried to draw an analogy between the ontology of a lego block (very simple) and the ontology of other real world objects.

    Ontology should be taught in grade school, just as a way of thinking and looking at the world, even if there were no computers. We use something similar to an ontology every time we recognize an object as being in a set of objects. It’s just a matter of getting people to realize what their mind is doing unconsciously anyway.

  5. Just sit back and imagine we’re not in 2004. We’re in January 1600 AD: in a couple of weeks Giordano Bruno will make his last voyage to Campo dei Fiori, and architecture, construction, and building are almost as flourishing as the software industry is today. Cathedrals and palaces have been built for hundreds of years, without the slightest knowledge of partial differential equations or strength of materials. Did those people think that mathematical speculations would be useful? Do you think that the Empire State Building could be built using 17th century techniques? (Yes, they built pyramids. We also have the nowadays equivalent – software systems costing 1,000 times more than expected and not meeting their schedules).
    I think – and I wrote it, albeit in French http://etudiants.fsa.ulaval.ca/altom2/ – that we DO need a scientific basis to the realization of information artifacts, just like civil engineering is based upon mathematics, physics and material science. We don’t need it for what Brown in “Objects in plain language” calls software equivalents of “dog houses”, but we need it for skyscrapers.
    However, I agree that the price to pay is high. The traditional “trial and error” way of “building” information systems would not go away in a couple of years, as we cannot miraculously teach all the professionals in the field the new paradigm. It won’t simply “catch on”. Not at a large scale. But I think it’s worth trying to introduce it at a small scale, and show its advantages. Other paradigm shifts (like object orientation) provide insights. It might be the “next big thing”, if properly *marketed*.
