The human search engine

Shortly after the “Web 2.0” buzz slowed down a little, around 2006, “Web 3.0” started to become the place to be. Well, it only started. These monikers actually say very little: they are as unable to describe the ideas they stand for as they are overarching.

There is of course an official interpretation of what people mean by 3.0-ing the web. They usually reference the semantic web concepts floating around out there. Semantics in web technology usually refers to a layer of added meaning that glues otherwise unconnected, widely scattered pieces of information together into a cluster of knowledge that can be referenced in natural language.

Instead of typing a generic search query into Google’s search bar and tweaking it with logical operators like AND or OR, people should be able to send requests shaped after the way they think. Instead of managing the question “How do you search for something?”, people in a semantic web environment are supposed to focus on the “What are you searching for?” part.

Semantic search engines more or less become a generic meta-browser: they draw information from different sources, connect it by meaning and present a cluster of knowledge to the user in a clean and transparent way. Be sure to check out Freebase Parallax (Christoph, thanks for the link), an open source project trying to deliver this new kind of web experience.

Instead of searching for “European Union Members” on Google, you start by typing “European Union”. Parallax then presents a number of possible contexts to surround your search with, e.g. “Member States”. After adding “Member States”, the Parallax interface successively adapts to present more complex associations and contexts around your initial request. You can even start to visualize your results on a map, or define charts and statistical analyses.

(Screenshot: the Freebase Parallax interface)
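
To make this kind of faceted, entity-centric refinement a bit more concrete, here is a minimal Python sketch. It does not use Parallax or Freebase’s actual query API; the tiny in-memory data set and the helper functions are purely illustrative assumptions.

```python
# Illustrative sketch of faceted, entity-centric refinement.
# Neither the data set nor the helpers reflect the real Freebase/Parallax API.

ENTITIES = {
    "European Union": {
        "type": "organization",
        "member states": ["Germany", "France", "Austria", "Sweden"],
        "founded": 1993,
    },
    "Germany": {"type": "country", "capital": "Berlin"},
    "France": {"type": "country", "capital": "Paris"},
    "Austria": {"type": "country", "capital": "Vienna"},
    "Sweden": {"type": "country", "capital": "Stockholm"},
}

def facets(entity):
    """Which contexts (relations) can be added to the current entity?"""
    return [key for key in ENTITIES.get(entity, {}) if key != "type"]

def refine(entity, facet):
    """Follow one facet and return the connected entities with their data."""
    values = ENTITIES.get(entity, {}).get(facet, [])
    if not isinstance(values, list):
        values = [values]
    return [{"name": value, **ENTITIES.get(value, {})} for value in values]

# Start with "European Union", look at the offered contexts, then add one.
print(facets("European Union"))                 # ['member states', 'founded']
for country in refine("European Union", "member states"):
    print(country["name"], "-", country["capital"])
```

A real semantic engine would pull these relations from many sources on the fly instead of a hard-coded dictionary, but the interaction pattern – pick an entity, pick a facet, refine – is the one Parallax exposes in its interface.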

At the moment the interface is far from intuitive. But an intuitive and usable interface seems to be the killer feature for semantic web applications.

Another approach to semantic applications is Ubiquity, a project by the Mozilla Foundation. It is essentially a browser add-on that gives you the power to work with information shown in your browser in a new, direct and semantic way. Let’s say you are searching for movies that are showing this coming weekend. After selecting the search results of interest with your mouse, you could enter “map this” into the Ubiquity interface and get a map representation of your search results without having to launch Google Maps or another geo-mapping service. The official project video gives you a glimpse of what it’s all about:
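
Actual Ubiquity commands are small JavaScript snippets, so the following Python sketch only mimics the underlying idea of verbs acting on the current selection; the command registry and the handlers are my own illustrative stand-ins, not Mozilla’s API.

```python
# Conceptual sketch of Ubiquity's "verb acting on a selection" model.
# The registry and handlers are illustrative, not Mozilla's actual API.
from urllib.parse import quote_plus

COMMANDS = {}

def command(name):
    """Register a verb the user can type into the command line."""
    def register(handler):
        COMMANDS[name] = handler
        return handler
    return register

@command("map this")
def map_this(selection):
    # Turn the selected text (e.g. a list of cinemas) into a map query
    # instead of forcing the user to open a mapping service manually.
    return "https://maps.google.com/?q=" + quote_plus(selection)

@command("translate this")
def translate_this(selection):
    return f"(would send {selection!r} to a translation service)"

def ubiquity(user_input, selection):
    """Dispatch whatever the user typed against the current page selection."""
    handler = COMMANDS.get(user_input.strip().lower())
    return handler(selection) if handler else "no such command"

print(ubiquity("map this", "cinemas showing weekend previews"))
```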

On Twitip I came across an interesting article by John Goalby, basically arguing that the way we interact with Twitter and other social messaging tools will – in the years to come – replace the current paradigm of search engines. Entering your search query into Twitter and receiving context-sensitive direct responses from other Twitter users could be seen as a new kind of search engine: a social search engine with every single user representing one processing unit.

At the moment there are still some open questions about that kind of search engine, for instance how long you have to wait for your results (remember, we are talking about people – 100% availability seems utopian from that point of view). There is also no guaranteed quality of service (QoS) when querying this human search engine. You can never be sure whether the resulting information is completely dumb and unrelated or a precious piece of scientific expertise.
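
To illustrate the latency and QoS problem, here is a small Python sketch of what querying such a human search engine could look like. post_question() and fetch_replies() are hypothetical placeholders for whatever social messaging API you would actually use; the point is only the polling loop and the timeout.

```python
# Hypothetical sketch of querying a "human search engine" built on a social
# messaging service. post_question() and fetch_replies() are placeholders,
# not a real API – neither latency nor answer quality can be guaranteed.
import time

def post_question(text):
    print(f"posted: {text!r}")
    return "msg-42"  # pretend message id

def fetch_replies(message_id):
    # In reality this would poll the service for replies to our message.
    return []  # nobody has answered yet

def human_search(query, timeout=60, poll_every=5):
    """Ask the crowd and wait – with no guarantee that anyone ever answers."""
    message_id = post_question(query)
    deadline = time.time() + timeout
    while time.time() < deadline:
        replies = fetch_replies(message_id)
        if replies:
            # Quality is equally unguaranteed: replies may be spot-on or noise.
            return replies
        time.sleep(poll_every)
    return None  # the crowd simply did not respond in time

answers = human_search("Which movies are worth seeing this weekend?",
                       timeout=5, poll_every=1)
print(answers or "no answer – falling back to a classic search engine")
```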

But the implications reach further. This Twitter-Google analogy is founded on one simple axiom: people are not much more than information processing units woven into a worldwide grid of information clusters and workstations. Am I the only one who is forced to think about this:

(Image: The Matrix, via www.dan-dare.org)

Seeing Twitter and its brothers and sisters as a legitimate replacement for the traditional search engine concept also implies that we explicitly accept the chaos of the web. There is no FAT32 or NTFS of cyberspace. As soon as you lift your view above the standardized transport and infrastructure layers, information is stored chaotically across the web.

The future of the semantic web? Maybe it is some kind of meta operating system – an operating system for the web – translating our queries against a chaotic pool of information, while the system itself builds new connections and retrieves additional, refining information on its own that, in turn, might or might not be of interest to the user. The web itself then becomes a kind of soup-like file system: no apparent arrangement, accessible only through search queries instead of a traditional path and address structure.
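
To put that contrast into a few lines of code: a traditional file system is addressed by path, while the soup-like web-as-file-system sketched above would be addressed purely by query. The WebSoup class below is only a toy illustration of the idea, not an existing system.

```python
# Toy contrast between path-based and query-based access.
# WebSoup is a made-up illustration of a "file system" without any paths.

# Classic model: you have to know WHERE something lives.
# with open("/home/user/documents/eu_members.txt") as f:
#     data = f.read()

class WebSoup:
    """An unordered pool of documents, reachable only through queries."""

    def __init__(self):
        self._pool = []  # no hierarchy, no paths – just a soup of documents

    def add(self, text, **metadata):
        self._pool.append({"text": text, **metadata})

    def query(self, *terms):
        """Return every document that matches all search terms."""
        terms = [term.lower() for term in terms]
        return [
            doc for doc in self._pool
            if all(term in " ".join(str(v) for v in doc.values()).lower()
                   for term in terms)
        ]

soup = WebSoup()
soup.add("Austria joined the European Union in 1995.", topic="EU membership")
soup.add("The Matrix was released in 1999.", topic="film")

# You describe WHAT you are looking for, not WHERE it is stored.
print(soup.query("european union", "austria"))
```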
