
Brandon Werner

Extending the magic of software to the cloud

Thanks To Siri and Kinect, Web 3.0 can now happen

Savas Parastatidis wrote a blog entry on the semantic web and machine learning, covering the history of the area and discussing the future of semantic processing as well as the false starts the field has had in the past. For hope of a future, he points to Apple's Siri technology and argues that the knowledge web will require loose processing of the existing web instead of relying on the models proposed in OWL, microformats, and the like, which have failed. Although that is a good goal, and Google and Bing have proven some capability at this, I think this approach - which requires AI and unaided relationship reasoning - is doomed to be another failure in the quest for the semantic web. Instead, I see something even more amazing occurring, something which could finally give us Web 3.0 dreamers the world we wanted ten years ago (indeed, the path was laid out well for us by Nicola Guarino and Christopher Welty) - saving the environment and giving us a richer future for our interaction with our devices.

The First Generation of NUI Arrived Last Year


The last year has given us huge advancements in NUI interfaces such as Siri, TellMe, and Kinect. These interfaces let us interact with knowledge, either locally or across the web, through voice and motion gestures. This has proven to be a lot of fun for customers, and opens up technology to more people in a new and engaging way. However, it was not long before customers discovered the limitations of the first generation of these NUI devices. These limitations can be itemized as follows:

  • The accessible knowledge is scoped to the local knowledge of the device or service
  • The interface is limited only to the ontologies (read: keywords, models) that the device or service understands
  • The interface is locked to the platform and the platform provider (for ease of use)
  • The experience degrades substantially as soon as the query for knowledge extends beyond any of the above restrictions
  • The requirement of backend processing of queries against a single point of knowledge leads to the service slowing down or becoming unavailable

These restrictions are understandable once one grasps the following fact: the internet is dumb.

The academic and engineering world has known this for some time, and has made various attempts to rectify the problem.

This is the history so far:

  1. First, they tried web services and the ability to "discover" knowledge providers through a self-describing knowledge warehouse (SOAP, UDDI, XML).
  2. Second, they tried to mark up the existing websites to deliver knowledge the same way we share knowledge with people (XHTML, RDF, microformats); a minimal example is sketched just after this list.
  3. Finally, they tried to just parse the web as it exists, using probability and graph analysis to determine relevancy.
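
As a rough illustration of the second approach, here is the kind of markup those standards asked site owners to add. The hCard class names (vcard, fn, org, url) are the real microformat vocabulary; the person and site in the snippet are invented for this example.

    <!-- hCard microformat: semantic meaning is carried by agreed-upon class names -->
    <div class="vcard">
      <span class="fn">Ada Lovelace</span>,
      <span class="org">Analytical Engines Ltd.</span>,
      <a class="url" href="http://example.com/ada">homepage</a>
    </div>

A crawler that understands hCard can read this as a person with a name, an organization, and a homepage, rather than as undifferentiated text.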

The third is where we are now, and it is a ridiculous state of affairs.

Mining the Internet Kills Us and The Planet

This "third way" of mining the internet just doesn't scale. Certainly it as created new technologies to process large amounts of data, including map reduce and bulk synchronous parallel algorithms becoming mainstream. Hadoop offered through both Amazon and Microsoft on a PaaS level demonstrates the customer demand this has created. However this is more a reflection of the problem presented to the young Google trying to mine knowledge from the web ten years ago than it is any solution, or even a good one. If one considers the developer cost of using any of the other approaches listed above (discoverable services, self-describing web pages) vs. the huge amount of server space, energy use, and mindshare being used to parse the unstructured internet, the allegation that we software engineers are perpetrating something liken to a crime against humanity holds merit.

One would think that agreeing to - and using - any of the methods that would lead to a richer and more discoverable web of knowledge would receive attention and mindshare, considering the consequences of mining the internet as it is today. What we have given the world instead are competing and diverse standards such as schema.org, RDFa, microformats, XFN, and the like, accompanied by breathless blog posts about why a certain standard will change the world. In the face of these standards, many of which are backed by different companies to different ends (IBM for RDF, Google for schema.org), engineers have not picked a winner but have instead sat out the battle. This results not only in a dumb web staying dumb, but also in an increase in the computing and natural resources used to mine knowledge that could so easily be made discoverable by the very people who are trying to discover it.

Facebook shows us the way forward: The Semantic Markup Rush

Something interesting happened while this was going on: Facebook introduced "likes" and the Open Graph. While many lamented the proliferation of these buttons across the internet mucking up the web, some went and did something that ten years of standards could not get them to do: they started marking up their websites with semantic knowledge.

For the first time, IMDB wasn't just a webpage but a rich, machine-readable database of movies. Amazon.com became a rich resource of book data, and every blog finally had "articles" that had an "author" with a "first name" and a "last name". It happened for one reason: the profit motive. Companies that participated in the Open Graph got the benefit of the "link back" when users shared content on their Facebook Walls and profiles. This meant that anyone who didn't participate in the Open Graph was left out of the competition for eyeballs and money. Even those who use advertising as their chief revenue stream could use rich, seamless sharing with Facebook as a way of increasing the network effect of their sites.
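
To make that concrete, this is roughly what Open Graph markup on a movie page looks like: a handful of meta tags in the page head that declare the page to be a typed object. The og:title, og:type, og:url, and og:image properties are the core set defined by the protocol; the values below follow the familiar IMDB example from ogp.me.

    <head prefix="og: http://ogp.me/ns#">
      <!-- Open Graph: the page describes itself as a typed, shareable object -->
      <meta property="og:type"  content="video.movie" />
      <meta property="og:title" content="The Rock" />
      <meta property="og:url"   content="http://www.imdb.com/title/tt0117500/" />
      <meta property="og:image" content="http://example.com/rock-poster.jpg" />
    </head>

When a user shares the page, Facebook reads these tags and treats the URL as a movie object with a title and an image, not just a link.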

You could argue that Twitter could also provide this benefit, and without requiring semantic markup, but this is where vision matters: Facebook anticipates the future need for machine learning to make the user experience more compelling, allowing users to discover products and services that relate to each other in richer ways. The need is not for "dumb" links that surround the user, but for knowledge of the objects, and their types, that surround the user. Here we see Facebook part ways with the ten-year-old philosophy of Google, which relies on graph analysis to feel out the links a user puts on their profile. Facebook moves towards the true knowledge web of links to objects. It's telling that Google doesn't pitch the semantic tags of its schema.org proposal in its tutorial on how to add the Google Plus button to a website, indicating only the description and name properties instead. This highlights the gap in understanding in competitors' minds of what makes Facebook so useful to customers now and in the future.
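
For comparison, schema.org expresses typed objects inline through the microdata attributes itemscope, itemtype, and itemprop. The sketch below is modeled on the getting-started examples published at schema.org; the specific movie is purely illustrative.

    <!-- schema.org microdata: typed objects and their properties embedded in the page body -->
    <div itemscope itemtype="http://schema.org/Movie">
      <h1 itemprop="name">Avatar</h1>
      <div itemprop="director" itemscope itemtype="http://schema.org/Person">
        Director: <span itemprop="name">James Cameron</span>
      </div>
      <span itemprop="genre">Science fiction</span>
    </div>

A parser that understands microdata sees a Movie with a name, a genre, and a director who is a Person, rather than a pile of divs and spans.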

However, for NUI interfaces to become truly useful and the knowledge web to take off, this gathering of objects is just the first step.

Siri, Kinect: The Next Great Markup Rush

In order to solve the local knowledge problems of NUI interfaces, we have to agree on a shared ontological taxonomy. Again, I don't believe that those who propose the crowd-sourced model previously put forward by the likes of MIT or Stanford will get us where we need to go. Going forward, there is an opportunity for those who are developing these next-generation NUI interfaces to lead on a new standard. Anyone who wishes to get in front of Kinect or Siri and have their weather content, flight information, or recipes hit the maximum number of eyeballs will participate.

The only way this works is if, like Facebook with the Open Graph, the taxonomy is developed and registered as a required part of the NUI app creation process. This ensures that developers get the immediate benefit of their labor - the ability to get in front of customers through a NUI interface - while at the same time there are no semantic breaks in the taxonomy being developed. This would not only change the web, but also deliver benefits that address the limitations I listed above:

  • NUI interfaces would break out of their limited scope
  • Reasoning engines could actually work with a structured taxonomy, allowing better functionality over time
  • The computing resources now spent parsing the dumb web would no longer be needed

Although those in the open source world will be disappointed by this proposal - a commercial entity building the semantic structure as the price of access to its NUI experience - history has proven that most developers are coin-operated and require ROI. The App Store boom has proven this. Now that we have NUI interfaces that require this level of reasoning and interaction, the time is ripe to push forward.

No matter what the source, we cannot accomplish this new goal without a killer NUI app as well as the means to write semantic content for it.

Let's hope it happens. The Semantic Web has waited long enough.

About Brandon


I’m Brandon Werner, a software engineer in Seattle. I love good friends, good coffee, and good ideas shared around a room.
 

I work for Microsoft on the Windows Azure team, helping design the next-generation cloud identity platform for people and businesses around the world. I lead the design of the SDKs for all non-Microsoft platforms for this effort, including node.js, Java, and PHP on the server and Android and iOS on devices. I also drive the open sourcing of Microsoft protocol and identity libraries and work to modernize the development experience for developers at Microsoft.

I joined Microsoft in 2008 as part of the scrappy team building the competitor to Google Apps, which became Office 365.

You can follow me on Twitter.

  • Where's the 'Like' button on here? :)

  • Good point Roy ;-) The metadata in MSDN blogs are very poor and can't be altered by mere man.
