
Friday 12 August 2011

HTML5 + Linked Data + Multimedia + TV experience = An HTML5 Leanback TV webapp from greenhughes.com

This post really caught my attention:

An HTML5 Leanback TV webapp that brings SPARQL to your living room | greenhughes.com

"When you are sat on the sofa at the end of the day relaxing and watching TV, maybe eating food and not in the mood to have to keep constantly making decisions about what to watch you might not think that you are in a situation where Linked Data and SPARQL queries could be useful. Yet the flexibility of the data that can be obtained from data sources supporting these technologies makes them ideal candidates to power a Leanback TV experience."

"...By taking an existing template and an existing, very flexible, source of data we can create a whole new way for people to discover content on offer"

Well, people keep asking: "Where is linked data? How can I feel it?" Here is a good example. You could say this could be done with Web 2.0 mashups, and yes, it could. But in this example the data comes from the Open University's SPARQL endpoint, which is what distinguishes it from the usual Web 2.0 approach. Web 2.0 lets you publish your data using Web technologies such as RESTful Web Services or Ajax, but applications still don't understand each other; only experienced developers can "understand". In Web 3.0 you publish your data not only with common Web technologies but also with Semantic Web technologies: you represent your data in RDF, expose it through a SPARQL endpoint, and use 3xx redirects (typically 303) and content negotiation to make your RDF data dereferenceable. People with creative ideas can then build far more powerful applications on top of it. That's linked data!
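To give a feel for what this means in practice, here is a minimal sketch in Python of pulling data from a SPARQL endpoint with the SPARQLWrapper library. The endpoint URL and the query are my own illustrative assumptions, not the ones the Leanback webapp actually uses.

    # A minimal sketch of pulling Linked Data from a SPARQL endpoint.
    # Requires: pip install SPARQLWrapper
    # The endpoint URL and the query are illustrative assumptions, not the
    # exact ones the Leanback webapp uses.
    from SPARQLWrapper import SPARQLWrapper, JSON

    endpoint = SPARQLWrapper("http://data.open.ac.uk/query")  # assumed OU endpoint URL
    endpoint.setQuery("""
        PREFIX dcterms: <http://purl.org/dc/terms/>
        SELECT ?item ?title WHERE {
            ?item dcterms:title ?title .
        } LIMIT 10
    """)
    endpoint.setReturnFormat(JSON)

    results = endpoint.query().convert()
    for binding in results["results"]["bindings"]:
        print(binding["item"]["value"], "-", binding["title"]["value"])

The point is exactly the flexibility the post describes: swap the query and the same endpoint powers a completely different view of the content.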

It seems to me that the roadmap described by Tim Berners-Lee is, step by step, coming true. You can never imagine all the ways linked data will be used. You can't! Simply because people are so creative.

Monday 1 August 2011

Are Rich Snippets Saving Linked Data as well as Multimedia?

As I said in my previous post, Want a semantic web/linked data job?, how linked data reaches end users will depend on the big companies. Of course, today's "big companies" will not necessarily still be around in ten years. When you use the latest Google Chrome browser, do you still remember that it was Netscape that first introduced the browser to a mass audience? Anyway, as a linked data researcher and developer, I have to court the favour of Google, Yahoo and Microsoft, i.e. I have to create what they consider valuable and publish data in the forms they prefer... poor me...

Here is an interesting presentation about Google's vision for using Linked Data in its search:
How Google is using Linked Data Today and Vision For Tomorrow

Before I looked at the presentation, I expected Google to mention some of the giant Linked Data projects, such as DBpedia, Freebase, RKBExplorer, Jena, Sesame, etc. But quite to my surprise, the presentation is more like an advertisement for RDFa and microformats (known as Rich Snippets). Why? Google, you naughty boy! Google is a search engine that indexes web pages, so why would it build another big chunk of infrastructure like Sindice to crawl triple stores? Google is clever and already dominates most users' search experience. If you go to Sindice and are not a Semantic Web expert, you will have no idea how to search efficiently or how to interpret the search results.

Google has adopted RDFa as a pattern for publishing linked data and uses it to provide better search results. Some vocabularies, such as Event, Review and Geo location, are now machine-understandable (or more exactly, "Google-understandable"). OK, what I really care about is the place of multimedia in Rich Snippets. If you copy the URL of a YouTube video playback page into Sindice's inspector tool, you won't find any RDFa. Isn't YouTube owned by Google? Yes! So where is the RDFa on YouTube?
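To make that concrete, here is a small Python sketch of what "Google-understandable" markup looks like: it fills in an RDFa Event description in the data-vocabulary.org style used in Google's Rich Snippets examples. The event details and URL are made up, and the exact property names should be treated as illustrative rather than definitive.

    # A sketch of the kind of RDFa markup Rich Snippets consume, generated
    # from Python so it could be dropped into a page template. The vocabulary
    # (data-vocabulary.org Event) and property names follow Google's Rich
    # Snippets examples of the time; treat them as illustrative.
    event = {
        "summary": "Prime Minister's Questions",
        "start": "2011-07-20T12:00",
        "location": "House of Commons, London",
        "url": "http://www.example.org/debates/pmq-2011-07-20",  # hypothetical URL
    }

    rdfa = """
    <div xmlns:v="http://rdf.data-vocabulary.org/#" typeof="v:Event">
      <a href="{url}" property="v:url"><span property="v:summary">{summary}</span></a>
      <span property="v:startDate" content="{start}">20 July 2011, 12:00</span>
      at <span property="v:location">{location}</span>
    </div>
    """.format(**event)

    print(rdfa)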

Well, there are some resources about multimedia and RDFa. Google is trying to work on RDFa for videos:
Supporting Facebook Share and RDFa for Videos
It's a good step forward: at least we can see some basic metadata about videos in RDFa. And if you embed these attributes in your page, your result in Google search will display a thumbnail image instead of just a link. Isn't that great?
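As a rough illustration of what embedding those attributes involves, here is a small Python sketch that emits Open Graph style meta tags for a video page. The property set and all of the values are my own assumptions for illustration; the post linked above documents the attributes that are actually supported.

    # A sketch of Facebook Share / Open Graph style video metadata for a page
    # header. Property names follow the Open Graph protocol; the URLs and
    # values are hypothetical examples.
    video = {
        "title": "UK Parliament debate replay",            # made-up example
        "type": "video",
        "url": "http://www.example.org/debates/123",        # hypothetical URL
        "image": "http://www.example.org/debates/123.jpg",  # thumbnail shown in results
        "video": "http://www.example.org/debates/123.swf",
    }

    meta_tags = "\n".join(
        '<meta property="og:{0}" content="{1}" />'.format(key, value)
        for key, value in video.items()
    )
    print(meta_tags)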

However, that is not the end. Where are the media fragments? I am fed up with whole-video multimedia search; I need to find media fragments! Can you index media fragments in RDFa? This picture from Google's linked data presentation is quite striking:

But unfortunately, this is only a mockup! I have recently been working on a demo that indexes UK Parliament debate videos, and I have tried to embed RDFa into the debate replay page. See the screencast of the demo on YouTube. I used the Media Fragments 1.0 draft and the Linking Open Descriptions of Events (LODE) ontology to give a simple model of the debate.
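For readers who want a feel for the modelling, here is a rough rdflib sketch along the lines of the demo: a debate event linked to a sixty-second Media Fragments URI. The URIs, the title and the choice of LODE property are my own assumptions, not the demo's exact RDFa.

    # A rough sketch of the kind of model used in the demo: a debate segment
    # identified by a Media Fragments URI (#t=start,end) and described with
    # the LODE ontology. URIs, the title and the property choices are my
    # assumptions about the modelling, not the demo's exact markup.
    from rdflib import Graph, Literal, Namespace, RDF, URIRef

    LODE = Namespace("http://linkedevents.org/ontology/")
    DCTERMS = Namespace("http://purl.org/dc/terms/")

    g = Graph()
    g.bind("lode", LODE)
    g.bind("dcterms", DCTERMS)

    # A 60-second fragment of the debate video, addressed with Media Fragments 1.0.
    fragment = URIRef("http://www.example.org/debates/123/video#t=120,180")  # hypothetical
    debate = URIRef("http://www.example.org/debates/123")                    # hypothetical

    g.add((debate, RDF.type, LODE.Event))
    g.add((debate, DCTERMS.title, Literal("Debate on the Education Bill")))  # made-up title
    g.add((debate, LODE.illustrate, fragment))

    print(g.serialize(format="turtle"))

Once the fragment has its own URI, a crawler that understands RDFa could in principle index the segment itself rather than the whole video.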



I think Google will index media fragments embedded in RDFa one day; I can't see any reason why not. The same goes for Bing, Yahoo, or even Baidu in China~~~