While working on a project, I needed a feature like Facebook’s “Post a link”. The user enters a URL, clicks on preview and gets a preview of how that page will be posted on his profile. Facebook does a few tricks:
- It fetches the URL’s title and description (a small paragraph about that page’s content)
- It provides the user with a list of images from that page so that he can pick one of them
- It also lets the user edit the title and description if he doesn’t like what Facebook suggested
A user then posts that URL. That’s when things start to get even more interesting:
- if the link is to a video from YouTube or Vimeo, for example, it’ll show the YouTube or Vimeo player (an embedded object)
- if it’s a Flickr photo page, it’ll show the image
- if it’s not one of the websites that are handled specially, the title, description and image are shown
I needed to have the same behavior.
First, let’s handle the generic case of any webpage. Given any URL, I had to fetch that page’s HTML content and figure out the following.

We can figure out the page’s title from any of these sources:
- The title tag <title>This is the title!</title>
- The meta title tag <meta content="This is the title!" name="title">
- The URL itself
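That fallback order can be sketched as follows (a rough, regex-based sketch; the function name is mine, and a real implementation should use a proper HTML parser rather than regexes):

```javascript
// Sketch: extract a title for a page, falling back through the three
// sources listed above. Regex-based for brevity only.
function extractTitle(html, url) {
  // 1. The <title> tag
  var m = html.match(/<title[^>]*>([^<]*)<\/title>/i);
  if (m && m[1].trim()) return m[1].trim();
  // 2. The <meta name="title"> tag (attributes in either order)
  m = html.match(/<meta[^>]*name=["']title["'][^>]*content=["']([^"']*)["']/i) ||
      html.match(/<meta[^>]*content=["']([^"']*)["'][^>]*name=["']title["']/i);
  if (m && m[1].trim()) return m[1].trim();
  // 3. Fall back to the URL itself
  return url;
}
```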
The description, again, can come from multiple sources, but I prefer to rely only on the first:
- The meta description tag <meta content="A small paragraph about that page’s content." name="description" />
- The first text appearing after the body tag, though that can never guarantee anything meaningful
Finally, the images: the src attributes of all the img tags on that page.
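In the same spirit, rough sketches for the description and the image list (again regex-based for brevity; function names are my own):

```javascript
// Sketch: pull the meta description and all img src attributes.
function extractDescription(html) {
  // Match the meta description tag, attributes in either order
  var m = html.match(/<meta[^>]*name=["']description["'][^>]*content=["']([^"']*)["']/i) ||
          html.match(/<meta[^>]*content=["']([^"']*)["'][^>]*name=["']description["']/i);
  return m ? m[1].trim() : null;
}

function extractImageSources(html) {
  // Collect the src attribute of every img tag on the page
  var srcs = [], re = /<img[^>]*src=["']([^"']+)["']/gi, m;
  while ((m = re.exec(html)) !== null) srcs.push(m[1]);
  return srcs;
}
```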
Now to the special pages. We’ll have to keep a database of all the sites we consider special and are able to treat differently. For example, YouTube: if we know a URL points to a YouTube video, we can build the proper embed object.
I felt it was too much for a simple application to handle, but I really wanted that feature. Before implementing anything, I searched around and even asked on Stack Overflow. Although I didn’t find the answer I wanted, the answers were really helpful. It was then that I learned about oEmbed:
“oEmbed is a format for allowing an embedded representation of a URL on third party sites. The simple API allows a website to display embedded content (such as photos or videos) when a user posts a link to that resource, without having to parse the resource directly.” – http://oembed.com/
Unfortunately, it was too good to be true. The list of websites that support oEmbed is very limited. oohEmbed is a project that tries to address that issue by providing oEmbed for a greater number of websites.

oohEmbed proxies calls to oEmbed providers when possible and provides its own implementation when needed. The thing I like most about oohEmbed is that it provides a single URL through which you access oEmbed for various websites, in contrast with the original oEmbed protocol, where each website provides its own URL for accessing oEmbed.
For example, to get the representation for a Flickr URL (http://www.flickr.com/photos/ianpollock/6707007):
- Through Flickr itself: http://www.flickr.com/services/oembed/?url=http://www.flickr.com/photos/ianpollock/6707007
- Through oohEmbed: http://oohembed.com/oohembed/?url=http://www.flickr.com/photos/ianpollock/6707007
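Either way, the response is a small JSON (or XML) document describing the resource. For a photo, an oEmbed response has roughly this shape (the values here are illustrative, not Flickr’s actual output):

```json
{
  "version": "1.0",
  "type": "photo",
  "title": "Some photo title",
  "url": "http://farm1.static.flickr.com/some-photo.jpg",
  "width": 500,
  "height": 375,
  "provider_name": "Flickr"
}
```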
JSONP is simply a trick for grabbing JSON data while bypassing cross-domain restrictions. The idea is this: many APIs provide JSON responses, so how can a browser make use of these APIs directly?
- A script tag is inserted dynamically that points to the API endpoint, specifying the response format as JSON
- Normally, the endpoint responds with a plain JSON object
- JSONP calls for providing an additional callback parameter. Now, instead of returning a bare JSON object, the returned text represents a function call whose argument is the JSON object.
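To make the mechanism concrete, here is what the wrapping amounts to (a sketch: the eval stands in for the browser executing the dynamically injected script tag):

```javascript
// What a JSONP-aware endpoint does: if a callback parameter was given,
// wrap the JSON text in a call to that function.
function jsonpResponse(jsonText, callbackName) {
  return callbackName ? callbackName + '(' + jsonText + ')' : jsonText;
}

// The browser side: the page defines the callback, then inserts a
// <script> tag pointing at the endpoint. When the script loads, the
// function call executes. Here eval simulates that script execution.
var received = null;
function handleData(data) { received = data; }

var body = jsonpResponse('{"title": "A page title"}', 'handleData');
eval(body); // in a browser this happens when the injected script runs
// received.title is now "A page title"
```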
Flickr supports this technique: without a callback parameter, the call returns a plain JSON object; with one, it returns the same object wrapped in a call to the named function.
Solving the problem
- The user enters a URL and hits the preview button
- Using AJAX, a request is sent to a dedicated action on the application server. That action makes an HTTP request to the URL the user provided, parses the HTML to extract the page’s info (title, description, image sources), converts it to JSON and sends that JSON object back to the browser as the response
- The browser then uses that JSON data to show some data about the URL the user requested
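The JSON the action returns might look roughly like this (the field names here are my own choice, not a fixed format):

```json
{
  "title": "This is the title!",
  "description": "A small paragraph about that page's content",
  "images": ["http://example.com/a.png", "http://example.com/b.jpg"]
}
```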
The problem with that solution is that the HTTP requests made by the server are blocking: if too many requests of that type occur, the application servers might get stuck in them and become unable to service normal requests.
Having learned about JSONP as a clean solution to cross-domain communication, I decided it was best to move the work of requesting a page and parsing its HTML into a separate application, so the original application’s performance would not be affected. The new application is called Analyze This. In its current form it’s a simple Sinatra application with a single action.

The kind of requests Analyze This handles, blocking and somewhat slow, made it a perfect candidate for an upcoming version of NeverBlock that handles not only the DB but sockets and networking as well.
For example, an HTML page requesting information about a user-entered URL would use code similar to the following:
var url = encodeURIComponent($('#url').val());
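The rest of the page’s script might look like the following (a sketch: the element ids, the endpoint URL and the response fields are my assumptions, not necessarily what Analyze This uses; the callback=? placeholder is jQuery’s way of asking for a JSONP request):

```javascript
// Build the JSONP request URL for the analyzer (endpoint path is
// hypothetical). Kept as a pure function so it is easy to test.
function buildRequestUrl(endpoint, userUrl) {
  return endpoint + '?url=' + encodeURIComponent(userUrl) + '&callback=?';
}

// In the browser, with jQuery loaded:
//   var url = $('#url').val();
//   $.getJSON(buildRequestUrl('http://analyzer.example.com/', url),
//     function (data) {
//       // data carries the page info JSON shown earlier
//       $('#preview').text(data.title + ' - ' + data.description);
//     });
```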
To see it live, just download Analyze This, run the application and visit the sample page.
The Analyze This application itself is no big deal, but the point is that architecture really matters.