I want a Contact

  • that keeps phones/emails
  • linked to facebook/twitter/linkedin/windows live/google/etc
  • doesn't just auto-update its data from any of the linked accounts (it might notify me and allow me to accept changes… no behind-the-scenes smart-ass dumbness, please)
  • a contact that I can tag in applications' photos/calendars or whatever

I want something that represents real people!

Maybe just a website with profiles for people where they state their social profiles (like Google Profiles)?
The profile is just there as a proof to the existence of that person, nothing more, nothing less. I exist!

Each and every social network want the user to use its identity everywhere (Google Connect, OpenID, Facebook Connect, Twitter Anywhere)
They all solve different problems for themselves, but none solves it for me.

I want the concept of people loud and clear… I don’t know Facebook’s Ahmed or Twitter’s Ramez, I know Ahmed and Ramez!

Will smartphones solve this problem by trying to blend all the social craziness into the phone contact?

What will happen when I switch to another smartphone, another make, another model?

It's 2010 for God's sake and I want a solution to this!

These were just the chaotic ramblings of someone falling asleep at 3 am!


RubyInstaller: Getting mysql gem to build its native extensions

Since I'm one of those Windows Ruby developers, I was eager to give Rails 3 a try. What was stopping me was the Ruby version limitation that Rails 3 now has, as it requires Ruby 1.8.7 at least.

Since most of us Windows developers use the Ruby One-Click Installer, we were stuck at 1.8.6. Luis Lavena, however, has just released RubyInstaller RC2 with an installer for 1.8.7, which resurrected my hopes for running Rails 3 on Windows.

I wanted to give Rails 3 a try by converting an existing app to Rails 3. A major showstopper was the mysql gem, which refused to install because the native extensions did not build when doing

gem install mysql

I read the Getting started with Rails and MySQL tutorial, but I had MySQL 5.1, so I really wanted to be able to build the gem's native extensions.

I had

  1. Old Ruby installer at C:\ruby1.8
  2. MySQL 5.1 at C:\Program Files (x86)\MySQL\MySQL Server 5.1

What I did

  1. Installed RubyInstaller 1.8.7 RC2 at C:\ruby187rc2
  2. Modified the environment variables to point at the new Ruby installation
  3. Installed DevKit from http://rubyinstaller.org/ and followed its installation instructions at http://wiki.github.com/oneclick/rubyinstaller/development-kit
  4. Since I don't have MySQL's bin directory in the Path environment variable, I copied the file libmySQL.dll from C:\Program Files (x86)\MySQL\MySQL Server 5.1\bin to C:\ruby187rc2\bin
  5. I found several blogs pointing out how to build the mysql gem for the RubyInstaller, but they all required that MySQL be installed in a directory with no spaces in its path. The problem is I already had MySQL installed at a directory with spaces, so I used the following command to fake MySQL's location into some place with no spaces
    mklink /J C:\mysql51 "C:\Program Files (x86)\MySQL\MySQL Server 5.1"
  6. Installed the mysql gem using the command
    gem install mysql --platform ruby -- --with-mysql-include=C:\mysql51\include --with-mysql-lib=C:\mysql51\lib\opt

and everything now works like a charm 🙂

Analyze this !

While working on a project, I needed to have a feature like Facebook's "Post a link". The user enters a URL, clicks Preview and gets a preview of how that page will be posted on his profile. Facebook does a few tricks

  1. It fetches the URL’s title and description (a small paragraph about that page’s content)
  2. It provides the user with a list of images from that page so that he can pick one of them
  3. It also provides the user with the ability to edit the title and description if he doesn’t like what Facebook suggested

A user then posts that URL. That’s when things start to get even more interesting

  1. if the link is for a video from YouTube or Vimeo, for example, it'll show the YouTube or Vimeo player (embedded object)
  2. if it's a Flickr photo page, it'll show the image
  3. if it's not one of the specially handled websites, the title, description and image are shown

I needed to have the same behavior.

First, let's handle the generic case of any webpage. Given any URL, I had to get that page's HTML content and figure out the following

  • Title
    We can figure out the page’s title from any of the following
    1. The title tag <title>This is the title!</title>
    2. The meta title tag <meta content="This is the title!" name="title">
    3. The URL itself
  • Description
    Again, it can come from multiple sources but I prefer to rely on the first only
    1. The meta description tag <meta content="This is a description of the page." name="description" />
    2. The first text appearing in the body, though that's never guaranteed to be meaningful
  • Images
    The src attributes of all the image tags on that page
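A minimal sketch of that extraction logic in Ruby (regex-based purely for illustration; real code should use an HTML parser such as Nokogiri, but the fallback order is the one described above):

```ruby
# A naive page analyzer: extracts title, description and image sources
# from an HTML string with regular expressions. For illustration only;
# a proper HTML parser (e.g. Nokogiri) should be used in real code.
class PageAnalyzer
  def initialize(html, url = nil)
    @html = html
    @url  = url
  end

  # 1. the <title> tag, 2. the meta title tag, 3. the URL itself
  def title
    if @html =~ /<title>(.*?)<\/title>/im
      $1.strip
    else
      meta_content("title") || @url
    end
  end

  # rely on the meta description tag only
  def description
    meta_content("description")
  end

  # src attributes of all image tags on the page
  def images
    @html.scan(/<img[^>]+src=["'](.*?)["']/im).flatten
  end

  private

  # find a <meta> tag by name and return its content attribute
  def meta_content(name)
    @html.scan(/<meta[^>]*>/im).each do |tag|
      return $1.strip if tag =~ /name=["']#{name}["']/i && tag =~ /content=["'](.*?)["']/im
    end
    nil
  end
end
```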


Now to the special pages. We'll have to keep a database of all the sites that we consider special and are able to treat differently. For example YouTube: if we know it's a YouTube video URL, we can build the proper embed object.
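Such a database can start out as a simple mapping from URL patterns to embed builders. A hypothetical sketch (the patterns and embed markup below are my own illustrations, not anyone's actual rules):

```ruby
# Map URL patterns to blocks that build the special representation.
# The YouTube pattern and embed markup are illustrative assumptions.
SPECIAL_SITES = {
  /youtube\.com\/watch\?v=([\w-]+)/ => lambda { |m|
    %{<embed src="http://www.youtube.com/v/#{m[1]}" type="application/x-shockwave-flash"></embed>}
  },
  /flickr\.com\/photos\// => lambda { |m| :show_image }
}

# Returns the special representation for a URL, or nil to fall back
# to the generic title/description/image treatment.
def special_representation(url)
  SPECIAL_SITES.each do |pattern, builder|
    if m = pattern.match(url)
      return builder.call(m)
    end
  end
  nil
end
```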


I felt it was too much for a simple application to handle but I really wanted to get that feature. Before implementing anything, I tried to search and even asked at stackoverflow. Although I didn’t find the answer I wanted, the answers were really helpful. It was then that I learned about

  1. oEmbed
  2. JSONP


“oEmbed is a format for allowing an embedded representation of a URL on third party sites. The simple API allows a website to display embedded content (such as photos or videos) when a user posts a link to that resource, without having to parse the resource directly.” – http://oembed.com/

Unfortunately, it was too good to be true. The list of websites that support oEmbed is very limited. oohEmbed is a project trying to address that issue by providing oEmbed to a greater number of websites.

oohEmbed proxies calls to oEmbed providers if possible and provides its own implementation when needed. The thing I like the most about oohEmbed is that it provides a single url through which you access oEmbed for various websites in contrast with the original oEmbed protocol where each website provided its own url for accessing oEmbed.

For example to get the representation for a flickr URL (http://www.flickr.com/photos/ianpollock/6707007)
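Assuming oohEmbed's single-endpoint style, the request can be built along these lines (the endpoint path below is an assumption for illustration; check oohEmbed's docs for the real one):

```ruby
require 'cgi'

# Build an oEmbed request for a URL through a single proxy endpoint.
# The oohEmbed endpoint below is an assumption for illustration.
OOHEMBED_ENDPOINT = "http://oohembed.com/oohembed/"

def oembed_request_url(url, format = "json")
  "#{OOHEMBED_ENDPOINT}?url=#{CGI.escape(url)}&format=#{format}"
end
```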



JSONP, on the other hand, is simply a trick for grabbing JSON data while bypassing cross-domain restrictions. The idea is this: many APIs provide JSON responses.
How can a browser make use of these APIs directly?

  1. A script tag is inserted dynamically that points to the API endpoint specifying the response format as JSON
  2. Normally, the endpoint will respond with a JSON object
  3. JSONP calls for providing an additional callback parameter. Now, instead of returning a JSON object, the returned text represents a function call whose parameter is the JSON object.
  4. The end result is that a JavaScript function is called whenever the script finishes loading, and this function is given the JSON response as a parameter
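The server's side of the steps above can be sketched in Ruby: all a JSONP-aware endpoint does is wrap its usual JSON response in a function call when a callback name is present (a minimal sketch, not tied to any particular framework):

```ruby
require 'json'

# Wrap a JSON response in a JSONP function call when a callback
# name is given; otherwise return the plain JSON object.
def jsonp_response(data, callback = nil)
  json = data.to_json
  callback ? "#{callback}(#{json})" : json
end
```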

Flickr supports this technique. Without providing a callback, a request like the following returns a plain JSON object

<script type="text/javascript" src="http://api.flickr.com/services/rest/?method=flickr.photos.search&api_key=a38b5237570253cc91d6b54ed9cf1535&place_id=Pu5_HsObApilq4vUtQ&tags=funny&format=json&nojsoncallback=1"></script>


Now with the callback, the response becomes a call to showPhotos() with the JSON object as its argument

<script type="text/javascript" src="http://api.flickr.com/services/rest/?method=flickr.photos.search&api_key=a38b5237570253cc91d6b54ed9cf1535&place_id=Pu5_HsObApilq4vUtQ&tags=funny&format=json&jsoncallback=showPhotos"></script>


Solving the problem

As I'm using Rails for the application I'm writing, and without considering cross-domain JavaScript calls, the typical scenario would have been as follows

  1. The user enters a URL and hits the preview button
  2. Using AJAX, a request is sent to a dedicated action on the application server. That action makes an HTTP request to the URL the user provided, parses the HTML page to get the page's info (title, description, image sources), converts it to JSON and sends that JSON object as the response to the browser
  3. The browser then uses that JSON data to show some data about the URL the user requested

The problem with that solution is that the HTTP requests made by the server are blocking; if too many requests of that type occur, the application servers might get tied up in them and thus unable to service normal requests.


Analyze This!

Having learned about JSONP as a clean solution to cross-domain communication, I decided it was best to separate the work of requesting a page and parsing its HTML into a separate application. This way the original application's performance is not affected. The new application is called Analyze This. In its current form it's a simple Sinatra application with a single action.

The kind of requests Analyze This handles (blocking, and taking some time) made it a perfect candidate for an upcoming version of NeverBlock that handles not only DB access but sockets and networking as well.

For example, an HTML page requesting information about a user-entered URL would use code similar to the following



function analyzeUrl() {
  var url = encodeURIComponent($('#url')[0].value);
  // build a script tag pointing at the Analyze This action (the host
  // and path here are illustrative) and ask it to call analyze_this()
  // with the page's info
  var script = document.createElement('script');
  script.src = 'http://localhost:4567/analyze?url=' + url + '&callback=analyze_this';
  document.body.appendChild(script);
}


That would result in a call to the JavaScript function analyze_this({..}), passing in a JSON object that has the page's info.

To see it live, just download Analyze This, run the application and visit the sample page.

The Analyze This application itself is no big deal but the point is architecture really matters.

Get a Google Map for WAP !

Recently, I've been involved in a project where I was creating a WAP site using Ruby on Rails that needed to display certain locations using Google Maps. Given the limited capabilities of the average WAP browser (mainly the lack of JavaScript), I needed to figure out a way around that.

To my good luck, Google has the Google Static Maps API, which is basically a URL that you put in the src attribute of an img tag. The URL accepts parameters that define the image you get; the most important are the longitude/latitude of the map's center and the zoom level.
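Such a URL can be assembled with a small helper like the following (the parameter names follow the Static Maps API as I recall them, so double-check them against the official docs; the API key is a placeholder):

```ruby
# Build a Google Static Maps image URL from a center and zoom level.
# Parameter names are per the Static Maps API as I recall them; the
# key is a placeholder to be replaced with your own.
def static_map_url(lat, lng, zoom, width = 300, height = 200, key = "YOUR_API_KEY")
  "http://maps.google.com/staticmap" \
  "?center=#{lat},#{lng}&zoom=#{zoom}&size=#{width}x#{height}&key=#{key}"
end
```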


Since Google Static Maps API is just an image provider, what if we wanted richer interaction (zooming, panning) with the map?

We’ll have to implement those features on our own.

Here’s the approach we took

  1. Make a page on your website dedicated to showing the map  (http://m.example.com/map)
  2. Have that page accept URL parameters to indicate the longitude, latitude and zoom level (http://m.example.com/map?lat=30.1037&lng=31.3366&zoom=14)
    The map’s page will use the given parameters to construct the Google Static Maps API’s image url
  3. On the map’s page, create 6 links (2 for zooming, 2 for panning east and west, 2 for panning north and south)

Zooming is the easy part. Of the two zoom links, one just increments the zoom level, the other decrements it. For example:
Current URL: http://m.example.com/map?lat=30.1037&lng=31.3366&zoom=14
Zoom in URL: http://m.example.com/map?lat=30.1037&lng=31.3366&zoom=15
Zoom out URL: http://m.example.com/map?lat=30.1037&lng=31.3366&zoom=13

Panning is a bit more difficult. If we want to pan east or west, we'll have to move the map's center a little bit right or left. How much exactly is that little bit?
To answer that question, we need a little more info on how Google Maps works. Google Maps serves the map in the form of 256×256-pixel tiles. At zoom level 0, the whole world is represented by a single tile. At zoom level 1, the whole world is represented by a 2×2 grid of tiles; at zoom level 2, it's a 4×4 grid.

There’s a projection that maps the earth to 2D. That is the Mercator Projection. Using that projection we can

  1. Convert the Latitude/Longitude and Zoom to Pixels (X,Y)
  2. Add some value (pixels) to (X,Y) that would allow us to pan east/west, north/south
  3. Convert the new (X,Y) to Latitude/Longitude
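A sketch of those conversions under the standard spherical Mercator formulas (worth double-checking against a reference implementation; at zoom level z the world is a square of 256 * 2**z pixels):

```ruby
# Spherical Mercator conversions between latitude/longitude and
# global pixel coordinates (x, y) at a given zoom level.
module Mercator
  TILE_SIZE = 256

  # total world width/height in pixels at this zoom level
  def self.world_size(zoom)
    TILE_SIZE * (2 ** zoom)
  end

  def self.lng_to_x(lng, zoom)
    (lng + 180.0) / 360.0 * world_size(zoom)
  end

  def self.lat_to_y(lat, zoom)
    sin = Math.sin(lat * Math::PI / 180.0)
    (0.5 - Math.log((1 + sin) / (1 - sin)) / (4 * Math::PI)) * world_size(zoom)
  end

  def self.x_to_lng(x, zoom)
    x / world_size(zoom) * 360.0 - 180.0
  end

  def self.y_to_lat(y, zoom)
    n = Math::PI * (1 - 2.0 * y / world_size(zoom))
    Math.atan(Math.sinh(n)) * 180.0 / Math::PI
  end
end
```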

I got the previous 3 steps from here. For example, if we have the latitude/longitude, the zoom and the width/height of the image in pixels:

# get the x,y of the image's center
x = Mercator.lng_to_x(longitude, zoom)
y = Mercator.lat_to_y(latitude, zoom)

# To pan east or west we add/subtract half the image's width
# (not necessarily half, of course) and convert back to longitude
x = x + 0.5 * width
longitude = Mercator.x_to_lng(x, zoom)

# To pan north or south we modify the latitude similarly
y = y + 0.5 * height
latitude = Mercator.y_to_lat(y, zoom)

In the Google Maps API there's a converter that allows converting between latitude/longitude and pixel (x, y) coordinates at different zoom levels.
The problem – in addition to me not being an expert with all that projection stuff and its equations – is that I couldn't take a look at the JavaScript source so that I could implement it in Ruby. Searching for that, I came across an implementation in JavaScript here. I ported it to Ruby and it can be found here. There's an issue with the code I implemented, but I noted it in the gist; it would prevent you from using zoom levels greater than 17.

Hopefully this was enough info to get you started on zooming/panning with the Google Static Maps API.

HTTP Basic Authentication for functional tests !

While I was trying to cover a controller with some tests, I faced a problem: the controller actions were protected by a filter that prompted users to log in via basic HTTP authentication.

I found a solution in rails code here http://github.com/rails/rails/tree/master/actionpack/lib/action_controller/http_authentication.rb where it said you should do your get as follows

get("/notes", nil, :authorization =>
ActionController::HttpAuthentication::Basic.encode_credentials(users(:x).name, users(:x).password))

This didn't work for me because basic HTTP authentication requires sending the encoded credentials in the request headers, while the previous get request sent the authorization credentials in the session.

I found the following code snippet in http://snippets.dzone.com/posts/show/3785 which allowed me to set request headers

class ActionController::TestRequest
  def set_header(name, value)
    @env[name] = value
  end
end

In my tests, I now write the following

@request.set_header "HTTP_AUTHORIZATION", ActionController::HttpAuthentication::Basic.encode_credentials(users(:x).email, '0000')
get :show, {:user_id => users(:x).id, :format => "rss"}

And it works like a charm !

reCAPTCHA your rails application !

CAPTCHA stands for "Completely Automated Public Turing test to tell Computers and Humans Apart". As the acronym suggests, the main reason for using CAPTCHA is to tell computers and humans apart. It is a challenge-response test used to ensure that the response is not machine-generated. CAPTCHAs come in many forms, some more popular than others

  1. Text-based captchas, in which the user sees an image displaying letters or numbers and is asked to type what he sees
  2. Image-recognition captchas, which display some images and ask questions about their content. Microsoft Asirra is an example
  3. 3D captchas, which display some complex computer-generated 3D scene and ask about its details and contents

Image recognition and 3D recognition try to impose more difficulty on computer programs that try to break CAPTCHAs.

reCAPTCHA is one of the CAPTCHA efforts. In addition to fighting spam, it tries to solve another problem: improving the process of digitizing books by sending words that cannot be read by computers to the Web in the form of CAPTCHAs for humans to decipher. The question that immediately popped into my mind was: how does reCAPTCHA verify the answers if it's using images of words that the computer couldn't figure out while scanning them? The answer is simple: it displays two words at a time. One word can be easily verified, and for the other word your solution is taken as a suggestion. That word is shown in many different CAPTCHAs, and eventually many people will suggest the same thing.

Currently, reCAPTCHA is recommended as the official CAPTCHA implementation by the original CAPTCHA creators.

This way reCAPTCHA not only helps you to fight spam but also gets you to participate into a good cause like digitizing the world’s books.

Using reCAPTCHA in your Rails application is so easy thanks to the recaptcha plugin. This plugin gives you 2 main methods that you can use in your application

  1. recaptcha_tags which should be used in the view in your form.
  2. verify_recaptcha which should be used in the controllers to verify the user’s answer

You should register at reCAPTCHA to get your public and private keys, which are required by the plugin. The plugin requires that you define them as environment variables.
recaptcha_tags accepts an options hash which can define the public key via :public_key so that the plugin doesn't look in your environment variables.
verify_recaptcha – which uses the private key – doesn't provide a way for you to pass the private key.

I’ve forked the plugin here and modified verify_recaptcha such that it now accepts an options hash – like recaptcha_tags – which allows you to define :private_key which will be used instead of looking into the environment variables.

Fight spam, help in digitizing books, use reCAPTCHA !

Update: I sent a pull request to the guys over at http://github.com/ambethia/recaptcha to include my changes. Peter Abrahamsen replied, and after a couple of messages we modified the plugin such that we no longer need to set the public and private keys in any environment variables. We also added a toggle to enable/disable the plugin. We can now use the plugin as follows

Ambethia::ReCaptcha.enabled = true
Ambethia::ReCaptcha.public_key = '0123456789ABCDEF'
Ambethia::ReCaptcha.private_key = '0123456789ABCDEF'

If the toggle is set to false, recaptcha_tags will return nothing and verify_recaptcha will always return true. In other words, the recaptcha code does nothing, which is what we want when it's disabled.
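The disabled behavior boils down to something like this (a simplified sketch of the idea, not the plugin's actual source):

```ruby
# Simplified sketch of the enable/disable toggle: when disabled,
# the tag helper emits nothing and verification always passes.
module ReCaptchaSketch
  class << self
    attr_accessor :enabled
  end

  def self.recaptcha_tags
    return "" unless enabled
    "<!-- real captcha markup would go here -->"
  end

  def self.verify_recaptcha(response = nil)
    return true unless enabled
    # the real implementation would call the reCAPTCHA verify API here
    false
  end
end
```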

version_cache: 1, 2, 3, It’s all in the numbers !

Building on my previous post about caching, where the method_cache plugin was introduced, today I'd like to introduce another Rails plugin that also deals with caching. This time it's about caching views, using a technique called version caching. Version caching frees you from having to worry about writing code that expires your cache. For more info, check Yasser Wahba's blog post explaining version caching and how we used it. I've used version caching in my last 2 projects and found it quite useful and quite easy. The problem is that we used to repeat a lot of code, violating DRY. It was clear we had to do something about it. I've taken that code, refactored it and made it available as a Rails (v2.1.0) plugin on GitHub. Check it here.

The plugin assumes a cache store that uses LRU (least recently used) eviction when the cache becomes full and that also supports timed caching. Cutting it short, it assumes we're using memcached.

The plugin makes a couple of methods available in all controllers. These methods are

  • version_cache
  • time_cache
  • version_cache_key_part

The first 2 are the most important. time_cache, as the name implies, allows for time-based caching of a page. It can be used as follows

class WelcomeController < ApplicationController
  time_cache :index => {:expiry => 5, :browser_cache_enabled => true}
  def index
    # ...
  end
end

What we’ve just done is that we declared our intention of caching the index page of the welcome controller for 5 minutes in our cache store. In addition to the cache store, we declared our intention that we also want the page to be cached in the browser for the same period.

version_cache is the one responsible for tying the caching of a page to a model's version. We can use it as follows

class ItemsController < ApplicationController
  version_cache Item, :associates => ["user"], :expiry => 5
end


By default, version_cache caches the show action unless an :action is provided. In the previous example we’re saying that we want to cache the show action of the items controller.

  • Item: the model whose objects' versions are used.
  • :associates: an optional array of associations on the model whose versions need to be updated as well when the main model's version is updated, i.e. when an item is changed, its version is incremented and the item's user's version is also incremented. This is useful if we also cache users#show based on the User model and changing an item reflects on the user's page.
  • :expiry: an optional maximum time for the page to expire. If not specified, expiry happens via the normal LRU eviction.

Models' versions are maintained by means of an observer. The observer has to observe the models we use and must also be declared in environment.rb.

version_cache_key_part is another method; it allows a page to have multiple cached versions. We can use it as follows

class PostsController < ApplicationController
  version_cache Post, :associates => ["user"], :expiry => 5
  def show
    # ...
  end
  def version_cache_key_part
    logged_in_user ? "member" : "guest"  # illustrative strings
  end
end

What just happened here is that, based on some condition, we return a string that will be part of the cache key. Now we have 2 cached versions of the show page: one for logged-in users and one for guests. You can have as many versions as you want, based on whatever conditions, as long as they return distinct strings.

To get the plugin

ruby script/plugin install git://github.com/humanzz/version_cache.git

Then use the plugin’s generator to generate the cache observer

ruby script/generate version_cache_observer Cache Model1 Model2

The first argument “Cache” is taken to be the observer’s name. Any arguments after that are taken to be the models that the observer will observe.

Version caching is a great caching technique and hopefully with the introduction of the plugin many developers will find it appealing and easy to use.

