Why we chose Amazon’s EC2 over Google’s App Engine: The CTO’s Perspective

A previous post discussed our decision to use Amazon’s EC2 service over Google’s App Engine and Microsoft’s Azure service primarily from a business perspective: the risks we weighed when we made our technology choices last August, and why the decision went in favor of EC2.

(We have also noted previously – lest anyone consider us uncritical fanboys! – that there are limitations with Amazon’s Web Services that have given us some heartburn.)

In today’s post, we present the CTO’s perspective on why EC2 was the more attractive choice on technical grounds:

The key advantage of Amazon’s EC2 over Google App Engine and Microsoft Azure comes down to caching: a highly interactive web application such as Kerika needs multiple levels of data caching.

Kerika maintains a cache within the browser so that it can avoid round trips to the server when the user moves objects around on the screen. Without that cache, the delays would be so long that users wouldn’t feel they were using a highly responsive, desktop-like application – making it harder for us to deliver the experience we’re aiming for.

There also needs to be a cache on the server side, and the most compelling reason for this is to reduce load on the database.

There are various ways to scale up a database, but none are as effective as simply minimizing the load on the database in the first place. A server-side cache is a simple way to do that. Because Kerika supports relationships between users, projects, idea pages, and so on, the server-side cache has to maintain those relationships.
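
To make that concrete, here is a minimal sketch of what a read-through cache of this kind might look like. The Project and ProjectStore types are hypothetical stand-ins, not Kerika’s actual classes, and the real cache also has to track the relationships described above:

    // Sketch of a server-side read-through cache in front of the database.
    // Project and ProjectStore are hypothetical stand-ins for illustration.
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    class Project {
        private final String id;
        Project(String id) { this.id = id; }
        String getId() { return id; }
    }

    interface ProjectStore {            // hypothetical DAO backed by the database
        Project load(String id);
        void save(Project project);
    }

    public class ProjectCache {
        private final ConcurrentMap<String, Project> byId = new ConcurrentHashMap<>();
        private final ProjectStore store;

        public ProjectCache(ProjectStore store) {
            this.store = store;
        }

        // Hit the database only on a cache miss.
        public Project get(String projectId) {
            return byId.computeIfAbsent(projectId, store::load);
        }

        // Writes go through the cache first, then on to the database.
        public void update(Project project) {
            byId.put(project.getId(), project);
            store.save(project);
        }
    }

The point of the sketch is simply that every read served out of RAM is a read the database never sees.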

The Kerika server also has to keep track of the state of the caches on its clients, so that it can keep each client current and avoid unnecessary round trips to the server.
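
A rough sketch of that bookkeeping, under the assumption that every cached object carries a version number (the class and method names below are ours, for illustration only):

    // Sketch of per-client cache bookkeeping: the server remembers which version
    // of each object it last pushed to each client, and only sends what has changed.
    // Versioned objects and the names here are assumptions for illustration.
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class ClientCacheTracker {

        // clientId -> (objectId -> version last pushed to that client)
        private final Map<String, Map<String, Long>> pushed = new ConcurrentHashMap<>();

        // Record that a client now holds the given version of an object.
        public void recordPush(String clientId, String objectId, long version) {
            pushed.computeIfAbsent(clientId, id -> new ConcurrentHashMap<>())
                  .put(objectId, version);
        }

        // Only push an update if the client's copy is stale.
        public boolean needsUpdate(String clientId, String objectId, long currentVersion) {
            Long seen = pushed.getOrDefault(clientId, Map.of()).get(objectId);
            return seen == null || seen < currentVersion;
        }

        // Drop a client's bookkeeping when its session ends.
        public void clientDisconnected(String clientId) {
            pushed.remove(clientId);
        }
    }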

As a consequence of all this, Kerika uses servers with a large amount of RAM that needs to be quickly accessible for all of its users. Storing a large amount of data in RAM is where EC2 becomes the only way to solve the problem. Because App Engine and Azure do not allow us to manage large amounts of RAM, they just weren’t good solutions for Kerika.

Another technical challenge is maintaining the long-lived browser connections that a Web application like Kerika depends upon. Google App Engine has the Channel API, but that doesn’t quite cut it for an application like Kerika.

Kerika needs to maintain several channels concurrently because users can be working with different combinations of objects at the same time.

Kerika also makes use of broadcasts as a simple way to keep multiple users up to date. Because of EC2’s open architecture, we can use CometD as an off-the-shelf solution for client communication.
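
As an illustration, a broadcast can be as simple as publishing to a per-object channel on the server. This sketch assumes a CometD 3.x-style Java server API; the channel names and payload are made up for the example and are not Kerika’s actual design:

    // Sketch of a server-side broadcast with the CometD Java API: each project
    // gets its own channel, clients subscribe to the channels for the objects
    // they have open, and the server publishes updates to those channels.
    import java.util.HashMap;
    import java.util.Map;

    import org.cometd.bayeux.server.BayeuxServer;
    import org.cometd.bayeux.server.ServerChannel;

    public class ProjectBroadcaster {

        private final BayeuxServer bayeux;

        public ProjectBroadcaster(BayeuxServer bayeux) {
            this.bayeux = bayeux;
        }

        // Tell every subscriber of a project's channel that a card has moved.
        public void cardMoved(String projectId, String cardId, int x, int y) {
            Map<String, Object> data = new HashMap<>();
            data.put("cardId", cardId);
            data.put("x", x);
            data.put("y", y);

            ServerChannel channel = bayeux
                    .createChannelIfAbsent("/project/" + projectId)
                    .getReference();

            // A null sender marks this as a server-originated broadcast.
            channel.publish(null, data);
        }
    }

Because we run CometD on our own EC2 instances, this kind of long-lived, multi-channel messaging is just another library choice rather than something the platform has to permit.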

Finally, Google App Engine’s business model, which calls for charging two hours for each token, doesn’t make economic sense in an environment where users are expected to navigate from page to page through the course of the day. EC2’s “pay-as-you-go” pricing allows us to manage our traffic and keep operating costs down.