Monthly Archives: March 2011

Why we chose Amazon’s EC2 over Google’s App Engine: the CTO’s Perspective

A previous post discussed our decision to use Amazon’s EC2 service over Google’s App Engine and Microsoft’s Azure service primarily from a business perspective: the risks we weighed when we made our technology choices last August, and why the decision went in favor of EC2.

(We have also noted previously – lest anyone consider us uncritical fanboys! – that there are limitations with Amazon’s Web Services that have given us some heartburn.)

In today’s post, we present the CTO’s perspective: why EC2 was more attractive on technical grounds:

The advantage of Amazon’s EC2 over Google App Engine and Microsoft Azure comes down to caching: a highly interactive web application such as Kerika needs multiple levels of data caching, and EC2 gives us the control to build them.

Kerika maintains a cache within the browser so that it can avoid round trips to the server when the user moves objects around on the screen. Without that cache, the delays would be so long that users wouldn’t feel they were using a highly responsive desktop application – making it harder for us to deliver the desktop-like experience we are aiming for.

There also needs to be a cache on the server side, and the most compelling reason for this is to reduce load on the database.

There are various ways to scale up a database, but none are as effective as simply minimizing the load on the database in the first place. A server-side cache is a simple way to do that. Because Kerika supports relationships between users, projects, idea pages, and so on, the server-side cache has to maintain those relationships.
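
To make that concrete, here is a minimal sketch – with hypothetical names, not Kerika’s actual code – of a read-through cache in Java, where repeated reads never touch the database:

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    public class ProjectCache {

        // Hypothetical stand-in for a cached domain object.
        public static class Project {
            final String id;
            Project(String id) { this.id = id; }
        }

        private final ConcurrentMap<String, Project> cache =
                new ConcurrentHashMap<String, Project>();

        // Read-through: the database is queried only on a cache miss.
        public Project get(String projectId) {
            Project cached = cache.get(projectId);
            if (cached == null) {
                cached = loadProjectFromDb(projectId);      // the expensive call we are minimizing
                Project raced = cache.putIfAbsent(projectId, cached);
                if (raced != null) {
                    cached = raced;                         // another thread loaded it first
                }
            }
            return cached;
        }

        // Drop the entry when a project changes; the next read refreshes it.
        public void invalidate(String projectId) {
            cache.remove(projectId);
        }

        private Project loadProjectFromDb(String projectId) {
            return new Project(projectId);                  // placeholder for the real query
        }
    }

Kerika’s real cache also has to model those relationships between users, projects and idea pages; this sketch shows only the read-through half of the job.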

The Kerika server must also keep track of the state of each client’s cache, so that it can keep the clients current and spare them unnecessary round trips to the server.
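
A minimal sketch of the bookkeeping this implies – again with hypothetical names, not Kerika’s actual code – would track the last revision of each object that each client has seen, so the server pushes only what has changed:

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    public class ClientCacheTracker {

        // clientId -> (objectId -> last revision that client has seen)
        private final ConcurrentMap<String, ConcurrentMap<String, Long>> seen =
                new ConcurrentHashMap<String, ConcurrentMap<String, Long>>();

        // Should we push this object to this client, or does it already have it?
        public boolean needsUpdate(String clientId, String objectId, long currentRevision) {
            ConcurrentMap<String, Long> perClient = seen.get(clientId);
            if (perClient == null) return true;              // client has nothing cached yet
            Long last = perClient.get(objectId);
            return last == null || last < currentRevision;   // only push what changed
        }

        // Record what we sent, so we never resend an unchanged object.
        public void markSent(String clientId, String objectId, long revision) {
            ConcurrentMap<String, Long> perClient = seen.get(clientId);
            if (perClient == null) {
                ConcurrentMap<String, Long> fresh = new ConcurrentHashMap<String, Long>();
                perClient = seen.putIfAbsent(clientId, fresh);
                if (perClient == null) perClient = fresh;    // we won the race
            }
            perClient.put(objectId, revision);
        }
    }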

As a consequence of all this, Kerika uses servers with a large amount of RAM that must be quickly accessible for all of its users. Storing that much data in RAM is where EC2 became the only way to solve the problem: because App Engine and Azure do not allow us to manage large amounts of RAM, they just weren’t good solutions for Kerika.

Another technical challenge is maintaining the long-lived browser connections that a Web application like Kerika depends upon. Google App Engine has the Channel API, but that doesn’t quite cut it for an application like Kerika.

Kerika needs to maintain several channels concurrently because users can be working with different combinations of objects at the same time.

Kerika also makes use of broadcasts as a simple way to keep multiple users up to date. Because of EC2’s open architecture, we can use CometD as an off-the-shelf solution for client communication.
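
To illustrate, here is a rough sketch of a server-side broadcast using CometD’s Java API (method signatures as in recent CometD releases; the channel naming and payload are hypothetical, not Kerika’s actual wiring):

    import org.cometd.bayeux.server.BayeuxServer;
    import org.cometd.bayeux.server.ServerChannel;

    public class ProjectBroadcaster {

        private final BayeuxServer bayeux;

        public ProjectBroadcaster(BayeuxServer bayeux) {
            this.bayeux = bayeux;
        }

        // Push one update to every client subscribed to this project's channel.
        public void broadcast(String projectId, Object update) {
            String name = "/project/" + projectId;       // hypothetical channel naming
            bayeux.createChannelIfAbsent(name);          // make sure the channel exists
            ServerChannel channel = bayeux.getChannel(name);
            if (channel != null) {
                channel.publish(null, update);           // null sender = server-originated broadcast
            }
        }
    }

The browser side simply subscribes to the same channel through CometD’s JavaScript client – which is exactly the off-the-shelf plumbing EC2’s open architecture lets us use.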

Finally, Google App Engine’s business model for channels – which in effect bills you for a fresh two-hour token every time a user opens a channel – doesn’t make economic sense in an environment where users are expected to navigate from page to page through the course of the day. EC2’s “pay-as-you-go” pricing lets us manage our traffic and keep operating costs down.

A surprisingly inelastic Elastic Load Balancer

Amazon’s Web Services are generally good, but they are not without flaws by any means, and in the past week we ran headlong into one of their limitations.

Some background, first: about a decade or two ago, people were very careful about how they typed in web addresses when using their browsers, because browsers weren’t very good at handling anything less than a perfectly formed URL. So, if you wanted to visit our website an eon ago, you probably would have typed in “http://www.kerika.com”. Ah, those were the days… You would direct people to “hot top dub dub dub kerika dot com”.

As lifeforms evolved, the “hot top” (did anyone really pronounce it as “hiptytipty colon whack whack dub dub dub dot domain dot com”?) got dropped, and we had the alliterative rhythm of “dub dub dub kerika dot com”.

From a branding perspective, however, having all that extra junk in front of your name isn’t very attractive, and we would rather be known as “Kerika.com” than “www.kerika.com”. After all, even in the crazy late 1990s when every company was guaranteed an immediate stock boost of 20% just by appending “.com” to their name, no one rushed to prefix their company names with “www.”

At Kerika, we decided that our canonical name of kerika.com was the way to go, which is why typing in “www.kerika.com” will land you at “kerika.com”. Setting that up is easy enough using the CNAME fields in DNS. So far, so hiptytipty…

The problem arose when we decided to use Amazon’s Elastic Load Balancer, so that we could have multiple servers (or, more precisely, multiple Amazon Machine Instances or AMIs) in the cloud, all hidden behind a single load balancer. It all sounded good in theory, but the problem came with routing all of our traffic to this ELB.

Getting www.kerika.com pointing to the ELB was easy enough by changing the CNAME in DNS, but in trying to get kerika.com to also point to the ELB we fell into that gap that exists all too often between documentation and reality. (The underlying issue: the DNS spec doesn’t allow a CNAME record at the zone apex, and an ELB is exposed only as a DNS name whose underlying IP addresses can change – so there was no reliable record we could create for the bare kerika.com.) But what really surprised us was finding out that the problem had been known to Amazon since May 2009, and it still hadn’t been fixed.

We can only guess why a known problem like this would remain unfixed for two years. The promised solution is supposed to be Route 53, Amazon’s new DNS for their cloud services. We heard an interesting presentation about Route 53 at Amazon’s premises a week ago; too bad we didn’t know then that our planned use of ELB would hit a brick wall, or we could have pestered the Amazonians about this problem. According to the Route 53 website:

In the future, we plan to add additional integration features such as the ability to automatically tie your Amazon Elastic Load Balancer instances to a DNS name, and the ability to route your customers to the closest EC2 region.

We are waiting…

In the Spring, a young man’s thoughts turn seriously to marriage…

In the Spring a young man’s fancy lightly turns to thoughts of love…

(from “Locksley Hall,” by Alfred, Lord Tennyson.)

Well, one young man’s thoughts turned to marriage. A self-professed “English code monkey” (when was the last time you saw that particular conjunction of words?) has been using Kerika to help plan out his wedding.

He lives locally, as does his fiancée, but his parents live in the U.K. and his in-laws-to-be are in Texas – which makes this a very unexpected, and utterly delightful, use case: wedding planning coordinated across three far-flung locations.

We have been fortunate to share his experiences first hand, and since he is a very organized code monkey, he took a number of screenshots of Kerika in action and has written about them on his own blog. He calls it his “big fat geek wedding”!

Choosing between Google’s App Engine and Amazon’s EC2

Back in the summer, when we were making our technology choices, we had to decide which company was going to provide our “cloud”. It really boiled down to a decision between Google’s App Engine and Amazon’s EC2. Microsoft’s Azure service was still relatively new, and in talking to one of their product evangelists we got the distinct impression that Microsoft was targeting Azure for their biggest customers. (That was nice of their product evangelist: not taking us down the garden path if there was a mismatch between our needs and Microsoft’s goals!)

Here’s why we chose EC2 over App Engine:

  • Amazon is accessible: both Amazon and Google have engineering centers in our neighborhood, and Amazon’s is, of course, the much larger presence, but the real issue was which company was more accessible? Amazon does a lot of outreach to the startup community in the Seattle area – Jeff Barr is everywhere! – whereas Google is a lot more aloof. It’s much easier to find an engineer or product manager at Amazon who will talk to you, and that really makes a difference for a startup. Real people matter.
  • There was negative buzz about App Engine, at least back in August. People in the startup community were talking about suffering outages, and not getting very good explanations about what had gone wrong. Google’s aloofness didn’t help: there wasn’t anyone reaching out to the community to explain what was happening and there was little acknowledgment of problems coming through on Google’s official channels.
  • To beta or not to beta? Google’s rather lavish use of the “beta” label (remember the many years that Gmail was in beta?), and their recent decision to pull the plug on Wave added to our nervousness. Were they serious about App Engine, or was this another science experiment?
  • An infinite loop of web pages: in the absence of any physical contact with Google, we tried to learn about their service through their web pages – well, duh, of course, dude! The problem was that much of the information is presented in a series of rather small pages, with links to more small pages. As an approach to information architecture – particularly one that’s biased towards being “discoverable” by a search engine – this makes a lot of sense, and the Kerika website takes a similar approach. The risk with this approach is that you can very easily find that you are following links that take you right back to where you started. (And our website runs the same risk; we look forward to your comments detailing our failings in this regard!)

While we were pondering this choice, Google came out with a rather interesting offer: unlimited free storage for a select handful of developers. (How select? Probably as select as one of Toyota’s Limited Edition cars, production of which is strictly limited to the number they can sell.) The offer was presented in the usual mysterious way: you went to an unadvertised website and made an application. Someone would let you know “soon” whether you would get past the velvet rope.

We thought we had a good chance of getting included in the program: after all, Kerika was designed to add value to Google Docs, and we were using a number of Google’s services, including OpenID for registration. And, indeed, we were selected!

The only problem was that the reply came about 4 weeks later: an eon in startup time. By that time, we had already decided to go with EC2…

(Is EC2 perfect? No, not by a long shot, and we were particularly annoyed to find a rather fatal flaw – from our perspective – in the way their Elastic Load Balancer can be configured with canonical web addresses: a problem that’s been noted for 2 years already. Not cool to sit on a problem for 2 years, guys. Not cool at all.)

The development environment that’s worked for us

In our last post, we referred to our “dogfooding” Kerika by using it for our testing process. Here’s some more information about how we set up our software development environment…

Our Requirements:

  • We needed to support a distributed team, of course. And our distributed team had a variety of preferences when it came to operating systems: there were folks that liked using Windows, folks that had only Macs, folks that used only Linux, and folks that liked using Linux as virtual machines running inside Windows… Since we were developing a Web application, it didn’t make sense – neither from a philosophical perspective nor from a business strategy perspective – to insist that everybody use the same operating system. If we were going to build for the browser, we should be able to use any operating system we preferred: our output was, after all, going to be Java, Javascript and SVG.
  • We needed to deploy our software “in the cloud”, and that meant making sure our software could run well in a Linux environment. Microsoft’s Azure service was still relatively new, and, in any case, our back-end developers were more comfortable with Java than ASP.Net, so that fixed Java on Linux as a back-end requirement.
  • Our software had to run in all browsers: that effectively meant “modern browsers”, that happy euphemism for browsers that would conform to HTML5 standards. When we started development, we were uncertain about how Internet Explorer 9, which was still in beta at the time, would evolve, but we have been very pleasantly surprised by the evolution of that product. (And a discussion in December with Ziad Ismail, IE9’s Director of Product Management, has since further reinforced our belief that Microsoft is very serious about standards compliance in IE9.)
  • We decided early on to build our front-end in Javascript and SVG, rather than Flash. We know of very few applications, even today, that attempt to deliver the kind of user interface combined with real-time networking that Kerika does using Javascript, and it’s not that we like crazy challenges because we are crazy people – we just didn’t want to get caught in the crossfire between Apple and Adobe over Flash. Having to choose between Caradhras and the Mines of Moria, like Gandalf we chose to brave the Mines…

Our choices:

  • Amazon’s EC2 and RDS for creating test and production environments; A2 hosting for our development environment. The choice was between EC2 and Google App Engine, and we will talk more in a separate blog post about why we chose Amazon over Google.
  • Eclipse for the developer’s IDE: it was cross-platform, and since everyone wanted to stick with their favorite operating system, it was Eclipse that straddled them all.
  • Jetty for the web server: the choice was between Apache and Jetty, and we went with Jetty. Why? Well, that’s a blog post in itself… (A minimal embedded-Jetty sketch follows this list.)
  • Git for source code control: we looked at Subversion, but Git is better suited for distributed teams.
  • Maven for build management: once we decided on Eclipse, Git and Jetty, Maven kind of fell in place as a natural choice, since it worked well with all of the above.
  • Bugzilla for bug tracking: it’s a great tool, with a simple, flexible interface and it has matured very nicely over the past decade.
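
For a flavor of what the Jetty choice means in practice, here is a minimal, hypothetical embedded-Jetty bootstrap – not our actual setup – showing how the web server runs in-process rather than as a separate container to deploy into:

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.servlet.ServletContextHandler;
    import org.eclipse.jetty.servlet.ServletHolder;

    public class DevServer {

        // Stand-in for the application's real servlets.
        public static class HelloServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                resp.setContentType("text/plain");
                resp.getWriter().println("hello from embedded Jetty");
            }
        }

        public static void main(String[] args) throws Exception {
            Server server = new Server(8080);                    // dev-time port

            ServletContextHandler context = new ServletContextHandler();
            context.setContextPath("/");
            context.addServlet(new ServletHolder(new HelloServlet()), "/*");
            server.setHandler(context);

            server.start();    // Jetty runs inside our own JVM: no container to deploy into
            server.join();
        }
    }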

All of this has worked out quite well, and we would recommend this stack for other startups that are looking to develop Web applications.

Using Kerika to QA itself

Since we are developing software for helping distributed teams collaborate more effectively, it was only natural that we would set up our own team as a distributed one.

Working remotely and relying principally on electronic communications, supplemented by some in-person meetings, has helped us understand better the challenges faced by distributed teams. As we believed from the outset, the biggest challenge has been keeping everyone on the same page in terms of our company strategy: business strategy, marketing strategy, and product roadmap.

Throughout our development process one of us would bemoan the lack of software like Kerika that could help us build software like Kerika… Once the product became usable enough, we started “dogfooding” it – that elegant phrase invented by Microsoft that refers to a product team “eating its own dog food.”

One of the ways in which the Kerika team is using Kerika is for our QA: whenever we decide to work on a bug, we create a new project and name it after that bug number. Inside, we put our Build Verification Test (BVT) as well as the exit (success) criteria for that particular bug. It’s a neat trick: by going through the BVT, which we use as a sanity test before the developers hand off their code for QA, we end up creating a mini Kerika project for each bug.

For example, our BVT requires developers to upload a document to a Kerika page: well, for each bug, we upload a document that represents the exit criteria for that particular bug. The BVT requires users to go through various steps that provide general coverage of Kerika’s main features. This means logging in using at least 3 different browsers (we usually test with Firefox, Safari and Chrome), and going through the process of adding Web links, videos, etc.

By using Kerika to test Kerika, at the end of each bug’s coding cycle we have a new project that we can look at and see whether it passed the BVT. It’s self-referential: the existence of a correctly set up project, with a particular team consisting of both Team Members and Visitors who perform certain actions, confirms that the BVT passed.

We combine the BVT with the exit criteria for each bug: these are derived from the reproduction steps of the bug report, plus the functional specifications. Going through the exit criteria for a particular bug, we end up with items in that bug’s project folder that confirm whether the bug was successfully fixed.

For example, if there was a bug about embedding YouTube videos on Kerika pages, the exit criteria would be such that at the end of the developer’s testing, the project would contain a YouTube video that could be examined to confirm that the bug was fixed.

So if the project for that bug is set up correctly at the end of the bug-specific repro steps and the BVT, then the developer knows he can check in his code for QA on our test environment. Of course, during our QA cycle we do more extensive testing, but this initial use of Kerika helps developers avoid breaking the build when they check in the code.

Pretty neat way of dogfooding, wouldn’t you agree?

Hello once again, world!

After a rather long hiatus (over three years!), Kerika is back, and with a bang!

We owe the world some explanation of why Kerika disappeared, and why it is now reappearing…

Here’s what happened: when we launched Kerika back in 2006, it was as a desktop application, written entirely in Java so that it would run on Windows, Macs and Linux computers. People really liked the concept, particularly the innovative user interface and the ease with which one could do document management. But there were two serious drawbacks with that first version:

  • The biggest problem was our reliance upon JXTA, an open-source peer-to-peer communication technology that had been hatched at Sun Microsystems (remember them?) by none other than the legendary Bill Joy. On paper, JXTA looked perfect: its theoretical model and architecture exactly matched our needs. In practice, however, it proved to be a disastrous technology choice.
  • The other big problem we had was that Kerika 1.0 was a desktop application, which meant that it needed to get downloaded, and users needed to configure their firewalls to let the JXTA traffic go through. This proved to be a huge hurdle for many people who were interested in the product, but couldn’t get it past their IT gatekeepers.

Eventually, we ran out of time and money, which is really the same thing from a startup’s perspective. Of the two flaws listed above, the dependency on JXTA was really the killer: it meant that we couldn’t reliably provide communication or transfer of files between users. (And the topic of JXTA really merits its own blog post.)

And, so, we had to pack up our tents and go get “regular” jobs.

That’s the story of why Kerika disappeared.

What’s more interesting is the story of why Kerika is now reappearing:

A funny thing happened, in the 3 years that Kerika v1 was taken off the market: people kept writing in asking for the product. (We had never taken down the website, so the demos were still available; you just couldn’t download the product any more.)

This got us thinking that maybe Kerika was fundamentally a good idea, but we had screwed up the execution of that idea. And there was another thing that surprised us: in the intervening years, no one else released anything like Kerika – a flexible whiteboard on which you could sketch out your ideas and plans, and also embed your content.

Last Spring, we sent an email to our old user base, trying to understand better what it was they found attractive about Kerika and, in the process, trying to gauge the interest in reviving the product. The replies we received convinced us that (a) Kerika was, at heart, a good idea, and (b) the needs it served were still not being met by anything else in the market.

In August we reconstituted the Kerika team: a different team than before, with the skills that we would need to rebuild Kerika, from scratch, as a Web application. We have been hard at work ever since, and have done a complete rebuild – not a single line of code was reused! – and now we are ready to present to you the fruits of our labors.

In the next few blog posts, we will talk about the new product, the challenges we faced, the choices we made, and the lessons we learned from Kerika v1 (or “K1” as we like to call it.)

Welcome back.
