Tag Archives: AWS

About our use of Amazon Web Services.

First, we kill all the managers. (Or, maybe not.)

In earlier posts we described our recent decision to choose Amazon’s EC2 over Google App Engine, and as part of the business perspective we noted that:

Amazon is accessible: both Amazon and Google have engineering centers in our neighborhood, and Amazon’s is, of course, the much larger presence, but the real issue was which company was more accessible? Amazon does a lot of outreach to the startup community in the Seattle area – Jeff Barr is everywhere! – whereas Google is a lot more aloof. It’s much easier to find an engineer or product manager at Amazon who will talk to you, and that really makes a difference for a startup. Real people matter.

Events in the past week have reassured us that we made the right choice. The very end of that post on choosing EC2 included a passing grumble about some long-standing problems with Amazon’s Elastic Load Balancer.

What happened following that blog post was a pleasant surprise: an old colleague saw the post on LinkedIn and forwarded it to someone at Amazon, who promptly contacted Jeff Barr, the peripatetic and indefatigable evangelist for Amazon Web Services.

And within just one day after our griping, Jeff asked for a meeting with us to see how Amazon could make things better for us. This level of responsiveness was, frankly, astonishing when one considers the size of Kerika (rather tiny) and the size of Amazon (rather Amazonian).

We met Jeff today, and are happy to report that this ELB issue will soon go away for us and everyone else. But that was a minor aspect of the meeting: much of our discussion was a wide-ranging conversation about Kerika, collaboration platforms and the many uses for cloud computing in different market segments.

And the best part was finding out that Jeff had taken the trouble, earlier in the day, to try out Kerika for himself.

Returning from the meeting, we couldn’t help but reflect upon the cultures of the two companies that we are relying upon for our business model: Amazon and Google. Amazon is providing the cloud computing infrastructure, and Google is providing the OpenID authentication and, more importantly, the Google Docs suite that is integral to the Kerika product offering. Both sets of technologies are essential to Kerika’s success.

But the cultures of these two companies are clearly different: Amazon relies on both technology and people, whereas Google, it would appear, is purely an online presence. (To be fair, we must note that while both companies have engineering centers in the Seattle area, Amazon’s footprint is many times larger than Google’s.)

But, still: who could we have tried to reach at Google? Who is the public face of Google’s App Engine? Or, of Google Docs, for that matter? There are two substantial Google engineering centers nearby, but Google the company remains a cloud of distributed intelligence – much like their servers – accessible only via HTTP.

A recent article at All Things D speculates that Larry Page may eliminate Google’s managers in large numbers, in order to free the animal spirits of the engineers, and a similar article on TechCrunch notes that “Page famously has a low opinion of managers, especially product managers who try to tell engineers what to do.”

There can be no gainsaying Google’s engineering talents, or its remarkable achievements, but will Google 3.0 be an organization that has even less human contact with its customers and partners?

Why we chose Amazon’s EC2 over Google’s App Engine: the CTO’s Perspective

A previous post discussed our decision to use Amazon’s EC2 service over Google’s App Engine and Microsoft’s Azure service primarily from a business perspective: the risks we weighed when we made our technology choices last August, and why the decision went in favor of EC2.

(We have also noted previously – lest anyone consider us uncritical fanboys! – that there are limitations with Amazon’s Web Services that have given us some heartburn.)

In today’s post, we present the CTO’s perspective: why EC2 was the more attractive choice on technical grounds:

The advantage of Amazon’s EC2 over Google App Engine and Microsoft Azure stems from the fact that a highly interactive web application such as Kerika needs multiple levels of data caching.

Kerika maintains a cache within the browser so that it can avoid round trips to the server when the user moves objects around on the screen. Without that cache, the delays would be so long that users wouldn’t feel they were using a highly responsive desktop application.

There also needs to be a cache on the server side, and the most compelling reason for this is to reduce load on the database.

There are various ways to scale up a database, but none are as effective as simply minimizing the load on the database in the first place. A server-side cache is a simple way to do that. Because Kerika supports relationships between users, projects, idea pages, and so on, the server side cache has to maintain those relationships.

The Kerika server also has to keep track of the state of each client’s cache, so that it can keep clients current and avoid unnecessary round trips to the server.

As a consequence of all this, Kerika uses servers with a large amount of RAM that needs to be quickly accessible for all of its users. Storing a large amount of data in RAM is where EC2 becomes the only way to solve the problem: because App Engine and Azure do not allow us to manage large amounts of RAM, they just weren’t good solutions for Kerika.
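To make that concrete, here is a minimal sketch of the kind of relationship-aware, in-memory cache we are describing. It is written in modern Java for brevity, and the class and method names are invented for illustration – this is not our actual implementation:

    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    // Illustrative sketch: objects are served from RAM when possible, the database
    // is hit only on a miss, and invalidating one object also drops its relatives.
    public class ProjectCache {
        private final ConcurrentMap<String, Object> objectsById = new ConcurrentHashMap<>();
        private final ConcurrentMap<String, Set<String>> relatedIds = new ConcurrentHashMap<>();

        // Serve an object from RAM if we have it; fall back to the database only on a miss.
        public Object get(String id, Database db) {
            return objectsById.computeIfAbsent(id, db::load);
        }

        // Record that two cached objects (e.g. a user and a project) are related,
        // so that invalidating one can also refresh the other.
        public void relate(String idA, String idB) {
            relatedIds.computeIfAbsent(idA, k -> ConcurrentHashMap.newKeySet()).add(idB);
            relatedIds.computeIfAbsent(idB, k -> ConcurrentHashMap.newKeySet()).add(idA);
        }

        // When an object changes, drop it and everything related to it from the cache.
        public void invalidate(String id) {
            objectsById.remove(id);
            Set<String> related = relatedIds.remove(id);
            if (related != null) {
                related.forEach(objectsById::remove);
            }
        }

        // Stand-in for whatever actually loads an object from the database.
        public interface Database {
            Object load(String id);
        }
    }

The essential point is that everything such a cache holds lives in the server’s RAM, which is why being able to choose instances with plenty of memory matters so much to us.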

Another technical challenge is maintaining the long-lived browser connections that a Web application like Kerika depends upon. Google App Engine has the Channel API, but that doesn’t quite cut it for an application like Kerika.

Kerika needs to maintain several channels concurrently, because users can be working with different combinations of objects at the same time.

Kerika also makes use of broadcasts as a simple way to keep multiple users up to date. Because of EC2’s open architecture, we can use CometD as an off-the-shelf solution for client communication.
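To sketch what that looks like in practice – this is illustrative code rather than our actual implementation, and the exact CometD signatures vary somewhat between CometD releases – the server can publish each change to a per-project channel and let CometD fan it out to every subscribed browser:

    import org.cometd.bayeux.server.BayeuxServer;
    import org.cometd.bayeux.server.ServerChannel;
    import org.cometd.server.AbstractService;

    // Illustrative sketch: broadcast a change on a per-project channel so that every
    // browser subscribed to that project receives it. Names are invented for this example.
    public class ProjectBroadcaster extends AbstractService {

        public ProjectBroadcaster(BayeuxServer bayeux) {
            super(bayeux, "projectBroadcaster");
        }

        public void broadcastChange(String projectId, Object change) {
            String channelName = "/project/" + projectId;
            ServerChannel channel =
                    getBayeux().createChannelIfAbsent(channelName).getReference();
            // CometD pushes the message to every client subscribed to /project/{id}.
            channel.publish(getServerSession(), change);
        }
    }

App Engine’s Channel API, by contrast, gives each browser its own channel and has no comparable broadcast primitive, so the server ends up pushing the same message once per connected client.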

Finally, Google App Engine’s business model, which calls for charging for two hours of channel time for each token, doesn’t make economic sense in an environment where users are expected to navigate from page to page through the course of the day. EC2’s “pay-as-you-go” pricing allows us to manage our traffic and keep operating costs down.

A surprisingly inelastic Elastic Load Balancer

Amazon’s Web Services are generally good, but they are not without flaws by any means, and in the past week we ran headlong into one of their limitations.

Some background, first: about a decade or two ago, people were very careful about how they typed in web addresses when using their browsers, because browsers weren’t very good at handling anything less than a perfectly formed URL. So, if you wanted to visit our website an eon ago, you probably would have typed in “http://www.kerika.com”. Ah, those were the days… You would direct people to “hot top dub dub dub kerika dot com”.

As lifeforms evolved, the “hot top” (did anyone really pronounce it as “hiptytipty colon whack whack dub dub dub dot domain dot com”?) got dropped, and we had the alliterative rhythm of “dub dub dub kerika dot com”.

From a branding perspective, however, having all that extra junk in front of your name isn’t very attractive, and we would rather be known as “Kerika.com” than “www.kerika.com”. After all, even in the crazy late 1990s when every company was guaranteed an immediate stock boost of 20% just by appending “.com” to their name, no one rushed to prefix their company names with “www.”

At Kerika, we decided that our canonical name of kerika.com was the way to go, which is why typing in “www.kerika.com” will land you at “kerika.com”. Setting that up is easy enough using the CNAME fields in DNS. So far, so hiptytipty…

The problem arose when we decided to use Amazon’s Elastic Load Balancer, so that we could have multiple servers (or, more precisely, multiple Amazon Machine Instances or AMIs) in the cloud, all hidden behind a single load balancer. It all sounded good in theory, but the problem came with routing all of our traffic to this ELB.

Getting www.kerika.com to point to the ELB was easy enough by changing the CNAME in DNS, but in trying to get kerika.com to also point to the ELB we fell into that gap that exists all too often between documentation and reality. But what really surprised us was finding out that the problem had been known to Amazon since May 2009, and it still hadn’t been fixed.
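For anyone who hasn’t run into this particular wall: DNS only allows a CNAME record on a subdomain such as www, never on the “zone apex” (the bare kerika.com), and an ELB is reachable only through its own DNS name rather than a stable IP address. Roughly speaking (the ELB hostname below is made up for illustration):

    ; www can be a CNAME pointing at the load balancer's DNS name:
    www.kerika.com.   IN  CNAME  my-elb-1234567890.us-east-1.elb.amazonaws.com.

    ; but the zone apex cannot be a CNAME:
    ; kerika.com.     IN  CNAME  my-elb-1234567890.us-east-1.elb.amazonaws.com.   <- not allowed

    ; and an A record needs a fixed IP address, which an ELB does not give you:
    ; kerika.com.     IN  A      <no fixed IP to put here>

So the bare domain is stuck unless the DNS service and the load balancer are integrated with each other – which is exactly the kind of integration Route 53 is promising.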

We can only guess why a known problem like this would remain unfixed for two years. The promised solution is supposed to be Route 53, Amazon’s new DNS for their cloud services. We heard an interesting presentation about Route 53 at Amazon’s premises a week ago; it was too bad we didn’t know then that our planned use of ELB would hit a brick wall, or we could have pestered the Amazonians about this problem. According to the Route 53 website:

In the future, we plan to add additional integration features such as the ability to automatically tie your Amazon Elastic Load Balancer instances to a DNS name, and the ability to route your customers to the closest EC2 region.

We are waiting…

Choosing between Google’s App Engine and Amazon’s EC2

Back in the summer, when we were making our technology choices, we had to decide which company was going to provide our “cloud”. It really boiled down to a decision between Google’s App Engine and Amazon’s EC2. Microsoft’s Azure service was still relatively new, and in talking to one of their product evangelists we got the distinct impression that Microsoft was targeting Azure for their biggest customers. (That was nice of their product evangelist: not taking us down the garden path if there was a mismatch between our needs and Microsoft’s goals!)

Here’s why we chose EC2 over App Engine:

  • Amazon is accessible: both Amazon and Google have engineering centers in our neighborhood, and Amazon’s is, of course, the much larger presence, but the real issue was which company was more accessible? Amazon does a lot of outreach to the startup community in the Seattle area – Jeff Barr is everywhere! – whereas Google is a lot more aloof. It’s much easier to find an engineer or product manager at Amazon who will talk to you, and that really makes a difference for a startup. Real people matter.
  • There was negative buzz about App Engine, at least back in August. People in the startup community were talking about suffering outages, and not getting very good explanations about what had gone wrong. Google’s aloofness didn’t help: there wasn’t anyone reaching out to the community to explain what was happening and there was little acknowledgment of problems coming through on Google’s official channels.
  • To beta or not to beta? Google’s rather lavish use of the “beta” label (remember the many years that Gmail was in beta?), and their recent decision to pull the plug on Wave added to our nervousness. Were they serious about App Engine, or was this another science experiment?
  • An infinite loop of web pages: in the absence of any physical contact with Google, we tried to learn about their service through their web pages – well, duh, of course, dude! The problem was that much of the information is presented in a series of rather small pages, with links to more small pages. As an approach to information architecture – particularly one that’s biased towards being “discoverable” by a search engine – this makes a lot of sense, and the Kerika website takes a similar approach. The risk with this approach is that you can very easily find that you are following links that take you right back to where you started. (And our website runs the same risk; we look forward to your comments detailing our failings in this regard!)

While we were pondering this choice, Google came out with a rather interesting offer: unlimited free storage for a select handful of developers. (How select? Probably as select as one of Toyota’s Limited Edition cars, production of which is strictly limited to the number they can sell.) The offer was presented in the usual mysterious way: you went to an unadvertised website and made an application. Someone would let you know “soon” whether you would get past the velvet rope.

We thought we had a good chance of getting included in the program: after all, Kerika was designed to add value to Google Docs, and we were using a number of Google’s services, including their OpenID for registration. And, indeed, we were selected!

The only problem was that the reply came about 4 weeks later: an eon in startup time. By that time, we had already decided to go with EC2…

(Is EC2 perfect? No, not by a long shot, and we were particularly annoyed to find a rather fatal flaw – from our perspective – in the way their Elastic Load Balancer can be configured with canonical web addresses: a problem that’s been noted for 2 years already. Not cool to sit on a problem for 2 years, guys. Not cool at all.)

The development environment that’s worked for us

In our last post, we referred to our “dogfooding” of Kerika by using it for our own testing process. Here’s some more information about how we set up our software development environment…

Our Requirements:

  • We needed to support a distributed team, of course. And our distributed team had a variety of preferences when it came to operating systems: there were folks that liked using Windows, folks that had only Macs, folks that used only Linux, and folks that liked using Linux as virtual machines running inside Windows… Since we were developing a Web application, it didn’t make sense – neither from a philosophical perspective nor from a business strategy perspective – to insist that everybody use the same operating system. If we were going to build for the browser, we should be able to use any operating system we preferred: our output was, after all, going to be Java, Javascript and SVG.
  • We needed to deploy our software “in the cloud”, and that meant making sure our software could run well in a Linux environment. Microsoft’s Azure service was still relatively new, and, in any case, our back-end developers were more comfortable with Java than ASP.Net, so that fixed Java on Linux as a back-end requirement.
  • Our software had to run in all browsers: that effectively meant “modern browsers”, that happy euphemism for browsers that would conform to HTML5 standards. When we started development, we were uncertain about how Internet Explorer 9, which was still in beta, would evolve, but we have been very pleasantly surprised by the evolution of that product. (And a discussion in December with Ziad Ismail, IE9’s Director of Product Management, has since further reinforced our belief that Microsoft is very serious about standards compliance in IE9.)
  • We decided early on to build our front-end in Javascript and SVG, rather than Flash. We know of very few applications, even today, that attempt to deliver the kind of user interface combined with real-time networking that Kerika does using Javascript, and it’s not that we like crazy challenges because we are crazy people – we just didn’t want to get caught in the crossfire between Apple and Adobe over Flash. Having to choose between Caradhras and the Mines of Moria, like Gandalf we chose to brave the Mines…

Our choices:

  • Amazon’s EC2 and RDS for creating test and production environments; A2 hosting for our development environment. The choice was between EC2 and Google App Engine, and we will talk more in a separate blog post about why we chose Amazon over Google.
  • Eclipse for the developer’s IDE: it was cross-platform, and since everyone wanted to stick with their favorite operating system, it was Eclipse that straddled them all.
  • Jetty for the web server: the choice was between Apache and Jetty, and we went with Jetty. Why? Well, that’s a blog post in itself…
  • Git for source code control: we looked at Subversion, but Git is better suited for distributed teams.
  • Maven for build management: once we decided on Eclipse, Git and Jetty, Maven kind of fell in place as a natural choice, since it worked well with all of the above. (There’s a small illustration of how this fits together just after this list.)
  • Bugzilla for bug tracking: it’s a great tool, with a simple, flexible interface and it has matured very nicely over the past decade.

All of this has worked out quite well, and we would recommend this stack for other startups that are looking to develop Web applications.