Tag Archives: AWS

About our use of Amazon Web Services.

The nature of “things”: Mukund Narasimhan’s talk at Facebook

Mukund Narasimhan, an alumnus of Microsoft and Amazon who is currently a software engineer with Facebook in Seattle, gave a very interesting presentation at Facebook’s Seattle offices this week. It was a right-sized crowd, in a right-sized office: just enough people, and the right sort of people to afford interesting conversations, and a very generous serving of snacks and beverages from Facebook in an open-plan office that had the look-and-feel of a well-funded startup.

Mukund isn’t posting his slides, and we didn’t take notes, so this isn’t an exhaustive report of the evening, but rather an overview of some aspects of the discussion.

Mukund’s talk was principally about data mining and the ways that Facebook is able to collate and curate vast amounts of data to create metadata about people, places, events and organizations. Facebook uses a variety of signals, the most important of which is user input, to deduce the “nature of things”: i.e. is this thing (entity) a place? Is this place a restaurant? Is this restaurant a vegetarian restaurant?

Some very powerful data mining techniques have been developed already, and this was illustrated most compellingly by a slide showing a satellite image of a stadium: quite close to the stadium, on adjacent blocks, were two place markers. These marked the “official location” of the stadium, as told to Facebook by two separate data providers. The stadium itself was pin-pricked with a large number of dots, each marking a spot where a Facebook user had “checked in” to Facebook’s Places.

The visual contrast was dramatic: the official data providers had each estimated the position of the stadium to within a hundred yards, but the checkins of the Facebook users had perfectly marked the actual contours of the stadium.

Mukund’s talk was repeatedly interrupted by questions from the audience, since each slide offered a gateway to an entire topic, and eventually he ran out of time and we missed some of the material.

During the Q&A, Nikhil George from Mobisante asked a question that seemed highly relevant: was Facebook trying to create a “semantic Web”? Mukund sidestepped the question adroitly by pointing out that there is no established definition of the term semantic Web, and that is certainly true – the Wikipedia article linked above is tagged as having “multiple issues” – but while Facebook may not be using the specific protocols and data formats that some might argue are indispensable to semantic Webs, one could certainly make the case that deducing the nature of things, particularly the nature of things that exist on the Internet, is the main point of creating a semantic Web.

While much of the Q&A was about technical matters, a more fundamental question occupied our own minds: at the outset, Mukund asserted that the more people interact with (and, in particular, contribute to) Facebook, the better the Facebook experience is for them and all of their friends.

This, clearly, is the underpinning of Facebook’s business model: people must continue to believe, on a rational basis, that contributing data to Facebook – so that Facebook can continue to deduce the nature of things and offer these things back to their users to consume – offers some direct, reasonably tangible rewards for them and their friends.

Presumably, then, Facebook must be taking very concrete steps to measure this sense of reward that their users experience; we didn’t hear much about that last night since that wasn’t Mukund’s area of focus, but it must surely be well understood within Facebook that the promise of continued reward for continued user interaction – which is essentially their brand promise – must be kept at all times. Has much research been done in this area? (There is, of course, the outstanding research done by danah boyd on social networks in general.)

At a more practical level, a question that bedevils us is how we can improve the signal:noise ratio in our Facebook Wall! In our physical worlds, we all have some friends that are exceptionally chatty: some of them are also very witty, which makes their chatter enjoyable, but some are just chatty in a very mundane way. In our Facebook worlds, it is very easy for the chatty people (are they also exceptionally idle or under-employed?) to dominate the conversation.

In a physical world, if we find ourselves cornered by a boor at a party we would quickly, determinedly sidle away and find someone more interesting to talk to, but how does one do that in Facebook? One option, offered by Mukund, would be to turn off their posts, which seems rather like “unfriending” them altogether. But we don’t want to unfriend these people altogether, we just don’t want to hear every detail of every day.

Mukund suggested that by hiding individual posts, as well as “liking” others more aggressively, we could send clearer indications of our preferences to Facebook that would help the system improve the signal:noise ratio, and that’s what we have been trying over the past few days.

It is an intriguing topic to consider, and undoubtedly a difficult problem to solve, because you need to weed out individual messages rather than block entire users. For example, one Facebook friend used an intermission of the movie “Thor” to report that he was enjoying the movie. It’s great that he is enjoying the movie, but this low-value update spawned a low-value thread all of its own. We don’t want this person blocked altogether; we need some way of telling Facebook that updates sent during movie intermissions are not very valuable. If Facebook misinterprets our signals and relegates him to the dustbin, we might miss a more useful notification in the future, such as a major life event or career move.

The problem seems analogous to developing a version of Google’s PageRank, but at the message level. In the example above, if a post sent during a movie intermission is of low value, it would affect the ranking of everyone who piled on to create its low-value thread.
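To make the analogy concrete, here is a toy sketch of message-level ranking. The function names, weights, and scoring scheme are entirely our own invention – nothing here reflects how Facebook actually ranks posts – but it shows how a low-value post could drag down the scores of everyone who piled onto its thread:

```javascript
// Toy sketch of message-level ranking (our invention, not Facebook's
// algorithm): each post carries a value score, and a fraction of that
// score flows to everyone who commented on the thread.

function scorePosts(posts) {
  // posts: [{ author, value, commenters: [names] }]
  const authorScore = new Map();
  const bump = (name, delta) =>
    authorScore.set(name, (authorScore.get(name) || 0) + delta);

  for (const post of posts) {
    bump(post.author, post.value);   // the author gets full credit (or blame)
    for (const c of post.commenters) {
      bump(c, post.value * 0.5);     // pile-on participants get half
    }
  }
  return authorScore;
}
```

With weights like these, the “Thor intermission” post would lower not only its author’s ranking but also, more gently, the ranking of everyone who replied to it.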

People like us who are more technical would probably prefer a more direct way of manipulating (i.e. correcting) the signal:noise ratio. Presumably someone at Facebook is working, even as we write this blog, on some visual tools that provide sliders or other ways for people to rank their friends and notifications. One idea that comes to mind might be a sort of interactive tag cloud that shows you who posts the most, but which also lets you promote or demote individual friends.

Some email clients and collaboration tools assume that people who email you the most are the ones that matter the most, but with a social network, wouldn’t this bias have the opposite effect? Wouldn’t the most chatty people be the ones who have the least to say that’s worth hearing?

One piece of good news from Mukund is that Facebook is working on a translator: one of our closest friends is a Swede, and his posts are all in Swedish which makes them incomprehensible. Having a built-in translator will certainly make Facebook more useful for people with international networks, although it will be very interesting indeed to see how Facebook’s translation deals with the idiosyncrasies of slang and idiom.

Update: we got it wrong. Facebook isn’t working on a translator; Mukund was referring to a third-party application.

One topic that particularly intrigued us, but which we couldn’t raise with Mukund for lack of time, was Paul Adams’s monumental Slideshare presentation on the importance of groups within social networks. Paul argues that people tend to have between four and six groups, each of which tends to have between two and ten people. (This is based upon the research behind Dunbar’s Number, which posits that there is a limit to the number of friends with whom one can form lasting relationships, and that this number is around 150.)

Facebook still doesn’t have groups, which is surprising since Paul Adams decamped from Google to Facebook soon after making his presentation available online. It is a massive presentation, but fascinating material and surprisingly light reading: just fast-forward to slide 64 and the picture there sums up the entire presentation rather well.

Update: we got that wrong, too. Facebook does have a groups product, and it is becoming increasingly popular within the company itself.

All in all, one of the most enjoyable presentations we have attended in recent days. Mukund needs special commendation for his fortitude and confident good humor in standing up before a savvy crowd and braving any and all questions about Facebook’s past, present and future.

Seattle’s “evening tech scene” is really getting interesting these days: perhaps we are seeing the workings of a virtuous cycle where meetups and other events start to “up their game”!

First, we kill all the managers. (Or, maybe not.)

In earlier posts we described our recent decision to choose Amazon’s EC2 over Google App Engine, and as part of the business perspective we noted that:

Amazon is accessible: both Amazon and Google have engineering centers in our neighborhood, and Amazon’s is, of course, the much larger presence, but the real issue was which company was more accessible? Amazon does a lot of outreach to the startup community in the Seattle area – Jeff Barr is everywhere! – whereas Google is a lot more aloof. It’s much easier to find an engineer or product manager at Amazon who will talk to you, and that really makes a difference for a startup. Real people matter.

Events in the past week have reassured us that we made the right choice. The very end of that post on choosing EC2 included a passing grumble about some long-standing problems with Amazon’s Elastic Load Balancer.

What happened following that blog post was a pleasant surprise: an old colleague saw the post on LinkedIn and forwarded it to someone at Amazon, who promptly contacted Jeff Barr, the peripatetic and indefatigable evangelist for Amazon Web Services.

And within just one day after our griping, Jeff asked for a meeting with us to see how Amazon could make things better for us. This level of responsiveness was, frankly, astonishing when one considers the size of Kerika (rather tiny) and the size of Amazon (rather Amazonian).

We met Jeff today, and are happy to report that this ELB issue will soon go away for us and everyone else. But that was a minor aspect of the meeting: much of our discussion was a wide-ranging conversation about Kerika, collaboration platforms and the many uses for cloud computing in different market segments.

And the best part was finding out that Jeff had taken the trouble, earlier in the day, to try out Kerika for himself.

Returning from the meeting, we couldn’t help but reflect upon cultures of the two companies that we are relying upon for our business model: Amazon and Google. Amazon is providing the cloud computing infrastructure, and Google is providing the OpenID authentication and, more importantly, the Google Docs suite that is integral to the Kerika product offering. Both sets of technologies are essential to Kerika’s success.

But the cultures of these two companies are clearly different: Amazon relies on both technology and people, whereas Google, it would appear, is purely an online presence. (To be fair, we must note that while both companies have engineering centers in the Seattle area, Amazon’s footprint is many times larger than Google’s.)

But, still: who could we have tried to reach at Google? Who is the public face of Google’s App Engine? Or, of Google Docs, for that matter? There are two substantial Google engineering centers nearby, but Google the company remains a cloud of distributed intelligence – much like their servers – accessible only via HTTP.

A recent article at All Things D speculates that Larry Page may eliminate Google’s managers in large numbers, in order to free the animal spirits of the engineers, and a similar article on TechCrunch notes that “Page famously has a low opinion of managers, especially product managers who try to tell engineers what to do.”

There can be no gainsaying Google’s engineering talents, or its remarkable achievements, but will Google 3.0 be an organization that has even less human contact with its customers and partners?

Why we chose Amazon’s EC2 over Google’s App Engine: the CTO’s Perspective

A previous post discussed our decision to use Amazon’s EC2 service over Google’s App Engine and Microsoft’s Azure service primarily from a business perspective: the risks we weighed when we made our technology choices last August, and why the decision went in favor of EC2.

(We have also noted previously – lest anyone consider us uncritical fanboys! – that there are limitations with Amazon’s Web Services that have given us some heartburn.)

In today’s post, we present the CTO’s perspective: why EC2 was more attractive from a technology perspective:

The advantage of using Amazon’s EC2 over Google App Engine and Microsoft Azure is that a highly interactive web application such as Kerika needs multiple levels of data caching, and EC2 gives us the freedom to build them.

Kerika maintains a cache within the browser so that it can avoid round trips to the server when the user moves objects around on the screen. Without that cache, the delays would be so long that users wouldn’t feel they were using a highly responsive desktop application.
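A minimal sketch of that client-side pattern (the class and method names here are illustrative, not Kerika’s actual code): apply moves to the local copy immediately, and only go to the server when an object isn’t cached yet:

```javascript
// Illustrative browser-side object cache (not Kerika's actual code):
// position updates are applied locally for instant feedback, and the
// server is consulted only when an object isn't in the cache yet.

class ObjectCache {
  constructor(fetchFromServer) {
    this.objects = new Map();
    this.fetchFromServer = fetchFromServer; // async fallback on a cache miss
  }

  async get(id) {
    if (!this.objects.has(id)) {
      // The only round trip: first access of an uncached object.
      this.objects.set(id, await this.fetchFromServer(id));
    }
    return this.objects.get(id);
  }

  move(id, x, y) {
    // No round trip: mutate the cached copy so the UI stays responsive,
    // and let a background sync push the change to the server later.
    const obj = this.objects.get(id);
    if (obj) { obj.x = x; obj.y = y; }
    return obj;
  }
}
```

The point of the sketch is the asymmetry: reads can tolerate one initial fetch, but interactive moves must never wait on the network.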

There also needs to be a cache on the server side, and the most compelling reason for this is to reduce load on the database.

There are various ways to scale up a database, but none are as effective as simply minimizing the load on the database in the first place. A server-side cache is a simple way to do that. Because Kerika supports relationships between users, projects, idea pages, and so on, the server side cache has to maintain those relationships.
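One way to sketch a relationship-aware cache (again, our own illustration rather than Kerika’s implementation) is to track which cached entries embed which others, so that evicting one entity cascades to everything that depends on it:

```javascript
// Sketch of a server-side cache that tracks relationships (our
// illustration, not Kerika's implementation): invalidating a user
// also invalidates the cached projects that embed that user, so the
// database is re-read only for entries that actually went stale.

class RelationalCache {
  constructor() {
    this.entries = new Map();    // key -> cached value
    this.dependents = new Map(); // key -> Set of keys that embed it
  }

  put(key, value, dependsOn = []) {
    this.entries.set(key, value);
    for (const dep of dependsOn) {
      if (!this.dependents.has(dep)) this.dependents.set(dep, new Set());
      this.dependents.get(dep).add(key);
    }
  }

  get(key) { return this.entries.get(key); }

  invalidate(key) {
    this.entries.delete(key);
    for (const dependent of this.dependents.get(key) || []) {
      this.invalidate(dependent); // cascade through the relationship graph
    }
    this.dependents.delete(key);
  }
}
```

The cascade is what makes the cache safe to trust: a stale project can never outlive the user record it was built from.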

The Kerika server also must keep track of the state of the caches on its clients, so that it can keep each client current and thus avoid unnecessary round trips to the server.

As a consequence of all this, Kerika uses servers with a large amount of RAM that needs to be quickly accessible for all of its users. Storing a large amount of data in RAM is where EC2 becomes the only way to solve the problem: because App Engine and Azure do not allow us to manage large amounts of RAM, they just weren’t good solutions for Kerika.

Another technical challenge is maintaining the long-lived browser connections that a Web application like Kerika depends upon. Google App Engine has the Channel API, but that doesn’t quite cut it for an application like Kerika.

Kerika needs to maintain several channels concurrently because users can be working with different combinations of objects at the same time.

Kerika also makes use of broadcasts as a simple way to keep multiple users up to date. Because of EC2’s open architecture, we can use CometD as an off-the-shelf solution for client communication.
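The broadcast pattern itself is simple enough to sketch. CometD provides this over HTTP with the Bayeux protocol in production; the toy bus below only shows the shape of the idea, and the channel name in the usage is one we made up:

```javascript
// Minimal publish/subscribe sketch of the broadcast pattern (CometD
// does this over HTTP in production; this toy in-memory bus only
// illustrates the idea of fan-out on named channels).

class Broadcaster {
  constructor() { this.channels = new Map(); }

  subscribe(channel, handler) {
    if (!this.channels.has(channel)) this.channels.set(channel, []);
    this.channels.get(channel).push(handler);
  }

  publish(channel, message) {
    // Every subscriber on the channel hears the update: this is how
    // one user's edit can fan out to all collaborators on a page.
    for (const handler of this.channels.get(channel) || []) {
      handler(message);
    }
  }
}
```

One publish, many listeners: that single property is what keeps every collaborator’s view of a project in sync without the server addressing each client individually.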

Finally, Google App Engine’s business model, which calls for charging two hours for each token, doesn’t make economic sense in an environment where users are expected to navigate from page to page through the course of the day. EC2’s “pay-as-you-go” model allows us to manage our traffic and keep operating costs down.

A surprisingly inelastic Elastic Load Balancer

Amazon’s Web Services are generally good, but they are not without flaws by any means, and in the past week we ran headlong into one of their limitations.

Some background, first: about a decade or two ago, people were very careful about how they typed in web addresses when using their browsers, because browsers weren’t very good at handling anything less than a perfectly formed URL. So, if you wanted to visit our website an eon ago, you probably would have typed in “http://www.kerika.com”. Ah, those were the days… You would direct people to “hot top dub dub dub kerika dot com”.

As lifeforms evolved, the “hot top” (did anyone really pronounce it as “hiptytipty colon whack whack dub dub dub dot domain dot com”?) got dropped, and we had the alliterative rhythm of “dub dub dub kerika dot com”.

From a branding perspective, however, having all that extra junk in front of your name isn’t very attractive, and we would rather be known as “Kerika.com” than “www.kerika.com”. After all, even in the crazy late 1990s when every company was guaranteed an immediate stock boost of 20% just by appending “.com” to their name, no one rushed to prefix their company names with “www.”

At Kerika, we decided that our canonical name of kerika.com was the way to go, which is why typing in “www.kerika.com” will land you at “kerika.com”. Setting that up is easy enough using the CNAME fields in DNS. So far, so hiptytipty…

The problem arose when we decided to use Amazon’s Elastic Load Balancer, so that we could have multiple servers (or, more precisely, multiple Amazon Machine Instances or AMIs) in the cloud, all hidden behind a single load balancer. It all sounded good in theory, but the problem came with routing all of our traffic to this ELB.

Getting www.kerika.com pointing to the ELB was easy enough by changing the CNAME in DNS, but in trying to get kerika.com to also point to the ELB we fell into that gap that exists all too often between documentation and reality. But what really surprised us was finding out that the problem had been known to Amazon since May 2009, and it still hadn’t been fixed.
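The underlying DNS rule, as we understand it, is that a CNAME cannot coexist with the other records that must live at a zone apex, so a zone file can alias the www host but not the bare domain (the ELB hostname below is made up for illustration):

```
; Illustrative zone fragment -- the ELB hostname is a placeholder.
www.kerika.com.  IN  CNAME  my-elb-123456.us-east-1.elb.amazonaws.com.  ; fine
kerika.com.      IN  CNAME  my-elb-123456.us-east-1.elb.amazonaws.com.  ; invalid at the apex
kerika.com.      IN  A      203.0.113.10  ; only a fixed IP works at the apex,
                                          ; but an ELB's IPs change over time
```

That last line is the trap: you can put a plain A record at the apex, but an ELB doesn’t give you a stable IP address to put there.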

We can only guess why a known problem like this would remain unfixed for two years. The promised solution is supposed to be Route 53, Amazon’s new DNS for their cloud services. We heard an interesting presentation about Route 53 at Amazon’s premises a week ago; it was too bad we didn’t know then that our planned use of ELB would hit a brick wall or we could have pestered the Amazonians about this problem. According to the Route 53 website:

In the future, we plan to add additional integration features such as the ability to automatically tie your Amazon Elastic Load Balancer instances to a DNS name, and the ability to route your customers to the closest EC2 region.

We are waiting…

Choosing between Google’s App Engine and Amazon’s EC2

Back in the summer, when we were making our technology choices, we had to decide which company was going to provide our “cloud”. It really boiled down to a decision between Google’s App Engine and Amazon’s EC2. Microsoft’s Azure service was still relatively new, and in talking to one of their product evangelists we got the distinct impression that Microsoft was targeting Azure for their biggest customers. (That was nice of their product evangelist: not taking us down the garden path if there was a mismatch between our needs and Microsoft’s goals!)

Here’s why we chose EC2 over App Engine:

  • Amazon is accessible: both Amazon and Google have engineering centers in our neighborhood, and Amazon’s is, of course, the much larger presence, but the real issue was which company was more accessible? Amazon does a lot of outreach to the startup community in the Seattle area – Jeff Barr is everywhere! – whereas Google is a lot more aloof. It’s much easier to find an engineer or product manager at Amazon who will talk to you, and that really makes a difference for a startup. Real people matter.
  • There was negative buzz about App Engine, at least back in August. People in the startup community were talking about suffering outages, and not getting very good explanations about what had gone wrong. Google’s aloofness didn’t help: there wasn’t anyone reaching out to the community to explain what was happening and there was little acknowledgment of problems coming through on Google’s official channels.
  • To beta or not to beta? Google’s rather lavish use of the “beta” label (remember the many years that Gmail was in beta?), and their recent decision to pull the plug on Wave added to our nervousness. Were they serious about App Engine, or was this another science experiment?
  • An infinite loop of web pages: in the absence of any physical contact with Google, we tried to learn about their service through their web pages – well, duh, of course, dude! The problem was that much of the information is presented in a series of rather small pages, with links to more small pages. As an approach to information architecture – particularly one that’s biased towards being “discoverable” by a search engine – this makes a lot of sense, and the Kerika website takes a similar approach. The risk with this approach is that you can very easily find that you are following links that take you right back to where you started. (And our website runs the same risk; we look forward to your comments detailing our failings in this regard!)

While we were pondering this choice, Google came out with a rather interesting offer: unlimited free storage for a select handful of developers. (How select? Probably as select as one of Toyota’s Limited Edition cars, production of which is strictly limited to the number they can sell.) The offer was presented in the usual mysterious way: you went to an unadvertised website and made an application. Someone would let you know “soon” whether you would get past the velvet rope.

We thought we had a good chance of getting included in the program: after all, Kerika was designed to add value to Google Docs, and we were using a number of Google’s services, including their OpenID for registration. And, indeed, we were selected!

The only problem was that the reply came about 4 weeks later: an eon in startup time. By that time, we had already decided to go with EC2…

(Is EC2 perfect? No, not by a long shot, and we were particularly annoyed to find a rather fatal flaw – from our perspective – in the way their Elastic Load Balancer can be configured with canonical web addresses: a problem that’s been noted for 2 years already. Not cool to sit on a problem for 2 years, guys. Not cool at all.)

The development environment that’s worked for us

In our last post, we referred to our “dogfooding” Kerika by using it for our testing process. Here’s some more information about how we set up our software development environment…

Our Requirements:

  • We needed to support a distributed team, of course. And our distributed team had a variety of preferences when it came to operating systems: there were folks that liked using Windows, folks that had only Macs, folks that used only Linux, and folks that liked using Linux as virtual machines running inside Windows… Since we were developing a Web application, it didn’t make sense – neither from a philosophical perspective nor from a business strategy perspective – to insist that everybody use the same operating system. If we were going to build for the browser, we should be able to use any operating system we preferred: our output was, after all, going to be Java, Javascript and SVG.
  • We needed to deploy our software “in the cloud”, and that meant making sure our software could run well in a Linux environment. Microsoft’s Azure service was still relatively new, and, in any case, our back-end developers were more comfortable with Java than ASP.Net, so that fixed Java on Linux as a back-end requirement.
  • Our software had to run in all browsers: that effectively meant “modern browsers”, that happy euphemism for browsers that would conform to HTML5 standards. When we started development, we were uncertain about how Internet Explorer 9, which was still in beta, would evolve, but we have been very pleasantly surprised by the evolution of that product. (And a discussion in December with Ziad Ismail, IE9’s Director of Product Management, has since further reinforced our belief that Microsoft is very serious about standards compliance in IE9.)
  • We decided early on to build our front-end in Javascript and SVG, rather than Flash. We know of very few applications, even today, that attempt to deliver the kind of user interface combined with real-time networking that Kerika does using Javascript, and it’s not that we like crazy challenges because we are crazy people – we just didn’t want to get caught in the crossfire between Apple and Adobe over Flash. Having to choose between Caradhras and the Mines of Moria, like Gandalf we chose to brave the Mines…

Our choices:

  • Amazon’s EC2 and RDS for creating test and production environments; A2 hosting for our development environment. The choice was between EC2 and Google App Engine, and we will talk more in a separate blog post about why we chose Amazon over Google.
  • Eclipse for the developer’s IDE: it was cross-platform, and since everyone wanted to stick with their favorite operating system, it was Eclipse that straddled them all.
  • Jetty for web server: the choice was between Apache and Jetty, and we went with Jetty. Why? Well, that’s a blog post in itself…
  • Git for source code control: we looked at Subversion, but Git is better suited for distributed teams.
  • Maven for build management: once we decided on Eclipse, Git and Jetty, Maven kind of fell in place as a natural choice, since it worked well with all of the above.
  • Bugzilla for bug tracking: it’s a great tool, with a simple, flexible interface and it has matured very nicely over the past decade.
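To give a flavor of how the Eclipse/Git/Jetty/Maven pieces fit together, a pom.xml can launch the app under Jetty for local testing. Treat the coordinates and version below as placeholders – they vary by Jetty release, so check the plugin’s own documentation before copying this:

```xml
<!-- Illustrative fragment only: plugin coordinates and versions differ
     across Jetty releases, so these values are placeholders. -->
<build>
  <plugins>
    <plugin>
      <groupId>org.mortbay.jetty</groupId>
      <artifactId>maven-jetty-plugin</artifactId>
      <version>6.1.26</version>
      <configuration>
        <!-- Rescan for changed classes so edits show up without a restart -->
        <scanIntervalSeconds>5</scanIntervalSeconds>
      </configuration>
    </plugin>
  </plugins>
</build>
```

With something like this in place, `mvn jetty:run` serves the web app straight from the working tree, which is what makes the stack comfortable for a distributed team on mixed operating systems.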

All of this has worked out quite well, and we would recommend this stack for other startups that are looking to develop Web applications.