First, we kill all the managers. (Or, maybe not.)

In earlier posts we described our decision to choose Amazon’s EC2 over Google App Engine, and as part of the business perspective we noted that:

Amazon is accessible: both Amazon and Google have engineering centers in our neighborhood, and Amazon’s is, of course, the much larger presence, but the real issue was which company was more accessible? Amazon does a lot of outreach to the startup community in the Seattle area – Jeff Barr is everywhere! – whereas Google is a lot more aloof. It’s much easier to find an engineer or product manager at Amazon who will talk to you, and that really makes a difference for a startup. Real people matter.

Events in the past week have reassured us that we made the right choice. The very end of that post on choosing EC2 included a passing grumble about some long-standing problems with Amazon’s Elastic Load Balancer.

What happened following that blog post was a pleasant surprise: an old colleague saw the post on LinkedIn and forwarded it to someone at Amazon, who promptly contacted Jeff Barr, the peripatetic and indefatigable evangelist for Amazon Web Services.

And within just one day after our griping, Jeff asked for a meeting with us to see how Amazon could make things better for us. This level of responsiveness was, frankly, astonishing when one considers the size of Kerika (rather tiny) and the size of Amazon (rather Amazonian).

We met Jeff today, and are happy to report that this ELB issue will soon go away for us and everyone else. But that was a minor aspect of the meeting: much of our discussion was a wide-ranging conversation about Kerika, collaboration platforms and the many uses for cloud computing in different market segments.

And the best part was finding out that Jeff had taken the trouble, earlier in the day, to try out Kerika for himself.

Returning from the meeting, we couldn’t help but reflect upon the cultures of the two companies that we are relying upon for our business model: Amazon and Google. Amazon provides the cloud computing infrastructure, and Google provides the OpenID authentication and, more importantly, the Google Docs suite that is integral to the Kerika product offering. Both sets of technologies are essential to Kerika’s success.

But the cultures of these two companies are clearly different: Amazon relies on both technology and people, whereas Google, it would appear, is purely an online presence. (To be fair, we must note that while both companies have engineering centers in the Seattle area, Amazon’s footprint is many times larger than Google’s.)

But, still: who could we have tried to reach at Google? Who is the public face of Google’s App Engine? Or of Google Docs, for that matter? There are two substantial Google engineering centers nearby, but Google the company remains a cloud of distributed intelligence – much like their servers – accessible only via HTTP.

A recent article at All Things D speculates that Larry Page may eliminate Google’s managers in large numbers, in order to free the animal spirits of the engineers, and a similar article on TechCrunch notes that “Page famously has a low opinion of managers, especially product managers who try to tell engineers what to do.”

There can be no gainsaying Google’s engineering talents, or its remarkable achievements, but will Google 3.0 be an organization that has even less human contact with its customers and partners?

Jakob Nielsen’s Powers of Ten Principle

We like to think that we have done a fairly good job in terms of designing our user interface. A major influence was the writings of Jakob Nielsen: Kerika’s CEO had the good fortune to meet Mr. Nielsen at a conference in the mid-90s, when corporate America was slowly waking up to the reality – and permanence – of the Web, and Mr. Nielsen had just been laid off from Sun Microsystems, which, in its infinite wisdom, had decided to get rid of its entire Advanced Technology Group as a cost-saving measure.

Mr. Nielsen fast became a popular speaker at the few Web conferences that were held in those early days, and one could listen to his speeches practically for free. (Now, we understand, it costs about $15,000 to get Mr. Nielsen’s attention for a single day…) Mr. Nielsen went on to found the Nielsen Norman Group with Donald Norman, who had done pioneering work on product design, and he created a simple newsletter-based website (www.useit.com) that remains a wonderful source of research on Web usability.

What was remarkable about Mr. Nielsen’s approach then, and what we think is still a relatively rare quality among the many design pundits of today, is his rigorous emphasis on scientific observation and testing. Mr. Nielsen has never given the impression of being someone who relies very much on his instincts when it comes to design; he has always emphasized the need for usability testing.

Too many other “pundits” – and here we use that word in the American sense of a talking head, rather than the Indian sense of a priest or wise man – rely upon what they believe, often erroneously, to be a superior design aesthetic, which they deftly package with enough jargon to make it appear more like fact than opinion.

(Today, we have a beta version of Kerika that we are using to gather usability data: we are sitting down with our initial users, directly observing their reactions – their many sources of confusion and occasional moments of delight – to fine-tune our user interface. We believe we have done a good job on the main aspects of the design, but there are many rough edges that we still need to sand down to get the “fit-and-finish” just right. So, while we are immensely proud of what we have accomplished – and the tens of thousands of lines of Javascript and Java code we have written in a remarkably short period of time – we are only too aware of our shortcomings as well…)

Mr. Nielsen writes mostly about website usability, but his observations and principles are very apropos to the design of Web applications as well. We have tried to incorporate his suggestions on perceived affordance and information scent in the various elements of our user interface, but when we expand the discussion from UI to UX – from user interface to user experience – it is clear that performance is a key contributor to an overall good experience.

In his article on the “Powers of Ten” principle, Mr. Nielsen points out that 0.1 second is the response-time limit if you want users to feel like their actions are directly causing something to happen on the screen. If a system responds within 0.1 seconds, the system essentially disappears from view: the user believes he or she is directly manipulating the objects on the screen.

(A simple analogy is the mouse: when one moves the mouse, the cursor moves immediately on the screen, which is why it is so easy to learn to use a mouse: one quickly forgets its presence altogether, and concentrates upon looking at the screen instead. We often see people who hunt-and-peck at their keyboards; when was the last time you saw someone look down at their mouse to make sure they were moving it correctly?)

To that end, we have tried to ensure that most user interactions in Kerika fall within the 0.1-second limit: when you add an item to a page, or move it, or delete it, it happens instantly.

Next up is the 1 second time limit, and here we quote Mr. Nielsen:

When the computer takes more than 0.1 second but less than 1 second to respond to your input, it feels like the computer is causing the result to appear. Although users notice the short delay, they stay focused on their current train of thought during the one-second interval. This means that during 1-second response times, users retain the feeling of being in control of the interaction even though they notice that it’s a 2-way interaction (between them and the computer). By contrast, with 0.1 second response times, users simply feel like they’re doing something themselves.

In the Kerika user interface, there are moments when a user will experience a 1 second response time, although not very often: most commonly, this happens when we are waiting for an external website to respond. For example, if you have built a “video wall” of YouTube videos, you may have to wait a second (or two or three) for YouTube to respond when you decide to play a video. This, regrettably, is out of our control. But for the parts of the user interface that are within our control, we have tried to stay within the 1 second time limit.

After 1 second, users get impatient and notice that they’re waiting for a slow computer to respond. The longer the wait, the more this impatience grows; after about 10 seconds, the average attention span is maxed out. At that point, the user’s mind starts wandering and doesn’t retain enough information in short-term memory to easily resume the interaction once the computer finally loads the next screen. More than 10 seconds, and you break the flow.

Nothing, in Kerika, breaks the flow.

Why we chose Amazon’s EC2 over Google’s App Engine: the CTO’s Perspective

A previous post discussed our decision to use Amazon’s EC2 service over Google’s App Engine and Microsoft’s Azure service primarily from a business perspective: the risks we weighed when we made our technology choices last August, and why the decision went in favor of EC2.

(We have also noted previously – lest anyone consider us uncritical fanboys! – that there are limitations with Amazon’s Web Services that have given us some heartburn.)

In today’s post, we present the CTO’s perspective: why EC2 was more attractive from a technology standpoint:

The advantage of Amazon’s EC2 over Google App Engine and Microsoft Azure comes down to caching: a highly interactive web application such as Kerika needs multiple levels of data caching.

Kerika maintains a cache within the browser so that it can avoid round trips to the server when the user moves objects around on the screen. Without that cache, the delays would be long enough that users wouldn’t feel they were using a highly responsive, desktop-like application.

There also needs to be a cache on the server side, and the most compelling reason for this is to reduce load on the database.

There are various ways to scale up a database, but none are as effective as simply minimizing the load on the database in the first place. A server-side cache is a simple way to do that. Because Kerika supports relationships between users, projects, idea pages, and so on, the server-side cache has to maintain those relationships.

The Kerika server also has to keep track of the state of each client’s cache, so that it can keep clients current and avoid unnecessary round trips to the server.
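
To make this concrete, here is a small sketch – written in modern Java, with entirely hypothetical class and method names, so please don’t read it as Kerika’s actual code – of the two server-side ideas described above: a read-through cache that shields the database, and a per-client record of which version of each object a browser has already seen, so that the server can push small deltas instead of full snapshots.

    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    // Sketch of a server-side object cache plus per-client state tracking.
    // All names are hypothetical; a real implementation also has to handle
    // eviction, write-through, and the relationships between objects.
    public class ObjectCache {

        // Hypothetical supporting types.
        public interface Database { CachedObject load(String id); }
        public interface CachedObject { String getId(); long getVersion(); }

        // Objects already loaded from the database, keyed by id.
        private final Map<String, CachedObject> objects = new ConcurrentHashMap<>();

        // For each connected client, the version of each object it has already seen.
        private final Map<String, Map<String, Long>> clientVersions = new ConcurrentHashMap<>();

        private final Database db;

        public ObjectCache(Database db) {
            this.db = db;
        }

        // Read-through: hit the database only if the object isn't cached yet.
        public CachedObject get(String id) {
            return objects.computeIfAbsent(id, db::load);
        }

        // Called whenever an object changes (its version should have been bumped).
        public void update(CachedObject obj) {
            objects.put(obj.getId(), obj);
        }

        // Returns only the objects this client hasn't seen at their current version,
        // and records that it is now up to date - deltas, not snapshots.
        public Set<CachedObject> deltaFor(String clientId, Set<String> objectIds) {
            Map<String, Long> seen =
                    clientVersions.computeIfAbsent(clientId, c -> new ConcurrentHashMap<>());
            Set<CachedObject> delta = new HashSet<>();
            for (String id : objectIds) {
                CachedObject obj = get(id);
                Long lastSeen = seen.get(id);
                if (lastSeen == null || lastSeen < obj.getVersion()) {
                    delta.add(obj);
                    seen.put(id, obj.getVersion());
                }
            }
            return delta;
        }
    }

A production cache also has to handle eviction, persistence of writes, and the relationships between users, projects and idea pages described above; the sketch leaves all of that out.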

As a consequence of all this, Kerika needs servers with a large amount of RAM that is quickly accessible on behalf of all of its users. Storing that much data in RAM is where EC2 becomes the only way to solve the problem: because App Engine and Azure do not allow us to manage large amounts of RAM, they just weren’t good solutions for Kerika.

Another technical challenge is maintaining the long-lived browser connections that a Web application like Kerika depends upon. Google App Engine has the Channel API, but that doesn’t quite cut it for an application like Kerika.

Kerika needs to maintain several channels concurrently because users can be working with different combinations of objects at the same time.

Kerika also makes use of broadcasts as a simple way to keep multiple users up to date. Because of EC2’s open architecture, we can use CometD as an off-the-shelf solution for client communication.
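
For the curious, the broadcast pattern looks roughly like the sketch below: each shared object (a project, an idea page) gets its own channel, a change is published once to that channel, and every subscribed client receives it over its long-lived connection. This is deliberately schematic – the BroadcastHub and Connection types are hypothetical stand-ins, not CometD’s API – but it conveys the shape of what CometD gives us off the shelf.

    import java.util.Collections;
    import java.util.Map;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.CopyOnWriteArraySet;

    // Schematic broadcast hub: one channel per shared object (project, page, ...),
    // with many long-lived client connections subscribed to each channel.
    // Connection is a hypothetical stand-in for the underlying push transport.
    public class BroadcastHub {

        public interface Connection {
            void send(String channel, Object message);  // push one message to one browser
        }

        private final Map<String, Set<Connection>> channels = new ConcurrentHashMap<>();

        // A client subscribes to every object it currently has open,
        // so a single browser can be listening on several channels at once.
        public void subscribe(String channel, Connection client) {
            channels.computeIfAbsent(channel, c -> new CopyOnWriteArraySet<>()).add(client);
        }

        public void unsubscribe(String channel, Connection client) {
            Set<Connection> subscribers = channels.get(channel);
            if (subscribers != null) {
                subscribers.remove(client);
            }
        }

        // Publish once; every subscriber gets the update without polling.
        public void publish(String channel, Object message) {
            for (Connection client : channels.getOrDefault(channel, Collections.emptySet())) {
                client.send(channel, message);
            }
        }
    }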

Finally, Google App Engine’s business model, which calls for charging two hours for each token, doesn’t make economic sense in an environment where users are expected to navigate from page to page through the course of the day. EC2’s “pay-as-you-go” pricing lets us manage our traffic and keep our operating costs down.

A surprisingly inelastic Elastic Load Balancer

Amazon’s Web Services are generally good, but they are not without flaws by any means, and in the past week we ran headlong into one of their limitations.

Some background, first: about a decade or two ago, people were very careful about how they typed in web addresses when using their browsers, because browsers weren’t very good at handling anything less than a perfectly formed URL. So, if you wanted to visit our website an eon ago, you probably would have typed in “http://www.kerika.com”. Ah, those were the days… You would direct people to “hot top dub dub dub kerika dot com”.

As lifeforms evolved, the “hot top” (did anyone really pronounce it as “hiptytipty colon whack whack dub dub dub dot domain dot com”?) got dropped, and we had the alliterative rhythm of “dub dub dub kerika dot com”.

From a branding perspective, however, having all that extra junk in front of your name isn’t very attractive, and we would rather be known as “Kerika.com” than “www.kerika.com”. After all, even in the crazy late 1990s when every company was guaranteed an immediate stock boost of 20% just by appending “.com” to their name, no one rushed to prefix their company names with “www.”

At Kerika, we decided that our canonical name of kerika.com was the way to go, which is why typing in “www.kerika.com” will land you at “kerika.com”. Setting that up is easy enough using the CNAME fields in DNS. So far, so hiptytipty…
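
(For completeness: the CNAME gets both names resolving to the same servers; the actual hop from “www.kerika.com” to “kerika.com” in the visitor’s address bar is typically done with an HTTP redirect on the server side. The hypothetical servlet filter below is one common way to do that – offered purely as an illustration, not as a description of our own setup.)

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical filter that 301-redirects "www.kerika.com" requests to the
    // canonical "kerika.com", preserving the path and query string.
    public class CanonicalHostFilter implements Filter {

        private static final String WWW_HOST = "www.kerika.com";
        private static final String CANONICAL_HOST = "kerika.com";

        @Override
        public void init(FilterConfig config) { }

        @Override
        public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest req = (HttpServletRequest) request;
            HttpServletResponse resp = (HttpServletResponse) response;

            if (WWW_HOST.equalsIgnoreCase(req.getServerName())) {
                StringBuilder target = new StringBuilder("http://").append(CANONICAL_HOST)
                        .append(req.getRequestURI());
                if (req.getQueryString() != null) {
                    target.append('?').append(req.getQueryString());
                }
                resp.setStatus(HttpServletResponse.SC_MOVED_PERMANENTLY);  // 301
                resp.setHeader("Location", target.toString());
                return;
            }
            chain.doFilter(request, response);  // any other host passes straight through
        }

        @Override
        public void destroy() { }
    }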

The problem arose when we decided to use Amazon’s Elastic Load Balancer, so that we could have multiple servers (or, more precisely, multiple Amazon Machine Instances or AMIs) in the cloud, all hidden behind a single load balancer. It all sounded good in theory, but the problem came with routing all of our traffic to this ELB.

Getting www.kerika.com to point to the ELB was easy enough by changing the CNAME in DNS, but in trying to get kerika.com to also point to the ELB we fell into that gap that exists all too often between documentation and reality: an ELB is addressed by a DNS name rather than a fixed IP address, and the DNS standard doesn’t allow a CNAME record at the zone apex, so the bare “kerika.com” can’t simply be pointed at the ELB the way “www.kerika.com” can. But what really surprised us was finding out that the problem had been known to Amazon since May 2009, and it still hadn’t been fixed.

We can only guess why a known problem like this would remain unfixed for two years. The promised solution is Route 53, Amazon’s new DNS service for their cloud offerings. We heard an interesting presentation about Route 53 at Amazon’s premises a week ago; too bad we didn’t know then that our planned use of ELB would hit a brick wall, or we could have pestered the Amazonians about this problem. According to the Route 53 website:

In the future, we plan to add additional integration features such as the ability to automatically tie your Amazon Elastic Load Balancer instances to a DNS name, and the ability to route your customers to the closest EC2 region.

We are waiting…

In the Spring, a young man’s thoughts turn seriously to marriage…

In the spring, a young man’s thoughts turn lightly to love…

(after “Locksley Hall,” by Alfred, Lord Tennyson)

Well, one young man’s thoughts turned to marriage. A self-professed “English code monkey” (when was the last time you saw that particular conjunction of words?) has been using Kerika to help plan out his wedding.

He lives locally, as does his fiancée, but his parents live in the U.K. and his in-laws-to-be are in Texas – which makes wedding planning a very unexpected, and utterly delightful, use case for Kerika.

We have been fortunate to share his experiences first-hand, and since he is a very organized code monkey, he took a number of screenshots of his Kerika project, which he has written about on his own blog. He calls it his “big fat geek wedding”!

Choosing between Google’s App Engine and Amazon’s EC2

Back in the summer, when we were making our technology choices, we had to decide which company was going to provide our “cloud”. It really boiled down to a decision between Google’s App Engine and Amazon’s EC2. Microsoft’s Azure service was still relatively new, and in talking to one of their product evangelists we got the distinct impression that Microsoft was targeting Azure for their biggest customers. (That was nice of their product evangelist: not taking us down the garden path if there was a mismatch between our needs and Microsoft’s goals!)

Here’s why we chose EC2 over App Engine:

  • Amazon is accessible: both Amazon and Google have engineering centers in our neighborhood, and Amazon’s is, of course, the much larger presence, but the real issue was which company was more accessible? Amazon does a lot of outreach to the startup community in the Seattle area – Jeff Barr is everywhere! – whereas Google is a lot more aloof. It’s much easier to find an engineer or product manager at Amazon who will talk to you, and that really makes a difference for a startup. Real people matter.
  • There was negative buzz about App Engine, at least back in August. People in the startup community were talking about suffering outages, and not getting very good explanations about what had gone wrong. Google’s aloofness didn’t help: there wasn’t anyone reaching out to the community to explain what was happening and there was little acknowledgment of problems coming through on Google’s official channels.
  • To beta or not to beta? Google’s rather lavish use of the “beta” label (remember the many years that Gmail was in beta?), and their recent decision to pull the plug on Wave added to our nervousness. Were they serious about App Engine, or was this another science experiment?
  • An infinite loop of web pages: in the absence of any physical contact with Google, we tried to learn about their service through their web pages – well, duh, of course, dude! The problem was that much of the information is presented in a series of rather small pages, with links to more small pages. As an approach to information architecture – particularly one that’s biased towards being “discoverable” by a search engine – this makes a lot of sense, and the Kerika website takes a similar approach. The risk with this approach is that you can very easily find that you are following links that take you right back to where you started. (And our website runs the same risk; we look forward to your comments detailing our failings in this regard!)

While we were pondering this choice, Google came out with a rather interesting offer: unlimited free storage for a select handful of developers. (How select? Probably as select as one of Toyota’s Limited Edition cars, production of which is strictly limited to the number they can sell.) The offer was presented in the usual mysterious way: you went to an unadvertised website and made an application. Someone would let you know “soon” whether you would get past the velvet rope.

We thought we had a good chance of getting included in the program: after all, Kerika was designed to add value to Google Docs, and we were using a number of Google’s services, including OpenID for registration. And, indeed, we were selected!

The only problem was that the reply came about 4 weeks later: an eon in startup time. By that time, we had already decided to go with EC2…

(Is EC2 perfect? No, not by a long shot, and we were particularly annoyed to find a rather fatal flaw – from our perspective – in the way their Elastic Load Balancer can be configured with canonical web addresses: a problem that’s been noted for 2 years already. Not cool to sit on a problem for 2 years, guys. Not cool at all.)

The development environment that’s worked for us

In our last post, we referred to “dogfooding” Kerika by using it in our own testing process. Here’s some more information about how we set up our software development environment…

Our Requirements:

  • We needed to support a distributed team, of course. And our distributed team had a variety of preferences when it came to operating systems: there were folks who liked using Windows, folks who had only Macs, folks who used only Linux, and folks who liked using Linux as virtual machines running inside Windows… Since we were developing a Web application, it didn’t make sense – neither from a philosophical perspective nor from a business strategy perspective – to insist that everybody use the same operating system. If we were going to build for the browser, we should be able to use any operating system we preferred: our output was, after all, going to be Java, Javascript and SVG.
  • We needed to deploy our software “in the cloud”, and that meant making sure our software could run well in a Linux environment. Microsoft’s Azure service was still relatively new, and, in any case, our back-end developers were more comfortable with Java than ASP.Net, so that fixed Java on Linux as a back-end requirement.
  • Our software had to run in all browsers: that effectively meant “modern browsers”, that happy euphemism for browsers that would conform to HTML5 standards. When we started development, we were uncertain about how Internet Explorer 9, which was still in beta at the time, would evolve, but we have been very pleasantly surprised by the evolution of that product. (And a discussion in December with Ziad Ismail, IE9’s Director of Product Management, has since further reinforced our belief that Microsoft is very serious about standards compliance in IE9.)
  • We decided early on to build our front-end in Javascript and SVG, rather than Flash. We know of very few applications, even today, that attempt to deliver the kind of user interface combined with real-time networking that Kerika does using Javascript, and it’s not that we like crazy challenges because we are crazy people – we just didn’t want to get caught in the crossfire between Apple and Adobe over Flash. Having to choose between Caradhras and the Mines of Moria, like Gandalf we chose to brave the Mines…

Our choices:

  • Amazon’s EC2 and RDS for creating test and production environments; A2 hosting for our development environment. The choice was between EC2 and Google App Engine, and we will talk more in a separate blog post about why we chose Amazon over Google.
  • Eclipse for the developer’s IDE: it was cross-platform, and since everyone wanted to stick with their favorite operating system, it was Eclipse that straddled them all.
  • Jetty for the web server: the choice was between Apache and Jetty, and we went with Jetty. Why? Well, that’s a blog post in itself… (there is a small sketch of what embedded Jetty looks like just after this list).
  • Git for source code control: we looked at Subversion, but Git is better suited for distributed teams.
  • Maven for build management: once we decided on Eclipse, Git and Jetty, Maven kind of fell in place as a natural choice, since it worked well with all of the above.
  • Bugzilla for bug tracking: it’s a great tool, with a simple, flexible interface and it has matured very nicely over the past decade.
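
As promised above, here is a quick sketch of what an embedded Jetty server looks like: the servlet container is just a library that you start from your own main() method. This is a generic “hello” example, not Kerika’s actual configuration, and it certainly isn’t the whole story behind our choice – it is only meant to give a flavor of how lightweight Jetty can be.

    import java.io.IOException;

    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.servlet.ServletContextHandler;
    import org.eclipse.jetty.servlet.ServletHolder;

    // A minimal embedded Jetty server: the web server is started from main(),
    // like any other library. Generic "hello" sketch, not Kerika's setup.
    public class HelloServer {

        public static class HelloServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                resp.setContentType("text/plain");
                resp.getWriter().println("Hello from embedded Jetty");
            }
        }

        public static void main(String[] args) throws Exception {
            Server server = new Server(8080);  // listen on port 8080

            ServletContextHandler context =
                    new ServletContextHandler(ServletContextHandler.SESSIONS);
            context.setContextPath("/");
            context.addServlet(new ServletHolder(new HelloServlet()), "/*");

            server.setHandler(context);
            server.start();   // start accepting requests
            server.join();    // block until the server is stopped
        }
    }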

All of this has worked out quite well, and we would recommend this stack for other startups that are looking to develop Web applications.

Using Kerika to QA itself

Since we are developing software for helping distributed teams collaborate more effectively, it was only natural that we would set up our own team as a distributed one.

Working remotely and relying principally on electronic communications, supplemented by some in-person meetings, has helped us understand better the challenges faced by distributed teams. As we believed from the outset, the biggest challenge has been keeping everyone on the same page in terms of our company strategy: business strategy, marketing strategy, and product roadmap.

Throughout our development process one of us would bemoan the lack of software like Kerika that could help us build software like Kerika… Once the product became usable enough, we started “dogfooding” it – that elegant phrase invented by Microsoft that refers to a product team “eating its own dog food.”

One of the ways in which the Kerika team is using Kerika is for our QA: whenever we decide to work on a bug, we create a new project and name it after that bug number. Inside, we put our Build Verification Test (BVT) as well as the exit (success) criteria for that particular bug. It’s a neat trick: by going through the BVT, which we use as a sanity test before the developers hand off their code for QA, we end up creating a mini Kerika project for each bug.

For example, our BVT requires developers to upload a document to a Kerika page: well, for each bug, we upload a document that represents the exit criteria for that particular bug. The BVT requires users to go through various steps that provide a general coverage of Kerika’s main features. This means logging in using at least 3 different browsers (we usually test with Firefox, Safari and Chrome), and going through the process of adding Web links, videos, etc.

By using Kerika to test Kerika, at the end of each bug’s coding cycle we have a new project that we can look at and see whether it passed the BVT. It’s self-referential: the existence of a correctly set up project, with a particular team consisting of both Team Members and Visitors who perform certain actions, confirms that the BVT passed.

We combine the BVT with the exit criteria for each bug: these are derived from the reproduction steps of the bug report, plus the functional specifications. Going through the exit criteria for a particular bug, we end up with items in that bug’s project folder that confirm whether the bug was successfully fixed.

For example, if there was a bug about embedding YouTube videos on Kerika pages, the exit criteria would be such that at the end of the developer’s testing, the project would contain a YouTube video that could be examined to confirm that the bug was fixed.

So if the project for that bug is set up correctly at the end of the bug-specific repro steps and the BVT, then the developer knows he can check in his code for QA on our test environment. Of course, during our QA cycle we do more extensive testing, but this initial use of Kerika helps developers avoid breaking the build when they check in the code.

Pretty neat way of dogfooding, wouldn’t you agree?

Hello once again, world!

After a rather long hiatus (over three years!), Kerika is back, and with a bang!

We owe the world some explanation of why Kerika disappeared, and why it is now reappearing…

Here’s what happened: when we launched Kerika back in 2006, it was as a desktop application, written entirely in Java so that it would run on Windows, Macs and Linux computers. People really liked the concept, particularly the innovative user interface and the ease with which one could do document management. But, there were two serious drawbacks with that first version:

  • The biggest problem was our reliance upon JXTA, an open-source peer-to-peer communication technology that had been hatched at Sun Microsystems (remember them?) by none other than the legendary Bill Joy. On paper, JXTA looked perfect: its theoretical model and architecture exactly matched our needs. In practice, however, it proved to be a disastrous technology choice.
  • The other big problem we had was that Kerika 1.0 was a desktop application, which meant that it needed to get downloaded, and users needed to configure their firewalls to let the JXTA traffic go through. This proved to be a huge hurdle for many people who were interested in the product, but couldn’t get it past their IT gatekeepers.

Eventually, we ran out of time and money, which is really the same thing from a startup’s perspective. Of the two flaws listed above, the dependency on JXTA was really the killer: it meant that we couldn’t reliably provide communication or transfer of files between users. (And the topic of JXTA really merits its own blog post.)

And, so, we had to pack up our tents and go get “regular” jobs.

That’s the story of why Kerika disappeared.

What’s more interesting is the story of why Kerika is now reappearing:

A funny thing happened, in the 3 years that Kerika v1 was taken off the market: people kept writing in asking for the product. (We had never taken down the website, so the demos were still available; you just couldn’t download the product any more.)

This got us thinking that maybe Kerika was fundamentally a good idea, but we had screwed up the execution of that idea. And there was another thing that surprised us: in the intervening years, no one else released anything like Kerika – a flexible whiteboard on which you could sketch out your ideas and plans, and also embed your content.

Last Spring, we sent an email to our old user base, trying to understand better what it was they found attractive about Kerika and, in the process, trying to gauge the interest in reviving the product. The replies we received convinced us that (a) Kerika was, at heart, a good idea, and (b) the needs it served were still not being met by anything else in the market.

In August we reconstituted the Kerika team: a different team than before, with the skills we would need to rebuild Kerika, from scratch, as a Web application. We have been hard at work ever since, and have done a complete rebuild – not a single line of code was reused! – and now we are ready to present to you the fruits of our labors.

In the next few blog posts, we will talk about the new product, the challenges we faced, the choices we made, and the lessons we learned from Kerika v1 (or “K1” as we like to call it.)

Welcome back.