All posts by user

Our latest version, and then some!

In the immortal words of Jim Anchower: “Hola, amigos. I know it’s been a long time since I rapped at ya.”

Our apologies for not posting blog entries for a while, but we have the usual excuse for that, and this time it’s true: “We’ve been incredibly busy building great software!” It’s going to be hard to summarize all the work that we have done since June, but let’s give it a shot:

  1. We have curved lines now. And not just any old curved lines: this is some of the most flexible and easy-to-use curve drawing you are likely to encounter anywhere. You can take a line and bend it in as many ways as you like, and – this is the kicker – straighten it out as easily as you bent it in the first place. There’s a quick demo video on YouTube that you should check out.
  2. We have greatly improved the text blocks feature of Kerika. The toolbar looks better on all browsers now (Safari and Chrome used to make it look all scrunched up before), and we have added some cool features like using it to add an image to your Kerika page that’s a link to another website. (So you could, for example, add a logo for a company to your Kerika page and have that be a link to your company’s website.) Check out the nifty tutorial on YouTube on text blocks.
  3. You can set your styling preferences: colors, fonts, lines, etc. Previously, all the drawing you did on your Kerika pages used a single, fixed set of colors and fonts; now a new button lets you set your own styling preferences, and you can also adjust the appearance of individual items.
  4. We have improved the whole Invitations & Requests process. Now, when you invite people to join your projects, the emails that get sent out are much better looking and much more helpful, and the same goes for requests that come to you from people who want to join your projects, or change their roles in your projects. Check out this quick tutorial on how invitations and requests work.
  5. We have made it easier for you to personalize your Account. You can add a picture and your own company logo, which means that when you use Kerika your users see your logo, not ours! Check out this quick tutorial on how to personalize your Account.
  6. We have hugely increased the kinds of third-party content you can embed on your Kerika pages. The list is so long, we really should put that in a separate blog post. We have gone way beyond YouTube videos now; we are talking about all the major video sites (Vimeo, etc.), Hulu, Google Maps, Scribd and Slideshare… The mind boggles.
  7. Full screen view of projects. There’s a little button now, at the top-right corner of the Kerika canvas: click on it and you will go into full-screen mode, where the canvas takes up all the space and all the toolbars disappear. This makes it easy to surf pages that contain lots of content, or work more easily with your Google Docs.
  8. Full support for Internet Explorer 9 (IE9). Not as easy as you might think, given that Microsoft has historically gone their own way, but we have sweated the details and now Kerika works great with IE9. As Microsoft continues to converge around common standards, this should get easier for us over time.
  9. Full support for all desktop platforms. OK, so this isn’t really a new feature, but since we are bragging we might as well emphasize that Kerika works, and is tested to work, identically on Safari, Chrome and Firefox on Windows 7, Mac OS X and Linux.
  10. Literally hundreds of usability improvements. Yeah, okay, we should have gotten it all right in the first place, but our focus over the past few months has been very much on working directly with our early adopters, observing them use the product, and noting all the tiny friction points that we could improve upon. We are not saying that all the friction has been removed; we are just bragging about the hundreds of tweaks we have made in the past 3 months.

Since June, we have had two major releases: one at the end of July that had nearly 150 bug fixes and usability improvements, and one this week, with over 120 bug fixes and usability improvements.

The product is now in great shape from an infrastructure perspective: the core software has been well debugged and is now very robust. Performance is great, too: with a decent broadband wireless connection, you should see updates to your project pages in less than one second after a team member makes a change. (We test this with users in Seattle and India working simultaneously on the same project.)

Having this robust infrastructure that’s been well debugged and tuned makes it easy for us to add new features. In the coming weeks, look for more social media hooks, a revamped website, an extensive collection of public projects (that you can use as templates for your own work), and more. Much more. After all, if “less is more”, just think how much more “more” could be ;-)

Some thoughts on software patents

A very big day for us at Kerika: after more than six years of perseverance, we finally got our first patent issued today: United States Patent No. 7,958,080. (Yeah, that’s certainly a big number: a lot of people were granted patents before us!)

There is a lot of debate in our profession about software patents: strong opinions have been voiced in favor of granting software patents, and equally vociferous views have been aired in opposition to the entire concept. What’s discussed much less frequently is how long it can take to get a software patent, or how expensive and arduous the process can be.

Our original provisional application was made on October 29, 2004; this was followed up with a final application, and then dead silence for a couple of years. When the Patent Office finally examined the patent application, there was a multi-year exchange of correspondence with the Examiner, with the Examiner making various arguments in opposition to granting a patent, and our lawyers making counter-arguments in favor of Kerika being granted a patent.

All this back-and-forth is quite normal, by the way, and works as designed: it’s the Patent Examiner’s job to not grant patents without challenging their claims, and it’s the applicant’s responsibility to offer increasingly stronger arguments in favor of being granted a patent. What is finally granted is usually a subset of the claims that were made in the initial application. It’s very rare for all of the initial claims to be granted.

Software patents are harder to get, and take much longer to be granted, than hardware patents, probably because it is relatively easy to compare one tangible object with another and determine whether the two are materially different.

Software is harder because it is intangible: you have to describe an intangible item, using the jargon that’s peculiar to patent applications, in a way that clearly distinguishes it from another intangible item.

The net result is that a software patent can take many more years to be granted, in comparison to a hardware patent. One of the problems with the process taking so long is that, by the time the Patent Examiner gets around to examining your application, the technology may have become so prevalent in other, competing products that it no longer appears to be as innovative as it was when the application was first made.

The entire process is also very expensive, particularly if you are a solo inventor or small startup (as we are). The Patent Office has been hiking its fees over the years, with the stated intention of hiring more examiners and thereby speeding up the process, but clearly it is having a hard time keeping up with the flood of applications, because the delays seem to be getting worse, not better.

Regardless of whether you are pro or con software patents, we would argue that both sides of the argument would be served better if the process of examining, granting or rejecting a patent were shortened very considerably. If a patent could be examined and acted upon within 2-3 months, it would entirely change the debate as to whether software patents are good or bad for innovation.

Meanwhile, back at the ranch… we have other patent claims in the works, which we hope will take less time to work their way through the Patent Office now that our first patent has been granted!

Our latest, greatest version: Copy/Paste, Share/Join, and more

We have been hard at work over the past couple of months: taking in feedback from our early adopters and using this to fix bugs, improve usability, and provide some very useful new features.

Here’s a quick summary of some of the stuff we have built in the past month or so:

  1. Added Copy, Cut, Paste as New, and Paste as Links. The Paste as Links feature is particularly powerful.
  2. Added a Share! button and a Join! button that make it easy for people to create public projects and get others to join their particular advocacy cause.
  3. Brought back double-clicking (after having taken it out in a previous version…)
  4. Made the text blocks more flexible, with the ability to embed URLs and autosizing.
  5. Improved our Google Docs Finder.
  6. Worked around numerous limitations of Google Docs.

We will describe each of these in more detail in subsequent blog posts, and we obviously have a lot of work on our hands to update our website to reflect all this new functionality! Oh, and about 200 bugs were found and obliterated in the past 2 months…

(Lots of people have provided useful feedback, but special thanks go to Michael Parker from Marketingeek and Alexander Caskey from High Bridge Communications.)

 

Running Windows on my iPhone…

Some years ago, a friend presented me with an old iMac that he didn’t need any more – one of those blob-like machines with a clear cover that let you see its innards. It was a heavy, inconveniently shaped machine that nevertheless started a slow migration toward what is today an all-Mac household: Mac laptops, iMac desktop, Mac wireless router, iPod, iPhone (and, soon, iPad).

Macs were always attractive for a simple reason: you didn’t have to worry about tuning the operating system. When I had PCs at home, I found myself having to do a clean re-installation of the operating system every 9 months or so, just to clean out the grit and gum that clogged up the Windows registry and system directories.

The way Windows programs were installed was particularly problematic: each application would leave bits and pieces of code and configuration files lying around all over the place, where they would inevitably trip up some other program. If you liked trying out different software, as I did, this meant that every 9-12 months you had to rebuild your PC from scratch: repartition the hard drive, reinstall the operating system (and several years’ worth of updates), and then all your applications, and then all your files. It typically took a full day, but it was worth it because your machine was all zippy and fast once again – at least for a few months…

The main selling point of Macs was that all this fiddling with your machine was unnecessary: your computer would “just work”, and you wouldn’t have to worry about rebuilding the machine periodically to get it in fighting trim. And, with the new iCloud strategy, Apple is trying to take that brand promise even further, by demoting your expensive gadgets to “just another device”.

Is this all going to work like magic, as Jobs promises? I doubt it, based upon my recent experience with my iPhone… For the past few months I had noticed that many of my basic iPhone applications would just quietly crash when I first tried to launch them: launching the music player, for example, would frequently take two attempts. The maps application would take up to a minute to launch, and then it would operate so slowly that there would be a 2 second delay in echo-back of character input.

I assumed this was due to the fact that the iOS operating system was getting fatter with every new update, while my hardware remained svelte. After all, the upgrade to iOS 4.0 had been particularly painful for my iPhone 3GS: it taught me a painful lesson about not automatically taking the latest software updates, even from Apple!

(And it wasn’t just me: lots of iPhone 3GS users complained bitterly after upgrading to iOS 4 – it was like trying to run Vista on hardware designed for Windows 95.)

And that’s the long-winded segue into why it feels like I am actually running Windows on my iPhone: a couple of weeks ago my battery started to run out very quickly (it wouldn’t last even a few hours of light use), so I took my iPhone to the local Apple Genius Bar, where they quickly determined that my iPhone’s OS was in terrible shape. Their recommendation was to do a complete rebuild of the iPhone, from scratch, which meant setting it up as a brand new phone and adding back in all my email accounts, music, etc. by hand.

This rebuild worked: now my iPhone is as fast and responsive as it used to be, but the experience of completely rebuilding an iPhone (which, incidentally, is not the same as doing a “restore from backup”), was all too reminiscent of having to rebuild my Windows machines periodically to clean out the gunk in the registry and system directories.

It is to the credit of the Apple “geniuses” that they quickly determined that my phone’s OS was in trouble, but I also suspect it is because this is a “known but not publicized” problem with iOS 4. Over time, iOS 4 gets in trouble just like Windows XP and Vista (and perhaps Windows 7 as well, although I use that too infrequently to say for sure).

Think different, work same.

Out of the billowing smoke and dust of tweets and trivia emerges our first patent

“The literati sent out their minions to do their bidding. Washington cannot tolerate threats from outsiders who might disrupt their comfortable world. The firefight started when the cowardly sensed weakness. They fired timidly at first, then the sheep not wanting to be dropped from the establishment’s cocktail party invite list unloaded their entire clip, firing without taking aim their distortions and falsehoods. Now they are left exposed by their bylines and handles. But surely they had killed him off. This is the way it always worked. A lesser person could not have survived the first few minutes of the onslaught. But out of the billowing smoke and dust of tweets and trivia emerged U.S. Patent No. 7,958,080 to be issued on June 7, 2011. Once again ready to lead those who won’t be intimidated by the political elite and are ready to take on the challenges America faces.”

Out of the billowing smoke and dust of tweets emerges our first patent

Our first patent! We are hoping the coming apocalypse doesn’t cause any more delays at the Patent Office; we have been waiting a long time for this patent to get issued, and there are more in the pipeline!

Nice discussion here.

What’s your best guess about the following? Was it written by a computer, written by a human in a different language and then poorly translated into English, or written by a steering committee with representatives from all the relevant business units and impacted stakeholders?

Nice discussion here. I want to join with you by giving my comments in your web page. First of all, your page and design of your website is so elegant, it is attractive. Also, in explaining your opinion and other news, frankly you are the best one. I’ve never met other person with special skills in my working environment. By using this comment page, you are great to give me opportunities for announcing some events. Secondly, I’ve checked your website in search engine result page; some of your website pages have ranked very well. Additionally, your topic is relevant with my desired one. Lastly, if you don’t mind, could you join with us as a team, in order to develop website, not only general web page, but also many specific websites we must create for our missions. From time to time, we, I and my team have great position for you to develop that.

Yup, we really are the best one. We hope this topic is also relevant with your desired one.

Safe house, honey pot, or just dazed and confused at the Wall Street Journal?

The Wall Street Journal is creating its own version of Wikileaks, called SafeHouse, where you are invited to

Help The Wall Street Journal uncover fraud, abuse and other wrongdoing.

What would they like from you?

Documents and databases: They’re key to modern journalism. But they’re almost always hidden behind locked doors, especially when they detail wrongdoing such as fraud, abuse, pollution, insider trading, and other harms. That’s why we need your help.

If you have newsworthy contracts, correspondence, emails, financial records or databases from companies, government agencies or non-profits, you can send them to us using the SafeHouse service.

“SafeHouse”, however, sounds more like a “honey pot” when you read the WSJ’s Terms and Conditions, which offers three ways of communicating with SafeHouse:

1. Standard SafeHouse: […] Dow Jones does not make any representations regarding confidentiality.

2. Anonymous SafeHouse: […] Despite efforts to minimize the information collected, we cannot ensure complete anonymity.

3. Request Confidentiality: […] If we enter into a confidential relationship, Dow Jones will take all available measures to protect your identity while remaining in compliance with all applicable laws.

OK, so as they say, “You use this service at your own risk.” But, here’s where things start to get puzzling:

You agree not to use SafeHouse for any unlawful purpose.

Huh?

If you upload or submit any Content, you represent to Dow Jones that you have all the necessary legal rights to upload or submit such Content and it will not violate any law or the rights of any person.

How can anyone provide confidential documents that belong to another organization without violating the rights of that organization?

Even if you put aside for a moment the absurd notion of a “SafeHouse” where you can post materials that belong to others, without violating the rights of these others, or any laws for that matter, there is a more puzzling question of why the Wall Street Journal has decided to create its own version of Wikileaks in the first place.

An editorial from June 29, 2010, entitled “Wikileaks ‘Bastards’”, effectively summarizes their views on anonymous leaking. The WSJ has been consistent in its stance: there are other editorials from April 25, Jan 21, Dec 31, etc., and even today’s editorial page notes that

…we have laws on the books that prohibit the unauthorized disclosure of government secrets. Those laws would fall hard on any official who violated his oath to protect classified information.

So why would they try to create their own version of a communication forum that they clearly despise? An editorial from Dec 2, 2010 provides a clue:

We can’t put the Internet genie back in the bottle.

Well, one place the Internet genie hasn’t yet visited is the Wall Street Journal’s main website: a search for “SafeHouse” there yields no results. Nothing to see here, folks, move along now…

The nature of “things”: Mukund Narasimhan’s talk at Facebook

Mukund Narasimhan, an alumnus of Microsoft and Amazon who is currently a software engineer with Facebook in Seattle, gave a very interesting presentation at Facebook’s Seattle offices this week. It was a right-sized crowd in a right-sized office: just enough people, and the right sort of people, to afford interesting conversations, plus a very generous serving of snacks and beverages from Facebook in an open-plan office that had the look-and-feel of a well-funded startup.

Mukund isn’t posting his slides, and we didn’t take notes, so this isn’t an exhaustive report of the evening, but rather an overview of some aspects of the discussion.

Mukund’s talk was principally about data mining and the ways that Facebook is able to collate and curate vast amounts of data to create metadata about people, places, events and organizations. Facebook uses a variety of signals, the most important of which is user input, to deduce the “nature of things”: i.e. is this thing (entity) a place? Is this place a restaurant? Is this restaurant a vegetarian restaurant?

Some very powerful data mining techniques have been developed already, and this was illustrated most compellingly by a slide showing a satellite image of a stadium: quite close to the stadium, on adjacent blocks, were two place markers. These marked the “official location” of the stadium, as reported to Facebook by two separate data providers. The stadium itself was pin-pricked with a large number of dots, each marking a spot where a Facebook user had “checked in” to Facebook’s Places.

The visual contrast was dramatic: the official data providers had each estimated the position of the stadium to within a hundred yards, but the checkins of the Facebook users had perfectly marked the actual contours of the stadium.
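To make the contrast concrete, here is a minimal sketch – our own illustration, not Facebook’s actual pipeline – of why crowd-sourced check-ins can pin down a venue better than a pair of official markers: averaging many noisy check-in coordinates converges on the real footprint, while the two provider pins stay a block or two off. All coordinates and names below are made up for illustration.

```typescript
// Toy illustration: estimate a venue's location from user check-ins and
// compare it with a couple of "official" provider pins.
interface Point {
  lat: number;
  lon: number;
}

// Average a cluster of check-in coordinates into a single estimate.
// (Plain averaging is fine at stadium scale; no great-circle math needed.)
function centroid(points: Point[]): Point {
  const sum = points.reduce(
    (acc, p) => ({ lat: acc.lat + p.lat, lon: acc.lon + p.lon }),
    { lat: 0, lon: 0 }
  );
  return { lat: sum.lat / points.length, lon: sum.lon / points.length };
}

// Hypothetical data: two provider pins a block or two away, and a handful of
// check-ins scattered inside the stadium footprint (hundreds, in practice).
const providerPins: Point[] = [
  { lat: 47.5890, lon: -122.3340 },
  { lat: 47.5922, lon: -122.3300 },
];
const checkins: Point[] = [
  { lat: 47.5914, lon: -122.3325 },
  { lat: 47.5916, lon: -122.3320 },
  { lat: 47.5911, lon: -122.3318 },
  { lat: 47.5913, lon: -122.3323 },
];

console.log("crowd-sourced estimate:", centroid(checkins));
console.log("official provider pins:", providerPins);
```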

Mukund’s talk was repeatedly interrupted by questions from the audience, since each slide offered a gateway to an entire topic, and eventually he ran out of time and we missed some of the material.

During the Q&A, Nikhil George from Mobisante asked a question that seemed highly relevant: was Facebook trying to create a “semantic Web”? Mukund sidestepped the question adroitly by pointing out that there is no established definition of the term semantic Web, and that is certainly true – the Wikipedia article linked above is tagged as having “multiple issues” – but while Facebook may not be using the specific protocols and data formats that some might argue are indispensable to a semantic Web, one could certainly make the case that deducing the nature of things, particularly the nature of things that exist on the Internet, is the main point of creating a semantic Web.

While much of the Q&A was about technical matters, a more fundamental question occupied our own minds: at the outset, Mukund asserted that the more people interact with (and, in particular, contribute to) Facebook, the better the Facebook experience is for them and all of their friends.

This, clearly, is the underpinning of Facebook’s business model: people must continue to believe, on a rational basis, that contributing data to Facebook – so that Facebook can continue to deduce the nature of things and offer these things back to their users to consume – offers some direct, reasonably tangible rewards for them and their friends.

Presumably, then, Facebook must be taking very concrete steps to measure this sense of reward that their users experience; we didn’t hear much about that last night since that wasn’t Mukund’s area of focus, but it must surely be well understood within Facebook that the promise of continued reward for continued user interaction – which is essentially their brand promise – must be kept at all times. Has a lot of research been done in this area? (There is, of course, the outstanding research done by danah boyd on social networks in general.)

At a more practical level, a question that bedevils us is how we can improve the signal:noise ratio in our Facebook Wall! In our physical worlds, we all have some friends that are exceptionally chatty: some of them are also very witty, which makes their chatter enjoyable, but some are just chatty in a very mundane way. In our Facebook worlds, it is very easy for the chatty people (are they also exceptionally idle or under-employed?) to dominate the conversation.

In a physical world, if we find ourselves cornered by a boor at a party we would quickly, determinedly sidle away and find someone more interesting to talk to, but how does one do that in Facebook? One option, offered by Mukund, would be to turn off their posts, which seems rather like “unfriending” them altogether. But we don’t want to unfriend these people altogether, we just don’t want to hear every detail of every day.

Mukund suggested that by selectively hiding individual posts, as well as “liking” others more aggressively, we could send clearer indications of our preferences to Facebook and help the system improve the signal:noise ratio, and that’s what we have been trying over the past few days.

It is an intriguing topic to consider, and undoubtedly a difficult problem to solve, because you need to weed out individual messages rather than block entire users. For example, one Facebook friend used an intermission of the movie “Thor” to report that he was enjoying the movie. It’s great that he is enjoying the movie, but this low-value update spawned a low-value thread all of its own. We don’t want this person blocked altogether; we need some way of telling Facebook that updates sent during movie intermissions are not very valuable. If Facebook misinterprets our signals and relegates him to the dustbin, we might miss a more useful notification in the future, such as a major life event or career move.

The problem seems analogous to developing a version of Google’s PageRank, but at the message level. In the example above, if a post sent during a movie intermission is of low value, it would affect the ranking of everyone who piled on to create its low-value thread.
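As a thought experiment only – we have no idea what Facebook actually does – here is a minimal sketch of what “PageRank at the message level” might look like: each post gets a score from explicit signals (likes and hides), and authors who repeatedly pile onto low-scoring threads see their subsequent posts weighted down a little. All field names and weightings below are hypothetical.

```typescript
// Toy, message-level scoring pass (not Facebook's algorithm): score each post
// from explicit signals, and gently down-weight authors who keep piling onto
// low-value threads. Assumes posts arrive in chronological order, so a reply's
// parent has already been scored.
interface Post {
  id: string;
  author: string;
  likes: number;
  hides: number;
  parentId?: string; // set when the post is a reply within a thread
}

function rankFeed(posts: Post[]): Map<string, number> {
  const authorWeight = new Map<string, number>(); // every author starts at 1.0
  const scores = new Map<string, number>();

  for (const post of posts) {
    const weight = authorWeight.get(post.author) ?? 1.0;
    // A hide is a stronger negative signal than a like is a positive one.
    const score = weight * (post.likes - 2 * post.hides);
    scores.set(post.id, score);

    // If this post piled onto a thread whose root scored poorly, nudge the
    // author's weight down so their future posts rank a little lower.
    if (post.parentId !== undefined && (scores.get(post.parentId) ?? 0) < 0) {
      authorWeight.set(post.author, weight * 0.9);
    }
  }
  return scores;
}
```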

People like us who are more technical would probably prefer a more direct way of manipulating (i.e. correcting) the signal:noise ratio. Presumably someone at Facebook is working, even as we write this blog, on some visual tools that provide sliders or other ways for people to rank their friends and notifications. One idea that comes to mind might be a sort of interactive tag cloud that shows you who posts the most, but which also lets you promote or demote individual friends.

Some email clients and collaboration tools assume that people who email you the most are the ones that matter the most, but with a social network, wouldn’t this bias have the opposite effect? Wouldn’t the most chatty people be the ones who have the least to say that’s worth hearing?

One piece of good news from Mukund is that Facebook is working on a translator: one of our closest friends is a Swede, and his posts are all in Swedish which makes them incomprehensible. Having a built-in translator will certainly make Facebook more useful for people with international networks, although it will be very interesting indeed to see how Facebook’s translation deals with the idiosyncrasies of slang and idiom.

Update: we got it wrong. Facebook isn’t working on a translator; Mukund was referring to a third-party application.

One topic that particularly intrigued us, but which we couldn’t raise with Mukund for lack of time, was Paul Adams’s monumental SlideShare presentation on the importance of groups within social networks. Paul argues that people tend to have between 4 and 6 groups, each of which tends to have 2-10 people. (This is based upon the research behind Dunbar’s Number, which posits that there is a limit to the number of friends with whom one can form lasting relationships, and that this number is around 150.)

Facebook still doesn’t have groups, which is surprising since Paul Adams decamped from Google to Facebook soon after making his presentation available online. It is a massive presentation, but fascinating material and surprisingly light reading: just fast-forward to slide 64 and the picture there sums up the entire presentation rather well.

Update: we got that wrong, too. Facebook does have a groups product, and it is becoming increasingly popular within the company itself.

All in all, one of the most enjoyable presentations we have attended in recent days. Mukund needs special commendation for his fortitude and confident good humor in standing up before a savvy crowd and braving any and all questions about Facebook’s past, present and future.

Seattle’s “evening tech scene” is really getting interesting these days: perhaps we are seeing the working of a virtuous cycle where meetups and other events start to “up their game”!

A single-click if you are under 35, a double-click if you are over 35

When we first built Kerika, we deliberately modeled the user interface using the desktop application metaphor: projects were organized in folders, and mouse actions were as follows:

  • Select an item on the page with a single mouse click.
  • Open (or launch) an item on the page with a double mouse click.

It seemed the most natural thing in the world to us: everyone on the Kerika team liked it, and we assumed that our users would like it just as much.

So it came as a very considerable surprise to us when we observed a generation gap among our beta users, in terms of what they considered to be the most natural way to open and launch items.

The breakpoint is roughly at the 35-year mark: people older than 35 had a strong preference for the double-click metaphor, while people under 35 had an equally strong preference for the single-click metaphor, where you select an item with one gesture and then choose the action you wish to take from a menu that pops up.

The preference grew almost exponentially in intensity as you moved away from the 35-year breakpoint: people in their 50s, for example, had a very strong preference for double-clicking, while people in their early 20s were, for the most part, surprised by the notion that a double-click might do anything at all.

Our theory for this phenomenon is simple: roughly 15 years ago, the Netscape browser came into wide use. People who are older than 35 started their careers before the Web, and first learned to use desktop applications before learning to browse the Web. People under 35, on the other hand, probably first learned to use computers by surfing the Web at college.

(You might guess from all this that the Kerika team’s average age is well over 35, because it never occurred to us that the double-click metaphor might be strange or unappealing to anyone.)

At any rate, Kerika now supports the single-click metaphor exclusively – at this time. The initial feedback we got from our beta users was skewed by the younger demographic, and this caused us to reluctantly abandon the double-click in favor of the single-click. However, we are now hearing more from older users, and a future version – very soon! – will support both the single-click and double-click metaphors.
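For what it’s worth, supporting both metaphors on the same canvas is mostly a matter of disambiguating the two gestures: in browsers a click event fires before a dblclick, so a common trick is to delay the single-click action briefly and cancel it if a second click arrives. Here is a minimal sketch of that idea; the function names and timing are illustrative, not Kerika’s actual code.

```typescript
// Minimal sketch of supporting both gestures on one element: delay the
// single-click action briefly, and cancel it if a second click turns the
// gesture into a double-click. Timings and handler names are illustrative.
const DOUBLE_CLICK_WINDOW_MS = 250;

function attachClickHandlers(
  el: HTMLElement,
  onSelect: () => void, // single-click: select the item and pop up its menu
  onOpen: () => void    // double-click: open or launch the item
): void {
  let pending: number | undefined;

  el.addEventListener("click", () => {
    if (pending !== undefined) window.clearTimeout(pending);
    pending = window.setTimeout(() => {
      pending = undefined;
      onSelect();
    }, DOUBLE_CLICK_WINDOW_MS);
  });

  el.addEventListener("dblclick", () => {
    if (pending !== undefined) {
      window.clearTimeout(pending);
      pending = undefined;
    }
    onOpen();
  });
}
```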

And while the Kerika application doesn’t run completely smoothly on iPads – for one thing, the Google Docs application doesn’t really run on iPads – supporting the single-click metaphor positions us to ensure that Kerika will run, intact, on tablet computers in the near future.

Use AT&T: it will boost your self-esteem

When you are in a startup, you always have a sense of frustration that you haven’t been able to deliver all of your wonderful vision. Of course, it could happen in a big company as well, if you are really passionate about your job, but in a startup it happens all the time because entrepreneurs are wildly, dangerously passionate about what they do!

All the compromises that you had to make because of time and money constraints, the features you were forced to postpone, and that nagging sense of embarrassment that your 1.0 wasn’t all it could have been: “If only I had had more time and money, I could have built all these other cool features for our first release,” you say to yourself.

If you ever feel a little depressed about the state of your version 1.0, there are two ways to boost your self-esteem and feel re-energized:

  • Remember the experienced entrepreneur’s mantra: “if you aren’t embarrassed by your first version, you have waited too long to ship”. We are not sure who coined this phrase – it’s been attributed to a number of people over the years – and there’s a great post called “1.0 is the loneliest number” that makes the case for shipping early, warts and all, and then iterating fast in response to customer feedback.
  • The other way to quickly start feeling better about your product is to visit the AT&T website to do something relatively straightforward, like adding money to a pre-paid phone. The user experience that AT&T provides is guaranteed to make you feel better about your own product.