Category Archives: Technology

Posts related to technology in general.

Like a project? Copy it with a single click

We are encouraging our users to create public projects: projects that are open to the public to visit, and which can serve as templates for other users who need to perform similar functions. For example, a template on how to use Michael Porter’s Functions of a Competitor Intelligence System, adapted from his classic book on Competitive Strategy, might be useful for a number of people who are either studying the topic as part of an MBA program, or are simply applying these concepts as part of their jobs in marketing, product management or strategy.

Whenever you visit a public project, you now have an easy way to make a personal copy of that project using a new “Copy Project” button that we have introduced. Clicking on this button makes a complete copy of the public project for your personal use: it gets added to your own Account, and you can start working on it for your own purposes right away. This is what it looks like:

The Copy Project button lets you make a personal copy of a public project

If you like the idea of contributing your ideas to the Kerika community, please consider making some of your projects accessible by the public. If you have a methodology, process or set of ideas or content that you want to share with the world, it’s easy to make your project accessible by the public: just click on the Team button and check the “Make project public” box, like this:

Make your project open to the public

Remember, when you make your project open to the public, it doesn’t mean that anyone can just wander by and vandalize it. It just means that people can view your work, and if they like what you are doing, they could ask to join your project (which is great if you want to attract people to an open source or advocacy project), or they could use the “Copy Project” button to make a personal copy of the project that you have so kindly provided!

Kerika adds Social Media Links

Finally… yes, we know we should have done this a while ago. We were busy making the core Kerika software more robust and polishing away some of the usability friction that users had reported, but now we have done it: we have added social media links to Kerika!

The old “Share!” button at the top of the Kerika UI is still there:

The Share! button is on the toolbar, just above the canvas

Clicking on this button brings up a whole bunch of new possibilities:

The Share! options for a project that's open to the public

The first option, from the left, is the “People” action: you can use the Share! button to quickly add people to your project team. (This is an alternative to using the Team button to manage your project teams.) Here’s how you can add people using the Share! button:

Using the Share! button to add someone to the project team

You can also use the Share! dialog to:

  • Post to your Facebook wall
  • Tweet about your project. (Your tweet will include a link to the project page.)
  • Post it as an update to your LinkedIn profile page. (That will include a link to the project page as well.)
  • Share it with your Google circles by doing a Google +1 on the page.

And, you can simply grab the URL of the page if you want to share it with someone:

Grab the URL for your project page

And, finally, you can email a friend or coworker about the project, and include a link to the page in your email message.

There’s more to come: in the near future we will be making it possible for you to embed a picture of your project page in your own website or blog!

Our latest version, and then some!

In the immortal words of Jim Anchower: “Hola, amigos. I know it’s been a long time since I rapped at ya.”

Our apologies for not posting blog entries for a while, but we have the usual excuse for that, and this time it’s true: “We’ve been incredibly busy building great software!” It’s going to be hard to summarize all the work that we have done since June, but let’s give it a shot:

  1. We have curved lines now. And not just any old curved lines, but the most flexible and easy to use drawing program that you are likely to encounter anywhere. You can take a line and bend it in as many ways as you like, and – this is the kicker – straighten it out as easily as you bent it in the first place. There’s a quick demo video on YouTube that you should check out.
  2. We have greatly improved the text blocks feature of Kerika. The toolbar looks better on all browsers now (Safari and Chrome used to make it look all scrunched up before), and we have added some cool features like using it to add an image to your Kerika page that’s a link to another website. (So you could, for example, add a logo for a company to your Kerika page and have that be a link to your company’s website.) Check out the nifty tutorial on YouTube on text blocks.
  3. You can set your styling preferences: colors, fonts, lines, etc. Previously, all the drawing you did on your Kerika pages was with just one set of colors, fonts, etc., but now you can set your own styling preferences, with a new button, and also adjust the appearance of individual items.
  4. We have improved the whole Invitations & Requests process. Now, when you invite people to join your projects, the emails that get sent out are much better looking and much more helpful, and the same goes for requests that come to you from people who want to join your projects, or change their roles in your projects. Check out this quick tutorial on how invitations and requests work.
  5. We have made it easier for you to personalize your Account. You can add a picture and your own company logo, which means that when you use Kerika your users see your logo, not ours! Check out this quick tutorial on how to personalize your Account.
  6. We have hugely increased the kinds of third-party content you can embed on your Kerika pages. The list is so long, we really should put that in a separate blog post. We have gone way beyond YouTube videos now; we are talking about all the major video sites (Vimeo, etc.), Hulu, Google Maps, Scribd and Slideshare… The mind boggles.
  7. Full screen view of projects. There’s a little button now, at the top-right corner of the Kerika canvas: click on it and you will go into full-screen mode, where the canvas takes up all the space and all the toolbars disappear. This makes it easy to surf pages that contain lots of content, or work more easily with your Google Docs.
  8. Full support for Internet Explorer 9 (IE9). Not as easy as you might think, given that Microsoft has historically gone their own way, but we have sweated the details and now Kerika works great with IE9. As Microsoft continues to converge around common standards, this should get easier for us over time.
  9. Full support for all desktop platforms. OK, so this isn’t really a new feature, but since we are bragging we might as well emphasize that Kerika works, and is tested to work, identically on Safari, Chrome and Firefox on Windows 7, Mac OS X and Linux.
  10. Literally hundreds of usability improvements. Yeah, okay, we should have gotten it all right in the first place, but our focus over the past few months has been very much on working directly with our early adopters, observing them use the product, and noting all the tiny friction points that we could improve upon. We are not saying that we have all the friction removed, we are just bragging about the hundreds of tweaks we have made in the past 3 months.

Since June, we have had two major releases: one at the end of July that had nearly 150 bug fixes and usability improvements, and one this week, with over 120 bug fixes and usability improvements.

The product is now in great shape from an infrastructure perspective: the core software has been well debugged and is now very robust. Performance is great, too: working over decent broadband wireless, you should see sub-second responsiveness, with updates to your project pages appearing less than one second after a team member makes a change. (We test this with users in Seattle and India working simultaneously on the same project.)

Having this robust infrastructure that’s been well debugged and tuned makes it easy for us to add new features. In the coming weeks, look for more social media hooks, a revamped website, an extensive collection of public projects (that you can use as templates for your own work), and more. Much more. After all, if “less is more”, just think how much more “more” could be 😉

Some thoughts on software patents

A very big day for us at Kerika: after more than six years of perseverance, we finally got our first patent issued today: United States Patent No. 7,958,080. (Yeah, that’s certainly a big number: a lot of people were granted patents before us!)

There is a lot of debate in our profession about software patents: strong opinions have been voiced in favor of granting software patents, and equally vociferous views have been aired in opposition to the entire concept. What’s discussed much less frequently is how long it can take to get a software patent, or how expensive and arduous the process can be.

Our original provisional application was made on October 29, 2004; this was followed up with a final application, and then dead silence for a couple of years. When the Patent Office finally examined the patent application, there was a multi-year exchange of correspondence with the Examiner, with the Examiner making various arguments in opposition to granting a patent, and our lawyers making counter-arguments in favor of Kerika being granted a patent.

All this back-and-forth is quite normal, by the way, and works as designed: it’s the Patent Examiner’s job to not grant patents without challenging their claims, and it’s the applicant’s responsibility to offer increasingly stronger arguments in favor of being granted a patent. What is finally granted is usually a subset of the claims that were made in the initial application. It’s very rare for all of the initial claims to be granted.

Software patents are harder to get, and take much longer to be granted, than hardware patents, probably because it is easy to compare two tangible objects and determine whether they are materially different.

Software is harder because it is intangible: you have to describe an intangible item, using the jargon that’s peculiar to patent applications, in a way that clearly distinguishes it from another intangible item.

The net result is that a software patent can take many more years to be granted, in comparison to a hardware patent. One of the problems with the process taking so long is that, by the time the Patent Examiner gets around to examining your application, the technology may have become so prevalent in other, competing products that it no longer appears to be as innovative as it was when the application was first made.

The entire process is also very expensive, particularly if you are a solo inventor or small startup (as we are): the Patent Office has been hiking its fees over the years, with the stated intention of hiring more examiners and thereby speeding up the process. But they are clearly having a hard time keeping up with the flood of applications, because it seems like the delays are getting worse, not better.

Regardless of whether you are pro or con software patents, we would argue that both sides of the argument would be served better if the process of examining, granting or rejecting a patent were shortened very considerably. If a patent could be examined and acted upon within 2-3 months, it would entirely change the debate as to whether software patents are good or bad for innovation.

Meanwhile, back at the ranch… we have other patent claims in the works, which we hope will take less time to work their way through the Patent Office now that our first patent has been granted!

Our latest, greatest version: Copy/Paste, Share/Join, and more

We have been hard at work over the past couple of months: taking in feedback from our early adopters and using this to fix bugs, improve usability, and provide some very useful new features.

Here’s a quick summary of some of the stuff we have built in the past month or so:

  1. Added Copy, Cut, Paste as New, and Paste as Links. The Paste as Links feature is particularly powerful.
  2. Added a Share! button and a Join! button that makes it easy for people to create public projects and get people to join in their particular advocacy cause.
  3. Brought back double-clicking (after having taken it out in a previous version…)
  4. Made the text blocks more flexible, with the ability to embed URLs and autosizing.
  5. Improved our Google Docs Finder.
  6. Worked around numerous limitations of Google Docs.

We will describe each of these in more detail in subsequent blog posts, and we obviously have a lot of work on our hands to update our website to reflect all this new functionality! Oh, and about 200 bugs were found and obliterated in the past 2 months…

(Lots of people have provided useful feedback, but special thanks go to Michael Parker from Marketingeek and Alexander Caskey from High Bridge Communications.)

 

Running Windows on my iPhone…

Some years ago, a friend presented me with an old iMac that he didn’t need any more – one of those blob-like machines with a clear cover that let you see its innards. It was a heavy, inconveniently shaped machine that nevertheless started a slow migration to the all-Mac household we have today: Mac laptops, iMac desktop, Mac wireless router, iPod, iPhone (and, soon, iPad).

Macs were always attractive for a simple reason: you didn’t have to worry about tuning the operating system. When I had PCs at home, I found myself having to do a clean re-installation of the operating system every 9 months or so, just to clean out the grit and gum that clogged up the Windows registry and system directories.

The way Windows programs were installed was particularly problematic: each application would leave bits and pieces of code and configuration files lying around all over the place, where it would inevitably trip up some other program. If you liked trying out different software, as I did, this meant that every 9-12 months you had to rebuild your PC from scratch: repartition the hard drive, reinstall the operating system (and several years worth of updates), and then all your applications, and then all your files. It typically took a full day, but it was worth it because your machine was all zippy and fast once again – at least for a few months…

The main selling point of Macs was that all this fiddling with your machine was unnecessary: your computer would “just work”, and you wouldn’t have to worry about rebuilding the machine periodically to get it in fighting trim. And, with the new iCloud strategy, Apple is trying to take that brand promise even further, by demoting your expensive gadgets to “just another device”.

Is this all going to work like magic, as Jobs promises? I doubt it, based upon my recent experience with my iPhone… For the past few months I had noticed that many of my basic iPhone applications would just quietly crash when I first tried to launch them: launching the music player, for example, would frequently take two attempts. The maps application would take up to a minute to launch, and then it would operate so slowly that there would be a 2 second delay in echo-back of character input.

I assumed this was due to the fact that the iOS operating system was getting fatter with every new update, while my hardware remained svelte. After all, the upgrade to iOS 4.0 had been particularly painful for my iPhone 3GS: it taught me a painful lesson about not automatically taking the latest software updates, even from Apple!

(And it wasn’t just me: lots of iPhone 3GS users complained bitterly after upgrading to iOS 4 – it was like trying to run Vista on hardware designed for Windows 95)

And that’s the long-winded segue into why it feels like I am actually running Windows on my iPhone: a couple of weeks ago my battery started to run out very quickly (it wouldn’t last even a few hours of light use), so I took my iPhone to the local Apple Genius Bar, where they quickly determined that my iPhone’s OS was in terrible shape. Their recommendation was to do a complete rebuild of the iPhone from scratch, which meant setting it up as a brand new phone and adding back all my email accounts, music, etc. by hand.

This rebuild worked: now my iPhone is as fast and responsive as it used to be, but the experience of completely rebuilding an iPhone (which, incidentally, is not the same as doing a “restore from backup”), was all too reminiscent of having to rebuild my Windows machines periodically to clean out the gunk in the registry and system directories.

It is to the credit of the Apple “geniuses” that they quickly determined that my phone’s OS was in trouble, but I also suspect it is because this is a “known but not publicized” problem with iOS 4. Over time, iOS 4 gets in trouble just like Windows XP and Vista (and perhaps Windows 7 as well, although I use that too infrequently to say for sure).

Think different, work same.

Out of the billowing smoke and dust of tweets and trivia emerges our first patent

“The literati sent out their minions to do their bidding. Washington cannot tolerate threats from outsiders who might disrupt their comfortable world. The firefight started when the cowardly sensed weakness. They fired timidly at first, then the sheep not wanting to be dropped from the establishment’s cocktail party invite list unloaded their entire clip, firing without taking aim their distortions and falsehoods. Now they are left exposed by their bylines and handles. But surely they had killed him off. This is the way it always worked. A lesser person could not have survived the first few minutes of the onslaught. But out of the billowing smoke and dust of tweets and trivia emerged U.S. Patent No. 7,958,080 to be issued on June 7, 2011. Once again ready to lead those who won’t be intimated by the political elite and are ready to take on the challenges America faces.”

Out of the billowing smoke and dust of tweets emerges our first patent

Our first patent! We are hoping the coming apocalypse doesn’t cause any more delays at the Patent Office; we have been waiting a long time for this patent to get issued, and there are more in the pipeline!

So, why are we still learning typesetting?

Many, many years ago, as a young boy living in Delhi, I had the good fortune of being a neighbor and friend to the then-elderly, since deceased, M. Chalapathi Rau, the publisher and editor of the newspaper National Herald, which had been founded by Jawaharlal Nehru himself during India’s freedom struggle. (“M.C.” or “Magnus”, as he was known to his friends, was a man of many talents and a true éminence grise who unobtrusively operated the levers of power in India.)

To help me research a school project, M.C. took me to his newspaper’s printing press, a vast, clanking space where I watched with great fascination the painstaking process of laying out moveable type by hand: a craftsman’s job that had remained essentially unchanged, at least in India, since the 19th century. I did my school project, thinking that this would be my first and last experience with typesetting…

At college, however, typesetting reappeared: in order to get a job, one had to have a beautifully laid out resume, particularly if one had no “professional experience” to list other than the insalubrious qualification of having toiled in the scullery of a campus dining hall for minimum wage. So, I dutifully learned the obscure commands that helped set fonts and margins using troff, the first document preparation software for Unix computers.

I prepared and padded my resume, bluffed my way into my first job, and assumed that that would be my last encounter with typesetting. Ironically, my first job was at AT&T Bell Labs, working on Unix-based applications.

Typesetting is closely tied to the history of Unix, and, indeed, provided the raison d’etre for Unix’s existence. In 1971, when Ken Thompson and Dennis Ritchie (and others) at Bell Labs wanted to get funding for developing the Unix operating system, their business case was based upon the rather tenuous argument that developing this new operating system (Unix) would help them develop a better typesetting program (troff), which could be used by Bell Labs to file patents.

In those halcyon days, Bell Labs generously recognized and encouraged geniuses to explore their ideas, and, more mundanely, Bell Labs actually did need a better typesetting program: since its inception in 1925, the organization had averaged one patent per business day (and had collected about nine Nobel Prizes by the time I showed up as a very junior programmer).

So troff, the typesetting program, is responsible for the creation of Unix, which means that typesetting is the reason why Linux, cloud computing, Google, Facebook, Twitter, etc. all exist today!

Typesetting occupied a relatively small part of my workday until I started moving into management roles, which coincided with the widespread adoption of Microsoft’s Word software. Suddenly, most of my day was spent typesetting memos, performance appraisals, proposals, etc. I emphasize “typesetting”, rather than “writing”, because Microsoft Word remains, at heart, a typesetting program, not a writing program. It requires you to learn the same obscure catechism of tab settings, kerns and serifs, character and line spacings that those ancient typesetters at the National Herald had mastered as a craft.

And, yet, no one considers it strange that all of us highly trained, highly paid “knowledge workers” are required to master a craft that was first invented in China in 1040 AD!

The advent of the modern Web, starting with the release of the Netscape browser in 1995, has provided little relief: we exchanged one set of obscure keystroke combinations for another, equally opaque set of symbols (i.e. HTML). It is only in recent years that blogging tools, like the excellent WordPress software I use to pen this essay, have helped hide the typesetting and let users focus on the writing.

Between the release of the Netscape browser and the current robustness of WordPress came the advent of Google Docs. Google Docs’ primary innovation (or, more precisely, Writely’s primary innovation — remember Writely?) was to offer online editing; Google Docs did nothing to fundamentally alter the typesetting nature of word processing.

Google Docs continues to evolve, but as a persistent shadow of Microsoft Office. This makes sense from a business perspective, of course: it is easier for Google to get customers signed up if they can state simply that Google Docs works like the familiar Microsoft Office, and is a lot cheaper and easier to access. It would be much harder to get people to sign up for a Google Docs that seemed fundamentally alien in comparison to that reliable reference, Microsoft Office.

And, so it continues… Centuries after the invention of moveable type, we remain trapped in its formatting conventions. At Kerika, we are starting to think seriously about making our embedded text editor (which is based upon Whizzywig) be the primary way for people to write for the Web. Kerika is all about creating and sharing pages stuffed with your greatest ideas and coolest content, and it’s high time we put aside typesetting. For good.

Safe house, honey pot, or just dazed and confused at the Wall Street Journal?

The Wall Street Journal is creating its own version of Wikileaks, called SafeHouse, where you are invited to

Help The Wall Street Journal uncover fraud, abuse and other wrongdoing.

What would they like from you?

Documents and databases: They’re key to modern journalism. But they’re almost always hidden behind locked doors, especially when they detail wrongdoing such as fraud, abuse, pollution, insider trading, and other harms. That’s why we need your help.

If you have newsworthy contracts, correspondence, emails, financial records or databases from companies, government agencies or non-profits, you can send them to us using the SafeHouse service.

“SafeHouse”, however, sounds more like a “honey pot” when you read the WSJ’s Terms and Conditions, which offers three ways of communicating with SafeHouse:

1. Standard SafeHouse: […] Dow Jones does not make any representations regarding confidentiality.

2. Anonymous SafeHouse: […] Despite efforts to minimize the information collected, we cannot ensure complete anonymity.

3. Request Confidentiality: […] If we enter into a confidential relationship, Dow Jones will take all available measures to protect your identity while remaining in compliance with all applicable laws.

OK, so as they say, “You use this service at your own risk.” But, here’s where things start to get puzzling:

You agree not to use SafeHouse for any unlawful purpose.

Huh?

If you upload or submit any Content, you represent to Dow Jones that you have all the necessary legal rights to upload or submit such Content and it will not violate any law or the rights of any person.

How can anyone provide confidential documents that belong to another organization without violating the rights of that organization?

Even if you put aside for a moment the absurd notion of a “SafeHouse” where you can post materials that belong to others, without violating the rights of these others, or any laws for that matter, there is a more puzzling question of why the Wall Street Journal has decided to create its own version of Wikileaks in the first place.

An editorial from June 29, 2010 was entitled “Wikileaks ‘Bastards'” which effectively summarizes their views on anonymous leaking. The WSJ has been consistent in their stance: there are other editorials from April 25, Jan 21, Dec 31, etc., and even today’s editorial page notes that

…we have laws on the books that prohibit the unauthorized disclosure of government secrets. Those laws would fall hard on any official who violated his oath to protect classified information.

So why would they try to create their own version of a communication forum that they clearly despise? An editorial from Dec 2, 2010 provides a clue:

We can’t put the Internet genie back in the bottle.

Well, one place the Internet genie hasn’t yet visited is the Wall Street Journal’s main website: a search for “SafeHouse” there yields no results. Nothing to see here, folks, move along now…

The nature of “things”: Mukund Narasimhan’s talk at Facebook

Mukund Narasimhan, an alumnus of Microsoft and Amazon who is currently a software engineer with Facebook in Seattle, gave a very interesting presentation at Facebook’s Seattle offices this week. It was a right-sized crowd, in a right-sized office: just enough people, and the right sort of people to afford interesting conversations, and a very generous serving of snacks and beverages from Facebook in an open-plan office that had the look-and-feel of a well-funded startup.

Mukund isn’t posting his slides, and we didn’t take notes, so this isn’t an exhaustive report of the evening, but rather an overview of some aspects of the discussion.

Mukund’s talk was principally about data mining and the ways that Facebook is able to collate and curate vast amounts of data to create metadata about people, places, events and organizations. Facebook uses a variety of signals, the most important of which is user input, to deduce the “nature of things”: i.e. is this thing (entity) a place? Is this place a restaurant? Is this restaurant a vegetarian restaurant?

Some very powerful data mining techniques have been developed already, and this was illustrated most compellingly by a slide showing a satellite image of a stadium: quite close to the stadium, on adjacent blocks, were two place markers. These marked the “official location” of the stadium, as told to Facebook by two separate data providers. The stadium itself was pin-pricked with a large number of dots, each marking a spot where a Facebook user had “checked in” to Facebook’s Places.

The visual contrast was dramatic: the official data providers had each estimated the position of the stadium to within a hundred yards, but the checkins of the Facebook users had perfectly marked the actual contours of the stadium.
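The effect in that slide is easy to sketch in a few lines of code: averaging many noisy check-in points recovers a venue’s position better than any single “official” pin. Everything below – the coordinates, the venue, the provider pins – is invented for illustration; this is a minimal sketch of the idea, not Facebook’s actual pipeline.

```python
# Toy illustration: crowd-sourced check-ins vs. official place markers.
# All coordinates here are made up for the example.

def centroid(points):
    """Average of a list of (lat, lon) pairs."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def distance_sq(a, b):
    """Squared Euclidean distance; fine for comparing nearby points."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

# Invented data: the "true" stadium center, two provider pins that are
# a block or two off, and a cloud of user check-ins inside the stadium.
true_center = (47.5914, -122.3325)
provider_pins = [(47.5930, -122.3340), (47.5900, -122.3300)]
checkins = [(47.5912, -122.3327), (47.5916, -122.3323),
            (47.5913, -122.3324), (47.5915, -122.3326)]

crowd_estimate = centroid(checkins)
best_provider = min(provider_pins, key=lambda p: distance_sq(p, true_center))

# The aggregated check-ins land closer to the true center than either pin.
assert distance_sq(crowd_estimate, true_center) < distance_sq(best_provider, true_center)
```

Individual check-ins are noisy – people check in from their seats, the gates, the parking lot – but in aggregate they trace the venue far more faithfully than a single estimated coordinate can.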

Mukund’s talk was repeatedly interrupted by questions from the audience, since each slide offered a gateway to an entire topic, and eventually he ran out of time and we missed some of the material.

During the Q&A, Nikhil George from Mobisante asked a question that seemed highly relevant: was Facebook trying to create a “semantic Web”? Mukund sidestepped the question adroitly by pointing out that there is no established definition of the term semantic Web, and that is certainly true – the Wikipedia article linked above is tagged as having “multiple issues” – but while Facebook may not be using the specific protocols and data formats that some might argue are indispensable to a semantic Web, one could certainly make the case that deducing the nature of things, particularly the nature of things that exist on the Internet, is the main point of creating a semantic Web.

While much of the Q&A was about technical matters, a more fundamental question occupied our own minds: at the outset, Mukund asserted that the more people interact with (and, in particular, contribute to) Facebook, the better the Facebook experience is for them and all of their friends.

This, clearly, is the underpinning of Facebook’s business model: people must continue to believe, on a rational basis, that contributing data to Facebook – so that Facebook can continue to deduce the nature of things and offer these things back to their users to consume – offers some direct, reasonably tangible rewards for them and their friends.

Presumably, then, Facebook must be taking very concrete measures to measure this sense of reward that their users experience; we didn’t hear much about that last night, since that wasn’t Mukund’s area of focus, but it must surely be well understood within Facebook that the promise of continued reward for continued user interaction – which is essentially their brand promise – must be kept at all times. Has a lot of research been done in this area? (There is, of course, the outstanding research done by danah boyd on social networks in general.)

At a more practical level, a question that bedevils us is how we can improve the signal:noise ratio on our Facebook Wall! In our physical worlds, we all have some friends who are exceptionally chatty: some of them are also very witty, which makes their chatter enjoyable, but some are just chatty in a very mundane way. In our Facebook worlds, it is very easy for the chatty people (are they also exceptionally idle or under-employed?) to dominate the conversation.

In the physical world, if we find ourselves cornered by a boor at a party, we would quickly and determinedly sidle away and find someone more interesting to talk to – but how does one do that on Facebook? One option, offered by Mukund, would be to turn off their posts, which seems rather like “unfriending” them altogether. But we don’t want to unfriend these people; we just don’t want to hear every detail of every day.

Mukund suggested that by selectively hiding individual posts, as well as by “liking” others more aggressively, we could send clearer indications of our preferences to Facebook that would help the system improve the signal:noise ratio, and that’s what we have been trying over the past few days.

It is an intriguing topic to consider, and undoubtedly a difficult problem to solve, because you need to weed out individual messages rather than block entire users. For example, one Facebook friend used an intermission of the movie “Thor” to report that he was enjoying the movie. It’s great that he is enjoying the movie, but this low-value update spawned a low-value thread all of its own. We don’t want this person blocked altogether; we need some way of telling Facebook that updates sent during movie intermissions are not very valuable. If Facebook misinterprets our signals and relegates him to the dustbin, we might miss a more useful notification in the future, such as a major life event or career move.

The problem seems analogous to developing a version of Google’s PageRank, but at the message level. In the example above, if a post sent during a movie intermission is of low value, it would affect the ranking of everyone who piled on to create its low-value thread.
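As a toy sketch of what “PageRank at the message level” might look like, the code below scores each post from explicit like/hide signals and derives a per-author weight from that author’s recent posts, so chronically low-value posters are demoted rather than blocked. The signal names, data and smoothing are our own invention, not anything Facebook described.

```python
# Toy sketch of message-level ranking: posts earn a score from explicit
# signals (likes vs. hides), and an author's weight is the average score
# of their recent posts. Chatty, low-value posters drift down the feed
# without ever being blocked outright. All data here is invented.

from collections import defaultdict

posts = [
    {"author": "alice", "likes": 5, "hides": 0},  # well-received update
    {"author": "bob",   "likes": 0, "hides": 3},  # "enjoying the movie!"
    {"author": "bob",   "likes": 1, "hides": 2},
    {"author": "alice", "likes": 2, "hides": 1},
]

def post_score(post):
    # Laplace-smoothed like ratio in (0, 1), so new posts start neutral.
    return (post["likes"] + 1) / (post["likes"] + post["hides"] + 2)

def author_weights(posts):
    scores = defaultdict(list)
    for p in posts:
        scores[p["author"]].append(post_score(p))
    return {a: sum(s) / len(s) for a, s in scores.items()}

weights = author_weights(posts)
# A new post's feed rank could then be post_score * author weight,
# demoting the chatty poster without unfriending or blocking them.
assert weights["alice"] > weights["bob"]
```

A real system would of course fold in many more signals (comment quality, recency, relationship strength), but the shape of the solution is the same: rank the message, and let the messenger’s standing follow from the messages.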

People like us who are more technical would probably prefer a more direct way of manipulating (i.e. correcting) the signal:noise ratio. Presumably someone at Facebook is working, even as we write this blog, on some visual tools that provide sliders or other ways for people to rank their friends and notifications. One idea that comes to mind might be a sort of interactive tag cloud that shows you who posts the most, but which also lets you promote or demote individual friends.

Some email clients and collaboration tools assume that people who email you the most are the ones that matter the most, but with a social network, wouldn’t this bias have the opposite effect? Wouldn’t the most chatty people be the ones who have the least to say that’s worth hearing?

One piece of good news from Mukund is that Facebook is working on a translator: one of our closest friends is a Swede, and his posts are all in Swedish which makes them incomprehensible. Having a built-in translator will certainly make Facebook more useful for people with international networks, although it will be very interesting indeed to see how Facebook’s translation deals with the idiosyncrasies of slang and idiom.

Update: we got it wrong. Facebook isn’t working on a translator; Mukund was referring to a third-party application.

One topic that particularly intrigued us, but which we couldn’t raise with Mukund for lack of time, was Paul Adams’ monumental Slideshare presentation on the importance of groups within social networks. Paul argues that people tend to have between 4 and 6 groups, each of which tends to have 2-10 people. (This is based upon the research behind Dunbar’s Number, which posits that there is a limit to the number of people with whom one can form lasting relationships, and that this number is around 150.)

Facebook still doesn’t have groups, which is surprising since Paul Adams decamped from Google to Facebook soon after making his presentation available online. It is a massive presentation, but it is fascinating material and surprisingly light reading: just fast-forward to slide 64, where the picture sums up the entire presentation rather well.

Update: we got that wrong, too. Facebook does have a Groups product, and it is becoming increasingly popular within the company itself.

All in all, one of the most enjoyable presentations we have attended in recent days. Mukund needs special commendation for his fortitude and confident good humor in standing up before a savvy crowd and braving any and all questions about Facebook’s past, present and future.

Seattle’s “evening tech scene” is really getting interesting these days: perhaps we are seeing the working of a virtuous cycle where meetups and other events start to “up their game”!