OK, so our website has long stressed the word “unlimited”, and maybe that’s not such a great idea…
Most people have boards with a few dozen cards.
Some folks have boards with a couple of hundred cards.
Very few people have boards with up to 1,000 cards.
It turned out that one of our users had a board with nearly 4,200 cards, of which over 4,000 were in the Trash.
And that’s not good! A board with several thousand cards on it is going to take a long time to load, because each card carries many different attributes: details, chat, assignments, status, history, etc.
And when someone with a very large board uses Kerika, this can cause very unexpected loads on the servers, particularly in terms of network I/O and CPU utilization.
This is what it looks like when someone suddenly opens a board with thousands of cards on it:
Most of the time, boards get very big because they are very old: stuff piles up in the Done column, or, as in this case, in the Trash column.
Having very large boards can impact performance: unlike, say, email, which you may be accustomed to leaving piled up in your Inbox for years on end, Kerika’s cards are “live objects”: even when a card is in the Done or Trash columns, it is constantly listening for updates from the server, because it could be moved out of Done or Trash at any time by someone on the team, or have some of its attributes changed.
For example, even though a card is “Done” or “Trashed”, it could have its details updated, or new chat added to it.
This is different from email messages, which are essentially “dead objects”: once you get an email and read it, it doesn’t update after that, so it can sit for years in your Inbox without trying to get fresh updates from the mail server.
So, when you have 4,200 cards on a single board, you have that many live objects trying to get updates from the server at the same time, and that’s not good for your browser or our server!
(Imagine your laptop trying to read and display 4,200 emails at the same time, and you will get an idea of the problem…)
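The “live objects” idea above can be shown with a toy sketch (purely illustrative, not Kerika’s actual implementation): every card on an open board subscribes to server updates, so a single broadcast fans out to every card, including the ones sitting in Done or Trash.

```python
# Toy illustration of "live objects": all cards on an open board listen for
# server updates, even those in Done or Trash. Not Kerika's actual code.
class Card:
    def __init__(self, name):
        self.name = name
        self.updates_received = 0

    def on_update(self, change):
        self.updates_received += 1  # every open card pays this cost

class Board:
    def __init__(self, cards):
        self.cards = cards  # every card listens, including Done/Trash

    def broadcast(self, change):
        for card in self.cards:
            card.on_update(change)

board = Board([Card(f"card-{i}") for i in range(4200)])
board.broadcast({"type": "status-changed"})

# One update fans out to all 4,200 live cards:
print(sum(c.updates_received for c in board.cards))  # 4200
```

With “dead objects” like email, that broadcast loop simply doesn’t exist: nothing listens, so nothing pays the cost.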
Having very large boards is not a good idea from a workflow or process management perspective, and so perhaps Kerika needs to do something about it: we need to think about how we can help our users improve their workflow, while also avoiding these CPU spikes.
A couple of ideas that we are considering:
Start warning users when their boards are getting too big.
If boards hit some threshold values (which we have yet to figure out, maybe it’s 1,000 cards?) force the user to reconfigure their board so they don’t affect the quality of the service for everyone.
To access this feature, simply click on the Project Info button that’s shown at the top-right of each Kerika Board, and you will see the Project Info display (that we have talked about in an earlier blog post):
CSV format is useful if you want to take data from Kerika and put it into Excel or some other analysis tool;
HTML format is useful if you want to print material from Kerika, or insert it into Word, PowerPoint or similar tools.
With both CSV and HTML exports, hidden cards are not exported: this means that if you are currently choosing to hide some columns (by using the Workflow button), or hide some cards (by using the Tags filters), then the cards that you are not viewing right now will not be part of the export.
When you export a board in CSV format, you get the following data, for each visible card:
Column Name: e.g. Backlog, In Progress, Done, etc.
Card Name: e.g. “Create PR news release”.
Card Description: e.g. “We need to create a PR news release once our latest version is ready…” (Rich text will be converted to plain text, since CSV files can only deal with plain text.)
Status: e.g. Needs Review, Needs Rework, etc. (If the card doesn’t have a special status, “Normal” will be shown.)
Due Date: the date the card is due, if a date has been set. (If the card doesn’t have a Due Date, “Not Scheduled” will appear.)
Assigned To: a list of names of the people the card is currently assigned to. (If the card isn’t assigned to anyone, “Not Assigned” will appear.)
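As a quick illustration, an export with the fields listed above can be read with Python’s standard csv module. The header below matches that field list; the sample rows are invented for illustration:

```python
import csv
import io

# Hypothetical sample of a Kerika CSV export; the header matches the
# fields listed above, the rows are invented for illustration.
sample = """Column Name,Card Name,Card Description,Status,Due Date,Assigned To
Backlog,Create PR news release,We need to create a PR news release...,Normal,Not Scheduled,Not Assigned
In Progress,Fix login bug,Session expires too early,Needs Review,2014-10-01,Alice
"""

rows = list(csv.DictReader(io.StringIO(sample)))

# Example analysis: count visible cards per column.
per_column = {}
for row in rows:
    per_column[row["Column Name"]] = per_column.get(row["Column Name"], 0) + 1

print(per_column)  # {'Backlog': 1, 'In Progress': 1}
```

From here the data is one `read_csv` away from Excel, or any other analysis tool you prefer.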
Exporting could take a while: the exported data are put into a file in your Google Drive or Box account — depending upon whether you are using Kerika+Google or Kerika+Box — and when the process completes, you get an email with a link to the file containing your data.
It’s a similar experience if you do an HTML export; however, the format of the data is different, giving you an indented set of attributes for each card, like this example from a Kerika+Box project:
One caveat about exporting HTML: if you open the results in Google Docs, Google shows a preview of the output, and that doesn’t look good: instead of rendering the HTML, Google exposes the raw markup.
Here’s an example from a Kerika+Google project:
The export feature can be used for many different purposes, of course: the most common scenario we envision is people wanting to include material from Kerika in their analysis and presentations.
And, of course, one use would be for government agencies that have to respond to Freedom of Information Act (FOIA) requests, or other Sunshine laws.
Arun Kumar, Kerika’s CEO, will present at the Washington State Office of Financial Management (OFM) Fall Forum, at 1:30PM on September 17, 2014 at the Thurston County Fairgrounds in Olympia, Washington.
The topic will be experimentation with Visual Management in government and administrative processes.
Arun Kumar, Kerika’s CEO, will present a special Breakout Session on “One team, many places: creating collaboration networks for distributed workgroups”.
The vision of Lean Government is about collaborating across offices, across agencies, and even across sectors. In an era of flat or even declining budgets, it’s become essential to get Lean across the state, not just across the room.
The old technologies never really supported distributed Lean and Agile, but that’s all changed now: a new generation of browser-based work management tools makes it fast and easy to build Lean and Agile teams that connect professionals across agencies, and across sectors so that expertise from the private sector, academia, and nonprofits can be leveraged to deliver great results in Washington.
This breakout session will feature a look at some great cross-agency projects and cross-sector projects: initiatives that have succeeded in delivering in a way that was unimaginable only a couple of years ago.
The session will be presented at 12:15PM in Room 318 on October 21, and again at 10AM in Room 318 on October 22.
If you are working in state, county or local city government and are interested in Lean and Agile, be sure to join us: the cost for attending is just one can of food, which will be donated to a food bank 🙂
We were at Boxworks14 last week, and had a great time!
We met a bunch of interesting folks, including Heidi Williams, Senior Director of Platform Engineering, who — along with Peter Rexer and others from her team — gave some really insightful deep-dives into Box’s technology stack.
(Among other things we learned that we could improve the Kerika user experience by changing the way we do OAuth 2.0 with Box.)
Keynote speeches were amazing: the hyper-kinetic Aaron Levie made for a rousing start, but the real star was Jared Leto, who not only brought his Oscar onstage, but in a jaw-dropping move handed it over to the audience for people to take selfies with while he blithely continued with his “Fireside Chat”.
Jared’s move even upstaged Aaron, which is pretty hard to do (as you will know, if you have ever encountered Aaron in the flesh…)
Other great speakers included Jim Collins, author of Built to Last, Vinod Khosla of Khosla Ventures (and, originally, Kleiner Perkins and Sun Microsystems), and Andrew McAfee from MIT.
We are sometimes asked (usually by our more techie users) whether Kerika has a published API.
The short answer is “No”; the longer answer is “Not yet.”
We do have a server API, of course, that the Kerika front-end client application itself uses, but it is a very proprietary and non-standard API at present.
This is largely because of an early decision we made to use CometD for our real-time client-server communications.
CometD is a form of long-poll architecture, but our implementation, unfortunately, is not very standard, in part because we built an “API generator” a long time ago that allows us to create new APIs fairly quickly from metadata descriptions of the desired features.
This was helpful when we were first getting started, but, quite honestly, it isn’t an approach that makes a lot of sense any more and we have migrated away from using that API generator.
But, because of our history and legacy code, we currently have a mix of auto-generated APIs and newer APIs, and this isn’t really something that we want to publish and support for external third-party development.
We plan to redo our API this year to make it more standard and easier for third-party developers to use, at which point we will publish it and start encouraging more platform development around Kerika.
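For readers unfamiliar with the long-poll pattern that CometD is built around, here is a minimal, self-contained sketch (the `fetch_updates` function and response shape are invented stand-ins, not Kerika’s or CometD’s actual API): the client issues a request that the server holds open until there is new data, processes whatever comes back, and immediately reconnects.

```python
# Minimal sketch of the long-poll pattern: the client keeps a request open
# until the server has something to say, then immediately reconnects.
# fetch_updates() is a fake stand-in for a blocking HTTP request, so the
# example is self-contained; a real client would call the server here.
def fetch_updates(cursor, timeout=30):
    # Pretend the server answers immediately with one update and a new cursor.
    return {"updates": [{"card": 42, "change": "moved to Done"}],
            "cursor": cursor + 1}

def poll(handler, iterations=3):
    cursor = 0
    for _ in range(iterations):  # a real client would loop indefinitely
        response = fetch_updates(cursor)
        for update in response["updates"]:
            handler(update)
        cursor = response["cursor"]  # reconnect immediately with the new cursor
    return cursor

received = []
final_cursor = poll(received.append)
print(final_cursor, len(received))  # 3 3
```

The Bayeux protocol that CometD implements layers channels, handshakes, and message batching on top of this basic loop, which is part of why a home-grown variant of it is hard to offer as a public API.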
We often get asked if Kerika has an integration with Git. The short answer is “No”, but the longer answer is more nuanced…
We use Git ourselves for managing our own source code and other software assets.
Git was designed from the git go (ha!) to be used by distributed teams, having originated with the Linux kernel team, perhaps the most important distributed team in the whole world, so it made perfect sense for us to use it: it works across operating systems, and a number of simple GUIs are now available for managing your various source-code branches.
We simply embed the git references within cards on our project boards: sometimes in the chat conversation attached to a card, but more often within the card’s details.
Here’s an actual example of a bug that we fixed recently:
We use multiple Git branches at the same time, because we put every individual feature into a separate branch.
That’s not a fixed rule within Git itself; it’s just our own team’s practice, since it makes it easier for us to stick with a 2-week Sprint cycle: at the end of every 2 weeks we can see which features are complete, and pull these git branches together to build a new release.
So while Kerika doesn’t have a direct integration with Git, it’s pretty easy to use Kerika alongside Git, or other source management systems.
One of our users wrote in last night with this great story, which we wanted to share with you…
I did a one hour webinar for the software company (Software AG) that we develop all of our software with as they were impressed with the way we were using their software development environment (NaturalOne).
I threw a little Kerika spice into my presentation as it has become such an important part of our development environment and I actually used it to prepare my presentation.
Instead of preparing the presentation by myself I used a Kerika project and had my software developers contribute cards and instructions in the areas that they specialized.
While I was doing a live presentation I was referring to the cards on my other monitor and swiping them to the ‘Done’ column as I completed them.
I know you like to hear stories about how people use your software and this worked very well for this presentation. It was recorded and I will send you a link to it once it is published. It might put you to sleep at night, except for the Kerika part.
Here at Kerika, we often get asked how we do Scrum as a distributed team.
Here’s the model we have evolved, which works for us mainly because we are the very essence of a distributed Agile team: we have people working together on the same product from locations that are 10,000 miles apart!
And this means that we are the most enthusiastic consumers of our products: we use Kerika to manage every part of our business and we only build what we would ourselves love to use.
Here’s the basic outline of our Scrum model:
Each Sprint is 2 weeks long: that works well for us; other folks might find that 3 weeks or 4 weeks is better. Pick what works for you.
Each Sprint begins with Sprint Planning, where the Scrum Team gets together with the Product Owner to decide which cards will be pulled from our main Product Backlog into the Sprint Backlog.
Each Sprint is organized as a separate Scrum Board: this makes it really easy for us to concentrate upon what needs to get delivered in that particular Sprint, without getting distracted by what was done in the past or what remains to be done.
And Kerika makes it really easy to pull cards (literally!) from the Backlog onto a Scrum Board, and then hide the Backlog from view so it doesn’t distract the Team while the Sprint is underway.
Halfway through the Sprint, at the end of the first week, we do a gut-check: does the Sprint look like it is going reasonably well? We don’t ask if it is going perfectly: almost no Sprint does; what we are looking for is any indication that the Sprint is going to severely under-deliver in terms of the Team’s commitments to the Product Owner.
We could do these gut-checks every day during our Daily Standups, but in the first part of a Sprint cycle these can often give us false positives: it’s easy to tell early on if a Sprint is going to be disastrous, but it’s hard to tell for sure that it is actually going to end well. But about midway through the Sprint we start to have a more reliable sense for how things may turn out.
In keeping with the Scrum model, our goal is to complete a potentially shippable set of features and bug fixes with each Sprint, although this doesn’t necessarily mean that we will always ship what gets built after each Sprint. (More on that later.)
For each feature or bug, however large or small, we make sure that we have design and testing baked into the process:
An analysis document is prepared and attached to the card, either as a Google Doc or as a Box document. (We had been using Kerika+Google exclusively for years, but have recently switched to Kerika+Box since we completed our Box integration.)
The document is often just a few paragraphs long, because we always take cards representing large features (or other big work items) and break them up into smaller cards, so that no card is likely to take more than a day’s work. Kerika makes it really easy to link cards together, so it’s easy to trace work across multiple cards.
For bugs, the attached document describes the expected behavior, the actual behavior, and the root cause analysis. Very frequently, screenshots showing the bugs are attached to the cards.
For new features, several documents may be attached, all quite small: there may be a high-level analysis document and a separate low-level design document.
For all features and bugs, we do test planning at the time we take on the work: for back-end (server) work we rely primarily on JUnit for writing automated tests; for front-end (UI) work we have found that automated testing is not very cost-effective, and instead rely more on manual testing. The key is to be as “test-driven” in our development as possible.
There are several benefits from doing formal planning, which some folks seem to view as being antithetical to an Agile model:
It helps find holes in the original requirements or UI design: good technical analysis finds all the edge cases that are overlooked when a new feature is being conceptualized.
It helps ensure that requirements are properly interpreted by the Team: the back-and-forth of analysis and review helps ensure that the Product Owner and the Team are in sync on what needs to get done, which is especially important for new features, of course, but can also be important for understanding the severity and priority of bugs.
It deliberately slows down the development pace to the “true” pace, by ensuring that time and effort for testing, especially the development of automated tests, is properly accounted for. Without this, it’s easy for a team to quickly hack new features, which is great at first but leads to unmaintainable and unstable code very soon.
At the end of the 2-week cycle, the Team prepares to end the Sprint…
We like to talk about Sprints as “buses”: a bus comes by on a regular schedule, and if you are ready and waiting at the bus stop, you can get on the bus.
But if you are not ready when the bus comes along, you are going to have to wait for the next bus, which thankfully does come by on a regular 2-week schedule.
This metaphor helps the Team understand that Sprints are time-boxed, not feature-boxed: in other words, at the end of every 2 weeks a Sprint will end, regardless of whether a feature is complete or not. If the feature is complete, it catches the bus; otherwise it will have to wait for the next bus.
And here’s where the Kerika team differs from many other Scrum teams, particularly those that don’t consume their own products:
At the end of each Sprint, we do the normal Sprint Retrospective and Show & Tell for the Product Owner.
But, we also then take the output of the Sprint and deploy it to our test environment.
Our test environment is the one we actually use on a daily basis: we don’t use the production environment as often, preferring to put our own work at risk by running the latest/greatest version of the software in the test environment.
This forces us to use our newest software for real: for actual business use, which is a much higher bar to pass than any ordinary testing or QA, because actual business use places a higher premium on usability than regular QA can achieve.
(And, in fact, there have been instances where we have built features that passed testing, but were rejected by the team as unusable and never released.)
This is illustrated above: the version of Kerika that’s built in Sprint 1 is used by the team to work on Sprint 2.
This is where the rubber meets the road: the Kerika Team has to build Sprint 2, while using what was built in the previous Sprint. If it isn’t good enough, it gets rejected.
At the end of Sprint 2, we will release the output of Sprint 1 to production. By this time it will have been used in a real sense by the Kerika Team for at least 2 weeks, post regular QA, and we will have a high confidence that the new features and bug fixes are solid and truly usable.
We could summarize our model by saying that our production releases effectively lag our Sprint output by one Sprint, which gives us the chance to “eat our own dogfood” before we offer it to anyone else.