Interested in using Kerika for software development? Here’s a practical example, taken from one of our own boards, that highlights best practices for tracking, investigating and fixing bugs.
First, a look at our Workflow
Every software development team will want to set up its own workflow, of course, reflecting its internal dynamics and available resources. Here’s the Scrum Board workflow we use in our team:
Our Scrum Boards are organized with these columns:
The Backlog contains all the ideas, large and small, that we have on our product road map: in Scrum terminology, it’s our Product Backlog.
The Sprint Backlog is the set of cards that we pulled from the Backlog at the start of this Sprint: right now, only 4 items are left as we are close to the end of our 2-week Sprint Cycle.
Within each Sprint, cards are picked up by developers and first moved into the Planning column, where detailed analysis of the work to be done is completed.
Depending upon the complexity of a particular work item, a developer may request a design review before moving the card further into the Development column.
Developers do their own unit testing as part of the Development phase, but then the work item moves further down to the QA column, which frequently includes a formal code review. (More on that below…)
After a bug has been fixed, had its code reviewed and passed unit testing, it gets Deployed to the Test environment.
We usually wait until the 2-week Sprint is over before asking the entire team to present the entire Sprint’s output to the Product Owner for the Show & Tell; this avoids distracting the team midway through the Sprint.
Once the output of a Sprint has passed the Show & Tell, it can then be Deployed to Production.
The Done column shows all the work that got done in this Sprint. That’s where all cards are supposed to go, but sometimes a work item is abandoned and moved to the Trash.
(Side note: we sometimes use WIP Limits to make sure that people are not over-committed to work, but this is not a consistent practice within our team.)
Logging the bug
Now, let’s take a look at an example of an actual bug card that was worked on by our team:
(This particular bug card doesn’t contain a lot of detail, because it is related to another task that is currently underway, and Kerika makes it easy to link cards, canvases or boards.)
Some bugs go into the Product Backlog, if they are not considered especially urgent, but others go straight into the Sprint Backlog if they represent serious production problems that might affect users’ access or the reliability of their data.
Adding bugs to the Product Backlog lets us process bugs alongside other development work, such as new features: everything can be prioritized by the Product Owner and handled through a consistent workflow.
Documenting the bug
Our bug reports typically come with two attachments; at least one of these should always be included so that the developer has a clear starting point for her work:
- A screenshot showing the user experience (if the bug can be observed directly by the user.)
- An excerpt from the error log, if the server reported any errors around the time the bug was observed.
Kerika makes it easy to attach any kind of content to any card, canvas or board: for bug fixing, particularly in the analysis phase, this is very useful if the user needs to include URLs, material from Sourceforge or similar sites, links to Github, etc.
Repro, Cause & Fix
Before any bug is fixed, our developers always add a standard document we call Repro, Cause and Fix as an attachment to the card:
This document is added no matter how trivial the bug.
Why? Because, on average our team goes through about 30-40 cards a week, and has been doing so for years now. If we don’t document our analysis now, we will never recall our logic in the future.
Our team adopts a consistent layout of the Repro, Cause & Fix document, which we adhere to even for trivial bugs:
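Here is a bare-bones sketch of that layout; the placeholder text is only illustrative, and the exact wording in any given bug report is up to the developer:

```
Repro Steps
  1. <the specific sequence of user actions, or background events, that triggers the bug>

Expected Behavior
  <how the software should behave in this situation>

Introduced Since
  <the release, feature or checkin where the bug was introduced, if we can pinpoint it>

Root Cause
  <what is actually breaking in the code, as determined by detailed examination>

Affected Feature
  <the feature being modified, to give the code reviewer the larger context>

Affected User
  <usually "everyone", unless the bug is browser-specific or service-specific>

The Fix
  <the specific modules that will change, with references to the relevant Git checkins>
```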
The Repro Steps are the steps needed to trigger the bug: usually they consist of a specific sequence of actions taken by a user that cause the bug to appear, but they can also consist of a sequence of events in the background, like specific server activity or memory/CPU utilization conditions, or overall network traffic, that trigger the bug.
The Expected Behavior helps clarify the developer’s understanding of how the software should behave in this situation. It’s not uncommon for a developer who is new to the team to misunderstand how a particular feature is expected to behave, which could lead to more bugs being introduced. Clarifying the Expected Behavior in this document provides an easy way for the Product Owner and the Team Lead to confirm that the developer is not going to go down the wrong track.
Introduced Since: our developers try to pinpoint the specific point in the software’s life where the bug was introduced. This helps developers reflect upon how errors are created in the first place, and on the blind spots in their analytical patterns that need to be strengthened.
(In this particular example, the developer has been able to point back to some of her own work on an earlier feature implementation that caused this bug to appear.)
Root Cause is not the same as the Repro Steps. While a specific sequence of actions or events may reliably display the effects of a particular bug, they only provide the starting point for the analysis; the Root Cause itself is discovered only when the developer examines the code in detail and determines exactly what is breaking.
Affected Feature: all of our code goes through code review, which we view as one of the most effective QA processes we could adopt, but reviewing can be very challenging when you are processing 30-40 cards each week, each affecting a different part of the software.
Identifying the Affected Feature helps with the code review process, since the reviewer can consider the bug fix in the larger context of the feature that’s being modified. Without this, it is doubtful that we could review so many changes each week.
Affected User: in most cases, this is “everyone”, since we offer the same Kerika to all our users, whether they are on free trials or have paid for professional subscriptions.
Identifying Affected Users is useful, nonetheless, when dealing with bugs that are browser-specific, or service-specific: for example, determining that a particular bug affects only Kerika+Google users, or only Internet Explorer users.
The Fix: OK, this might seem obvious, but this section really refers to identifying the specific modules in the software that will be changed as a result of the bug fix being applied.
Like many others, we use Git for managing our source code — along with Maven for builds — since our server environment is all Java-based.
The Fix section usually includes references to Git checkins: this helps with future bug fixes by making it easy to trace back the sources of new bugs, which feeds into the Introduced Since section described above.
We use separate Git branches for each feature that we develop so that we can decide precisely what gets released to production, and what is held back for future work.
This helps with the Show & Tell phase of our workflow, when the developers demonstrate the output of the Sprint to the Product Owner, who has the option to accept or reject specific features (i.e. cards on the Scrum Board).
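To make the branching and checkin references concrete, here is a rough sketch of what that flow can look like at the command line; the branch name, card number and commit message are made up for illustration, and we are assuming a conventional master branch as the branch that gets released:

```
# Create a separate branch for the feature (or bug fix) being worked on
git checkout -b fix/card-1234-error-log-attachment

# Make the change, run the unit tests, then commit it
git add .
git commit -m "Fix attachment handling for error-log uploads"

# Note the checkin reference, to paste into the Fix section of the
# Repro, Cause & Fix document
git log -1 --oneline

# Once the card passes code review and the Show & Tell, the branch can be
# merged into the branch we release to production from
git checkout master
git merge --no-ff fix/card-1234-error-log-attachment
```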
Code Review
All of our code goes through code review, no matter how trivial the change:
We really believe that code reviews are one of the most effective QA practices we could adopt: having a second pair of eyes look over code considerably reduces the chances of new bugs being introduced.
So…
Kerika is great for distributed software teams: our own team is spread out between Seattle and India (roughly 10,000 miles apart!), and all of our work is done using Kerika for task management, content management and team collaboration.
Of course, an added benefit of “eating our own dogfood” is that we are highly motivated to make sure Kerika is the best tool there is for distributed teams 🙂