So, here’s how we dealt with the Heartbleed bug…
We learned about the bug just like you did: through news reports early on April 7th. Heartbleed was a “zero-day” bug, and the OpenSSL team put out an updated (patched) version of the OpenSSL library the same day, which meant that everyone, everywhere, had to scramble to get their systems patched as quickly as possible.
(And the bad guys, meanwhile, scrambled to grab sensitive information, with the Canadian tax authorities being among the first to report that they had been hacked. Of course, “first to report” isn’t the same as “first to actually get hacked”. Most people who got hacked either never found out, or never said anything…)
Kerika uses OpenSSL too, and our immediate concern was updating the Elastic Load Balancer that we use to manage access to our main Amazon Web Services (AWS) servers: SSL connections terminate at the load balancer, so that’s where OpenSSL runs, not on the Web servers that sit behind it.
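If you want to see what your own setup presents to the outside world, here is a minimal sketch, using only the Python standard library, that connects to a public HTTPS endpoint and reports the certificate and protocol offered by whatever terminates SSL there (in a setup like ours, the load balancer). The hostname is a placeholder, not our actual endpoint:

```python
# Minimal sketch: inspect the TLS endpoint that the outside world actually
# talks to, using only the Python standard library.
# "www.example.com" is a placeholder hostname.
import socket
import ssl

host = "www.example.com"
context = ssl.create_default_context()

with socket.create_connection((host, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()
        print("Negotiated protocol:", tls.version())          # e.g. TLSv1.2
        print("Certificate subject:", dict(x[0] for x in cert["subject"]))
        print("Valid until:", cert["notAfter"])
```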
Using Amazon Web Services turned out to be a really smart decision in this respect: Amazon just went ahead and patched all their load balancers one by one, without waiting for their customers to take any action. In fact, they patched our load balancer faster than we expected!
Patching the load balancer provided critical immediate protection, and gave us the time to do a more leisurely security review of all our operations. This was long overdue, it turned out, and so we went into “housecleaning mode” for over a week.
One part of this, of course, was updating all our Ubuntu Linux machines: Canonical was also very prompt in releasing a patched version of OpenSSL for Ubuntu, which we loaded onto all of our development, test, and production servers. Even though the vulnerability had already been patched at the load balancer, and these servers can’t be reached directly from the Internet, we patched every one of them anyway.
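For what it’s worth, here is a small sketch of the kind of check that confirms a machine actually picked up the fixed package. The “fixed version” string is an assumption based on Ubuntu 12.04’s security update at the time, and would be different for other releases:

```python
# Minimal sketch: check whether the locally installed OpenSSL library package
# is at least the Heartbleed-fixed version. FIXED_VERSION is an assumption
# (the patched libssl1.0.0 on Ubuntu 12.04); adjust it for your release.
import subprocess

FIXED_VERSION = "1.0.1-4ubuntu5.12"   # assumed fixed version for Ubuntu 12.04

installed = subprocess.run(
    ["dpkg-query", "-W", "-f=${Version}", "libssl1.0.0"],
    capture_output=True, text=True, check=True,
).stdout.strip()

# dpkg --compare-versions exits with status 0 when the comparison holds.
ok = subprocess.run(
    ["dpkg", "--compare-versions", installed, "ge", FIXED_VERSION]
).returncode == 0

print(f"libssl1.0.0 {installed}: {'patched' if ok else 'STILL VULNERABLE'}")
```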
Next, we decided to clean up various online services that we weren’t actively using: like many other startups, we frequently try out libraries and third-party services that look promising. We stick with some; others get abandoned. We had accumulated API keys for services that we weren’t using any more (e.g. we had a YouTube API key, and no one could even remember why we had gotten it in the first place!), and we deactivated everything that wasn’t actively being used.
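A hypothetical sketch of the inventory step (not our actual tooling): walk a project tree, list configuration entries whose names look like credentials, and then review each one to decide whether it should be deactivated at the provider:

```python
# Hypothetical sketch: list credential-looking entries in config files so each
# one can be reviewed and, if no longer needed, deactivated at the provider.
import re
from pathlib import Path

CREDENTIAL_NAME = re.compile(
    r"^\s*([A-Z0-9_]*(?:API_KEY|SECRET|TOKEN)[A-Z0-9_]*)\s*=", re.I
)

def find_credentials(root="."):
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        if path.name != ".env" and path.suffix not in {".properties", ".cfg", ".ini"}:
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            match = CREDENTIAL_NAME.match(line)
            if match:
                yield path, lineno, match.group(1)

for path, lineno, name in find_credentials():
    print(f"{path}:{lineno}: {name}")
```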
Closing unneeded online accounts reduced our “attack surface”, which improves our overall security.
And, of course, we changed all our passwords, everywhere. All of our email passwords, all of our third-party passwords. All of our online passwords and all of our local desktop passwords. (On a personal level, our staff also took the opportunity to change all their banking and other online passwords, and to close unneeded online accounts, to reduce our personal attack surfaces as well.)
We got new SSL certificates, since Heartbleed could have exposed the private keys behind our old ones: from Verisign for our production load balancer, and from GoDaddy for our test load balancer. Getting a new SSL certificate from Verisign took much longer than we would have liked; getting one from GoDaddy took just seconds. But, on the other hand, Verisign does have a better reputation…
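Replacing a certificate properly also means generating a brand-new private key and certificate signing request (CSR) to hand to the certificate authority, since reusing a possibly leaked key would defeat the purpose. Here is a minimal sketch of that step using the third-party Python cryptography package; the hostname and organization are placeholders, not our real details:

```python
# Minimal sketch: generate a fresh private key and CSR for re-keying an SSL
# certificate, using the third-party "cryptography" package.
# The hostname and organization below are placeholders.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# A brand-new key: after Heartbleed, re-keying (not just re-issuing) matters
# because the old private key may have leaked.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, u"www.example.com"),     # placeholder
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, u"Example Inc."),  # placeholder
    ]))
    .sign(key, hashes.SHA256())
)

with open("server.key", "wb") as f:
    f.write(key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.TraditionalOpenSSL,
        encryption_algorithm=serialization.NoEncryption(),
    ))
with open("server.csr", "wb") as f:
    f.write(csr.public_bytes(serialization.Encoding.PEM))
```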
We reviewed our internal security policies and procedures, and found a few places where we could tighten things up. This mostly involved increased use of two-factor authentication and, most importantly, further tightening access to various services and servers within the Kerika team. Access to our production servers is highly restricted even within the team: we use AWS’s Identity & Access Management (IAM) service to assign roles and permissions, so that even the small subset of people who can touch the production servers get only the access they actually need.
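To give a flavor of what role- and permission-based restriction looks like in practice, here is a minimal sketch using boto3, the AWS SDK for Python. The policy contents, policy name, and group name are placeholders for illustration, not our actual IAM setup:

```python
# Minimal sketch (placeholder names and permissions): create a narrowly
# scoped IAM policy and attach it to a group with boto3. IAM denies anything
# not explicitly allowed, so the policy only grants two read-level calls.
import json
import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:DescribeInstances", "ec2:DescribeInstanceStatus"],
            "Resource": "*",
        }
    ],
}

response = iam.create_policy(
    PolicyName="ProductionReadOnlySubset",      # placeholder policy name
    PolicyDocument=json.dumps(policy_document),
)

iam.attach_group_policy(
    GroupName="prod-operators",                 # placeholder group name
    PolicyArn=response["Policy"]["Arn"],
)
```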
Finally, we are adding more monitoring, looking out for malicious activity by any user, such as the use of automated scripts. We have seen a couple of isolated examples in the past: not malicious users, but compromised users who had malware on their machines. Fortunately those attempts were foiled by our access control mechanisms, which manage permissions at the individual object level in Kerika. But, like every other SaaS company, we need to stay vigilant on this front.
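As a flavor of what this kind of monitoring can look like, here is a hypothetical sketch that flags users whose request rate within a single minute looks more like a script than a person. The log format and threshold are assumptions for illustration, not our actual monitoring pipeline:

```python
# Hypothetical sketch: flag users whose per-minute request count exceeds a
# threshold, suggesting an automated script rather than a human.
from collections import Counter
from datetime import datetime

REQUESTS_PER_MINUTE_LIMIT = 120   # assumed threshold

def flag_suspicious(log_lines):
    """log_lines: iterable of 'ISO-8601-timestamp user-id action' records."""
    per_user_minute = Counter()
    for line in log_lines:
        timestamp, user_id, _action = line.split(maxsplit=2)
        minute = datetime.fromisoformat(timestamp).strftime("%Y-%m-%d %H:%M")
        per_user_minute[(user_id, minute)] += 1
    return {
        (user, minute): count
        for (user, minute), count in per_user_minute.items()
        if count > REQUESTS_PER_MINUTE_LIMIT
    }

# Example usage with a couple of fake records:
sample = [
    "2014-04-10T09:15:02 alice move_card",
    "2014-04-10T09:15:03 alice move_card",
]
print(flag_suspicious(sample))
```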
All of this was good housekeeping. It disrupted our normal product development for over a week as we took an “all hands on deck” approach, but it was well worth it.