What We Learn from the Heartbleed Bug - 2/3


2014-05-26, 14:01 Posted by: Makers of a secure future

Testing is Difficult 

In my previous post, I elaborated on the Heartbleed Bug. The flaw was created by a single programmer and tested (?) by a single tester. The open source promise of legions of volunteers doing security testing for free just wasn't true. Sure, the OpenSSL source code is infamous for its low quality, but shouldn't that just have encouraged more testers? 

The issue here is that testing is hard. It's work done in the shadow of the spotlight on the developers, whose work is often considered a bit more creative and challenging. Good testers need to be creative, patient, and have a great deal of "bloodhound instinct". 

And the bad news: no matter how great the testers are, they will still be sent off to test other stuff when the project budget runs out. It would be neat to have a machine continuing the testing work for years to come, wouldn't it? 

How Hard Is It to Find Security Bugs? 

That's for you to decide. Is it OK with you to find a couple of bugs and fix them? Or do you need to find them all?

Remember: if there are 200 security flaws in a system, then you need to find all of them if you want to really close the door on an intruder. IT security and traditional warfare are each other's true opposites. In IT security, defense is so much harder than attack. 

Testing in the IT Security Community vs. the Telecom Sector

"Dear client, you need some security tests for your new web site. We have this great consultant here, with all the certifications you can think of. He's really smart and very interested in his work. He will arrive at your location with a laptop equipped with a large number of software security tools (the same ones everybody else uses) and spend the agreed 80 hours on this project." 

If defense is so much harder than attack, how come the typical security test is smaller than the amount of work a hacker would be prepared to spend attacking a system? And what's the idea of using tools that give no exceptional benefit compared to the methods the hacker will use?

I had the pleasure of experiencing a wholly different attitude in a teleconference with a "top floor executive" a couple of weeks ago. We spoke about security tests on a large scale, and he stopped me in the middle of a sentence. 

- You're not going to offer me security consultants on a time/material basis, right? Tell me about the infrastructure you've got, the stuff you already have in place, ready for us to start working with immediately. 

First Conclusion from the Heartbleed Bug

His first question was about labs equipped with technology for fuzz testing. Fuzz testing means generating enormous amounts of erroneous data to test with. The data can be erroneous in itself (say... a negative number where a positive number is expected), or it can consist of violations of stateful mechanisms in the system being tested. All errors identified in the system are then investigated for security implications. 
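To make the idea concrete, here is a minimal sketch of mutation-based fuzzing in Python. This is a toy of my own construction, not anyone's actual tooling: the `parse_message` function and its length-field bug are invented for the example, but the bug is Heartbleed-flavored in that the parser trusts a declared length instead of checking it against the data actually received.

```python
import random

def parse_message(data: bytes) -> bytes:
    """Toy parser: the first byte declares the payload length.
    Bug (deliberate, Heartbleed-style): it trusts the declared
    length instead of validating it against the actual payload."""
    declared_len = data[0]
    payload = data[1:1 + declared_len]
    # If the declared length exceeds the real payload, a real
    # implementation might echo adjacent memory; here we raise
    # so the flaw is observable to the fuzzer.
    if len(payload) < declared_len:
        raise ValueError("read past end of buffer")
    return payload

def fuzz(rounds: int = 1000, seed: int = 0) -> list:
    """Dumb mutation fuzzer: flip one byte of a known-good
    message per round and record every input that makes the
    parser misbehave."""
    rng = random.Random(seed)
    valid = bytes([5]) + b"hello"  # declared length matches payload
    failures = []
    for _ in range(rounds):
        mutated = bytearray(valid)
        mutated[rng.randrange(len(mutated))] = rng.randrange(256)
        try:
            parse_message(bytes(mutated))
        except Exception:
            failures.append(bytes(mutated))
    return failures

if __name__ == "__main__":
    crashes = fuzz()
    print(f"found {len(crashes)} crashing inputs out of 1000")
```

Real fuzzing platforms add protocol awareness, coverage feedback, and stateful test sequences on top of this basic generate-mutate-observe loop, but the principle is the same: a machine produces far more hostile inputs than any manual tester could.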

The Heartbleed bug was discovered by a machine, using these methods. I'm happy to report that it was a Cybercom partner company, Codenomicon, that found this! 

The telecom sector asks for testing platforms operated by testers. They've already tried manual testing and they don't want to go back. When it comes to security tests, the logic is very much the same: 

  • the systems under test are extremely complex
  • it's hard to test them all
  • the testers do not always have access to all source code
  • the test projects need to get things done quickly
  • there is a need to continue testing for a long time (years?) 

If you haven't started fuzz-testing your applications, you'll just have to hope that no bad guys have done this testing before you. There might be a yet-unknown bug waiting for you, or for the hacker, in your systems.

But there is still one more lesson to learn from the Heartbleed bug. We now know that there are no free lunches and that nobody tests your software for free. We also know that it's plain wrong to disregard automated testing wherever it is in any way possible.

Sometimes You Still Need to Go Manual. Stay Smart When You do. 

Still, there will remain cases where we need to test manually. We need to think about how to change the rules so that IT defense isn't a fair game anymore. 

Let's spend the next blog post going through the logic of an unexpected performance indicator for reviewing proposals for a manual security test.

Make sure to remember - we desperately want to avoid ending up in a fair fight!

Stay tuned. 
