Monthly Archives: May 2015

The Importance of Security for QA Testers

Quality Assurance (QA) testing is a critical role for any application under development. The purpose: to identify flaws within the application that affect how it runs and the users who rely on it. Traditionally this has focused on identifying flaws that prevent application functions from performing as expected, meaning the end user is unable to complete his or her intended task.

Over the past decade there has been a growing focus on an area QA testing has largely missed: security flaws. What makes a security flaw different from the other flaws generally identified? Most security flaws, at least exploitable ones, center on the ability to make the application do something it was not intended to do. If the application is supposed to allow me to view my bank account and I can make it show me someone else’s bank account, that is a security flaw. The assumption here is that I shouldn’t be able to view another user’s account.

Typically in QA, a test would ensure that when I request my account, I can see it and the data returned is in fact mine. It does not check for authorization issues, such as what happens when I attempt to view another user’s account.
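A negative authorization test like this can be added alongside the usual positive checks. Here is a minimal sketch of the idea, using a toy in-memory service as a stand-in for the real application (the class, method names, and account IDs are all hypothetical, for illustration only):

```python
class AccountService:
    """Toy in-memory stand-in for the application under test."""

    def __init__(self):
        self._accounts = {
            "acct-1": {"owner": "alice", "balance": 100},
            "acct-2": {"owner": "bob", "balance": 250},
        }

    def get_account(self, requesting_user, account_id):
        account = self._accounts[account_id]
        # This is the authorization check QA should be probing:
        if account["owner"] != requesting_user:
            raise PermissionError("not authorized")
        return account


def test_cannot_view_another_users_account():
    service = AccountService()
    # Positive case (the traditional QA test): I can view my own account.
    assert service.get_account("alice", "acct-1")["balance"] == 100
    # Negative case (the security test): requesting someone else's
    # account must be denied, not silently returned.
    try:
        service.get_account("alice", "acct-2")
        assert False, "expected PermissionError"
    except PermissionError:
        pass
```

The important part is the negative case: the test asserts the request is rejected, rather than only confirming the happy path works.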

Now is the time for QA to add this type of testing to their current test cases. Recently released reports suggest the security field is suffering from a huge talent shortage. While there are many different types of entities that will test our applications for security flaws, the best positioned is our own QA team. Here are a few key reasons why QA is so important in security testing.

Proximity to Development
With the exception of the actual developers, no one is closer to the development phase than the QA team. You may have heard that a bug found in production costs far more to fix than one found in QA or development. There are multiple reasons for this, which we will cover in a different article. The key here is a near-immediate feedback loop for bugs sent back to development. Not only do we avoid losing the time of the application passing through multiple other phases of the lifecycle only to be sent back, but the developers will also adopt secure coding techniques much more quickly.

Application Knowledge
Many security flaws are based on the idea that we are able to make the application do something it was not intended to do. While some flaws, like injection flaws, don’t require any knowledge of the application’s functionality, authorization, authentication, and logic flaws do require an understanding of the application. QA should have intimate knowledge of how an application should and shouldn’t work. That understanding makes it much easier to distinguish a genuine flaw from correct functionality.

Bug Tracking
Most likely, QA already has some sort of bug tracking system. Let’s be clear: a bug is a bug is a bug. Whether it is a security bug, a simple logic flaw, or a typo in the UI, these bugs go through the same process. They get identified, logged, reported, analyzed, ranked/prioritized, and handled. It doesn’t make sense to maintain a separate system for managing bugs based on classification. If needed, add a flag marking an issue as security-related so a report can be generated for the infosec team for audit or review purposes.
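One way that flag could look, sketched with a hypothetical bug record rather than any particular tracker's schema: a single boolean field marks security issues, and a one-line filter produces the infosec report while everything else flows through the normal process.

```python
from dataclasses import dataclass


@dataclass
class Bug:
    """Hypothetical bug record; real trackers use a custom field or label."""
    bug_id: int
    title: str
    priority: str
    is_security: bool = False  # the only difference for security bugs


def security_report(bugs):
    """Pull only security-flagged bugs for an infosec audit or review."""
    return [b for b in bugs if b.is_security]


bugs = [
    Bug(1, "Typo on login page", "low"),
    Bug(2, "Search field accepts SQL injection", "high", is_security=True),
    Bug(3, "Report totals off by one", "medium"),
]
```

Calling `security_report(bugs)` here returns only bug 2; the other bugs remain in the same queue and follow the same identify/log/prioritize/handle workflow.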

We have an opportunity, as QA, to step up and take ownership of enhancing our testing of applications. Is it different from your current tests? Probably. Can we do it? You bet. Let’s start working on getting security built into our process instead of relying on a number of third parties to do it for us. As we have seen, that doesn’t work so well.

DoJ Releases Guide: Best Practices for Cyber Incident Response

Breaches and other security incidents happen all the time, and they can happen to anyone. Do you know what to do if an incident occurs in your backyard? The Department of Justice recently released the Best Practices for Victim Response and Reporting of Cyber Incidents to help you understand the process. Looking through the 15-page document, there are quite a few great points. Here are just a few examples of what is included; I encourage you to read the entire document, as this summary won’t do it justice. It covers four topics:

  • Before the Incident
  • Responding to the Intrusion
  • What Not to do Following an Incident
  • After an Incident

The document is broken down by topic, starting with what to do before an intrusion actually occurs. This step is often overlooked because we never think it will happen to us. Talk to anyone who performs incident response or forensics: while those tasks happen after an incident, what was done before the incident can be a game changer. Don’t forget to baseline your systems so you know what normal looks like.

It is good practice to identify what is important to your business, that is, what you need to protect. This is different for every company. The next step is to create an action and response plan in the event an incident occurs. Well-thought-out plans make dealing with an incident easier. Make sure you include non-technical resources in this planning: legal, human resources, public relations, and a wide array of other personnel within the company. When a breach occurs, there are a lot of moving parts to deal with.

Forming a relationship with law enforcement is also a good idea. It makes it easier to contact them in the event of an incident and you may feel more comfortable with the situation. The relationship may also lead to information ahead of time that could be useful to thwart an impending attack.

Once an incident occurs it is time to respond to it, the second topic covered by the report. This starts with an initial assessment of the situation: who is logged on, what systems are affected, etc. Once you have identified the affected systems, implement measures to minimize damage. This might include removing systems from the network, shutting them down, or segregating them. Once protected, collect information about the incident, which often requires imaging the affected systems. Note: if you are not sure how to do this, contact a professional. You do not want to risk damaging the evidence.

Once you have identified the affected systems and data, it is time to put the notification portion of your action plan into effect. Remember that it is more than just notifying customers. You need to understand which customers must be notified, but also which vendors, partners, and internal employees. Depending on the situation, law enforcement may also need to be notified.

I really like that the document covers what NOT to do following an incident. Professionals who don’t focus on IR and forensics tend not to think about actions that could cause problems for the investigation or for themselves. The affected systems should not be used for any communication unless absolutely necessary; they are most likely compromised, and you can’t expect any information on them to be safe at this point.

While hacking back is a debated topic these days, the recommendation is to avoid it. The CFAA and other computer crime laws are broad, and you don’t want to go from victim to defendant. Let the authorities deal with the issue.

Finally, once the incident is cleared and complete, stay vigilant. Don’t assume that once you get attacked it won’t happen again. Learn from what happened to help reduce the chances it will happen again.

My summary above only scratches the surface of the information provided in the document. It is nice to see a list of best practices that are not technical, which many people should be able to understand. Even if you are not part of incident response or dealing directly with cyber incidents, take a moment to read it; it might be helpful one day.