Monthly Archives: April 2015

Beware the Edge Cases: Master the Fundamentals

Have you seen some of the cool things that hackers can do? I remember a few years back when they created the BlueSniper Rifle, a device that could pick up a Bluetooth signal from up to a mile away. That is pretty impressive for a technology that was meant to be used at a distance of just a few meters. This is just one example of how hackers have found ways to push past the intended limits of a technology.

Some of these things you may have never heard of, whereas others are picked up by the media and become the latest and greatest buzzwords. You may hear terms like APT or "sophisticated" used to describe breaches that have occurred. The reality is that there is a lot of hype surrounding low-probability issues. Am I worried someone may try to intercept my Bluetooth traffic from a mile away? No. Could it happen? Possibly, but it is unlikely.

We need to make sure that we are focusing on the fundamentals that have been preached for so many years. If we can't get the basics right, attackers have no need to expend the effort on advanced attacks.

  • Do you currently have a patching policy that is implemented and a program that is functional?
  • Are your firewalls properly maintained and monitored?
  • Do you have secure coding policies and procedures, and are the developers held to them?
  • Is QA trained on security testing techniques to look for at least the low-hanging fruit?
  • Are your users trained on social engineering, and do they understand what to do when they believe they are the target of an attack?

While this is a small list, these items are often overlooked in companies. If you look at many of the breaches that have been in the news recently, we see far more social engineering attacks than we do attacks from a BlueSniper rifle. Most of the attacks we see are not sophisticated; they rely on weak controls or insecure configurations.

When maturing your security program, it is important to focus on the tasks that are most relevant to your business and situation. Many of the fundamentals cross industry boundaries, while some do not. As developers, we may have different secure coding policies based on the language/frameworks we use. For administrators, maybe it is our topology/technology in use that guides us. In either case, we need to keep our focus on building a base foundation before we start getting caught up in the glitz and glamour of some of these edge case scenarios.

It is cool to say you are protected from a Bluetooth sniper rifle a mile away, but not cool to fall victim because you left default passwords in place. The latter is more important in most cases.

These edge cases can be really cool and interesting, but take a moment to determine whether they really affect you and your situation before diving in head first to find a solution.

Static Analysis: Analyzing the Options

When it comes to automated testing for applications, there are two main types: dynamic and static.

  • Dynamic scanning analyzes the application in a running state. This method doesn’t have access to the source code or the binary itself, but it can see how things behave at runtime.
  • Static analysis looks at the source code or the binary output of the application. While this type of analysis doesn’t see the code as it is running, it can trace how data flows through the application down to the function level.

Dynamic scanning is an important component of any secure development workflow, but it only analyzes a system while it is running. Before the application is running, the focus shifts to the source code, which is where static analysis fits in. At this stage it is possible to identify many common vulnerabilities while integrating the scans into your build processes.
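To make this concrete, here is a small, hypothetical Python example of the kind of source-to-sink data flow a static analyzer traces. The code and names are made up for illustration: untrusted input reaches a query without being parameterized, which is exactly the pattern these tools are built to flag.

    import sqlite3

    def find_user(conn: sqlite3.Connection, username: str) -> list:
        # "username" is the taint source: it arrives from an untrusted request.
        # Concatenating it into the SQL string carries the taint to the sink
        # (execute), the flow a static analyzer would report as SQL injection.
        query = "SELECT id, email FROM users WHERE name = '" + username + "'"
        return conn.execute(query).fetchall()

    def find_user_safe(conn: sqlite3.Connection, username: str) -> list:
        # Parameterized query: the untrusted value never becomes part of the
        # SQL text, so it cannot change the structure of the statement.
        return conn.execute(
            "SELECT id, email FROM users WHERE name = ?", (username,)
        ).fetchall()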

If you are thinking about adding static analysis to your process, there are a few things to consider. Keep in mind that no single factor should be the decision maker. Budget, in-house experience, application type, and other factors combine to point you toward the right choice.

Disclaimer: I don’t endorse any products I talk about here. I do have direct experience with the ones I mention and that is why they are mentioned. I prefer not to speak to those products I have never used.

Budget

I hate to list this first, but honestly it is a pretty big factor in your implementation of static analysis. The options that exist for static analysis range from FREE to VERY EXPENSIVE. It is good to have an idea of what budget you have at hand to better understand which option may be right for you.

Free Tools

There are a few free tools out there that may work for your situation. Most of these tools are tied to a specific programming language, unlike many of the commercial tools that support most of the common languages. For .NET developers, CAT.NET is the first static analysis tool that comes to mind. The downside is that it has not been updated in a long time. While it may still help a little, it will not compare to many of the commercial tools that are available.

In the Ruby world, I have used Brakeman which worked fairly well. You may find you have to do a little fiddling to get it up and running properly, but if you are a Ruby developer then this may be a simple task.

Managed Services or In-House

Can you manage a scanner in-house or is this something better delegated to a third party that specializes in the technology?

This can be a difficult question because it may involve many facets of your development environment. Choosing to host the solution in-house, like HP’s Fortify SCA, may require a lot more internal knowledge than a managed solution. Do you have resources available that know the product or that can learn it? Given the right resources, in-house tools can be very beneficial. One of the biggest roadblocks to in-house solutions is cost: most of them are very expensive. Here are a few in-house benefits:

  • Ability to integrate directly into your Continuous Integration (CI) operations
  • Ability to customize the technology for your environment/workflow
  • Ability to create extensions to tune the results

Choosing to go with a managed solution works well for many companies. Whether it is because the development team is small, resources aren’t available, or the budget is limited, using a third party may be the right solution. There is always the question of whether you are comfortable sending your code to a third party, but many are willing to do so to get the solution they need. Many of the managed services have the additional benefit of reducing false positives in the results. This can be one of the most time-consuming parts of owning a static analysis tool, right up there with getting it set up and configured properly. Some scans may return tens of thousands of results. Weeding through all of those can be very time consuming and have a negative effect on the poor person stuck doing it. Having a company manage that portion can be very beneficial and cost effective.
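As a sketch of what that CI integration and result tuning might look like, here is a small Python gate that reads scanner output and fails the build when unsuppressed high-severity findings remain. The file names and findings format are invented for illustration; every scanner has its own output, so treat this as a pattern rather than anything tied to a specific product.

    import json
    import sys

    # Hypothetical inputs: findings is a list of objects with "rule", "file",
    # "line", and "severity"; suppressions is a list of (rule, file) pairs
    # that have been triaged as false positives.
    FINDINGS_FILE = "findings.json"
    SUPPRESSIONS_FILE = "suppressions.json"
    FAIL_SEVERITIES = {"high", "critical"}

    def load(path):
        with open(path) as fh:
            return json.load(fh)

    def main() -> int:
        findings = load(FINDINGS_FILE)
        suppressed = {tuple(s) for s in load(SUPPRESSIONS_FILE)}

        blocking = [
            f for f in findings
            if f["severity"].lower() in FAIL_SEVERITIES
            and (f["rule"], f["file"]) not in suppressed
        ]

        for f in blocking:
            print(f"{f['severity'].upper()}: {f['rule']} at {f['file']}:{f['line']}")

        # A non-zero exit code fails the CI stage when real issues remain.
        return 1 if blocking else 0

    if __name__ == "__main__":
        sys.exit(main())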

Conclusion

Picking the right static analysis solution is important, but can be difficult. Take the time to determine what your end goal is when implementing static analysis. Are you looking for something that is good, but not customizable to your environment, or something that is highly extensible and integrated closely with your workflow? Unfortunately, sometimes our budget may limit what we can do, but we have to start someplace. Take the time to talk to other people that have used the solutions you are looking at. Has their experience been good? What did/do they like? What don’t they like? Remember that static analysis is not the complete solution, but rather a component of a solution. Dropping this into your workflow won’t make you secure, but it will help decrease the attack surface area if implemented properly.

This article was also posted at https://www.jardinesoftware.net

The Importance of Baselines

To understand what is abnormal, we must first understand what is normal. All too often we overlook the basic first step of understanding and recording our baselines. Whether it is for network traffic, data input, or binary sizes, it is imperative that we understand what is normal. Once we have an understanding of what normal is, it becomes easier to start identifying abnormalities that may be of concern.

Related podcast: Ep. 24: The Importance of Baselines

Take a moment to think about how we determine whether our body is healthy or not. Of course, healthy can be relative. In general, we have some baselines. We know that normal body temperature is 98.6 degrees, with a slight deviation. We have ranges for blood pressure, cholesterol, blood sugar, etc. With the body there are usually “normal” ranges for many of these values. This is true for our information systems as well.

What is the average size of a 302 redirect from a web server: 1 kilobyte, 100 kilobytes, 1 megabyte? Let’s say that it is less than 1 KB. That makes it easier to recognize that if you have 302 redirects that are 500 KB, something may be going on and an investigation is in order. While this doesn’t always mean there is a problem, it is the initial event that prompts you to look at the situation and determine whether something is going on.
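As a rough sketch, a check like this can hang off your web server logs. The 1 KB baseline, the deviation factor, and the example entries are assumptions for illustration.

    # Flag 302 responses whose size is far outside the recorded baseline.
    BASELINE_302_BYTES = 1_024   # assumed "normal" redirect response, ~1 KB
    DEVIATION_FACTOR = 10        # flag anything ten times the baseline

    def check_redirect(status: int, size_bytes: int, url: str) -> None:
        if status == 302 and size_bytes > BASELINE_302_BYTES * DEVIATION_FACTOR:
            print(f"ALERT: 302 from {url} was {size_bytes} bytes "
                  f"(baseline ~{BASELINE_302_BYTES})")

    # Example log entries: (status, response size in bytes, URL)
    for entry in [(302, 700, "/login"), (302, 512_000, "/account")]:
        check_redirect(*entry)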

Having a baseline of the size of applications that are installed on your system may also help identify whether an application binary has been modified. Maybe a malicious application has been placed on the system that replaces calc.exe but is 2 MB larger than the original. It may be that this was just a software update, but it could also mean it is an impostor.
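A minimal sketch of that idea in Python: record the size and a hash of known-good binaries as the baseline, then compare later. Which binaries you track is up to you; everything here is illustrative.

    import hashlib
    import os

    def sha256_of(path: str) -> str:
        with open(path, "rb") as fh:
            return hashlib.sha256(fh.read()).hexdigest()

    def snapshot(paths: list) -> dict:
        # Record size and SHA-256 for each binary while the system is known-good.
        return {p: (os.path.getsize(p), sha256_of(p)) for p in paths}

    def compare(baseline: dict, paths: list) -> None:
        # Flag binaries whose size or hash no longer matches the baseline.
        for p in paths:
            current = (os.path.getsize(p), sha256_of(p))
            if current != baseline.get(p):
                print(f"ALERT: {p} differs from baseline ({current[0]} bytes now)")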

These same questions apply to network traffic as well. Understanding the types and amount of traffic that generally pass through the network is critical when it comes to identifying an attack. It is not enough to say that a spike in traffic at any given time is a potential concern; a legitimate event may have been happening. Imagine that your backups run between 3 and 5 a.m. every morning and the network sees a spike in traffic. If you didn’t usually watch the traffic and saw that spike one day, you might be seriously concerned. However, if you understand the traffic patterns, it may turn out to be an ordinary event.
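Here is a minimal sketch of that idea: compare the current hour’s traffic against history for the same hour of day, so the nightly backup spike is part of the baseline rather than an alert. The numbers and the three-sigma threshold are illustrative assumptions.

    from statistics import mean, stdev

    # history[hour] holds byte counts observed at that hour on prior days.
    def is_abnormal(history: dict, hour: int, observed: int, sigmas: float = 3.0) -> bool:
        samples = history.get(hour, [])
        if len(samples) < 2:
            return False  # not enough data yet to call anything abnormal
        avg, sd = mean(samples), stdev(samples)
        return abs(observed - avg) > sigmas * max(sd, 1)

    # Backups run between 3 and 5 a.m., so heavy traffic then is in the baseline.
    history = {3: [900_000_000, 950_000_000, 880_000_000],
               14: [120_000_000, 110_000_000, 130_000_000]}
    print(is_abnormal(history, 3, 940_000_000))   # False: matches the backup window
    print(is_abnormal(history, 14, 900_000_000))  # True: unexpected afternoon spike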

Once you understand these baselines, it is possible to start creating events for things that are now abnormal. There is no guarantee that these events are malicious or of concern, but they are the starting point for what you are going to investigate. With so many things going on in our applications and networks, these baselines turn out to be critical for securing our systems.

The truth is, creating these baselines is going to be time consuming. Obviously a lot of that depends on your systems and their complexity. The time will be required, but it is necessary to be able to detect many security-related events. The good news is that you don’t need a security group to do this. The network administrators or engineers can do most of it, since it is the lifelines of their networks that you will be measuring. The application developers and QA can certainly understand what is normal for the application. It doesn’t have to be a complex task. Start out small: use a spreadsheet or some other collaborative solution to record these values. Of course, that isn’t easy to trigger alerts off of, but it can be an initial first step. Once that matures, looking at solutions that identify these abnormalities and trigger events automatically becomes imperative.
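Even a simple shared file can feed basic alerting once the values exist. Here is a small sketch that reads a CSV of baseline ranges (the file name and columns are invented for illustration) and flags observations that fall outside them.

    import csv

    # Assumed CSV columns: metric,normal_low,normal_high
    def check_against_baselines(csv_path: str, observations: dict) -> None:
        with open(csv_path, newline="") as fh:
            for row in csv.DictReader(fh):
                low, high = float(row["normal_low"]), float(row["normal_high"])
                value = observations.get(row["metric"])
                if value is not None and not (low <= value <= high):
                    print(f"ALERT: {row['metric']}={value} outside baseline {low}-{high}")

    # Example: current values pulled from logs or monitoring
    check_against_baselines("baselines.csv",
                            {"302_response_kb": 500, "backup_traffic_gb": 0.9})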