DevelopSec

April 28, 2015 by James Jardine

Beware the Edge Cases: Master the Fundamentals

Have you seen some of the cool things that hackers can do? I remember a few years back when the BlueSniper rifle was created, a device that could pick up a Bluetooth signal from up to a mile away. That is pretty impressive for a technology that was meant to be used at a distance of just a few meters. It is just one example of how hackers have found ways to bypass the limits of a technology.

Some of these things you may have never heard of, whereas some are picked up by the media and become the latest and greatest buzzwords. You may hear terms like APT or “sophisticated” in reports of breaches that have occurred. The reality is that there is a lot of hype surrounding low-probability issues. Am I worried someone may try to intercept my Bluetooth traffic from a mile away? No. Could it happen? Possibly, but it is unlikely.

We need to make sure that we are focusing on the fundamentals that have been preached for so many years. If we can’t get the basics right, attackers have no need to expend the effort on advanced attacks.

  • Do you currently have a patching policy that is implemented and a program that is functional?
  • Are your firewalls properly maintained and monitored?
  • Do you have secure coding policies and procedures, and are the developers held to them?
  • Is QA trained on security testing techniques to look for at least the low-hanging fruit?
  • Are your users trained on social engineering, and do they understand what to do when they feel they are the target of an attack?

While this is a small list, these items are often overlooked in companies. If you look at many of the breaches that have been in the news recently, we see far more social engineering attacks than attacks from a BlueSniper rifle. Most of the attacks we see are not sophisticated and rely on weak controls or insecure configurations.

When maturing your security program, it is important to focus on the tasks that are most relevant to your business and situation. Many of the fundamentals cross industry boundaries, while some do not. As developers, we may have different secure coding policies based on the language/frameworks we use. For administrators, maybe it is our topology/technology in use that guides us. In either case, we need to keep our focus on building a base foundation before we start getting caught up in the glitz and glamour of some of these edge case scenarios.

It is cool to say you are protected from a Bluetooth sniper rifle a mile away, but not cool to fall victim because you still have default passwords in place. The latter is more important in most cases.

These edge cases can be really cool and interesting, but take a moment to determine whether one really affects you and your situation before diving in head first to find a solution.

Filed Under: General Tagged With: administrators, attacks, bluesniper, developer, developer security, edge cases, hackers, outliers, security

April 17, 2015 by James Jardine

Static Analysis: Analyzing the Options

When it comes to automated testing for applications, there are two main types: dynamic and static.

  • Dynamic scanning is where the scanner is analyzing the application in a running state. This method doesn’t have access to the source code or the binary itself, but is able to see how things function during runtime.
  • Static analysis is where the scanner is looking at the source code or the binary output of the application. While this type of analysis doesn’t see the code as it is running, it has the ability to trace how data flows through the application down to the function level.

Dynamic scanning, an important component of any secure development workflow, analyzes a system as it is running. Before the application is running, the focus shifts to the source code, which is where static analysis fits in. At this stage it is possible to identify many common vulnerabilities while integrating into your build processes.

If you are thinking about adding static analysis to your process, there are a few things to think about. Keep in mind that no single factor should be the decision maker; budget, in-house experience, application type, and other factors combine to determine the right choice.

Disclaimer: I don’t endorse any products I talk about here. I do have direct experience with the ones I mention and that is why they are mentioned. I prefer not to speak to those products I have never used.

Budget

I hate to list this first, but honestly it is a pretty big factor in your implementation of static analysis. The options that exist for static analysis range from FREE to VERY EXPENSIVE. It is good to have an idea of what type of budget you have at hand to better understand which options may be right.

Free Tools

There are a few free tools out there that may work for your situation. Most of these tools depend on the programming language you use, unlike many of the commercial tools that support many of the common languages. For .Net developers, CAT.Net is the first static analysis tool that comes to mind. The downside is that it has not been updated in a long time. While it may still help a little, it will not compare to many of the commercial tools that are available.

In the Ruby world, I have used Brakeman, which worked fairly well. You may find you have to do a little fiddling to get it up and running properly, but if you are a Ruby developer then this may be a simple task.
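
For example, here is a minimal sketch of wrapping Brakeman in a build step, assuming the brakeman gem is installed and on the PATH. The flags and JSON fields reflect Brakeman’s documented output but may vary by version, and the application path is a placeholder:

    import json
    import subprocess
    import sys

    # -f json asks Brakeman for machine-readable output; -q suppresses the banner.
    result = subprocess.run(
        ["brakeman", "-f", "json", "-q", "/path/to/rails/app"],
        capture_output=True, text=True
    )

    report = json.loads(result.stdout)
    high = [w for w in report.get("warnings", [])
            if w.get("confidence") == "High"]

    for warning in high:
        print(f"{warning.get('warning_type')}: {warning.get('message')}")

    # Fail the build if any high-confidence findings exist.
    sys.exit(1 if high else 0)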

Managed Services or In-House

Can you manage a scanner in-house or is this something better delegated to a third party that specializes in the technology?

This can be a difficult question because it may involve many facets of your development environment. Choosing to host the solution in-house, like HP’s Fortify SCA, may require a lot more internal knowledge than a managed solution. Do you have the resources available that know the product or that can learn it? Given the right resources, in-house tools can be very beneficial. One of the biggest roadblocks to in-house solutions is the cost: most of them are very expensive. Here are a few in-house benefits:

  • Ability to integrate directly into your Continuous Integration (CI) operations
  • Ability to customize the technology for your environment/workflow
  • Ability to create extensions to tune the results

Choosing to go with a managed solution works well for many companies. Whether it is because the development team is small, resources aren’t available, or budget is limited, using a third party may be the right solution. There is always the question of whether you are OK with sending your code to a third party, but many are OK with this to get the solution they need. Many of the managed services have the additional benefit of reducing false positives in the results. This can be one of the most time-consuming pieces of a static analysis tool, right there with getting it set up and configured properly. Some scans may return upwards of tens of thousands of results. Weeding through all of those can be very time consuming and have a negative effect on the poor person stuck doing it. Having a company manage that portion can be very beneficial and cost effective.

Conclusion

Picking the right static analysis solution is important, but can be difficult. Take the time to determine what your end goal is when implementing static analysis. Are you looking for something that is good, but not customizable to your environment, or something that is highly extensible and integrated closely with your workflow? Unfortunately, sometimes our budget may limit what we can do, but we have to start someplace. Take the time to talk to other people that have used the solutions you are looking at. Has their experience been good? What did/do they like? What don’t they like? Remember that static analysis is not the complete solution, but rather a component of a solution. Dropping this into your workflow won’t make you secure, but it will help decrease the attack surface area if implemented properly.

This article was also posted at https://www.jardinesoftware.net

Filed Under: General Tagged With: developer awareness, developer security, qa, qa awareness, qa test, quality assurance, security testing, static analysis, testing

April 2, 2015 by James Jardine

The Importance of Baselines

To understand what is abnormal, we must first understand what is normal. All too often we have overlooked the basic first step of understanding and recording our baselines. Whether it is for network traffic, data input, or binary sizes it is imperative we understand what is normal. Once we have an understanding of what normal is it becomes easier to start identifying abnormalities that can be of concern.

Related podcast: Ep. 24: The Importance of Baselines

Take a moment to think about how we determine if our body is healthy or not. Of course, healthy can be relative. In general, we have some baselines. We know that the normal body temperature is 98.6 degrees, with a slight deviation. We have ranges for good blood pressure, cholesterol, blood sugar, etc. With the body there are usually “normal” ranges for many of these values. This is true for our information systems as well.

What is the average size of a 302 redirect from a web server: 1 kilobyte, 100 kilobytes, 1 megabyte? Let’s say that it is less than 1 KB. This makes it easier to understand that if you have 302 redirects that are 500 KB, then something may be going on and an investigation is in order. While this doesn’t always mean there is a problem, it is that initial event that prompts a look at the situation to determine if something is going on.
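
As a rough illustration, here is a minimal sketch of flagging abnormal 302 sizes from parsed access-log data. The log entries and the order-of-magnitude threshold are made-up examples; pick values that match your own baseline:

    import statistics

    # (status_code, response_bytes) pairs pulled from your access logs.
    observed = [(302, 412), (302, 389), (200, 18230), (302, 398), (302, 512000)]

    redirect_sizes = [size for status, size in observed if status == 302]
    baseline = statistics.median(redirect_sizes)

    # Flag anything an order of magnitude above the established baseline.
    for status, size in observed:
        if status == 302 and size > baseline * 10:
            print(f"Abnormal 302 response: {size} bytes (baseline ~{baseline:.0f})")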

Having a baseline of the size of applications that are installed on your system may also help identify if an application binary has been modified. Maybe a malicious application has been placed on the system that replaces calc.exe but is 2MB larger than the original one. It may be possible that this was just a software update, but could also mean it is an imposter.
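
One way to record such a baseline is a size plus a hash for each binary of interest. The sketch below uses Python’s standard library; the path and the stored known-good values are placeholders:

    import hashlib
    import os

    def fingerprint(path):
        """Return (size_in_bytes, sha256_hex) for a file."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return os.path.getsize(path), digest.hexdigest()

    # Recorded once on a known-good system and stored somewhere tamper-resistant.
    baseline = {
        r"C:\Windows\System32\calc.exe": (776192, "known-good-sha256-hex"),
    }

    for path, (known_size, known_sha) in baseline.items():
        size, sha = fingerprint(path)
        if (size, sha) != (known_size, known_sha):
            print(f"{path} changed: {known_size} -> {size} bytes")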

These same questions apply to network traffic as well. Understanding the types of traffic and amount of traffic that generally pass through the network is critical when it comes to identifying an attack. It is not enough to just say a spike in traffic at any given time is a potential concern. It may be possible a legitimate event was happening. Imagine if your backups ran between 3 and 5am every morning and the network saw a spike in traffic. If you didn’t usually watch the traffic and saw that spike one day you might have serious concern. However, if you understand the traffic patterns it may turn out to be an ordinary event.

Once you understand these baselines, it is possible to start creating events for things that are now abnormal. There is no guarantee that these events are malicious or of concern, but they are the starting point for what you are going to investigate. With so many things going on in our applications and networks, these baselines turn out to be critical for securing our systems.

The truth is, creating these baselines is going to be time consuming. Obviously a lot of that depends on your systems and their complexity. The time will be required, but it is necessary for being able to detect many security related events. The good news is that you don’t need a security group to do this. The network administrators or engineers can do most of it, since it is the lifelines of their networks that you will be measuring. The application developers and QA can certainly understand what is normal for the application. It doesn’t have to be a complex task. Start out small: use a spreadsheet or some other collaborative solution to record these values. Of course, that isn’t easy to trigger alerts off of, but it can be an initial first step. Once that matures, looking at solutions to identify these abnormalities and trigger events becomes imperative.

Filed Under: General Tagged With: baselines, developer awareness, developer security, network, network security, qa, qa awareness, qa testing, security, security testing

March 27, 2015 by James Jardine

Amazon XSS: Thoughts and Takeaways

It was recently identified that one of Amazon’s sites was vulnerable to cross-site scripting, and Amazon was quick (two days) to fix it. Cross-site scripting is a vulnerability that allows an attacker to control the output in the user’s browser. A more detailed look into cross-site scripting can be found on the OWASP site.

Take-Aways


  • QA could have found this
  • Understand your input validation routines
  • Check to make sure the proper output encoding is in place in every location user supplied data is sent to the browser

Vulnerabilities like the one listed above are simple to detect. In fact, many can be detected by automated scanners. Unfortunately, we cannot rely on automated scanners to find every vulnerability. Automated scanning is a great first step in identifying flaws like cross-site scripting. It is just as important for developers and QA analysts to be looking for these types of bugs. When we break it down, a cross-site scripting flaw is just a bug. It may be classified under “security” but nonetheless it is a bug that affects the quality of the application.

We want to encourage developers and QA to start looking for these types of bugs to increase the quality of their applications. Quality is more than just whether the app works as expected. If the application has a bug that allows an attacker to send malicious code to another user of the application, that is still a quality issue.

If you are a developer, take a moment to think about what output you send to the client and whether you are properly encoding that data. It is not as simple as just encoding the less-than or greater-than characters. Context matters. Look for the delimiters and control characters that are relevant to where the output is going to determine the best course of action. It is also a good idea to standardize the delimiters you use for things like HTML attributes. Don’t use double quotes in some places, single quotes in others, and then nothing in the rest. Pick one (double or single quotes) and stick to it everywhere.
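
As a simple illustration, here is a sketch using Python’s standard library encoder. It is not a complete output encoding strategy, only a reminder that the attribute context needs quote characters encoded as well:

    from html import escape

    user_input = '" onmouseover="alert(1)'

    # Body context: the angle brackets and ampersand are the critical characters.
    body_safe = escape(user_input, quote=False)

    # Attribute context: quote characters must be encoded too, and the attribute
    # should always be delimited with the same quote style (double quotes here).
    attr_safe = escape(user_input, quote=True)
    print(f'<input type="text" value="{attr_safe}">')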

If you are a QA analyst, understand what input is accepted by the application and where that data is later used as output. The first step is testing what data you can send to the server. Has there been any input validation put in place? Input validation should be implemented in a way that limits the types and size of data in most of the fields. The next step is to verify that any special characters are being encoded when they are returned back down to the browser. These are simple steps that can be performed by anyone. You could also start scripting these tests to make it easier in the future, as in the sketch below.
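
A minimal sketch of such a scripted check, using only Python’s standard library. The URL and field name are placeholders for your own application; the idea is simply to confirm a marker payload never comes back unencoded:

    import urllib.parse
    import urllib.request

    payload = '<script>alert("xss-probe")</script>'
    data = urllib.parse.urlencode({"first_name": payload}).encode()

    # POST the payload to the form handler, then inspect the response body.
    with urllib.request.urlopen("http://localhost:8080/profile", data=data) as resp:
        body = resp.read().decode()

    if payload in body:
        print("FAIL: payload reflected without encoding")
    else:
        print("PASS: payload not reflected verbatim")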

It is our (dev, QA, BA, application owners) responsibility to create quality applications. Adding these types of checks does not add a lot of time to the cycle, and the more you do it, the less you will find, allowing you to shorten the testing timelines. Bugs are everywhere, so be careful and test often.

Filed Under: Take-Aways Tagged With: cross-site scripting, developer, developer awareness, qa, qa awareness, quality assurance, security, security awareness, security testing, security training, xss

March 17, 2015 by James Jardine

Is HTTP being left behind for HTTPS?

A few years ago, a Firefox plugin called FireSheep was created. This tool was designed to sniff network traffic looking for common websites that were being visited over HTTP. HTTP sends the traffic between your system and the server in clear text. If it found a request/response of an authenticated user, it would capture the session cookie and allow the user of FireSheep to hijack the current session. While the site most likely performed the initial authentication with the username and password over an encrypted channel, such as HTTPS, it then degraded to HTTP for the rest of the site visits. The premise was that the credentials were protected, but the flaw in that approach is that the session cookie used to represent an authenticated user also needs to be protected. In this case, it was not.

It is becoming more popular for sites to support HTTPS (the encrypted transport channel) all of the time. Many sites like Facebook, Google, LinkedIn, Twitter, etc. started making this available as an option after the release of FireSheep. If your site uses any type of authentication, it is recommended to only use an encrypted channel (HTTPS) for communication.
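
As an illustration of the FireSheep-era fixes, here is a minimal sketch in a Flask application (Flask is an assumption for the example; other frameworks expose equivalent settings): mark the session cookie Secure so it never travels over plain HTTP, and send an HSTS header so browsers stop downgrading:

    from flask import Flask

    app = Flask(__name__)
    app.config["SESSION_COOKIE_SECURE"] = True    # cookie only sent over HTTPS
    app.config["SESSION_COOKIE_HTTPONLY"] = True  # not readable from JavaScript

    @app.after_request
    def add_hsts(response):
        # Ask browsers to use HTTPS for the next year, subdomains included.
        response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
        return response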

What if your site doesn’t use authentication?  What if it is just your company’s marketing website?   What if your site just provides information to people but there are no passwords, sensitive information, or sessions to protect?   Should you still switch to HTTPS?

This is a debate that is starting to grow in the information security world. With concerns of government or other entities snooping on traffic, many suggest dropping HTTP and only supporting HTTPS. There is also the concern that your site, if using HTTP, could allow an attacker to intercept and modify your responses to directly attack your system. While not attacks, we have seen ISPs deliver ads or other content by inserting it into responses. If an attacker can do this, it is possible for an attack payload to be sent and your system compromised. If you are on a corporate network, this is the first step in attacking the internal network from the outside.

On the flip side, we don’t see these types of attacks very widespread, and many people are not worried about any type of snooping. They just want to get their information. So does it make sense to just go ahead and drop HTTP and go for the gusto? Pinterest just joined the ranks of going all HTTPS (https://threatpost.com/https-opens-door-to-paid-pinterest-bug-bounty/111687) as an improvement to their security. Does it make sense for you?

There are some things to think about with an implementation of HTTPS. Of course, there is a monetary piece: you have to buy a certificate for your domain so that HTTPS will work, and depending on the vendor, these certs vary in price. The other big concern I often hear is about performance, that HTTPS will slow the site down and users will be unhappy. Recent advances in SSL and TLS have pretty much negated this issue. If a site like Facebook can implement it, there is a good chance you can as well. If you are serving up external content, there may be some hurdles, as browsers may get upset when trying to display mixed content: that is, content from both HTTP and HTTPS.

I am not sure if it is going mainstream for all sites yet, or just for the sites that have sensitive transactions, but there is a chance it could make the switch. Another aspect is search rankings. Google has stated that it will rank HTTPS sites higher (http://www.zdnet.com/article/google-confirms-its-giving-https-sites-higher-search-rankings/). Is this the push that is needed? Is it enough to move everyone to HTTPS?

Filed Under: General Tagged With: awareness, developer, encrypted transport, google, http, https, search engine, search results, site, technology

March 13, 2015 by James Jardine

Input Validation: Keep It Simple

Attackers take advantage of an application by manipulating its inputs: for example, a first name field, or even a request header like the User-Agent. Applications wouldn’t be very useful if they didn’t accept any input from the end user. Unfortunately, input is the key attack vector. One of the basic techniques used to help protect a system is input validation, which assesses the input to determine if it should be accepted.

Many development groups have fought with the concept of input validation in regards to how much should be done. Get too extreme with it and you may never implement it at all. It is important to understand what your goals are for the input validation, goals that should align with what the application does. Make the validation too complex and it is too easy to introduce errors. Start out with a simple plan; don’t go for the gusto out of the starting gate. Here are some tips for starting out with input validation.

Type
One of the easiest checks to implement is checking the data type of the input field. If you are expecting a date data type and the input received is not a valid date then it is invalid and should be rejected. Numbers are also fairly simple to validate to ensure they match the type expected. Many of the most popular frameworks provide methods to determine if a string matches a specific data type. Use these features to validate this type of data. Unfortunately, when the data type is a free form string then the check isn’t as useful. This is why we have other checks that we will perform.
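
A minimal sketch of type checks using Python’s standard library (the field formats are illustrative):

    from datetime import datetime

    def is_valid_date(value, fmt="%Y-%m-%d"):
        try:
            datetime.strptime(value, fmt)
            return True
        except ValueError:
            return False

    def is_valid_int(value):
        try:
            int(value)
            return True
        except ValueError:
            return False

    print(is_valid_date("2015-03-13"))  # True
    print(is_valid_date("not a date"))  # False
    print(is_valid_int("42"))           # True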

Range
You have determined that the input is of the correct type, say a number or a date. Now it is time to validate that it falls within the correct range of acceptable values. Imagine that you have an age field on a form. Depending on your business cases, let’s say that a valid age is 0-150. It is a simple step to ensure that the value entered falls into this valid range. Another example is a date field. What if the date represents a signed date and can’t be in the future? In this case, it is important to ensure that the value entered is not later than the current day.
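
A sketch of the two range checks just described, using the 0-150 age rule from the example:

    from datetime import date

    def is_valid_age(age):
        # Business rule from the example above: 0 through 150 inclusive.
        return 0 <= age <= 150

    def is_valid_signed_date(signed):
        # A signed date cannot be in the future.
        return signed <= date.today()

    print(is_valid_age(37))                        # True
    print(is_valid_age(200))                       # False
    print(is_valid_signed_date(date(2015, 1, 5)))  # True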

Length
This is really more for the free form text fields, to validate that the input doesn’t exceed the expected length for the field. In many cases, this lines up with the database or back end storage. For example, in SQL Server you may specify a first name field that is 100 characters, or a state field that only allows two characters for the abbreviation. Limiting the length of the input can significantly inhibit some of the attacks that are out there. Two characters can be tough to exploit, though it is still possible depending on the application design.
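
A sketch of a length check mirroring those storage limits (the 100-character and two-character limits come from the example above):

    def is_valid_length(value, max_length, min_length=1):
        return min_length <= len(value) <= max_length

    print(is_valid_length("FL", 2))       # True
    print(is_valid_length("Florida", 2))  # False
    print(is_valid_length("James", 100))  # True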

White Lists
White lists can be a very effective control during input validation. Depending on the type of data that is being accepted it may be possible to indicate that only alphabetical characters are acceptable. In other cases you can open the white list up to other special characters but then deny everything else. It is important to remember that even white lists can allow malicious input depending on the values required to be accepted.
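
A sketch of two white lists: one strictly alphabetical, one opened up to a few specials. The character sets are illustrative; choose them per field:

    import re

    ALPHA_ONLY = re.compile(r"^[A-Za-z]+$")
    NAME_CHARS = re.compile(r"^[A-Za-z '-]+$")  # letters, space, apostrophe, hyphen

    print(bool(ALPHA_ONLY.match("Jardine")))   # True
    print(bool(NAME_CHARS.match("O'Brien")))   # True
    print(bool(NAME_CHARS.match("<script>")))  # False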

Black Lists
Black lists are another control that can be used when there are specifically known bad characters that shouldn’t be allowed. Like white lists, this doesn’t mean that attack payloads can’t get through, but it should make them more difficult.
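
A sketch of a black list check for a handful of known-bad characters. As noted, payloads can still get through; the set here is illustrative:

    BLACKLISTED = set("<>\"';")

    def contains_blacklisted(value):
        return any(ch in BLACKLISTED for ch in value)

    print(contains_blacklisted("hello world"))  # False
    print(contains_blacklisted("<img src=x>"))  # True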

Regular Expressions
Regular expressions help with white and black lists as well as pattern matching. With the former, it is just a matter of defining what the acceptable characters are. Pattern matching is a little different and really aligns more with the range item discussed earlier. Pattern matching is great for free form fields that have a specific pattern, such as a social security number or an email address. These fields have an exact format that can be matched, allowing you to determine whether the input is right or not.
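
A sketch of pattern matching for the social security number example; the pattern is deliberately simplified (real SSN rules, and especially email addresses, are more involved):

    import re

    SSN_PATTERN = re.compile(r"^\d{3}-\d{2}-\d{4}$")

    print(bool(SSN_PATTERN.match("123-45-6789")))  # True
    print(bool(SSN_PATTERN.match("123456789")))    # False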

Test Your Validation
Make sure that you test the validation routines to ensure they are working as expected. If the field shouldn’t allow negative numbers, make sure that it rejects negative numbers. If the field should only allow email addresses then ensure that it rejects any pattern that doesn’t match a valid email address.
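
A sketch of such tests with the standard unittest module, reusing the hypothetical validators from the earlier sketches:

    import unittest

    # Hypothetical import: wherever the validators sketched above actually live.
    from validation import is_valid_age, is_valid_date

    class ValidationTests(unittest.TestCase):
        def test_rejects_negative_age(self):
            self.assertFalse(is_valid_age(-1))

        def test_accepts_valid_age(self):
            self.assertTrue(is_valid_age(42))

        def test_rejects_malformed_date(self):
            self.assertFalse(is_valid_date("13/45/2015"))

    if __name__ == "__main__":
        unittest.main()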

Validate Server Side
Maybe this should have been first? Client side validation is a great feature for the end user: they get immediate feedback about what is wrong with their data without a round trip to the server and back. THAT IS EASILY BYPASSED!! It is imperative that the validation is also checked on the server. It is too easy to bypass client side validation, so this must be done at the server, where the user cannot intercept and bypass it.

Don’t Try To Solve All Problems
Don’t try to solve all output issues with input validation. It is common for someone to try to block cross-site scripting with input validation; it is what WAFs do, right? But is that the goal? What if that input isn’t even sent to a browser? Are you also going to try to block SQL Injection, LDAP Injection, Command Injection, XML Injection, Parameter Manipulation, etc. with input validation? Now we are getting back to an overly complex solution when there are other solutions for these issues. These types of items shouldn’t be ignored, and that is why we have the regular expressions and white lists to help decrease the chance that these payloads make it into the system. We also have output encoding and parameterized queries that help with these additional issues. Don’t spend all of your time focusing on input validation and forget about where this data is going and protecting it there.

Input validation is only half of the solution; the other half is implemented when the data is transmitted from the application to another system (i.e. database, web browser, command shell). Just like we don’t want to rely solely on output encoding, we don’t want to rely solely on input validation. Assess the application and its needs to determine what solution is going to provide the best results. If you are not doing any input validation, start as soon as possible. It is your first line of defense. Start small with the type/range/length checks. When those are working as expected, then you can start working into the more advanced input validation techniques. Make sure the output encoding routines are also in place.
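
For the database half of that picture, here is a minimal sketch of a parameterized query using Python’s built-in sqlite3 module; the driver binds user input as data so it is never parsed as SQL:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (first_name TEXT)")
    conn.execute("INSERT INTO users VALUES ('James')")

    user_input = "James' OR '1'='1"  # would break a string-concatenated query

    # The ? placeholder keeps the input out of the SQL grammar entirely.
    rows = conn.execute(
        "SELECT first_name FROM users WHERE first_name = ?", (user_input,)
    ).fetchall()
    print(rows)  # [] -- the injection attempt matches nothing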

Don’t overwhelm yourself with validating every possible attack. This is one cog in the system. There should also be other controls like monitoring and auditing that catch things as well. Implementing something for input validation is better than nothing, but don’t let it be a false sense of security. Just because you check the length of a string and limit it to 25 characters, don’t think a payload can’t be sent and exploited. It is, however, raising the bar (albeit just a little).

Take the time to assess what you are doing for input validation and work with the business to determine what valid inputs are for the different fields. You can’t do fine-grained validation if you don’t understand the data being received. Don’t forget that data comes in many different forms or encodings. Convert all data to a specific encoding before you perform any validation on it.

Filed Under: General Tagged With: developer, developer awareness, input validation, qa, sdlc, secure coding, secure development, security, security testing

