
June 1, 2015 by James Jardine

Business Analysts and Product Managers: Security Roles

In a previous post I discussed how QA plays a critical role in the security of an application. As with QA and developers, the business analysts and product managers are also crucial to a successful security development lifecycle. Not to add any pressure, but these are the two roles that feed the security requirements to the other groups.

When designing an application, the focus is usually placed on ensuring that the end-user functionality, the functionality that solves a specific problem, works as expected. To use a simple banking application as an example, a customer may need to view their account online or on a mobile device, transfer money between accounts, or perform other banking functions. The business analyst's job is to identify this needed functionality and define how it should work.

A lot of what we do in security involves looking deeper into how the application “should” work. It is more than just ensuring that when I pull up my account I see my account info. If we dig a little deeper, a question may be, “What happens if I attempt to view another user’s account?” By going a little further with our questioning we can start to flesh out the details of how we expect the system to react in these scenarios.

One of the biggest issues in security is the ability to view other users’ information. We have seen this in many breaches where modifying a simple query string value allowed viewing the private details of another user. We can create a design requirement that states: when trying to view an account you are not authorized to see, the application must return an HTTP status code of 403 (Forbidden). This requirement helps ensure that developers are thinking about this during the development phase. It also gives the QA testers another test case to cover. Of course, the example requirement may raise other concerns if there are issues with harvesting flaws, but that is beyond the point of this post.
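
To make the requirement concrete, here is a minimal sketch of an ownership check that would satisfy it. This is illustrative only; the function and data names (view_account, owner_id, account_store) are assumptions, not from the post.

    # Hypothetical sketch of the "403 on unauthorized access" requirement.
    HTTP_OK = 200
    HTTP_NOT_FOUND = 404
    HTTP_FORBIDDEN = 403

    def view_account(authenticated_user_id, requested_account_id, account_store):
        account = account_store.get(requested_account_id)
        if account is None:
            return HTTP_NOT_FOUND, None
        # The design requirement: you may only view accounts you own.
        if account["owner_id"] != authenticated_user_id:
            return HTTP_FORBIDDEN, None
        return HTTP_OK, account

    # Example: Bob requesting Alice's account should get a 403.
    accounts = {"1001": {"owner_id": "alice", "balance": 250}}
    assert view_account("bob", "1001", accounts)[0] == HTTP_FORBIDDEN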

The point here is that by doing what we have been doing all along, just digging a little deeper, it is possible to start adding simple requirements that relate directly to building security into the application. With the example above, we even stop looking at this as a separate classification of flaw: it is no longer a “security” bug, just a functional bug. It gets tracked with all the other issues and should be remediated in a timely fashion.

When we don’t define these types of requirements then it is up to the developers to implement this which makes it more of guesswork. How does a developer know if an account should be limited to one person or not? Depends on the application. How would QA know to test it or not if there are no requirements. Of course we can make assumptions as to how it should work, but having the requirements defined ahead of time makes it concrete.

Another aspect that could be better defined by the business teams is the input fields for the application. One of the critical components of a good application security program is strong input validation. The better the fields are defined, the easier it is to implement stronger validation and to create test scripts for those definitions. Here are a few example questions (a small validation sketch follows the list):

  • What are the types of data that should be accepted?
  • Is there a max length for that field?
  • Can that date be before a certain time period?
  • Can that number be negative?
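
As a sketch of how definitions like these might turn into validation code, consider the following. The field names and limits (a transfer amount, a 100-character memo, a date that cannot be in the past) are hypothetical examples, not requirements from the post.

    from datetime import date

    # Hypothetical validation based on analyst-provided field definitions.
    # The specific limits are illustrative assumptions.
    def validate_transfer(amount, memo, transfer_date):
        errors = []
        if not isinstance(amount, (int, float)) or amount <= 0:
            errors.append("amount must be a positive number")       # no negative numbers
        if len(memo) > 100:
            errors.append("memo must be 100 characters or fewer")   # max length
        if transfer_date < date.today():
            errors.append("transfer date cannot be in the past")    # date restriction
        return errors

    # Example: a negative amount and a past date both get reported.
    print(validate_transfer(-5, "rent", date(2015, 1, 1)))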

Developing secure applications is a team effort. No single group can do it alone. There needs to be a strong partnership among the groups to ensure that the product is the best it can be. Stay tuned for more posts on how all of the teams in the SDLC can work together and play a critical role in the overall security of the application.


May 24, 2015 by James Jardine

Security for QA Testers: The Importance

Quality Assurance (QA) testing is a critical role for any application being developed. The purpose: to identify flaws within the application that affect how it runs and the users who use it. Typically this has focused on identifying flaws that prevent the application’s functions from performing as expected. By expected, I mean that the end user is unable to complete their intended task.

Over the past decade there has been a growing focus on what QA testing has been missing: security flaws. What makes a security flaw different from the other flaws generally identified? Most security flaws, at least exploitable ones, center on the ability to make the application do something it was not intended to do. If the application is supposed to allow me to view my bank account and I can make it show me someone else’s bank account, that indicates a security flaw. The assumption here is that I shouldn’t be able to view another user’s account.

Typically in QA, the test would ensure that I could see my account and that the data returned was in fact my information. It would not check authorization issues, such as what happens when I attempt to view another user’s account.
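
Here is a hedged sketch of what such a test case could look like, assuming a hypothetical api_client test helper and illustrative account URLs (neither comes from the post):

    # Hypothetical QA test sketch: verify both the functional case and the
    # authorization case. The api_client fixture and URLs are illustrative.
    def test_cannot_view_another_users_account(api_client):
        api_client.login("alice", "alice-password")

        # The usual functional check: my own account data comes back.
        mine = api_client.get("/accounts/1001")
        assert mine.status_code == 200

        # The security check QA often skips: someone else's account is refused.
        theirs = api_client.get("/accounts/2002")
        assert theirs.status_code == 403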

This is the time for QA to add this type of testing to their current test cases. Recently released reports suggest that the security field is suffering from a huge talent shortage. While there are many different types of entities that will test our applications for security flaws, the best positioned are our own QA teams. Here are a few key reasons why QA is so important in security testing.

Proximity
With the exception of the actual developers, no one is closer to the development phase than the QA team. You may have heard that a bug found in production costs a lot more to fix than one found in QA or development. There are multiple reasons for this, which we will cover in a different article. The key here is that QA provides a near-immediate feedback loop for bugs sent back to development. Not only do we avoid losing the time of the application going through multiple other phases of the lifecycle only to be sent back, but the developers also adopt secure coding techniques much more quickly.

Application Knowledge
Many security flaws are based on the idea that we are able to make the application do something it was not intended to do. While some flaws, like injection flaws, don’t require any knowledge of the application functionality, authorization, authentication, and logic flaws do require an understanding of the application. QA should have intimate knowledge of how an application should and shouldn’t work. Having this understanding makes it much easier to distinguish a flaw from correct functionality.

Bug Tracking
Most likely, QA already has some sort of bug tracking system. Let’s be clear: a bug is a bug is a bug. Whether it is a security bug, a simple logic flaw, or a typo in the UI, these bugs go through the same process. They get identified, logged, reported, analyzed, ranked/prioritized, and handled. It doesn’t make sense to have a separate system for managing bugs based on classification. Add a flag, if needed, to mark an issue as security-related so that a report can be created for the infosec team for audit or review purposes.

We have an opportunity, as QA, to step up and take responsibility and ownership of enhancing our testing of applications. Is it different from your current tests? Probably. Can we do it? You bet. Let’s start working on getting security built into our process instead of relying on a number of third parties to do it for us. As we have seen, that doesn’t work so well.


April 17, 2015 by James Jardine

Static Analysis: Analyzing the Options

When it comes to automated testing for applications there are two main types: Dynamic and Static.

  • Dynamic scanning is where the scanner is analyzing the application in a running state. This method doesn’t have access to the source code or the binary itself, but is able to see how things function during runtime.
  • Static analysis is where the scanner looks at the source code or the binary output of the application. While this type of analysis doesn’t see the code as it is running, it can trace how data flows through the application down to the function level (a short illustration of such a flow follows this list).
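
For illustration, here is the kind of data flow a static analyzer traces: user-supplied input (the source) reaching a database query (the sink). The example is a generic Python/sqlite3 sketch, not tied to any particular tool:

    import sqlite3

    # Illustrative data flow a static analyzer would flag: untrusted input
    # concatenated into SQL (classic SQL injection).
    def find_user(conn, username_from_request):
        query = "SELECT * FROM users WHERE name = '" + username_from_request + "'"
        return conn.execute(query).fetchall()

    # The same flow, fixed with a parameterized query, which an analyzer
    # typically recognizes as safe.
    def find_user_safe(conn, username_from_request):
        return conn.execute(
            "SELECT * FROM users WHERE name = ?", (username_from_request,)
        ).fetchall()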

Dynamic scanning, an important component of any secure development workflow, analyzes a system as it is running. Before the application is running, the focus shifts to the source code, which is where static analysis fits in. At this stage it is possible to identify many common vulnerabilities while integrating into your build processes.

If you are thinking about adding static analysis to your process, there are a few things to consider. Keep in mind that no single factor should be the decision maker. Budget, in-house experience, application type, and other factors combine to determine the right choice.

Disclaimer: I don’t endorse any products I talk about here. I do have direct experience with the ones I mention and that is why they are mentioned. I prefer not to speak to those products I have never used.

Budget

I hate to list this first, but honestly it is a pretty big factor in your implementation of static analysis. The options that exist for static analysis range from FREE to VERY EXPENSIVE. It is good to have an idea of what type of budget you have at hand to better understand which option may be right.

Free Tools

There are a few free tools out there that may work for your situation. Most of these tools depend on the programming language you use, unlike many of the commercial tools that support many of the common languages. For .Net developers, CAT.Net is the first static analysis tool that comes to mind. The downside is that it has not been updated in a long time. While it may still help a little, it will not compare to many of the commercial tools that are available.

In the Ruby world, I have used Brakeman which worked fairly well. You may find you have to do a little fiddling to get it up and running properly, but if you are a Ruby developer then this may be a simple task.

Managed Services or In-House

Can you manage a scanner in-house or is this something better delegated to a third party that specializes in the technology?

This can be a difficult question because it may involve many facets of your development environment. Choosing to host the solution in-house, such as HP’s Fortify SCA, may require a lot more internal knowledge than a managed solution. Do you have resources available who know the product or who can learn it? Given the right resources, in-house tools can be very beneficial. One of the biggest roadblocks to in-house solutions is cost; most of them are very expensive. Here are a few in-house benefits (a small CI gate sketch follows the list):

  • Ability to integrate directly into your Continuous Integration (CI) operations
  • Ability to customize the technology for your environment/workflow
  • Ability to create extensions to tune the results
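
As a sketch of the CI integration benefit above, a build could be gated on the scan results. The report file name and JSON structure below are assumptions; real tools each define their own output formats:

    import json
    import sys

    # Hypothetical CI gate: fail the build if high-severity findings remain.
    # "scan-results.json" and its structure are illustrative assumptions.
    MAX_HIGH_SEVERITY = 0

    def gate(report_path="scan-results.json"):
        with open(report_path) as report:
            findings = json.load(report)
        high = [item for item in findings if item.get("severity") == "high"]
        print("%d findings, %d high severity" % (len(findings), len(high)))
        if len(high) > MAX_HIGH_SEVERITY:
            sys.exit(1)  # non-zero exit marks the CI build as failed

    if __name__ == "__main__":
        gate()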

Choosing a managed solution works well for many companies. Whether the development team is small, resources aren’t available, or budget is limited, using a third party may be the right solution. There is always the question of whether you are comfortable sending your code to a third party, but many are, to get the solution they need. Many of the managed services have the additional benefit of reducing false positives in the results. This can be one of the most time-consuming pieces of a static analysis tool, right there with getting it set up and configured properly. Some scans may return upwards of tens of thousands of results. Weeding through all of those can be very time consuming and wear on the poor person stuck doing it. Having a company manage that portion can be very beneficial and cost effective.

Conclusion

Picking the right static analysis solution is important, but it can be difficult. Take the time to determine what your end goal is when implementing static analysis. Are you looking for something that is good but not customizable to your environment, or something that is highly extensible and closely integrated with your workflow? Unfortunately, sometimes our budget may limit what we can do, but we have to start someplace. Take the time to talk to other people who have used the solutions you are looking at. Has their experience been good? What did/do they like? What don’t they like? Remember that static analysis is not the complete solution, but rather a component of a solution. Dropping this into your workflow won’t make you secure, but it will help decrease the attack surface if implemented properly.

This article was also posted at https://www.jardinesoftware.net


February 5, 2015 by James Jardine

Sensitive Data and Storage Issues

Do you know what constitutes sensitive data in your organization? How about in your state or industry? As developers or business analysts, we often do not follow the nitty-gritty details of sensitive information regulations or laws. It’s not that we don’t want to enforce them; often we just don’t know about them. It is often assumed that the CIO, CISO, or a privacy officer is responsible for understanding our data and to what level it needs to be protected. I completely believe that these positions should understand the rules and regulations around privacy and what qualifies as sensitive data, although this can be difficult because there are multiple definitions depending on the state and industry you are in.

When developing an application, do you give much thought to data storage and sensitive information beyond the user’s password? What defines sensitive information for you and your organization? While it may not be a developer’s or business analyst’s main focus, it is important that everyone in the development lifecycle understand the data being processed and stored and any rules around it.

The first place to look in most organizations is probably your policies and procedures. Most likely there are data classification documents that describe what is sensitive data and how that data must be handled. If your organization doesn’t have this type of documentation, this is a good time to start thinking about it. Often, this documentation is created by a privacy team, the security team, or some other office outside of the development teams. While it is probably not your job to create the documentation, it is important that you know and understand it.

The second thing to do is to look at your state regulations for your industry. Regulations or definitions may differ depending on whether you are in healthcare, subject to PCI, or in some other industry. Unfortunately, many states have laws in place (usually around data breach notifications), but they are not standardized across states. This may change soon, as there is a movement in the government to create a nationwide breach notification law, which may make things a little easier and more consistent. Until then, we are stuck scouring the internet looking for these different laws.

Some examples of these laws are in New Jersey, with this bill that recently went into effect, and Florida with the Florida Information Protection Act of 2014. Both of these are similar, yet have their differences. For example, while NJ calls out Driver’s license and State ID card, Florida also adds Passport, Military ID and other government documents used to verify identity. The Florida law also discusses Username, password and secret questions and answers. The following shows a quick summary of the data that can be considered sensitive:

New Jersey

  • First Name (or initial) and last name linked with one or more of the following:
    • Social Security Number
    • Driver’s License or State ID card Number
    • Address
    • Identifiable health information

Florida

  • First Name (or initial) and last name with one or more of the following:
    • Social Security Number
    • Driver’s License, Passport, Military ID, or other similar number on government document used to verify identity
    • Financial Account Number, Credit or Debit Card Number in combination with
      • Security Code
      • Access Code
      • Password
    • Medical history, mental or physical condition, medical treatment or diagnosis by a health care professional
    • Health insurance policy number or Subscriber ID and any unique identifier used by health insurer to identify individual
  • User name or email address in combination with a password or security question and answer that would permit access to online account

It is important to protect this sensitive information because many times it is what the attackers are after. Both of the laws above treat the information as protected only when it has been rendered unusable (encrypted). All too often we think only about the user’s password or possibly their social security number, but rarely are we thinking about some of this other information. When we know during design and development what data we use and how it needs to be protected, it is that much easier to do it right the first time.

Take the time to catalog all of the data elements your application uses and how you are protecting them (if needed). You can’t protect what you don’t know you have, so it is important to first inventory and then determine where the holes may be.
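
One way to start that inventory is a simple catalog that pairs each data element with its classification and protection. The fields and protections below are illustrative assumptions, not a complete or authoritative list:

    # Hypothetical data inventory sketch; entries are illustrative only.
    DATA_INVENTORY = [
        {"field": "ssn",             "sensitive": True,  "protection": "encrypted at rest"},
        {"field": "drivers_license", "sensitive": True,  "protection": "encrypted at rest"},
        {"field": "password",        "sensitive": True,  "protection": "salted hash"},
        {"field": "security_answer", "sensitive": True,  "protection": ""},
        {"field": "display_name",    "sensitive": False, "protection": ""},
    ]

    def unprotected_sensitive_fields(inventory):
        # Highlight the holes: sensitive fields with no protection recorded.
        return [entry["field"] for entry in inventory
                if entry["sensitive"] and not entry["protection"]]

    print(unprotected_sensitive_fields(DATA_INVENTORY))  # ['security_answer']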

Episode 21 of the DevelopSec Podcast discusses this more if you want to take a listen.


January 28, 2015 by James Jardine

Verizon Email API Insecure Direct Object Reference Thoughts and Takeaways

It was recently announced that a flaw was identified (and since fixed) in the Verizon API that allowed access to Verizon customer email accounts. The request contained an ID parameter specifying the email account’s user ID. If an attacker supplied a different user’s ID, that user’s email account would be returned. This is known as an Insecure Direct Object Reference. It was also found that the attacker could not only read another user’s email, but also send email from that account. This could be very useful in spear phishing attacks because users are more trusting of emails from their contacts.

Take-Aways


  • Understand the parameters that are used in the application
  • Use a web proxy to see the raw requests and responses for better understanding
  • Create test cases for these parameters that check access to different objects to ensure authorization checks are working properly
  • Implement row-based authorization to ensure the authenticated user can only see their own information

The issue presented here is that the API was not checking whether the authenticated user had permission to access the specified mailbox. It would appear that it was only checking that the user was authenticated. Remember that authentication is the process of identifying who the requesting party is; authorization is the process of determining what the authenticated user has access to. In this situation, the API should first validate that the user is authenticated, and then, when a request is made for a resource (an email account in this example), verify that the user is authorized to access that account before allowing it.
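
A minimal sketch of that order of checks, assuming hypothetical session and mailbox lookups (none of these names come from the Verizon API):

    # Hypothetical authentication-then-authorization check for a mailbox API.
    HTTP_UNAUTHORIZED = 401   # authentication failed: who are you?
    HTTP_FORBIDDEN = 403      # authorization failed: this isn't yours
    HTTP_OK = 200

    def get_mailbox(session, requested_mailbox_id, mailbox_store):
        user_id = session.get("user_id")
        if user_id is None:
            return HTTP_UNAUTHORIZED, None
        mailbox = mailbox_store.get(requested_mailbox_id)
        # Row-based authorization: the mailbox must belong to the caller.
        if mailbox is None or mailbox["owner_id"] != user_id:
            return HTTP_FORBIDDEN, None
        return HTTP_OK, mailbox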

Unfortunately, many APIs are vulnerable to this type of attack because there is an assumption that the user can’t change the parameter values due to the lack of a user interface. It is imperative that both developers and QA testers use a proxy when testing applications so they can manipulate these types of parameters. This allows testing for unauthorized access to different objects. This is a very simple test case that should be included for every application, not just for APIs. If you see a parameter value, make sure it is being properly tested from a security standpoint. For example, an ID field that is an integer may get tested to make sure that the value cannot be any other type of data, but it must also be checked to see whether different values give access to unauthorized data.

It was also mentioned that the API didn’t use HTTPS for its communication channel. Using HTTP allows others along the communication path to intercept the request and response data, potentially opening up the user to a variety of vulnerabilities. Make sure you are using the proper communication channel to protect your users in your mobile applications as well as your web applications.

