
DevelopSec


testing

March 30, 2016 by James Jardine

When One Testing Solution Isn’t Enough

Go to any conference, attend some webinars, or just search for application security testing solutions and you can quickly see the sheer number of solutions out there. As with anything, some are great and some are not so great. With so much marketing, it is often very difficult to determine which solution is best. All too often, people are looking for a silver bullet: the one testing tool or pen testing company that will find everything. Unfortunately, that solution doesn't exist, or I just haven't seen it.

Application testing usually falls into two categories: automated scanning and manual assessments. Automated scanning itself splits into two disciplines: static and dynamic.

Static Analysis

Static scanning means that code is scanned without running it. Here is a quick post about static analyzers. Inputs are traced through the code to see what paths are taken and to determine vulnerabilities. Static scans can be efficient at finding flaws such as cross-site scripting, injection flaws, and configuration issues. They excel at finding code issues like hard-coded passwords, cryptography issues, and other items not possible to find through a dynamic or manual test. They still do not do very well at finding business logic flaws, which can be the most devastating.

Dynamic Analysis

Dynamic scanning means that the application is scanned as it is running. Unlike a static scan, it can be very difficult to identify hard coded passwords or cryptographic issues because they are not visible to the end user. Dynamic scans do a decent job of identifying other flaws such as cross-site scripting, injection flaws, insecure cookies, and some other configuration issues to name a few. Dynamic scanners do not do a good job identifying authentication, authorization, or business logic flaws in general. Properly tuned scanners may be able to do this, but that requires extra work.

Manual Assessments

Manual assessments are mostly a manual process of analyzing source code or using the application to find security flaws. Source code review has many of the advantages of the static scanning mentioned above. Penetration testing is very similar to the dynamic scanning mentioned. In both cases, the testing is done manually rather than with just an automated tool. Manual assessments (pen tests) are much more efficient at finding authentication, authorization, and business logic flaws. Of course, the quality of the assessment is heavily influenced by the skill of the person doing it.

Some of the big vendors have started offering all three types of assessment for their customers. The big players (White Hat Security, VeraCode, HP, etc.) all offer static, dynamic, and manual assessments for your applications. And with good reason. If you have worked in an enterprise where you are required to have multiple assessments by different groups, you quickly realize that nothing provides a complete solution. Each assessment paints a portion of the security picture. Items found by one may not be found by another. In some instances you may think that two should have found the same item, but that is the nature of security: no one finds everything.

Multiple Solutions

While you may think it is overkill to have multiple scanning solutions or testers looking at an application, they all bring a different viewpoint and help identify different types of vulnerabilities. It is sort of like an enhanced second pair of eyes. Not only can they double-check the work the others have done, but in many situations they can identify different types of vulnerabilities. Different solutions also find flaws at different phases of the development lifecycle. It is common to use static scanners in the development phase. Dynamic scanners are more commonly seen in the verification phase. Manual assessments, especially 3rd party penetration tests, also fall in the verification phase, but near the end of the cycle. The closer to development a flaw is found, the quicker it is to remediate.

The right solution is the one that fits your environment. Take the time to analyze your setup to determine the best way to implement a solution. These solutions take time and should be carefully considered.

Filed Under: General Tagged With: awareness, development, SDL, SDL Development lifecycle, sdlc, secure code, security, security coding, security testing, testing

January 4, 2016 by James Jardine

Unsupported Browser Support

Ok, so the title is a bit counter-intuitive. I recently saw an article talking about the end of support for some of the Internet Explorer versions out there (http://www.computerworld.com/article/3018786/web-browsers/last-chance-to-upgrade-ie-or-switch-browsers-as-microsofts-mandate-looms.html) and got to thinking about the number of sites that still require supporting some of these older versions of browsers. This is typically more common in the big corporate enterprises, as they have the legacy systems and tend to move slower upgrading internal software.

The unfortunate reality is that, in some environments, you are stuck supporting outdated browsers. Whether it is an internal application with requirements for the network, or your client base is just slow to move on. This requirement also puts those users, and potentially the company network at risk since there are no more security updates for these browsers.

Over the past few years, browsers have been taking on some of the burden of stopping certain application security attack vectors. We saw cross-site scripting filters added a while back to help detect and neutralize some reflective cross-site scripting attacks. We also saw browsers start to support custom response headers to indicate content security policy, frame handling, etc. Some of these protections are supported across all browsers, while others are limited to specific browsers or browser versions. Seeing these unsupported browsers still out there reminds us that, as developers, we cannot rely on browser functionality alone for security.

While we may not be able to control what version of a browser we support (well, we can, but often we can't), we can still make sure that our application is as secure as possible. Let's take cross-site scripting as an example. Sure, for reflected XSS, we could just send the X-XSS-Protection response header and hope for the best. It might neutralize a few instances, but that is a pretty poor defense mechanism. First, there have been numerous bypasses for these types of client-side blocking attempts. Second, it is our responsibility, as developers, to properly handle our data. That starts with good input validation and ends with properly encoding that data before we send it to the browser.

I am not advocating not using the different response headers. Bugs can be missed and having another control there to help reduce the chance of a vulnerability being exploited is a good thing. But the first effort should be spent on making sure the output is encoded properly in the first place. This is where testing comes into play as well, making sure that we are validating that the encoding is working as expected.
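As a minimal sketch of that last step, here is what output encoding looks like using Python's standard library (html.escape); the same idea applies to whatever encoder your framework provides:

```python
import html

def render_greeting(user_input: str) -> str:
    """Build an HTML fragment, encoding untrusted data before output.

    html.escape converts <, >, &, and quotes into HTML entities, so a
    script payload is rendered as text instead of being executed.
    """
    return "<p>Hello, " + html.escape(user_input, quote=True) + "</p>"

# A reflected XSS attempt is rendered harmless:
payload = '<script>alert(1)</script>'
print(render_greeting(payload))
# -> <p>Hello, &lt;script&gt;alert(1)&lt;/script&gt;</p>
```

This works regardless of which browser, or browser version, the user is running, which is exactly the point.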

By properly protecting our apps with good application security practices, it shouldn't matter what browser the client is using. Does the end user run a higher risk, in general, by running out-of-date software? You bet. That doesn't mean our application has to be the source of that added risk.

If you are stuck supporting outdated browsers due to corporate compliance or other reasons, take the time to analyze the situation. Determine why that browser is needed and what possible solutions exist. Raise the issue within the organization, identifying the risks that are exposed due to the use of outdated browsers. While the organization may already know this, it doesn't hurt to say it again.

Filed Under: News Tagged With: application security, developer, developer awareness, developer security, qa, secure coding, security, testing

December 29, 2015 by James Jardine

Untrusted Data: Quick Overview

In the application security community it is common to talk about untrusted data. Talk about any type of injection attack (SQLi, XSS, XXE, etc.) and one of the first terms mentioned is untrusted data. In some cases it is also known as user data. While we hear the phrase all the time, are we sure everyone understands what it means? What is untrusted data? It is important that anyone associated with creating and testing applications understands the concept of untrusted data.

Unfortunately, it can get a little fuzzy as you start getting into different situations. Let's start with the most simplistic definition:

Untrusted data is data that is controlled by the user.

Client-Side

While this is very simple, it is often confusing to many. When we say controlled by the user, what does that really mean? Some people stop at data in text boxes or drop-down lists, overlooking hidden fields and request headers (cookies, referer, user agent, and so on). And that is just on the client side.

From the client perspective, we need to include any data that could be manipulated before it gets to the server. That includes cookie values, hidden form fields, user agent, referer field, and any other item that is available. A common mistake is to assume if there is no input box on the page for the data, it cannot be manipulated. Thanks to browser plugins and web proxies, that is far from the truth. All of that data is vulnerable to manipulation.

Server-Side

What about on the server side? Can we assume everything on the server is trusted? First, let's think about the resources we have server-side. There are configuration files, file system objects, databases, web services, etc. Which of these would you trust?

It is typical to trust configuration files stored on the file system of the web server. Think of your web.xml or web.config files. These are typically deployed with the application files and are not easy to update. Access to those files in production should be very limited, and it would not be easy to open them up to others for manipulation. What about data from the database? Often I hear people trusting the database. This is a dangerous option. Let's take an example.

Database Example

You have a web application that uses a database to store its information. The web application does a good job of input validation (type, length, etc.) for any data that can be stored in the database. Since the web application does good input validation, it lacks output encoding because it assumes the data in the database is good. Today, maybe no other apps write to that database. Maybe the only way to get data into that database is either via a SQL script run by a DBA or through the app with good input validation. Even here, there are weaknesses. What if the input validation misses an attack payload? Sure, the validation is good, but does it catch everything? What if a rogue DBA manipulates a script to put malicious data into the database? Could happen.

Now, think about the future, when the application owner requests a mobile application to use the same database, or decides to create a user interface for data that was previously not available for update in the application. Now, that data that was thought to be safe (even though it probably wasn't) is even less trusted. The mobile application or other interfaces may not be as stringent as thought.

The above example has been seen in real applications numerous times. It is a good example of how what we might think at first is trusted data really isn’t.
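The database example can be sketched in a few lines (Python with an in-memory SQLite database, purely for illustration): even when the write path looks safe, encoding on output is what keeps a stored payload from executing.

```python
import html
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE comments (body TEXT)")

# A payload slips past input validation (or a DBA script inserts it).
conn.execute("INSERT INTO comments (body) VALUES (?)",
             ('<img src=x onerror=alert(1)>',))

# Trusting the database: the payload goes to the browser as-is.
row = conn.execute("SELECT body FROM comments").fetchone()
unsafe_output = row[0]

# Encoding on output: the payload renders as inert text.
safe_output = html.escape(row[0])
print(safe_output)
# -> &lt;img src=x onerror=alert(1)&gt;
```

The parameterized INSERT protects the database from SQL injection, but it does nothing about what happens when that value is later sent to a browser; that is the output encoding step's job.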

Web services are similar to the database. Even internal web services should be considered untrusted when it comes to the validity of the data being returned. We don’t know how the data on the other side of that service is being handled, so how can we trust it?

Conclusion

When working with data, take the time to think about where that data is coming from and whether or not you are willing to take the risk of trusting it. Think beyond what you can see. It is more than just fields on the form that users can manipulate. It is more than just the client-side data, even though we use terms like user controlled data. There is a lot of talk about threat modeling in application security, and it is a great way to help identify these trust boundaries. Using a data flow model and showing the separation of trust can make it much easier to understand what data we trust and what we don’t. At the bottom of this article is a very basic, high level, rough draft data flow diagram that shows basic trust boundaries for some of the elements I mentioned above. This is just an example and is not indicative of an actual system.

When it comes to output encoding to protect against cross-site scripting, or proper encoding for SQL, LDAP, or OS calls, the safest approach is to just perform the encoding. While you may trust a value from the web.config, it doesn't mean you can't properly output encode it to protect from XSS. From a pure code perspective, that is the most secure code. Assuming the data is safe, while it may be, does increase risk to some level.

If you are new to application security or training others, make sure they really understand what is meant by untrusted data. Go deeper than what is on the surface. Ask questions. Think about different scenarios. Use stories from past experience. Once the concept is better understood, it makes triaging flaws from the different automated tools much easier and should reduce the number of bugs going forward.

Example Data Flow Diagram Showing Trust Boundaries
(This is only an example for demonstration purposes.)

[Image: example data flow diagram showing trust boundaries]

Filed Under: General Tagged With: developer awareness, developer security, security, security awareness, security testing, testing, untrusted data

October 9, 2015 by James Jardine

Insufficient Session Expiration: Testing

Insufficient Session Timeout is a security flaw that can mean a few different things. One common finding for this is that the session timeout is set too long. For example, the session is valid after an hour of being idle. Another common finding is when the session is not properly terminated after the user uses the logout/sign out feature. In this post we will cover these two test cases and how to test for them.

A.K.A

  • Insufficient Logout
  • Insecure Logout
  • Insufficient Session Timeout

Testing Session Timeout

The first step when testing the timeout of a session is to understand what an acceptable timeout should be. This differs depending on your organization and the type of application you are testing. Many financial applications have much stricter timeouts, around 15 minutes or less. On the other hand, some social media sites may have sessions that are good for months or even longer. Those longer ones are outside the scope of this post.

  • Log into the application.
  • Don’t use the application (just leave it open).
  • Once the expected timeout has passed, try accessing the site to verify your session has expired; you should be redirected to the login screen.
  • If the application didn’t end the session as expected, wait even longer (try doubling the time) and test again. Keep doing this until you determine how long the session lasts or until the valid time crosses a specific threshold.

You may want to run this test in a different browser than your normal one so you can continue other testing in your normal browser without affecting the timeout.
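The wait-and-retry steps above can be sketched as a small polling helper. Here is_session_valid and wait are hypothetical stand-ins: in a real test, is_session_valid would make an authenticated request to the application, and wait would be time.sleep.

```python
import time

def measure_session_lifetime(is_session_valid, expected_timeout, max_wait,
                             wait=time.sleep):
    """Wait the expected timeout, check the session, then keep doubling
    the wait until the session expires or max_wait is crossed.

    Returns the total seconds waited when the session was first seen
    expired, or None if it was still valid past max_wait.
    """
    waited = 0.0
    step = float(expected_timeout)
    while waited < max_wait:
        wait(step)
        waited += step
        if not is_session_valid():
            return waited
        step *= 2  # double the wait, as in the steps above
    return None

# Simulated run: the session actually dies 2500 "seconds" after login.
elapsed = [0.0]
def fake_wait(seconds):
    elapsed[0] += seconds

lifetime = measure_session_lifetime(lambda: elapsed[0] < 2500,
                                    expected_timeout=900,
                                    max_wait=8 * 3600,
                                    wait=fake_wait)
print(lifetime)  # -> 2700.0 (expired somewhere between 900 and 2700 seconds)
```

Doubling the wait narrows the session lifetime down to a range quickly; note that the result is an upper bound, not the exact timeout.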

Testing Insufficient Logout

The goal of this test is to verify that the sign-out/log-out function invalidates the entire session. Below are the steps to validate this.

  • Start a proxy (such as Burp Suite) and configure your browser to send all traffic through it.
  • Log into the application and identify the cookies used to manage the session. These could be session identifiers like the ASP.Net session id, PHP Session id, or JSessionId to name a few, or authentication cookies like the ASP.Net forms authentication cookie.
  • Copy the respective session-related cookies; these will be used later.
    • In Burp, you could send an authenticated request to Repeater to be replayed later (don’t use the login request).
  • In the browser, execute the logout feature.
  • In the browser, verify that a request to the application redirects you to the login form.
  • Send a new request with the captured session cookies to see if you are granted access to the application or if you are redirected to the login screen.
    • In Burp, just replay the stored request in Repeater.
    • If you are granted access to the resource, the logout was not successful.
    • If you are redirected to the login screen, the logout was successful.
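The steps above can also be automated end to end. The sketch below stands up a tiny hypothetical application (an in-process http.server with /login, /logout, and /private pages) and replays a captured session cookie after logout; against a real target you would capture the cookie with your proxy instead.

```python
import http.client
import http.server
import threading

SESSIONS = set()  # server-side session store for the demo app

class DemoApp(http.server.BaseHTTPRequestHandler):
    """Hypothetical app: /login issues a session, /logout invalidates it."""
    def do_GET(self):
        cookie = self.headers.get("Cookie", "")
        if self.path == "/login":
            SESSIONS.add("abc123")
            self.send_response(200)
            self.send_header("Set-Cookie", "sid=abc123")
        elif self.path == "/logout":
            SESSIONS.discard("abc123")  # proper server-side invalidation
            self.send_response(302)
            self.send_header("Location", "/login")
        elif "sid=abc123" in cookie and "abc123" in SESSIONS:
            self.send_response(200)  # valid session: page served
        else:
            self.send_response(302)  # no session: redirect to login form
            self.send_header("Location", "/login")
        self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), DemoApp)
threading.Thread(target=server.serve_forever, daemon=True).start()

def get(path, cookie=None):
    conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
    conn.request("GET", path, headers={"Cookie": cookie} if cookie else {})
    status = conn.getresponse().status
    conn.close()
    return status

get("/login")                           # log in...
captured = "sid=abc123"                 # ...and capture the session cookie
assert get("/private", captured) == 200 # cookie works before logout
get("/logout")                          # execute the logout feature
status = get("/private", captured)      # replay the captured cookie
print("replay after logout:", status)   # -> 302: logout was effective
```

If the application only deleted the cookie client-side without clearing the server-side session, the replay would still return 200, which is exactly the insufficient logout finding.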

Reporting

When reporting the finding, make sure that as much detail as possible is included. Accurately classifying the finding and providing steps to reproduce the issue helps the application team understand the flaw and assess the risk. It also helps the developer really understand the flaw and helps any other tester retest it when ready. Include the cookies that were not being properly cleared if possible. With analytics cookies present in most applications, including non-session-related cookies may just slow things down. Don't be afraid to try the steps above with different cookie combinations. Remove one cookie at a time to see what happens.

Filed Under: General Tagged With: insufficient session, log out, logout, qa, qa testing, security testing, session, session expiration, sign out, signout, testing

September 17, 2015 by James Jardine

HTTP Strict Transport Security (HSTS): Overview

A while back I asked the question “Is HTTP being left behind for HTTPS?”. If you are looking to make the move to an HTTPS only web space one of the settings you can configure is HTTP Strict Transport Security, or HSTS. The idea behind HSTS is that it will tell the browser to only communicate with the web site over a secure channel. Even if the user attempts to switch to HTTP, the browser will make the change before it even sends the request.

HSTS is implemented as a response header with a few options shown in the below example:

Strict-Transport-Security: max-age=31536000; includeSubDomains

In the example above, the max-age (required) is specified in seconds (a year in this case) and indicates how long the browser should remember this setting. Setting a max-age of 0 will tell the browser to stop enforcing HSTS for the domain.

The includeSubDomains directive is optional and indicates whether HSTS should be enforced on all of the subdomains as well.
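As a quick illustration of how those directives break down, here is a hand-rolled (not production-grade) parser for the header value:

```python
def parse_hsts(header_value):
    """Split a Strict-Transport-Security value into (max_age, include_subdomains)."""
    max_age = None
    include_subdomains = False
    for directive in header_value.split(";"):
        directive = directive.strip()
        if directive.lower().startswith("max-age="):
            max_age = int(directive.split("=", 1)[1])
        elif directive.lower() == "includesubdomains":
            include_subdomains = True
    return max_age, include_subdomains

print(parse_hsts("max-age=31536000; includeSubDomains"))  # -> (31536000, True)
print(parse_hsts("max-age=0"))  # -> (0, False): stop enforcing HSTS
```

The browser does this parsing for you; the sketch just makes the structure of the header concrete.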

You might wonder why this is needed when a user can just go to the HTTPS version of a site. If you watch most users, I am guilty of this myself, they just type the domain of the site they are going to. For example, I might just type developsec.com into the browser. With no indication of which protocol I want to use, the browser will start with HTTP and then the server may respond to switch it to HTTPS. With HSTS enabled, the goal is when a user types http://www.developsec.com or www.developsec.com into the URL, the browser will make the switch automatically.

There are some considerations for this to work as expected. The response header is not respected until the first time you visit the domain using HTTPS with a trusted certificate. Until this happens the browser doesn’t know not to use HTTP. There is a way around this by signing your domain up on the preload list. The preload list can be used by browsers to find out about your domain’s HSTS status before accessing the site. The linked site shows what needs to be done to be allowed on the preload list. Both with the preload list and support for HSTS in general, your browser has the final say. For a list of browsers that support HSTS you can go to the OWASP site.

Self Signed Certificates

One of the security features of HSTS is that it tries to block man-in-the-middle (MitM) attacks, where your traffic is redirected through an attacker so they can view your data. To do this, HSTS requires the certificate to be a trusted certificate. If it is not a trusted certificate, for example a self-signed one, then you will see the following image indicating the browser cannot connect:

[Image: browser error page shown when HSTS blocks a site with an untrusted certificate]

You may notice that this looks very similar to connecting to a site with a certificate issue, with one major difference: there is no way to allow an exception. For testers that use a proxy, such as Burp Suite, you can tell your browser to trust the certificate generated by the proxy tool and it should work just fine.

If you are working toward implementing this feature, make sure you perform appropriate testing to verify it is working properly. If you use the includeSubDomains setting, make sure your test environments are using SSL or you may have some troubleshooting to do during testing. If your site is HTTPS only, this is a great feature to add to make sure that your users are only making requests over HTTPS. Hopefully the links included will provide any other information you may need on HSTS.

Filed Under: General Tagged With: application security, hsts, penetration testing, secure communication, secure websites, security, security testing, testing

August 15, 2015 by James Jardine

Tips for Securing Test Servers/Devices on a Network

How many times have you wanted to see how something worked, or it looked really cool, so you stood up an instance on your network? You are trying out Jenkins, or you stood up a new Tomcat server for some internal testing. Do you practice good security procedures on these systems? Do you set strong passwords? Do you apply updates? These devices or applications are often overlooked by the person that stood them up, and probably unknown to the security team.

It may seem as though these systems are not critical, or even important, because they are just for testing or don't touch sensitive information. It is common to hear that they are internal, so an attacker cannot get to them. The reality is that every system or application on a network can be an aid to an attacker. No matter how benign the system may seem, it adds to the attack surface area.

What Can You Do?

There are a few things to think about when any type of application server or device is added to the network.

  • Change Default Passwords
  • Apply Updates
  • Remove Default Files
  • Decommission Appropriately

Change Default Passwords

While this seems simple, it is amazing how often the default password is still in use on systems on a network. This goes beyond unused or rogue systems to many production devices. It only takes a moment to change the password on the device. It may not seem like it, but a quick Google search for default passwords for just about any device or COTS application can yield quick results.

Fortunately, many recent systems no longer ship with default passwords; instead, they force you to set a password during setup. This is a great step in the right direction. It is also a good idea to change the default administrator account name if possible. This can make it a little more time-consuming for an attacker to brute-force the password if they don't know the user id.
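A simple way to audit your own network for this is to try the well-known defaults yourself. The sketch below only builds the HTTP Basic Authorization headers an auditor would send; the target (a Tomcat manager page here) is a hypothetical example, and a real check would flag any response other than 401/403:

```python
import base64

# Well-known default credential pairs to try (illustrative list).
DEFAULT_CREDS = [("tomcat", "tomcat"), ("admin", "admin"), ("admin", "")]

def basic_auth_header(user, password):
    """Build the Authorization header value for HTTP Basic auth."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

for user, password in DEFAULT_CREDS:
    header = basic_auth_header(user, password)
    # A real audit would send this to e.g. http://host:8080/manager/html
    # (hypothetical URL) and flag anything other than a 401/403 response.
    print(user, "->", header)
```

If any default pair gets past the login, you have found exactly the kind of forgotten system this post is warning about.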

If you develop software or devices that get deployed to customers you should be thinking about how the setup process works. Rather than setting a default password, have the user create one during the setup process.

Apply Updates

One of the most critical controls for security is patching. Many organizations have patching procedures for the systems and software they know about, but if you are standing up an unknown device it may not get patched. Software patches often contain security fixes, some of which are critical. Make sure you are keeping the system updated to help keep everyone safe. It is also a good idea to let the team that handles patching and system maintenance know about the new application or device.

Remove Default Files

If the application is deployed with default files or examples, it may be a good idea to remove them. It is common to see these types of files, meant only for testing purposes, not be very secure. Removing the files will help tighten the security of the system, leading to a more secure network.

Decommission Appropriately

If you are done using the system, remove it. I can't tell you how many times I have found a system that hadn't been used in months or even years because it was just stood up to try something out. No one even remembered it, security didn't know about it, and it was very vulnerable. By removing it, you no longer have to worry about patching it or its default passwords. It reduces the attack surface area and limits an attacker's ability to elevate their privileges.

Is the Risk Real?

You bet it is. Imagine you have left an old Tomcat server on the network with default credentials (tomcat/tomcat) or something similar. An attacker is able to get onto the internal network; let's just assume that a phishing attack was successful. I know, like that would ever happen. They log into the management console of Tomcat and deploy a WAR file containing a shell script.

I have a video that shows deploying Laudanum in just this manner that can be found here.

Now that the attacker has a shell available, he can start running commands against the operating system. Depending on the permissions that the Tomcat user is running under, it is possible that he is running as an admin. At this point, he creates a new user on the system and even elevates that user to be an administrative user. Maybe RDP is enabled and remote login is possible. At the very least it will be possible to read files from the system. This could lead to getting a Meterpreter shell, stealing administrative hashes, and even gaining domain admin access if a domain admin has logged into that system.

That is just one example of how your day may go bad with an old system sitting on the network that no one is maintaining. The point is that every system on the network needs to be taken care of. As a developer who may be looking to try a new application out, take some time to think about these risks. You may also want to talk to your security team (if you have one) about the application to see if there are known vulnerabilities. Let them know that it is out there so they may help keep an eye out for any strange behavior.

This doesn't mean you can't stand different things up, but we need to be aware of the risks involved. You may find that there is a heavily controlled network segment that the application or device can be put on to help reduce the risk. You don't know until you ask. Keep in mind that security is everyone's job, not just the team that has the word in their title. They can't help protect what they don't know about.

Filed Under: General Tagged With: application, application security, hackers, laudanum, network, penetration testing, security, security awareness, security testing, shell, testing




© Copyright 2018 Developsec · All Rights Reserved