
DevelopSec


General

March 24, 2022 by James Jardine

Input validation is less about specific vulnerabilities

Security takes a layered approach to reduce the risk to our organization. Input validation is a perfect example of one of these layers. In most cases, input validation is one factor in a multi-pronged approach to protecting against common vulnerabilities.

Take any course on secure development and they will, or should, mention input validation as a mitigating control for so many vulnerabilities. You might notice that it always comes with a but. Use input validation, but also use output encoding. Use input validation, but also use parameterized queries.

The reason for this is that input validation that covers every potential vulnerability is hard, if not impossible, to achieve. Even input validation for a specific vulnerability can be complex and difficult. We have to walk the line between good validation and a usable application.

I always recommend not trying to look for specific vulnerabilities or attack strings within an input validation function. That turns into a blacklist of attacks that is impossible to keep up with. In addition, you may be looking for attack strings that don’t matter to that input.

One of the challenges is not knowing where the input will actually be used. Is the data going to the database? Is the data going to a database and then going to the browser? If the data is going to the browser, what context will it be in? If you don’t know how that data will be used, how do you know what attack strings to look for?

Instead of focusing on vulnerabilities, you should spend the time to define what good data looks like for that field. What type of data is expected? Is it an integer, a date, a string? What is the valid range of values? How long should the data be? I will discuss these in just a moment.

These limitations will not stop all attacks. There is no argument there. They are just going to help make sure the data is acceptable for the field. It is possible that a cross-site scripting attack string is valid input. That can be ok. You have limited the potential attacks. Then there is the but, remember? In this case, you would have output encoding to protect the app from cross-site scripting.

Let’s look at some of these items in more detail.

Type
One of the easiest checks to implement is checking the data type of the input field. If you are expecting a date data type and the input received is not a valid date then it is invalid and should be rejected. Numbers are also fairly simple to validate to ensure they match the type expected. Many of the most popular frameworks provide methods to determine if a string matches a specific data type. Use these features to validate this type of data. Unfortunately, when the data type is a free form string then the check isn’t as useful. This is why we have other checks that we will perform.
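As a minimal sketch of this idea in Python (most frameworks provide equivalent helpers; the function names here are my own):

```python
from datetime import datetime

def is_valid_date(value: str, fmt: str = "%Y-%m-%d") -> bool:
    """Reject any input that does not parse as a date in the expected format."""
    try:
        datetime.strptime(value, fmt)
        return True
    except ValueError:
        return False

def is_valid_int(value: str) -> bool:
    """Reject any input that is not a whole number."""
    try:
        int(value)
        return True
    except ValueError:
        return False
```

The key design choice is fail-closed: anything that does not parse as the expected type is rejected, rather than trying to enumerate bad inputs.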

Range
You have determined that the input is of the correct type, say a number or a date. Now it is time to validate that it falls within the correct range of acceptable values. Imagine that you have an age field on a form. Depending on your business cases, let's say that a valid age is 0-150. It is a simple step to ensure that the value entered falls into this valid range. Another example is a date field. What if the date represents a signed date and can't be in the future? In this case, it is important to ensure that the value entered is not later than the current day.
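The two range checks above can be sketched like this (the 0-150 rule is the hypothetical business rule from the example, not a universal constant):

```python
from datetime import date

def is_valid_age(age: int) -> bool:
    # Business rule from the example above: a valid age is 0-150.
    return 0 <= age <= 150

def is_valid_signed_date(signed: date) -> bool:
    # A signed date cannot be in the future.
    return signed <= date.today()
```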

Length
This is really more for the free form text fields to validate that the input doesn't exceed the expected length for the field. In many cases, this lines up with the database or back end storage. For example, in SQL Server you may specify a first name field that is 100 characters or a state field that only allows two characters for the abbreviation. Limiting the length of the input can significantly inhibit some of the attacks that are out there. Two characters can be tough to exploit, though still possible depending on the application design.
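A length check is a one-liner; the point is to mirror the back end storage limits so the constants below would come from your actual schema:

```python
FIRST_NAME_MAX = 100  # mirrors the 100-character first name column
STATE_MAX = 2         # two-character state abbreviation

def is_valid_length(value: str, max_len: int) -> bool:
    # Reject anything longer than the storage allows.
    return len(value) <= max_len
```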

Format
Some fields may have a specific format they must match. For example, a social security number may have to match DDD-DD-DDDD. An account number may have a specific format of DDDDDSSS. There are a lot of options for specific formats. If you verify that the data matches the expected format, this can drastically cut down on potential vulnerabilities because many attacks will not be possible when matching the format.
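A sketch of the social security number format check from the example (illustration only; your account number format would get its own pattern):

```python
import re

# DDD-DD-DDDD, the social security number format from the example above.
SSN_RE = re.compile(r"\d{3}-\d{2}-\d{4}")

def is_valid_ssn(value: str) -> bool:
    # fullmatch ensures the ENTIRE input matches, not just a substring.
    return SSN_RE.fullmatch(value) is not None
```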

White Lists
White lists can be a very effective control during input validation. Depending on the type of data that is being accepted it may be possible to indicate that only alphabetical characters are acceptable. In other cases you can open the white list up to other special characters but then deny everything else. It is important to remember that even white lists can allow malicious input depending on the values required to be accepted. 
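A simple white list for a name field might look like this (the allowed set is a hypothetical choice; note the apostrophe is required for names like O'Reilly, which is exactly how valid characters can still carry attack value in other contexts):

```python
import string

# Accept letters plus a few characters common in names; reject everything else.
ALLOWED_NAME_CHARS = set(string.ascii_letters + " '-")

def is_whitelisted_name(value: str) -> bool:
    return len(value) > 0 and all(ch in ALLOWED_NAME_CHARS for ch in value)
```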

Black Lists
Black lists are another control that can be used when there are specifically known bad characters that shouldn’t be allowed. Like white lists, this doesn’t mean that attack payloads can’t get through, but it should make them more difficult.

Regular Expressions
Regular expressions help with white and black lists as well as pattern matching. With the former it is just a matter of defining what the acceptable characters are. Pattern matching is a little different and really aligns more with the format item discussed earlier. Pattern matching is great for free form fields that have a specific pattern, such as a social security number or an email address. These fields have an exact format that can be matched, allowing you to determine if the input is right or not.
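For example, a pattern match for an email field (this pattern is deliberately simplified for illustration; real email validation per RFC 5321/5322 is far more involved):

```python
import re

# Deliberately simplified: something@something.something, no spaces,
# no extra @ signs. Good enough to demonstrate the technique.
EMAIL_RE = re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+")

def looks_like_email(value: str) -> bool:
    return EMAIL_RE.fullmatch(value) is not None
```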

Test Your Validation
Make sure that you test the validation routines to ensure they are working as expected. If the field shouldn’t allow negative numbers, make sure that it rejects negative numbers. If the field should only allow email addresses then ensure that it rejects any pattern that doesn’t match a valid email address.
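A sketch of what those checks might look like as unit tests, using the hypothetical age rule from earlier:

```python
import unittest

def is_valid_age(age: int) -> bool:
    # The validation routine under test (0-150 from the earlier example).
    return 0 <= age <= 150

class AgeValidationTests(unittest.TestCase):
    def test_rejects_negative_numbers(self):
        self.assertFalse(is_valid_age(-1))

    def test_accepts_boundary_values(self):
        self.assertTrue(is_valid_age(0))
        self.assertTrue(is_valid_age(150))

    def test_rejects_values_above_range(self):
        self.assertFalse(is_valid_age(151))
```

Testing the boundaries (0, 150, 151, -1) is the important part; off-by-one errors in validation routines are common.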

Validate Server Side
Maybe this should have been first? Client side validation is a great feature for the end user. They get immediate feedback about what is wrong with their data without a round trip to the server and back. THAT IS EASILY BYPASSED!! It is imperative that the validation is also performed on the server, where the user cannot intercept and bypass it.

Don’t Try To Solve All Problems
Don’t try to solve all output issues with input validation. It is common for someone to try to block cross site scripting with input validation; that is what WAFs do, right? But is that the goal? What if that input isn’t even sent to a browser? Are you also going to try to block SQL Injection, LDAP Injection, Command Injection, XML Injection, Parameter Manipulation, etc. with input validation? Now we are getting back to an overly complex solution when there are other solutions for these issues. These types of items shouldn’t be ignored, and that is why we have the regular expressions and white lists to help decrease the chance that these payloads make it into the system. We also have output encoding and parameterized queries that help with these additional issues. Don’t spend all of your time focusing on input validation and forget about where this data is going and protecting it there.

Input validation is only half of the solution. The other half is implemented when the data is transmitted from the application to another system (i.e. database, web browser, command shell). Just like we don’t want to rely solely on output encoding, we don’t want to rely solely on input validation. Assess the application and its needs to determine what solution is going to provide the best results. If you are not doing any input validation, start as soon as possible. It is your first line of defense. Start small with the Type/Range/Length checks. When those are working as expected, you can move on to the more advanced input validation techniques. Make sure the output encoding routines are also in place.

Don’t overwhelm yourself with validating every possible attack. This is one cog in the system. There should also be other controls like monitoring and auditing that catch things as well. Implementing something for input validation is better than nothing, but don’t let it be a false sense of security. Just because you check the length of a string and limit it to 25 characters, don’t think a payload can’t be sent and exploited. It is, however, raising the bar (albeit just a little). 

Take the time to assess what you are doing for input validation and work with the business to determine what valid inputs are for the different fields. You can’t do fine-grained validation if you don’t understand the data being received. Don’t forget that data comes in many different forms and encodings. Convert all data to a known, canonical encoding before you perform any validation on it.

Filed Under: General Tagged With: app sec, application security, developer security, developer training, input validation, qa, qa security, quality assurance, secure code

March 19, 2022 by James Jardine Leave a Comment

Is encoding really encoding if it is escaping?

The title might be confusing, let’s see if we can clear it up.

I saw an article the other day that was giving a comparison between encoding, encryption and hashing. There was a statement made that basically said:

Encoding has no security purpose.

I thought this was interesting because when training on security topics we mention encoding for specific use cases. For example, when we discuss Cross-Site Scripting, the answer is output encoding.

I want to clarify that I agree with the statement in the article in that encoding does not provide any type of protection regarding confidentiality or anything like that. There is no data protection there. It does start me thinking about encoding vs. escaping.

In the example above, regarding XSS, we really are talking about escaping, right? For SQL Injection we would say to escape the data, not encode it. For XSS we are trying to achieve the same goal: Ensure that control characters are not interpreted, but read as data.

The difference is that for SQL we would escape something like a single quote (') with two single quotes (''). This tells the interpreter to treat the single quote as data (like O'Reilly) instead of treating it like a delimiter around data.

However, in the browser we typically encode characters rather than escape them. Instead of returning a (<) character, I would return (&lt;). This tells the browser to display a (<) character on the page rather than treat it as the beginning of an HTML tag.
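The two techniques side by side, sketched in Python (the `sql_escape` helper is purely illustrative; in real code you would use parameterized queries rather than escaping by hand):

```python
import html

def sql_escape(value: str) -> str:
    # SQL-style ESCAPING: double the quote so it reads as data.
    # Illustration only -- prefer parameterized queries in practice.
    return value.replace("'", "''")

# HTML ENCODING: the browser displays the character instead of parsing it.
print(sql_escape("O'Reilly"))    # O''Reilly
print(html.escape("<script>"))   # &lt;script&gt;
```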

This leads to some confusion when you are following the rules of the interpreter that uses encoding to escape.

When I teach classes, I always use the distinct terminology when I cover these vulnerabilities. SQL Injection uses escaping. XSS is focused on encoding. In the end, the goal is escaping, even though one uses encoding to get there.

This becomes confusing when you want to discuss encoding at a pure level as it tends to have a different meaning depending on the context that you use it.

While encoding and escaping are technically different things, the terms are used almost interchangeably when it comes to things like cross-site scripting. In that context, the encoding actually does serve a security purpose, even though it depends on the interpreter the data is being sent to.

Security can be confusing at times. If you have questions or thoughts about application security, I would love to have a conversation around them. Feel free to reach out to me. Let me know what struggles you have when it comes to appsec.

Filed Under: General Tagged With: application security, AppSec, cross-site scripting, developer training, training, vulnerability, xss

December 15, 2021 by James Jardine Leave a Comment

Log4J – Reflection and Progression

Open any social media platform or pull up any mainstream media and undoubtedly, you have seen many posts/articles talking about the log4j vulnerability. If you haven’t seen this, here is a quick link to catch up https://snyk.io/blog/log4j-rce-log4shell-vulnerability-cve-2021-4428/.

This post is not going to be about log4j, nor is it going to go into any of the details the thousands of other articles out there would go through. Instead, I want to discuss this at a higher level. Log4j is just an example of the risks of using 3rd party components and should be pushing a broader discussion within your organization and team.

The use of Vulnerable and Outdated Components – https://owasp.org/Top10/A06_2021-Vulnerable_and_Outdated_Components/ – is ranked 6th on the OWASP Top 10.

If you have already started dealing with this (and if you haven’t, you should be), you have probably had similar questions as others out there. The biggest question probably every organization and security team had was “Am I vulnerable?”.

This is a great question, but how easy is it to answer?

Am I vulnerable?

If you had one application with minimal dependencies, maybe this is a quick answer. Maybe it is not. As a developer, you may just have responsibility for your application. You might also be able to quickly answer what versions of which dependencies exist in your application. Well, maybe the top-level dependencies.

As an organization, it may not just be custom in-house applications that we are worried about. What about other applications you use within your organization that could be vulnerable to this? Are you using Software as a Service that could be vulnerable? As we start to pull on these different strings, they start to get tangled together.


Filed Under: General, Take-Aways Tagged With: 3rd party component, application security, AppSec, awareness, components, exploit, log4j, owasp, secure code, training, vulnerability, vulnerable component

May 29, 2020 by James Jardine Leave a Comment

Proxying localhost on FireFox

When you think of application security testing, one of the most common tools is a web proxy. Whether it is Burp Suite from Portswigger, ZAP from OWASP, Fiddler, or Charles Proxy, a proxy is heavily used. From time to time, you may find yourself testing a locally running application. Outside of some test labs or local development, this isn’t really that common. But if you do find yourself testing a site on localhost, you may run into a roadblock in your browser. If you are using a recent version of FireFox, when you go into your preferences screen and click on the Network Settings “Settings” button, you might notice the following image:

[Image: Firefox proxy settings noting that connections to localhost are never proxied]

When configuring your proxy, there is a box to list exceptions to not proxy traffic for. In the old days, localhost used to be pre-populated in this box. However, that is not the case anymore. Instead, localhost is explicitly blocked from being proxied. You can see this in the highlighted area of the image above.

So how do you proxy your localhost application? There are a few ways to handle this.

You could set up your hosts file to give a different name to your local website. In this case, you would access the application using your defined hostname, rather than “localhost”.
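For example, a hosts file entry like the following (the name "local.test" is a hypothetical choice; use whatever suits your project):

```
# /etc/hosts (macOS/Linux) or C:\Windows\System32\drivers\etc\hosts (Windows)
127.0.0.1    local.test
```

You would then browse to the application using local.test instead of localhost, which the proxy exception no longer matches.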

Another way to get around this would be to modify the about:config of Firefox and update the network.proxy.allow_hijacking_localhost property to true, as shown in the following image:

[Image: about:config showing network.proxy.allow_hijacking_localhost set to true]

Once this change is made, it will update the network settings screen to no longer block localhost from proxying. The following image shows that this statement is no longer there:

[Image: the network settings screen with the localhost restriction no longer shown]

Filed Under: General Tagged With: application security, AppSec, pen test, pen testing, pentesting, qa, secure development, security testing

February 10, 2020 by James Jardine Leave a Comment

Chrome is making some changes.. are you ready?

Last year, Chrome announced that it was making a change to default cookies to SameSite=Lax if there is no SameSite attribute explicitly set. I wrote about this change last year (https://www.jardinesoftware.net/2019/10/28/samesite-by-default-in-2020/). This change could have an impact on some sites, so it is important that you test this out. The changes are supposed to start rolling out in February (this month). The linked post shows how to force these defaults in both FireFox and Chrome.

In addition to this, Chrome has announced that it is going to start blocking mixed-content downloads (https://blog.chromium.org/2020/02/protecting-users-from-insecure.html). In this case, they are starting in Chrome 83 (June 2020) with blocking executable file downloads (.exe, .apk) that are over HTTP but requested from an HTTPS site.

The issue at hand is that users are misled into thinking the download is secure due to the requesting page indicating it is over HTTPS. There isn’t a way for them to clearly see that the request is insecure. The linked Chrome blog describes a timeline of how they will slowly block all mixed-content types.

For many sites this might not be a huge concern, but this is a good time to check your sites to determine if you have any type of mixed content and ways to mitigate this.

You can identify mixed content on your site by using the JavaScript Console, found under the Developer Tools in your browser. The console will show a warning when it identifies mixed content. There may also be some scanners you can use that will crawl your site looking for mixed content.
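A very rough sketch of what such a scanner looks for (illustration only; the browser console and real scanners check far more than src/href attributes):

```python
import re

# Flag http:// URLs in src/href attributes of a page served over HTTPS.
MIXED_CONTENT_RE = re.compile(
    r"""(?:src|href)\s*=\s*["']http://""", re.IGNORECASE
)

def has_mixed_content(page_source: str) -> bool:
    return MIXED_CONTENT_RE.search(page_source) is not None
```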

To help mitigate this from a high level, you could implement CSP to upgrade insecure requests:

Content-Security-Policy: upgrade-insecure-requests

This can help by upgrading insecure requests, but it is not supported in all browsers. The following post goes into a lot of detail on mixed content and some ways to resolve it: https://developers.google.com/web/fundamentals/security/prevent-mixed-content/fixing-mixed-content

The increase in protections of the browsers can help reduce the overall threats, but always remember that it is the developer’s responsibility to implement the proper design and protections. Not all browsers are the same and you can’t rely on the browser to provide all the protections.

Filed Under: General Tagged With: application security, AppSec, chrome, developer, secure code, secure development, secure testing

October 8, 2019 by James Jardine

Investing in People for Better Application Security

Application security, like any facet of security, is a complex challenge with a mountain of solutions. Of course, no one solution is complete. Even throwing multiple solutions at the problem will never get 100% coverage.

The push today is around devsecops, or pushing left in the SDLC. I am seeing more solutions recommending also pushing right in the SDLC. I feel like we are stuck at this crossroad where the arrow points both ways.

The good news is that none of these recommendations are wrong. We do need to push left in the SDLC. The sooner we address issues, the better off we are. Not introducing a vulnerability in the first place is the best case scenario. Unfortunately, we also know that is an unrealistic assumption. So this brings us to pushing right. Here, we are looking to quickly identify issues after they are introduced and, in some cases, actively block attacks. Of course, let’s not leave out that automation is our key to scalable solutions as we build and deploy our applications.

Much of what we focus on is bringing in some form of tool. Tools are great. They take the mundane, repetitive work off of our plate. Unfortunately, they can’t do it all. In fact, many tools need someone who has at least some knowledge of the subject. This is where the people come in.

Over the years, I have worked with many companies as both a developer and an application security expert. I have seen many organizations put a lot of effort into building an application security team, focused on managing everything application security. Oftentimes, this team is separate from the application development teams. This can create a lot of friction. With the main focus on the application security team, many organizations don’t put as much effort into the actual application development teams.

How does your organization prepare developers, business analysts, project managers and software testers to create secure applications?

In my experience, the following are some common responses. Please feel free to share with me your answers.

  • The organization provides computer based training (CBT) modules for the development teams to watch.
  • The organization sends a few developers to a conference or specialized training course and expects them to brief everyone when they return.
  • The organization brings in an instructor to give an in-house 2-3 day training class on secure development (once a year).
  • The organization uses its security personnel to provide secure development training to the developers.
  • The organization provides SAST or DAST tools, but the results are reviewed by the security team.
  • The organization has updated the SDLC to include security checkpoints, but no training is provided to the development teams.
  • The organization doesn’t provide any training on security for the development teams.

By no means is this an exhaustive list, just some of the more common scenarios I have seen. To be fair, many of these responses have a varying range of success across organizations. We will look at some of the pitfalls of many of these approaches in future articles.

The most important point I want to make is that the development teams are the closest you can get to the actual building of the application. The business analysts are helping flesh out requirements and design. The developers are writing the actual code, dealing with different languages and frameworks. The QA team, or software testers, are on the front line of testing the application to ensure that it works as expected. These groups know the application inside and out. Helping them understand the types of risk they face and techniques to avoid them is crucial to any secure application development program.

My goal is not, let me repeat, NOT, to turn your application development teams into “security” people. I see this concept floating around on social media and I am not a fan. Why? First and foremost, each of you has your own identity, your own role. If you are a developer, you are a developer, not a security person. If you are a software tester, don’t try to be a security person. In these roles, you have a primary role and security is a part of that. It is not the defining attribute of your tasks.

Instead, the goal is to make you aware of how security fits into your role. As a software tester, the historical goals focused on ensuring that the application functions as expected, validating use cases. When we start to think about security within our role, we start to look at abuse cases. There becomes a desire to ensure that the application doesn’t act in certain ways. Sometimes this is very easy, and other times it may be beyond our capabilities.

Take the software tester again. The goal is not to turn you into a penetration tester. That role requires much more in-depth knowledge, and honestly, should be reserved for looking for the most complex vulnerabilities. This doesn’t mean that you can’t do simple tests for Direct Object Reference by changing a simple ID in the URL. It doesn’t mean that you don’t understand how to do some simple checks for SQL Injection or Cross-site Scripting. It does mean you should be able to understand what the common vulnerabilities are and how to do some simple tests for them.

If you invest in your people correctly, you will start to realize how quickly application security becomes more manageable. It becomes more scalable. The challenge becomes how to efficiently train your people to provide the right information at the right time. What type of support are you looking for? Is it the simple CBT program, or do you desire something more fluid and ongoing that provides continuing support for your most valuable assets?

Different programs work for different organizations. In all cases, it is important to work with your teams to identify what solution works best for them. Providing the right type of training, mentorship, or support at the right time can make a huge impact.

Don’t just buy a training solution; look for a partner in your development teams’ training efforts. A partner that gets to know your situation, that is available at the right times, and that builds an ongoing relationship with the team.

Filed Under: General Tagged With: application security, application security program, developer awareness, developer training, secure code, secure development, security training, training

