DevelopSec

secure coding

October 8, 2019 by James Jardine

What is the difference between source code review and static analysis?

Static analysis is the process of using automation to analyze an application’s code base for known security patterns. It uses different methods, such as following data from its source (input) to its sink (output), to identify potential weaknesses. It also uses simple search methods in an attempt to identify hard-coded values, like passwords, in the code. Automated tools struggle to find business logic or authentication/authorization flaws.
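
To make the source-to-sink idea more concrete, here is a minimal, hypothetical sketch (Node.js/Express, with invented route and variable names) of the kinds of patterns a static analysis tool typically flags:

const express = require('express');
const app = express();

// Hard-coded credential: simple pattern matching flags values like this.
const dbPassword = "SuperSecret123";

app.get('/search', function (req, res) {
	// Source: untrusted user input.
	var term = req.query.q;

	// Sink: the input is written to the response without encoding.
	// A source-to-sink (data flow) analysis reports this as potential
	// reflected cross-site scripting.
	res.send("<p>Results for " + term + "</p>");
});

app.listen(3000);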

Code review is a much larger effort where both automated and manual techniques are used to review a code base for potential security risks. A code review may incorporate a static analysis component to quickly scan for some types of vulnerabilities. The manual part of the secure code review involves a person looking at the code to identify flaws that automated scanners cannot. For example, they may examine the authentication logic for potential logic flaws, or look for authorization and business logic flaws that automated tools struggle to find.
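
As a hypothetical example of the kind of flaw a manual reviewer hunts for (building on the Express app above; the route, middleware, and data access names are invented), consider a lookup that never verifies ownership of the record:

// The query itself looks safe to a scanner, but nothing checks that the
// invoice actually belongs to the logged-in user (an authorization flaw).
app.get('/invoice/:id', requireLogin, async function (req, res) {
	var invoice = await db.getInvoiceById(req.params.id);
	// Missing: if (invoice.ownerId !== req.user.id) return res.sendStatus(403);
	res.json(invoice);
});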

Filed Under: Questions Tagged With: application security, code review, development, sast, secure code review, secure coding, secure development, secure sdlc, security, testing

November 9, 2017 by James Jardine

XSS in a Script Tag

Cross-site scripting is a pretty common vulnerability, even with many of the new advances in UI frameworks. One of the first things we mention when discussing the vulnerability is to understand the context. Is it HTML, attribute, JavaScript, etc.? Knowing the context helps us identify the types of characters that can be used to expose the vulnerability.

In this post, I want to take a quick look at placing data within a <script> tag. In particular, I want to look at how embedded <script> tags are processed. Let’s use a simple web page as our example.

<html>
	<head>
	</head>
	<body>
	<script>
		var x = "<a href=test.html>test</a>";
	</script>
	</body>
</html>

The above example works as we expect. When you load the page, nothing is displayed. The link tag embedded in the variable is treated as a string, not parsed as a link tag. What happens, though, when we embed a <script> tag?

<html>
	<head>
	</head>
	<body>
	<script>
		var x = "<script>alert(9)</script>";
	</script>
	</body>
</html>

In the above snippet, nothing actually happens on the screen, meaning the alert box does not trigger. This often misleads people into thinking the code is not vulnerable to cross-site scripting. If the link tag was not processed, why would the script tag be? In many situations, the assumption is that we need to break out of the (“) delimiter to start writing our own JavaScript commands. For example, a payload of (test”;alert(9);t = “) would break out of the x variable and add new JavaScript commands. Of course, this doesn’t work if the (“) character is properly encoded to prevent breaking out.
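
To see why, picture that payload reflected into the variable. The rendered page would now contain:

<script>
	var x = "test";alert(9);t = "";
</script>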

Going back to our previous example, we may have overlooked something very simple. The script wasn’t failing to execute because it wasn’t being parsed. Instead, it wasn’t executing because our JavaScript was broken: the browser’s HTML parser ends the outer script block at the first </script> it encounters, even inside a string, so attempting to open a <script> within a <script> leaves invalid JavaScript behind. What if we modify our value to the following:

<html>
	<head>
	</head>
	<body>
	<script>
		var x = "</script><script>alert(9)</script>";
	</script>
	</body>
</html>

In the above code, we are first closing out the original <script> tag and then starting a new one. This removes the nested-script problem, and when the page is loaded, the alert box appears.

This technique works in many places where a user can control the text returned within the <script> element. Of course, the important remediation step is to make sure that data is properly encoded when returned to the browser. Content Security Policy may not be an immediate solution by default, since this situation indicates that inline scripts are allowed. However, limiting inline scripts to ones with a registered nonce would help prevent this technique. This reference shows setting the nonce (https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/script-src).
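
As an illustration (the nonce value is just a placeholder), the response header and the legitimate inline script share the same nonce, so the injected <script> block, which carries no nonce, is refused by the browser:

Content-Security-Policy: script-src 'nonce-rAnd0m123'

<script nonce="rAnd0m123">
	var x = "user controlled data";
</script>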

When testing our applications, it is important to focus on the lack of output encoding and less on the ability to fully exploit a situation. Our secure coding standards should identify the types of encoding that should be applied to outputs. If the encoding is not properly implemented, we can cite it as a violation of our standards.

Filed Under: General Tagged With: application security, AppSec, cross-site scripting, developer security, secure coding, secure development, security, security awareness, xss

May 11, 2017 by James Jardine

Properly Placing XSS Output Encoding

Cross-Site Scripting flaws, as well as other injection flaws, are pretty well understood. We know how they work and how to mitigate them. One of the key factors in mitigating these flaws is output encoding or escaping. For SQL injection, we use parameterized queries. For cross-site scripting, we use context-sensitive output encoding.

In this post, I don’t want to focus on the how of output encoding for cross-site scripting. Rather, I want to focus on when in the pipeline it should be done. Over the years I have had a lot of people ask if it is OK to encode the data before storing it in the database. I recently came across this insufficient solution and thought it was a good time to address it again. I will look at a few cases that show why the solution is insufficient and explain a better approach.

The Database Is Not Trusted

The database should not be considered a trusted resource. This is a common oversight in many organizations, most likely because the database is internal. It lives in a protected portion of your production environment. While that is true, many sources have access to it. Even if it is just your application that uses the database today, there are still administrators and update scripts that most likely access it. We can’t rule out the idea of a rogue administrator, even if the chance is very slim.

We also may not know if other applications access our database. What if there is a mobile application that has access? We can’t guarantee that every source of data is going to properly encode the data before it gets sent to the database. This leaves us with a gap in coverage and a potential for cross-site scripting.

Input Validation May Not Be Enough

I always recommend that developers have good input validation. Of course, the term good is different for everyone. At the basic level, your input validation may limit a few characters. At the advanced level, it may try to limit different types of attacks. In some cases, it is difficult to use input validation to eliminate all cross-site scripting payloads. Why? Due to the different contexts and the types of data that some forms accept, a payload may still sneak by. Are you protecting against all payloads across all the different contexts? How do you know what context the data will be used in?
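
As a rough sketch of what basic validation might look like (the field and the rules are invented for illustration), note that it limits characters but says nothing about how the value must be encoded for a given output context:

// Allow-list check for a display name: letters, digits, spaces, and a few
// punctuation characters, up to 50 characters long.
function isValidDisplayName(value) {
	return typeof value === "string" && /^[A-Za-z0-9 .,'-]{1,50}$/.test(value);
}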

In addition to potentially missing a payload, what about when another developer creates a function and forgets to include the input validation? Even more likely, what happens when a new application starts accessing your database and doesn’t perform the same input validation? That puts us back into the scenario described in the previous section: the database isn’t trusted.

You Don’t Know the Context

When receiving and storing data, the chances are good that I don’t know where the data will be used. What is the context of the data when output? Is it going into a span tag, an attribute, or even straight into JavaScript? Will it be used by a reporting engine? The context matters because it determines how we encode the data. Different contexts are concerned with different characters. Can I just throw data into the database with an HTML encoding context applied? At that point, I am transforming the data at a time when no transformation is required. Cross-site scripting doesn’t execute in my SQL column.

A Better Approach

As you can see, the above techniques are useful; however, they are insufficient for multiple reasons. I recommend performing the output encoding immediately before the data is actually used. By that, I mean encoding right before it is output to the client. This way it is very clear what the context is, and the appropriate encoding can be implemented.

Don’t forget to perform input validation on your data, but remember it is typically not meant to stop all attack scenarios. Instead, it is there to help reduce them. We must be aware that other applications may access our data, and those applications may not follow the same procedures we do. Because of this, making encoding decisions at the last moment provides the best coverage. Of course, this relies on the developers remembering to perform the output encoding.
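
Here is a minimal sketch of that approach (the app, database helper, and route are invented, and the helper only covers the HTML body context): the value is stored and retrieved untouched, and encoding happens only at the point of output, where the context is known.

function htmlEncode(value) {
	return String(value)
		.replace(/&/g, "&amp;")
		.replace(/</g, "&lt;")
		.replace(/>/g, "&gt;")
		.replace(/"/g, "&quot;")
		.replace(/'/g, "&#x27;");
}

app.get('/profile', async function (req, res) {
	// Retrieved exactly as it was stored, no encoding applied yet.
	var displayName = await db.getDisplayName(req.user.id);
	// Encoded here, immediately before output, where the HTML context is known.
	res.send("<h1>Welcome, " + htmlEncode(displayName) + "</h1>");
});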

Filed Under: General Tagged With: application security, AppSec, developer awareness, developer security, secure coding, secure development, xss

March 3, 2017 by James Jardine

Using the AWS disruption to your advantage

By now you have heard of the Amazon issues that plagued many websites a few days ago. I want to talk about one key part of the issue that often gets overlooked. If you read through their message describing the service disruption (https://aws.amazon.com/message/41926/) you will notice a section where they discuss some changes to the tools they use to manage their systems.

So let’s take a step back for a moment. Amazon attributed the service disruption to a simple mistake when executing a command. Although an established playbook was being followed, something was entered incorrectly, leading to the disruption. I don’t want to get into all the details about the disruption; rather, let’s fast forward to considering the tools that are used.

Whether we are developing applications for external use or tools used to manage our systems, we don’t always think about all the possible threat scenarios. Sure, we think about some of the common threats and put in some simple safeguards, but what about some of those more detailed issues?

In this case, the tools allowed manipulating sub-systems in ways that shouldn’t have been possible. They allowed shutting systems down too quickly, which caused some issues. This is no different than many of the tools that we write. We understand the task and create something to make it happen. But then our systems get more complicated. They pull in other systems, which have access to yet other systems. A change to something directly accessible has an effect on a system three hops away.

Now, the chances of this edge case happening are probably pretty slim. The chances that this case was missed during design are pretty good, especially if you are not regularly reassessing your systems. What may not have been possible two years ago, when the system was created, is possible today.

Even with Threat Modeling and secure design, things change. If the tool isn’t being updated, then the systems it controls are. We must have the ability to learn from these situations and identify how we can apply them to our own environments. How many tools have you created that can control your systems, where a single wrong click or incorrect parameter can create a huge headache? Do you understand your systems well enough to know whether doing something too fast or too slow can cause a problem? What about the effect that shutting one machine down has on other machines? Are there warning messages for actions that may have adverse effects? Are there safeguards in place in case someone does something out of the norm?
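
As a hypothetical sketch of one such safeguard (the function name and threshold are invented for illustration), a management script could refuse an unusually large change unless the operator explicitly confirms it:

// Refuse to remove a large share of capacity without explicit confirmation.
function removeServers(requestedCount, totalCount, confirmed) {
	var maxSafeFraction = 0.1; // invented threshold for illustration
	if (requestedCount > totalCount * maxSafeFraction && !confirmed) {
		throw new Error("Refusing to remove " + requestedCount + " of " +
			totalCount + " servers without explicit confirmation.");
	}
	// ...proceed with the removal...
}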

We are all busy, and going back through working systems isn’t a high priority. These tools/systems may require the addition of new features, which makes for a perfect time to reassess them. Like everything in security, peripheral vision is very important. When working on one piece, it is always good to peek at the other code around it. Take the time to verify that everything is up to date and as expected. Note any changes and determine if any enhancements or redesign is in order. Most of the time, these are edge-case scenarios, but they can have a big impact if they occur.

Jardine Software helps companies get more value from their application security programs. Let’s talk about how we can help you.

James Jardine is the CEO and Principal Consultant at Jardine Software Inc. He has over 15 years of combined development and security experience. If you are interested in learning more about Jardine Software, you can reach him at james@jardinesoftware.com or @jardinesoftware on twitter.

Filed Under: News, Take-Aways Tagged With: application security, AppSec, secure code, secure coding, secure development, security, security testing

October 12, 2016 by James Jardine

Insulin Pump Vulnerability – Take-aways

It was recently announced that there were a few vulnerabilities found with some insulin pumps that could allow a remote attacker to cause the pump to distribute more insulin than expected. There is a great write up of the situation here. When I say remote attack, keep in mind that in this scenario, it is someone that is within close proximity to the device. This is not an attack that can be performed via the Internet.

This situation creates an excellent learning opportunity for anyone that is creating devices, or that has devices already on the market. Let’s walk through a few key points from the article.

The Issue

The issue at hand was that the device didn’t perform strong authentication and that the communication between the remote and the device was not encrypted. Keep in mind that this device was rolled out back in 2008. That is not to say that encryption wasn’t an option 8 years ago, but for these types of devices it may not have been as mainstream. We talk a lot about encryption today. Whether it is for IoT devices, mobile applications or web applications, the message is the same: encrypt the communication channel. This doesn’t mean that encryption solves every problem, but it is a control that can be very effective in certain situations.

This instance is not the only one we have seen with unencrypted communications that allowed remote execution. It wasn’t long ago that there were computer mice and keyboards that also had the same type of problem. It also won’t be the last time we see this.

Take the opportunity to look at what you are creating and start asking questions about communication encryption and authentication. We are past the time when we can assume the protocol can’t be reversed or that it requires special equipment. If you have devices in place currently, go back and see what you are doing. Is it secure? Could it be better? Are there changes we need to make? If you are creating new devices, make sure you are thinking about these issues. Communication is high on the list of security concerns for all devices. Make sure it is considered.

The Hype, or Lack of

I didn’t see a lot of extra hype over the disclosure of this issue. Oftentimes, in security, we have a tendency to exaggerate things a bit. I didn’t see that here, and I like it. At the beginning of this post I mentioned the attack is remote, but still requires close physical proximity. Is it an issue? Yes. Is it a high priority issue? That depends on the functionality achieved. As a matter of fact, the initial post about the vulnerabilities states they don’t believe it is cause for panic and the risk of wide scale exploitation is relatively low. Given the circumstances, I would agree. They even go on to say they would still use this device for their children. I believe this is important because it highlights that there is risk in everything we do, and it is our responsibility to understand it and choose whether to accept it.

The Response

Johnson and Johnson released an advisory to their customers alerting them of the issues. In addition, they provided potential options if these customers were concerned over the vulnerabilities. You might expect that the response would downplay the risk, but I didn’t get that feeling. They list the probability as extremely low. I don’t think there is dishonesty here. The statement is clear and understandable. The key component is the offering of some mitigations. While these may provide some inconvenience, it is positive to know there are options available to help further reduce the risk.

Attitude and Communication

It is tough to tell just by reading articles about a situation, but it feels like the attitudes were positive throughout the process. Maybe I am way off, but let’s hope that is true. As an organization, it is important to be open to input from third parties when it comes to the security of our devices. In many cases, the information is being provided to help, not to harm. If you are the one receiving the report, take the time to actually read and understand it before jumping to conclusions.

As a security tester, it is important for the first contact to be a positive one. This can be difficult if you have had bad experiences in the past, but don’t judge everyone based on previous experiences. When the communication starts on a positive note, the chances are better it will continue that way. Starting off with a negative attitude can bring a disclosure to a screeching halt.

Conclusion

We are bound to miss things when it comes to security. In fact, you may have made a decision that years down the road turns out to be incorrect. Maybe it wasn’t at the time, but technology changes quickly. We can’t just write off issues; we must understand them. Understand the risk and determine the proper course of action. Being prepared for the unknown can be difficult, but keeping a level head and staying honest will make these types of engagements easier to handle.


Filed Under: Uncategorized Tagged With: application security, AppSec, penetration testing, secure coding, secure design, security, security research, security testing

January 14, 2016 by James Jardine

Password Storage Overview

Start reading the news and you are bound to read about another data breach involving user credentials. Whether or not you get any details about how the stolen passwords were stored, we can assume that in many of these cases they were not well protected. Maybe they were stored in clear text (no, it can’t be true), or used weak hashes. Passwords hold the key to our access to most applications. What are you doing to help protect them?

First, let’s just start by recommending that users don’t re-use their passwords, ever. Don’t share them across applications. Just don’t. Pick a password manager or some other mechanism to allow unique passwords for each application. That doesn’t mean using summer2015 for one site and summer2016 for another. Don’t have them relatable in any way.

Now that we have that out of the way, how can applications better protect their passwords? Let’s look at the storage options:

  • Clear Text
  • Encrypted
  • Hashed

Clear Text

While this may seem harmless, since the data is stored in your highly secured, super secret database, it is looked down upon. As a developer, it may seem easy to throw a prototype together with the password in plain text. You don’t have to worry about whether your storage algorithm is working properly, or about forgetting your password as you are building up the feature set. It is right there in the database. While the convenience may be there, this is just a bad idea. Remember that a password should only be known by the user that created it. It should also not be shared. Take an unauthorized breach out of the picture for a moment: the password could still be viewed by certain admins of the system or database administrators. Bring a breach back into the picture, and now the passwords can be easily seen by anyone, with no effort. There are no excuses for storing passwords as plain text. By now, you should know better.

Encrypted

Another option commonly seen is the use of reversible encryption. With encryption, the data is unreadable, but with the right keys, it is possible to read the actual data, in this case the real password. The encryption algorithm used can make a difference in how long it takes to reverse the data if it falls into the wrong hands. Like clear text storage, the passwords are still exposed to certain individuals that have access to the encrypt/decrypt routines. There shouldn’t be a need to ever see the plain-text version of the password again, so why store it in a way that allows it to be decrypted? Again, this is not recommended for strong, secure password storage.

Hashing

Hashing is the process of transforming data so that it is unreadable and not reversible. Unlike encrypted data, a hashed password cannot be transformed back to the original value. To crack a hashed password, you would typically run password guesses through the same hashing algorithm, looking for the same result. If you find a plain-text value that creates the same hash, then you have found the password.

Depending on the hashing algorithm you use, it can be easy or very difficult to identify the password. Using MD5 with no salt is typically very easy to crack. Using a strong salt and SHA-512 may be a lot harder to crack. Using PBKDF2 or bcrypt, which include work factors, can be extremely difficult to crack. As you can see, not all hashing algorithms are created equal.

The Salt

The salt, mentioned earlier, is an important factor in hashing passwords because it creates uniqueness. First, a unique, randomly generated salt will make sure that if two users have the same password, they will not have the same hash. The salt is appended to the password, basically creating a new password, before the data is hashed. Second, a unique salt makes it more difficult for people cracking passwords because they have to regenerate hashes for all their password lists using a new salt for each user. Generating a database of hashes based on a dictionary of passwords is known as creating a rainbow table, which makes for quicker lookups of hashes. A unique salt negates any rainbow table already generated and requires creating a new one for each user (which can cost a lot of time).

The salt should be unique for each user, randomly generated, and of sufficient length.

The Algorithm

As I mentioned earlier, using MD5 is probably not a good idea. Take the time to do some research on hashing algorithms to see which ones have collision issues or have been considered weak. SHA-256 or SHA-512 are better choices, obviously going with the strongest first if it is supported. As time goes on and computers get faster, algorithms start to weaken. Do your research.

Work Factor

The new standard being implemented and recommended is adding not only a salt, but also a work factor to the process of hashing the password. This is currently done by using PBKDF2, bcrypt, or scrypt. The work factor slows the hashing process down. Keep in mind that hashing algorithms are fast by nature. In the example of PBKDF2, iterations are added. That means that instead of just appending a salt to the password and hashing it once, it gets hashed 1,000 or 10,000 times. These iterations increase the work factor and can slow down the ability of an attacker to loop through millions of potential passwords. When comparing bcrypt to scrypt, the big difference is where the work factor applies. For bcrypt, the work factor affects the CPU. For scrypt, the work factor is memory based.
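
As a minimal sketch using Node.js’s built-in crypto module (the iteration count and key length here are illustrative values only; tune them for your environment):

const crypto = require('crypto');

// A unique, randomly generated salt for this user.
const salt = crypto.randomBytes(16);

// PBKDF2: the iteration count is the work factor.
const iterations = 100000;
const derivedKey = crypto.pbkdf2Sync("user-supplied-password", salt, iterations, 64, "sha512");

// Store the salt, the iteration count, and the derived key; all three are
// needed to verify a login attempt later.
console.log(salt.toString('hex'), iterations, derivedKey.toString('hex'));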

Moving forward, you will start to see a lot more of PBKDF2, bcrypt, and scrypt for password storage. If you are implementing these, test them out with different work factors to see what works for your environment. If 10,000 iterations is too slow on the login page, try decreasing it a little. Keep in mind that comparing a hash to verify a user’s credentials should not happen very often, so the hit is typically on initial login only.

Conclusion

Password storage is very important in any application. Take a moment to look at how you are doing this in your apps and spread the word to help educate others on more secure ways of storing passwords. Not only could storing them incorrectly be embarrassing or harmful if they are breached, it could lead to issues with different regulatory requirements out there. For more information on storing passwords, check out OWASP’s Password Storage Cheat Sheet.

Keep in mind that if users use very common passwords, even with a strong algorithm, they can be cracked rather quickly. This is because a list of common passwords will be tried first. Creating the hash of the top 1000 passwords won’t take nearly as long as a list of a million or billion passwords.

Filed Under: General Tagged With: application security, developer, developer security, owasp, password, password storage, secure code, secure coding, security testing
