
MySpace Account Takeover – Take-aways

Have you ever forgotten your password, or lost access to your accounts? I know I have. The process of getting your access back can range from very easy to quite difficult. In one case, I had an account that required that a PIN code be physically mailed to me in 7-10 days. Of course, this was a financial account that required extra protections.

I came across this article (https://www.wired.com/story/myspace-security-account-takeover/) that identified that MySpace’s process for regaining access to an account was easily bypassed with just a few pieces of information that are commonly easy to find.

According to the issue reported, the following details were needed to gain control of an account:

  • Account Holder’s Name
  • Username
  • Date of Birth

In addition, the email address was part of the form, but was not actually validated.

With the amount of information available on social media platforms, and even through past breaches, it is very difficult to come up with good questions to help validate that a user is who they say they are. In this case, the first two pieces of information are reportedly available on a user’s account page and are not difficult to find. It is critical that we give great consideration to the way we attempt to validate a user’s identity.

Security or secret questions have been known to be weak for a long time. While they can provide some assistance in identifying the user, they should not be used alone. One recommendation is to include a side channel for some of the verification. For example, requiring a user to receive an email at the address on record before answering these questions helps reduce some of the risk. This method now requires that an attacker gain access to the user’s email account to receive a temporary account reset link. Although this is possible, it adds another hurdle to help protect the user’s account.

Similarly, you could look to send a code to a phone number that is currently on record. Like email, this would require intercepting the phone message, adding difficulty to the process. One of the key points here is that you only want to send to an email address or phone number that is already on record. Never allow the user to provide a new email address or phone number during recovery, as an attacker could supply one they control and easily take over the account.
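As a rough sketch of what that side-channel verification might look like, here is a Python example. The storage and mail helpers are hypothetical stand-ins for your own persistence layer and mail service, and the reset URL is illustrative:

```python
import hashlib
import secrets
from datetime import datetime, timedelta

# Hypothetical stand-ins for your own persistence layer and mail service.
def store_reset_token(user_id, token_hash, expires_at): ...
def send_email(address, body): ...

def start_account_recovery(user_id, email_on_record):
    """Send a single-use reset link to the address already on record."""
    token = secrets.token_urlsafe(32)  # unguessable, single-use value

    # Store only a hash so a database leak does not expose live tokens.
    token_hash = hashlib.sha256(token.encode()).hexdigest()
    store_reset_token(user_id, token_hash,
                      expires_at=datetime.utcnow() + timedelta(minutes=30))

    # The address comes from the existing record, never from the form.
    send_email(email_on_record,
               f"Reset your account: https://example.com/reset?token={token}")
```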

There was another piece of this that should be considered: the unvalidated email address. The request for the email address in this reset process may have been meant as another verification of account information. However, as is commonly seen, the value wasn’t actually validated on the server. This is an easy oversight to make when working with web forms.

Remember that validation should always be performed on the server. This is because it is simple to bypass client-side validation with the use of simple add-ins or a web proxy.
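As a minimal sketch of that server-side check in Python (the handler and the deliberately simple email pattern are illustrative assumptions):

```python
import re

# Intentionally simple pattern for illustration; real-world email
# validation rules vary.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def handle_recovery_form(form: dict) -> str:
    """Server-side checks run even if client-side JavaScript was bypassed."""
    email = (form.get("email") or "").strip()

    # Reject missing or malformed values here; a web proxy can strip any
    # validation that only lives in the browser.
    if not EMAIL_RE.fullmatch(email):
        raise ValueError("A valid email address is required.")
    return email
```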

Validation of data fields should be a standard part of QA testing. Ensure that when testing forms, such as account recovery, the form reacts appropriately to many types of input. This includes what happens if a field is not provided or if it does not line up with the expected answer. This same situation has been seen on other sites where the current password may be required to update the password, but is not actually verified.

Stories like this remind us to review these functions and features within our applications. For older applications, it may have been a while since we touched these features, and the way they were originally designed may no longer be recommended. Take a moment to look back at your account recovery process to make sure it is up to par with your current requirements.

Looking for some help with your application security program? Just want someone to talk to about these types of topics? Reach out to us. We are here to help and offer a wide range of services to help make your day easier. Reach out to james@developsec.com for more information.

Properly Placing XSS Output Encoding

Cross-site scripting flaws, as well as other injection flaws, are pretty well understood. We know how they work and how to mitigate them. One of the key factors in mitigating these flaws is output encoding or escaping. For SQL injection, we use parameterized queries. For cross-site scripting, we use context-sensitive output encoding.
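As a quick illustration of the SQL side, here is a minimal sketch using Python’s built-in sqlite3 module (the table and column names are made up):

```python
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # The ? placeholder keeps the value out of the SQL grammar entirely,
    # so no manual escaping of the input is ever needed.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchone()
```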

In this post, I don’t want to focus on the how of output encoding for cross-site scripting. Rather, I want to focus on when in the pipeline it should be done. Over the years I have had a lot of people ask if it is ok to encode the data before storing it in the database. I recently came across this insufficient solution again and thought it was a good time to address it. I will look at a few different cases that show why the solution falls short and then explain a better approach.

The Database Is Not Trusted

The database should not be considered a trusted resource. This is a common oversight in many organizations, most likely due to the fact that the database is internal. It lives in a protected portion of your production environment. While that is true, many sources have access to it. Even if it is just your application that uses the database today, there are still administrators and update scripts that most likely access it. We can’t rule out the idea of a rogue administrator, even if the chance is very slim.

We also may not know if other applications access our database. What if there is a mobile application that has access? We can’t guarantee that every source of data is going to properly encode the data before it gets sent to the database. This leaves us with a gap in coverage and a potential for cross-site scripting.

Input Validation May Not Be Enough

I always recommend that developers implement good input validation. Of course, the term “good” means something different to everyone. At the basic level, input validation may only restrict a few characters. At the advanced level, it may try to block different types of attacks. In some cases, it is difficult to use input validation to eliminate all cross-site scripting payloads. Why? Due to the different contexts and the types of data that some forms accept, a payload may still squeak by. Are you protecting against all payloads across all the different contexts? How do you know what context the data will be used in?

In addition to potentially missing a payload, what about when another developer creates a function and forgets to include the input validation? Even more likely, what happens when a new application starts accessing your database and doesn’t perform the same input validation? It puts us back into the same scenario described in the section above: the database isn’t trusted.

You Don’t Know the Context

When receiving and storing data, the chances are good that I don’t know where the data will be used. What is the context of the data when output? Is it going into a span tag, an attribute, or even straight into JavaScript? Will it be used by a reporting engine? The context matters because it determines how we encode the data. Different contexts are concerned with different characters. Can I just throw data into the database already encoded for an HTML context? At that point, I am transforming the data at a time when no transformation is required. Cross-site scripting doesn’t execute in my SQL column.

A Better Approach

As you can see, the above techniques are useful; however, they are insufficient on their own for multiple reasons. I recommend performing the output encoding immediately before the data is actually used. By that, I mean encoding right before the data is output to the client. This way it is very clear what the context is and the appropriate encoding can be implemented.
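As a minimal sketch of what that looks like, using Python’s standard library (the function names are illustrative), notice how the same value gets a different transformation depending on where it lands:

```python
import html
import json

def render_comment_span(comment: str) -> str:
    # HTML body context: encode &, <, >, and quotes right before output.
    return f"<span>{html.escape(comment)}</span>"

def render_comment_script(comment: str) -> str:
    # JavaScript context: a JSON string literal needs different escaping
    # than an HTML body, which is exactly why the context must be known.
    safe = json.dumps(comment).replace("</", "<\\/")  # keep '</script>' from closing the tag
    return f"<script>var c = {safe};</script>"
```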
Don’t forget to perform input validation on your data, but remember it is typically not meant to stop all attack scenarios. Instead, it is there to help reduce them. We must be aware that other applications may access our data, and those applications may not follow the same procedures we do. Because of this, making encoding decisions at the last moment provides the best coverage. Of course, this relies on the developers remembering to perform the output encoding.

Application Security and Responsibility

Who is responsible for application security within your organization? While this is something I don’t hear asked very often, when I look around the implied answer is the security team. This isn’t just limited to application security either. Look at network security. Who, in your organization, is responsible for network security? From my experience, the answer is still the security group. But is that how it should be? Is there a better way?

Security has spent a lot of effort to take on and accept all of this responsibility. How often have you heard that security is the gatekeeper to any production release? Security has to test your application first. Security has to approve any vulnerabilities that may get accepted. Security has to ….

I won’t argue that the security group has a lot of responsibility when it comes to application security. However, they shouldn’t have all of it, or even a majority of it. If we take a step back for a moment, let’s think about how applications are created. Applications are created by application teams consisting of app owners, business analysts, developers, testers, project managers, and business units. Yet, when there is a security risk with the application, it is typically the security group under fire. The security group typically doesn’t have any ability to write or fix the application, and they shouldn’t. There is a separation, but are you sure you know where it is?

I have done a few presentations recently where I focus on getting application teams involved in security. I think it is important for organizations to think about this topic because for too long we have tried to separate the duties at the wrong spot.

The first thing I like to point out is that the application development teams are smart, really smart. They are creating complex business functions that drive most organizations. We need to harness this knowledge rather than trying to augment it with other people. You might find this surprising, but most application security tools have GUIs that anyone on your app dev teams can use with little experience. Yet, most organizations I have worked with have the security group running the security tools (such as Veracode, Checkmarx, WhiteHat, Contrast, etc.). This is an extra layer that just decreases the efficiency of the process.

By getting the right resources involved with some of these tools and tasks, it not only gets security closer to the source, but it also frees up the security team for other activities. Moving security into the development process increases efficiency. Rather than waiting on a scan by the security team, the app team can run the scans and get the results more quickly. Even better, they can build it into their integration process and most likely automate much of the work. This changes the security team to be reserved for the more complex security issues. It also makes the security team more scalable when they do not have to just manage tools.

I know what you are thinking… but the application team doesn’t understand security. I will give it to you; in many organizations this is very true. But why? Here we have identified what the problem really is. Currently, security tries to throw tools at the issue and manage those tools. But the real problem is that we are not focusing on maturing the application teams. We attempt to separate security from the development lifecycle. I did a podcast discussing current application security training for development teams.

Listen to the podcast on AppSec Training

Everyone has a responsibility for application security, but we need to put a bigger focus on the application teams and getting them involved. We cannot continue to just hurl statements over the fence about getting these teams involved. We say to implement security into the SDLC, but rarely do we define these items. We say to educate the developers, but typically just provide offensive security testing training, 1-2 days a year. We are not taking the time to identify how they work, how their processes flow, etc., to determine how to address the problem.

Take a look at your program and really understand it. What are you currently doing? Who is performing what roles? What resources do you have and are you using them effectively? What type of training are you providing and is it effective regarding your goals?

James Jardine is the CEO and Principal Consultant at Jardine Software Inc. He has over 15 years of combined development and security experience. If you are interested in learning more about Jardine Software, you can reach him at james@jardinesoftware.com or @jardinesoftware on Twitter.

Originally posted at https://www.jardinesoftware.com

ImageMagick – Take-aways

Do your applications accept file uploads? More specifically, image uploads? Do you use a site that allows you to upload images? If you haven’t been following the news lately, there were recently a few vulnerabilities found in the ImageMagick image library. This library is commonly used by websites to perform image processing. The vulnerability allows remote code execution (RCE) on the web server, which is very dangerous. For more specific details on the vulnerability itself, check out this post on the Sucuri Blog.

There are a few things I wanted to focus on that are less about ImageMagick and more focused on better security solutions.

Application Inventory

I know, enough already. I get it, I mention application inventory all the time. It is with good reason. We rely heavily on 3rd party components just like ImageMagick, and there are bound to be security flaws identified in them. Some may be less critical, while others demand the highest priority. The issue at hand is that we cannot defend against the unknown. In this case, even if we are aware of a vulnerability, it doesn’t help us if we are not aware we use the vulnerable component. Time is important, especially when it comes to critical issues. You need to be able to quickly determine if your company uses the vulnerable component, which apps are affected, and what types of data reside in those systems. Performing fire drills for every vulnerability announced is very inefficient. Being able to quickly identify the affected areas allows for a better understanding of any risk changes.

Permissions

When you are configuring your applications, think about the type of permissions the application has on the server. Does it have the ability to write to specific folders, execute specific files, read data, etc.? Does the application run as an administrator with full rights? To help reduce the attack surface we must understand what the app needs to do, and then make sure the permissions are aligned with those needs. In this case, the files may be stored on the file system to be used later, so write permissions may be required. It is still possible to limit those write permissions to specific folders and even to limit execution permissions.
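A sketch of what that could look like for an upload folder, assuming a Python application and a hypothetical path (on Windows you would use ACLs instead of mode bits):

```python
import os
import stat

UPLOAD_DIR = "/var/app/uploads"  # hypothetical location

def save_upload(safe_name: str, data: bytes) -> str:
    os.makedirs(UPLOAD_DIR, mode=0o750, exist_ok=True)
    path = os.path.join(UPLOAD_DIR, safe_name)
    with open(path, "wb") as f:
        f.write(data)
    # Read/write for the app user only -- no execute bit, so an uploaded
    # file cannot be run even if an attacker smuggles a script in.
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
    return path
```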

Application Configuration

How you design and configure your application plays a significant role in how security is handled. In regard to allowing file uploads, how do you handle storing the files? Are they on the file system or in the database? How are features configured? In the case of ImageMagick, there are configuration settings that can be set to limit the vulnerabilities. Make sure that you are only accepting the file types that are needed and discarding all others. These configurations can take many forms, but they will help provide security when they are truly understood.

Importance of Input Validation

The vulnerability in ImageMagick highlights the importance of sanitizing input and performing solid validation. In this case, the file name is not properly handled, which allows an attacker to run unintended commands. We have to check that the data passed to our applications is what we are expecting. If it is not, it must be discarded.
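A minimal sketch of that kind of file name check in Python (the pattern and accepted extensions are illustrative; many applications instead discard the client-supplied name and generate their own):

```python
import re

# Whitelist: letters, digits, dash, underscore, plus an expected image extension.
SAFE_NAME = re.compile(r"^[A-Za-z0-9_-]{1,64}\.(png|jpe?g|gif)$")

def validate_image_name(filename: str) -> str:
    """Accept only names matching the expected pattern; discard the rest."""
    if not SAFE_NAME.fullmatch(filename):
        raise ValueError("Unexpected file name rejected.")
    return filename
```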

This is not the last time we will see a vulnerable component that is used by lots of applications. It is part of the risk of using external components. We can do more on our side to make sure that we know what we are using and prepare for the event that something goes wrong. Take the time to understand your applications and think about the different ways security may be affected.

Reliance on 3rd Party Components

It was just recently announced that Apple will no longer be supporting QuickTime for Windows. Just like any other software, when support has ended, the software becomes a security risk. As a matter of fact, there are current known vulnerabilities in QuickTime that will never get patched. The Department of Homeland Security has an alert recommending removal of QuickTime for Windows.

For users, it may seem simple: uninstall QuickTime from your Windows system. But wait… what about software they rely on that may require QuickTime? There are times when our users are locked into certain technologies because of the software we create. How does our decision to use a specific technology affect our end users when it comes to security?

If you have created an application that relies on the installation of QuickTime for Windows this creates somewhat of an issue. It can be very difficult to justify your position to end users to continue using a component that is not only known vulnerable, but is also past the end of life. This brings up a few things to consider when using third party components in our solutions.

Why use the component/technology?

The first question to ask is why do we need this component or technology? Is this a small feature within our application, or does our entire business model rely on it? What benefit are we getting by using this component? Are there alternative components from different sources? What process was used to pick this component? Did you look at the history of the creator, the reviews from other users, how active development is, etc.? When picking a component, more is required than just picking the one with the prettiest marketing page.

How do we handle updates?

The number of breaches and security vulnerabilities documented is at an all-time high. You need to be prepared for the event that there are updates, most likely critical, to the component. How will you handle these updates? What type of process will be in place to make sure that the application is compatible with these updates so the user is not forced to use an outdated version? What would you do if an update removed a critical feature your application relies upon?

What if the component is discontinued?

Handling the updates and patches is one thing, but what happens if the component becomes end of life? When the component is no longer supported there is a greater possibility of security risks around it that will not get fixed. In most situations this will probably lead to the decision to stop using that component. I know, this seems like something we don’t want to think about, but it is critical that all of these possibilities are understood.

We use components in almost all software that we create, and we rely on them. Taking the time to create a process around how these components are selected and maintained will help reduce headaches later on. We often focus on the now, forgetting about what may happen tomorrow. Take the time to think about how the component is maintained going forward and how it would affect your business if it went away one day. Will you ask your users to continue using a known vulnerable component, or will you have a plan to make the necessary changes to no longer rely on it?

The Hidden Reason for Switching to HTTPS

If you run a website, you have probably debated on whether or not you need to make the switch to HTTPS instead of using HTTP. For those that still don’t know, HTTPS is the encrypted version of HTTP. This is typically seen on banking sites, touted to protect your sensitive information when transmitted between you (your browser) and the application.

I wrote on this topic about a year ago in the post: Is HTTP being left behind for HTTPS? Back then there was a big push for making the switch, and since then we have even seen government mandates requiring government-operated sites to move to HTTPS.

There are typically two reasons you will hear someone recommend using HTTPS:

  • SEO (search engine optimization) – To learn more about these benefits, check out this great article Should you switch your site to HTTPS? Here’s Why you should or shouldn’t by Neil Patel. Neil does a great job of explaining HTTPS and the pros and cons.
  • Protecting sensitive information – We all should know by now that we need to protect sensitive information as it is transmitted to the application. So if your application transmits any sensitive information (Passwords, Social Security Number, Credit Card Info, Account information, etc) it is a must to use SSL.

But Wait…What about…

There is another big reason that HTTPS is important, even if you do not have sensitive information on your site. Let’s step into our favorite scenario of using your computer in the local coffee shop. You connect to the free wifi and start surfing to your favorite sites. You feel comfortable logging into your bank account because it uses HTTPS and you see the green lock in your browser (although maybe you shouldn’t feel so comfortable). Ideally, that session with your bank is protected from the guy sitting one table over trying to intercept the traffic.

Then you point your browser to a local news site to check out the latest happenings. That site runs over HTTP and is not protected while traversing the wifi network. What happens when the attacker intercepts that news traffic and changes the response, which you expect to contain today’s news, to contain malicious content? This would be no different than clicking on a malicious site to begin with. Except here, you feel safe on that familiar news site.

This scenario shows how your site, the one running without HTTPS, could be used as a launching point to attack a user. While it didn’t affect your actual site, or your servers, it will lead to a break in trust from your visitors. If something happens while on your site, it doesn’t matter how it happened, the finger is pointing straight at you.
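If you do make the switch, enforcing it is straightforward. Here is a minimal sketch using Flask as an illustrative framework (behind a load balancer you would typically check the X-Forwarded-Proto header rather than request.is_secure):

```python
from flask import Flask, redirect, request

app = Flask(__name__)

@app.before_request
def force_https():
    # Send any plain-HTTP request to its HTTPS equivalent.
    if not request.is_secure:
        return redirect(request.url.replace("http://", "https://", 1), code=301)

@app.after_request
def add_hsts(response):
    # HSTS tells returning browsers to skip HTTP entirely for a year.
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    return response
```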

Conclusion

So while we put a lot of focus on sensitive information or SEO, there are other very important reasons why a site owner would want to make the switch to HTTPS. Gone are the days when performance is an excuse. Heck, with the Let’s Encrypt project, maybe gone are the days of the cost of purchasing a certificate to enable HTTPS. Sure, there may be reasons, even some valid ones, why you don’t need to make the switch. Don’t just look at the constraints. Take the time to really understand your situation, how the change affects you, and make a rational decision. Don’t do it because some site said to. Do it because you understand the situation and know it is right for your site.

Application Registry: Knowing Your Assets

If an auditor asked you what applications exist on your network, how accurate do you think your answer would be? Do you have a repository or registry of the applications your company maintains? A good application security program starts with knowing what you have. Without this knowledge it is very difficult to understand the risk the applications bring to the company.

Depending on the maturity of your application security program, the way that applications are registered can be very different. On the simplest side, this may be a basic spreadsheet. On the advanced side the registry may be a custom built, or commercial application with lots of bells and whistles. In either case, the central repository of applications gives important insight into what exists. The ability to search through all applications for specific data fields will be a really popular feature.

In addition to the application name, creating a profile can be very handy. Here is a list of some common items that may be useful in the application registry:

  • Application name – The name of the application. If the application has any nicknames or is known by anything else, it may be useful to capture that as well.
  • Application Lead / Contact Information – The name of the contacts responsible for the application. If an audit is occurring, questions arise, or anything else comes up, who should be contacted regarding the application.
  • Short description – A short description about what the application does and what it is used for. This may include how roles are used and/or maintained.
  • Programming Language / Framework – What language is the application written in? For example, Java, .NET, Python, Ruby, etc. The version is also helpful to understand whether the latest version is being used or what language security issues may be present. For the framework, is it MVC, Spring, WebForms, etc.?
  • Third party components – Applications are commonly made up of many external components, which may contain vulnerabilities at some point. Tracking these items can help identify which applications may be affected in the event a vulnerability is found in a component.
  • Data elements / classifications – What data is used by the application? Does the application store credit card data, health data, personal information? Record the sensitive information and classify the application based on the most sensitive data. Your company should have a data classification policy to assist with this.
  • Regulatory Compliance Requirements – Does the application fall under any regulatory requirements such as PCI or HIPAA?

The above items are not a complete list of fields, but should help get you started if you don’t have anything to this point. Remember that an application security program doesn’t mature overnight. It takes small steps to get where you really want to be. Starting with an application registry is a good first step to provide insight into the environment.
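Even the simple end of that spectrum benefits from a little structure. Here is a minimal sketch of such a profile in Python (the fields mirror the list above; the names and values are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class AppRegistryEntry:
    name: str                 # include any nicknames the app is known by
    contact: str              # application lead / owner
    description: str
    language: str             # e.g. "Java 11" or ".NET 6"
    framework: str            # e.g. "Spring" or "MVC"
    components: list = field(default_factory=list)   # third party libraries
    data_classification: str = "internal"            # most sensitive data handled
    regulatory: list = field(default_factory=list)   # e.g. ["PCI", "HIPAA"]

registry = [
    AppRegistryEntry("Billing Portal", "jane@example.com", "Customer invoicing",
                     "Java 11", "Spring", ["ImageMagick"], "confidential", ["PCI"]),
]

# The payoff of a searchable registry: which apps use a vulnerable component?
affected = [a.name for a in registry if "ImageMagick" in a.components]
```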

In future posts I will dig deeper into the information that can be tracked with the application registry. If you have other fields you track or know of any good applications to help track your applications please feel free to send them over.

Does the End of an Iteration Change Your View of Risk?

You have been working hard for the past few weeks or months on the latest round of features for your flagship product. You are excited. The team is excited. Then a security test identifies a vulnerability. Balloons deflate and everyone starts to scramble.

Take a breath.

Not all vulnerabilities are created equal and the risk that each presents is vastly different. The organization should already have a process for triaging security findings. That process should be assessing the risk of the finding to determine its impact on the application, organization, and your customers. Some of these flaws will need immediate attention. Some may require holding up the release. Some may pose a lower risk and can wait.

Take the time to analyze the situation.

If an item is severe and poses great risk, by all means, stop what you are doing and fix it. But what happens when the risk is fairly low? When I say risk, I include the likelihood that it can be exploited. The difficulty to exploit can be a critical factor in what decision you make.

When does the risk of remediation override the risk of waiting until the next iteration?

There are some instances where the risk to remediate so late in the iteration may actually be higher than waiting until the next iteration to resolve the actual issue. But all security vulnerabilities need to be fixed, you say? This is not an attempt to get out of doing work or not resolve issues. However, I believe there are situations where the risk of the exploit is less than the risk of trying to fix it in a chaotic, last minute manner.

I can’t count the number of times I have seen issues arise that appeared to be simple fixes. The bug was not very serious and could only be exploited in a very limited way. For example, the bug required the user’s machine to be compromised to enable exploitation. The fix, however, ended up taking more than a week due to some complications. When the bug appeared two days before code freeze, there were many discussions about performing the fix, and potentially holding up the release, versus moving the remediation to the next iteration.

When we take the time to analyze the risk and exposure of the finding, it is possible to make an educated decision as to which risk is better for the organization and the customers. In this situation, the assumption is that the user’s system would need to be compromised for the exploit to happen. If that is true, the application is already vulnerable to password sniffing or other attacks that would make this specific exploit a waste of time.

Forcing a fix at this point in the game increases the chances of introducing another vulnerability, possibly more severe than the one that we are trying to fix. Was that risk worth it?

Timing can have an effect on our judgment when it comes to resolving security issues. It should not be used as a scapegoat or a reason not to fix things. When analyzing the risk of an item, make sure you are also considering how that may affect the environment as a whole. The risk may not lie directly with the flaw, but indirectly in how it is fixed. There is no hard and fast rule, which is exactly why we use a risk-based approach.

Engage your information security office or enterprise risk teams to help with the analysis. They may be able to provide a different point of view or insight you may have overlooked.

Sharing with Social Media

Does your application provide a way for users to share their progress or success with others through social media? Are you thinking about adding that feature in the future? Everyone loves to share their stories with their friends and colleagues, but as application developers we need to make sure that we are considering the security aspects of how we go about that.

Take-Aways

  • Use the APIs when talking to another service
  • Don’t accept credentials to other systems out of your control
  • Check with security to validate that your design is ok

This morning, whether true or not (I have not registered for the RSA conference), there was lots of talk about the RSA registration page offering to post a message to your Twitter account about attending the RSA conference. Here is a story about it. The page asks for your Twitter username and password so it can post a message from your account. Unfortunately, that is the wrong way to request access to post to a social media account.

Unfortunately, even if you have the best intentions, this is going to come across in a negative way. People will start assuming you are storing that information and that you now have access to important people’s Twitter accounts going forward. Maybe you do, maybe you don’t; the problem is that no one knows what happened with that information.

Just about every social media site out there has APIs available and supports OAuth or some other authorization mechanism to perform this type of task. By using the proper channel, the user would be redirected to the social media site (Twitter in this instance) and, after authenticating there, would authorize the other site to post messages as the registered user.

Using this technique, the user doesn’t have to give the initial application their social media password, instead they get a token they can use to make the post. The token may have a limited lifetime, can be revoked, and doesn’t provide full access to the account. Most likely, the token would only allow access to post a message. It would not provide access to modify account settings or anything else.
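A rough sketch of that flow using the requests-oauthlib library and OAuth 1.0a, the scheme Twitter used at the time (the endpoint URLs, keys, and callback here are hypothetical placeholders; each provider documents its own):

```python
from requests_oauthlib import OAuth1Session

# Hypothetical endpoints -- each provider publishes its own.
REQUEST_TOKEN_URL = "https://api.social.example/oauth/request_token"
AUTHORIZE_URL = "https://api.social.example/oauth/authorize"
ACCESS_TOKEN_URL = "https://api.social.example/oauth/access_token"

# Step 1: the application identifies itself with its own keys -- the
# user's password is never collected.
oauth = OAuth1Session("APP_KEY", client_secret="APP_SECRET",
                      callback_uri="https://myapp.example/callback")
oauth.fetch_request_token(REQUEST_TOKEN_URL)

# Step 2: send the user to the provider to log in and grant access there.
print("Visit:", oauth.authorization_url(AUTHORIZE_URL))

# Step 3: after the redirect back, exchange the verifier for a limited
# token that can post messages, can be revoked, and grants nothing else.
oauth.parse_authorization_response(input("Paste the redirect URL: "))
tokens = oauth.fetch_access_token(ACCESS_TOKEN_URL)
```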

If you are looking to integrate or share with social media sites, take the time to determine the right way to do it. This is really important when it involves access to someone else’s account. Protect the user and yourself. Don’t just take the easy way out and throw a form on the screen. Understand the architecture of the system and the security that needs to be in place. There are a lot of sites that allow sharing with social media; understand how they are doing it. When in doubt, run your plan by someone else to see if it looks like the right way to do it.

Password Storage Overview

Start reading the news and you are bound to read about another data breach involving user credentials. Whether or not you get any details about how the stolen passwords were stored, we can assume that in many of these cases they were not well protected. Maybe they were stored in clear text (no, it can’t be true), or used weak hashes. Passwords hold the key to our access to most applications. What are you doing to help protect them?

First, let’s just start with recommending that users don’t re-use their passwords, ever. Don’t share them across applications, just don’t. Pick a password manager or some other mechanism to allow unique passwords for each application. That doesn’t mean using summer2015 for one site and summer2016 for another. Don’t have them relatable in any way.

Now that we have that out of the way, how can applications better protect their passwords? Let’s look at the storage options:

  • Clear Text
  • Encrypted
  • Hashed

Clear Text

While this may seem harmless, since the data is stored in your highly secured, super secret database, it is looked down upon. As a developer, it may seem easy to throw a prototype together with the password in plain text. You don’t have to worry about whether your storage algorithm is working properly, or about forgetting your password as you are building up the feature set. It is right there in the database. While the convenience may be there, this is just a bad idea. Remember that a password should only be known by the user who created it. Passwords should also never be shared. Take an unauthorized breach out of the picture for a moment: the password could still be viewed by certain admins of the system or database administrators. Bring a breach back into the picture, and now the passwords can be easily seen by anyone, with no effort. There are no excuses for storing passwords as plain text. By now, you should know better.

Encrypted

Another option commonly seen is the use of reversible encryption. With encryption, the data is unreadable, but with the right keys, it is possible to read the actual data, in this case the real password. The encryption algorithm used can make a difference in how long it takes to reverse the data if it falls into the wrong hands. Like clear text storage, the passwords are still exposed to certain individuals that have access to the encrypt/decrypt routines. There shouldn’t be a need to ever see the plain text version of the password again, so why store it in a way that it can be decrypted? Again, not recommended for strong, secure password storage.

Hashing

Hashing is the process of transforming the data so that it is unreadable and not reversible. Unlike encrypted data, a hashed password cannot be transformed back to the original value. To crack a hashed password, you would typically run password guesses through the same hashing algorithm looking for the same result. If you find a plaintext value that creates the same hash, then you have found the password.

Depending on the hashing algorithm you use, it can be easy or very difficult to identify the password. Using MD5 with no salt is typically very easy to crack. Using a strong salt and SHA-512 may be a lot harder to crack. Using PBKDF2 or bcrypt, which include work factors, can make cracking extremely difficult. As you can see, not all hashing algorithms are created equal.

The Salt

The salt, mentioned earlier, is an important factor in hashing passwords because it creates uniqueness. First, a unique, randomly generated salt will make sure that if two users have the same password, they will not have the same hash. The salt is appended to the password, basically creating a new password, before the data is hashed. Second, a unique salt makes life harder for people cracking passwords because they have to regenerate hashes for all of their password lists using a new salt for each user. Generating a database of hashes based on a dictionary of passwords is known as creating a rainbow table, which makes for quicker lookups of hashes. A unique salt negates any rainbow table already generated and requires creating a new one (which can cost a lot of time).

The salt should be unique for each user, randomly generated, and of sufficient length.

The Algorithm

As I mentioned earlier, using MD5, probably not a good idea. Take the time to do some research on hashing algorithms to see which ones have collision issues or have been considered weak. SHA256 or SHA512 are better choices, obviously going with the strongest first if it is supported. As time goes on and computers get faster, algorithms start to weaken. Do your research.

Work Factor

The new standard being implemented and recommended is adding not only a salt, but also a work factor to the process of hashing the password. This is currently done by using PBKDF2, bcrypt, or scrypt. The work factor slows down the hashing process. Keep in mind that hashing algorithms are fast by nature. In the case of PBKDF2, iterations are added. That means that instead of appending a salt to the password and hashing it once, it gets hashed 1,000 or 10,000 times. These iterations increase the work factor and can slow down an attacker’s ability to loop through millions of potential passwords. When comparing bcrypt to scrypt, the big difference is where the work factor applies: for bcrypt, the work factor affects the CPU; for scrypt, the work factor is memory based.
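A minimal sketch of that approach with PBKDF2 from Python’s standard library, combining a per-user salt with an iteration-based work factor (the iteration count is an illustrative starting point, not a recommendation):

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # the work factor; tune for your own hardware

def hash_password(password: str):
    salt = os.urandom(16)  # unique, random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # Constant-time comparison avoids leaking how many bytes matched.
    return hmac.compare_digest(candidate, stored)
```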

Moving forward, you will start to see a lot more of PBKDF2, bcrypt, and scrypt for password storage. If you are implementing these, test them out with different work factors to see what works for your environment. If 10,000 iterations is too slow on the login page, try decreasing it a little. Keep in mind that comparing a hash to verify a user’s credentials should not happen very often, so the hit is typically on initial login only.
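A quick way to see what different work factors cost on your own hardware (the iteration counts are illustrative):

```python
import hashlib
import os
import time

salt = os.urandom(16)
for iterations in (10_000, 50_000, 100_000, 500_000):
    start = time.perf_counter()
    hashlib.pbkdf2_hmac("sha256", b"benchmark-password", salt, iterations)
    print(f"{iterations:>7} iterations: {time.perf_counter() - start:.3f}s")
```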

Conclusion

Password storage is very important in any application. Take a moment to look at how you are doing this in your apps and spread the word to help educate others on more secure ways of storing passwords. Not only could storing them incorrectly be embarrassing or harmful if they are breached, it could lead to issues with different regulatory requirements out there. For more information on storing passwords, check out OWASP’s Password Storage Cheat Sheet.

Keep in mind that if users use very common passwords, even with a strong algorithm, they can be cracked rather quickly. This is because a list of common passwords will be tried first. Creating the hash of the top 1000 passwords won’t take nearly as long as a list of a million or billion passwords.