Category Archives: Take-Aways

Blue Cross Mails USB sticks – Take-Aways

You have information you want to share with your customers, but how do you do it securely? How often have you heard not to click links sent via email? You shouldn't plug random USB drives into your computer. From a marketing perspective, how do you get large amounts of information, such as videos and other detailed material, out to your customers?

In a report by Fierce Healthcare (http://www.fiercehealthcare.com/privacy-security/bcbs-alabama-re-evaluates-usb-marketing-campaign-amid-security-concerns) it appears that BCBS of Alabama thought that sending out some fancy USB drives with benefit information was the right choice. Apparently the drives contained videos and other information about the company's benefits.

As you can guess, there was plenty of skepticism about the method. We are taught in many security awareness training sessions never to plug untrusted USB drives into our devices. This is a common tactic used during penetration tests, where testers drop USB drives in the parking lot or in public places to see if someone will plug one in. Once plugged in, the drive may give the attacker external access to the system. Of course, this depends on the controls in place on the system it was plugged into.

You might wonder why we still see USB drives passed out at conferences. Vendors still hand them out, and you might consider those trusted because you received them in person. A company representative handed it to you, so it must be safe, right? Well... not necessarily. You should still proceed with caution.

With regard to mailing the drive, there appears to be an assumption of trust. This may be because it was physically sent to you, versus an item that was emailed. It came through the mail system, it had the corporate branding, even the right return address. The key factor is that all of those things can be spoofed. It is simple to create a letter with a company's logo and branding and to set the return address to something other than your own address. The mail system isn't designed to verify who sent a letter. Instead, it is designed to give you confidence that if you send mail, it will arrive at its destination.

Take Aways

When we analyze the situation, it helps us decide how to better review our systems to understand our risks and controls. There are a few things you can do to help reduce the risk of these types of potential attacks or situations.

Review your security awareness training to see how it covers USB drives and the policies around them. Are your users trained on how to handle a situation where they receive a USB drive from an untrusted source? Do they know who to contact to make sure it is properly analyzed before they attempt to use it?

Work with your marketing teams to determine how different campaign types work and which ones are acceptable. They are typically not considering the security aspects of every option, and providing some insight goes a long way. It is rarely a matter of someone purposely trying to do something insecure, but rather a situation where someone doesn't have the exposure. Talk through the different scenarios and why they raise the risk level within the organization or for the clients.

Review any technical controls in place within the organization around plugging devices into computers. Do you have controls to block these devices? Is that control limited to storage devices? Remember that a USB drive can also be a Human Interface Device (HID), which presents itself not as a storage device, but as a keyboard. These HID devices often bypass restrictions placed on other USB drive types, allowing code execution on the system.

Identifying alternative methods means calculating the risk of each one and picking the best choice. As an alternative, the campaign could have been an email campaign with a link in it for users to click. The risk is that people are taught not to click links in emails. Another option could have been to send the mailer, but instead of the USB drive, include a link for the user to type into their browser. It is recommended to use a link that goes to your own domain, not a shortened URL, so the destination is not hidden. Take the time to consider the alternatives and the risks of each one.

MySpace Account Takeover – Take-aways

Have you ever forgotten your password, or lost access to your accounts? I know I have. The process of getting your access back can range from very easy to quite difficult. In one case, I had an account that required that a pin code be physically mailed to me in 7-10 days. Of course, this was a financial account that required extra protections.

I came across this article (https://www.wired.com/story/myspace-security-account-takeover/) that identified that MySpace's process for regaining access to an account could be bypassed with just a few pieces of information that are commonly easy to find.

According to the issue reported, the following details were needed to gain control of an account:

  • Account Holder’s Name
  • Username
  • Date of Birth

In addition, the email address was part of the form, but was not actually validated.

With the amount of information available on social media platforms, and even through past breaches, it is very difficult to come up with good questions to help validate that a user is who they say they are. In this case, the first two pieces of information are reportedly available on a user's account page and are not difficult to find. It is critical that we give great consideration to the way we attempt to validate a user's identity.

Security or secret questions have been known to be weak for a long time. While they can help provide some assistance in identifying the user, they should not be used alone. One recommendation is to include a side channel for some verification. For example, requiring a user to receive an email at the address on record before answering these questions helps reduce some of the risk. This method now requires that an attacker gain access to the user’s email account to receive a temporary account reset link. Although this is possible, it adds another hurdle to help protect the user’s account.
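
To make the idea concrete, here is a minimal Python sketch of that kind of flow, using hypothetical names and an in-memory store in place of a real database and mail service: a short-lived, single-use token is generated and sent only to the address already on record.

```python
import hashlib
import secrets
import time

# Hypothetical in-memory stores; a real application would use its database
# and an actual mail service.
users = {"jsmith": {"email": "jsmith@example.com"}}
reset_tokens = {}

TOKEN_TTL_SECONDS = 15 * 60  # reset links expire after 15 minutes


def start_account_recovery(username):
    """Send a reset link to the email address already on record."""
    user = users.get(username)
    if user is None:
        return  # respond identically for unknown users to avoid enumeration

    token = secrets.token_urlsafe(32)
    # Store only a hash of the token so a leaked table can't be replayed.
    reset_tokens[hashlib.sha256(token.encode()).hexdigest()] = {
        "username": username,
        "expires": time.time() + TOKEN_TTL_SECONDS,
    }
    send_email(user["email"], f"https://example.com/reset?token={token}")


def redeem_reset_token(token):
    """Return the username if the token is valid, unused, and unexpired."""
    record = reset_tokens.pop(hashlib.sha256(token.encode()).hexdigest(), None)
    if record and record["expires"] > time.time():
        return record["username"]
    return None


def send_email(address, link):
    print(f"Would email {address}: {link}")  # placeholder for a real mailer
```

Only after the token is redeemed would the user be allowed to answer any remaining questions or set a new password.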

Similarly, you could send a code to a phone number that is already on record. Like email, this would require intercepting the phone message, adding difficulty to the process. One of the key points here is to only send to an email address or phone number that is already on record. Never allow the user to provide a new email address or phone number during recovery, as an attacker could add an untrusted one and easily take over the account.

There was another piece of this that should be considered: the unvalidated email address. The request for the email address in this reset process may have been meant as another verification of account information. However, as is commonly seen, the value wasn't actually validated on the server. This is an easy oversight to make when working with web forms.

Remember that validation should always be performed on the server. This is because it is simple to bypass client-side validation with the use of simple add-ins or a web proxy.
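
As a simple illustration (not MySpace's actual code), here is a hedged Python sketch of server-side checks that run regardless of what the browser's JavaScript did; the field names are hypothetical.

```python
import re

EMAIL_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")


def validate_recovery_request(form):
    """Server-side checks that run no matter what the client-side code did."""
    errors = []

    email = (form.get("email") or "").strip()
    if not EMAIL_PATTERN.fullmatch(email):
        errors.append("A valid email address is required.")

    dob = (form.get("date_of_birth") or "").strip()
    if not re.fullmatch(r"\d{4}-\d{2}-\d{2}", dob):
        errors.append("Date of birth must be in YYYY-MM-DD format.")

    return errors


# The server rejects the request whenever any check fails,
# even if the browser's JavaScript claimed the form was valid.
print(validate_recovery_request({"email": "not-an-email", "date_of_birth": ""}))
```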

Validation of data fields should be a standard part of QA testing. Ensure that when testing forms, such as account recovery, the form reacts correctly to many types of input. This includes what happens if a field is not provided or does not line up with the expected answer. The same situation has been seen on other sites where the current password is requested in order to update the password, but is not actually verified.

These types of stories remind us to review these functions and features within our own applications. For older applications, it may have been a while since we touched these features, and the way they were originally designed may no longer be recommended. Take a moment to look back at your account recovery process to make sure it is up to par with your current requirements.

Looking for some help with your application security program? Just want someone to talk to about these types of topics? Reach out to us. We are here to help and offer a wide range of services to help make your day easier. Reach out to james@developsec.com for more information.

Using the AWS disruption to your advantage

By now you have heard of the Amazon issues that plagued many websites a few days ago. I want to talk about one key part of the issue that often gets overlooked. If you read through their message describing the service disruption (https://aws.amazon.com/message/41926/) you will notice a section where they discuss changes to the tools they use to manage their systems.

So let's take a step back for a moment. Amazon attributed the service disruption to a simple mistake made when executing a command. Although an established playbook was being followed, something was entered incorrectly, leading to the disruption. I don't want to get into all the details about the disruption; rather, let's fast forward to considering the tools that are used.

Whether we are developing applications for external use or tools used to manage our systems, we don't always think about all the possible threat scenarios. Sure, we think about some of the common threats and put in some simple safeguards, but what about some of those more detailed issues?

In this case, the tools allowed manipulating sub-systems in ways that shouldn't have been possible. They allowed shutting systems down too quickly, which caused some issues. This is no different than many of the tools that we write. We understand the task and create something to make it happen. But then our systems get more complicated. They pull in other systems, which have access to yet other systems. A change to something directly accessible has an effect on a system three hops away.

Now, the chances of this edge case happening are probably pretty slim. The chances that this case was missed during design are pretty good, especially if you are not regularly reassessing your systems. What may not have been possible two years ago when the system was created may be possible today.

Even with threat modeling and secure design, things change. If the tool isn't being updated, then the systems it controls are. We must have the ability to learn from these situations and identify how we can apply them to our own environments. How many tools have you created that control your systems, where a simple wrong click or incorrect parameter can create a huge headache? Do you even understand your systems well enough to know whether doing something too fast or too slow can cause a problem? What about the effect that shutting one machine down has on other machines? Are there warning messages for actions that may have adverse effects? Are there safeguards in place in case someone does something out of the norm?
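
As a thought exercise, a safeguard in an internal tool can be as small as a few lines. The sketch below is a hypothetical Python guardrail, not Amazon's actual implementation, that refuses to remove too much capacity at once or to drop below a minimum level.

```python
# Hypothetical guardrail for an internal capacity-removal tool. The exact
# checks (minimum capacity, maximum batch size, confirmation) are assumptions.
MIN_SERVERS = 10        # never drop below this many servers
MAX_REMOVE_AT_ONCE = 2  # slow down large removals


def remove_capacity(current_servers, requested_removal):
    if requested_removal > MAX_REMOVE_AT_ONCE:
        raise ValueError(
            f"Refusing to remove {requested_removal} servers at once; "
            f"limit is {MAX_REMOVE_AT_ONCE}. Run the command in smaller batches."
        )
    remaining = current_servers - requested_removal
    if remaining < MIN_SERVERS:
        raise ValueError(
            f"Refusing removal: only {remaining} servers would remain, "
            f"below the minimum of {MIN_SERVERS}."
        )
    answer = input(f"Remove {requested_removal} of {current_servers} servers? [y/N] ")
    if answer.lower() != "y":
        print("Aborted.")
        return current_servers
    # ... call the real decommissioning API here ...
    return remaining
```

Warnings like these cost almost nothing to add and catch the fat-fingered parameter before it becomes an outage.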

We are all busy, and going back through working systems isn't a high priority. These tools and systems may eventually require new features, which makes a perfect time to reassess them. Like everything in security, peripheral vision is very important. When working on one piece, it is always good to peek at the other code around it. Take the time to verify that everything is up to date and as expected. Note any changes and determine whether any enhancements or a redesign are in order. Most of the time these are edge-case scenarios, but they can have a big impact if they occur.

Jardine Software helps companies get more value from their application security programs. Let’s talk about how we can help you.

James Jardine is the CEO and Principal Consultant at Jardine Software Inc. He has over 15 years of combined development and security experience. If you are interested in learning more about Jardine Software, you can reach him at james@jardinesoftware.com or @jardinesoftware on twitter.

The 1 thing you need to know about the Daily Motion hack

It was just released that Daily Motion suffered an attack resulting in a large number of usernames and email addresses being exposed. Rather than focusing on the number of records involved (the wow factor), I want to highlight what most outlets are just glancing over: password storage.

According to the report, only a small portion of the accounts had a password hash associated with them. That portion is still in the millions, and you might be thinking this is bad. It is actually the highlight of the story. Why? Daily Motion used bcrypt to hash their users' passwords before storing them.

Bcrypt uses both a salt value and a work factor when hashing the data. The salt has been a long-standing recommendation when hashing passwords, as it helps reduce the effectiveness of rainbow table attacks. The work factor, which has been recommended much more in recent years, makes brute-forcing passwords computationally intensive. This means that it requires more time per password, slowing down large cracking attacks.

Bcrypt is not the only option, either. PBKDF2 and scrypt are other available options that work in a similar way.

Using a strong algorithm makes it much more difficult to crack the passwords in the event that the hashes are somehow stolen. The use of any of these algorithms doesn't rule out the possibility of cracking the passwords; they just make it much more difficult and time intensive. There are always circumstances that can change this. However, using one of these algorithms can go a long way toward helping protect that data.
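
For illustration, here is a minimal sketch using the third-party Python bcrypt package (installed with pip install bcrypt); the work factor (rounds) is the tunable cost.

```python
import bcrypt

password = b"correct horse battery staple"

# gensalt() produces a unique salt per password; rounds is the work factor.
# Raising rounds by 1 roughly doubles the time needed per hash attempt.
hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))

# Verification re-derives the hash using the salt and cost stored in `hashed`.
print(bcrypt.checkpw(password, hashed))          # True
print(bcrypt.checkpw(b"wrong guess", hashed))    # False
```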

How are you storing passwords?

Take a moment to look at how you are storing passwords and consider how it will stand up in the event of breached account details. Do you use a unique salt for each password? Do you implement a work factor to slow cracking attempts down?

How would you handle this type of breach?

If accounts were to be breached like this, how would you handle it? Do you have a process in place? Would you force password resets? How would you notify users? Consider these types of questions to verify you have a plan in place.

James Jardine is the CEO and Principal Consultant at Jardine Software Inc. He has over 15 years of combined development and security experience. If you are interested in learning more about Jardine Software, you can reach him at james@jardinesoftware.com or @jardinesoftware on twitter.

ImageMagick – Take-aways

Do your applications accept file uploads? More specifically, image uploads? Do you use a site that allows you to upload images? If you haven't been following the news lately, a few vulnerabilities were recently found in the ImageMagick image library. This library is commonly used by websites to perform image processing. The vulnerability allows remote code execution (RCE) on the web server, which is very dangerous. For more specific details on the vulnerability itself, check out this post on the Sucuri Blog.

There are a few things I wanted to focus on that are less about ImageMagick and more focused on better security solutions.

Application Inventory

I know, enough already. I get it, I mention application inventory all the time. It is with good reason. We rely heavily on 3rd party components just like ImageMagick, and there are bound to be security flaws identified in them. Some may be less critical, while others demand the highest priority. The issue at hand is that we cannot defend against the unknown. In this case, even if we are aware of a vulnerability, it doesn't help us if we are not aware that we use the vulnerable component. Time is important, especially when it comes to critical issues. You need to be able to quickly determine whether your company uses the vulnerable component, which apps are affected, and what types of data reside in those systems. Performing fire drills for every vulnerability announced is very inefficient. Being able to quickly identify the affected areas allows for a better understanding of any changes in risk.

Permissions

When you are configuring your applications, think about the type of permissions the application has on the server. Does it have the ability to write to specific folders, execute specific files, read data, etc.? Does the application run as an administrator with full rights? To help reduce the attack surface we must understand what the app needs to do, and then make sure the permissions are aligned with those needs. In this case, the files may be stored on the file system to be used later, so write permissions may be required. It is still possible to limit those write permissions to specific folders and to restrict execution permissions.
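
As a rough example of the idea, the Python sketch below writes uploads to a single dedicated folder (the path is an assumption) with a server-generated name and no execute permissions.

```python
import os
import stat
import uuid

# A dedicated folder outside the web root; the path is hypothetical.
UPLOAD_DIR = "/var/app/uploads"


def save_upload(file_bytes):
    # Generate the name server-side rather than trusting the client's filename.
    dest = os.path.join(UPLOAD_DIR, f"{uuid.uuid4().hex}.img")
    with open(dest, "wb") as handle:
        handle.write(file_bytes)
    # Owner read/write only; no execute bit anywhere.
    os.chmod(dest, stat.S_IRUSR | stat.S_IWUSR)
    return dest
```

The application account itself should then only have write access to that one directory, not to the rest of the file system.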

Application Configuration

How you design and configure your application plays a significant role in how security is handled. With regard to allowing file uploads, how do you handle storing the files? Are they on the file system or in the database? How are features configured? In the case of ImageMagick, there are configuration settings that can be set to limit the vulnerabilities. Make sure that you are only accepting the file types that are needed and discarding the others. These configurations can take many forms, but they will help provide security when they are truly understood.

Importance of Input Validation

The vulnerability in ImageMagick highlights the importance of sanitizing input and performing solid validation. In this case, the file name is not properly handled, which allows an attacker to run unintended commands. We have to check that the data passed to our applications is what we are expecting. If it is not, it must be discarded.
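
One way to approach this, sketched below in Python with an assumed allow list, is to check both the file extension and the leading bytes of the content before the file ever reaches an image library.

```python
import os

ALLOWED_EXTENSIONS = {".jpg", ".jpeg", ".png", ".gif"}
MAGIC_BYTES = (
    b"\xff\xd8\xff",          # JPEG
    b"\x89PNG\r\n\x1a\n",     # PNG
    b"GIF87a",                # GIF
    b"GIF89a",                # GIF
)


def is_acceptable_image(filename, content):
    """Reject anything that is not on the allow list."""
    # Never pass the raw client-supplied name to a shell or image command.
    ext = os.path.splitext(filename)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        return False
    # Check that the leading bytes actually match an expected image format.
    return any(content.startswith(magic) for magic in MAGIC_BYTES)


print(is_acceptable_image("avatar.png", b"\x89PNG\r\n\x1a\n" + b"..."))  # True
print(is_acceptable_image('x.png"|ls "-la', b"not an image"))            # False
```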

This is not the last time we will see a vulnerable component that is used by lots of applications. It is part of the risk of using external components. We can do more on our side to make sure that we know what we are using and to prepare for the event that something goes wrong. Take the time to understand your applications and think about the different ways security may be affected.

HIV clinic Data Breach: Thoughts and Takeaways

One of the most common ways for sensitive information to be released outside of an authorized environment is through simple, common mistakes made by employees. These types of incidents usually have no malicious intent and are generally innocent in nature. An example of this was recently reported regarding a newsletter sent out to HIV patients (and others) in which the sender made a simple mistake. Rather than using the BCC field for each recipient's address, they used the CC field. For those that may not realize, addresses in the BCC (blind carbon copy) field are hidden from recipients, whereas the CC field is shown to all recipients.

Think about any mass emails you may be a part of and which ones use the CC field instead of the BCC field. I am on a few that share my information with the rest of the list. In many cases this may not be a big concern, but in a health-related situation like this one it becomes more severe. It becomes a privacy and compliance problem, as it deals with HIPAA and personal health information.

Is the solution as straightforward and simple as creating a procedural checklist to ensure that BCC is used instead of CC? This may work, but it still leaves the opportunity for someone on a tight deadline to skip the checklist and make the same mistake again. We are all aware that right after an incident we will be more careful, but as time goes on that awareness slips to the wayside.

A company could engage a 3rd party mailer, like MailChimp, to do their newsletter mailings. This route raises different concerns because you are placing your critical or private data, in this case the patients related to the health issue, in the hands of a 3rd party. If that vendor suffers a breach, you incur some risk there as well. Different vendors have different policies and security practices, so if you are thinking about taking that option, make sure you understand what is and is not offered.

There may be add-ons for your mail program that can help send newsletters individually, rather than as a bulk email. One such solution for Microsoft Outlook is Send Individually created by Sperry Software. (Full disclosure: I used to work with Sperry Software, but I am not compensated for mentioning their product. I am not a reseller, nor do I have any affiliation at this time.) There may be other add-ins by other vendors that can do this as well.
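
If you script your own mailings instead, the fix can be as simple as looping over the recipients and sending each message separately. Here is a hedged sketch using Python's standard smtplib, with a hypothetical mail server and addresses.

```python
import smtplib
from email.message import EmailMessage

# Hypothetical values; swap in your own mail server and recipient list.
SMTP_HOST = "smtp.example.com"
SENDER = "newsletter@example.com"
recipients = ["patient1@example.com", "patient2@example.com"]

with smtplib.SMTP(SMTP_HOST) as server:
    for address in recipients:
        msg = EmailMessage()
        msg["From"] = SENDER
        msg["To"] = address          # each copy goes to exactly one recipient
        msg["Subject"] = "Monthly Newsletter"
        msg.set_content("Newsletter body goes here.")
        server.send_message(msg)     # no CC/BCC field to get wrong
```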

Whichever direction you go, make sure that you are reviewing your processes and the risks they expose. This type of human error is easy to make, but the person who makes it is quick to be crucified. Don't cut corners due to tight timelines, and have another person review before sending anything externally. Sometimes that second pair of eyes can catch the simplest of mistakes that are so easily overlooked by the original writer. It is important that we take time to understand these situations and learn from them. Attention to detail can save a lot of hassle in the future.

Amazon XSS: Thoughts and Takeaways

It was recently identified, and Amazon was quick (2 days) to fix it, that one of their sites was vulnerable to cross-site scripting. Cross-site scripting is a vulnerability that allows an attacker to control the output in the user’s browser. A more detailed look into cross-site scripting can be found on the OWASP site.

Take-Aways


  • QA could have found this
  • Understand your input validation routines
  • Check to make sure the proper output encoding is in place in every location user supplied data is sent to the browser

Vulnerabilities like the one listed above are simple to detect. In fact, many can be detected by automated scanners. Unfortunately, we cannot rely on automated scanners to find every vulnerability. Automated scanning is a great first step in identifying flaws like cross-site scripting. It is just as important for developers and QA analysts to be looking for these types of bugs. When we break it down, a cross-site scripting flaw is just a bug. It may be classified under "security" but nonetheless it is a bug that affects the quality of the application.

We want to encourage developers and QA to start looking for these types of bugs to increase the quality of their applications. Quality is more than just whether the app works as expected. If the application has a bug that allows an attacker to send malicious code to another user of the application, that is still a quality issue.

If you are a developer, take a moment to think about what output you send to the client and whether you are properly encoding that data. It is not as simple as just encoding the less-than and greater-than characters. Context matters. Look for the delimiters and control characters that are relevant to where the output is going to determine the best course of action. It is also a good idea to standardize the delimiters you use for things like HTML attributes. Don't use double quotes in some places, single quotes in others, and nothing in the rest. Pick one (double or single quotes) and stick to it everywhere.
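
As a small example of context-aware encoding, the Python sketch below uses the standard library's html.escape for the HTML body and double-quoted attribute contexts; other contexts (JavaScript, URLs, CSS) need their own encoders.

```python
from html import escape

user_input = '"><script>alert(1)</script>'

# HTML body context: encode <, >, & and quotes.
body_safe = escape(user_input, quote=True)

# HTML attribute context: always wrap the attribute in double quotes and
# encode the value, so a stray quote cannot break out of the attribute.
attr_safe = f'<input type="text" value="{escape(user_input, quote=True)}">'

print(body_safe)
print(attr_safe)
```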

If you are a QA analyst, understand what input is accepted by the application and where that data is later output. The first step is testing what data you can send to the server. Has any input validation been put in place? Input validation should be implemented in a way that limits the types and size of data in most of the fields. The next step is to verify that any special characters are being encoded when they are returned to the browser. These are simple steps that can be performed by anyone. You could also start scripting these tests to make them easier to repeat in the future.
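
Scripting a basic reflection check is straightforward. The sketch below, with a hypothetical URL and parameter name, sends a harmless probe and looks at how it comes back; keep in mind the exact encoded form can vary by framework.

```python
import requests

# Hypothetical target and parameter names for illustration only.
TARGET = "https://test.example.com/search"
PROBE = '"><xsstest>'

response = requests.get(TARGET, params={"q": PROBE})

if PROBE in response.text:
    print("Probe reflected without encoding -- investigate for XSS.")
elif "&quot;&gt;&lt;xsstest&gt;" in response.text:
    print("Probe reflected, but encoded -- looks OK for this context.")
else:
    print("Probe not reflected in the response.")
```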

It is our (dev, QA, BA, application owner) responsibility to create quality applications. Adding these types of checks does not add a lot of time to the cycle, and the more you do it, the fewer issues you will find, allowing you to tighten your testing timelines. Bugs are everywhere, so be careful and test often.

Verizon Email API Insecure Direct Object Reference: Thoughts and Takeaways

It was recently announced that a flaw was identified (and since fixed) in the Verizon API that allowed access to Verizon customers' email accounts. The way this worked was that there was an ID parameter specifying the email account's user ID. If a user supplied a different user's ID, that user's email account would be returned. This is known as an Insecure Direct Object Reference. It was also found that the attacker could not only read another user's email, but also send email from that account. This could be very useful in spear phishing attacks because users are more trusting of emails from their contacts.

Take-Aways


  • Understand the parameters that are used in the application
  • Use a web proxy to see the raw requests and responses for better understanding
  • Create test cases for these parameters that check access to different objects to ensure authorization checks are working properly
  • Implement row-based authorization to ensure the authenticated user can only see their own information

The issue presented is that the API does not check whether the authenticated user has permission to access the specified mailbox. It would appear that it only checks that the user is authenticated. Remember that authentication is the process of identifying who the requesting party is. Authorization is the process of determining what the authenticated user has access to. In this situation, the API should first validate that the user is authenticated, and then, when a request is made for a resource (an email account in this example), verify that the user is authorized to access that account before returning it.
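
A minimal sketch of that authorization check might look like the following Python, with hypothetical mailbox IDs and a stand-in data layer (this is illustrative, not Verizon's API).

```python
# Hypothetical data illustrating the difference between authentication
# (who you are) and authorization (what you may access).
mailbox_owners = {"mbox-1001": "alice", "mbox-1002": "bob"}


class AuthorizationError(Exception):
    pass


def get_mailbox(authenticated_user, mailbox_id):
    owner = mailbox_owners.get(mailbox_id)
    if owner is None:
        raise AuthorizationError("Access denied")  # don't reveal which IDs exist
    if owner != authenticated_user:
        # Authenticated, but not authorized for this object.
        raise AuthorizationError("Access denied")
    return load_mailbox_from_store(mailbox_id)  # assumed data-layer call


def load_mailbox_from_store(mailbox_id):
    return {"id": mailbox_id, "messages": []}
```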

Unfortunately, many APIs are vulnerable to this type of attack because of an assumption that the user can't change the parameter values due to the lack of a user interface. It is imperative that developers and QA testers both use a proxy when testing applications so they can manipulate these types of parameters. This allows testing for unauthorized access to different objects. This is a very simple test case that should be included for every application, not just for APIs. If you see a parameter value, make sure it is being properly tested from a security standpoint. For example, an ID field that is an integer may get tested to make sure the value cannot be any other type of data, but it must also be checked to see whether different values give access to unauthorized data.
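
A QA test for this can be scripted in a few lines. The Python sketch below uses the requests library with a hypothetical endpoint, IDs, and token, routed through a local intercepting proxy; only run something like this against systems you are authorized to test.

```python
import requests

# Hypothetical endpoint and IDs for illustration only.
API = "https://api.example.com/v1/mailbox"
MY_ID = "1001"
OTHER_ID = "1002"

session = requests.Session()
session.headers["Authorization"] = "Bearer <token for MY_ID's account>"  # placeholder
# Route traffic through an intercepting proxy such as Burp or ZAP.
session.proxies = {"https": "http://127.0.0.1:8080"}
session.verify = False  # the proxy presents its own certificate

mine = session.get(API, params={"id": MY_ID})
theirs = session.get(API, params={"id": OTHER_ID})

# A 200 response containing another user's data indicates a missing authorization check.
print(mine.status_code, theirs.status_code)
```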

It was also mentioned that the API didn't use HTTPS for its communication channel. Using HTTP allows others along the communication path to intercept the request and response data, potentially opening the user up to a variety of vulnerabilities. Make sure you are using the proper communication channel to protect your users in your mobile applications as well as your web applications.

OneStopParking Breach: Thoughts and Takeaways

It was recently announced that OneStopParking.com suffered from a data breach exposing customer credit card data. According to the report, the breach occurred due to missing patches in the application’s Joomla install. Apparently the patches caused some problems with the application so they were pushed back. The patches in question were released in September of 2014.

Take-Aways


  • Implement a patch management program
  • Use a web application firewall (WAF) for extended coverage

It is common to come across systems that are not fully patched. Of course, there are a multitude of reasons for these scenarios. In this case, the patch caused a problem with the application, which means the developer has to fix their application before the patch can be applied, or not apply the patch at all. All too often the issue with missing patches is that the company just doesn't have a good patch management process. This is very common when a system uses tools like Joomla or other 3rd party frameworks, because administrators don't even know updates are available. We are seeing these types of frameworks getting better at alerting the system administrator to the availability of updates, which is a step forward.

In the case of OneStopParking.com it is nice to hear that they were aware of the patch. That is half of the battle. Unfortunately for them it apparently didn’t play nice with their application so they were not able to install it, ultimately leading to a breach. This example shows that there are risks to any 3rd party frameworks that you use and sometimes you may be at their mercy when it comes to the patches. It can be a difficult decision to determine if the application should continue running unpatched, or be taken offline until everything is working properly.

It is difficult for outsiders to try and determine the risk vs reward in a situation like this. It is a gamble and unfortunately, in this case, one that didn’t pay off. This is a good lesson in patching and 3rd party frameworks. If you are using these frameworks, make sure that you have a way to track what frameworks are in use and what security risks may arise at any given time. Create a plan to test the patches and apply them when appropriate.

If applying a security patch causes an issue, think about alternative methods for protecting that application until the patch can be applied. One option is to install a web application firewall (WAF) in front of the application that can be configured to protect the feature that has the security flaw. While this may not be a permanent solution, it is often recommended for this type of situation.

MoonPig Take-Aways

It was recently disclosed that there were some security concerns with how Moonpig, an online greeting card company in the UK, utilizes its API for mobile applications. From the public disclosure of a vulnerability found in the API, it may be possible for a user to see other users' personal information, including the last four digits of their credit card number, the expiration date and their name. This is a great opportunity to look at some of the security issues and how they can be avoided in your code.

Take-Aways


For Developers

  • Use strong authentication mechanisms
    • Tamper Protection
    • Anti-Replay
    • Secure Communication
    • Limited Access
  • Don’t use hard-coded passwords
  • Implement authorization checks at the row level for data records
  • Implement brute-force and anti-automation protections
  • Implement certificate-pinning if possible to limit the ability to intercept traffic

For QAs

  • Use a proxy for inspecting traffic to the API
    • This would help identify use of stored credentials
  • Verify authorization checks
    • This is especially important when you see numeric parameters.  Increment/decrement them to see if access is granted to other customers
  • Check for Brute Force protections

Authentication

One of the first things pointed out about how the Moonpig API works is the authentication. The application was making use of Basic Authentication, which is commonly seen in mobile APIs. The good news is that the basic authentication was being performed over a secure channel (HTTPS). The secure channel helps protect the credentials in transit to the server for verification. This protection is important because basic authentication only base64 encodes the username and password. It takes minimal effort to decode base64, although there is a common misconception otherwise. As an example, the username/password of "James:password" is encoded as "SmFtZXM6cGFzc3dvcmQ=". While the resulting output appears to be unreadable, it isn't. Basic authentication also has some drawbacks, such as no brute force or anti-replay protections.
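
You can see how little protection the encoding provides with a couple of lines of Python.

```python
import base64

encoded = base64.b64encode(b"James:password").decode()
print(encoded)                    # SmFtZXM6cGFzc3dvcmQ=
print(base64.b64decode(encoded))  # b'James:password'
```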

It appears that the API in question had some detail about using OAuth2, but it was not implemented.  OAuth provides a great option for authentication for mobile applications.  In some implementations it will include a signature to help prevent tampering of the request, anti-replay functions and other controls that are very useful when authenticating a request.  For more information on OAuth please visit the main OAuth Community Site.

Authorization

The next issue is a lack of authorization when it comes to accessing functionality or data. Authorization is the ability to determine what functionality or data an authenticated user has access to. Authentication is required for authorization to work. In this situation, the ability to view other users' information is a perfect example. The API accepted a customer ID, which happened to be an incremental numeric value, to identify what data to display. By incrementing that customer ID, it could be possible to view a different customer's data. This is common in many applications and is referred to as an Insecure Direct Object Reference. A properly designed application would only allow the authenticated user access to the customer records they control and no others.

Anti-Automation

The final issue to discuss is the lack of brute force protections on the API. In a situation where simply incrementing a parameter value by 1 or some other set amount returns different data, automating that request to scrape lots of data becomes a concern. It would be one thing if an attacker could only grab one record at a time manually, so it would take a long time to get the entire database. It is completely different when a script can run through the 3 million records in a few minutes or an hour. Unfortunately, this can also be difficult to defend against when it is an API. With a user interface, you can throw up a CAPTCHA or some other challenge, but with the lack of an interface that becomes a bit more challenging. This is something that should be thought about when designing your APIs.
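
One common mitigation is rate limiting per API key or account. The sketch below is a simple in-memory sliding-window limiter in Python; a production system would more likely rely on an API gateway or a shared store such as Redis.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60   # assumed window and limit; tune for your traffic
MAX_REQUESTS = 30

request_log = defaultdict(deque)


def allow_request(api_key):
    now = time.time()
    window = request_log[api_key]
    # Drop timestamps that have aged out of the window.
    while window and window[0] <= now - WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS:
        return False  # respond with HTTP 429 and log the event
    window.append(now)
    return True
```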

Certificate Pinning

It is just an assumption that the application doesn't use certificate pinning, because the traffic was intercepted and modified. While it is possible to bypass certificate pinning, I didn't get that impression. The purpose of certificate pinning is to have the app only accept trusted certificates. When this is configured, it is much more difficult to use a certificate generated by the proxy an attacker might use for their man-in-the-middle attempts.

While certificate pinning is a great control, it is not always the right choice. If your users are within a corporate environment that sends all traffic through a proxy that may strip or modify the SSL connection, then pinning might not be feasible.
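
For a sense of what pinning boils down to, here is a hedged Python sketch that compares the server's leaf certificate fingerprint against a pinned value; in a real mobile app this is normally configured in the platform's HTTP client rather than written by hand, and the fingerprint shown is a placeholder.

```python
import hashlib
import socket
import ssl

HOST = "api.example.com"  # hypothetical host
PINNED_SHA256 = "replace-with-your-certificate-sha256-fingerprint"


def fingerprint_matches(host, pinned):
    """Return True only if the presented certificate matches the pinned hash."""
    context = ssl.create_default_context()
    with socket.create_connection((host, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)
    actual = hashlib.sha256(der_cert).hexdigest()
    return actual == pinned  # refuse to send requests if this is False
```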

Conclusion

We can learn something from all of the security incidents that we see. Although a breach may not affect you directly, take the time to understand what happened so you can ensure the same things are not happening within your organization.