Tag Archives: security awareness

XSS in a Script Tag

Cross-site scripting is a pretty common vulnerability, even with many of the new advances in UI frameworks. One of the first things we mention when discussing the vulnerability is the importance of understanding the context: is it HTML, attribute, JavaScript, etc.? Knowing the context helps us determine the types of characters that can be used to expose the vulnerability.

In this post, I want to take a quick look at placing data within a <script> tag. In particular, I want to look at how embedded <script> tags are processed. Let’s use a simple web page as our example.

<html>
	<head>
	</head>
	<body>
	<script>
		var x = "<a href=test.html>test</a>";
	</script>
	</body>
</html>

The above example works as we expect. When you load the page, nothing is displayed. The link tag embedded in the variable is treated as a string, not parsed as a link tag. What happens, though, when we embed a <script> tag?

<html>
	<head>
	</head>
	<body>
	<script>
		var x = "<script>alert(9)</script>";
	</script>
	</body>
</html>

In the above snippet, nothing actually happens on the screen, meaning the alert box does not trigger. This often misleads people into thinking the code is not vulnerable to cross-site scripting: if the link tag was not processed, why would the script tag be? In many situations, the understanding is that we need to break out of the (") delimiter to start writing our own JavaScript commands. For example, if I submitted a payload of (test";alert(9);t = "), it would break out of the x variable and add new JavaScript commands. Of course, this doesn’t work if the (") character is properly encoded to prevent breaking out.
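To illustrate, assume the value of x is reflected from user input without encoding. Submitting that payload would produce the following script block, and the alert would fire because the injected statements are now part of the code:

	<script>
		var x = "test";alert(9);t = "";
	</script>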

Going back to our previous example, we may have overlooked something very simple. The script wasn’t failing to execute because it wasn’t being parsed. Instead, it wasn’t executing because our JavaScript was broken. The HTML parser ends a script block at the first closing script tag it sees, even inside a string, so attempting to open a <script> within a <script> left us with invalid JavaScript. What if we modify our value to the following:

<html>
	<head>
	</head>
	<body>
	<script>
		var x = "</script><script>alert(9)</script>";
	</script>
	</body>
</html>

In the above code, we are first closing out the original <script> tag and then starting a new one. This removes the nested script issue, and when the page is loaded, the alert box will appear.

This technique works in many places where a user can control the text returned within the <script> element. Of course, the important remediation step is to make sure that data is properly encoded when returned to the browser. Content Security Policy may not be an immediate solution since this situation would indicate that inline scripts are allowed. However, limiting inline scripts to ones with a registered nonce would help prevent this technique. This reference shows how to set the nonce: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/script-src.
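As a rough sketch of the encoding idea (the exact encoder depends on your framework, so treat this as illustrative rather than a specific API): escape the < character when writing data into a JavaScript string so the HTML parser never sees a literal closing script tag. Encoded this way, the earlier payload becomes harmless:

	<script>
		// Hypothetical server-side encoder output for the close-tag
		// payload shown earlier. At runtime the string still holds the
		// original text, but the HTML parser never encounters a literal
		// closing script tag, so the block is not terminated early.
		var x = "\u003C/script>\u003Cscript>alert(9)\u003C/script>";
	</script>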

When testing our applications, it is important to focus on the lack of output encoding and less on the ability to fully exploit a situation. Our secure coding standards should identify the types of encoding that should be applied to outputs. If the encoding is not properly implemented, then we cite a violation of our standards.

Equifax Take-aways

By now, you must have heard about the Equifax breach that may have affected the information of up to 143 million people. At this point, I don’t think they can confirm exactly how many records were actually compromised, so they are going with the larger number just to be safe. While many are quick to jump to conclusions and attempt to Monday morning quarterback what they did or didn’t do to get breached, I like to focus on what we can learn for our own organizations. There are a few topics I want to discuss that hopefully will be useful within your organization.

Patching

Well, it appears pretty clear that the avenue of attack was a missing Struts patch on the server. The patch was apparently released a few months prior to the attack, or at least prior to the acknowledgement of the attack. On the surface, patching appears to be a pretty easy task. A patch is released, you apply it.

Simple, right?

Patching is actually much more complex than that. It may be that simple when you have a single system to maintain with very few software packages. Unfortunately, that is not the reality for most places. Many organizations are dealing with hundreds or even thousands of systems they attempt to keep fully patched. That is a pretty big task, even if there were no other variables. Automate it, they say. Sure, automation can be done, and needs to be done, because how can anyone patch that many systems in a reasonable time frame manually?

There are other factors to consider. First, let’s consider that there are many different types of patches. You have patches for the operating system, patches for applications, patches for frameworks, even patches for client-side libraries. Does your automation cover all of these sources? Some software has automatic update capabilities and will update on its own. Other software requires that you explicitly go out, download the patch, and apply it.

Second, you have custom written applications with millions of lines of code pulling in multiple frameworks and packages to make development easier. It would be foolish to apply the patch without testing it first. This becomes more of a challenge with application patches because the entire application needs to be retested. This is more than a test to make sure the computer still boots. This needs to make sure that all of the functionality, especially the functionality around the patched component, still works properly. The testing alone can be time consuming. Add to that the possibility that the patch makes other changes within the code that break something. How badly does it break? How much code needs to be rewritten for your custom code to work correctly again? Does that component have other components that depend on that version? Does this end up affecting other components?

Finally, who is tasked with patching the systems? Is this defined within the business? Are the same people that apply OS patches to the server responsible for the application component patches? How do they track those types of patches? Do they need the go-ahead from the application team before the patch is implemented?

As you can see, there are a lot of factors that go into applying what may appear to be a simple patch. What it highlights to me is the importance of understanding what components our application uses, how they interact with each other, and understanding how patches are applied when made available. Worst case scenario, we didn’t even know a patch was released.

Patching, however, is just one control for helping protect our systems, similar to how input validation is a control to help with injection attacks. We shouldn’t rely on it alone. The Equifax breach shows well that we must have other controls in place in the event one control breaks down.

Encryption

I hear a lot of people saying the data should have been encrypted. That is an easy statement to make, but without more details on how the data was actually accessed, it is not very helpful. Hopefully, your organization has a data classification policy, and hopefully that policy describes how each class of data should be protected. If you have not seen this policy, ask for it.

Now that we know some data needs to be encrypted, what is the right method to use? Should we use disk encryption or column-level encryption? Should we use tokenization? They each have their pros and cons. Maybe the answer is you have to implement all of them, just to be safe, but how might that affect your ability to have a high-performing, functional application?

You may decide to implement disk encryption for your database. That is a good step in the event that someone is able to steal the actual files of the database. It doesn’t help much, though, if the application has a vulnerability that allows an attacker to enumerate through the data. Column-level encryption can be similar: application flaws can often bypass the encryption if it is incorrectly implemented. I guess, at the very least, you get to say the data was encrypted.

The point with encryption is to make sure you know what you are doing and how you are implementing it. Which attack vectors will it protect against, and which ones may still be vulnerable? If you are going to take the time to implement it, it is important to make the best use of it.

Auditing and Logging

Auditing and logging are important parts of the security of an application. They help us see and act upon events that may be malicious. How do you get visibility into 3rd party components, like Struts, to see what they are doing? Are you relying on system event logs if the component throws an exception? Within our own applications we can use logging to identify queries run, data accessed, and authorization failures. When a system gets compromised, that application logging alone may not be enough. It may be a combination of system and application events that helps identify an attack as it is happening or after the fact. This is a great reminder that logging mechanisms can cross boundaries, and this needs to be reviewed. Take a moment to look at how your applications and your web server are configured to identify potential malicious attacks. Consider different attack scenarios and see how those may get logged and if/when someone might see them.

Risk Management

Businesses run on the concept of taking risks. Sometimes this works in favor of the organization, sometimes not. In order to make better decisions, they must understand the risks they face. In a situation like this, we know there may be a patch available for a platform. The patch is critical since the vulnerability it fixes allows remote code execution. But what was known about the risk? What applications on that server were affected? What type of data did those applications maintain? Where does that application fit into our business model? Oftentimes we don’t look at the real details of a vulnerability or risk; rather, we focus on the numbers. A vulnerability on a system with no records or access is very different from one on a system that holds all of your sensitive customer data.

Don’t mistake this as an alternative to patch management. It is, however, a reality that in the midst of doing business, decisions will be made and not all of them will be popular. When working in your organization, think about the information you may be providing in regards to the decision making process. Is it sufficient? Does it tell the whole story?

Wrap Up

Companies are always at risk of being breached. As we see new breaches appear in the news we need to take a little time to skip the hype and personal opinions, and take a look at what it means to our programs. Look for the facts of what happened, how decisions may have been made, and the effect those had on the organization. Then apply that to your organization. Maybe you learn a new perspective on how a vulnerability can be used. Maybe you see a control that was bypassed that you also use and you want to review how your processes work. In any case, there are lessons we can learn from any situation. Take those and see how they can be used to help your processes and procedures to provide security in your organization.

Blue Cross Mails USB sticks – Take-Aways

You have information you want to share with your customers, but how do you do it securely? How often have you heard not to click links sent via email? You shouldn’t plug random USB drives into your computer. From a marketing perspective, how do you get large amounts of information, such as videos and specific details, out to your customers?

In a report by Fierce Healthcare (http://www.fiercehealthcare.com/privacy-security/bcbs-alabama-re-evaluates-usb-marketing-campaign-amid-security-concerns), it appears that BCBS of Alabama thought that sending out some fancy USB drives with benefit information was the right choice. Apparently the drives contained videos and other information about the company’s benefits.

As you can guess, there was lots of skepticism around the method. We are taught in many security awareness training sessions to never plug untrusted USB drives into our devices. This is a common tactic used during penetration tests where they drop USB drives in the parking lot or in public places to see if someone will plug it in. Once plugged in, it may allow external access to the system for the attacker. Of course, this depends on the controls in place on the system it was plugged into.

You might wonder why we still see USB drives passed out at conferences. People still do this, and you might consider those drives trusted because you received them in person. A company representative handed it to you, so it must be safe, right? Well... not necessarily. You should still proceed with caution.

In regards to mailing the drive, there appears to be an assumption of trust. This may be due to the fact that it was physically sent to you, vs. an item that was emailed. It came through the mail system, it had the corporate branding, even the right return address. The key factor is that all of those things can be spoofed. It is simple to create a letter with a company logo and branding and to set the return address to something other than your personal address. The mail system isn’t designed to verify trust in who sent a letter. Instead, it is meant to hopefully put trust in the fact that if you send mail, it will arrive at its destination.

Take Aways

When we analyze the situation, it helps us decide how to better review our systems to understand our risks and controls. There are a few things you can do to help reduce the risk of these types of potential attacks or situations.

Review your security awareness training to see how it covers USB drives and the policies around them. Are your users trained on how to handle a situation where they receive a USB drive from an untrusted source? Do they know who to contact to make sure it is properly analyzed before they attempt to use it?

Work with your marketing teams to determine how different campaign types work and which ones are acceptable. They are typically not considering the security aspect of every option, and helping provide some insight goes a long way. It is not a matter of someone purposely trying to do something insecurely, but rather a situation where someone doesn’t have the exposure. Talk through the different scenarios and why they raise the risk level within the organization or to the clients.

Review any technical controls in place within the organization surrounding plugging devices into the computers. Do you have controls to block these devices? Is that control limited to storage devices? Remember that a USB drive can also be a Human Interface Device (HID), which presents itself not as a storage device, but as a keyboard. These HID devices often bypass limitations on other USB drive types, allowing code execution on the system.

Identifying alternative methods means calculating the risk of each one and picking the best choice. As an alternative, the campaign could have been an email campaign with a link in it for users to click. The risk is that people are taught not to click links in emails. Another option could have been to send the mail, but instead of the USB Drive, include a link for the user to type into their browser. It is recommended to use a link that goes to your domain, not a shortened URL. This provides more trust in that the destination is not hidden. Take the time to consider alternatives and the risks for each one.

Validation: Client vs. Server

Years ago, I remember being on a technical interview phone call for a senior developer position. What stood out was when the interviewer asked me about performing input validation. The question was whether validation should be on the client or the server. My answer: the server.

What took me by surprise was when the response was that my answer was incorrect. In fact, I was told that Microsoft recommends performing validation on the client. This was inaccurate information, but I let it go and continued with the interview.

Recently, I have been having more conversations around input validation, in particular the question of client side versus server side. While it is easy to state that validation should always be performed on the server, let’s dig into this a little more to better understand your situation.

From a pure security perspective, input validation must be performed on the server. There is one simple reason for this: Any protections built using client-side techniques can be bypassed by using a simple web proxy. Using JavaScript to enforce that a field contains an email address can be easily bypassed by intercepting the request and changing it after the JavaScript has executed.

If you look at the threat model of your application, requests from the client to the server cross a trust boundary. Because of this trust boundary we know that the data needs to be validated again. Why? There is no way to know what happened to the data before it was received. We can assume the request was sent from a browser, used by a typical user. However, we don’t know if the data was manipulated after leaving the browser, or even sent from a browser at all.

That, however, is from a strict security standpoint. We must not forget that client-side validation serves a purpose as well. While client-side validation may not be trusted by the server, it tends to be more focused on immediate feedback to the user. Not only does this save a round trip, or many round trips, to the server, it cuts down on the processing the server needs to handle.

If we take the example of validating required fields on a form, we can immediately see the benefit of client-side validation. Even a small form, if incomplete, can create a lot of inefficiency if the user is constantly posting it without all of the required fields. The ability to alert to this on the client makes the feedback much quicker and cuts down on the number of invalid requests to the server.

Of course, this doesn’t mean that the user can’t fill in all the required fields to pass the client-side validation, intercept the request, and then remove some of those fields. In this case, server-side validation would catch this. The goal, however, of client-side validation is to provide a reactive user interface that is fast.
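As a minimal sketch of the two layers (the form, the /signup route, and the field names are made up; the server side uses Node/Express purely for illustration):

	<!-- Client side: fast feedback only. A proxy can strip or bypass this. -->
	<form action="/signup" method="post">
		<input type="email" name="email" required>
		<button type="submit">Sign Up</button>
	</form>

	// Server side: the authoritative check, because we cannot know
	// what happened to the request after it left the browser.
	const express = require('express');
	const app = express();
	app.use(express.urlencoded({ extended: false }));

	app.post('/signup', (req, res) => {
		const email = req.body.email || '';
		// Re-validate on the server; a simple format check for illustration.
		if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)) {
			return res.status(400).send('Invalid email address.');
		}
		res.send('Thanks for signing up!');
	});

	app.listen(3000);

Notice the client copy exists purely for user experience; only the server copy is a security control.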

Understanding how each validation location functions and what the real purpose is helps us identify when to use each. While server-side validation is always required, client-side validation can be a great addition to the application.

Sub Resource Integrity – SRI

Do you rely on content distribution networks or CDNs to provide some of your resources? You may not consider some of your resources in this category, but really it is any resource that is provided outside of your server. For example, maybe you pull in the jQuery JavaScript file from ajax.googleapis.com rather than hosting the file on your server.

These CDNs provide a great way to give fast access to these resources. But how do you know you are getting the file you expect?

As an attacker, if I can attack many people instead of just one, I have a better chance of success. A CDN provides a central location to potentially affect many applications, rather than targeting just one. Would you know if the CDN modified that file you are expecting?

Enter Sub Resource Integrity, or SRI. SRI provides the ability to validate a resource against a predetermined hash. It is common for websites that provide files for download to list a hash so you can verify the file is not corrupt. After downloading the file, you would compute the hash using the same algorithm (typically MD5) and then compare it to the hash listed on the server.

SRI works in a similar way. To implement it, you create a hash of the expected resource using a specified hashing algorithm. Then, you add an integrity attribute to your resource, whether it is a script element or a stylesheet. When the browser requests the resource, it computes the hash, compares it to the integrity attribute, and if they match, loads the resource. If they do not match, the resource is not loaded.

How it works

Let’s look at how we would implement this for jQuery hosted at Google. We will be including the reference from https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js

Initially, we might start by just creating a script tag with that as the source. This will work, but doesn’t provide any integrity check. There are a few different ways we can create the digest. An easy way is to use https://www.srihash.org/. The site provides a way to enter the URL of the resource, and it will create the script tag for you.

Another option is to generate the hash yourself. To do this you will start by downloading the resource to your local system.

Once the file is downloaded, you can generate the hash by executing the following command:

openssl dgst -sha384 -binary Downloads/jquery.min.js | openssl base64 -A

Make sure you change Downloads/jquery.min.js to your downloaded file path. You should see a hash similar to:

xBuQ/xzmlsLoJpyjoggmTEz8OWUFM0/RC5BsqQBDX2v5cMvDHcMakNTNrHIW2I5f

Now we can build our script tag as follows (don’t forget to include the hashing algorithm in the integrity attribute):

<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js" integrity="sha384-xBuQ/xzmlsLoJpyjoggmTEz8OWUFM0/RC5BsqQBDX2v5cMvDHcMakNTNrHIW2I5f" crossorigin="anonymous"></script>

Notice that there is a new crossorigin attribute as well. This is set to anonymous to allow CORS to work correctly. The CDN must have CORS set up to allow the integrity check to occur.

If you want to test the integrity check out, add another script tag to the page (after the above tag) that looks like the following:

<script>alert(window.jQuery);</script>

When the page loads, it should alert with some jQuery information. Now modify the integrity value (I removed the last character) and reload the page. You should see a message that says “undefined”. This means that the resource was not loaded.

Browser support is still not complete. At this time, only Chrome, Opera, and Firefox support this feature.

Handling Failures

What do you do if the integrity check fails? You don’t want to break your site, right? Using the code snippet we tested with above, we could check to make sure it loaded, and if not, load it from a local resource. This gives us the benefit of using the CDN most of the time and falling back to a local resource only when necessary. The following may be what the updated script looks like:

<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js" integrity="sha384-xBuQ/xzmlsLoJpyjoggmTEz8OWUFM0/RC5BsqQBDX2v5cMvDHcMakNTNrHIW2I5f" crossorigin="anonymous"></script>
<script>window.jQuery || document.write('<script src="/jquery-3.2.1.min.js"><\/script>')</script>

When the integrity check fails, you can see the local resource being loaded in the image below:

[Image: SRI-1 – the local fallback resource being loaded]

If you are using resources hosted on external networks, give some thought to implementing SRI and how it may benefit you. It is still in its early stages and not supported by all browsers, but it can certainly help reduce some of the risk of malicious files delivered through these networks.

Jardine Software helps companies get more value from their application security programs. Let’s talk about how we can help you.

James Jardine is the CEO and Principal Consultant at Jardine Software Inc. He has over 15 years of combined development and security experience. If you are interested in learning more about Jardine Software, you can reach him at james@jardinesoftware.com or @jardinesoftware on Twitter.

Sharing with Social Media

Does your application provide a way for users to share their progress or success with others through social media? Are you thinking about adding that feature in the future? Everyone loves to share their stories with their friends and colleagues, but as application developers we need to make sure that we are considering the security aspects of how we go about that.

Take-Aways

  • Use the APIs when talking to another service
  • Don’t accept credentials to other systems out of your control
  • Check with security to validate that your design is ok

This morning, whether true or not (I have not registered for the RSA conference), there was lots of talk about the RSA registration page offering to post a message to your Twitter account about attending the RSA conference. Here is a story about it. The page asks for your Twitter username and password so it can post a message to your Twitter account. Unfortunately, that is the wrong way to request access to post to a social media account.

Unfortunately, even if you have the best intentions, this is going to come across in a negative way. People will start assuming you are storing that information and that you now have access to important people’s Twitter accounts going forward. Maybe you do, maybe you don’t; the problem is that no one knows what happened with that information.

Just about every social media site out there has APIs available and supports OAuth or another authorization mechanism to perform this type of task. By using the proper channel, the user is redirected to the social media site (Twitter in this instance) and, after authenticating there, provides authorization for the other site to post messages as the registered user.

Using this technique, the user doesn’t have to give the initial application their social media password, instead they get a token they can use to make the post. The token may have a limited lifetime, can be revoked, and doesn’t provide full access to the account. Most likely, the token would only allow access to post a message. It would not provide access to modify account settings or anything else.
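As a rough, provider-neutral sketch of that flow (the endpoint, client ID, and scope names here are illustrative, modeled on a typical OAuth 2.0 authorization code flow, not any specific social media API):

	// Step 1: Send the user to the provider's authorization page.
	// They authenticate there; we never see their password.
	const authorizeUrl = 'https://provider.example/oauth/authorize'
		+ '?response_type=code'
		+ '&client_id=YOUR_CLIENT_ID'
		+ '&redirect_uri=' + encodeURIComponent('https://yourapp.example/callback')
		+ '&scope=post_message'; // request only the permission needed

	// Step 2: The provider redirects back to /callback with ?code=...
	// Step 3: Our server exchanges that code for a limited, revocable
	// access token and uses that token to post on the user's behalf.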

If you are looking to integrate or share with social media sites, take the time to determine the right way to do it. This is really important when it involves access to someone else’s account. Protect the user and yourself. Don’t just take the easy way out and throw a form on the screen. Understand the architecture of the system and the security that needs to be in place. There are a lot of sites that allow sharing with social media; look at how they do it. When in doubt, run your plan by someone else to see if it looks like the right way to do it.

Untrusted Data: Quick Overview

In the application security community it is common to talk about untrusted data. Discuss any type of injection attack (SQLi, XSS, XXE, etc.) and one of the first terms mentioned is untrusted data. In some cases it is also known as user data. While we hear the phrase all the time, are we sure everyone understands what it means? What is untrusted data? It is important that anyone associated with creating and testing applications understands the concept of untrusted data.

Unfortunately, it can get a little fuzzy as you start getting into different situations. Let’s start with the most simplistic definition:

Untrusted data is data that is controlled by the user.

Client-Side

While this is very simple, it is often confusing to many. When we say controlled by the user, what does that really mean? Some people stop at data in text boxes or drop-down lists, overlooking hidden fields, request headers (cookies, referer, user agent, etc.), and more. And that is just on the client side.

From the client perspective, we need to include any data that could be manipulated before it gets to the server. That includes cookie values, hidden form fields, user agent, referer field, and any other item that is available. A common mistake is to assume if there is no input box on the page for the data, it cannot be manipulated. Thanks to browser plugins and web proxies, that is far from the truth. All of that data is vulnerable to manipulation.

Server-Side

What about on the server side? Can we assume everything on the server is trusted? First, let’s think about the resources we have server-side. There are configuration files, file system objects, databases, web services, etc. What would you trust out of these systems?

It is typical to trust configuration files stored on the file system of the web server. Think of your web.xml or web.config files. These are typically deployed with the application files and are not easy to update. Access to those files in production should be very limited, and it would not be easy to open them up to others for manipulation. What about data from the database? Often I hear people trusting the database, and that is a dangerous option. Let’s take an example.

Database Example

You have a web application that uses a database to store its information. The web application does a good job of input validation (type, length, etc.) for any data that can be stored in the database. Because the web application does good input validation, it skimps on output encoding, assuming the data in the database is good. Today, maybe no other apps write to that database. Maybe the only way to get data into that database is either via a SQL script run by a DBA or through the app with its good input validation. Even here, there are weaknesses. What if the input validation misses an attack payload? Sure, the validation is good, but does it catch everything? What if a rogue DBA manipulates a script to put malicious data into the database? It could happen.

Now, think about the future, when the application owner requests a mobile application that uses the same database, or decides to create a user interface for data that was previously not available for update in the application. Now, data that was thought to be safe (even though it probably wasn’t) is even less trusted. The mobile application or other interfaces may not be as stringent as the original application.

The above example has been seen in real applications numerous times. It is a good example of how what we might think at first is trusted data really isn’t.

Web services are similar to the database. Even internal web services should be considered untrusted when it comes to the validity of the data being returned. We don’t know how the data on the other side of that service is being handled, so how can we trust it?

Conclusion

When working with data, take the time to think about where that data is coming from and whether or not you are willing to take the risk of trusting it. Think beyond what you can see. It is more than just fields on the form that users can manipulate. It is more than just the client-side data, even though we use terms like user controlled data. There is a lot of talk about threat modeling in application security, and it is a great way to help identify these trust boundaries. Using a data flow model and showing the separation of trust can make it much easier to understand what data we trust and what we don’t. At the bottom of this article is a very basic, high level, rough draft data flow diagram that shows basic trust boundaries for some of the elements I mentioned above. This is just an example and is not indicative of an actual system.

When it comes to output encoding to protect against cross-site scripting, or proper encoding for SQL, LDAP, or OS calls, the safest approach is to just perform the encoding. While you may trust a value from the web.config, that doesn’t mean you can’t still output encode it to protect from XSS. From a pure code perspective, that is the most secure code. Assuming the data is safe, even when it is, does increase risk to some level.
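A minimal sketch of what that looks like for the HTML context (illustrative only; prefer your framework’s built-in encoder over rolling your own):

	// Encode on output, even for values we think we trust.
	function htmlEncode(value) {
		return String(value)
			.replace(/&/g, '&amp;')
			.replace(/</g, '&lt;')
			.replace(/>/g, '&gt;')
			.replace(/"/g, '&quot;')
			.replace(/'/g, '&#x27;');
	}

	// Whether the value came from the user, the database, or web.config,
	// encode it before writing it into the page.
	element.innerHTML = htmlEncode(valueFromDatabase);

Here element and valueFromDatabase are stand-ins for whatever your application actually renders.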

If you are new to application security or training others, make sure they really understand what is meant by untrusted data. Go deeper than what is on the surface. Ask questions. Think about different scenarios. Use stories from past experience. Once the concept is better understood, it makes triaging flaws from the different automated tools much easier and should reduce the number of bugs going forward.

Example Data Flow Diagram Showing Trust Boundaries
(This is only an example for demonstration purposes.)

[Diagram: Ex. Data Flow]

HIV Clinic Data Breach: Thoughts and Takeaways

One of the most common ways for sensitive information to be released outside of an authorized environment is through simple, common mistakes made by employees. These types of incidents usually have no malicious intent and are generally innocent in nature. An example of this was recently reported regarding a newsletter sent to HIV patients (and others) in which the sender made a simple mistake: rather than using BCC for each recipient’s address, they used the CC field. For those that may not realize, addresses listed as BCC (blind carbon copy) are hidden from recipients, as opposed to the CC field, which is shown to all recipients.

Think about any mass emails you may be a part of and which ones use the CC field instead of the BCC field. I am on a few that share my information with the rest of the list. In many cases, this may not be that big of a concern, but in a health-related situation like this one, it becomes more severe. It becomes a privacy and compliance matter, as it deals with HIPAA and personal health information.

Is the solution as straightforward as creating a procedural checklist to ensure that BCC is used instead of CC? This may work, but it still leaves the opportunity for someone on a tight deadline to skip the checklist and make the same mistake again. Right after an incident we are all more alert, but as time goes on that awareness slips to the wayside.

A company could engage a 3rd party mailer, like MailChimp, to do their newsletter mailings. This route raises different concerns because you are placing your critical or private data, the patients related to the health issue, in the hands of a 3rd party. If that vendor suffers a breach you will incur some risk there as well. Different vendors have different policies and security practices, so if you are thinking about taking that option make sure you understand what is and is not offered.

There may be add-ons for your mail program that can help send newsletters individually, rather than as a bulk email. One such solution for Microsoft Outlook is Send Individually created by Sperry Software. (Full disclosure, I used to work with Sperry Software, but I am not compensated by mentioning their product. I am not a reseller nor do I have any affiliation at this time) There may be other add-ins by other vendors that can do this as well.

Whichever direction you go, make sure that you are reviewing your processes and the risks they expose. This type of human error is easy to make, but quick to be crucified for. Don’t cut corners due to tight timelines, and have another person review before sending anything externally. Sometimes a second pair of eyes can catch the simplest of mistakes that are so easily overlooked by the original writer. It is important that we take time to understand these situations and learn from them. Attention to detail can save a lot of hassle in the future.

Tips for Securing Test Data (Scrubbing?)

An application typically has multiple environments from development through to full production. It is rare to find an application that doesn’t use some form of data. Some applications may use just a little data with a very simple database, while others may have very complex database schemas with a lot of data. Developers usually load just enough data to test the features/functions being implemented in the current iteration. Production systems contain actual customer information which may be very sensitive in nature. Finally, we have the test environments. These environments need to be fully functional, requiring lots of data, but where should the data come from?

In many cases it is common to see data from production copied into the test environments. Because many test systems have fewer security controls in place, this may inadvertently expose sensitive data. In addition to securing the environment, here are a few tips to help protect sensitive data when populating lower-level environments.

  • Don’t Use Production Data
  • Disassociate Sensitive Information
  • Remove Sensitive Information

Don’t Use Production Data
The safest solution is to not use actual production data in any other environments. Like any other security control, if you don’t have the information you have less risk. While this data may be most realistic to indicate how the system is used, it often comes with a high risk exposure. There are benefits to using scripts to generate test data because it is less likely to contain sensitive information and it can be easier to make test automation more successful. It is also possible to script in values that may be edge cases or less common in real data that can help enable better test cases.
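As a small sketch of that idea (the fields and values are made up), a generation script can produce predictable rows and deliberately include edge cases:

	// Generate obviously fake rows, including edge cases (apostrophes,
	// hyphens, very long names) that rarely show up in real data.
	const edgeCaseNames = ['Test', "O'Brien", 'Ann-Marie', 'A'.repeat(50)];
	const rows = edgeCaseNames.map((name, i) => ({
		firstName: name,
		lastName: 'User' + i,
		phone: '999-555-' + String(i).padStart(4, '0'),
		email: 'test.user' + i + '@example.com',
	}));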

Disassociate Sensitive Information
If you have to (or decide to) use data from production, one option is to make sure sensitive data is disassociated. There are many ways to do this, depending on how your system works. Some places will keep the real values but shuffle the columns, re-arranging the data so the values in any given row are no longer related. The following table shows the initial data (note: this data is made up):

First Name   Last Name   Tax ID        Phone
John         Smith       333-33-3333   904-555-6588
Debra        Jones       111-11-1111   301-555-2395
Jason        Walker      999-99-9999   011-138-9443

The following table shows the same data from above, but it has been disassociated. Notice how the data is no longer related to any specific person. Keep in mind that the data here is a very small sample so the combinations to get the real data would not be that difficult. However with a large dataset, it could be enough to help slow an attacker.

First Name   Last Name   Tax ID        Phone
John         Walker      111-11-1111   904-555-6588
Debra        Smith       999-99-9999   011-138-9443
Jason        Jones       333-33-3333   301-555-2395

Depending on the features of the system, this may not be ideal. Imagine that the system actually sends emails or ships items. Of course you have disabled these features so they don’t actually function in test, right? Either way, if those features run against this shuffled data, real phone numbers, addresses, and email addresses could lead to an incident. Customers would be confused if they received a notification containing some of their information mixed with someone else’s: a headache you don’t want to deal with. On another note, things like email addresses can be self-identifying all by themselves. That information may need to be removed or further mangled to protect your users’ identities.
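Going back to the shuffle idea itself, here is a minimal sketch of how it might be scripted (illustrative; rows stands in for your extracted production records):

	// Shuffle each sensitive column independently (Fisher-Yates) so no
	// row keeps its original combination of values.
	function shuffle(values) {
		const copy = [...values];
		for (let i = copy.length - 1; i > 0; i--) {
			const j = Math.floor(Math.random() * (i + 1));
			[copy[i], copy[j]] = [copy[j], copy[i]];
		}
		return copy;
	}

	const lastNames = shuffle(rows.map(r => r.lastName));
	const taxIds = shuffle(rows.map(r => r.taxId));
	const phones = shuffle(rows.map(r => r.phone));

	const disassociated = rows.map((r, i) => ({
		...r,
		lastName: lastNames[i],
		taxId: taxIds[i],
		phone: phones[i],
	}));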

Remove the Sensitive Data
One option is to remove any data considered to be sensitive. It is important to check the corporate guidelines or data classifications for the specific requirements for sensitive information. The sensitive data could be replaced with generic placeholder data. For example, replace all phone numbers with (999)999-9999 or all emails with test@sometestexample.com.

It can be more difficult when a sensitive field is used as a search field or a unique identifier. If the phone number is used as a search field, setting all phone numbers to the same value won’t work well in the test environment because you can’t really test the search feature: it would return everything or nothing, neither of which is a desired test case.
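One way around that (a sketch; the field names are made up, and the rules should come from your data classification policy) is to generate values that are unique but obviously fake, so search and uniqueness constraints still behave sensibly:

	// Replace sensitive fields with unique, clearly fake values so
	// features like search-by-phone still work in test.
	function scrubRecord(record, index) {
		return {
			...record,
			taxId: '999-00-' + String(index).padStart(4, '0'),
			phone: '999-555-' + String(index).padStart(4, '0'),
			email: 'test.user' + index + '@example.com',
		};
	}

	const scrubbed = productionRows.map(scrubRecord);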

Check with your internal security office to understand the policies and procedures that are in place regarding production data. If no policies exist, work with the security team to help define them. By working together it is possible to understand the risks and hopefully reduce them. Determine a procedure that will work in your situation.

Tips for Securing Test Servers/Devices on a Network

How many times have you wanted to see how something worked, or thought it looked really cool, so you stood up an instance on your network? You are trying out Jenkins, or you stood up a new Tomcat server for some internal testing. Do you practice good security procedures on these systems? Do you set strong passwords? Do you apply updates? These devices or applications are often forgotten by the person that stood them up, and probably unknown to the security team.

It may seem as though these systems are not critical, or even important, because they are just for testing or don’t touch sensitive information. It is common to hear that they are internal, so an attacker cannot get to them. The reality is that every system or application on a network can be an aid to an attacker. No matter how benign the system may seem, it adds to the attack surface area.

What Can You Do?

There are a few things to think about when any type of application server or device is added to the network.

  • Change Default Passwords
  • Apply Updates
  • Remove Default Files
  • Decommission Appropriately

Change Default Passwords

While this seems simple, it is amazing how often the default password is still in use on systems on a network. This goes beyond unused or rogue systems to many other production devices. It only takes a moment to change the password on a device. It may not seem like a big deal, but a quick Google search for default passwords for just about any device or COTS application yields quick results.

Fortunately, many recent systems no longer ship with default passwords; instead, they force you to set a password during setup. This is a great step in the right direction. It is also a good idea to change the default administrator account name if possible. This can make it a little more time consuming for an attacker to brute force the password if they don’t know the user ID.

If you develop software or devices that get deployed to customers you should be thinking about how the setup process works. Rather than setting a default password, have the user create one during the setup process.

Apply Updates

One of the most critical controls for security is patching. Many organizations have patching procedures for the systems and software they know about, but if you stand up an unknown device it may never get patched. Software patches oftentimes contain security fixes, some of which are critical. Make sure you are keeping the system updated to help keep everyone safe. It is also a good idea to let the team that handles patching and system maintenance know about the new application or device.

Remove Default Files

If the application is deployed with default or example files, it may be a good idea to remove them. These types of files, meant only for testing purposes, are commonly not very secure. Removing them helps tighten the security of the system, leading to a more secure network.

Decommission Appropriately

If you are done using the system, remove it. I can’t tell you how many times I have found a system that hadn’t been used in months or even years because it was stood up just to try something out. No one even remembered it, security didn’t know about it, and it was very vulnerable. By removing it, you no longer have to worry about patching it or its default passwords. It reduces the attack surface area and limits an attacker’s ability to elevate their privileges.

Is the Risk Real?

You bet it is. Imagine you have left an old Tomcat server on the network with default credentials (tomcat/tomcat) or something similar. An attacker is able to get onto the internal network; let’s just assume that a phishing attack was successful. I know... like that would ever happen. They log into the Tomcat management console and deploy a WAR file containing a web shell.
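As a quick illustration (the host and port here are made up), checking for this condition can be as simple as requesting the manager application with the default credentials:

# If this returns the manager page instead of a 401, the default
# credentials still work and an attacker can deploy their own WAR.
curl -u tomcat:tomcat http://10.0.0.25:8080/manager/html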

I have a video that shows deploying Laudanum in just this manner that can be found here.

Now that the attacker has a shell available, he can start running commands against the operating system. Depending on the permissions the Tomcat user runs under, it is possible he is running as an admin. At this point, he can create a new user on the system and even elevate that user to be an administrative user. Maybe RDP is enabled and remote login is possible. At the very least, it will be possible to read files from the system. This could lead to getting a Meterpreter shell, stealing administrative hashes, and even gaining domain admin access if a domain admin has logged into that system.

That is just one example of how your day may go bad with an old system sitting on the network that no one is maintaining. The point is that every system on the network needs to be taken care of. As a developer who may be looking to try a new application out, take some time to think about these risks. You may also want to talk to your security team (if you have one) about the application to see if there are known vulnerabilities. Let them know that it is out there so they may help keep an eye out for any strange behavior.

This doesn’t mean you can’t stand things up, but we need to be aware of the risks involved. You may find there is a heavily controlled network segment the application or device can be put on to help reduce the risk. You don’t know until you ask. Keep in mind that security is everyone’s job, not just the team that has the word in their title. They can’t help protect what they don’t know about.