Tag Archives: security testing

Burp Extension – Juice Shop Routes

When it comes to testing the security of our web applications, we often create simple tools to help speed things up. They also provide a consistent way to identify known patterns. For those that haven’t been following, I have been doing a few posts about getting the OWASP Juice Shop application up and running. In this post, I want to introduce a simple Burp extension I created to help with a few of the challenges presented in the OWASP Juice Shop.

The Challenge

The OWASP Juice Shop has multiple challenges built into it to help guide the user in finding web application vulnerabilities. The app is written as a single page app (SPA) and is a little different from your traditional applications. Rather than making a full page request for everything you click, the application just makes API calls. In short, the entire UI is returned on the first load of the application.

There are multiple challenges within the Juice Shop that require the user to access specific URLs that are not available via a menu or link. For example (and I don’t want to directly give away answers to the challenges here), imagine visiting the contact screen. If you click the menu item for contact us, it will update the URL in the address bar to something like /#/contact. So rather than clicking a link to access the contact us page, I could just update that URL directly in the address bar.

Finding the Routes

So how do you find the URLs that are available to access? There are multiple ways of doing this. First, of course, you could try to just fuzz them. This is useful if the URLs are not exposed anywhere for us to find. Fortunately, this Angular application embeds the different routes within the client-side code. It uses a $routeProvider object to do this.

With a little digging, you can find this information in the /dist/juice-shop.min.js file. This is a pretty large file and looking through it can be a little tough. Searching for routeProvider will bring you to the route definitions.

A Burp Extension

As a professional web app tester, I spend a lot of time using Burp Suite as my web proxy. It allows me to easily view and intercept all the traffic my browser sends back and forth to the server. Another cool feature is that it allows building your own extensions to solve just this type of problem. Of course, you probably wouldn’t create an extension for just this one piece or a specific app. Instead, the hope is that it becomes helpful for other applications that use Angular’s route provider object. This way, when I go to test other sites, I may get this information right there in my scanner results.

I built a quick little Burp extension that looks for the $routeProvider object and then, if found, pulls out each route defined. The code can be found at https://github.com/jamesjardine/juiceshoproutes. The code that performs all the tasks is in the BurpExtender.java file.

Once the extension is built into a JAR file it can be added to Burp. Note that this only works with Burp Pro because it uses the passive scanner. To install it, just navigate to the “Extender” tab and click the Add button. Then select the JAR file you created:

[Screenshot: selecting the extension JAR file in Burp’s Extender tab]

Here is a quick rundown of what it does (a simplified sketch of the extraction logic follows the list):

  • Looks to see if the response from the server contains the phrase $routeProvider. If it finds this phrase, it grabs the start offset and finds the end of the object by locating the next ] character.
  • It then loops through that section of the body looking for matches to e.when. Each of these entries basically says: when the URL matches X, load a specific template.
  • Each route that is identified is then displayed in an unordered list.
  • To see the results, visit the Target tab, select your site, and click on the Issues pane to look for the Angular Routes finding.
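To make the logic concrete, here is a simplified sketch in Java of the extraction steps described above. This is not the extension code itself (see BurpExtender.java in the repository for that), and the class name, method name, and regular expression are illustrative; in the actual extension this logic runs inside the passive scan callback and the results are wrapped up as a scan issue.

import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RouteExtractor {

    // Matches .when("/path", ...) entries inside the $routeProvider block
    private static final Pattern WHEN_PATTERN =
            Pattern.compile("\\.when\\(\\s*[\"']([^\"']+)[\"']");

    public static List<String> extractRoutes(String responseBody) {
        List<String> routes = new ArrayList<>();

        // Step 1: locate the $routeProvider definition in the response
        int start = responseBody.indexOf("$routeProvider");
        if (start == -1) {
            return routes; // nothing to report
        }

        // Step 2: bound the search by the next ']' character after the match
        int end = responseBody.indexOf(']', start);
        if (end == -1) {
            end = responseBody.length();
        }
        String block = responseBody.substring(start, end);

        // Step 3: pull out each route defined via e.when("/path", {...})
        Matcher m = WHEN_PATTERN.matcher(block);
        while (m.find()) {
            routes.add(m.group(1));
        }
        return routes;
    }
}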

This is just a quick example of how building custom Burp extensions can help increase your efficiency or provide some additional coverage for known patterns within applications.

————–POTENTIAL SPOILER INFO BELOW ——————–

IF YOU ARE WANTING TO TRY THIS OUT WITHOUT SEEING THE RESPONSE FIRST STOP READING HERE!!!

There is no interaction needed to trigger this scan other than visiting the initial page of the Juice Shop site. The Issues pane will look like:

[Screenshot: Burp Issues pane showing the Angular Routes finding]

If you look at the response, you will see something like the following:

[Screenshot: the response with the $routeProvider definition highlighted]

In the above image you can see the highlighted area showing where the $routeProvider routes are defined. In the following image, you can see the advisory screen, which shows the formatted list of routes that are available:

[Screenshot: the advisory screen listing the discovered routes]

OWASP 2017 Changes

When I talk to people about application security, the most recognized topic is the OWASP Top 10. If you haven’t heard of the Top 10, or need a refresher, you can get the full list at:

https://www.owasp.org/images/7/72/OWASP_Top_10-2017_%28en%29.pdf.pdf

The OWASP Top 10 is on roughly a three-year update cycle. We had the list in 2010, 2013, and now the latest is 2017. You may be wondering why it is 2017 rather than 2016. I think that is a question a lot of people had. In any case, the list made it to final release after the initial draft was rejected. Now that it is here, we can analyze it and see how it affects us and our organizations.

https://www.youtube.com/watch?v=kfDuxwFScOE

What sticks out most to me in this update, compared to previous ones, is the removal of some flaws that, in my experience, are still pretty common. In the past we have seen flaws move up or down in risk level, or get combined, but rarely get removed. In 2017, we saw two items get removed:

  • Cross-site Request Forgery
  • Unvalidated Redirects and Forwards

I find these items interesting because I see them on most of the assessments I do. Let’s take a quick look at them.

Cross-site Request Forgery

CSRF can be a pretty serious flaw depending on its context. It is the ability to force the victim’s browser to make requests to another site they are authenticated to without their knowledge. An example of a higher-risk context is the ability to change the victim’s email address on their profile. If the system doesn’t have two-factor authentication or other safeguards, changing the email address can lead to the ability to request a password reset. In many situations, this can lead to easily taking over the victim’s account.

This is just one example of how CSRF can be used. The good news is that many newer frameworks provide some level of CSRF protection built in, so in many applications it is not as prevalent. However, based on my experience, not everyone is using the latest frameworks. Because of this, I still find this on a lot of the assessments I do.
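For applications stuck on frameworks without built-in protection, the common fix is the synchronizer token pattern: tie a random token to the session, embed it in forms, and verify it on every state-changing request. Below is a minimal sketch using the traditional Java Servlet API; the class name, session attribute, and parameter name are illustrative, not a drop-in library.

import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

public final class CsrfTokens {

    private static final SecureRandom RANDOM = new SecureRandom();
    private static final String SESSION_KEY = "csrfToken"; // illustrative attribute name

    // Called when rendering a form: returns a per-session token to embed in a hidden field
    public static String getOrCreateToken(HttpSession session) {
        String token = (String) session.getAttribute(SESSION_KEY);
        if (token == null) {
            byte[] bytes = new byte[32];
            RANDOM.nextBytes(bytes);
            token = Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
            session.setAttribute(SESSION_KEY, token);
        }
        return token;
    }

    // Called on state-changing requests: compares the submitted token to the session token
    public static boolean isValid(HttpServletRequest request) {
        HttpSession session = request.getSession(false);
        String expected = session == null ? null : (String) session.getAttribute(SESSION_KEY);
        String submitted = request.getParameter("csrfToken"); // illustrative parameter name
        if (expected == null || submitted == null) {
            return false;
        }
        // Constant-time comparison to avoid leaking information about the token
        return MessageDigest.isEqual(expected.getBytes(), submitted.getBytes());
    }
}

The token would be rendered into a hidden form field when the page is built, then checked before processing any POST.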

Unvalidated Redirects and Forwards

Unvalidated redirects are often viewed as a low-risk issue, and in many cases they may be. There are some situations, however, that make unvalidated redirects fairly dangerous. A good example is the redirect often performed by login forms. A common feature of many applications is to redirect the user to a specific resource after logging in. To do this, a parameter in the URL specifies the path to redirect to. If the application allows redirecting to external sites, it is simple to set up a malicious site with the same look and feel as the expected site. If the victim uses your link referencing your malicious site, they may be presented with your fake login page after successfully logging into the real site. The victim may believe they mistyped their password and just log in again without checking the URL, leading to account takeover.
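The typical fix is to refuse anything that is not a well-formed, relative path inside the application. A minimal sketch in Java, assuming a hypothetical helper and fallback target, might look like this:

import java.net.URI;
import java.net.URISyntaxException;

public final class RedirectValidator {

    private static final String DEFAULT_TARGET = "/home"; // illustrative fallback

    // Only allow well-formed, relative paths within the application
    public static String safeRedirectTarget(String requested) {
        if (requested == null || requested.isEmpty()) {
            return DEFAULT_TARGET;
        }
        try {
            URI uri = new URI(requested);
            // Reject absolute URLs (https://evil.example) and scheme-relative URLs (//evil.example)
            if (uri.isAbsolute() || requested.startsWith("//")) {
                return DEFAULT_TARGET;
            }
            // Require a path that stays inside the application
            if (uri.getPath() == null || !uri.getPath().startsWith("/")) {
                return DEFAULT_TARGET;
            }
            return requested;
        } catch (URISyntaxException e) {
            return DEFAULT_TARGET;
        }
    }
}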

We also saw two access control findings get merged into one. This change makes a lot of sense when you look at each item, as both deal with access control issues.

With the removal and merging, the list has brought on three new vulnerabilities:

  • XML External Entities (XXE)
  • Insecure Deserialization
  • Insufficient Logging and Monitoring

XML External Entities (XXE)

XML External Entities is a vulnerability that takes advantage of how XML parsers interpret the supplied XML. In this case, it is possible to reference resources outside of the XML document. A common scenario is the ability to read other files on the web server, such as the /etc/passwd file. This vulnerability may also allow a denial of service attack due to embedding specific entities. It obviously relies on the application parsing XML data. If your application is parsing XML, it is recommended to make sure the parser is ignoring or blocking DTDs. If your parser doesn’t have that option, or you need to allow some DTDs, make sure your input validation limits those to only acceptable ones.
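For Java shops, the hardening usually happens on the parser factory. Here is a minimal sketch along the lines of the common OWASP guidance; the exact feature strings that are honored depend on the underlying parser implementation:

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;

public final class SafeXml {

    public static DocumentBuilder newHardenedBuilder() throws ParserConfigurationException {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();

        // Block DOCTYPE declarations entirely; this is the primary defense against XXE
        dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);

        // If DTDs must be allowed, at least disable external entity resolution
        dbf.setFeature("http://xml.org/sax/features/external-general-entities", false);
        dbf.setFeature("http://xml.org/sax/features/external-parameter-entities", false);

        // Disable XInclude processing and entity expansion as additional hardening
        dbf.setXIncludeAware(false);
        dbf.setExpandEntityReferences(false);

        return dbf.newDocumentBuilder();
    }
}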

Insecure Deserialization

Insecure deserialization occurs when you deserialize data that has not been properly validated. It happens because we assume the serialized data has not been modified. When the data is modified, it could be executed during the deserialization process to perform commands. To help prevent this, make sure you are enforcing strict checks on the objects being deserialized. I do not see this very often in the assessments I do; it just depends on the application, as many do not use much serialization.
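One common strict check in Java is so-called look-ahead deserialization: reject any class that is not explicitly expected before it is instantiated. A minimal sketch is below; the allowed class names are illustrative, and avoiding native serialization for untrusted data entirely is the stronger option:

import java.io.IOException;
import java.io.InputStream;
import java.io.InvalidClassException;
import java.io.ObjectInputStream;
import java.io.ObjectStreamClass;
import java.util.Set;

// Rejects any class that is not explicitly expected before it is instantiated
public class AllowListObjectInputStream extends ObjectInputStream {

    private static final Set<String> ALLOWED = Set.of(
            "com.example.OrderSummary",   // illustrative application type
            "java.util.ArrayList",
            "java.lang.String");

    public AllowListObjectInputStream(InputStream in) throws IOException {
        super(in);
    }

    @Override
    protected Class<?> resolveClass(ObjectStreamClass desc)
            throws IOException, ClassNotFoundException {
        if (!ALLOWED.contains(desc.getName())) {
            throw new InvalidClassException(desc.getName(), "class not allowed for deserialization");
        }
        return super.resolveClass(desc);
    }
}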

Insufficient Logging and Monitoring

When I talk to people and ask them about logging, the first response, or usually the only response, is related to troubleshooting. There is no doubt that troubleshooting is critical for any application. If the application is not running as expected, users may leave, transactions may get lost, or a myriad of other issues may occur. Logging is for much more than just troubleshooting. Proper logging of security related events can help identify an attack while it is occurring as well as help identify what happened after the fact. It can be very difficult to identify what data was accessed or how if there are no logs indicating such information. It is good that we are seeing more attention called to this practice, although it can be a complex one to implement and verify. Don’t forget that once you start logging security events, they must be monitored to take action.
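What that can look like in practice is a dedicated security event log with enough context to investigate later. A rough sketch in Java follows; the logger name and event fields are just one way to structure it:

import java.util.logging.Logger;

public final class SecurityEvents {

    // Logger name is illustrative; route it to wherever your monitoring picks up events
    private static final Logger SECURITY_LOG = Logger.getLogger("security.events");

    // Record enough context (who, what, where) to support later investigation,
    // without logging secrets such as the attempted password
    public static void logFailedLogin(String username, String sourceIp) {
        SECURITY_LOG.warning(String.format(
                "event=login_failure user=%s source_ip=%s", username, sourceIp));
    }

    public static void logAccessDenied(String username, String resource) {
        SECURITY_LOG.warning(String.format(
                "event=access_denied user=%s resource=%s", username, resource));
    }
}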

Wrap Up

Changes to the OWASP Top 10 aren’t anything new. We know they will happen and may require some adjustment to what we are doing internally. While we do see items dropped or added, it just highlights that the Top 10 is merely a starting point for security. Every organization should have its own list of top 10 risks. Don’t limit yourself to these short lists. They are there to help identify the highest risks and address them in a feasible way. Application security doesn’t happen overnight. There has to be a starting point and then a path to mature.

Listen to the podcast on this topic. http://podcast.developsec.com/developsec-podcast-91-owasp-top-10-2017-thoughts

Two-Factor Authentication Considerations

There was a recent article talking about how a very small percentage of Google users actually use 2-factor authentication. You can read the full article at http://www.theregister.co.uk/2018/01/17/no_one_uses_two_factor_authentication/

Why 2-Factor

Two-factor authentication, or multi-factor authentication, is a valuable step in the process of protecting accounts from unauthorized users. Traditionally, we have relied on just a username/password combination. That process had its own weaknesses that many applications have moved to improve. For example, many sites now require “complex” passwords. Of course, complex is up for debate. But we have seen minimum password lengths go up and restrictions on known weak passwords become more common. Each year we see lists of the most common passwords not to use, some being 123456 or Password. I hope no one is using these types of passwords. To be honest, I don’t know of any sites I use that would allow this type of password. Many these days require a mix of character types, including special characters.

https://www.youtube.com/watch?v=YxXebkpSLr8

While the above controls are meant to help reduce the risk of someone just guessing your password, there are other controls that help limit brute forcing techniques. Many accounts offer account lockout after X number of invalid attempts. There are other controls that we also see implemented around protecting the username/password logic. None of these controls, however, protect against a user reusing passwords on another site that may be compromised. They also do not protect against a user falling for a social engineering attack that tricks them into sharing their password. To help combat this, many sites will implement a second factor beyond username/password.

The idea of the second factor is that even if you have the username and password, you will not have this other piece of information. In most cases, it is a value that changes every 30 to 60 seconds and is delivered over a separate channel. For example, the token may be sent via SMS, a voice call, or created through a phone application like the Google Authenticator app. So even if the attacker is able to get your password, via a breach, brute force, or just lucky guessing, in theory they would not have access to that second factor.
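For the curious, here is a rough sketch of how a time-based one-time password (the kind Google Authenticator displays) is typically derived from a shared secret, following the RFC 6238 approach with 30-second steps. In practice you would rely on a vetted library rather than rolling your own:

import java.nio.ByteBuffer;
import java.security.InvalidKeyException;
import java.security.NoSuchAlgorithmException;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public final class Totp {

    // Generates a 6-digit, time-based one-time password (RFC 6238 style, 30-second steps)
    public static String generate(byte[] sharedSecret, long unixTimeSeconds)
            throws NoSuchAlgorithmException, InvalidKeyException {
        long counter = unixTimeSeconds / 30; // the code changes every time step

        // HMAC-SHA1 over the big-endian counter, keyed with the shared secret
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(sharedSecret, "HmacSHA1"));
        byte[] hash = mac.doFinal(ByteBuffer.allocate(8).putLong(counter).array());

        // Dynamic truncation: take 4 bytes at an offset derived from the last nibble
        int offset = hash[hash.length - 1] & 0x0F;
        int binary = ((hash[offset] & 0x7F) << 24)
                | ((hash[offset + 1] & 0xFF) << 16)
                | ((hash[offset + 2] & 0xFF) << 8)
                | (hash[offset + 3] & 0xFF);

        return String.format("%06d", binary % 1_000_000);
    }
}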

Why Are People Not Using It?

So why do people not enable the second factor on their Google accounts? Unfortunately, the presentation didn’t appear to explain that, which makes sense since it is difficult to know why people do or do not do certain things. I think there may be a few reasons for it that we will briefly touch on.

First, I think many people are simply not aware that a second factor is available. To be fair, it is sort of buried down in the settings and may be difficult to find if you are not really looking for it. If it is not front and center, then there is a much smaller chance people will go seeking it out. To add to the issue, many people don’t really understand what 2-factor authentication means or how it helps them. Sure, in security we get it, but that doesn’t mean everyone else does. How do we make it more prominent that this is a positive security feature? Many users will already be aware of 2-factor if they use online banking, as most of those sites have started enforcing it.

Many people think that two-factor authentication is a burden or that it will slow their access down. This is usually not the case unless the application has implemented it poorly. Many sites will allow you to remember your computer so you don’t need to enter the second factor every time you access the site. However, it will be required if you access the site from a different computer.

To complicate things, other applications may not support signing in with 2 factors, like your email client. In these cases, you have to generate an app password, which can be very confusing to many users, especially those that are not technically savvy.

There is a chance that users don’t think they need to protect their email accounts because they don’t see them as sensitive. If you just use email to communicate with friends and receive junk mail, what could be so bad, right? Most people forget that things like password resets are performed using an email account. Having control of an email account provides control over a lot of other accounts. While it may seem small, email is an important function to protect.

If you are using Gmail, I recommend configuring 2-factor authentication. The following video walks through setting it up using SMS (although there are other options as well):

[Video: configuring Google 2-factor authentication using SMS]

If you are developing applications, I recommend looking into providing the option of 2-factor authentication. When you do, make sure that you are promoting its use in a positive way. If you already have 2-factor in your application, can you run a report to determine what percentage of users are actually using it? If that number is low, what steps can you take to increase it?

Don’t assume that any application is not worthy of the extra security. Many applications are already providing 2-factor and that number will just increase. While we still have the password, we will always be looking for ways to add more protection. When implemented properly, it is simple for the end user, but effective in increasing security. If your user base is not taking advantage of the option, take the time to assess why that is and how it can be improved.

As I was writing this up, I ran into an interesting situation with 2-factor that sparked some more thoughts. When looking to support 2-factor authentication without using SMS, give careful consideration to the applications you choose to support. On the Apple App Store alone there are over 200 different authenticator apps available. Some are interchangeable while others are not. This can be another barrier to users choosing to enable 2-factor authentication.

Tinder Mobile Take-Aways

While browsing through the news I noticed an article talking about the Tinder mobile app and a privacy concern. You can read the article at https://www.consumerreports.org/privacy/tinder-app-security-flaws-put-users-privacy-at-risk/. To summarize, the issue is that the mobile application does not use HTTPS to transmit the photos you are shown. This means that anyone on the same connection can see the traffic and, ultimately, the photos you are presented. From my understanding, it doesn’t appear the potential attacker can tell who the user viewing these photos is, as the rest of the traffic properly uses HTTPS.

We have discussed the move to all HTTPS multiple times on this blog, and we are seeing a lot of sites make the switch. With web applications it is easy to see if a site is using HTTPS thanks to the indicators near the address bar. Of course, these indicators are often confusing to most users, but at least we have the ability to see the status. With a mobile application it is much more difficult to tell whether data is transmitted using HTTPS because there is no visible indicator. Instead, one needs to view the raw traffic or use a web proxy to see how the data is transmitted. This can be misleading, because the assumption is that the data is protected simply because it is hidden under more layers.

In this instance, the ability to see these photos may not be considered that sensitive by many. Since anyone can create an account and see the photos, they aren’t exactly a secret. People have opted to post their images for others to find on the network. Of course, the level of sensitivity is in the eye of the beholder these days. Another issue that is potentially possible in this situation is that the attacker could manipulate that image traffic to show a different image, leading the end user to see something other than what was expected. How useful that would be at any kind of large scale could be called into question.

The take-away here is that when we are building applications we must take care in understanding how we are transmitting all of our data to determine what needs to be protected. As I mentioned, there is already a push to make everything HTTPS all the time. If you have decided not to use HTTPS for some of your connections, have you documented the reasons? What does your threat model tell you about the risks with that data and its communication? How does that risk line up with your acceptance procedures?

Another interesting tidbit came out of the article mentioned above. In addition to seeing the actual photos, the researchers found it was possible to identify whether the end user liked or disliked a photo by examining the network traffic. The interesting part is that those decisions were encrypted when transmitted. The key point is that the traffic for each decision was a set size, and the sizes were different for like and dislike. By viewing the traffic after a photo is displayed, it is possible to determine which ones were liked based on the size of the requests. In this case, it still doesn’t identify the end user that is using the application.

We don’t typically spend a lot of time analyzing the size of the requests we send in case someone may try to determine what actions we are taking over an encrypted channel. Most of the time these attacks are not practical, or the level of effort is way above what is realistic. The easy solution here would be to make sure all of the traffic is encrypted, so an observer could not tell which images were liked or disliked. Maybe it would still be possible to see the difference in decisions, but with no way to tie them to specific images. The other option is to pad the requests so that they are all the same size. This would be for highly sensitive systems, as the complexity may not be worth the benefit.
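As a sketch of the padding idea, rounding every payload up to a fixed block size removes the length difference an observer could key on. This is illustrative only; a real scheme also has to record the original length so the receiver can strip the padding:

import java.util.Arrays;

public final class PayloadPadding {

    // Pads a payload up to the next multiple of blockSize so that "like" and "dislike"
    // requests are indistinguishable by length alone (padding scheme is illustrative)
    public static byte[] padToBlock(byte[] payload, int blockSize) {
        int paddedLength = ((payload.length / blockSize) + 1) * blockSize;
        return Arrays.copyOf(payload, paddedLength); // zero-filled padding
    }
}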

Of course, all of this is based on the attacker being on the same network as the end user so they can intercept or view the traffic in the first place. In a public place, it might just be easier to hover over your shoulder and watch you use the app than to intercept the traffic and guess at who is using it.

Both of these topics are good conversation starters within your organization. They help us realize that even one request that doesn’t use HTTPS may be observed and could raise an issue. They also help us see that sometimes even encrypted data can be inferred, but that doesn’t mean it is a high risk. Each situation is different and should be properly analyzed to determine the risk it creates for the organization.

Equifax Take-aways

By now, you must have heard about the Equifax breach that may have affected up to 143 million people’s records. At this point, I don’t think they can confirm exactly how many records were actually compromised, which leads to going with the larger number just to be safe. While many are quick to jump to conclusions and attempt to Monday morning quarterback what they did or didn’t do to get breached, I like to focus on what we can learn for our own organizations. There are a few topics I want to discuss that hopefully will be useful within your organization.

Patching

It appears to be pretty clear that the avenue of attack was a Struts patch that was missing on the server. The patch was apparently released a few months prior to the attack, or at least prior to the acknowledgement of the attack. On the surface, patching appears to be a pretty easy task. A patch is released, you apply it.

Simple, right?

Patching is actually much more complex than that. It may be that simple when you have a single system to maintain with very few software packages. Unfortunately, that is not the reality for many organizations. Many are dealing with hundreds or even thousands of systems to keep fully patched. This is a pretty big task, even if there were no other variables. Automate it, they say. Sure, automation can be done, and needs to be done. How else could anyone patch that many systems in a reasonable time frame?

There are other factors to consider. First, let’s consider that there are many different types of patches. You have patches for the operating system, patches for applications, patches for frameworks, even patches for client-side libraries. Does your automation cover all of these sources? Some software has automatic update capabilities and will update on its own. Other software requires that you explicitly go out, download the patch, and apply it.

Second, you have custom-written applications with millions of lines of code pulling in multiple frameworks and packages to make development easier. It would be foolish to apply a patch without testing it first. This becomes more of a challenge with application patches because the entire application needs to be retested. This is more than a test to make sure the computer still boots. It needs to confirm that all the functionality, especially the functionality around the patched component, still works properly. The testing alone can be a time-consuming piece. Add on to that the possibility that the patch makes other changes that break something. How badly does it break things? How much custom code needs to be rewritten to work correctly again? Does that component have other components that are dependent on that version? Does this end up affecting other components?

Finally, who is tasked with patching the systems? Is this defined within the business? Are the same people that apply OS patches to the server responsible for the application component patches? How do they track those types of patches? Do they need the go-ahead from the application team that the patch is OK to implement?

As you can see, there are a lot of factors that go into applying what may appear to be a simple patch. What it highlights to me is the importance of understanding what components our applications use, how they interact with each other, and how patches are applied when they are made available. Worst case scenario, we didn’t even know a patch was released.

Patching, however, is just one control for helping protect our systems, similar to how input validation is one control to help with injection attacks. We shouldn’t be relying on it alone. The Equifax breach shows this well: we must have other controls in place in the event one control breaks down.

Encryption

I hear a lot of people saying the data should have been encrypted. That is an easy statement to make, but without more details on how the data was actually accessed, it is not very helpful. Hopefully, your organization has a data classification policy, and hopefully that policy describes how each class of data should be protected. If you have not seen this policy, ask for it.

Now that we know some data needs to be encrypted, what is the right method to use? Should we use disk encryption or column-level encryption? Should we use tokenization? They each have their pros and cons. Maybe the answer is to implement all of them, just to be safe, but how might that affect your ability to have a high-performing application?

You may decide to implement disk encryption for your database. That is a good step in the event that someone is able to steal the actual database files. It doesn’t help much if the application has a vulnerability that allows access to the data and the attacker can just enumerate through it. The same can be true of column-level encryption: oftentimes application flaws can bypass the encryption if it is incorrectly implemented. I guess at the very least, you get to say the data was encrypted.

The point with encryption is to make sure you know what you are doing and how you are implementing it. Which attack vectors will it protect against, and which ones may still be open? If you are going to take the time to implement it, it is important to make the best use of it.

Auditing and Logging

Auditing and logging are important parts of the security of an application. They help us see and act upon events that may be malicious. How do you get visibility into 3rd party components, like Struts, to see what they are doing? Are you relying on system event logs if the component throws an exception? Within our own applications we can use logging to identify queries run, data accessed, authorization failures, and so on. When a system gets compromised, application logging alone may not be enough. It may be a combination of system and application events that helps identify an attack as it is happening or after the fact. This is a great reminder that logging mechanisms can cross boundaries and this needs to be reviewed. Take a moment to look at how your applications and your web server are configured to identify potential malicious attacks. Consider different attack scenarios and see how those may get logged and if/when someone might see them.

Risk Management

Businesses run on the concept of taking risks. Sometimes this works in favor of the organization, sometimes not. In order to make better decisions, they must understand the risks they face. In a situation like this, we know there may be a patch available for a platform. The patch is critical since the vulnerability allows remote code execution. But what was known about the risk? What applications on that server were affected? What type of data did those applications maintain? Where does that application fit into our business model? Oftentimes, we don’t look at the real details of a vulnerability or risk; rather, we focus on the numbers. A patch for a system with no records or access is very different than one that relates to all of your sensitive customer data.

Don’t mistake this as an alternative to patch management. It is, however, a reality that in the midst of doing business, decisions will be made and not all of them will be popular. When working in your organization, think about the information you provide to the decision-making process. Is it sufficient? Does it tell the whole story?

Wrap Up

Companies are always at risk of being breached. As we see new breaches appear in the news we need to take a little time to skip the hype and personal opinions, and take a look at what it means to our programs. Look for the facts of what happened, how decisions may have been made, and the effect those had on the organization. Then apply that to your organization. Maybe you learn a new perspective on how a vulnerability can be used. Maybe you see a control that was bypassed that you also use and you want to review how your processes work. In any case, there are lessons we can learn from any situation. Take those and see how they can be used to help your processes and procedures to provide security in your organization.

JavaScript in an HREF or SRC Attribute

The anchor (<a>) HTML tag is commonly used to provide a clickable link for a user to navigate to another page. Did you know it is also possible to set the HREF attribute to execute JavaScript? A common technique is to use the onclick event of the anchor tag to execute a JavaScript method when the user clicks the link. However, to stop the browser from actually redirecting, the HREF can be set to javascript:void(0);. This cancels the HREF functionality and allows the JavaScript from the onclick to execute as expected.
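For example, an anchor along these lines (reconstructed here, since the original post showed a similar snippet; performAction() is just a placeholder):

<a href="javascript:void(0);" onclick="performAction();">Click Me</a>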

In the above example, notice that the HREF is set with a value starting with “javascript:”. This identifier tells the browser to execute the code following that prefix. For those that are security savvy, you might be thinking about cross-site scripting when you hear about executing JavaScript within the browser. For those of you that are new to security, cross-site scripting refers to the ability for an attacker to execute unintended JavaScript in the context of your application (https://www.owasp.org/index.php/Cross-site_Scripting_(XSS)).

I want to walk through a simple scenario of how this could be abused. In this scenario, the application tracks the page the user came from to set up where the Back button will redirect to. Imagine you have a list page that allows you to view details of a specific item. When you click an item it takes you to that item’s page and passes a BackUrl in the query string. So the link may look like:

https://developsec.com/item.php?backUrl=/items.php

On the page, there is a hyperlink created that sets the HREF to the backUrl property, like below:

<a href="<?php echo $_GET["backUrl"]; ?>">Back</a>

When the page executes as expected you should get an output like this:

<a href="/items.php">Back</a>

There is a big problem though. The application is not performing any type of output encoding to protect against cross-site scripting. If we instead pass in backUrl="%20onclick="alert(10); we will get the following output:

<a href="" onclick="alert(10);">Back</a>

In the instance above, we have successfully inserted an onclick event by breaking out of the HREF attribute. The onclick="alert(10); portion is the malicious string we added. When this link is clicked it will pop up an alert box with the number 10.

To remedy this, we could (and typically do) use output encoding to block the escape from the HREF attribute. For example, if we escape the double quotes (" becomes &quot;) then we cannot get out of the HREF attribute. We can do this (in PHP as an example) using htmlentities() like this:

<a href="<?php echo htmlentities($_GET["backUrl"], ENT_QUOTES); ?>">Back</a>

When the value is rendered, the quotes will be escaped like the following:

<a href="&quot; onclick=&quot;alert(10);">Back</a>

Notice in this example that the HREF now contains the entire input, rather than a separate onclick attribute being added. When the user clicks the link it will try to navigate to https://www.developsec.com/" onclick="alert(10); rather than execute the JavaScript.

But Wait… JavaScript

It looks like we have solved the XSS problem, but there is a piece still missing. Remember at the beginning of the post how we mentioned the HREF supports the javascript: prefix? That will allow us to bypass the encoding we have performed. This is because, when using the javascript: prefix, we are not trying to break out of the HREF attribute. We don’t need to break out of the double quotes to create another attribute. This time we will set backUrl=javascript:alert(11); and we can see how it looks in the response:

<a href="javascript:alert(11);">Back</a>

When the user clicks on the link, the alert will trigger and display on the page. We have successfully bypassed the XSS protection initially put in place.

Mitigating the Issue

There are a few steps we can take to mitigate this issue. Each has its pros and cons, and many can be used in conjunction with each other. Pick the options that work best for your environment.

  • URL Encoding – Since the HREF is meant to be a URL, you could perform URL encoding. URL encoding will render the javascript: payload benign in the above instances because the colon (:) gets encoded. You should be using URL encoding for URLs anyway, right?
  • Implement Content Security Policy (CSP) – CSP can help limit the ability for inline scripts to be executed. In this case, it is an inline script, so something as simple as Content-Security-Policy: default-src 'self' could be sufficient. Of course, implementing CSP requires research and great care to get it right for your application.
  • Validate the URL – It is a good idea to validate that the URL used is well formed and pointing to a relative path. If the system is unable to parse the URL then it should not be used and a default back URL can be substituted.
  • URL White Listing – Creating a white list of valid URLs for the back link can be effective at limiting what input is used by the end user. This can cut down on the values that are actually used, blocking any malicious scripts.
  • Remove javascript: – This really isn’t recommended as different encodings can make it difficult to effectively remove the string. The other techniques listed above are much more effective.

The above list is not exhaustive, but does give an idea of ways to help reduce the risk of JavaScript within the HREF attribute of a hyper link.

Iframe SRC

It is important to note that this situation also applies to the IFRAME SRC attribute. It is possible to set the SRC of an IFRAME using the javascript: notation. In doing so, the JavaScript executes when the page is loaded.

Wrap Up

When developing applications, make sure you take this use case into consideration if you are taking URLs from user supplied input and setting that in an anchor tag or IFrame SRC.

If you are responsible for testing applications, take note when you identify URLs in the parameters. Investigate where that data is used. If you see it is used in an anchor tag, look to see if it is possible to insert JavaScript in this manner.

For those performing static analysis or code review, look for areas where the HREF or SRC attributes are set with untrusted data and make sure proper encoding has been applied. This is less of a concern if the base path of the URL has been hard-coded and the untrusted input only makes up parameters of the URL. These should still be properly encoded.

Understanding Your Application Platform

Building applications today involves some pretty impressive platforms. These platforms have so much capability built in that many of the most common tasks are easily accomplished through simple method calls. As developers, we rely on these frameworks to provide a certain level of functionality, much of which we may never even use.

When it comes to security, the platform can be a love/hate relationship. On the one hand, developers may have little control over how the platform handles certain tasks. On the other, the platform may provide excellent security controls. As these platforms mature, we see a lot of new security features enabled by default. Many view engines have cross-site scripting protections built in by default. Many systems use an ORM to help reduce SQL injection vulnerabilities. The problem we often run into is that we don’t really know what our platform does and does not provide.

A question was posed about which application platform is the most secure, or which one I would recommend. The problem with that question is that the answer really is “it depends.” The frameworks are not all created equal. Some have better XSS preventions. Others may have default CSRF prevention. What a framework does or doesn’t have can also change next month; they are always being updated. Rather than pick the most secure platform, I recommend taking the time to understand your platform of choice.

Does it matter if you use PHP, Java, .Net, Python, or Ruby? They all have some built-in features and they all have their downfalls. So rather than trying to swap platforms every time a new one gets better features, spend some time understanding the platform you have in front of you. When you understand the risks that you face, you can then determine how those line up with your platform. Do you output user input to a web browser? If so, cross-site scripting is a concern. Determine how your platform handles that. It may be that the platform auto-encodes that data for you. The encoding may only happen in certain contexts. It may be that the platform doesn’t provide any encoding, but rather leaves that up to you.

While secure-by-default is preferable, since it reduces the risk of human oversight, applications can still be very secure without it. This is where that understanding comes into play. If I understand my platform and know that it doesn’t encode for me, then I must make the effort to provide that protection myself. This may even include creating your own function or library that is used enterprise-wide to help solve the problem. Unfortunately, when you don’t understand your platform, you never realize that this is a step you must take. Then it is overlooked and you are vulnerable.

I am also seeing more platforms starting to provide security guidelines or checklists to help developers with secure implementation. They may know of areas where the platform doesn’t provide a protection, so they show how to compensate. Maybe something is not enabled by default, but they recommend and show how to enable it. The more content like this that is produced, the better we will understand how to securely create applications.

Whatever platform you use, understanding it will make the most difference. If the platform doesn’t have good documentation, push for it. Ask around or even do the analysis yourself to understand how security works in your situation.

Using the AWS disruption to your advantage

By now you have heard of the Amazon Web Services issues that plagued many websites a few days ago. I want to talk about one key part of the issue that often gets overlooked. If you read through their message describing the service disruption (https://aws.amazon.com/message/41926/) you will notice a section where they discuss some changes to the tools they use to manage their systems.

So let’s take a step back for a moment. Amazon attributed the service disruption to what was basically a simple mistake when executing a command. Although the team was following an established playbook, something was entered incorrectly, leading to the disruption. I don’t want to get into all the details about the disruption; rather, let’s fast-forward to considering the tools that are used.

Whether we are developing applications for external use or tools used to manage our systems, we don’t always think about all the possible threat scenarios. Sure, we think about some of the common threats and put in some simple safeguards, but what about some of those more detailed issues?

In this case, the tools allowed manipulating subsystems in ways that shouldn’t have been possible. They allowed shutting systems down too quickly, which caused some issues. This is no different than many of the tools that we write. We understand the task and create something to make it happen. But then our systems get more complicated. They pull in other systems which have access to yet other systems. A change to something directly accessible has an effect on a system three hops away.

Now, the chances of this edge case happening are probably pretty slim. The chances that this case was missed during design are pretty good, especially if you are not regularly reassessing your systems. What may not have been possible two years ago when the system was created may be possible today.

Even with threat modeling and secure design, things change. Even if the tool isn’t being updated, the systems it controls are. We must have the ability to learn from these situations and identify how we can apply them to our own environments. How many tools have you created that can control your systems, where a simple wrong click or incorrect parameter can create a huge headache? Do you understand your systems well enough to know if doing something too fast or too slow can cause a problem? What effect does shutting one machine down have on other machines? Are there warning messages for actions that may have adverse effects? Are there safeguards in place for someone doing something out of the norm?

We are all busy, and going back through working systems isn’t a high priority. These tools and systems may require the addition of new features, which makes for a perfect time to reassess them. Like everything in security, peripheral vision is very important. When working on one piece, it is always good to peek at the other code around it. Take the time to verify that everything is up to date and as expected. Note any changes and determine if any enhancements or redesign are in order. Most of the time these are edge case scenarios, but they can have a big impact if they occur.

Jardine Software helps companies get more value from their application security programs. Let’s talk about how we can help you.

James Jardine is the CEO and Principal Consultant at Jardine Software Inc. He has over 15 years of combined development and security experience. If you are interested in learning more about Jardine Software, you can reach him at james@jardinesoftware.com or @jardinesoftware on twitter.

Introducing our Slack channel

It is a new year and time for some new ways for all of us to communicate. We appreciate all that have read the posts and listened to the podcast. Both of these will continue to move forward in 2017 with some new material on the way.

We are happy to announce we have started a Slack channel. You can find it at developsec.slack.com. The blog and podcasts have been great in providing information in a read-only manner. Slack is an opportunity to open up more conversation and create more dialog. A chance for us all to share stories and work out problems.

The channel is dedicated to application security topics. This means web, mobile, IoT, and whatever else is running applications. The goal is to provide a mechanism to bring application security into our everyday life. If you are interested in being a part of the DevelopSec Slack group, just drop me a line at james@jardinesoftware.com. I will send you out an invite.

DevelopSec encourages developers, business analysts, QA, and other application team members to take part in this great new opportunity.

Here is to 2017. A new year with new opportunities. As always, feel free to reach out to discuss AppSec.

Insulin Pump Vulnerability – Take-aways

It was recently announced that there were a few vulnerabilities found in some insulin pumps that could allow a remote attacker to cause the pump to dispense more insulin than expected. There is a great write-up of the situation here. When I say remote attack, keep in mind that in this scenario it is someone within close proximity to the device. This is not an attack that can be performed via the Internet.

This situation creates an excellent learning opportunity for anyone that is creating devices, or that has devices already on the market. Let’s walk through a few key points from the article.

The Issue

The issue at hand was that the device didn’t perform strong authentication and that the communication between the remote and the device was not encrypted. Keep in mind that this device was rolled out back in 2008. That’s not to say that encryption wasn’t an option 8 years ago, but for these types of devices it may not have been as mainstream. We talk a lot about encryption today. Whether it is for IoT devices, mobile applications, or web applications, the message is the same: encrypt the communication channel. This doesn’t mean that encryption solves every problem, but it is a control that can be very effective in certain situations.

This instance is not the only one we have seen with unencrypted communications that allowed remote execution. It wasn’t long ago that there were computer mice and keyboards that also had the same type of problem. It also won’t be the last time we see this.

Take the opportunity to look at what you are creating and start asking questions about communication encryption and authentication. We are past the time where we can assume the protocol can’t be reversed or that it requires special equipment. If you have devices in place currently, go back and see what you are doing. Is it secure? Could it be better? Are there changes we need to make? If you are creating new devices, make sure you are thinking about these issues. Communication is high on the list of security vulnerabilities for all devices. Make sure it is considered.

The Hype, or Lack of

I didn’t see a lot of extra hype over the disclosure of this issue. Oftentimes in security we have a tendency to exaggerate things a bit. I didn’t see that here, and I like it. At the beginning of this post I mentioned that the attack is remote, but still requires close physical proximity. Is it an issue? Yes. Is it a high-priority issue? That depends on the functionality achieved. As a matter of fact, the initial post about the vulnerabilities states the researchers don’t believe it is cause for panic and that the risk of wide-scale exploitation is relatively low. Given the circumstances, I would agree. They even go on to say they would still use this device for their own children. I believe this is important because it highlights that there is risk in everything we do, and it is our responsibility to understand it and choose whether to accept it.

The Response

Johnson and Johnson released an advisory to their customers alerting them of the issues. In addition, they provided potential options for customers who were concerned about the vulnerabilities. You might expect the response to downplay the risk, but I didn’t get that feeling. They list the probability as extremely low. I don’t think there is dishonesty here. The statement is clear and understandable. The key component is the offering of some mitigations. While these may cause some inconvenience, it is positive to know there are options available to help further reduce the risk.

Attitude and Communication

It is tough to tell just by reading articles about a situation, but it feels like the attitudes were positive throughout the process. Maybe I am way off, but let’s hope that is true. As an organization, it is important to be open to input from 3rd parties when it comes to the security of our devices. In many cases, the information is being provided to help, not to be harmful. If you are the one receiving the report, take the time to actually read and understand it before jumping to conclusions.

As a security tester, it is important for the first contact to be a positive one. This can be difficult if you have had bad experiences in the past, but don’t judge everyone based on previous experiences. When the communication starts on a positive note, the chances are better it will continue that way. Starting off with a negative attitude can bring a disclosure to a screeching halt.

Conclusion

We are bound to miss things when it comes to security. In fact, you may have made a decision that years down the road turns out to be incorrect. Maybe it wasn’t incorrect at the time, but technology changes quickly. We can’t just write off issues; we must understand them. Understand the risk and determine the proper course of action. Being prepared for the unknown can be difficult, but keeping a level head and staying honest will make these types of engagements easier to handle.
