DevelopSec

James Jardine

May 29, 2020 by James Jardine

Proxying localhost on Firefox

When you think of application security testing, one of the most common tools is a web proxy. Whether it is Burp Suite from PortSwigger, ZAP from OWASP, Fiddler, or Charles Proxy, a proxy is heavily used. From time to time, you may find yourself testing a locally running application. Outside of some test labs or local development, this isn’t very common. But if you do find yourself testing a site on localhost, you may run into a roadblock in your browser. If you are using a recent version of Firefox, go into the preferences screen and click the Network Settings “Settings” button, and you might notice the following:

[Image: Firefox Network Settings dialog showing that localhost is blocked from being proxied]

When configuring your proxy, there is a box to list exceptions, hosts whose traffic should not be proxied. In the old days, localhost used to be pre-populated in this box. However, that is not the case anymore. Instead, localhost is explicitly blocked from being proxied. You can see this in the highlighted area of the image above.

So how do you proxy your localhost application? There are a few ways to handle this.

You could set up your hosts file to give a different name to your local website. In this case, you would access the application using your defined hostname rather than “localhost”. A minimal sketch is shown below.
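As a quick sketch, an entry like the following in your hosts file (/etc/hosts on Linux/macOS, C:\Windows\System32\drivers\etc\hosts on Windows) points a custom name at the loopback address; the name myapp.local is just an example:

127.0.0.1    myapp.local

You would then browse to http://myapp.local (plus whatever port your application uses), and since the hostname is no longer “localhost”, the proxy restriction does not apply.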

Another way to get around this is to open about:config in Firefox and set the network.proxy.allow_hijacking_localhost property to true, as shown in the following image:

[Image: about:config showing network.proxy.allow_hijacking_localhost set to true]
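If you prefer to script this rather than clicking through about:config, the same preference can be set in a user.js file in your Firefox profile directory; a minimal sketch:

user_pref("network.proxy.allow_hijacking_localhost", true);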

Once this change is made, the network settings screen no longer blocks localhost from being proxied. The following image shows that the statement is gone:

[Image: Firefox Network Settings dialog with the localhost restriction no longer displayed]

Filed Under: General Tagged With: application security, AppSec, pen test, pen testing, pentesting, qa, secure development, security testing

February 10, 2020 by James Jardine

Chrome is making some changes... are you ready?

Last year, Chrome announced that it was making a change to default cookies to SameSite=Lax if no SameSite attribute is explicitly set. I wrote about this change last year (https://www.jardinesoftware.net/2019/10/28/samesite-by-default-in-2020/). This change could have an impact on some sites, so it is important that you test for it. The changes are supposed to start rolling out in February (this month). The linked post shows how to force these defaults in both Firefox and Chrome.
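If your site depends on cookies being sent in cross-site requests, the safest move is to declare the attribute explicitly rather than relying on the new default. A minimal sketch (the cookie names and values are placeholders):

Set-Cookie: session=abc123; Secure; HttpOnly; SameSite=Lax
Set-Cookie: sso_token=xyz789; Secure; SameSite=None

Note that SameSite=None is only honored when the cookie is also marked Secure.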

In addition to this, Chrome has announced that it is going to start blocking mixed-content downloads (https://blog.chromium.org/2020/02/protecting-users-from-insecure.html). In this case, they are starting in Chrome 83 (June 2020) with blocking executable file downloads (.exe, .apk) that are over HTTP but requested from an HTTPS site.

The issue at hand is that users are misled into thinking the download is secure because the requesting page indicates it is over HTTPS. There isn’t a way for them to clearly see that the request itself is insecure. The linked Chrome blog describes a timeline of how they will slowly block all mixed-content types.

For many sites this might not be a huge concern, but this is a good time to check your sites to determine whether you have any mixed content and how to mitigate it.

You can identify mixed content on your site by using the JavaScript Console, found under the Developer Tools in your browser. It will display a warning when it identifies mixed content. There may also be scanners you can use that will crawl your site looking for mixed content.
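As a quick illustration of what the console flags, here is a sketch of a page served over HTTPS that pulls a script over plain HTTP (the URLs are placeholders):

<html>
	<head>
		<!-- Page served from https://example.com, but this script loads over HTTP -->
		<script src="http://example.com/app.js"></script>
	</head>
	<body>
	</body>
</html>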

To help mitigate this from a high level, you could implement CSP to upgrade insecure requests:

Content-Security-Policy: upgrade-insecure-requests

This can help by upgrading insecure requests, but it is not supported in all browsers. The following post goes into a lot of detail on mixed content and some ways to resolve it: https://developers.google.com/web/fundamentals/security/prevent-mixed-content/fixing-mixed-content
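If you are unable to set response headers, the same directive can also be delivered in markup; a sketch:

<meta http-equiv="Content-Security-Policy" content="upgrade-insecure-requests">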

The increased protections in browsers can help reduce the overall threat, but always remember that it is the developer’s responsibility to implement the proper design and protections. Not all browsers are the same, and you can’t rely on the browser to provide all the protections.

Filed Under: General Tagged With: application security, AppSec, chrome, developer, secure code, secure development, secure testing

October 8, 2019 by James Jardine

Investing in People for Better Application Security

Application security, like any facet of security, is a complex challenge with a mountain of solutions. Of course, no one solution is complete. Even combining multiple solutions will never get 100% coverage.

The push today is around DevSecOps, or pushing left in the SDLC. I am also seeing more solutions recommending pushing right in the SDLC. I feel like we are stuck at a crossroad where the arrow points both ways.

The good news is that none of these recommendations are wrong. We do need to push left in the SDLC. The sooner we address issues, the better off we are. Not introducing a vulnerability in the first place is the best-case scenario. Unfortunately, we also know that is an unrealistic assumption. So this brings us to pushing right. Here, we are looking to quickly identify issues after they are introduced and, in some cases, actively block attacks. Of course, let’s not leave out that automation is our key to scalable solutions as we build and deploy our applications.

Much of what we focus on is bringing in some form of tool. Tools are great. They take the mundane, repetitive work off of our plate. Unfortunately, they can’t do it all. In fact, many tools need someone who has at least some knowledge of the subject. This is where the people come in.

Over the years, I have worked with many companies as both a developer and an application security expert. I have seen many organizations put a lot of effort into building an application security team focused on managing everything related to application security. Often, this team is separate from the application development teams, which can create a lot of friction. With the main focus on the application security team, many organizations don’t put as much effort into the actual application development teams.

How does your organization prepare developers, business analysts, project managers and software testers to create secure applications?

In my experience, the following are some common responses. Please feel free to share with me your answers.

  • The organization provides computer based training (CBT) modules for the development teams to watch.
  • The organization sends a few developers to a conference or specialized training course and expects them to brief everyone when they return.
  • The organization brings in an instructor to give an in-house 2-3 day training class on secure development (once a year).
  • The organization uses its security personnel to provide secure development training to the developers.
  • The organization provides SAST or DAST tools, but the results are reviewed by the security team.
  • The organization has updated the SDLC to include security checkpoints, but no training is provided to the development teams.
  • The organization doesn’t provide any training on security for the development teams.

By no means is this an exhaustive list, just some of the more common scenarios I have seen. To be fair, many of these responses have a varying range of success across organizations. We will look at some of the pitfalls of many of these approaches in future articles.

The most important point I want to make is that the development teams are the closest you can get to the actual building of the application. The business analysts are helping flesh out requirements and design. The developers are writing the actual code, dealing with different languages and frameworks. The QA team, or software testers, are on the front line of testing the application to ensure that it works as expected. These groups know the application inside and out. Helping them understand the types of risk they face and the techniques to avoid them is crucial to any secure application development program.

My goal is not, let me repeat, NOT, to turn your application development teams into “security” people. I see this concept floating around on social media and I am not a fan. Why? First and foremost, each of you has your own identity, your own role. If you are a developer, you are a developer, not a security person. If you are a software tester, don’t try to be a security person. In these roles, you have a primary job, and security is a part of it. It is not the defining attribute of your tasks.

Instead, the goal is to make you aware of how security fits into your role. As a software tester, the historical goal focused on ensuring that the application functions as expected, validating use cases. When we start to think about security within our role, we start to look at abuse cases. There becomes a desire to ensure that the application doesn’t act in certain ways. Sometimes this is very easy, and other times it may be beyond our capabilities.

Take the software tester again. The goal is not to turn you into a penetration tester. That role requires much more in-depth knowledge and, honestly, should be reserved for finding the most complex vulnerabilities. This doesn’t mean that you can’t do simple tests for Direct Object Reference by changing a simple ID in the URL, or that you can’t learn some simple checks for SQL Injection or Cross-site Scripting. It does mean you should be able to understand what the common vulnerabilities are and how to do some simple tests for them, as sketched below.
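As a hedged illustration of that kind of check (the endpoint and IDs are hypothetical), a tester who can view their own record can try a neighboring ID while logged in as the same user:

GET /api/orders/1001   -> 200 OK, your own order (expected)
GET /api/orders/1002   -> should return 403 or 404, not another user's order

If the second request returns someone else’s data, you have likely found a Direct Object Reference issue worth escalating.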

If you invest in your people correctly, you will start to realize how quickly application security becomes more manageable and more scalable. The challenge becomes how to efficiently train your people, providing the right information at the right time. What type of support are you looking for? Is it a simple CBT program, or do you desire something more fluid and ongoing that provides continuing support for your most valuable assets?

Different programs work for different organizations. In all cases, it is important to work with your teams to identify what solution works best for them. Providing the right type of training, mentorship, or support at the right time can make a huge impact.

Don’t just buy a training solution; look for a partner in your development teams’ training efforts. A partner that gets to know your situation, is available at the right times, and builds an ongoing relationship with the team.

Filed Under: General Tagged With: application security, application security program, developer awareness, developer training, secure code, secure development, security training, training

October 8, 2019 by James Jardine

What is the difference between source code review and static analysis?

Static analysis is the process of using automation to analyze the application’s code base for known security patterns. It uses different methods, such as following data from its source (input) to its sink (output), to identify potential weaknesses. It also uses simple search methods in an attempt to identify hard-coded values, like passwords, in the code. Automated tools struggle to find business logic or authentication/authorization flaws.

Code review is a much larger effort where both automated and manual techniques are used to review a code base for potential security risks. A code review may incorporate a static analysis component to quickly scan for some types of vulnerabilities. The manual part of the secure code review involves a person looking at the code to identify flaws that automated scanners cannot. For example, they may examine the authentication logic for potential logic flaws, or look for other authorization or business logic flaws that automation is challenged to find.
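To make the distinction concrete, here is a minimal sketch in Node/Express-style JavaScript (the routes, names, and database calls are illustrative, not taken from any particular code base). A taint-tracking scanner can follow req.query.id into the SQL string in the first handler, but the missing ownership check in the second is the kind of flaw that typically needs a human reviewer:

// Source-to-sink flaw: user input flows straight into a SQL string.
// Static analysis commonly flags this pattern (SQL injection).
app.get("/orders", (req, res) => {
	const id = req.query.id; // source (input)
	db.query("SELECT * FROM orders WHERE id = " + id, // sink (output)
		(err, rows) => res.json(rows));
});

// Authorization logic flaw: the query is parameterized (no injection),
// but nothing verifies the order belongs to the logged-in user.
// Scanners rarely catch this; a manual reviewer can.
app.get("/orders/:id", (req, res) => {
	db.query("SELECT * FROM orders WHERE id = ?", [req.params.id],
		(err, rows) => res.json(rows));
});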

Filed Under: Questions Tagged With: application security, code review, development, sast, secure code review, secure coding, secure development, secure sdlc, security, testing

August 1, 2019 by James Jardine

Interesting Browser Difference

Update 8/16/19 – It appears that not long after I published this, Chrome shipped an update that now mimics Firefox. In Chrome, you now get a new tab with a URL of “about:blank#blocked”.

When working on a recent test, I noticed something pretty interesting after finding what I thought was a Cross-Site Scripting vulnerability. I have posted previously on the ability to execute XSS when you control the HREF attribute of a link tag. This is done by setting the URL to javascript:alert(9);. This might look like:

<a href="javascript:alert(9);">Click Me</a>

This is similar to the situation I had. However, when I was testing with Firefox, I found that the alert box was not appearing. Instead, when I clicked the link, it would open a new tab and the URL would be javascript:alert(9);. No alert box. What gives?

So I could see the payload there, but it wouldn’t execute. I was confused. I decided to load up Chrome and see what happened there. To my surprise, although it is what I originally expected, the alert box was displayed.

It turns out, the link tag wasn’t as simple as the above example. Instead, it looked more like this:

<a href="javascript:alert(9);" target="_blank" rel="noopener noreferrer">Click Me</a>

After performing a little more testing, it appears that when the target and rel attributes exist, Firefox opens a new tab instead of executing the specified JavaScript. I’m not sure if this is a bug in how Firefox handles this scenario, but it does highlight an interesting point about differences between browsers.

In the event your testing runs across this same type of scenario, take the time to try different browsers to see if the results change.

Filed Under: General Tagged With: application security, AppSec, secure development, security awareness, security testing, xss

May 7, 2019 by James Jardine

XSS in Script Tag

Cross-site scripting is a pretty common vulnerability, even with many of the new advances in UI frameworks. One of the first things we mention when discussing the vulnerability is to understand the context. Is it HTML, attribute, JavaScript, etc.? Knowing the context helps us determine the types of characters that can be used to expose the vulnerability.

In this post, I want to take a quick look at placing data within a <script> tag. In particular, I want to look at how embedded <script> tags are processed. Let’s use a simple web page as our example.

<html>
	<head>
	</head>
	<body>
	<script>
		var x = "<a href=test.html>test</a>";
	</script>
	</body>
</html>

The above example works as we expect. When you load the page, nothing is displayed. The link tag embedded in the variable is treated as a string, not parsed as a link tag. What happens, though, when we embed a <script> tag?

<html>
	<head>
	</head>
	<body>
	<script>
		var x = "<script>alert(9)</script>";
	</script>
	</body>
</html>

In the above snippet, nothing actually happens on the screen, meaning the alert box does not trigger. This often misleads people into thinking the code is not vulnerable to cross-site scripting: if the link tag is not processed, why would the script tag be? In many situations, the understanding is that we need to break out of the (") delimiter to start writing our own JavaScript commands. For example, I might submit a payload of (test";alert(9);t = "). This type of payload would break out of the x variable and add new JavaScript commands. Of course, this doesn’t work if the (") character is properly encoded to prevent breaking out.

Going back to our previous example, we may have overlooked something very simple. The script wasn’t failing to execute because it wasn’t being parsed. Instead, it wasn’t executing because our JavaScript was bad. Our issue was that we were attempting to open a <script> within a <script>. What if we modify our value to the following:

<html>
	<head>
	</head>
	<body>
	<script>
		var x = "</script><script>alert(9)</script>";
	</script>
	</body>
</html>

In the above code, we are first closing out the original <script> tag and then starting a new one. This removes the nested script problem, and when the page is loaded, the alert box appears.

This technique works in many places where a user can control the text returned within the <script> element. Of course, the important remediation step is to make sure that data is properly encoded when returned to the browser. By default, Content Security Policy may not be an immediate solution, since this situation indicates that inline scripts are allowed. However, limiting the use of inline scripts to ones with a registered nonce would help prevent this technique. This reference shows setting the nonce (https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/script-src).
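As a brief sketch of the nonce approach (the nonce value here is a made-up placeholder; real nonces must be unpredictable and generated fresh for each response):

Content-Security-Policy: script-src 'nonce-R4nd0mV4lu3'

<!-- The legitimate script carries the nonce and runs -->
<script nonce="R4nd0mV4lu3">
	var x = "[user-controlled data]";
</script>
<!-- An injected script element without the nonce is refused by the browser -->
<script>alert(9)</script>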

When testing our applications, it is important to focus on the lack of output encoding and less on the ability to fully exploit a situation. Our secure coding standards should identify the types of encoding that should be applied to outputs. If the encoding is not properly implemented, we cite it as a violation of our standards.
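As a hedged illustration of what proper output encoding can look like in this context (Unicode-escaping the angle brackets is one common approach; your framework’s encoder may differ):

<script>
	// With < and > escaped, the HTML parser never sees a literal closing
	// script tag, so the payload stays an inert string inside the variable.
	var x = "\u003C/script\u003E\u003Cscript\u003Ealert(9)\u003C/script\u003E";
</script>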

Filed Under: General Tagged With: app sec, app testing, pen testing, penetration test, qa, secure development, secure testing, security

