
DevelopSec


blog

October 8, 2019 by James Jardine

Investing in People for Better Application Security

Application security, like any facet of security, is a complex challenge with a mountain of solutions. Of course, no one solution is complete. Even throwing multiple solutions at the problem will never get you to 100% coverage.

The push today is around DevSecOps, or pushing left in the SDLC. I am seeing more solutions recommending also pushing right in the SDLC. I feel like we are stuck at a crossroads where the arrow points both ways.

The good news is that none of these recommendations are wrong. We do need to push left in the SDLC. The sooner we address issues, the better off we are. Not introducing a vulnerability in the first place is the best-case scenario. Unfortunately, we also know that is an unrealistic assumption. So this brings us to pushing right. Here, we are looking to quickly identify issues after they are introduced and, in some cases, actively block attacks. Of course, let’s not leave out that automation is our key to scalable solutions as we build and deploy our applications.

Much of what we focus on is bringing in some form of tool. Tools are great. They take the mundane, repetitive work off our plate. Unfortunately, they can’t do it all. In fact, many tools need someone who has at least some knowledge of the subject. This is where the people come in.

Over the years, I have worked with many companies as both a developer and an application security expert. I have seen many organizations that put a lot of effort into building an application security team, focused on managing everything related to application security. Often, this team is separate from the application development teams. This can create a lot of friction. With the main focus on the application security team, many organizations don’t put as much effort into the actual application development teams.

How does your organization prepare developers, business analysts, project managers and software testers to create secure applications?

In my experience, the following are some common responses. Please feel free to share with me your answers.

  • The organization provides computer-based training (CBT) modules for the development teams to watch.
  • The organization sends a few developers to a conference or specialized training course and expects them to brief everyone when they return.
  • The organization brings in an instructor to give an in-house 2-3 day training class on secure development (once a year).
  • The organization uses its security personnel to provide secure development training to the developers.
  • The organization provides SAST or DAST tools, but the results are reviewed by the security team.
  • The organization has updated the SDLC to include security checkpoints, but no training is provided to the development teams.
  • The organization doesn’t provide any training on security for the development teams.

By no means is this an exhaustive list, just some of the more common scenarios I have seen. To be fair, many of these responses have a varying range of success across organizations. We will look at some of the pitfalls of many of these approaches in future articles.

The most important point I want to make is that the development teams are the closest you can get to the actual building of the application. The business analysts help flesh out requirements and design. The developers write the actual code, dealing with different languages and frameworks. The QA team, or software testers, are on the front line of testing the application to ensure that it works as expected. These groups know the application inside and out. Helping them understand the types of risk they face and techniques to avoid them is crucial to any secure application development program.

My goal is not, let me repeat, NOT, to turn your application development teams into “security” people. I see this concept floating around on social media and I am not a fan. Why? First and foremost, each of you has your own identity, your own role. If you are a developer, you are a developer, not a security person. If you are a software tester, don’t try to be a security person. In these roles, security is a part of your primary role; it is not the defining attribute of your tasks.

Instead, the goal is to make you aware of how security fits into your role. As a software tester, the historical goal focused on ensuring that the application functions as expected, validating use cases. When we start to think about security within our role, we start to look at abuse cases. There becomes a desire to ensure that the application doesn’t act in certain ways. Sometimes this is very easy; other times it may be beyond our capabilities.

Take the software tester again. The goal is not to turn you into a penetration tester. That role requires much more in-depth knowledge, and honestly, should be reserved for looking for the most complex vulnerabilities. This doesn’t mean that you can’t do simple tests for Direct Object Reference by changing a simple ID in the URL. It doesn’t mean that you don’t understand how to do some simple checks for SQL Injection or Cross-site Scripting. It does mean you should be able to understand what the common vulnerabilities are and how to do some simple tests for them.

If you invest in your people correctly, you will start to realize how quickly application security becomes more manageable. It becomes more scalable. The challenge becomes how to efficiently train your people to provide the right information at the right time. What type of support are you looking for? Is it the simple CBT program, or do you desire something more fluid and ongoing that provides continuing support for your most valuable assets?

Different programs work for different organizations. In all cases, it is important to work with your teams to identify what solution works best for them. Providing the right type of training, mentorship, or support at the right time can make a huge impact.

Don’t just buy a training solution; look for a partner in your development teams’ training efforts. A partner that gets to know your situation, is available at the right times, and builds an ongoing relationship with the team.

Filed Under: General Tagged With: application security, application security program, developer awareness, developer training, secure code, secure development, security training, training

October 8, 2019 by James Jardine

What is the difference between source code review and static analysis?

Static analysis is the process of using automation to analyze the application’s code base for known security patterns. It uses different methods, such as following data from its source (input) to its sink (output), to identify potential weaknesses. It also uses simple search methods in an attempt to identify hard-coded values, like passwords, in the code. Automated tools struggle to find business logic or authentication/authorization flaws.
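The hard-coded value search can be sketched with a toy example. The regex and sample code below are illustrative only; real SAST tools pair pattern matching with data-flow (source-to-sink) analysis across the whole code base:

```python
import re

# Naive pattern a static analyzer might use for hard-coded secrets.
# Illustrative only: production tools use far more sophisticated rules.
HARDCODED_SECRET = re.compile(
    r'(password|passwd|secret|api_key)\s*=\s*["\'][^"\']+["\']',
    re.IGNORECASE,
)

def scan_source(code: str):
    """Return (line_number, line) pairs that look like hard-coded secrets."""
    findings = []
    for num, line in enumerate(code.splitlines(), start=1):
        if HARDCODED_SECRET.search(line):
            findings.append((num, line.strip()))
    return findings

sample = 'db_user = "app"\ndb_password = "P@ssw0rd!"\n'
print(scan_source(sample))  # flags only the password assignment
```

Notice what this simple approach cannot do: it has no idea whether a flagged value ever reaches a sensitive sink, which is exactly why the data-flow methods mentioned above matter.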

Code review is a much larger project where both automated and manual techniques are used to review a code base for potential security risks. A code review may incorporate a static analysis component to quickly scan for some types of vulnerabilities. The manual part of the secure code review involves a person looking at the code to identify flaws that automated scanners cannot. For example, they may examine the authentication logic for potential logic flaws. They may also look to identify other authorization or business logic flaws that automation is challenged to find.

Filed Under: Questions Tagged With: application security, code review, development, sast, secure code review, secure coding, secure development, secure sdlc, security, testing

August 1, 2019 by James Jardine

Interesting Browser Difference

Update 8/16/19 – It appears that not long after I published this, Chrome shipped an update that now mimics Firefox. In Chrome you now get a new tab that has a URL of “about:blank#blocked”.

When working on a recent test I noticed something pretty interesting when I had found what I thought was a Cross-Site Scripting vulnerability. I have posted previously on the ability to execute XSS when you control the HREF attribute of a link tag. This is done by setting the URL to javascript:alert(9);. This might look like:

<a href="javascript:alert(9);">Click Me</a>

This is similar to the situation I had. However, when I was testing with Firefox, I found that the alert box was not appearing. Instead, when I clicked the link, it would open a new tab and the URL would be javascript:alert(9);. No alert box. What gives?

So I could see the payload there, but it wouldn’t execute. I was confused. I decided to load up Chrome and see what happened there. To my surprise (though it was what I originally expected), the alert box was displayed.

It turns out, the link tag wasn’t as simple as the above example. Instead it looked more like this:

<a href="javascript:alert(9);" target="_blank" rel="noopener noreferrer">Click Me</a>

After performing a little more testing, it appears that when the target and rel attributes exist, Firefox opens a new tab instead of executing the specified JavaScript. I’m not sure if this is a bug in how Firefox handles this scenario or not, but it does highlight an interesting point about differences between browsers.

In the event your testing runs across this same type of scenario, take the time to try different browsers to see if the results change.

Filed Under: General Tagged With: application security, AppSec, secure development, security awareness, security testing, xss

May 7, 2019 by James Jardine

XSS in Script Tag

Cross-site scripting is a pretty common vulnerability, even with many of the new advances in UI frameworks. One of the first things we mention when discussing the vulnerability is to understand the context. Is it HTML, Attribute, JavaScript, etc.? This understanding helps us better understand the types of characters that can be used to expose the vulnerability.

In this post, I want to take a quick look at placing data within a <script> tag. In particular, I want to look at how embedded <script> tags are processed. Let’s use a simple web page as our example.

<html>
	<head>
	</head>
	<body>
	<script>
		var x = "<a href=test.html>test</a>";
	</script>
	</body>
</html>

The above example works as we expect. When you load the page, nothing is displayed. The link tag embedded in the variable is treated as a string, not parsed as a link tag. What happens, though, when we embed a <script> tag?

<html>
	<head>
	</head>
	<body>
	<script>
		var x = "<script>alert(9)</script>";
	</script>
	</body>
</html>

In the above snippet, nothing actually happens on the screen, meaning that the alert box does not trigger. This often misleads people into thinking the code is not vulnerable to cross-site scripting. If the link tag is not processed, why would the script tag be? In many situations, the understanding is that we need to break out of the (") delimiter to start writing our own JavaScript commands. For example, if I submitted a payload of (test";alert(9);t = "), this type of payload would break out of the x variable and add new JavaScript commands. Of course, this doesn’t work if the (") character is properly encoded to prevent breaking out.

Going back to our previous example, we may have overlooked something very simple. It wasn’t that the script wasn’t executing because it wasn’t being parsed. Instead, it wasn’t executing because our JavaScript was bad. Our issue was that we were attempting to open a <script> within a <script>. What if we modify our value to the following:

<html>
	<head>
	</head>
	<body>
	<script>
		var x = "</script><script>alert(9)</script>";
	</script>
	</body>
</html>

In the above code, we are first closing out the original <script> tag and then starting a new one. This removes the nesting issue, and when the page is loaded, the alert box will appear.

This technique works in many places where a user can control the text returned within the <script> element. Of course, the important remediation step is to make sure that data is properly encoded when returned to the browser. Content Security Policy may not be an immediate solution, since this situation indicates that inline scripts are allowed. However, limiting inline scripts to those with a registered nonce would help prevent this technique. This reference shows setting the nonce (https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/script-src).
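As a rough illustration (the nonce value below is made up, not from a real deployment), a nonce-based policy pairs a response header with matching script tags:

```
Content-Security-Policy: script-src 'nonce-R4nd0mV4lu3'

<script nonce="R4nd0mV4lu3">
  /* runs: nonce matches the policy */
</script>
<script>
  /* blocked: no nonce, so inline execution is denied */
</script>
```

An attacker-injected script block would not carry the correct nonce, so the browser refuses to run it even though inline scripts are otherwise in use.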

When testing our applications, it is important to focus on the lack of output encoding and less on the ability to fully exploit a situation. Our secure coding standards should identify the types of encoding that should be applied to outputs. If the encodings are not properly implemented, then we cite a violation of our standards.
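As a minimal sketch of that kind of output encoding (a hypothetical helper, not code from any specific framework), untrusted data placed inside a JavaScript string can be JSON-encoded with the angle brackets escaped, so a closing </script> in the data cannot terminate the block:

```python
import json

def js_string_encode(value: str) -> str:
    """Encode untrusted data for a JavaScript string literal.

    json.dumps escapes quotes and backslashes and adds the surrounding
    double quotes; escaping the angle brackets keeps a '</script>' in
    the data from closing the enclosing script block.
    """
    encoded = json.dumps(value)
    return encoded.replace("<", "\\u003c").replace(">", "\\u003e")

payload = "</script><script>alert(9)</script>"
print("var x = " + js_string_encode(payload) + ";")
```

The emitted value no longer contains a literal </script>, so the browser parses it as plain string data rather than a tag boundary.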

Filed Under: General Tagged With: app sec, app testing, pen testing, penetration test, qa, secure development, secure testing, security

January 4, 2019 by James Jardine

What is the difference between Brute Force and Credential Stuffing?

Many people get confused between brute force attacks and credential stuffing. To help clear this up, here is a simple description of the two. Both relate to the login form only.

Brute Force
Brute force attacks on the login form consist of the attacker having a defined list (called a dictionary) of potential passwords. The attacker then tries each of these passwords with each username being attacked. Put simply, this is a one (username) to many (passwords) attack.

A common mitigation to brute force attacks is the implementation of account lockout. In this case, after 3, or 5, or 10 failed attempts for a single username, the user account is locked to block any more attempts. This drastically reduces the number of passwords that may be tried in a short period of time.

Credential Stuffing
Credential Stuffing is another attack on the login form but it differs from a brute force attack in that the list used contains both a username and a password. This list is often obtained through a data breach at another organization. The purpose is to find accounts that are re-used across multiple sites. In this case, the attack is a 1 (username) to 1 (password) attack. For each username, only one password will be attempted.

Unlike with brute force, account lockout doesn’t have much effect on credential stuffing. Multi-factor authentication is a good mitigation as it will limit the use of valid credentials.
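The one-to-many versus one-to-one distinction can be sketched in a few lines (the usernames, passwords, and leaked pairs below are made up for illustration):

```python
def brute_force_attempts(usernames, dictionary):
    """Brute force: every dictionary password is tried against every
    username -- a one (username) to many (passwords) attack."""
    return [(u, p) for u in usernames for p in dictionary]

def credential_stuffing_attempts(breached_pairs):
    """Credential stuffing: each leaked username/password pair is tried
    exactly once -- a one (username) to one (password) attack."""
    return list(breached_pairs)

# Made-up sample data for illustration.
users = ["alice", "bob"]
words = ["123456", "password", "letmein"]
leaked = [("alice", "hunter2"), ("bob", "qwerty")]

print(len(brute_force_attempts(users, words)))    # 2 users x 3 words = 6
print(len(credential_stuffing_attempts(leaked)))  # one try per leaked pair = 2
```

This also shows why the mitigations differ: lockout counters the many failed tries per account in the first loop, while the second loop makes only one well-informed attempt per account.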

Filed Under: Questions Tagged With: application security, AppSec, brute force, credential stuffing, developsec, pen testing, penetration test, pentest, secure development, secure testing, secure training, vulnerability

October 5, 2018 by James Jardine

Apple Mail: Highlighting External Email Addresses

A simple error to make when sending an email with sensitive data is to send it to the wrong email address. Imagine you are sending some information to Dave down in accounting. When you fill out the email you start typing Dave and the auto-complete pops up and you select the first one (out of habit). You think you have selected the right Dave, but what if you didn’t? How would you know that you were about to send potentially sensitive information outside of the organization?

Apple Mail and iOS Mail have a built-in feature to highlight any email address outside of a specified domain. The steps to enable this are a little different on the desktop vs. an iPad/iPhone. Let’s look at each one.

Desktop

By default, email addresses all appear the same when composing a new message. The image below shows me composing an email with two addresses:

  • To: james@jardinesoftware.com
  • CC: james@developsec.com

[Image: Mail 1]

To change this, in the Mail program, go to Preferences…Composing. Under the Addressing: section there is a checkbox for “Mark addresses not ending with”. Check the box and enter your organization’s domain name in the text box. In the image below, I have entered my domain of @jardinesoftware.com.

[Image: Mail 2]

Now, go back and compose a new message entering in an email address that is in the org and one that is out of it. The email address that is outside of your domain should be highlighted in red. The below image shows my configuration with the same 2 email addresses. Notice that the james@developsec.com address is now listed in red indicating it is external to my organization.

[Image: Mail 3]

iPhone/iPad

The Mail application on the iPhone/iPad also has this capability. To enable it, go into Settings->Mail. Scroll down to the Composing section and tap the “Mark Addresses” button (shown below).

[Image: Mail 4]

On the next screen, enter the domain you don’t want highlighted. In my case, I entered my domain of @jardinesoftware.com (shown below).

[Image: Mail 5]

Once the change is stored, go to mail and compose a new message. In my example, I used the same emails as above. Notice that the james@developsec.com email address is now highlighted in red:

[Image: Mail 6]

Although this is not a foolproof way to stop this type of mistake, it does add a visual cue that just might catch your attention. Simple changes like this can help reduce your risk of accidentally leaking sensitive information out of your organization.

Filed Under: General Tagged With: breach, cyber security awareness month, data breach, email, mail security, security, security awareness, security training


© Copyright 2018 Developsec · All Rights Reserved