Insulin Pump Vulnerability – Take-aways

It was recently announced that a few vulnerabilities were found in some insulin pumps that could allow a remote attacker to cause the pump to deliver more insulin than expected. There is a great write-up of the situation here. When I say remote attack, keep in mind that in this scenario the attacker must be in close proximity to the device. This is not an attack that can be performed over the Internet.

This situation creates an excellent learning opportunity for anyone who is creating devices, or who already has devices on the market. Let’s walk through a few key points from the article.

The Issue

The issue at hand was that the device didn’t perform strong authentication and that the communication between the remote and the device was not encrypted. Keep in mind that this device was rolled out back in 2008. That is not to say encryption wasn’t an option eight years ago, but for these types of devices it may not have been as mainstream. We talk a lot about encryption today. Whether it is for IoT devices, mobile applications, or web applications, the message is the same: encrypt the communication channel. This doesn’t mean that encryption solves every problem, but it is a control that can be very effective in certain situations.

This is not the only instance we have seen of unencrypted communications allowing remote execution. It wasn’t long ago that computer mice and keyboards had the same type of problem, and it won’t be the last time we see it.

Take the opportunity to look at what you are creating and start asking questions about communication encryption and authentication. We are past the time when we could assume that a protocol can’t be reversed, or that it requires special equipment. If you have devices in the field, go back and review what you are doing. Is it secure? Could it be better? Are there changes you need to make? If you are creating new devices, make sure you are thinking about these issues. Communications are high on the list of security vulnerabilities for all devices. Make sure they are considered.

The Hype, or Lack of

I didn’t see a lot of extra hype over the disclosure of this issue. Oftentimes in security, we have a tendency to exaggerate things a bit. I didn’t see that here, and I like it. At the beginning of this post, I mentioned the attack is remote but still requires close physical proximity. Is it an issue? Yes. Is it a high-priority issue? That depends on the functionality achieved. As a matter of fact, the initial post about the vulnerabilities states that they don’t believe it is cause for panic and that the risk of wide-scale exploitation is relatively low. Given the circumstances, I would agree. They even go on to say they would still use this device for their own children. I believe this is important because it highlights that there is risk in everything we do, and it is our responsibility to understand it and choose whether to accept it.

The Response

Johnson & Johnson released an advisory to their customers alerting them to the issues. In addition, they provided potential options for customers who were concerned about the vulnerabilities. You might expect the response to downplay the risk, but I didn’t get that feeling. They list the probability of exploitation as extremely low, and I don’t sense any dishonesty there. The statement is clear and understandable. The key component is the offering of mitigations. While these may involve some inconvenience, it is positive to know there are options available to further reduce the risk.

Attitude and Communication

It is tough to tell just by reading articles about a situation, but it feels like the attitudes were positive throughout the process. Maybe I am way off, but let’s hope that is true. As an organization, it is important to be open to input from 3rd parties when it comes to the security of our devices. In many cases, the information is being provided to help, not to harm. If you are the one receiving the report, take the time to actually read and understand it before jumping to conclusions.

As a security tester, it is important for the first contact to be a positive one. This can be difficult if you have had bad experiences in the past, but don’t judge everyone based on previous experiences. When the communication starts on a positive note, the chances are better it will continue that way. Starting off with a negative attitude can bring a disclosure to a screeching halt.


We are bound to miss things when it comes to security. In fact, you may have made a decision that, years down the road, turns out to be incorrect. Maybe it wasn’t at the time, but technology changes quickly. We can’t just write off issues; we must understand them. Understand the risk and determine the proper course of action. Being prepared for the unknown can be difficult, but keeping a level head and staying honest will make these types of engagements easier to handle.

Jardine Software helps companies get more value from their application security programs. Let’s talk about how we can help you.

James Jardine is the CEO and Principal Consultant at Jardine Software Inc. He has over 15 years of combined development and security experience. If you are interested in learning more about Jardine Software, you can reach him at @jardinesoftware on Twitter.

WAF and your penetration test

Your penetration tester wants you to disable your web application firewall (WAF) or white list their IP. Do you do it? Should you do it?

This question gets asked all the time and it is important to understand the pros and cons to the final decision.

First, let’s understand why the request to disable the WAF is made in the first place. The first reaction may be that this is just lazy testing, but that is not the reason. One of the goals is to test the application itself, not a component in front of it. While the WAF is there to stop these types of attacks, it also prevents you from understanding the security stance of the application on its own.

Getting direct access to the application helps build a better picture of its security. For example, if your WAF is blocking common vulnerabilities (which is good, that is what it is supposed to do), you may never realize that you are suffering from insecure coding practices. Other identified vulnerabilities may hint at those practices, but you can’t understand how deep they run.

Second, WAFs have fallen victim to bypasses many times in the past. Testing through the WAF may block many attack attempts, but that doesn’t mean the underlying vulnerability doesn’t exist. WAF bypass attempts can be a drain on your assessor’s time and may limit the rest of the testing that can be performed in the limited timeframe. Even if the assessor does an in-depth bypass test, it doesn’t mean a bypass won’t be discovered in the product at a later time.

Finally, what happens if your WAF gets disabled or misconfigured? This is always a risk with any device. An issue arises and, for troubleshooting, you put the WAF back in passive mode. You push a rule change that doesn’t work as expected, leaving attack opportunities open. When these issues arise, the application becomes more vulnerable. Unfortunately, you may not even know how vulnerable, because the issues were never identified during a test.

When we perform assessments with WAFs in place, we recommend that our clients allow us direct access to the application for the first portion of the test. This can be done by whitelisting our IP address, which is better than disabling the WAF altogether. We don’t want to open the application up to attack by others during our assessment. Direct access allows us to test the application, not the WAF, to identify vulnerabilities. This gives a more accurate picture of the risk of the application and its development practices.

The second part of the test goes through the WAF. This allows us to verify whether specific vulnerabilities found in the application are effectively blocked by the WAF, which gives a better understanding of the WAF’s effectiveness. You would want to know if the WAF can be easily bypassed and a rule change is needed.

Determining whether to allow direct access during a test is not always easy. As a tester, you may not always get direct access. As a client, you may not always want to grant it. A lot of this depends on your goals for the test. Understanding why direct access is requested helps clients better understand what they are getting out of the assessment.

If you are getting ready to have a penetration test and have implemented a WAF, make sure you have given this topic some thought before the kick-off. Be prepared to provide a response with an understandable explanation for your decision.


Login Forms and HTTP

Does your application have a login form? Do you deliver it over HTTPS to protect the username and password while being transmitted to the server? If you answered yes to both of those questions, are you sure?

Many years ago, before there was a huge push for HTTPS all the time, it was common practice for many applications to load a login form using HTTP, but then submit the form over HTTPS. This was accomplished by setting the action attribute of the form to the full HTTPS version of the site.

<form action="https://www.example.com/login" method="post"> <!-- hypothetical full HTTPS URL -->

There was a flaw in this setup, and it isn’t even with the submission of your credentials. Instead, the issue is how that login form is initially loaded. Remember we said that the initial request was made over HTTP? The belief was that because loading the form doesn’t transmit any sensitive data, it would be OK to use HTTP. You could even take a trip back to the performance wars of that time, when it was argued that HTTP was much faster to load. (We have learned a lot over the years.)

The problem is that if a malicious user (attacker) on your same network is able to redirect your traffic through them, they can manipulate the initial load of the page. Imagine if the attacker intercepted your request to the login page (the initial load) and changed the action of the form to point to a different site:

<form action="http://attacker.example.com/login" method="post"> <!-- hypothetical attacker-controlled URL -->

Notice how the new form submission will go to a different site, not even over HTTPS. From the end user’s point of view, they wouldn’t even know the form was going to send their credentials to a different site.

Over the years, we have seen the use of this methodology shrink. Many sites now load their login forms over HTTPS. As a matter of fact, many sites are now 100% HTTPS.

But Wait!!

There is another angle to this that is often overlooked, but works very similarly. Does your site allow itself to be loaded into frames? I have seen a lot of sites that include another application’s login form using framesets or frames. The issue: the containing site is often a simple marketing or branding site and runs over HTTP.

Like the above example, the HTTP site includes a frame reference to an HTTPS site, and the login form submission itself is still correct. However, the attacker from the previous scenario could intercept the containing page and change the reference for the login frame. In this case, the attacker would most likely create a page that is identical to the real login form and point the frame to that one. The user would have no idea that the authentication page was fake, because it would look like the original. When the user submits their credentials, they would be sent to the malicious user instead of the real site.

It is recommended not to allow your site to be hosted within a frame. There are plenty of articles that discuss frame-busting techniques, and you can look into the X-Frame-Options header as well. If your form can’t load in a frame, the risk of it being included on a non-secure site is reduced. For all other scenarios, there isn’t much reason not to use HTTPS from end to end. Securing all of the transactions reduces the risk that an attacker can easily manipulate that traffic.
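
If you want a concrete starting point, here is a minimal sketch, assuming a Python Flask application (the same header can be set by any framework or web server), that adds X-Frame-Options to every response:

    from flask import Flask

    app = Flask(__name__)

    @app.after_request
    def set_frame_options(response):
        # Tell browsers not to render this site inside any frame.
        # Use "SAMEORIGIN" instead if you legitimately frame your own pages.
        response.headers["X-Frame-Options"] = "DENY"
        return response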

Does SAST and DAST Really Require Security Experts To Run Them?

There is no argument that automated tools help quickly identify many of the vulnerabilities found in applications today. These tools typically fall into one of the following three categories:

  • Dynamic Application Security Testing (DAST) – analyzes the running application.
  • Static Application Security Testing (SAST) – analyzes the source or byte code of the application.
  • Interactive Application Security Testing (IAST) – uses agents installed on the web server to instrument the application and analyze it at runtime. This gives access to both dynamic and static details.

These tools are becoming more popular within organizations as part of their application security programs. While there is still a large gap that can only be filled by manual testing, the automated tools are a good first step.

Are these security tools or development tools?

This depends on what your goal is. Ultimately, they are focused on finding security flaws, which makes them “security” tools; however, that doesn’t mean they are meant only for the security team. I have been presenting for a long time on the idea that many of these tools should be considered development tools and placed in the hands of development and QA. To understand where they fit in your organization, you have to understand how you want to use them.

The application team should be using these tools to quickly identify security bugs earlier in the SDLC. To make this work efficiently, they should be treated as development tools. Let’s look at how developers use SAST. Developers can scan their source code using their chosen SAST tool. In many cases, these tools provide an IDE plugin to make the process easier. As developers write code, they can perform regular scans in smaller increments to quickly identify potential security issues. Once identified, the code can be corrected before it even leaves the development queue. In other instances, SAST is embedded into the continuous integration pipeline, and scans run on check-in or other predefined activities. The results are immediately available for the development teams to review and correct.

The security team should be using these tools differently than the development teams. Security is typically more interested in the risk an application presents to the organization. Used this way, the tools provide valuable information that can be reported on to help analyze that risk. The security team should also be involved in creating the policies and procedures around the implementation and use of the tools.

I thought only security experts can run these tools?

I believe it is a myth that only security experts can run these types of tools. Developers have long had static analysis tools built right into their IDEs; they just didn’t focus on security flaws. QA groups use all sorts of automated tools to assist in application testing. Let’s take a look at each of the tool categories again.


DAST

Dynamic scanners usually require at minimum three pieces of information to get started: URL, username, and password. Most of these scanners are either Windows or web applications with a GUI. Setting up a scan is usually not too complex and is fairly straightforward. I will admit that some applications, depending on how the login works or how routing is configured, can require a more complex setup. Depending on the scanner, you may be able to tweak other advanced settings, but these are not difficult to learn.
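
Many scanners also expose an API on top of the GUI. As a rough illustration of how little is needed to kick off a scan, here is a sketch using OWASP ZAP’s Python client (the zapv2 package); the target URL and API key are placeholders, and a real setup would also configure authentication:

    import time
    from zapv2 import ZAPv2  # OWASP ZAP Python API client

    target = "https://qa.example.com"  # hypothetical QA instance
    zap = ZAPv2(apikey="changeme",
                proxies={"http": "http://127.0.0.1:8080",
                         "https": "http://127.0.0.1:8080"})

    scan_id = zap.spider.scan(target)             # crawl the application first
    while int(zap.spider.status(scan_id)) < 100:
        time.sleep(2)

    scan_id = zap.ascan.scan(target)              # then run the active scan
    while int(zap.ascan.status(scan_id)) < 100:
        time.sleep(5)

    for alert in zap.core.alerts(baseurl=target):
        print(alert["risk"], alert["alert"], alert["url"])

Whether through a GUI or an API like this, the inputs are the same handful of values.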

Once the scan has completed, the results are provided in a GUI as well. The GUI typically provides a simple way to view each of the findings, including the request and response the scanner sent and received. In dynamic scanning, the request and response are helpful in identifying where the issue is in the application. The tool will also usually provide a description of the vulnerability and references to more information. Based on my experience, reviewing findings in the provided GUI is much more efficient than reviewing an exported PDF. A GUI that provides easy navigation is much better received than a 100-200 page PDF report that can’t really be consumed.

Dynamic scanning fits right into the QA process and can easily be executed by the QA or development team. This is good, because it can be run against QA or other pre-production instances, which are owned by the QA and development groups.


SAST

Static scanners analyze your binaries or source code. As expected, they need either the source code itself or a compiled version of it. Depending on the tool, you may have to submit the binaries to a service or select them using a local interface. The most difficult part of the process is usually getting the code to compile correctly, with debug symbols and all dependent components. This compilation is usually done by the development team, or by DevOps if that has been implemented.

Once the scan is complete, the results are provided either in a web interface or a local GUI. These results can be inspected to view details about each issue, just like in the DAST solutions. The big difference is that in SAST you won’t see a request/response; instead, you see the file and line of code, as well as the source and sink of the vulnerability. The source and sink trace the data as it passes through the application to the vulnerable line of code, which is useful in understanding the vulnerability so it can be resolved. There is a lot of efficiency gained in being able to view these details in a simple manner. The other option, exporting the results to a PDF file, is very inefficient.
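
For example, here is a simplified, hypothetical snippet of the kind of source-to-sink trace a SAST report walks you through:

    import sqlite3
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/user")
    def get_user():
        user_id = request.args.get("id", "")                  # source: untrusted input
        query = "SELECT * FROM users WHERE id = " + user_id   # taint flows into the query
        conn = sqlite3.connect("app.db")                      # hypothetical database
        row = conn.execute(query).fetchone()                  # sink: SQL injection
        return str(row)

The report points at the execute call as the sink and the request parameter as the source, so the developer can see the whole path at a glance.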

Because the results reference source code, developers are put into an environment they understand. With some understanding of how security flaws work and how they are addressed, developers can review the results and take appropriate actions to address them.


IAST

Interactive testing uses an agent on the web server to instrument the application. Basically, it injects code into the application to analyze it as it runs. Due to the performance hit, this is a great tool for pre-production environments. It also relies on the code being executed: if part of your application is not being used, it won’t be analyzed. This makes it great for a QA or regression-testing environment. Setup just requires installing the agent on the server. The results are displayed in a web interface as they are found. These results include the source code and the request/response values. Having direct access to the tool provides an efficient way to view the details and provide remediations.

Review how you use these tools

It has been my experience that more than just security experts can run any of these tools. There may be some tweaking to do, and it requires learning how the tools work, but this is something anyone on the dev or QA teams can adapt to. Take a look at your current program. If you are using any of these tools, look at how you are using them and who has access. Is your current process efficient? Does it make things harder than they need to be? If you are not using these tools and are looking to implement any of them, look at the resources you have and get the development teams involved. Think about where you want these tools to sit and how you want to take advantage of them. They don’t have to be locked down to a security team. Use the resources you have and allow the application teams to take ownership of the tools that will help them create more secure applications.


How Serious is Username Enumeration

Looking through Twitter recently, I caught a very interesting stream on the seriousness of username enumeration flaws. There were quite a few replies and a good discussion on the topic. 140 characters makes it difficult to share a lot of thoughts, so I thought this would make a great post.

What is Username Enumeration?

Username enumeration refers to the ability to determine not only whether a username is valid within a specific application, but also to automate the process of identifying multiple valid usernames. The most common example of username harvesting is on the application’s login form. The culprit: the unsuccessful login error message. This happens because the application creators want to be helpful and let the user know what went wrong. When the message is not generic, it may indicate whether or not the user exists. An example would be “Username does not exist.” This is bad. A better response would be “Username or password is incorrect.” This is much more generic and doesn’t give away whether the username is valid.

Where do we find Username Enumeration?

As I mentioned above, the login screen is one of the most common places to find username harvesting. However, it is not the only place. Probably the next two biggest offenders are the Forgot Password screen and the User Registration screen. With all three of these screens, it is critical to realize that the display message is not the only way to detect username harvesting. In reality, it is any difference in the response between an invalid and a valid user. This could be the error message, a different status code, timing differences, or a myriad of other techniques. Don’t just think about the error message; that one is typically the easiest, but not the only method for identification. API calls may also be a source of username harvesting.
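
Timing differences are worth a concrete example. A common fix is to burn the same password-hashing cost whether or not the username exists. A minimal sketch, assuming Werkzeug’s password helpers (user is a hypothetical account object with a password_hash field):

    from werkzeug.security import check_password_hash, generate_password_hash

    # Hash of a throwaway value, computed once at startup. It exists only
    # so failed lookups cost the same time as real password checks.
    DUMMY_HASH = generate_password_hash("not-a-real-password")

    def credentials_valid(user, password):
        if user is None:
            # Unknown username: perform a hash check anyway so response
            # timing does not reveal that the account does not exist.
            check_password_hash(DUMMY_HASH, password)
            return False
        return check_password_hash(user.password_hash, password)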

The Login Form

The login form is typically vulnerable through its error message. To help protect the login form, make sure the application returns a generic error message for any error situation. A good example is “Your login attempt was unsuccessful. If you continue to have trouble, contact support.” This message works for an invalid username, an invalid password, and even account lockout.
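
Here is a minimal sketch of that behavior, assuming a Flask application; find_user and start_session are hypothetical helpers:

    from flask import Flask, request, render_template

    app = Flask(__name__)

    GENERIC_ERROR = ("Your login attempt was unsuccessful. "
                     "If you continue to have trouble, contact support.")

    @app.route("/login", methods=["POST"])
    def login():
        user = find_user(request.form["username"])  # hypothetical lookup helper
        # Unknown user, wrong password, and locked account all produce
        # the exact same response, so none of them can be told apart.
        if user is None or user.locked or not user.check_password(request.form["password"]):
            return render_template("login.html", error=GENERIC_ERROR)
        return start_session(user)                  # hypothetical session helper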

The Forgot Password Form

The forgot password screen is usually vulnerable due to poor design. Most systems ask for the username or email address and, if it is correct, do one of the following: 1) ask secret security questions, or 2) display a message that an email has been sent to the address on record. Unfortunately, for an incorrect username, the system returns an error message that the username was not found. To correct this, either ask for more than just the username (something that is private to the account), or always display the “email has been sent” message. In that configuration, the same message is displayed no matter what.
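
A sketch of that approach, again Flask-style with hypothetical helpers:

    from flask import Flask, request, render_template

    app = Flask(__name__)

    @app.route("/forgot-password", methods=["POST"])
    def forgot_password():
        user = find_user_by_email(request.form["email"])  # hypothetical lookup helper
        if user is not None:
            send_reset_email(user)                        # hypothetical mailer
        # The response is identical whether or not the address is on record.
        return render_template("forgot.html",
                               message="If that address is on file, a reset email has been sent.")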

The Registration Form

The registration screen is a bit trickier because, in most situations, you need to be able to alert the user that their username is already in use. Unlike the other two scenarios, the error message can’t really be removed. One option on a registration screen is to use a CAPTCHA to attempt to prevent automated attacks. If implemented properly, the attacker’s enumeration would be slowed down, making things more difficult. It may not make the attack impossible, but the goal is to reduce the risk.

How is Username Enumeration Used?

There are a few ways that username enumeration becomes important to an attacker. First, it is a good technique to narrow down the list of usernames to attempt password attacks against. One person on Twitter gave the example of taking a list of 100 million usernames and reducing it to 100 thousand to perform actual password guessing against. In this scenario, the attackers are also attempting to take advantage of weak password complexity, or just bad password hygiene. Oftentimes, they may take the top 3 or 4 most common passwords and try those against all of the valid usernames to see if anyone is using them.

The second way username enumeration is used is for social engineering attacks, or at least more targeted ones. If the attacker knows that the user has an account on a specific application, phone calls and emails become much more convincing and ultimately more successful. Limiting emails to 100,000 users rather than 100 million reduces the overhead and the chances of getting detected.

Somewhat related to social engineering, this information could also be used to target individuals based on moral values. Remember the Ashley Madison breach a while back? Depending on where the username is in use, it may be possible to identify people who use a particular site and out them, or even blackmail them. Think of other sites people may not want to be publicly associated with.

How Serious is it?

I bet you didn’t think we would ever get here. It is important to understand how attackers may use a vulnerability before determining how serious it really is. Unfortunately, there isn’t an easy answer that fits every application. As I mentioned above, with poor password hygiene, the risk is raised if someone is using one of those top 4 passwords. It also depends on the type of system, the type of data collected, the functionality the application performs, etc. No two systems are created or rated equally.

In the login-screen scenario above, guessing the top 4 passwords, let’s not forget that it is possible to try all 100 million usernames from our list. Sure, it is helpful to reduce the list to only valid usernames, but the attack is still possible without the username enumeration flaw. In the ideal setup, the application would have some form of detection in place that might flag all of the unsuccessful login attempts, but that is the ideal; it is not common to see that type of detection implemented in many applications. Without that detection, and with a little crafty rate limiting by the attacker, the difference between 100 million attempts and 100,000 attempts is really just time.

As we all know, social engineering attacks are the latest and greatest. Any help we provide attackers in that arena is a step backwards for us. This could lead to a much bigger problem, which is why we need to understand the issues we are trying to solve.

Should we fix it?

If we had all the time and resources in the world, the answer would be an easy “yes.” Unfortunately, we don’t. This really comes down to your organization’s risk assessment and priorities. I always recommend resolving the issue, as it makes for a better application overall. Depending on your situation, the fix may not be straightforward or easy, and you may have higher-priority items that need to be worked on. Remember, the password attack is really a combination with a weak password issue; it is not just username harvesting on its own.

Take the opportunity to review how you are auditing and logging these types of events within your application. In some cases, we see logging occurring, but no auditing of those logs. Detection and response are critical in helping reduce the risk of these types of flaws.
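
Even something as small as this sketch (hypothetical names, using Python’s standard logging module) gives the detection side something to work with:

    import logging

    security_log = logging.getLogger("security")

    def record_failed_login(username_attempted, source_ip):
        # Capture the details internally for auditing and alerting; the
        # user-facing response stays generic and reveals nothing.
        security_log.warning("failed login attempt user=%s ip=%s",
                             username_attempted, source_ip)

Logging is only half of it, though; someone, or something, has to review these events and respond.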

Continuing Forward

Username enumeration is a risk to any application. It is up to the organization to understand its business to determine how big or small that risk is. This isn’t something that should just be ignored, but it also isn’t automatically a critical finding. Don’t just write the finding off. Use it as an opportunity to discuss the functionality, understand how it works, and determine the best course of action for the organization and its users.

Username enumeration is typically fairly simple to test for and should be worked into the QA testing cycle. Take the time to discuss with the architects, developers, business analysts, and the testers how the organization can test for this and reduce the risk.


Should Password Change Invalidate All Access Tokens?

Passwords are a part of our everyday life. It is no wonder they are under such scrutiny, with many breaches focusing on them. We all know how to manage our passwords, or at least we should by now. We know we should change our passwords every once in a while, especially if we believe they may have been part of a recent breach. But what about those access tokens?

Access tokens are typically used by your mobile devices to access your account without the need to enter your username and password every time. In some situations, you may have to create an app password for your mobile device if you are using 2-factor authentication. This is because many applications still don’t support the 2-factor model; instead, you create an application password and use that with the app.

Depending on your system, the management of these app passwords or tokens can vary quite a bit. In this post, I want to focus on how the ideal solution might work. First, the access tokens and the user’s password should not be related in any way. The closest they come to any relation is that they both provide authentication to the same application. The purpose here is that the compromise of one does not compromise the other. While getting access to an access token may give me access to the account, it should not give me access to the password. If you are thinking that access to the account means I could change the password, let me stop you right there: a good system requires the current password in order to change the password, so that path should be blocked.

In the same vein, getting access to the account password should not lead to the compromise of the account’s access tokens. In a good system, there should be a way to manage these tokens, but that management should not include the ability to view existing token values. Instead, that functionality is used to revoke an access token. In the case of an application password (2-factor), you may also have the ability to create a new application password. This still would not give access to the existing password or token values.
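
To make that concrete, here is a minimal sketch (a hypothetical design, Python standard library only) of token management that follows these rules: token values are random and unrelated to the password, only hashes are stored, and management means revoking, never viewing:

    import hashlib
    import secrets

    class TokenStore:
        """Sketch: tokens are independent of the password and stored
        only as hashes, so they can be revoked but never viewed again."""

        def __init__(self):
            self._tokens = {}  # token hash -> device label

        def issue(self, device):
            token = secrets.token_urlsafe(32)  # random, unrelated to the password
            digest = hashlib.sha256(token.encode()).hexdigest()
            self._tokens[digest] = device
            return token  # shown to the user exactly once

        def is_valid(self, token):
            return hashlib.sha256(token.encode()).hexdigest() in self._tokens

        def revoke(self, digest):
            # The management UI lists device labels and hashes, never raw
            # token values; revocation is the only operation it offers.
            self._tokens.pop(digest, None)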

So, the question posed, or implied: if you change your account password, should that invalidate the access tokens on your devices?

If there is truly no relationship between the two items, and one doesn’t affect the other, then there may not be a reason to invalidate everything on every password change. As a good friend always likes to say, “What is the problem we are trying to solve?” So what is the problem? Let’s look at a few scenarios:

User just wanted to change their password

It is not unusual for a user to want to change their password. They may have even forgotten it. In this case, if the password and access tokens are unrelated, I don’t see a big risk for the access tokens that may have been created for other devices. The password is completely separate from the access tokens, and the change should have no impact on the mobile devices.

User’s password was leaked in a recent breach

This scenario is very similar to the one above. However, if an account has been compromised, it makes sense to review the devices that have been granted access and remove any that should not be there. The account compromise should not lead to the compromise of the access tokens already generated, but it may have enabled the creation of access tokens for new devices.

User’s device was lost or stolen

Something no one wants to think about: their device getting lost or stolen. In this case, the specific device’s access token has probably been compromised. This shouldn’t lead to a compromise of the user’s actual password. Like the above example, there should be a way for the user to log into their account and revoke access for that specific device, or, worst case, all devices.

Should we, or shouldn’t we?

I don’t think there is an exact answer here. If done right, the two pieces are separate and stand alone; what happens to one shouldn’t have an effect on the other. Unfortunately, that is not always how things work in the real world. If you are a developer or architect designing these systems, take the time to consider these questions. Understand how your applications work, how the authentication pieces fit together, and how that all ties together for the user experience. It may make sense to reset everything every time; on the flip side, it shouldn’t have to if designed properly. Remember the question: “What are we trying to solve?”

Application security is moving at a rapid pace, and it is great to see these types of topics coming up. As we raise more questions, we start to have more discussions. Share your feedback with us about this topic. Are there concerns that should be raised? Are there risks that should be considered? Share your thoughts with us on Twitter (@developsec).


Application Security and Responsibility

Who is responsible for application security within your organization? While this is something I don’t hear asked very often, when I look around, the implied answer is the security team. This isn’t limited to application security either. Look at network security: who, in your organization, is responsible for it? From my experience, the answer is still the security group. But is that how it should be? Is there a better way?

Security has spent a lot of effort to take on and accept all of this responsibility. How often have you heard that security is the gatekeeper for any production release? Security has to test your application first. Security has to approve any vulnerabilities that get accepted. Security has to …

I won’t argue that the security group has a lot of responsibility when it comes to application security. However, it shouldn’t have all of it, or even a majority of it. If we take a step back for a moment, let’s think about how applications are created. Applications are created by application teams, which consist of app owners, business analysts, developers, testers, project managers, and business units. Yet when there is a security risk with the application, it is typically the security group under fire. The security group typically doesn’t have any ability to write or fix the application, and it shouldn’t. There is a separation, but are you sure you know where it is?

I have done a few presentations recently where I focus on getting application teams involved in security. I think it is important for organizations to think about this topic because for too long we have tried to separate the duties at the wrong spot.

The first thing I like to point out is that the application development teams are smart, really smart. They are creating the complex business functions that drive most organizations. We need to harness this knowledge rather than trying to augment it with other people. You might find this surprising, but most application security tools have GUIs that anyone on your app dev teams can use with little experience. Yet most organizations I have been in have the security group running the security tools (such as Veracode, Checkmarx, WhiteHat, Contrast, etc.). This is an extra layer that just decreases the efficiency of the process.

By getting the right resources involved with some of these tools and tasks, we not only move security closer to the source, but also free up the security team for other activities. Moving security into the development process increases efficiency. Rather than waiting on a scan by the security team, the app team can run the scans and get the results more quickly. Even better, they can build scanning into their integration process and most likely automate much of the work. This leaves the security team reserved for the more complex security issues. It also makes the security team more scalable when it does not have to just manage tools.

I know what you are thinking: but the application team doesn’t understand security. I will give you that; in many organizations this is very true. But why? Here we have identified what the problem really is. Currently, security tries to throw tools at the issue and manage those tools, but the real problem is that we are not focusing on maturing the application teams. We attempt to separate security from the development lifecycle. I did a podcast discussing current application security training for development teams.

Listen to the podcast on AppSec Training

Everyone has a responsibility for application security, but we need a bigger focus on the application teams and getting them involved. We cannot continue to just hurl statements about involvement over the fence. We say to implement security into the SDLC, but we rarely define what that means. We say to educate the developers, but typically just provide offensive security testing training, one or two days a year. We are not taking the time to identify how they work, how their processes flow, etc., to determine how to address the problem.

Take a look at your program and really understand it. What are you currently doing? Who is performing what roles? What resources do you have and are you using them effectively? What type of training are you providing and is it effective regarding your goals?


Understanding the “Why”

If I told you to adjust your seat before adjusting your mirror in your car, would you just do it? Just because I said so? Or do you understand why there is a specific order? Most of us retain concepts better when we understand them logically.

Developing applications requires a lot of moving pieces. An important piece in that process is implementing security controls to help protect the application, the company, and the users. In many organizations, security is heavily guided by an outside group, i.e., the security group or 3rd party testers.

Looking at an external test, or even a test by an internal security team, the result is often a report containing findings. These findings typically include a recommendation to guide the application team in a direction to help reduce or mitigate the finding. In my experience, the recommendations tend to be pretty generic. For example, a username harvesting flaw may come with a recommendation to return the same message for both valid and invalid usernames. In most cases, this is a valid recommendation, as it addresses the reason for the flaw.

But Why? Why does it matter?

Working with application teams, their level of understanding of security topics quickly becomes clear. The part that is often missing is the why. Sure, the team can implement a generic message (using the username harvesting flaw above), and it may resolve the finding. But does it solve the real issue? What are the chances that when you come back and test another app from this same development team, the flaw exists somewhere else? When we take the time to really explain why a finding is a concern, how it can be abused, and start discussing ways to mitigate it, the team gets better. Push aside the “sky is falling” and take the time to understand the application and its context.

As security professionals, we focus too much on fixing a vulnerability. Don’t get me wrong, the vulnerability should be fixed, but we are too narrowly focused. Taking a step back allows us to see a better approach. It is about much more than just identifying flaws; it is about getting application teams to understand why they are flaws (not just because security said so), so that security becomes a consideration in future development. This includes the entire application team, not just the developers. Let’s look at another example.

An Example

Let’s say you have a change password form that doesn’t require the current password. As a security professional, your wheels are probably already spinning, thinking about issues like CSRF. From the development side, the typical response is, “Why do I need to input my password to change my password when I just entered it to log in?” While the change will most likely get made, because security said it had to, there is still a lack of understanding on the application team. If CSRF was your first reason, what if they already have CSRF protections in place? Do you have another reason? What about if the account is hijacked somehow, or a person sits down at the user’s desk after they forgot to lock their PC? By explaining the reasoning behind the requirement, it starts to make sense and is better received. That dominoes into a chance that the next project the team develops will take this into consideration.
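
To ground the example, here is a minimal sketch, assuming a Flask application (current_user and the user object’s methods are hypothetical helpers), of a change-password handler that re-verifies the current password:

    from flask import Flask, request, render_template

    app = Flask(__name__)

    @app.route("/change-password", methods=["POST"])
    def change_password():
        user = current_user()  # hypothetical helper: the logged-in account
        # Require proof of the current password even though the user is
        # already authenticated; a hijacked session or an unlocked PC
        # alone is not enough to take over the account.
        if not user.check_password(request.form["current_password"]):
            return render_template("change.html", error="Password change failed.")
        user.set_password(request.form["new_password"])
        return render_template("change.html", message="Your password has been updated.")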

When the business analyst sits down to write the next change password user story, this requirement will be a part of it. Not because security said so, but because they understand the use case better and how to protect it.

If you are receiving test results, take the time to make sure you understand the findings and the why. It will provide a learning objective as well as reduce the risk of not correcting the real problem. Understand how the issue and its remediation affect your application and users.


ImageMagick – Take-aways

Do your applications accept file uploads? More specifically, image uploads? Do you use a site that allows you to upload images? If you haven’t been following the news lately, a few vulnerabilities were recently found in the ImageMagick image library. This library is commonly used by websites to perform image processing. The vulnerability allows remote code execution (RCE) on the web server, which is very dangerous. For more details on the vulnerability itself, check out this post on the Sucuri Blog.

There are a few things I want to focus on that are less about ImageMagick itself and more about better security solutions in general.

Application Inventory

I know, enough already. I get it, I mention application inventory all the time, but with good reason. We rely heavily on 3rd party components just like ImageMagick, and there are bound to be security flaws identified in them. Some may be less critical, while others demand the highest priority. The issue at hand is that we cannot defend against the unknown. In this case, even being aware of a vulnerability doesn’t help us if we are not aware that we use the vulnerable component. Time is important, especially when it comes to critical issues. You need to be able to quickly determine whether your company uses the vulnerable component, which apps are affected, and what types of data reside in those systems. Performing fire drills for every vulnerability announced is very inefficient. Being able to quickly identify the affected areas allows for a better understanding of any risk changes.


Application Permissions

When you are configuring your applications, think about the type of permissions the application has on the server. Does it have the ability to write to specific folders, execute specific files, read data, etc.? Does it run as an administrator with full rights? To reduce the attack surface, we must understand what the app needs to do and then make sure the permissions align with those needs. In this case, the uploaded files may be stored on the file system for later use, so write permissions may be required. It is still possible to limit those write permissions to specific folders and to limit execution permissions.
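
As a small illustration of limiting what the application can do with uploads, here is a sketch (the storage path and file mode are assumptions to adapt to your environment):

    import os
    import pathlib

    UPLOAD_DIR = pathlib.Path("/srv/app/uploads")  # hypothetical dedicated folder

    def save_upload(filename, data):
        # Strip any directory components so writes cannot escape the folder.
        target = UPLOAD_DIR / pathlib.Path(filename).name
        with open(target, "wb") as f:
            f.write(data)
        # Readable and writable by the application account only,
        # and never executable.
        os.chmod(target, 0o640)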

Application Configuration

How you design and configure your application plays a significant role in how security is handled. In regards to allowing file uploads, how do you handle storing the files? Are they on the file system or in the database? How are features configured? In the case of ImageMagick, there are configuration settings (its security policy) that can limit exposure to the vulnerabilities. Make sure that you are only accepting the file types that are needed and discarding the others. These configurations can take many forms, but they provide real security when they are truly understood.

Importance of Input Validation

The vulnerability in ImageMagick highlights the importance of sanitizing input and performing solid validation. In this case, the file name is not properly handled, which allows an attacker to run unintended commands. We have to check that the data passed to our applications is what we are expecting, and discard it if it is not.
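
Here is a sketch of that kind of check (the allowed extensions and rejected characters are assumptions; tailor them to what your application actually accepts):

    import pathlib

    ALLOWED_EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif"}  # hypothetical allowlist
    SHELL_METACHARACTERS = set("|;`$&<>\"'")

    def is_acceptable_upload(filename):
        name = pathlib.Path(filename).name
        if name != filename:
            return False  # reject anything containing path components
        if set(name) & SHELL_METACHARACTERS:
            return False  # reject characters that could smuggle commands
        return pathlib.Path(name).suffix.lower() in ALLOWED_EXTENSIONS

Checking the file’s actual content, rather than trusting the extension alone, is an additional step worth considering.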

This will not be the last time we see a vulnerable component that is used by lots of applications; it is part of the risk of using external components. We can do more on our side to make sure we know what we are using and to prepare for the event that something goes wrong. Take the time to understand your applications and think about the different ways security may be affected.

Reliance on 3rd Party Components

It was just recently announced that Apple will no longer support QuickTime for Windows. Just like any other software, when support ends, the software becomes a security risk. As a matter of fact, there are currently known vulnerabilities in QuickTime that will never get patched. The Department of Homeland Security has an alert recommending the removal of QuickTime for Windows.

For users, it may seem simple: uninstall QuickTime from your Windows system. But wait... what about software they rely on that requires QuickTime? There are times when our users are locked into a technology because of the software we create. How does our decision to use a specific technology affect our end users when it comes to security?

If you have created an application that relies on QuickTime for Windows, this creates somewhat of an issue. It can be very difficult to justify asking end users to continue using a component that is not only known to be vulnerable, but also past its end of life. This brings up a few things to consider when using third party components in our solutions.

Why use the component/technology?

The first question to ask is why we need this component or technology. Is it a small feature within our application, or does our entire business model rely on it? What benefit are we getting from the component? Are there alternative components from different sources? What process was used to pick this one? Did you look at the history of the creator, the reviews from other users, how active development is, etc.? When picking a component, more is required than just choosing the one with the prettiest marketing page.

How do we handle Updates?

The number of breaches and documented security vulnerabilities is at an all-time high. You need to be prepared for updates, most likely critical ones, to the component. How will you handle these updates? What type of process will be in place to make sure the application is compatible with them, so the user is not forced to run an outdated version? What would you do if an update removed a critical feature your application relies upon?

What if the component is discontinued?

Handling updates and patches is one thing, but what happens if the component reaches end of life? When a component is no longer supported, there is a greater possibility of security risks around it that will never be fixed. In most situations, this will probably lead to the decision to stop using it. I know this seems like something we don’t want to think about, but it is critical that all of these possibilities are understood.

We use components in almost every piece of software we create, and we rely on them. Taking the time to create a process around how these components are selected and maintained will help reduce headaches later on. We often focus on the now, forgetting what may happen tomorrow. Take the time to think about how a component is maintained going forward and how it would affect your business if it went away one day. Will you ask your users to continue using a known vulnerable component, or will you have a plan to make the necessary changes to no longer rely on it?