
Equifax Take-aways

By now, you must have heard about the Equifax breach that may have affected up to 143 million people’s records. At this point, I don’t think they can confirm exactly how many records were actually compromised, so they are going with the larger number just to be safe. While many are quick to jump to conclusions and attempt to Monday morning quarterback what they did or didn’t do to get breached, I like to focus on what we can learn for our own organizations. There are a few topics I want to discuss that hopefully will be useful within your organization.

Patching

Well, it appears to be pretty clear that the avenue of attack was a Struts patch that was missing on the server. The patch was apparently released a few months prior to the attack, or at least prior to the acknowledgement of the attack. On the surface, patching appears to be a pretty easy task. A patch is released, you apply it.

Simple, right?

Patching is actually much more complex than that. It may be that simple when you have a single system to maintain with very few software packages. Unfortunately, that is not the reality for many places. Many organizations are dealing with hundreds or even thousands of systems that they attempt to keep fully patched. This is a pretty big task, even if there were no other variables. Automate it, they say. Sure, automation can be done, and needs to be done. How can anyone patch that many systems in a reasonable time frame manually?

There are other factors to consider. First, let’s consider that there are many different types of patches. You have patches for the operating system, patches for applications, patches for frameworks, even patches for client-side libraries. Does your automation cover all of these sources? Some software has automatic update capabilities and will update on its own. Other software requires that you explicitly go out and download the patch and apply it.
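To make the library and framework side of this more concrete, here is a minimal sketch, in Python, of one way automation might inventory installed packages and flag anything below a known patched version. The package names and minimum versions here are made up for illustration; a real process would pull them from a vulnerability or advisory feed rather than a hard-coded list.

# Minimal sketch: inventory installed Python packages and flag any that fall
# below a known patched version. The MIN_PATCHED versions are made up for
# illustration; a real process would pull them from an advisory feed.
from importlib.metadata import distributions

MIN_PATCHED = {
    "requests": "2.31.0",   # hypothetical "known good" versions
    "django": "4.2.7",
}

def version_tuple(version):
    """Convert a version string like '2.31.0' into a comparable tuple."""
    return tuple(int(part) for part in version.split(".") if part.isdigit())

for dist in distributions():
    name = dist.metadata["Name"].lower()
    if name in MIN_PATCHED:
        installed = dist.version
        if version_tuple(installed) < version_tuple(MIN_PATCHED[name]):
            print(f"{name} {installed} is below patched version {MIN_PATCHED[name]}")

The point isn’t the script itself, it is that your automation needs some source of truth for what “patched” means for every component you pull in, not just the operating system.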

Second, you have custom written applications with millions of lines of code pulling in multiple frameworks and packages to make development easier. It would be foolish to apply a patch without testing it first. This becomes more of a challenge with application patches because the entire application needs to be retested. This is more than a test to make sure the computer still boots; it needs to verify that all of the functionality, especially the functionality around the patched component, still works properly. The testing alone can be time consuming. Add to that the possibility that the patch makes other changes within the code that break something. How badly does it break? How much code needs to be rewritten for your custom code to work correctly again? Do other components depend on that specific version? Does this end up affecting other components?

Finally, who is tasked with patching the systems? Is this defined within the business? Are the same people that apply OS patches to the server responsible for the application component patches? How do they track those types of patches? Do they need the go-ahead from the application team that the patch is OK to implement?

As you can see, there are a lot of factors that go into applying what may appear to be a simple patch. What it highlights to me is the importance of understanding what components our applications use, how they interact with each other, and how patches are applied when they are made available. In the worst-case scenario, we don’t even know a patch was released.

Patching, however, is just one control for helping protect our systems, similar to how input validation is one control for helping with injection attacks. We shouldn’t be relying on it alone. The Equifax breach shows this well: we must have other controls in place in the event one control breaks down.

Encryption

I hear a lot of people saying the data should have been encrypted. That is an easy statement to make, but without more details on how the data was actually accessed, it is not very helpful. Hopefully, your organization has a data classification policy, and hopefully that policy describes how each class of data should be protected. This is the policy that determines what data must be encrypted, and it should exist. If you have not seen this policy, ask for it.

Now that we know some data needs to be encrypted, what is the right method to use? Should we use disk encryption or column-level encryption? Should we use tokenization? They each have their pros and cons. Maybe the answer is to implement all of them just to be safe, but how might that affect your ability to deliver a high-performing, functional application?

You may decide to implement disk encryption for your database. That is a good step in the event that someone is able to steal the actual database files. It doesn’t help much if the application has a vulnerability that allows access to the data, which the attacker can simply enumerate through. Column-level encryption can suffer a similar fate: oftentimes application flaws can bypass the encryption if it is incorrectly implemented. I guess, at the very least, you get to say the data was encrypted.
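As an illustration of what column-level protection can look like at the application layer, here is a minimal Python sketch using the cryptography library’s Fernet recipe. The key handling is deliberately oversimplified; in practice the key would come from a key management service, and none of this stops an attacker who can trick the application into decrypting data for them.

# Minimal sketch of column-level encryption using the cryptography library's
# Fernet recipe (authenticated symmetric encryption). Key handling is
# oversimplified on purpose; a real system would use a KMS/HSM.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in reality: fetched from a key management service
cipher = Fernet(key)

# Encrypt a single sensitive column value before it is written to the database.
ssn_plaintext = b"123-45-6789"
ssn_ciphertext = cipher.encrypt(ssn_plaintext)

# The application decrypts only when it legitimately needs the value.
assert cipher.decrypt(ssn_ciphertext) == ssn_plaintext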

The point with encryption is to make sure you know what you are doing and how you are implementing it. Which attack vectors will it protect against, and to which ones will you still be vulnerable? If you are going to take the time to implement it, it is important to make the best use of it.

Auditing and Logging

Auditing and logging are important parts of the security of an application. They help us see and act upon events that may be malicious. How do you get visibility into 3rd party components, like Struts, to see what they are doing? Are you relying on system event logs in case the component throws an exception? Within our own applications we can use logging to identify queries run, data accessed, authorization failures, and so on. When a system gets compromised, that application logging alone may not be enough. It may be a combination of system and application events that helps identify an attack as it is happening or after the fact. This is a great reminder that logging mechanisms cross boundaries, and this needs to be reviewed. Take a moment to look at how your applications and your web server are configured to identify potentially malicious activity. Consider different attack scenarios and see how those would get logged and if/when someone might see them.
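As a simple illustration of the application side of this, here is a minimal Python sketch that records authorization failures with enough context to correlate them with web server and system logs later. The field names, file location, and access-check function are placeholders, not a prescribed format.

# Minimal sketch of application-level security logging: record authorization
# failures with enough context (user, resource, source IP) to correlate them
# with web server and system logs later. Field names are illustrative.
import logging

logging.basicConfig(
    filename="security.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
)
security_log = logging.getLogger("app.security")

def check_access(user, resource, allowed, source_ip):
    """Log the outcome of an authorization check before returning it."""
    if not allowed:
        security_log.warning(
            "authorization failure user=%s resource=%s ip=%s",
            user, resource, source_ip,
        )
    return allowed

check_access("jdoe", "/admin/reports", allowed=False, source_ip="203.0.113.5")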

Risk Management

Businesses run on the concept of taking risks. Sometimes this works in favor of the organization, sometimes not. In order to make better decisions, they must understand the risks they face. In a situation like this, we know there may be a patch available for a platform. The patch is critical since it allows for remote code execution. But what was known about the risk? What applications were affected on that server? What type of data did those applications maintain? Where does that application fit into our business model? Oftentimes, we don’t look at the real details of a vulnerability or risk; rather, we focus on the numbers. A patch for a system with no records and no sensitive access is very different from one that relates to all of your sensitive customer data.

Don’t mistake this as an alternative to patch management. It is, however, a reality that in the midst of doing business, decisions will be made and not all of them will be popular. When working in your organization, think about the information you may be providing regarding the decision-making process. Is it sufficient? Does it tell the whole story?

Wrap Up

Companies are always at risk of being breached. As we see new breaches appear in the news we need to take a little time to skip the hype and personal opinions, and take a look at what it means to our programs. Look for the facts of what happened, how decisions may have been made, and the effect those had on the organization. Then apply that to your organization. Maybe you learn a new perspective on how a vulnerability can be used. Maybe you see a control that was bypassed that you also use and you want to review how your processes work. In any case, there are lessons we can learn from any situation. Take those and see how they can be used to help your processes and procedures to provide security in your organization.

The 1 thing you need to know about the Daily Motion hack

It was just released that Daily Motion suffered an attack resulting in a large number of usernames and email addresses being exposed. Rather than focusing on the number of records involved (the wow factor), I want to highlight what most places are just glancing over: password storage.

According to the report, only a small portion of the accounts had a password associated with them. That portion is still in the millions, so you might be thinking this is bad. It is actually the highlight of the story. Why? Daily Motion used bcrypt to hash their users’ passwords before storing them.

Bcrypt uses both a salt value and a work factor when hashing the data. The salt has been a long-time recommendation when hashing passwords, as it helps reduce the effectiveness of rainbow table attacks. The work factor, which has been recommended much more in recent years, makes brute forcing passwords computationally intensive. This means it requires more time per password, slowing down large cracking attacks.
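To show how little code this takes, here is a minimal Python sketch using the bcrypt library. The rounds value is an illustrative work factor; the salt is generated per password and stored inside the resulting hash, so verification needs nothing extra.

# Minimal sketch of password hashing with the bcrypt library. gensalt()
# produces a unique salt per password, and rounds is the work factor that
# slows down offline cracking.
import bcrypt

password = b"correct horse battery staple"

hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))

# Verification re-derives the hash using the salt embedded in `hashed`.
print(bcrypt.checkpw(password, hashed))          # True
print(bcrypt.checkpw(b"wrong guess", hashed))    # False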

Bcrypt is not the only option, either. PBKDF2 and scrypt are other available options that work in a similar way.

Using a strong algorithm makes it much more difficult to crack the passwords in the event that they are somehow stolen. Using any of these algorithms doesn’t rule out the possibility of the passwords being cracked; they just make it much more difficult or time-intensive. There are always circumstances that can change this. However, using one of these algorithms can go a long way toward helping protect that data.

How are you storing passwords?

Take a moment to look at how you are storing passwords and consider how it will stand up in the event of breached account details. Do you use a unique salt for each password? Do you implement a work factor to slow cracking attempts down?

How would you handle this type of breach?

If accounts were to be breached like this, how would you handle it? Do you have a process in place? Would you force password resets? How would you notify users? Consider these types of questions to verify you have a plan in place.

James Jardine is the CEO and Principal Consultant at Jardine Software Inc. He has over 15 years of combined development and security experience. If you are interested in learning more about Jardine Software, you can reach him at james@jardinesoftware.com or @jardinesoftware on twitter.

HIV clinic Data Breach: Thoughts and Takeaways

One of the most common ways for sensitive information to be released outside of an authorized environment is through simple, common mistakes made by employees. These types of incidents usually have no malicious intent and are generally innocent in nature. An example of this was recently reported regarding a newsletter that was sent out to HIV patients (and others) where the sender made a simple mistake. Rather than using the BCC field for each recipient’s address, they used the CC field. For those that may not realize, addresses listed in the BCC (blind carbon copy) field are hidden from the other recipients, whereas addresses in the CC field are shown to everyone on the email.

Think about any mass emails you may be a part of and which ones use the CC field instead of the BCC field. I am on a few that share my information with the rest of the list. In many cases, this may not be much of a concern, but in a health-related situation like this one, it becomes more severe. It turns into a privacy and compliance issue, as it deals with HIPAA and personal health information.

Is the solution as straightforward and simple as creating a procedural checklist to ensure that BCC is used instead of CC? This may work, but it still leaves the opportunity for someone on a tight deadline to skip the checklist and make the same mistake again. We are all aware that after an incident we will be more careful, but as time goes on that awareness slips to the wayside.

A company could engage a 3rd party mailer, like MailChimp, to do their newsletter mailings. This route raises different concerns because you are placing your critical or private data, the patients tied to the health issue, in the hands of a 3rd party. If that vendor suffers a breach, you incur some risk there as well. Different vendors have different policies and security practices, so if you are considering that option, make sure you understand what is and is not offered.

There may be add-ons for your mail program that can send newsletters individually, rather than as one bulk email. One such solution for Microsoft Outlook is Send Individually, created by Sperry Software. (Full disclosure: I used to work with Sperry Software, but I am not compensated for mentioning their product. I am not a reseller, nor do I have any affiliation at this time.) There may be other add-ins from other vendors that can do this as well.
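For those who want to see the general pattern rather than a specific add-in, here is a minimal Python sketch that sends a newsletter as individual messages so no recipient ever sees another recipient’s address. The SMTP host, sender, and recipient list are placeholders, and a real mailing would need authentication and error handling.

# Minimal sketch: send one message per recipient instead of a single bulk
# email, so no recipient's address is exposed to the others. Host, sender,
# and recipient list are placeholders.
import smtplib
from email.message import EmailMessage

recipients = ["patient1@example.com", "patient2@example.com"]  # placeholder list

with smtplib.SMTP("mail.example.com") as server:
    for address in recipients:
        msg = EmailMessage()
        msg["From"] = "newsletter@example.com"
        msg["To"] = address              # one visible recipient per message
        msg["Subject"] = "Monthly Newsletter"
        msg.set_content("Newsletter body goes here.")
        server.send_message(msg)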

Whichever direction you go, make sure that you are reviewing your processes and the risks they expose. This type of human error is easy to make, but those who make it are quick to be crucified. Don’t cut corners due to tight timelines, and have another person review before sending anything externally. Sometimes that second pair of eyes can catch the simplest of mistakes that are so easily overlooked by the original writer. It is important that we take time to understand these situations and learn from them. Attention to detail can save a lot of hassle in the future.

Best Practices for Cyber Incident: DoJ Released Guide

Breaches and other security incidents are happening all of the time, and they can happen to anyone. Do you know what to do if an incident occurs in your backyard? The Department of Justice recently released the Best Practices for Victim Response and Reporting of Cyber Incidents to help you understand the process. Looking through the 15-page document, there are quite a few great points. Here are just a few examples of what is included; I encourage you to check out the entire document, as this summary won’t do it justice. It covers 4 topics:

  • Before the Incident
  • Responding to the Intrusion
  • What Not to do Following an Incident
  • After an Incident

The document is broken down into different topics, starting with before the intrusion actually occurs. This step is often overlooked because we never think it will happen to us. Talk to anyone that performs incident response or forensics: while those tasks are performed after an incident, what was done before the incident can be a game changer. Don’t forget to baseline your systems so you know what normal looks like.

It is good practice to identify what is important to your business, that is, what you need to protect. This is different for every company. The next step is to create an action and response plan in the event an incident occurs. Well thought out plans make dealing with an incident easier. Make sure you are including non-technical resources in this planning. This may include the legal teams, human resources, public relations, and a wide array of other personnel within the company. When a breach occurs, there are a lot of moving parts to deal with.

Forming a relationship with law enforcement is also a good idea. It makes it easier to contact them in the event of an incident and you may feel more comfortable with the situation. The relationship may also lead to information ahead of time that could be useful to thwart an impending attack.

Once an incident occurs, it is time to respond to it. This is the 2nd topic covered by the report. It starts with making an initial assessment of the situation: who is logged on, what systems are affected, etc. Once you have identified the affected systems, it is time to implement measures to minimize damage. This might include removing systems from the network, shutting them down, or segregating them. Once they are protected, it is time to collect information about the incident, which often requires imaging the affected systems. Note: if you are not sure how to do this, it is a good idea to contact a professional. You do not want to risk damaging the evidence.

Once you have identified the affected systems and data, it is time to put the notification portion of your action plan into effect. Don’t forget that it is about more than just notifying customers. You want to understand which customers need to be notified, but also which vendors, partners, and internal employees. Depending on the situation, law enforcement may also need to be notified.

I really like that the document contains information about what NOT to do following an incident. Professionals that don’t focus on IR and forensics tend not to think about what they shouldn’t be doing that could cause problems with the investigation or for themselves. The affected systems should not be used for any communication unless absolutely necessary. These systems are most likely compromised, and you can’t expect any information on them to be safe at this point.

While hacking back seems to be a debated topic these days, the recommendation is to avoid it. While the laws around the CFAA and other computer crimes are broad, you don’t want to go from victim to defendant. Let the authorities deal with the issue.

Finally, once the incident is cleared and complete, stay vigilant. Don’t assume that once you get attacked it won’t happen again. Learn from what happened to help reduce the chances it will happen again.

My summary above just scratches the surface of the information provided in the document linked above. It is nice to see a list of best practices that is not overly technical and that many should be able to understand. Even if you are not part of incident response or dealing directly with cyber incidents, take a moment to read the information, as it might be helpful one day.