
DevelopSec Blog

January 4, 2019 by James Jardine

What is the difference between Brute Force and Credential Stuffing?

Many people get confused between brute force attacks and credential stuffing. To help clear this up, here is a simple description of the two. Both descriptions below are in regard to the login form only.

Brute Force
Brute force attacks on the login form consist of the attacker having a defined list (called a dictionary) of potential passwords. The attacker will then try each of these defined passwords against each username the attacker is trying to brute force. Put simply, this is a 1 (username) to many (passwords) attack.

A common mitigation to brute force attacks is the implementation of account lockout. In this case, after 3, or 5, or 10 failed attempts for a single username, the user account is locked to block any more attempts. This drastically reduces the number of passwords that may be tried in a short period of time.
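As a rough illustration, a minimal lockout check might look something like the sketch below. This is only a sketch in plain Node.js, assuming an in-memory counter and an example threshold of 5 failed attempts; a real implementation would persist the counts and handle unlocking.

// Minimal sketch of account lockout (illustrative only; counts kept in memory).
const MAX_ATTEMPTS = 5;           // example threshold (3, 5, 10, etc.)
const failedAttempts = new Map(); // username -> consecutive failed logins

function recordFailure(username) {
  const count = (failedAttempts.get(username) || 0) + 1;
  failedAttempts.set(username, count);
  return count >= MAX_ATTEMPTS;   // true means the account should now be locked
}

function recordSuccess(username) {
  failedAttempts.delete(username); // reset the counter on a successful login
}

function isLocked(username) {
  return (failedAttempts.get(username) || 0) >= MAX_ATTEMPTS;
}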

Credential Stuffing
Credential stuffing is another attack on the login form, but it differs from a brute force attack in that the list used contains username and password pairs. This list is often obtained through a data breach at another organization. The purpose is to find credentials that are re-used across multiple sites. In this case, the attack is a 1 (username) to 1 (password) attack. For each username, only one password will be attempted.

Unlike with brute force, account lockout doesn't have much effect on credential stuffing, because only a single attempt is made for each username. Multi-factor authentication is a good mitigation, as it will limit the use of valid credentials.
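To make the difference concrete, here is a rough sketch of what the two input lists might look like (the values are made up for illustration):

// Brute force: one password dictionary tried against each username (1 to many)
const passwordDictionary = ['123456', 'password', 'qwerty', 'letmein'];
const targetUsernames = ['dave', 'james'];
// every username has every password in the dictionary attempted against it

// Credential stuffing: username/password pairs leaked from another site (1 to 1)
const breachedCredentials = [
  { username: 'dave@example.com', password: 'Summer2018!' },
  { username: 'james@example.com', password: 'P@ssw0rd1' },
];
// each username has exactly one password attempted against it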

Filed Under: Questions Tagged With: application security, AppSec, brute force, credential stuffing, developsec, pen testing, penetration test, pentest, secure development, secure testing, secure training, vulnerability

October 5, 2018 by James Jardine

Apple Mail: Highlighting External Email Addresses

A simple mistake to make when sending an email with sensitive data is sending it to the wrong email address. Imagine you are sending some information to Dave down in accounting. When you fill out the email, you start typing Dave, the auto-complete pops up, and you select the first entry (out of habit). You think you have selected the right Dave, but what if you didn't? How would you know that you were about to send potentially sensitive information outside of the organization?

Apple Mail and iOS Mail have a built-in feature to highlight any email address outside of a specified domain. The steps to enable this differ slightly between the desktop and an iPad/iPhone. Let's look at each one.

Desktop

By default, email addresses all appear the same when composing a new message. The image below shows me composing an email with two addresses:

  • To: james@jardinesoftware.com
  • CC: james@developsec.com

[Screenshot: new message with both addresses displayed identically]

To change this, in the Mail program go to Preferences… and select Composing. Under the Addressing: section there is a checkbox for "Mark addresses not ending with". Check the box and enter your organization's domain name in the text box. In the image below, I have entered my domain of @jardinesoftware.com.

[Screenshot: Mail Preferences > Composing with "Mark addresses not ending with" checked]

Now, go back and compose a new message, entering one email address that is inside your organization and one that is outside of it. The address outside of your domain should be highlighted in red. The image below shows my configuration with the same two email addresses. Notice that the james@developsec.com address is now listed in red, indicating it is external to my organization.

[Screenshot: new message with james@developsec.com now highlighted in red]

iPhone/iPad

The Mail application on the iPhone/iPad also has this capability. To enable it, go into Settings -> Mail. Scroll down to the Composing section and tap the "Mark Addresses" option (shown below).

[Screenshot: Settings -> Mail -> Composing showing the "Mark Addresses" option]

On the next screen, enter the addresses you don't want highlighted. In my case, I entered my domain of @jardinesoftware.com (shown below).

[Screenshot: Mark Addresses screen with @jardinesoftware.com entered]

Once the change is saved, go to Mail and compose a new message. In my example, I used the same email addresses as above. Notice that the james@developsec.com address is again highlighted in red:

[Screenshot: new message on iOS with james@developsec.com highlighted in red]

Although this is not a foolproof way to stop this type of mistake, it does add a visual cue that just might catch your attention. Simple changes like this can help reduce the risk of accidentally leaking sensitive information out of your organization.

Filed Under: General Tagged With: breach, cyber security awareness month, data breach, email, mail security, security, security awareness, security training

June 26, 2018 by James Jardine

Checking npm packages using npm-audit

Our applications rely more and more on external packages to enable quick deployment and ease of development. While these packages help reduce the code we have to write ourselves, they may still present risk to our application.

If you are building Node.js applications, you are probably using npm to manage your packages. For those that don't know, npm is the Node package manager. It is a direct source for quickly including functionality within your application. For example, say you want to hash your user passwords using bcrypt. To do that, you would grab a bcrypt package from npm. The following is just one of the bcrypt packages available:

https://www.npmjs.com/package/bcrypt
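As a quick illustration of how little code that pulls in, hashing a password with that package might look something like the sketch below. This is a minimal sketch based on the bcrypt package's documented hash/compare API; the cost factor of 10 is just an example value.

// Install the dependency first: npm install bcrypt
const bcrypt = require('bcrypt');

const saltRounds = 10; // example cost factor; higher is slower but stronger

async function hashPassword(plainTextPassword) {
  // bcrypt generates a salt and embeds it in the resulting hash
  return bcrypt.hash(plainTextPassword, saltRounds);
}

async function verifyPassword(plainTextPassword, storedHash) {
  // compare re-hashes the input using the salt stored in the hash
  return bcrypt.compare(plainTextPassword, storedHash);
}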

Each package we use may also rely on other packages. This creates a fairly complex dependency graph of code used within your application that you had no part in writing.
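For example, a hypothetical package.json might list only a couple of direct dependencies, while npm resolves a much larger tree underneath them (the names and version ranges here are placeholders for illustration):

{
  "name": "sample-app",
  "version": "1.0.0",
  "dependencies": {
    "bcrypt": "^3.0.0",
    "express": "^4.16.0"
  }
}

Each of those direct dependencies pulls in its own dependencies, which is where the graph grows quickly.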

Tracking vulnerable components

It can be fairly difficult to identify issues related to these packages, never mind their sub-packages. We can't all run our own static analysis on each package we use, so identifying new vulnerabilities is not very easy. However, there are many tools that work to help identify known vulnerabilities in these packages.

When a vulnerability is publicly disclosed, it receives an identifier (a CVE). Vulnerabilities are tracked at https://cve.mitre.org/, and you can search these to identify which packages have known vulnerabilities. Manually searching all of your components, however, doesn't seem like the best approach.

Fortunately, npm has a command for doing just this: npm audit. It was included starting with npm 6.0. If you are using an earlier version of npm, you will not find it.

To use this module, you just need to be in your application directory (the same place you would do npm start) and just run:

npm audit

On the surface, it is that simple. Below is the output from running this on a small project of mine:

[Screenshot: npm audit report output]

As you can see, it produces a report of any packages that may have known vulnerabilities. It also includes a few details about what that issue is.

To make this even better, some of the vulnerabilities found may actually be fixed automatically. If that is available, you can just run:

npm audit fix

The full details of the different parameters can be found on the npm-audit page at https://docs.npmjs.com/cli/audit.
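If you want to automate this in a build pipeline, one option is to run the audit with JSON output and inspect the summary programmatically. The sketch below is a minimal, hypothetical example; the metadata.vulnerabilities field names are based on the JSON report produced by npm 6, so adjust them for your npm version.

// Hypothetical build check: fail when npm audit reports high/critical issues.
const { execSync } = require('child_process');

let output;
try {
  output = execSync('npm audit --json', { encoding: 'utf8' });
} catch (err) {
  // npm audit exits non-zero when vulnerabilities are found; the JSON report
  // is still written to stdout, so grab it from the error object.
  output = err.stdout;
}

const report = JSON.parse(output);
const counts = (report.metadata && report.metadata.vulnerabilities) || {};
const serious = (counts.high || 0) + (counts.critical || 0);

console.log('Vulnerability counts:', counts);
if (serious > 0) {
  console.error('Failing the build: ' + serious + ' high/critical findings.');
  process.exit(1);
}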

If you are doing Node development or looking to automate identifying these types of issues, npm-audit may be worth a look. The more we can automate, the better. Having something simple like this to quickly identify issues is invaluable. Remember, just because a component may be flagged as having a vulnerability, it doesn't mean you are using that code or that your app is guaranteed to be vulnerable. Take the effort to determine the risk level for your application and organization. Of course, we should strive to be on the latest versions to avoid vulnerabilities, but we know reality often diverges from what we wish for.

Have you been using npm-audit? Let me know. I am interested in your stories of success or failure to learn how others implement these things.

Filed Under: General Tagged With: 3rd party component, applicaiton security, components, javascript, nodejs, npm, secure, secure development, security, security components, security testing

June 26, 2018 by James Jardine

Thinking about starting a bug bounty? Do this first.

Application security has become an important topic within our organizations. We have come to understand that the data that we deem sensitive and critical to our business is made available through these applications. With breaches happening all the time, it is critical to take reasonable steps to help protect that data by ensuring that our applications are implementing strong controls.

Over the years, testing has been the main avenue for “implementing” security into applications. We have seen a shift to the left more recently, leading to doing more throughout the entire development cycle, but we still have a ways to go. I am still a firm believer in embedding security into each of the phases as our main means of securing applications. Testing, however, is still a major component of any security program.

Typically, organizations rely on penetration testing to find the flaws in their applications. This is the de facto standard for understanding your risk. Unfortunately, penetration testing for applications has been watered down from what we think about with network testing. Many of the assessments we call penetration tests these days are just automated scans transposed into a custom report. This type of testing overlooks one of the components a penetration test provides: the manual testing. Of course, there is much more to a penetration test, but that is not the focus of this post.

Internally, organizations may implement automated tools to help identify security flaws within their applications. These tools are good at finding certain types of flaws, and usually quite quickly. Like many current penetration tests, they lack the manual assessment side.

Not only does manual testing have the ability to find different types of flaws, such as authentication, authorization, CSRF, and business logic issues, it also has the ability to identify flaws that an automated tool overlooks. For example, a tool may not find every instance of cross-site scripting, depending on how that tool analyzes the system. Granted, manual testing is not guaranteed to find every instance either. With each type of testing, there will always be some issues that are not identified. The goal is to start reducing those numbers over time.

Handling the results of all these reports from the different assessments is critical to how well you start creating more resilient applications. In many organizations, the vulnerabilities identified are handled as individual items and patched. In my opinion, the return on investment comes when you can analyze these results to review your development process and see what improvements can be made to reduce the chance these types of flaws will be included in the future. Having an expert available to help review the issues and provide insight into how to use that information to improve your process is valuable.

Having a solid application development process in place is important before thinking about implementing a bug bounty program within your organization. If you are not already doing things consistently, there is a better chance the bounty program will fail.

Bug bounty programs have become more prevalent over the last few years, especially among newer technical startups. We have seen much slower adoption with most of the major corporations. There are many reasons for this, which are outside the scope of this post. There have been questions on whether bug bounties can replace penetration testing. The answer is no, because the goal of each of these is different. There are plenty of articles discussing the subject. A bug bounty program has also been seen by many as evidence that they are doing application security. Unfortunately, we can't test ourselves secure. As I stated previously, testing is just one part of our solution for application security.

A key difference between our traditional testing and a bug bounty program is that bug bounties pay by the bug. Our traditional testing is provided at flat fees. For example, an automated tool is a set price for a monthly or yearly subscription. A penetration test is a set price per test. A bug bounty is priced per bug, which makes the cost very unpredictable. In addition, if you are not already doing many of the things previously discussed, there could be a lot of bugs to be found, leading to potentially high payouts.

As I have stated before, penetration testing has a different purpose and it can be very expensive. At Jardine Software we offer more budget-friendly manual application security testing at a fixed cost. The goal is not necessarily to find every instance of every vulnerability or to exploit vulnerabilities in the way a penetration test would. The focus is on augmenting the automated testing you may already have in place and providing that missing manual piece. The testing is performed manually by using the application in combination with Burp Suite to look for weaknesses, and the findings are provided in a way that helps you prioritize and then remediate them according to your organization's needs.

The manual application security testing is typically performed over one to two weeks and includes a broader scope than a typical bug bounty program. The reason for this is that we want to identify, based on our years of experience, the risks we see and make you aware of them. This assessment can then help identify where you may have issues within your application before opening it up to a crowdsourced bounty program where each bug is priced individually.

If you are thinking about implementing a bug bounty program, reach out and let's chat first. Even if you are not considering a bug bounty program, do you have any manual application security testing in place? We have the expertise to provide the necessary testing or to train your internal teams to start applying manual testing techniques as part of your life cycle.

Filed Under: General Tagged With: app sec, application program, application security, application security program, AppSec, consulting, developer, developer awareness, development, hacking, hiring, pen test, pen testing, penetration testing, qa, quality, quality assurance, ransomware, secure code, secure program, security testing, security training, testing, vulnerability, vulnerability assessment, vulnerability disclosure

June 26, 2018 by James Jardine

Overview of Web Security Policies

A vulnerability was just identified in your website. How would you know?

The process for disclosing a vulnerability to an organization is often very difficult to identify. Whether you are offering any type of bounty for security bugs or not, it is important that there is a clear path for someone to notify you of a potential concern.

Unfortunately, the process is different for every application, and it can be very difficult to find. For someone who is just trying to help out, it can be very frustrating as well. Some websites may have a separate security page with contact information. Other sites may just have a security email address on the contact us page. Many sites don't have any clear indication of how to report such a finding. Maybe we could just use the security@ email address for the organization, but do they even have it configured?

In an effort to standardize how to find this information, there is a draft specification for web security policies (security.txt). You can read the draft at https://tools.ietf.org/html/draft-foudil-securitytxt-03. The goal is to specify a text file in a known path that provides contact information for users to submit potential security concerns.

How it works

The first step is to create a security.txt file that describes your web security policy. According to the specification, this file should live in the .well-known directory, which would make it available at /.well-known/security.txt. In some circumstances, it may also be found at just /security.txt.

The purpose of pinning down the name of the file and where it should be located is to limit the searching process. If someone finds an issue, they know where to go to find the right contact information or process.

The next step is to put the relevant information into the security.txt file. The draft documentation covers this in depth, but I want to give a quick example of what this may look like:

Security.txt

— Start of File —

# This is a sample security.txt file
contact: mailto:james@developsec.com
contact: tel:+1-904-638-5431

# Encryption - This links to my public PGP Key
Encryption: https://www.developsec.com/jamesjardine-public.txt

# Policy - Links to a policy page outlining what you are looking for
Policy: https://www.developsec.com/security-policy

# Acknowledgments - If you have a page that acknowledges users that have submitted a valid bug
Acknowledgments: https://www.developsec.com/acknowledgments

# Hiring - if you offer security related jobs, put the link to that page here
Hiring: https://www.developsec.com/jobs

# Signature - To help secure your file, create a signature file and reference it here.
Signature: https://www.developsec.com/.well-known/security.txt.sig

— End of File —

I included some comments in the sample above to show what each item is for. A key point is that very little policy information is actually included in the file; rather, it is linked as a reference. For example, the PGP key is not embedded in the file. Instead, a link to the key is referenced.

The goal of the file is to be in a well defined location and provide references to your different security policies and procedures.
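As a rough sketch of how a researcher or an automated tool could take advantage of that known location, the example below checks a site for the file and prints any contact lines it finds. This is plain Node.js using the built-in https module, and example.com is just a placeholder for the domain you want to check.

// Hypothetical check: retrieve a site's security.txt from the well-known path.
const https = require('https');

https.get('https://example.com/.well-known/security.txt', (res) => {
  if (res.statusCode !== 200) {
    console.log('No security.txt found (status ' + res.statusCode + ')');
    res.resume(); // discard the response body
    return;
  }
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', () => {
    // Print just the contact lines to show where to report an issue
    const contacts = body.split('\n').filter((line) => /^contact:/i.test(line));
    console.log(contacts.join('\n') || body);
  });
}).on('error', (err) => console.error('Request failed:', err.message));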

What do you think?

So I am curious: what do you think about this technique? While it is still in draft status, it is an interesting concept. It provides a known path organizations can follow to share this type of information.

I don't believe it requires you to create a bug bounty program, or even to promote security testing of your site without permission. However, it does at least provide a means to share your requests and to give information to someone who does find a flaw and wants to share it with you.

Will we see this move forward, or do you think it will not catch on? If it is a good idea, what is the best way to raise the awareness of it?

Filed Under: General Tagged With: application security, developer, researcher, secure development, secure software, secure testing, security research, security training, security.txt, white hat

June 22, 2018 by James Jardine

Installing BeEF on Ubuntu 18.04

While working on a VM for a class, I had the opportunity to install BeEF for the students. This was the first VM I have built using Ubuntu 18.04, so I expected there to be a few hiccups along the way. The good news is that the process was pretty straightforward and simple. Here are the steps to getting this up and running on Ubuntu 18.04.

I started off by creating a new virtual machine using Ubuntu 18.04. I won’t go through the steps of creating a new virtual machine image, but the takeaway here is that I am starting with a fresh Ubuntu system.

The BeEF team has put together some simple instructions for installing the application (https://github.com/beefproject/beef/wiki/Installation). This walk-through follows those instructions pretty closely, with one exception to clear up an error we will see along the way (or at least an error I saw).

The first thing I am going to do is install Ruby on my Ubuntu 18.04 image. To do this we want to run the following command:

sudo apt install ruby ruby-dev

The next step is to get the BeEF source files. We will get these from git. If you haven’t installed git, make sure to run the following command first:

sudo apt install git

Once git is available, we can clone the git project with the following command:

git clone https://github.com/beefproject/beef

This will download the source files for BeEF. Next, move into the beef directory:

cd beef

This is where I started to run into an issue. The original BeEF instructions say to just run:

./install

However, when I did this, I received a permission error. To resolve that I ran:

sudo ./install

There is probably a way to fix this permission error without running as sudo, but I didn't investigate that further.

Once the installation was successfully completed, I ran:

./beef

I was quickly greeted with the following error:

demo@ubuntu:~/beef$ ./beef
Traceback (most recent call last):
    4: from ./beef:44:in `<main>'
    3: from /usr/lib/ruby/2.5.0/rubygems/core_ext/kernel_require.rb:59:in `require'
    2: from /usr/lib/ruby/2.5.0/rubygems/core_ext/kernel_require.rb:59:in `require'
    1: from /home/demo/beef/core/loader.rb:17:in `'
/home/demo/beef/core/loader.rb:17:in `require': cannot load such file -- xmlrpc/client (LoadError)

After digging around, I found that Ubuntu 18.04 installs Ruby 2.5 by default, which apparently doesn't include xmlrpc/client out of the box. To fix this, we just need to tell BeEF that it needs this gem. I modified the Gemfile following these steps:

rm Gemfile.lock – Do this first to remove the lock file. Press Y to confirm the removal if prompted.

sudo nano Gemfile

In the file, add the following line:

gem 'xmlrpc'

Save the file and re-run the installation:

sudo ./install

At this point, the installation should be successful. Try running the following command:

./beef

You should see something like the following:

demo@ubuntu:~/beef$ ./beef
[ 6:41:32][*] Browser Exploitation Framework (BeEF) 0.4.7.0-alpha
[ 6:41:32]    |   Twit: @beefproject
[ 6:41:32]    |   Site: https://beefproject.com
[ 6:41:32]    |   Blog: http://blog.beefproject.com
[ 6:41:32]    |_  Wiki: https://github.com/beefproject/beef/wiki
[ 6:41:32][*] Project Creator: Wade Alcorn (@WadeAlcorn)
[ 6:41:33][*] BeEF is loading. Wait a few seconds...
[ 6:41:38][*] 8 extensions enabled.
[ 6:41:38][*] 301 modules enabled.
[ 6:41:38][*] 2 network interfaces were detected.
[ 6:41:38][*] running on network interface: 127.0.0.1
[ 6:41:38]    |   Hook URL: http://127.0.0.1:3000/hook.js
[ 6:41:38]    |_  UI URL:   http://127.0.0.1:3000/ui/panel
[ 6:41:38][*] running on network interface: 192.168.116.139
[ 6:41:38]    |   Hook URL: http://192.168.116.139:3000/hook.js
[ 6:41:38]    |_  UI URL:   http://192.168.116.139:3000/ui/panel
[ 6:41:38][!] Warning: Default username and weak password in use!
[ 6:41:38]    |_  New password for this instance: ec04906c30d928fb857
[ 6:41:38][*] RESTful API key: 6bd0b11e772df40
[ 6:41:38][*] HTTP Proxy: http://127.0.0.1:6789
[ 6:41:38][*] BeEF server started (press control+c to stop)

Notice that there is a "New password" shown here. This is because, by default, BeEF sets the username/password to beef/beef. As this is a default, hard-coded password, it is insecure. To address this, BeEF detects the default and creates a new temporary password to protect the instance. It is recommended to set your own username and password for your instance.

Updating the Password

To change the password, stop BeEF by pressing Ctrl+C. Now we will edit the config.yaml file:

sudo nano config.yaml

You should see something like this:


beef:
    version: '0.4.7.0-alpha'
    # More verbose messages (server-side)
    debug: false
    # More verbose messages (client-side)
    client_debug: false
    # Used for generating secure tokens
    crypto_default_value_length: 80

    # Credentials to authenticate in BeEF.
    # Used by both the RESTful API and the Admin interface
    credentials:
        user:   "beef"
        passwd: "beef"

    # Interface / IP restrictions

Modify the user and passwd fields to your own values, then save the file and exit using Ctrl+X.

When you restart BeEF and go to the UI panel, you should now be able to log in with your new credentials.

Notes
There should be a way to install the application without using sudo ./install. This should be investigated so you don't install using root permissions. In my case, this is a non-production image used only for student training.

Make sure you change the default username and password to help lock down your instance.

This tutorial is for educational purposes only. Hacking without permission is illegal and should not be done.

Filed Under: General Tagged With: application security, BeEF, configuration, installation, pen testing, penetration testing, pentest, poc, security, security testing
