FeroxBuster: Recursive Content Discovery Tool

About ferox

Ferox is short for Ferric Oxide. Ferric Oxide, simply put, is rust. The name rustbuster was taken, so I decided on a variation.

feroxbuster is a tool designed to perform Forced Browsing.

Forced browsing is an attack where the aim is to enumerate and access resources that are not referenced by the web application, but are still accessible by an attacker.

feroxbuster uses brute force combined with a wordlist to search for unlinked content in target directories. These resources may store sensitive information about web applications and operating systems, such as source code, credentials, internal network addressing, etc…

This attack is also known as Predictable Resource Location, File Enumeration, Directory Enumeration, and Resource Enumeration.

"
"
Demo FeroxBuster
Demo FeroxBuster

Installation

Download a Release

Releases for multiple architectures can be found in the Releases section. The latest release for each of the following systems can be downloaded and executed as shown below.

Linux (32 and 64-bit) & MacOS

curl -sL https://raw.githubusercontent.com/epi052/feroxbuster/master/install-nix.sh | bash

Windows x86

Invoke-WebRequest https://github.com/epi052/feroxbuster/releases/latest/download/x86-windows-feroxbuster.exe.zip -OutFile feroxbuster.zip
Expand-Archive .\feroxbuster.zip
.\feroxbuster\feroxbuster.exe -V

Windows x86_64

Invoke-WebRequest https://github.com/epi052/feroxbuster/releases/latest/download/x86_64-windows-feroxbuster.exe.zip -OutFile feroxbuster.zip
Expand-Archive .\feroxbuster.zip
.\feroxbuster\feroxbuster.exe -V

Snap Install

Install using snap

sudo snap install feroxbuster

The only gotcha here is that the snap package can only read wordlists from a few specific locations. There are a few possible solutions, of which two are shown below.

If the wordlist is on the same partition as your home directory, it can be hard-linked into ~/snap/feroxbuster/common

"
"
ln /path/to/the/wordlist ~/snap/feroxbuster/common
./feroxbuster -u http://localhost -w ~/snap/feroxbuster/common/wordlist

If the wordlist is on a separate partition, hard-linking won’t work. You’ll need to copy it into the snap directory.

cp /path/to/the/wordlist ~/snap/feroxbuster/common
./feroxbuster -u http://localhost -w ~/snap/feroxbuster/common/wordlist

Homebrew on MacOS and Linux

Install using Homebrew via tap

MacOS

brew tap tgotwig/feroxbuster
brew install feroxbuster

Linux

brew tap tgotwig/linux-feroxbuster
brew install feroxbuster

Cargo Install

feroxbuster is published on crates.io, making it easy to install if you already have rust installed on your system.

cargo install feroxbuster

Configuration

Default Values

Configuration begins with the following built-in default values baked into the binary:

  • timeout: 7 seconds
  • follow redirects: false
  • wordlist:
/usr/share/seclists/Discovery/Web-Content/raft-medium-directories.txt
  • threads: 50
  • verbosity: 0 (no logging enabled)
  • scan_limit: 0 (no limit imposed on concurrent scans)
  • status_codes: 200 204 301 302 307 308 401 403 405
  • user_agent: feroxbuster/VERSION
  • recursion depth: 4
  • auto-filter wildcards: true
  • output: stdout
  • save_state: true (create a state file in cwd when Ctrl+C is received)
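
Any of these defaults can be overridden at runtime. As a quick illustration, using flags taken from the options table further below, the following command raises the thread count and timeout, shrinks the recursion depth, and follows redirects:

./feroxbuster -u http://127.1 -t 100 -T 10 -d 2 -r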

Threads and Connection Limits At A High-Level

This section explains how the -t and -L options work together to determine the overall aggressiveness of a scan. Together, the two values determine how hard your target gets hit and, to some extent, how many resources are consumed on your local machine.

A Note on Green Threads

feroxbuster uses so-called green threads as opposed to traditional kernel/OS threads. This means (at a high level) that the threads are implemented entirely in userspace, within a single running process. As a result, a scan with 30 green threads will appear to the OS to be a single process with no additional light-weight processes associated with it as far as the kernel is concerned. There will therefore be no impact on process (nproc) limits when specifying larger values for -t. However, these threads still consume file descriptors, so you will need to ensure that you have a suitable open file limit (ulimit -n) set when scaling up the number of threads. More detailed documentation on setting appropriate limits can be found in the No File Descriptors Available section of the FAQ.
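
For example (a standard shell built-in, not a feroxbuster option), you can check the session’s open file limit and raise it before launching a large scan:

# check the per-process open file limit for this shell
ulimit -n

# raise the limit for this session before scaling up threads (example value)
ulimit -n 8192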

Threads and Connection Limits: The Implementation

  • Threads: The -t option specifies the maximum number of active threads per directory during a scan
  • Connection Limits: The -L option specifies the maximum number of active connections per thread

Threads and Connection Limits: Examples

To truly have only 30 active requests to a site at any given time, -t 30 -L 1 is necessary. Using -t 30 -L 2 will result in a maximum of 60 total requests being processed at any given time for that site, and so on. For a discussion of this behavior, please see Issue #126, which may provide more (or less) clarity.
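
Putting those numbers into concrete commands:

# at most 30 requests in flight against the target (30 threads x 1 connection)
./feroxbuster -u http://127.1 -t 30 -L 1

# at most 60 requests in flight against the target (30 threads x 2 connections)
./feroxbuster -u http://127.1 -t 30 -L 2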

Command Line Parsing

Finally, after parsing the available config file, any options/arguments given on the command line override any values set as built-in or config-file defaults.
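
For example, a ferox-config.toml in the current working directory could override a couple of the built-in defaults shown earlier; the TOML keys below mirror the long option names, though treat the exact key spellings as illustrative rather than authoritative:

# ferox-config.toml
wordlist = "/usr/share/seclists/Discovery/Web-Content/raft-medium-files.txt"
threads = 100
timeout = 10

With this file in place, running feroxbuster -u http://127.1 -t 50 would still use 50 threads, since command-line values win.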

USAGE:
    feroxbuster [FLAGS] [OPTIONS] --url <URL>...

FLAGS:
    -f, --add-slash        Append / to each request
    -D, --dont-filter      Don't auto-filter wildcard responses
    -e, --extract-links    Extract links from response body (html, javascript, etc...); make new requests based on
                           findings (default: false)
    -h, --help             Prints help information
    -k, --insecure         Disables TLS certificate validation
        --json             Emit JSON logs to --output and --debug-log instead of normal text
    -n, --no-recursion     Do not scan recursively
    -q, --quiet            Only print URLs; Don't print status codes, response size, running config, etc...
    -r, --redirects        Follow redirects
        --stdin            Read url(s) from STDIN
    -V, --version          Prints version information
    -v, --verbosity        Increase verbosity level (use -vv or more for greater effect. [CAUTION] 4 -v's is probably
                           too much)

OPTIONS:
        --debug-log <FILE>                  Output file to write log entries (use w/ --json for JSON entries)
    -d, --depth <RECURSION_DEPTH>           Maximum recursion depth, a depth of 0 is infinite recursion (default: 4)
    -x, --extensions <FILE_EXTENSION>...    File extension(s) to search for (ex: -x php -x pdf js)
    -N, --filter-lines <LINES>...           Filter out messages of a particular line count (ex: -N 20 -N 31,30)
    -X, --filter-regex <REGEX>...           Filter out messages via regular expression matching on the response's body
                                            (ex: -X '^ignore me$')
    -S, --filter-size <SIZE>...             Filter out messages of a particular size (ex: -S 5120 -S 4927,1970)
    -C, --filter-status <STATUS_CODE>...    Filter out status codes (deny list) (ex: -C 200 -C 401)
    -W, --filter-words <WORDS>...           Filter out messages of a particular word count (ex: -W 312 -W 91,82)
    -H, --headers <HEADER>...               Specify HTTP headers (ex: -H Header:val 'stuff: things')
    -o, --output <FILE>                     Output file to write results to (use w/ --json for JSON entries)
    -p, --proxy <PROXY>                     Proxy to use for requests (ex: http(s)://host:port, socks5(h)://host:port)
    -Q, --query <QUERY>...                  Specify URL query parameters (ex: -Q token=stuff -Q secret=key)
    -R, --replay-codes <REPLAY_CODE>...     Status Codes to send through a Replay Proxy when found (default: --status-
                                            codes value)
    -P, --replay-proxy <REPLAY_PROXY>       Send only unfiltered requests through a Replay Proxy, instead of all
                                            requests
        --resume-from <STATE_FILE>          State file from which to resume a partially complete scan (ex. --resume-from
                                            ferox-1606586780.state)
    -L, --scan-limit <SCAN_LIMIT>           Limit total number of concurrent scans (default: 0, i.e. no limit)
    -s, --status-codes <STATUS_CODE>...     Status Codes to include (allow list) (default: 200 204 301 302 307 308 401
                                            403 405)
    -t, --threads <THREADS>                 Number of concurrent threads (default: 50)
        --time-limit <TIME_SPEC>            Limit total run time of all scans (ex: --time-limit 10m)
    -T, --timeout <SECONDS>                 Number of seconds before a request times out (default: 7)
    -u, --url <URL>...                      The target URL(s) (required, unless --stdin used)
    -a, --user-agent <USER_AGENT>           Sets the User-Agent (default: feroxbuster/VERSION)
    -w, --wordlist <FILE>                   Path to the wordlist

Scan’s Display Explained

feroxbuster attempts to be intuitive and easy to understand; however, if you are wondering about any of the scan’s output and what it means, this is the section for you!

Discovered Resource

When feroxbuster finds a response that you haven’t filtered out, it’s reported above the progress bars and looks similar to what’s pictured below.

The number of lines, words, and bytes shown here can be used to filter those responses

Discovered Resource
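
Those lines/words/bytes values plug directly into the filter options; for example (filter values borrowed from the help text below):

# hide responses that have 20 lines, 312 words, or a 5120-byte body
./feroxbuster -u http://127.1 -N 20 -W 312 -S 5120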

Overall Scan Progress Bar

The top progress bar, colored yellow, tracks the overall scan status. Its fields are described in the image below.

Scan Progress Bar

Directory Scan Progress Bar

All other progress bars, colored cyan, represent a scan of one particular directory and will look similar to what’s below.

Directory Scan Progress Bar

Example Usage

Multiple Values

Options that take multiple values are very flexible. Consider the following ways of specifying extensions:

./feroxbuster -u http://127.1 -x pdf -x js,html -x php txt json,docx

The command above adds .pdf, .js, .html, .php, .txt, .json, and .docx to each url.

All of the methods above (multiple flags, space separated, comma separated, etc…) are valid and interchangeable. The same goes for urls, headers, status codes, queries, and size filters.
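
For instance, each of the following commands specifies the same status-code allow list:

./feroxbuster -u http://127.1 -s 200 -s 301 -s 302
./feroxbuster -u http://127.1 -s 200 301 302
./feroxbuster -u http://127.1 -s 200,301,302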

Include Headers

./feroxbuster -u http://127.1 -H Accept:application/json "Authorization: Bearer {token}"

IPv6, non-recursive scan with INFO-level logging enabled

./feroxbuster -u http://[::1] --no-recursion -vv

Read urls from STDIN; pipe only resulting urls out to another tool

cat targets | ./feroxbuster --stdin --quiet -s 200 301 302 --redirects -x js | fff -s 200 -o js-files

Proxy traffic through Burp

./feroxbuster -u http://127.1 --insecure --proxy http://127.0.0.1:8080

Proxy traffic through a SOCKS proxy (including DNS lookups)

./feroxbuster -u http://127.1 --proxy socks5h://127.0.0.1:9050

Pass auth token via query parameter

./feroxbuster -u http://127.1 --query token=0123456789ABCDEF

Extract Links from Response Body (New in v1.1.0)

Search through the body of valid responses (html, javascript, etc…) for additional endpoints to scan. This turns feroxbuster into a hybrid that looks for both linked and unlinked content.

Example request/response with --extract-links enabled:

  • Make request to http://example.com/index.html
  • Receive, and read in, the body of the response
  • Search the body for absolute and relative links
  • Add the following directories for recursive scanning:
    • http://example.com/homepage
    • http://example.com/homepage/assets
    • http://example.com/homepage/assets/img
    • http://example.com/homepage/assets/img/icons
  • Make a single request to http://example.com/homepage/assets/img/icons/handshake.svg

./feroxbuster -u http://127.1 --extract-links

Here’s a comparison of a wordlist-only scan vs --extract-links using Feline from Hack the Box:

Wordlist only

Wordlist only

With --extract-links

With --extract-links

Limit Total Number of Concurrent Scans (new in v1.2.0)

Limit the number of scans permitted to run at any given time. Recursion will still identify new directories, but newly discovered directories can only begin scanning when the total number of active scans drops below the value passed to --scan-limit.

./feroxbuster -u http://127.1 --scan-limit 2

Scans permitted

Filter Response by Status Code (new in v1.3.0)

Version 1.3.0 included an overhaul of the filtering system that allows a wide array of filters to be added with minimal effort. The first such filter is a Status Code Filter. As responses come back from the scanned server, each one is checked against the list of known filters and either displayed or suppressed according to which filters are set.

./feroxbuster -u http://127.1 --filter-status 301

Pause an Active Scan (new in v1.4.0)

NOTE: v1.12.0 added an interactive menu to the pause/resume functionality. Active scans can still be paused; however, you’re now presented with the option to cancel a scan instead of simply seeing a spinner.

Scans can be paused and resumed by pressing the ENTER key (shown below; please see v1.12.0’s entry for the latest visual representation).

Replay Responses to a Proxy based on Status Code (new in v1.5.0)

The --replay-proxy and --replay-codes options were added as a way to only send a select few responses to a proxy. This is in stark contrast to --proxy which proxies EVERY request.

Imagine you only care about proxying responses that have either the status code 200 or 302 (or you just don’t want to clutter up your Burp history). These two options will allow you to fine-tune what gets proxied and what doesn’t.

./feroxbuster -u http://127.1 --replay-proxy http://localhost:8080 --replay-codes 200 302 --insecure

Of note: this means that for every response that matches your replay criteria, you’ll end up sending the request that generated that response a second time. Depending on the target and your engagement terms (if any), it may not make sense from a traffic-generation perspective.

Replay Proxy Demo

Stop and Resume Scans (--resume-from FILE) (new in v1.9.0)

Version 1.9.0 adds a few features that allow for completely stopping a scan, and resuming that same scan from a file on disk.

A simple Ctrl+C during a scan will create a file that contains information about the scan that was cancelled.

Stop and Resume Scans

// example snippet of state file

{
  "scans": [
    {
      "id": "057016a14769414aac9a7a62707598cb",
      "url": "https://localhost.com",
      "scan_type": "Directory",
      "complete": true
    },
    {
      "id": "400b2323a16f43468a04ffcbbeba34c6",
      "url": "https://localhost.com/css",
      "scan_type": "Directory",
      "complete": false
    }
  ],
  "config": {
    "wordlist": "/wordlists/seclists/Discovery/Web-Content/common.txt",
    "...": "..."
  },
  "responses": [
    {
      "type": "response",
      "url": "https://localhost.com/Login",
      "path": "/Login",
      "wildcard": false,
      "status": 302,
      "content_length": 0,
      "line_count": 0,
      "word_count": 0,
      "headers": {
        "content-length": "0",
        "server": "nginx/1.16.1"
      }
    }
  ]
}

Based on the example image above, the same scan can be resumed by using feroxbuster --resume-from ferox-http_localhost-1606947491.state. Directories that were already complete are not rescanned; however, partially complete scans are started over from the beginning.

Enforce a Time Limit on Your Scan (new in v1.10.0)

Version 1.10.0 adds the ability to set a maximum runtime, or time limit, on your scan. The usage is pretty simple: a number followed directly by a single character representing seconds, minutes, hours, or days. feroxbuster refers to this combination as a time_spec.

Examples of possible time_specs:

  • 30s – 30 seconds
  • 20m – 20 minutes
  • 1h – 1 hour
  • 1d – 1 day (why??)

A valid time_spec can be passed to --time-limit in order to force a shutdown after the given time has elapsed.
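
For example:

# stop the scan (and all of its recursive sub-scans) after one hour
./feroxbuster -u http://127.1 --time-limit 1h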

Enforce a Time Limit

Extract Links from robots.txt (New in v1.10.2)

In addition to extracting links from the response body, using --extract-links makes a request to /robots.txt and examines all Allow and Disallow entries. Directory entries are added to the scan queue, while file entries are requested and then reported if appropriate.
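
As a purely hypothetical illustration, given the robots.txt below, /admin/ and /old-site/ would be added to the scan queue as directories, while /backup.zip would be requested once and reported if it passes the filters:

# hypothetical robots.txt served by the target
User-agent: *
Disallow: /admin/
Disallow: /backup.zip
Allow: /old-site/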

Comparison w/ Similar Tools

There are quite a few similar tools for forced browsing/content discovery: Burp Suite Pro, Dirb, Dirbuster, etc… However, in my opinion, there are two that set the standard: gobuster and ffuf. Both are mature, feature-rich, and all-around incredible tools to use.

So, why would you ever want to use feroxbuster over ffuf/gobuster? In most cases, you probably won’t. ffuf in particular can do the vast majority of things that feroxbuster can, while still offering boatloads more functionality. Here are a few of the use-cases in which feroxbuster may be a better fit:

  • You want a simple tool usage experience
  • You want to be able to run your content discovery as part of some crazy 12 command unix pipeline extravaganza
  • You want to scan through a SOCKS proxy
  • You want auto-filtering of Wildcard responses by default
  • You want an integrated link extractor/robots.txt parser to increase discovered endpoints
  • You want recursion along with some other thing mentioned above (ffuf also does recursion)
  • You want a configuration file option for overriding built-in default values for your scans
  • You want one of the many features added along the way:
    • extracts links from response body to increase scan coverage (v1.1.0)
    • limit number of concurrent recursive scans (v1.2.0)
    • filter out responses by status code (v1.3.0)
    • interactive pause and resume of active scan (v1.4.0)
    • replay only matched requests to a proxy (v1.5.0)
    • filter out responses by line & word count (v1.6.0)
    • json output (ffuf supports other formats as well) (v1.7.0)
    • filter out responses by regular expression (v1.8.0)
    • save scan’s state to disk (can pick up where it left off) (v1.9.0)
    • maximum run time limit (v1.10.0)
    • use robots.txt to increase scan coverage (v1.10.2)
    • use example page’s response to fuzzily filter similar pages (v1.11.0)
    • cancel a recursive scan interactively (v1.12.0)
    • huge number of other options

Of note, there’s another written-in-rust content discovery tool, rustbuster. I came across rustbuster when I was naming my tool (😢). I don’t have any experience using it, but it appears to be able to do POST requests with an HTTP body, has SOCKS support, and has an 8.3 shortname scanner (in addition to vhost dns, directory, etc…). In short, it definitely looks interesting and may be what you’re looking for as it has some capability I haven’t seen in similar tools.

SSL Error routines

tls_process_server_certificate:certificate verify failed

In the event you see an error similar to

error trying to connect: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:ssl/statem/statem_clnt.c:1913: (self signed certificate)

You just need to add the -k|--insecure flag to your command.

feroxbuster rejects self-signed certs and other “insecure” certificates/site configurations by default. You can choose to scan these services anyway by telling feroxbuster to ignore insecure server certs.
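
For example, to scan a service that presents a self-signed certificate:

./feroxbuster -u https://127.1 -k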
