Dirhunt is a web crawler optimized for searching and analyzing directories. It can find interesting things if the server has the “Index Of” mode enabled, but it is also useful when directory listing is disabled: it detects directories with fake 404 errors, directories where an empty index file has been created to hide things, and much more.
$ dirhunt http://website.com/
Dirhunt does not use brute force, but neither is it just a crawler: it is faster than similar tools because it minimizes the number of requests to the server. A scan generally takes between 5 and 30 seconds, depending on the website and the server.
Features
- Process one or multiple sites at a time.
- Process ‘Index Of’ pages and report interesting files.
- Detect redirectors.
- Detect blank index files created in directories to hide their contents.
- Process HTML files in search of new directories.
- Process 404 error pages and detect fake 404 errors.
- Filter results by flags (see the combined example after this list).
- Analyze the results at the end; the date and size columns of ‘Index Of’ pages are also processed.
- Get new directories using robots.txt, VirusTotal, Google, CommonCrawl (NEW!) & SSL Certificate (NEW!).
- Delay between requests.
- Option to use one or multiple proxies. It can also search for free proxies.
- Save the results to a JSON file.
- Resume aborted scans.
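Several of these features are exposed as command-line options. As a sketch of how they can be combined (option names as found in recent Dirhunt releases; run dirhunt --help to confirm them on your version, and treat the proxy address and output file as placeholders):

$ dirhunt http://website.com/ \
    --exclude-flags 300-500 \
    --delay 0.5 \
    --proxies http://127.0.0.1:8080 \
    --to-file results.json

Here --exclude-flags 300-500 filters out redirect and error results, --delay 0.5 pauses half a second between requests, --proxies routes traffic through the given proxy, and --to-file saves the report as JSON.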
Install
If you have pip installed on your system, you can use it to install the latest stable version of Dirhunt:
$ sudo pip3 install dirhunt
Python 2.7 and 3.5-3.8 are supported, but Python 3.x is recommended. Use pip2 instead to install under Python 2.
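If you prefer not to install system-wide with sudo, a Python virtual environment works just as well (standard Python tooling, nothing Dirhunt-specific):

$ python3 -m venv dirhunt-env
$ . dirhunt-env/bin/activate
(dirhunt-env) $ pip install dirhunt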
From sources
The sources for Dirhunt can be downloaded from the Github repo.
You can either clone the public repository:
$ git clone git://github.com/Nekmo/dirhunt
Or download the tarball:
$ curl -OL https://github.com/Nekmo/dirhunt/tarball/master
Once you have a copy of the source, you can install it with:
$ python setup.py install
How to Use
To see the available help run:
$ dirhunt --help
Find directories
You can define one or more URLs or domains, from the same site or different ones. It is better to provide URLs with complete paths; this makes it easier for Dirhunt to discover directories.
$ dirhunt <url 1>[ <url 2>]
For example:
$ dirhunt http://domain1/blog/awesome-post.html http://domain1/admin/login.html http://domain2/ domain3.com
Results for multiple sites are displayed together. You can also load URLs or domains from one or more files, using either a full path (/path/to/file) or a relative path (./file). Examples:
$ dirhunt domain1.com ./file/to/domains.txt /home/user/more_domains.txt
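Assuming the usual one-entry-per-line convention (check the documentation for your version), a domains file such as the domains.txt above could look like this:

http://domain1.com/blog/
domain2.com
https://domain3.com/admin/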
Resume analysis
Press Ctrl+C to pause the current scan. For example:
...
[200] https://site.com/path/ (Generic)
Index file found: index.php
[200] https://site.com/path/foo/ (Generic)
Index file found: index.php
◣ Started a second ago
^C
An interrupt signal has been detected. what do you want to do?
[A]bort
[c]ontinue
[r]esults
Enter a choice [A/c/r]:
You can continue the analysis (choose option c), show the current results (press r), or abort. Run the analysis again with the same parameters to pick up where you left off.
An interrupt signal has been detected. what do you want to do?
[A]bort
[c]ontinue
[r]esults
Enter a choice [A/c/r]: A
Created resume file "/home/nekmo/.cache/dirhunt/ca32...". Run again using the same parameters to resume.
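To resume, simply run the identical command again. For example, if the scan above was started as shown below (the URL is a stand-in for whatever parameters you originally used), Dirhunt will pick up the resume file from its cache and continue:

$ dirhunt https://site.com/path/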
Read more about how to use Dirhunt in the documentation.