wfuzz - An in-depth, practical guide to web application fuzzing
Wfuzz is a powerful, mature command-line web application fuzzer used by penetration testers and bug-bounty hunters to discover hidden endpoints, test parameters for injection points, brute-force login forms, and fuzz headers/cookies. This guide explains the core concepts, installation (including apt install wfuzz on Kali), the command syntax, practical examples (with realistic simulated outputs), advanced techniques, tuning and troubleshooting, and defensive notes so you can use Wfuzz responsibly and effectively.
Legal reminder: Use Wfuzz only on systems you own or where you have explicit authorization to test. Unauthorised scanning or fuzzing is illegal and unethical.
Quick overview: what Wfuzz does and when to use it
Wfuzz automates HTTP requests by replacing the token FUZZ (or the numbered variants FUZ2Z, FUZ3Z, and so on) in a URL, header, cookie, POST body or Host header with payloads from a wordlist. It was designed to:
- Discover unlinked directories/files (directory fuzzing)
- Find hidden parameters and injection points (parameter fuzzing)
- Bruteforce login forms or HTTP authentication (credential spraying)
- Fuzz headers, cookies, and other HTTP fields
- Chain or double-fuzz (e.g., path + filename + extension)
Because it’s highly scriptable and supports filters, encoders, multi-threaded requests and proxies, Wfuzz is useful during reconnaissance and exploitation stages of web tests. The project is actively maintained on GitHub and has documentation and downloadable releases.
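The core idea is easy to picture in a few lines of Python. The sketch below is purely illustrative: it only builds the candidate URLs, while the real tool adds threading, encoders, filters and the HTTP handling itself.

```python
# Illustrative sketch only: what "replace the FUZZ token with each payload"
# means. The real wfuzz adds threading, encoders, filters and HTTP handling.
def expand_fuzz(template: str, payloads: list[str], token: str = "FUZZ") -> list[str]:
    """Build one request URL per payload by substituting the token."""
    return [template.replace(token, p) for p in payloads]

urls = expand_fuzz("http://target/FUZZ", ["admin", "uploads", "robots.txt"])
# urls == ["http://target/admin", "http://target/uploads", "http://target/robots.txt"]
```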
Installing Wfuzz
Option 1 — Install from Kali repositories (recommended on Kali)
Kali includes Wfuzz in its packages, so the simplest way on Kali Linux is:
sudo apt update
sudo apt install wfuzz -y
This installs a packaged version tied to your Kali release and provides integration with installed wordlists and environment. (The Kali docs explicitly recommend sudo apt install wfuzz.)
Simulated output:
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following NEW packages will be installed:
wfuzz
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 82.1 kB of archives.
After this operation, 421 kB of additional disk space will be used.
Get:1 http://kali.download/kali kali-rolling/main amd64 wfuzz all 2.1.4-1 [82.1 kB]
Fetched 82.1 kB in 1s (64.0 kB/s)
Selecting previously unselected package wfuzz.
(Reading database ... 146372 files and directories currently installed.)
Preparing to unpack .../wfuzz_2.1.4-1_all.deb ...
Unpacking wfuzz (2.1.4-1) ...
Setting up wfuzz (2.1.4-1) ...
Processing triggers for man-db (2.12.0-2) ...
Option 2 — pip install wfuzz / GitHub releases
If you want the latest release from upstream or a specific version:
pip3 install wfuzz
# or clone the repo
git clone https://github.com/xmendez/wfuzz.git
cd wfuzz
pip3 install .   # preferred over the deprecated "python3 setup.py install"
Wfuzz’s GitHub and ReadTheDocs contain project source, examples, and a downloadable PDF manual. If you use pip, ensure dependencies like pycurl and pyparsing are installed; the GitHub README lists these.
Tip: On some systems pycurl may need specific SSL/OpenSSL build options; see Troubleshooting below.
Wfuzz basics: syntax and key options
The minimal Wfuzz invocation:
wfuzz -w <wordlist> <target>
Where <target> contains one or more FUZZ tokens. Wfuzz will replace each FUZZ occurrence with values from <wordlist>.
Example: directory discovery
wfuzz -w /usr/share/wordlists/dirb/common.txt http://target/FUZZ
Important flags (short list)
- -w <file> — wordlist file (one payload per line); repeat -w for multiple FUZZ positions.
- -u <url> or a positional <url> — target URL containing FUZZ.
- -c — colored output.
- -t <threads> — number of concurrent connections.
- --hc <codes> — hide responses with the given HTTP status code(s). E.g. --hc 404 hides 404s.
- --sc <codes> — only show responses with these status codes.
- --hw <words> — hide responses with a specific number of words.
- --hl <lines> — hide responses with a specific number of lines.
- --hh <chars> — hide responses with a specific number of characters.
- -H "Header: Value" — add HTTP header(s).
- -b "cookie=value" — add cookie(s).
- -p <ip:port:type> — use a proxy (e.g. 127.0.0.1:8080:HTTP).
- -z <payload> — full payload syntax, which can attach an encoder (e.g. -z file,wordlist.txt,base64); run wfuzz -e encoders to list available encoders.
- -L — follow redirects.
- -f <file,printer> — store results in a file using the given printer (e.g. csv, json).
- -v — verbose output.
For the full options reference, consult the Wfuzz documentation.
Example workflows and realistic (simulated) outputs
Below you’ll find practical examples with explanations and simulated outputs that reflect how Wfuzz presents results.
All simulated outputs are representative — exact columns/format can vary by installed wfuzz version and flags like -c.
1) Directory discovery (basic)
Command:
wfuzz -c -w /usr/share/wordlists/dirb/common.txt http://10.0.0.20/FUZZ
What it does:
- Iterates words from common.txt, requests http://10.0.0.20/<word>, and prints non-hidden responses. Use --hc 404 if many 404s flood results.
Simulated output:
********************************************************
* Wfuzz 2.1.4 - The Web Fuzzer *
********************************************************
Target: http://10.0.0.20/FUZZ
Total requests: 1200
=====================================================================
ID Response Lines Words Chars Payload
=====================================================================
0001 200 12 324 14560 admin
0002 301 0 0 0 images
0003 200 8 120 5024 uploads
0004 403 1 4 14 .git
0005 200 20 820 42012 robots.txt
---------------------------------------------------------------------
Finished: 5 results
Explanation:
- admin returned 200; images redirected (301); .git returned 403 (interesting — can indicate repo access is blocked). Use --hc 404 to hide noisy 404s.
2) File extension discovery (double fuzzing)
Command: try name/extension combinations, where the name comes from common.txt and the extension from extensions_common.txt. With two wordlists, mark the first position FUZZ and the second FUZ2Z:
wfuzz -c -w /usr/share/wordlists/dirb/common.txt -w /usr/share/wordlists/extensions_common.txt http://10.0.0.20/FUZZ.FUZ2Z
Simulated output (abridged):
ID Resp Lines Words Chars Payloads
001 200 14 412 21024 admin.php
002 200 10 284 11800 login.html
003 200 6 112 5200 sitemap.xml
Explanation:
- Double FUZZ lets you combine lists. Useful for discovering index.php, config.yml, backup.zip, etc.
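With two wordlists, the number of requests is the cross product of the list sizes (wfuzz's default iterator for multiple payloads is the product), so it pays to estimate the total before a run. A small Python sketch of the combination logic, with made-up list contents:

```python
from itertools import product

# Hypothetical list contents, for illustration. With two wordlists, the
# default behaviour is the cross product, so the request count multiplies.
names = ["admin", "login", "backup"]
exts = ["php", "html", "zip"]

candidates = [f"{name}.{ext}" for name, ext in product(names, exts)]
# 3 names x 3 extensions = 9 candidates, e.g. "admin.php", "backup.zip"
```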
3) Parameter fuzzing (find injection points)
Test GET parameter names:
wfuzz -c -w params.txt http://10.0.0.20/search?FUZZ=testing
Simulated output:
ID Resp Lines Words Chars Payload
001 200 14 420 20480 q
002 200 6 120 5120 id
003 302 2 10 44 token
Explanation:
- q and id are accepted parameter names; id may be of interest for numeric injection tests. You can then run wfuzz -w payloads.sql -u "http://10.0.0.20/search?id=FUZZ" to test SQL injection payloads.
4) Brute-forcing login forms (POST)
If a login form accepts username and password:
wfuzz -c -w userlist.txt -w passlist.txt -d "username=FUZZ&password=FUZ2Z" http://10.0.0.20/login
Note: the second payload position uses the marker FUZ2Z; Wfuzz maps -w entries in order to FUZZ, FUZ2Z, FUZ3Z, and so on, when you need independent positions.
Simulated output:
ID Resp Lines Words Chars Payloads
001 200 5 40 1024 admin:admin
002 401 2 6 18 user1:password
003 302 0 0 0 john:Welcome123
Explanation:
- A 302 redirect is common on successful login; filter with --sc 302 to show only those.
5) Header fuzzing (discover hidden API key header)
Command:
wfuzz -c -w headers.txt -H "X-API-KEY: FUZZ" http://10.0.0.20/api/health
Simulated output:
ID Resp Lines Words Chars Payload
001 401 2 6 18 key1
002 200 4 36 1024 super-secret-key-123
Explanation:
- When a specific header value returns 200, you may have found a valid API key or token format. Treat discovered secrets responsibly.
6) Cookie fuzzing
Command:
wfuzz -c -w session_ids.txt -b "PHPSESSID=FUZZ" http://10.0.0.20/dashboard --hc 403
Simulated output:
ID Resp Lines Words Chars Payload
001 403 1 3 14 sess-0001
002 200 8 120 6000 sess-8f3a2b
Explanation:
- A 200 response suggests sessions could be guessed; this is sensitive and should be reported in scope-appropriate testing.
7) Using encoders (URL encode, base64, etc.)
Encoders are attached to the payload via the -z syntax (run wfuzz -e encoders to list what your version supports). If the target expects URL-encoded payloads:
wfuzz -c -z file,sqli.txt,urlencode -u "http://10.0.0.20/search?query=FUZZ"
Or base64 encode:
wfuzz -c -z file,payloads.txt,base64 -H "X-Auth: FUZZ" http://10.0.0.20/api
Simulated output:
ID Resp Lines Words Chars Payload(enc)
001 200 10 200 9000 ' or '1'='1 (encoded: JyBvciAnMSc9JzE=)
Explanation:
- Encoders allow you to provide payloads that are transformed before sending, matching how the app expects input.
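To sanity-check what an encoder will send, you can reproduce the standard URL-encoding and base64 transformations in Python; the urlencode and base64 encoders are expected to apply these same generic transforms.

```python
import base64
from urllib.parse import quote

payload = "' or '1'='1"

# Standard URL-encoding and base64 of the payload, matching what the
# corresponding encoders should produce before the request is sent.
url_encoded = quote(payload, safe="")                      # "%27%20or%20%271%27%3D%271"
b64_encoded = base64.b64encode(payload.encode()).decode()  # "JyBvciAnMSc9JzE="
```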
Useful filters & result processing
Wfuzz provides many filters to narrow down noise:
- --hc <codes> — hide specific HTTP status codes (e.g., --hc 404,403).
- --sc <codes> — show only these status codes (e.g., --sc 200,302).
- --hw <words> — hide responses with a specific number of words.
- --hl <lines> — hide responses with specific line counts.
- --hh <chars> — hide responses with a specific number of characters (useful when all error pages share one length).
- --filter "<expr>" — advanced filter expressions over code (c), lines (l), words (w) and chars (h), e.g. --filter "c=200 and h>500".
- -f <file,printer> — save output as CSV/JSON for later analysis.
Example: only show 200 results with a body longer than 500 characters
wfuzz -c -w common.txt --filter "c=200 and h>500" http://10.0.0.20/FUZZ
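The words/lines/chars filters compare against simple metrics of the response body. One plausible way those counts are derived (the exact counting in your wfuzz version may differ slightly):

```python
# Plausible computation of the per-response metrics behind --hl/--hw/--hh;
# check your wfuzz version if exact counts matter.
def response_metrics(body: str) -> dict:
    return {
        "lines": body.count("\n"),   # line count
        "words": len(body.split()),  # whitespace-separated tokens
        "chars": len(body),          # raw body length
    }

response_metrics("hello world\n")  # {'lines': 1, 'words': 2, 'chars': 12}
```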
Advanced techniques
Double / triple FUZZ combos (multi-position fuzzing)
Wfuzz maps -w order to the successive tokens FUZZ, FUZ2Z, FUZ3Z in the target. That enables testing combinations like FUZZ/FUZ2Z or admin-FUZZ.FUZ2Z.
Example:
wfuzz -c -w dirs.txt -w files.txt http://10.0.0.20/FUZZ/FUZ2Z
Recursive enumeration
If you find a directory, you can recurse into it with -R <depth> (e.g. -R 2), which re-queues discovered paths up to the given depth, or you can script successive Wfuzz runs yourself. Many practitioners parse Wfuzz CSV output and feed discovered paths back into new runs automatically.
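A minimal sketch of that feedback loop in Python, assuming a CSV export whose header includes response and payload columns (check the header your wfuzz version actually writes with -f results.csv,csv):

```python
import csv
import io

# Sketch of the feedback loop: pull successful hits out of a wfuzz CSV export
# and turn them into the next round of fuzz targets. The column names
# ("response", "payload") are assumptions -- verify against your export.
def next_round_targets(csv_text: str, base_url: str) -> list[str]:
    targets = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        if row["response"] == "200":
            targets.append(f"{base_url}/{row['payload']}/FUZZ")
    return targets

sample = "id,response,payload\n1,200,admin\n2,404,foo\n"
# next_round_targets(sample, "http://10.0.0.20") -> ["http://10.0.0.20/admin/FUZZ"]
```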
Timing & load tuning
Use -t <threads> to speed up tests and -s <seconds> to add a delay between requests, but be considerate of target stability and bandwidth. Aggressive thread counts can cause DoS effects.
Example: -t 50 for 50 concurrent connections.
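Before raising the thread count, it can help to estimate the total load up front. A back-of-the-envelope calculation, with all numbers assumed for illustration:

```python
# Back-of-the-envelope load estimate before a fuzzing run. All numbers here
# are illustrative assumptions, not measurements of any real target.
wordlist_size = 20_000
positions = 1                       # one FUZZ token
requests_total = wordlist_size ** positions

threads = 50
reqs_per_thread_per_sec = 4         # assumed server/network throughput
throughput = threads * reqs_per_thread_per_sec

est_seconds = requests_total / throughput   # 20,000 / 200 = 100 s
```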
Capturing full HTTP traffic (proxy)
To inspect requests/responses via a proxy (Burp/OWASP ZAP):
wfuzz -c -w common.txt -p 127.0.0.1:8080:HTTP http://10.0.0.20/FUZZ
This helps debug encoders, headers and session handling while observing actual HTTP flow.
Recipes, saving & resuming
Wfuzz can save and replay run configurations as recipes: --dump-recipe <file> writes the current options to a JSON recipe and --recipe <file> loads one. Use -f <file,printer> to persist results (CSV/JSON) for later filtering or reporting.
Integrating Wfuzz with other tools
- Burp Suite: Use Burp as a proxy to capture Wfuzz requests, or export Wfuzz findings into Burp for manual verification.
- Gobuster / Dirb: Wfuzz overlaps with these tools; Wfuzz excels at parameter/header fuzzing and flexible encoders, while Gobuster/Dirb are fast dedicated directory scanners. Use them in combination.
- Custom scripts: Parse CSV/JSON outputs to feed into scripted checks (e.g., automatically run SQLi payloads against discovered params).
Practical checklist for responsible use
- Scope: Confirm written authorization and defined scope (domains, IP ranges, time windows).
- Backups/Notifications: Notify stakeholders if you will perform heavy fuzzing. Schedule low-traffic windows.
- Rate limits: Set thread counts and delays to avoid DoS.
- Logging: Keep your own logs (CSV/JSON) of findings.
- Proofs: Capture evidence (HTTP requests/responses) without exfiltrating sensitive data.
- Report responsibly: Include reproduction steps, payloads used, and remediation guidance.
Troubleshooting & common pitfalls
Pycurl / SSL issues
Some Wfuzz installations warn "Pycurl is not compiled against Openssl" and SSL fuzzing may fail. This often requires installing a system libcurl and compiling pycurl against the desired OpenSSL.
If you see errors like:
/usr/lib/python3/dist-packages/wfuzz/__init__.py:34: UserWarning: Pycurl is not compiled against Openssl. Wfuzz might not work correctly when fuzzing SSL sites.
Consider reinstalling pycurl or using the distro package:
sudo apt install python3-pycurl
# or rebuild pycurl with SSL support
pip3 uninstall pycurl
PYCURL_SSL_LIBRARY=openssl pip3 install --compile --no-cache-dir pycurl
High false positive rate
If many results look interesting, apply filters: --hc 404, --hw, --hl, or content-length filters. Validate suspicious finds manually by inspecting responses through a proxy.
Wordlist quality
Choosing the right wordlist is crucial. Built-in lists and the Kali wordlists package (rockyou, dirb lists) are a good start; create custom lists by combining discovered patterns or using tools like cewl to generate domain-specific words.
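Combining lists is easy to script. A small Python sketch that merges wordlists, strips blanks, and deduplicates while preserving first-seen order:

```python
# Merge several wordlists into one: strip whitespace, drop blanks, and
# deduplicate while preserving first-seen order.
def merge_wordlists(*lists: list[str]) -> list[str]:
    seen: set[str] = set()
    merged: list[str] = []
    for wordlist in lists:
        for word in wordlist:
            w = word.strip()
            if w and w not in seen:
                seen.add(w)
                merged.append(w)
    return merged

merge_wordlists(["admin", "login"], ["login", "api", ""])  # ['admin', 'login', 'api']
```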
Reporting & remediation suggestions (for defenders)
When you discover issues during authorized testing, frame findings with reproduction steps, risk level, and remediation advice.
Common fixes:
- Harden directories with proper ACLs and remove .git and backup files from webroots.
- Implement proper input validation, parameterized queries (prevent SQLi), and output encoding (prevent XSS).
- Rate-limit authentication endpoints and implement account lockout policies to prevent brute force.
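To illustrate the parameterized-query fix, here is a minimal sqlite3 example showing a classic injection string treated as inert data; sqlite3 stands in for whatever database the application actually uses.

```python
import sqlite3

# Defender-side illustration: with a parameterized query, a classic injection
# string stays inert data instead of becoming executable SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

hostile = "' OR '1'='1"
rows = conn.execute("SELECT name FROM users WHERE name = ?", (hostile,)).fetchall()
# rows == []  (the payload matched nothing; it was bound as a literal value)
```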
Quick reference — common commands
Directory brute:
wfuzz -c -w /usr/share/wordlists/dirb/common.txt http://target/FUZZ --hc 404
Double fuzz (path + filename extension):
wfuzz -c -w dirs.txt -w extensions.txt http://target/FUZZ.FUZ2Z
Header fuzz:
wfuzz -c -w headers.txt -H "X-Api-Key: FUZZ" http://target/api/status
POST / login brute:
wfuzz -c -w users.txt -w passwords.txt --hc 401 -d "user=FUZZ&pass=FUZ2Z" http://target/login
Proxy (inspect in Burp):
wfuzz -c -w common.txt -p 127.0.0.1:8080:HTTP http://target/FUZZ
Encoders:
wfuzz -c -z file,payloads.txt,urlencode -u "http://target/search?q=FUZZ"
Final notes
Wfuzz is a flexible, scriptable web fuzzer with capabilities beyond simple directory scanning — it supports multi-position fuzzing, encoders, header/cookie manipulation, threading, and rich filtering to help focus on meaningful results. It is a staple in the pentester’s toolbox and works particularly well when combined with manual analysis and other automated tools. Keep your use ethical and within legal boundaries, and tune wordlists and filters to reduce noise and false positives.