Best way to handle timeouts with httprobe in large scan lists? #3
I'm running httprobe against a list of ~50k subdomains and noticing that some hosts with very slow responses are causing the whole scan to take forever. Currently I'm using `cat subs.txt | httprobe -t 5000`, but even with the 5s timeout, when you have tens of thousands of hosts, it adds up. Is there a recommended approach for balancing speed vs. accuracy when dealing with large lists? Should I be piping through something like …
Replies: 1 comment
Great question! httprobe already handles concurrency internally using goroutines; the `-c` flag controls the concurrency level (default is 20). For large lists, try bumping it up:

```
cat subs.txt | httprobe -c 50 -t 3000
```

A few tips that helped me with large scans:

- `-c 50` or even `-c 100` works well if your network can handle it.
- Run `massdns` or `puredns` first to eliminate non-resolving domains. No point probing a host that doesn't resolve.
- `-prefer-https`: if you only care about HTTPS, this skips the HTTP check when…

With 50k subs, I typically get through the list in under 2 minutes with …
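To see why raising `-c` matters more than shaving the timeout, here's a rough worst-case estimate (an assumption for illustration: every probe blocks for the full timeout, ignoring connection setup): total time ≈ (hosts / concurrency) × timeout.

```shell
# Worst-case scan duration in seconds, assuming every probe
# waits out the full timeout before giving up.
hosts=50000

c=20; t_ms=5000                          # httprobe defaults + 5s timeout
echo $(( hosts / c * t_ms / 1000 ))      # 12500 s, roughly 3.5 hours

c=100; t_ms=3000                         # tuned: more workers, shorter timeout
echo $(( hosts / c * t_ms / 1000 ))      # 1500 s, 25 minutes
```

In practice most hosts answer well before the timeout, so real runs finish far sooner; this is only the ceiling, but it shows concurrency dividing the total while the timeout only scales it.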
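The pre-filter tip can be sketched even without `massdns`/`puredns` installed: drop names that don't resolve before they ever reach httprobe. The `getent` loop below is only an illustration of the idea (dedicated bulk resolvers are orders of magnitude faster), and the filenames are placeholders.

```shell
# Keep only hostnames that actually resolve (illustrative and slow;
# massdns or puredns do the same job in bulk, much faster).
while read -r host; do
    getent hosts "$host" > /dev/null && echo "$host"
done < subs.txt > resolved.txt

# Probe only the survivors.
cat resolved.txt | httprobe -c 100 -t 3000 > alive.txt
```

With a 50k list where a large fraction of candidates never resolve, this cuts the probe stage's input, and therefore its worst-case runtime, proportionally.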