Subdomain Recon Steps:
Subdomains: subfinder, sublist3r & assetfinder
subfinder -d $trgt1 -all -rl 10 -o target.subd.txt
subfinder -d example.com -all -recursive > target.subd.txt
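The other two tools named above can be run the same way (a sketch; flags per their respective READMEs):
assetfinder --subs-only example.com | tee -a target.subd.txt
sublist3r -d example.com -o sublist3r.subd.txt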
CRT.sh:
Go to www.crt.sh and search for the target domain. You may be able to discover new subdomains from this site.
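The same lookup can be scripted via crt.sh's JSON output (a sketch; %25. is a URL-encoded wildcard prefix):
curl -s "https://crt.sh/?q=%25.example.com&output=json" | jq -r '.[].name_value' | sort -u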
Note: you can also perform bruteforce subdomain searches + recursive searches on discovered subdomains:
dnsrecon -t brt -d domain.com -D /usr/share/seclists/Discovery/DNS/subdomains-top1million-5000.txt > subd.txt
Find alive subdomain hosts
httpx -l target.subd.txt -o alive.txt
For installation instructions, see HTTPX in the Tools section
cat subdomain.txt | httpx-toolkit -ports 80,443,8080,8000,8888 -threads 200 > subdomains_alive.txt
cat target.subd.txt | aquatone -chrome-path /usr/bin/chrome
Subdomains > Pretty Recon (Paid service)
dnsrecon -t brt -D /usr/share/seclists/Discovery/DNS/subdomains-top1million-5000.txt -d backdoor.htb
subzy run --targets subdomains.txt --concurrency 100 --hide_fails --verify_ssl
for subdomain in $(cat wordlist.txt); do dig $subdomain.example.com +short; done
Example
httpx -l subs.txt -o alive_subs.txt
install
go install -v github.com/projectdiscovery/httpx/cmd/httpx@latest
sudo cp ~/go/bin/httpx /usr/bin/httpx
arjun -u https://site.com/endpoint.php -oT arjun_output.txt -t 10 --rate-limit 10 --passive -m GET,POST --headers
arjun -u https://site.com/endpoint.php -oT arjun_output.txt -m GET,POST -w /usr/share/wordlists/seclists/Discovery/Web-Content/burp-parameter-names.txt -t 10 --rate-limit 10 --headers
cat urls1.txt | gf redirect
inurl:%3Dhttp site:example.com
inurl:%3D%2F site:example.com
inurl:redirecturi site:example.com
inurl:redirect_uri site:example.com
inurl:redirecturl site:example.com
inurl:return site:example.com
inurl:returnurl site:example.com
inurl:relaystate site:example.com
inurl:forward site:example.com
inurl:forwardurl site:example.com
inurl:forward_url site:example.com
inurl:url site:example.com
inurl:uri site:example.com
inurl:dest site:example.com
inurl:destination site:example.com
inurl:next site:example.com
e.g. example.com/bing.com
example.com//bing.com
Burpsuite
Google Dorking
gf
pattern.txt contents:
{GOBUSTER}/v1
{GOBUSTER}/v2
api/v1/{GOBUSTER}
api/v2/{GOBUSTER}
API bruteforce gobuster command:
gobuster dir -u http://$trgt1:5002 -w /usr/share/wordlists/dirb/big.txt -p pattern.txt
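One way to create the pattern file shown above (a minimal sketch):
printf '{GOBUSTER}/v1\n{GOBUSTER}/v2\napi/v1/{GOBUSTER}\napi/v2/{GOBUSTER}\n' > pattern.txt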
kr wordlist list
.kite file
kr scan http://192.168.241.16:5002 -w /path/to/routes-small.kite
assetnote wordlist
kr scan http://$trgt1:5002 -A ASSET_NOTE_ALIAS
kr scan http://$trgt1:5002 -A apiroutes-240528
More scanning options:
kr scan http://$trgt1 -w routes.kite -x 20 -j 100 --ignore-length=1053
kr scan http://$trgt1:5002 -w routes.kite -A=apiroutes-240528
kr brute http://$trgt1:5002 -A raft-small-directories
kiterunner replay command:
kr kb replay -w routes.kite "REPLAY_STRING"
send replay to Burp
kr kb replay -w routes.kite "REPLAY_STRING" --proxy=http://127.0.0.1:8080
KiteRunner
FFUF
Arjun
Burpsuite
Run subjs against the gathered subdomains. This looks for JS paths on the targets listed in alive.txt
prefix hostnames with http://
for i in $(cat subd.txt); do echo "http://$i"; done > alive.txt
Run subjs against urls
subjs -i alive.txt -ua "aslam4dm" | tee subjs.txt
This tool is used to discover endpoints (and potential parameters) for a given target.
This command runs xnLinkFinder.py against each JS webpath in subjs.txt to look for additional endpoints/paths referenced in the JS code, saving the output to xnPath.txt
python3 xnLinkFinder.py -i subjs.txt -sf live.somesite.com -o xnPath.txt
Create a file containing the full paths and save as xnFullPath.txt
Note: this may be problematic, because some of the discovered files may only apply to the root domain
for p in $(cat xnPath.txt); do echo "https://<target.com>$p" >> xnFullPath.txt; done
Extract .js files only and save to xnJS_URL.txt
grep '\.js$' xnFullPath.txt > xnJS_URL.txt
echo example.com | katana -d 5 | grep -E "\.js$" | nuclei -t nuclei-templates/http/exposures/ -c 30
Look through subjs.txt and the webpaths in xnJS_URL.txt, and analyse the JavaScript files for sensitive information exposure.
python3 SecretFinder.py -i https://example.com/1.js -o result.html
Loop through JS URLs and perform SecretFinder.py on them. Save the output as n.out.html
counter=1; while read -r line; do python ~/Tools/secretfinder/SecretFinder.py -i "$line" -o "${counter}.out.html"; ((counter++)); done < xnJS_URL.txt
Analyse an entire domain
python3 SecretFinder.py -i https://example.com/ -e
python linkfinder.py -i https://example.com -d
download from here: https://github.com/lc/subjs/releases/
subjs -i alive.txt
python3 ~/Tools/xnLinkFinder/xnLinkFinder.py -i js.txt -sf somesite.com
xnJS_URL.txt
subjs.txt
xnFullPath.txt
xnPath.txt
SSLScan:
sslscan $trgt1
sslscan -h
SSLLabs:
Go to www.ssllabs.com to review the SSL configuration of the target
CRT.sh:
Go to www.crt.sh and search for the target domain. This site can also be used to discover potential subdomains
Look out for the following:
gf repo: https://github.com/tomnomnom/gf
go install github.com/tomnomnom/gf@latest
sudo cp ~/go/bin/gf /usr/bin/
mkdir ~/.gf
git clone https://github.com/Sherlock297/gf_patterns.git; cp gf_patterns/*.json ~/.gf
or download gf from here: https://github.com/tomnomnom/gf
cat urls1.txt | gf xss > urls2.txt (swap xss for ssrf/redirect/sqli etc.)
cat discovered_urls.txt | httpx -sc -title -nc -o target_urls.txt
grep '\[200\]' target_urls.txt > target_urls2.txt
awk '{print $1}' target_urls2.txt > target_urls_main.txt
cat target_urls_main.txt | aquatone
n.secret.html
gf | httpx | aqua (query all links)
subd.txt
python3 dirsearch.py -u <target url (subdomain url)>
dirb http://$trgt1/aspnet_client/system_web/ fuzz.txt -r
cewl
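Example (a sketch: cewl builds a custom wordlist by scraping the target's pages; -d is spider depth, -m the minimum word length):
cewl http://$trgt1/ -d 2 -m 5 -w cewl_wordlist.txt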
Find aspx files on server:
dirb http://$trgt1/ /usr/share/wordlists/dirb/common.txt -r -X .aspx
dirsearch -u https://example.com -e php,cgi,htm,html,shtm,shtml,js,txt,bak,zip,old,conf,log,pl,asp,aspx,jsp,sql,db,sqlite,mdb,tar,gz,7z,rar,json,xml,yml,yaml,ini,java,py,rb,php3,php4,php5 --random-agent --recursive -R 3 -t 20 --exclude-status=404 --follow-redirects --delay=0.1
ffuf -w seclists/Discovery/Web-Content/directory-list-2.3-big.txt -u https://example.com/FUZZ -fc 400,401,402,403,404,429,500,501,502,503 -recursion -recursion-depth 2 -e .html,.php,.txt,.pdf,.js,.css,.zip,.bak,.old,.log,.json,.xml,.config,.env,.asp,.aspx,.jsp,.gz,.tar,.sql,.db -ac -c -H "User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101 Firefox/91.0" -H "X-Forwarded-For: 127.0.0.1" -H "X-Originating-IP: 127.0.0.1" -H "X-Forwarded-Host: localhost" -t 100 -r -o results.json
gobuster
Hakrawler:
cat urls.txt | hakrawler -proxy http://localhost:8080
echo https://google.com | hakrawler -subs
echo google.com | haktrails subdomains | httpx | hakrawler
Katana:
katana -u subdomains_alive.txt -d 5 -ps -pss waybackarchive,commoncrawl,alienvault -kf all -jc -fx -ef woff,css,png,svg,jpg,woff2,jpeg,gif -o allurls.txt
waybackurls target.co.uk > urls1.txt
python waymore.py -i target.co.uk -mode U
modify the url parameter of the following URL; include the asterisk to account for subdomains:
https://web.archive.org/cdx/search/cdx?url=*.changeme.com&output=text&fl=original&collapse=urlkey
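The same query can be pulled from the CLI (a sketch):
curl -s "https://web.archive.org/cdx/search/cdx?url=*.changeme.com&output=text&fl=original&collapse=urlkey" > wayback_urls.txt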
cat allurls.txt | grep -E "\.xls|\.xml|\.xlsx|\.json|\.pdf|\.sql|\.doc|\.docx|\.pptx|\.txt|\.zip|\.tar\.gz|\.tgz|\.bak|\.7z|\.rar|\.log|\.cache|\.secret|\.db|\.backup|\.yml|\.gz|\.config|\.csv|\.yaml|\.md|\.md5"
whois $trgt1
alternatively, you can use
www.whois.com / www.who.is
This step should be performed on all domains/subdomains in scope, across all pages
Manually step through the web application, as well as running spidering techniques against it.
Look out for the following mechanisms in the application (note: this is not an exhaustive list).
1. Authentication Pages
1.1. Login Page
1.2. Registration Page
1.3. Password Reset Page
1.4. Redirection Parameters
etc.
2. Sessions and Tokens
2.1. Session Cookies
2.2. Tokens
2.3. Token Decryption
3. Account Creation
3.1. Create multiple accounts
3.2. Levels of privileges for user account
3.3. Create accounts at different privilege levels
3.4. Password Reset functions
3.5. User Information Input fields
3.6. Redirection Parameters
4. Auto-Gen Emails
4.1. Subscription function
4.2. Signup function
4.3. Reset Password function
4.4. Purchase Receipt
etc.
5. Upload Functionality
5.1. Media Upload Function
5.2. Profile Picture Upload Function
5.3. Document File Upload
etc.
6. API
6.1.1. API Hosted (Application is a producer of API services)
6.1.2. API Used (Application is a consumer of API services)
6.2. API type (SOAP/REST/GraphQL etc.)
6.3. API Endpoints
6.4. API Response Rendering
7. In-Application User Functions
7.1. Comment Functions
7.2. Like Functions
7.3. Share Functions
7.4. Upload Functions
7.5. In-app Direct Messaging
7.6. Purchases
7.7. AI Chat Functions
7.8. Database Usage
etc.
8. PDF Generation
8.1. Order Receipt
etc.
9. User-Controllable Parameters
9.1. Spider application
9.2. Burpsuite HTTP history
9.3. Browse the application manually
Parameter Discovery
Redirection Parameters
$trgt1 = example.com | $trgt2 = site.com
Standard scan:
./whatweb $trgt1
Multiple targets:
./whatweb -v $trgt1 $trgt2
Aggressive scan:
./whatweb -a 3 -v $trgt1
Review the techstack:
Hidden Parameters
CMS scan (nmap):
nmap --script=http-wordpress* $trgt1
Basic Enum:
wpscan --url http://$trgt1 --enumerate
Auth Bruteforce:
wpscan --url http://$trgt1/wp-login.php -U admin -P /usr/share/wordlists/rockyou.txt
Quick Plugin check:
wpscan --url http://$trgt1 --enumerate p --plugins-detection aggressive
Aggressive plugin check
1:
wpscan -e ap --plugins-detection aggressive --url http://$trgt1
2:
wpscan --url https://site.com --disable-tls-checks --api-token <here> -e at -e ap -e u --enumerate ap --plugins-detection aggressive --force
Searchsploit check on plugins:
searchsploit {plugin name}
Recon and Enum Tracks
GREEN BOX: potential test links
RED BOX: js/endpoint/secrets
broken-link-checker:
blc <target URL> -ro
e.g. blc http://yoursite.com -ro
echo example.com | katana -d 5 -ps -pss waybackarchive,commoncrawl,alienvault -f qurl | urldedupe >output.txt
katana -u https://example.com -d 5 | grep '=' | urldedupe | anew output.txt
Nuclei
BurpSuite Scan
cat output.txt | sed 's/=.*/=/' >final.txt
cat urls.txt | grep -E "\.php|\.asp|\.aspx|\.jspx|\.jsp" | grep '=' | sort > output.txt
Nuclei Custom Template Creation
CVE discovery (fetchcve.py) https://github.com/aslam4dm/fetchcve/
IIS Vulnerability Detection
bchecks
Nikto
nikto -h $trgt1
Hint
For more ideas, see 2.3-HTTP and DNS from the OSCP Methodology notes
cat output.txt | sed 's/=.*/=/' >final.txt
echo example.com | gau --mc 200 | urldedupe >urls.txt
IIS Tilde Enumeration
shortscan
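A minimal run, assuming shortscan is installed via Go as per its README:
go install github.com/bitquark/shortscan/cmd/shortscan@latest
shortscan http://$trgt1/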
Vulnerability Scanning
filename:manifest.xml
filename:travis.yml
filename:vim_settings.xml
filename:database
filename:prod.exs NOT prod.secret.exs
filename:prod.secret.exs
filename:.npmrc _auth
filename:.dockercfg auth
filename:WebServers.xml
filename:.bash_history <Domain name>
filename:sftp-config.json
filename:sftp.json path:.vscode
filename:secrets.yml password
filename:.esmtprc password
filename:passwd path:etc
filename:dbeaver-data-sources.xml
path:sites databases password
filename:config.php dbpasswd
filename:configuration.php JConfig password
filename:.sh_history
shodan_api_key language:python
filename:shadow path:etc
JEKYLL_GITHUB_TOKEN
filename:proftpdpasswd
filename:.pgpass
filename:idea14.key
filename:hub oauth_token
HEROKU_API_KEY language:json
HEROKU_API_KEY language:shell
SF_USERNAME salesforce
filename:.bash_profile aws
extension:json api.forecast.io
filename:.env MAIL_HOST=smtp.gmail.com
filename:wp-config.php
extension:sql mysql dump
filename:credentials aws_access_key_id
filename:id_rsa or filename:id_dsa
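These are GitHub code-search queries; scope them to a target with GitHub's org:/user: qualifiers, e.g. (hypothetical org name):
org:target-org filename:wp-config.php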
This tool queries the following API endpoint:
https://api.github.com/users/<USERNAME>/repos?type=all&per_page=100
It then grabs the list of repos created/forked by the user with the following:
grep clone_url | cut -d'"' -f4
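Combined, the fetch step looks something like:
curl -s "https://api.github.com/users/<USERNAME>/repos?type=all&per_page=100" | grep clone_url | cut -d'"' -f4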
Clones or updates the repository using git clone --bare -q for a new clone or git fetch -q origin HEAD:HEAD to update an existing clone.
Finds the commit history of users' repos with --bare
git clone --bare -q ${REPO_WITH_CREDS} ${CLONEPATH}
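A minimal sketch of the clone-or-update logic described above (REPO_WITH_CREDS and CLONEPATH as in the clone command):
if [ -d "${CLONEPATH}" ]; then
  git -C "${CLONEPATH}" fetch -q origin HEAD:HEAD   # update an existing bare clone
else
  git clone --bare -q "${REPO_WITH_CREDS}" "${CLONEPATH}"   # new bare clone
fi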
./archaeologit.sh aslam4dm 'secret|password|credentials' gitscan.out
pip install trufflehog
TruffleHog searches through git repositories for secrets, digging deep into commit history and branches. This is effective at finding secrets accidentally committed.
trufflehog --regex --entropy=False https://github.com/<yourTargetRepo>
trufflehog --regex --entropy=False /path/to/downloaded/repo
A bare clone does not contain the working directory that a typical cloned repository has; it contains only the version history and the data associated with the repository.
This approach will scan through the commit history, searching for the specified pattern using git log and git grep to identify the commit number, file name, and line number where the pattern is located.
git clone --bare -q <repo url>
cd <path to repo>
Note: This command searches for a specific pattern (pattern_to_search) in the changes introduced in each commit, displaying the commit hash and modified file names using git log.
Uses the commit hashes and file names to perform a search (git grep) for the pattern in those specific files for each commit.
Formats and enhances the output by replacing colons in the git grep output to improve readability using sed.
git log -S'pattern_to_search' -p --pretty=format:"%h" --name-only | xargs -I{} sh -c 'git grep -n "pattern_to_search" {} | sed "s/:/ : /"'
if you find a ".git" path exposed on the web application, you can use git-dumper to download the repository
Use gitdumper to download the repo:
https://github.com/arthaud/git-dumper
pip install git-dumper
e.g.
running git-dumper to dump out files
git-dumper http://source.cereal.htb/.git ./
display all files, including deleted
git status
In the .git directory, you can find the history of commits
git log
Show the git commit/changes
git show <commit_hash>
revert to the last commit, restoring all the files
git reset --hard
Github Dorking
Archeologit
Trufflehog
Git clone --bare | Git log -S'' -p --pretty=format:"%h" --name-only
Sensitive File Disclosure
site:*.example.com (ext:doc OR ext:docx OR ext:odt OR ext:pdf OR ext:rtf OR ext:ppt OR ext:pptx OR ext:csv OR ext:xls OR ext:xlsx OR ext:txt OR ext:xml OR ext:json OR ext:zip OR ext:rar OR ext:md OR ext:log OR ext:bak OR ext:conf OR ext:sql)
IDOR Recon
Object Mapping
Sequential Testing
BAC Recon
Authentication
Session Info
aqua
Auto Email
User Accounts
Upload Functions
Parameters
User Role Recon
Endpoint Analysis
Subdomain discovery
alive.txt
JS Files & API Endpoint discovery
Crawling (get all links)
Waybackurls (get all links)
Directory discovery (get all links)
Redirection discovery
Broken Link discovery
APIs