Recon Tracks

  • Asset Discovery Track
  • App Functionality Track
  • Vulnerability Detection
  • Tech Stack Track
  • OSINT

Subdomain Discovery

Steps


  1. Subdomain Discovery subd.txt
  2. Alive Subdomain Discovery alive.txt
  3. Take screenshots of the target subdomains
  4. PrettyRecon (Paid Service)
  5. Subdomain Takeover Quick Check
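
As a quick start, steps 1 and 2 can be chained into one pipeline (a minimal sketch, assuming subfinder and httpx are installed):

subfinder -d example.com -all -silent | httpx -silent -o alive.txt  # steps 1-2 combined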
1. Subdomain Discovery: subd.txt

Tools: subfinder, sublist3r & assetfinder

subfinder -d $trgt1 -all -rl 10 -o target.subd.txt
subfinder -d example.com -all -recursive > target.subd.txt

CRT.sh:
Go to www.crt.sh and search for the target domain. You may be able to discover new subdomains from this site.
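
You can also query crt.sh from the command line (a sketch, assuming jq is installed):

curl -s "https://crt.sh/?q=%25.example.com&output=json" | jq -r '.[].name_value' | sort -u >> target.subd.txt  # %25 is a URL-encoded wildcard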

Note: you can also perform bruteforce subdomain searches + recursive searches on discovered subdomains:

dnsrecon -t brt -d domain.com -D /usr/share/seclists/Discovery/DNS/subdomains-top1million-5000.txt > subd.txt
2. Alive Subdomain Discovery: alive.txt

Find alive subdomain hosts

httpx -l target.subd.txt -o alive.txt

For installation instructions, see HTTPX in the Tools section

cat target.subd.txt | httpx-toolkit -ports 80,443,8080,8000,8888 -threads 200 > subdomains_alive.txt
3. Take screenshots of the target subdomains
cat target.subd.txt | aquatone -chrome-path /usr/bin/chrome
  • Navigate through aquatone's main report and determine the interesting subdomains
4. PrettyRecon (Paid Service)

Use PrettyRecon to gather additional subdomains for the target.

Free alternative - dnsrecon brute force:

dnsrecon -t brt -D /usr/share/seclists/Discovery/DNS/subdomains-top1million-5000.txt -d backdoor.htb
5. Subdomain Takeover Quick Check
subzy run --targets subdomains.txt --concurrency 100 --hide_fails --verify_ssl

Tools

Dig
for subdomain in $(cat wordlist.txt); do dig $subdomain.example.com +short; done
AssetFinder

Example

assetfinder --subs-only example.com > subd.txt
FFUF

Example

ffuf -u https://FUZZ.example.com -w /usr/share/seclists/Discovery/DNS/subdomains-top1million-5000.txt
Sublist3r

Example

sublist3r -d example.com -o subd.txt
DNScan

Example

python3 dnscan.py -d example.com -w /usr/share/seclists/Discovery/DNS/subdomains-top1million-5000.txt -o subd.txt
HTTPX

Example

httpx -l subs.txt -o alive_subs.txt

Install:

go install -v github.com/projectdiscovery/httpx/cmd/httpx@latest
sudo cp ~/go/bin/httpx /usr/bin/httpx

IDOR Recon

Steps

  1. Object Mapping
  2. Sequential Testing

Tools

Hidden Parameters

Steps

  1. Hidden GET Parameters
  2. Hidden POST Parameters
1. Hidden GET Parameters
arjun -u https://site.com/endpoint.php -oT arjun_output.txt -t 10 --rate-limit 10 --passive -m GET,POST --headers 
arjun -u https://site.com/endpoint.php -oT arjun_output.txt -m GET,POST -w /usr/share/wordlists/seclists/Discovery/Web-Content/burp-parameter-names.txt -t 10 --rate-limit 10 --headers 
2. Hidden POST Parameters
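
The arjun commands above already cover POST via -m GET,POST; to probe POST only, a minimal sketch:

arjun -u https://site.com/endpoint.php -m POST -oT arjun_post_output.txt -t 10 --rate-limit 10  # hypothetical endpoint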

Tools

Arjun
Burpsuite: extensions

Redirection Discovery

Steps

  1. Look for Redirection Points in the Application
  2. Google Dorks to find Redirect Parameters
  3. Path Fragment Redirection Discovery
  4. Header Based Redirection Discovery
1. Look for Redirection Points in the Application
Method 1: Spider with Burp
Method 2: Manual hunting
Method 3: Grep For redirect
  • If you have a list of links, use the 'gf' tool with the argument 'redirect' to look for links that contain potential redirection parameters
cat urls1.txt | gf redirect
2. Google Dorks to find Redirect Parameters
inurl:%3Dhttp site:example.com
inurl:%3D%2F site:example.com
inurl:redirecturi site:example.com
inurl:redirect_uri site:example.com
inurl:redirecturl site:example.com
inurl:return site:example.com
inurl:returnurl site:example.com
inurl:relaystate site:example.com
inurl:forward site:example.com
inurl:forwardurl site:example.com
inurl:forward_url site:example.com
inurl:url site:example.com
inurl:uri site:example.com
inurl:dest site:example.com
inurl:destination site:example.com
inurl:next site:example.com
3. Path Fragment Redirection Discovery
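A quick manual probe (hypothetical endpoint; here the redirect target sits in the path rather than in a parameter):

curl -s -I "https://example.com/redirect/https://evil.com" | grep -i '^location'  # check if the path fragment lands in the Location header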
4. Header Based Redirection Discovery
Location-Based Open Redirection Discovery
Referer-Based Open Redirection Discovery
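A sketch for Referer-based checks (hypothetical endpoint), looking for the injected value reflected in the Location header:

curl -s -I -H "Referer: https://evil.com" "https://example.com/login" | grep -i '^location'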

Tools

Burpsuite
Google Dorking
gf

API/Endpoint Discovery

Steps

  1. JS File Analysis
  2. Endpoint Discovery GoBuster and Kiterunner
  3. Endpoint Bruteforcing FFUF
  4. Parameter Discovery Arjun
1. JS File Analysis
  • SecretFinder (Python)
  • Mantra (go)
2. Endpoint Discovery: GoBuster and Kiterunner
Gobuster

pattern.txt

{GOBUSTER}/v1
{GOBUSTER}/v2
api/v1/{GOBUSTER}
api/v2/{GOBUSTER}

API bruteforce gobuster command:

gobuster dir -u http://$trgt1:5002 -w /usr/share/wordlists/dirb/big.txt -p pattern.txt
Kiterunner
kr wordlist list

.kite file

kr scan http://192.168.241.16:5002 -w /path/to/routes-small.kite 

assetnote wordlist

kr scan http://$trgt1:5002 -A ASSET_NOTE_ALIAS
kr scan http://$trgt1:5002 -A apiroutes-240528

More scanning options

kr scan http://$trgt1 -w routes.kite -x 20 -j 100 --ignore-length=1053
kr scan http://$trgt1:5002 -w routes.kite -A=apiroutes-240528
kr brute http://$trgt1:5002 -A raft-small-directories

kiterunner replay command:

kr kb replay -w routes.kite "REPLAY_STRING"

send replay to Burp

kr kb replay -w routes.kite "REPLAY_STRING" --proxy=http://127.0.0.1:8080
3. Endpoint Bruteforcing: FFUF
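A minimal sketch (the wordlist path assumes SecLists is installed under /usr/share/seclists):

ffuf -u http://$trgt1:5002/api/v1/FUZZ -w /usr/share/seclists/Discovery/Web-Content/api/objects.txt -mc 200,301,401,403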
4. Parameter Discovery: Arjun
arjun -u https://site.com/endpoint.php -oT arjun_output.txt -t 10 --rate-limit 10 --passive -m GET,POST --headers 
arjun -u https://site.com/endpoint.php -oT arjun_output.txt -m GET,POST -w /usr/share/wordlists/seclists/Discovery/Web-Content/burp-parameter-names.txt -t 10 --rate-limit 10 --headers 

Tools

KiteRunner
FFUF
Arjun
Burpsuite

Javascript Discovery

Steps

  1. JS File Discovery Burpsuite
  2. JS File Discovery subjs.txt -> xnPath.txt -> xnFullPath.txt -> xnJS_URL.txt
  3. JS File Discovery with Katana
  4. JS File Analysis SecretFinder.git LinkFinder.git -> n.out.html
1. JS File Discovery: Burpsuite
  • JS Miner
2. JS File Discovery: subjs.txt -> xnPath.txt -> xnFullPath.txt -> xnJS_URL.txt
2.1. Subjs: subjs.txt

Perform subjs against gathered subdomains. This will look for js paths on the specified targets within alive.txt

Prefix hostnames with http:// and save the result as alive.txt:

for i in $(cat subd.txt); do echo "http://$i"; done > alive.txt

Run subjs against the URLs:

subjs -i alive.txt -ua "aslam4dm" | tee subjs.txt
2.2. XnLinkFinder: xnPath.txt -> xnFullPath.txt -> xnJS_URL.txt

This tool is used to discover endpoints (and potential parameters) for a given target.

This command goes through each js webpath in subjs.txt and performs xnLinkFinder.py to look for additional endpoints/paths specified in the js code, saving the output to xnPath.txt

python3 xnLinkFinder.py -i subjs.txt -sf live.somesite.com -o xnPath.txt

Create a file containing the full paths and save as xnFullPath.txt
Note: this may be problematic, because some of the discovered files may only apply to the root domain

for p in $(cat xnPath.txt); do echo "https://<target.com>$p" >> xnFullPath.txt; done

Extract .js files only and save to xnJS_URL.txt

grep '\.js$' xnFullPath.txt > xnJS_URL.txt
3. JS File Discovery with Katana
echo example.com | katana -d 5 | grep -E "\.js$" | nuclei -t nuclei-templates/http/exposures/ -c 30
4. JS File Analysis: xnJS_URL.txt -> n.out.html
SecretFinder: n.out.html

Looking through subjs.txt and the webpaths from xnJS_URL.txt, analyse the javascript files for sensitive information exposure.

python3 SecretFinder.py -i https://example.com/1.js -o result.html

Loop through JS URLs and perform SecretFinder.py on them. Save the output as n.out.html

counter=1; while read -r line; do python ~/Tools/secretfinder/SecretFinder.py -i "$line" -o "${counter}.out.html"; ((counter++)); done < xnJS_URL.txt

Analyse an entire domain

python3 SecretFinder.py -i https://example.com/ -e
LinkFinder
python linkfinder.py -i https://example.com -d
TruffleHog

Tools

subjs
subjs -i alive.txt
secretfinder & linkfinder
xnLinkFinder.py
python3 ~/Tools/xnLinkFinder/xnLinkFinder.py -i js.txt -sf somesite.com
burpsuite: extensions
  • JS Miner


SSL Enumeration

SSLScan:

sslscan $trgt1

sslscan -h

SSLLabs:
Go to www.ssllabs.com to review the SSL configuration of the target.

CRT.sh:
Go to www.crt.sh and search for the target domain. This site can also be used to discover potential subdomains.

Other (misc)

Look out for the following:

  • Webpage title
  • Response Headers
  • 3rd Parties & Dependencies
  • CVEs
  • Shodan/Wappalyzer
  • Verbose Error Pages
  • Template Engine Enumeration

Grep For (GF) | HTTPX | Aquatone on Discovered Links

Steps

  1. Grep For vulnerable link
  2. HTTPX to view Response Codes
  3. Aquatone to view link state
1. Grep For Vulnerable Links
Set up GF:
  1. Install: go install github.com/tomnomnom/gf@latest
  2. sudo cp ~/go/bin/gf /usr/bin/
  3. mkdir ~/.gf
  4. git clone https://github.com/Sherlock297/gf_patterns.git; cp gf_patterns/*.json ~/.gf

download gf from here: https://github.com/tomnomnom/gf

cat urls1.txt | gf xss > urls2.txt
  • Swap xss for ssrf, redirect, sqli, etc.
  • Make sure to remove all duplicates
  • Streamline the output file
2. HTTPX to view Response Codes
cat discovered_urls.txt | httpx -sc -title -nc -o target_urls.txt
grep '\[200\]' target_urls.txt > target_urls2.txt
awk '{print $1}' target_urls2.txt > target_urls_main.txt
3. Aquatone to view link state
cat target_urls_main.txt | aquatone


Directory Busting

Steps

  1. Select Target from Subdomain list (if applicable)
  2. Perform Recursive Directory Search
    2.1. Standard wordlist
    2.2. Large wordlist
    2.3. Custom Wordlist
  3. Perform File Search on Discovered Pages
  4. Go through Each Discovered Page
  5. Look for 'Disclosure' Vulnerabilities
1. Select Target from Subdomain list (if applicable)
2.1. Standard wordlist
python3 dirsearch.py -u <target url (subdomain url)> 
dirb http://$trgt1/aspnet_client/system_web/ fuzz.txt -r
2.2. Large wordlist
python3 dirsearch.py -u <target url> -w /usr/share/seclists/Discovery/Web-Content/directory-list-2.3-big.txt
2.3. Custom Wordlist
cewl
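
A minimal cewl sketch to build a custom wordlist from the target's own pages:

cewl https://example.com -d 2 -m 5 -w custom_wordlist.txt  # depth 2, minimum word length 5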
3. Perform File Search on Discovered Pages

Find aspx files on server: 

dirb http://$trgt1/ /usr/share/wordlists/dirb/common.txt -r -X .aspx
4. Go through Each Discovered Page
5. Look for 'Disclosure' Vulnerabilities

Tools

FeroxBuster
feroxbuster -u http://$trgt1 -w /usr/share/wordlists/dirb/common.txt
Dirsearch
dirsearch -u https://example.com -e php,cgi,htm,html,shtm,shtml,js,txt,bak,zip,old,conf,log,pl,asp,aspx,jsp,sql,db,sqlite,mdb,tar,gz,7z,rar,json,xml,yml,yaml,ini,java,py,rb,php3,php4,php5 --random-agent --recursive -R 3 -t 20 --exclude-status=404 --follow-redirects --delay=0.1
FFUF
ffuf -w seclists/Discovery/Web-Content/directory-list-2.3-big.txt -u https://example.com/FUZZ -fc 400,401,402,403,404,429,500,501,502,503 -recursion -recursion-depth 2 -e .html,.php,.txt,.pdf,.js,.css,.zip,.bak,.old,.log,.json,.xml,.config,.env,.asp,.aspx,.jsp,.gz,.tar,.sql,.db -ac -c -H "User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101 Firefox/91.0" -H "X-Forwarded-For: 127.0.0.1" -H "X-Originating-IP: 127.0.0.1" -H "X-Forwarded-Host: localhost" -t 100 -r -o results.json
Dirb

Example

dirb http://$trgt1/aspnet_client/system_web/ fuzz.txt -r

Find aspx files on server: 

dirb http://$trgt1/ /usr/share/wordlists/dirb/common.txt -r -X .aspx
GoBuster
gobuster dir -u http://$trgt1 -w /usr/share/wordlists/dirb/common.txt -x aspx

Crawling and Spidering

Steps

  1. Crawl with Hakrawler and Katana
  2. Crawl with BurpSuite
  3. Crawl with Scrapy
1. Crawl with Hakrawler and Katana

Hakrawler:

cat urls.txt | hakrawler -proxy http://localhost:8080
echo https://google.com | hakrawler -subs
echo google.com | haktrails subdomains | httpx | hakrawler

Katana:

katana -u subdomains_alive.txt -d 5 -ps -pss waybackarchive,commoncrawl,alienvault -kf -jc -fx -ef woff,css,png,svg,jpg,woff2,jpeg,gif -o allurls.txt
2. Crawl with BurpSuite
  • crawl and audit scan
3. Crawl with Scrapy

Wayback Machine

Steps

  1. Find URL Paths with Waybackurls
  2. Find URL Paths with Waymore
  3. Find URL Paths with Web.Archive
  4. Search URLs for Info Disclosure
1. Find URL Paths with Waybackurls
  1. Collect a list of URLs (you can crawl/spider too); test on multiple subdomains
waybackurls target.co.uk > urls1.txt
2. Find URL Paths with Waymore
python waymore.py -i target.co.uk -mode U
3. Find URL Paths with web.archive

modify the url parameter of the following URL - include the asterisk to account for subdomains:

https://web.archive.org/cdx/search/cdx?url=*.changeme.com&output=text&fl=original&collapse=urlkey
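
For example, fetched with curl (a sketch; swap in the target domain):

curl -s "https://web.archive.org/cdx/search/cdx?url=*.example.com&output=text&fl=original&collapse=urlkey" > wayback_urls.txt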
4. Search URLs for Info Disclosure
cat allurls.txt | grep -E "\.xls|\.xml|\.xlsx|\.json|\.pdf|\.sql|\.doc|\.docx|\.pptx|\.txt|\.zip|\.tar\.gz|\.tgz|\.bak|\.7z|\.rar|\.log|\.cache|\.secret|\.db|\.backup|\.yml|\.gz|\.config|\.csv|\.yaml|\.md|\.md5"

Whois

whois $trgt1

alternatively, you can use

www.whois.com / www.who.is

Webapp Functions

Steps

This step should be performed on all domains/subdomains in scope, across all pages.
Manually step through the web application, in addition to running spidering techniques against it.

Look out for the following mechanisms in the application (note: this is not an exhaustive list).

  1. Authentication Pages
    1.1. Login Page
    1.2. Registration Page
    1.3. Password Reset Page
    1.4. Redirection Parameters
    etc.

  2. Sessions and Tokens
    2.1. Session Cookies
    2.2. Tokens
    2.3. Token Decryption

  3. Account Creation
    3.1. Create multiple accounts
    3.2. Levels of privilege for user accounts
    3.3. Create accounts at multiple privilege levels
    3.4. Password Reset functions
    3.5. User Information Input fields
    3.6. Redirection Parameters

  4. Auto-Gen Emails
    4.1. Subscription function
    4.2. Signup function
    4.3. Reset Password function
    4.4. Purchase Receipt
    etc.

  5. Upload Functionality
    5.1. Media Upload Function
    5.2. Profile Picture Upload Function
    5.3. Document File Upload
    etc.

  6. API
    6.1.1. API Hosted (Application is a producer of API services)
    6.1.2. API Used (Application is a consumer of API services)
    6.2. API type (SOAP/REST/GraphQL etc.)
    6.3. API Endpoints
    6.4. API Response Rendering

  7. In Application User Functions
    7.1. Comment Functions
    7.2. Like Functions
    7.3. Share Functions
    7.4. Upload Functions
    7.5. In-app Direct Messaging
    7.6. Purchases
    7.7. AI Chat Functions
    7.8. Database Usage
    etc.

  8. PDF Generation
    8.1. Order Receipt
    etc.

  9. User Controllable Parameters
    9.1. Spider application
    9.2. Burpsuite HTTP history
    9.3. Browse the application manually

Tools

  • Manual enumeration
  • Firefox Addon: FoxPwn
  • Burpsuite: Extensions (FoxPwn, Autorize, Autorepeater) and HTTP history

BAC Recon

Steps

  1. Authentication
  2. Session Info
  3. Auto-Gen Emails
  4. User Accounts
  5. Upload Functions
  6. Parameter Discovery
  7. Redirection Parameters
  8. User Role Recon

Tools

Whatweb

$trgt1 = example.com | $trgt2 = site.com
Standard scan:

./whatweb $trgt1

Multiple targets:

./whatweb -v $trgt1 $trgt2

Aggressive scan:

./whatweb -a 3 -v $trgt1

Wappalyzer

Review the tech stack:

  • Hosting Panel
  • CMS
  • Backend Language
  • Server Info
  • Databases
  • Dependencies

CMS

WordPress

Nmap WordPress scripts:

nmap --script=http-wordpress* $trgt1

Basic Enum:

wpscan --url http://$trgt1 --enumerate

Auth Brute Force:

wpscan --url http://$trgt1/wp-login.php -U admin -P /usr/share/wordlists/rockyou.txt

Quick Plugin check:

wpscan --url http://$trgt1 --enumerate p --plugins-detection aggressive 

Aggressive plugin check
1:

wpscan -e ap --plugins-detection aggressive --url http://$trgt1

2:

wpscan --url https://site.com --disable-tls-checks --api-token <here> -e at,ap,u --plugins-detection aggressive --force

Searchsploit check on plugins:

searchsploit {plugin name}


Broken Link Hijacking

Steps

  1. Look through the Application for Broken Links
  2. Automated Search to Find Broken Links: broken-link-checker
  3. Determine What the Broken Link Does
blc http://yoursite.com -ro

Tools

broken-link-checker

blc <target URL> -ro
Hint - URL Sorting and Deduplication

Collect parameterised URLs and deduplicate them:

echo example.com | katana -d 5 -ps -pss waybackarchive,commoncrawl,alienvault -f qurl | urldedupe > output.txt
katana -u https://example.com -d 5 | grep '=' | urldedupe | anew output.txt

Strip parameter values and keep only dynamic pages:

cat output.txt | sed 's/=.*/=/' > final.txt
cat urls.txt | grep -E "\.php|\.asp|\.aspx|\.jspx|\.jsp" | grep '=' | sort > output.txt

BurpSuite Scan

Nuclei
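
A basic Nuclei run over the alive hosts (a sketch, assuming the nuclei-templates repo is cloned locally):

nuclei -l alive.txt -t nuclei-templates/http/ -c 30 -o nuclei_results.txt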

Nuclei Custom Template Creation

CVE discovery (fetchcve.py) https://github.com/aslam4dm/fetchcve/

IIS Vulnerability Detection

BChecks (custom Burp Suite scan checks)

Nikto

nikto -h $trgt1

Hint
For more ideas, see 2.3-HTTP and DNS from OSCP Methodo

Collect archived URLs with gau:

echo example.com | gau --mc 200 | urldedupe > urls.txt

IIS Tilde Enumeration

shortscan
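
Example (a minimal sketch; shortscan takes the target URL directly):

shortscan http://$trgt1/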

Vulnerability Scanning

Github Sensitive File Disclosure

Steps

  1. Github Dorking
  2. Automated Commit History Information Disclosure Discovery
  3. Manual Commit History Discovery
  4. .git Repo Exposure
1. Github Dorking
filename:manifest.xml  
filename:travis.yml  
filename:vim_settings.xml  
filename:database  
filename:prod.exs NOT prod.secret.exs  
filename:prod.secret.exs  
filename:.npmrc _auth  
filename:.dockercfg auth  
filename:WebServers.xml  
filename:.bash_history <Domain name>  
filename:sftp-config.json  
filename:sftp.json path:.vscode  
filename:secrets.yml password  
filename:.esmtprc password  
filename:passwd path:etc  
filename:dbeaver-data-sources.xml  
path:sites databases password  
filename:config.php dbpasswd  
filename:configuration.php JConfig password  
filename:.sh_history  
shodan_api_key language:python  
filename:shadow path:etc  
JEKYLL_GITHUB_TOKEN  
filename:proftpdpasswd  
filename:.pgpass  
filename:idea14.key  
filename:hub oauth_token  
HEROKU_API_KEY language:json  
HEROKU_API_KEY language:shell  
SF_USERNAME salesforce  
filename:.bash_profile aws  
extension:json api.forecast.io  
filename:.env MAIL_HOST=smtp.gmail.com  
filename:wp-config.php  
extension:sql mysql dump  
filename:credentials aws_access_key_id  
filename:id_rsa or filename:id_dsa
2. Automated Commit History Information Disclosure Discovery
2.1. Archaeologit

This tool goes through the following api link:
https://api.github.com/users/<USERNAME>/repos?type=all&per_page=100
It then grabs the list of repos created/forked by the user with the following:
grep clone_url | cut -d'"' -f4

Clones or updates the repository using git clone --bare -q for a new clone or git fetch -q origin HEAD:HEAD to update an existing clone.

Finds the commit history of users' repos with --bare
git clone --bare -q ${REPO_WITH_CREDS} ${CLONEPATH}
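
A minimal sketch of that flow (illustrative only, not the script itself):

# list the user's repos via the GitHub API, then bare-clone each one
USER=target_user   # placeholder GitHub username
curl -s "https://api.github.com/users/$USER/repos?type=all&per_page=100" \
  | grep clone_url | cut -d'"' -f4 \
  | while read -r repo; do git clone --bare -q "$repo"; done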

./archaeologit.sh aslam4dm 'secret|password|credentials' gitscan.out
2.2. TruffleHog
pip install trufflehog

TruffleHog searches through git repositories for secrets, digging deep into commit history and branches. This is effective at finding secrets that were accidentally committed.

trufflehog --regex --entropy=False https://github.com/<yourTargetRepo>
trufflehog --regex --entropy=False /path/to/downloaded/repo
3. Manual Commit History Discovery

A bare clone does not contain the working directory that a typical cloned repository has. It contains only the version history and the data associated with the repository.

This approach will scan through the commit history, searching for the specified pattern using git log and git grep to identify the commit number, file name, and line number where the pattern is located.

git clone --bare -q <repo url>
cd <path to repo>

Note: This command searches for a specific pattern (pattern_to_search) in the changes introduced in each commit, displaying the commit hash and modified file names using git log.
It then uses those commit hashes and file names to perform a search (git grep) for the pattern in those specific files for each commit.
Finally, it formats the output by padding the colons in the git grep output (sed) to improve readability.

git log -S'pattern_to_search' -p --pretty=format:"%h" --name-only | xargs -I{} sh -c 'git grep -n "pattern_to_search" {} | sed "s/:/ : /"'
4. .git Repo Exposure

If you find a ".git" path exposed on the web application, you can use git-dumper to download the repository.

Use git-dumper to download the repo:
https://github.com/arthaud/git-dumper

pip install git-dumper

E.g. running git-dumper to dump out the files:

git-dumper http://source.cereal.htb/.git ./ 
  • this downloads a whole bunch of files...

display all files, including deleted

git status

In the .git directory, you can find the history of commits:

git log

Show the git commit/changes

git show <commit_hash>

revert to the last commit - restoring all the files

git reset --hard

Tools

Github Dorking
Archaeologit
Trufflehog
git clone --bare | git log -S'pattern' -p --pretty=format:"%h" --name-only

Google Dorking

Steps

Sensitive File Disclosure

site:*.example.com (ext:doc OR ext:docx OR ext:odt OR ext:pdf OR ext:rtf OR ext:ppt OR ext:pptx OR ext:csv OR ext:xls OR ext:xlsx OR ext:txt OR ext:xml OR ext:json OR ext:zip OR ext:rar OR ext:md OR ext:log OR ext:bak OR ext:conf OR ext:sql)

Tools

Shodan Recon

Steps

Tools
