Enumerate Admin Interfaces
Check List
Methodology
Admin Panel Enumeration / Exposed Admin-Login Path Disclosure
To find admin login paths, start with the robots.txt command in the cheat sheet below; the admin path is often disclosed directly in that file. Google dorks can also surface admin-related paths that search engines have indexed for the target.
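For a quick first check (a minimal sketch; the keyword pattern is an assumption and can be extended), pull robots.txt and grep it for admin-related entries:
# Fetch robots.txt and highlight entries that hint at an admin or login path
curl -s $WEBSITE/robots.txt | grep -iE 'admin|login|dashboard|panel'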
Next, use the subdomain and directory fuzzing commands to enumerate paths on the target that may host an admin login page.
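As an illustration (a sketch only; the wordlist path assumes the SecLists layout already used on this page, and the match codes are a judgment call), a login-focused wordlist can be fuzzed directly:
# Fuzz common login/admin paths; 200/301/302/401/403 responses are worth reviewing manually
ffuf -u $WEBSITE/FUZZ \
-w /usr/share/seclists/Discovery/Web-Content/Logins.fuzz.txt \
-c -mc 200,301,302,401,403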
Nmap can also be run against the target with NSE scripts that enumerate HTTP content and detect login pages, some of which may belong to an admin panel.
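For example (a sketch assuming web services on ports 80 and 443; adjust the ports to the target), the NSE output can be saved and filtered for login-related findings:
# Run the HTTP enumeration scripts and keep the normal output for later filtering
nmap -p80,443 -sV --script http-enum,http-frontpage-login -oN /tmp/nmap_http.txt $WEBSITE
grep -iE 'admin|login' /tmp/nmap_http.txt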
Comments and links left in page source sometimes disclose the admin login path. Katana can crawl the target and collect these URLs, and the fuzz-login.sh script in the cheat sheet can then be run against the collected output to flag any login-related paths.
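Before running the full Katana command from the cheat sheet, a quick pass (a minimal sketch; the keyword pattern is an assumption) can already surface obvious hits:
# Crawl the target, including JavaScript files, and flag URLs that look login-related
katana -u $WEBSITE -silent -js-crawl | grep -iE 'admin|login|dashboard|panel'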
Cheat Sheet
Search Engine Discovery
robots.txt
curl $WEBSITE/robots.txt
Google Dork
inurl:admin |
inurl:administrator |
inurl:admin-panel |
inurl:admin-dashboard |
inurl:wp-admin |
inurl:phpmyadmin |
inurl:dbadmin |
inurl:controlpanel |
inurl:adminpanel |
inurl:login |
intitle:admin |
intitle:login
site:$WEBSITE
Censys Search
services.http.response.body:"admin" OR
services.http.response.body:"adminstrator" OR
services.http.response.body:"admin-panel" OR
services.http.response.body:"admin-dashboard" OR
services.http.response.body:"wp-admin" OR
services.http.response.body:"phpmyadmin" OR
services.http.response.body:"dbadmin" OR
services.http.response.body:"controlpanel" OR
services.http.response.body:"adminpanel" OR
services.http.response.body:"login"
$WEBSITE
Port Scan
nmap -p0-10000 \
-sS \
-sV \
--mtu 5000 \
--script http-enum,http-frontpage-login \
$WEBSITE
Subdomain Fuzzing
dnsenum $WEBSITE \
-f /usr/share/seclists/Discovery/DNS/subdomains-top1million-20000.txt
gobuster dns \
--wildcard \
-d $WEBSITE \
-w /usr/share/seclists/Discovery/DNS/subdomains-top1million-20000.txt
Directory Fuzzing
nuclei -u $WEBSITE -tags panel,login
gobuster dir -u $WEBSITE \
-w /usr/share/seclists/Discovery/Web-Content/raft-large-directories.txt
ffuf -u $WEBSITE/FUZZ \
-w /usr/share/seclists/Discovery/Web-Content/raft-large-directories.txt \
-r -c -mc 200
urlfinder -d $WEBSITE
waybackurls $WEBSITE
Comment and Links
katana -u $WEBSITE \
-fr "(static|assets|img|images|css|fonts|icons)/" \
-o /tmp/katana_output.txt \
-xhr-extraction \
-automatic-form-fill \
-silent \
-strategy breadth-first \
-js-crawl \
-extension-filter jpg,jpeg,png,gif,bmp,tiff,tif,webp,svg,ico,css \
-headless --no-sandbox \
-known-files all \
-field url \
-sf url
cat /tmp/katana_output.txt | \
sed 's/\?.*//g' | \
sed 's/\.aspx$//' | \
sed 's/\/[^/]*\.json$//' | \
grep -v '\.js$' | \
grep -v '&' | \
sort -u > /tmp/urls.txt
sudo nano fuzz-login.sh
#!/bin/bash
# Path to the sensitive words list
SENSITIVE_WORDS="/usr/share/seclists/Discovery/Web-Content/Logins.fuzz.txt"
# Path to the URLs file
URLS_FILE="/tmp/urls.txt"
# Iterate through each sensitive word
while read -r word; do
  # Search for the word in the URLs file (case-insensitive)
  matches=$(grep -i "$word" "$URLS_FILE")
  # If matches are found, print the sensitive word and matched URLs
  if [[ ! -z "$matches" ]]; then
    echo "Sensitive word found: $word"
    echo "$matches"
    echo "--------------------"
  fi
done < "$SENSITIVE_WORDS"sudo chmod +x fuzz-login.sh;sudo ./fuzz-login.sh $WEBSITELast updated