UK Security Researcher

A site containing various blog posts, tutorials, tools and information regarding working with me.


My methods of recon & testing

Getting XSS on one endpoint is good, but what if you want that XSS on another 25 endpoints? In this short tutorial I'll walk through a few of the methods I use for recon and testing. If you have anything you'd like to add, just DM me on Twitter!

Google Dorking
The most basic technique, yet sometimes the most overlooked. A few well-chosen operators (site:, inurl: and so on) go a long way when dorking on Google.

Don't forget you can prefix a term with - to exclude results containing it: site:google.com inurl:login -admin won't return URLs containing "admin".
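As a sketch, here's a small helper that generates dorks like the one above across a keyword list. The operators are standard Google search syntax; the keywords and domain are placeholders, not a definitive wordlist:

```python
# Build Google dork queries for a target domain.
# site:, inurl: and -term exclusion are standard Google operators;
# the default keywords here are just illustrative examples.

def build_dorks(domain, keywords=("login", "admin", "upload"), exclude=()):
    """Return one dork query per keyword, scoped to the domain."""
    excludes = " ".join(f"-{term}" for term in exclude)
    return [f"site:{domain} inurl:{kw} {excludes}".strip() for kw in keywords]

for query in build_dorks("example.com", exclude=("blog",)):
    print(query)
```

Swap in whatever keywords fit the target; the exclusion list is handy for filtering out noisy subdomains or paths.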

Sublist3r, wfuzz and robots.txt
When a site has a big scope I'll enumerate all subdomains and grab each one's robots.txt file, which very often leaks a lot of endpoints. With that data I'll use wfuzz to hit every known domain/endpoint discovered and keep whatever returns something other than a 404. I also scrape endpoints from various other places, such as Google and .js files.
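A minimal sketch of the robots.txt half of that workflow: parse out the Allow/Disallow paths so they can be fed to wfuzz (or any fuzzer) as a wordlist. The sample body is made up, and the actual fuzzing/status-code filtering is left to the tool itself:

```python
# Sketch: pull endpoint candidates out of a robots.txt body.
# The output is a wordlist you can feed into wfuzz, which then
# reports which paths return something other than a 404.

def robots_endpoints(robots_txt):
    """Extract Allow/Disallow paths from a robots.txt body."""
    paths = []
    for line in robots_txt.splitlines():
        line = line.split("#", 1)[0].strip()  # drop trailing comments
        if ":" not in line:
            continue
        directive, _, value = line.partition(":")
        if directive.strip().lower() in ("allow", "disallow"):
            value = value.strip()
            if value and value != "/":
                paths.append(value)
    return paths

sample = """User-agent: *
Disallow: /admin/
Disallow: /old-api/  # legacy
Allow: /public/
"""
print(robots_endpoints(sample))
```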

I always keep a local copy of discovered endpoints, because a parameter vulnerable to reflected XSS on one endpoint will often work on others. Using the data from the previous scrape we can run that parameter against all of their endpoints and see if any more are vulnerable.
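The replay step above can be sketched as a one-liner that tacks the known-reflecting parameter onto every saved endpoint. The endpoint list and payload are illustrative placeholders, not a specific target:

```python
# Sketch: replay a parameter known to reflect across a locally saved
# endpoint list, producing one test URL per endpoint.

from urllib.parse import urlencode

def replay_urls(endpoints, param, payload):
    """Append ?param=payload to every saved endpoint."""
    query = urlencode({param: payload})
    return [f"{ep}?{query}" for ep in endpoints]

saved = ["https://example.com/search", "https://example.com/help"]
for url in replay_urls(saved, "q", '"><script>alert(1)</script>'):
    print(url)
```

From there it's just a matter of requesting each URL and checking whether the payload comes back unencoded.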

I always say to people: keep a log of the endpoints/parameters you've found a bug on. It saves you time! :)

Whenever I'm checking out a site I'll keep a note of interesting endpoints that I know have been around a while but have had lots of updates. I'll feed in a list of URLs and scrape as much data as possible from the Wayback Machine's archives. One jackpot I've had success with is viewing a target's robots.txt from the last x years, then using the approach above to quickly determine which endpoints are still on the server.
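One way to script that robots.txt time travel is the Wayback Machine's public CDX API. The sketch below only builds the query URL (so it stays offline and testable); the endpoint and parameters come from the CDX API's documentation, and the target/year range are placeholders:

```python
# Sketch: build a Wayback Machine CDX API query listing captures of a
# target's robots.txt over a year range. "collapse=digest" keeps one
# row per unique robots.txt body, so you only see versions that changed.

from urllib.parse import urlencode

CDX = "https://web.archive.org/cdx/search/cdx"

def cdx_query(target, start_year, end_year):
    """Return a CDX URL listing snapshots of target's robots.txt."""
    params = {
        "url": f"{target}/robots.txt",
        "from": str(start_year),
        "to": str(end_year),
        "output": "json",
        "collapse": "digest",
    }
    return f"{CDX}?{urlencode(params)}"

print(cdx_query("example.com", 2015, 2020))
```

Fetch that URL, pull the paths out of each archived robots.txt, and you have a historical wordlist to fuzz against the live server.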

Old endpoints are usually vulnerable, as I found out yesterday when I was able to expose millions of users' data thanks to an old unsubscribe URL that wasn't used anymore and had been forgotten about.

Mohammed Diaa has released two great tools for automating scraping of the Wayback Machine. I highly recommend checking his tools out!

.js files
Carrying on from above, the reason for feeding in lots of interesting endpoints is that old JavaScript code might reveal even more of them. This is also why I check every .js file I find: they almost always contain interesting data such as more endpoints, dev comments and hardcoded info leaks (API keys etc.).
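A rough sketch of that .js triage: grep the file for endpoint-looking strings and obvious hardcoded keys. The regexes are crude heuristics for illustration, not an exhaustive secret scanner:

```python
# Sketch: scan a JavaScript body for quoted paths and obvious
# hardcoded credentials. Both patterns are rough heuristics.

import re

ENDPOINT_RE = re.compile(r'["\'](/[A-Za-z0-9_./-]+)["\']')
KEY_RE = re.compile(r'(?i)(api[_-]?key|secret|token)["\']?\s*[:=]\s*["\']([^"\']+)')

def scan_js(source):
    """Return (unique endpoint paths, (name, value) key candidates)."""
    endpoints = sorted(set(ENDPOINT_RE.findall(source)))
    keys = list(KEY_RE.findall(source))
    return endpoints, keys

js = 'fetch("/api/v1/users"); var apiKey = "AKIA123"; // TODO remove'
print(scan_js(js))
```

Run it over every .js file you've collected and diff the results against your endpoint log; dev comments are worth a manual read too.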

Depending on how big the scope is, I'll run masscan on the IP block they own to check for any interesting open ports and anything interesting they're running.
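As a sketch, the masscan invocation can be assembled like this. -p and --rate are real masscan flags; the CIDR, port list and rate are placeholders, and the rate especially should be tuned to stay within the programme's rules:

```python
# Sketch: assemble (not run) a masscan command line for an owned IP
# block. The CIDR and rate below are placeholders.

import shlex

def masscan_cmd(cidr, ports="0-65535", rate=1000):
    """Build the argv list for a masscan sweep of the given block."""
    return ["masscan", cidr, "-p", ports, "--rate", str(rate)]

print(shlex.join(masscan_cmd("203.0.113.0/24", ports="80,443,8080-8090")))
```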

Just use their site
A given, really. ;)

Depending on a program's scope, using all of the above techniques you should easily be able to start identifying lots of endpoints to play with. There are lots more recon techniques out there; these are just a few of my favourites. Got something interesting you want to add to the list? Just DM me and I'll add it!