Device_QEMU
The LeechCore library supports reading and writing live QEMU Guest VM memory from the host at very high speeds.
Facts in short:
- Tested on Ubuntu 22.04 (qemu + libvirt).
- Only works on Linux hosts, Windows hosts are not supported (Windows guests are supported).
- No kernel modules or changes to QEMU - only configuration required!
- Acquires memory in read/write mode.
- Acquired memory is assumed to be volatile.
The QEMU plugin is contributed by Mathieu Renard - www.h2lab.org - @H2lab_org. It is available as a separate plugin in the LeechCore-plugins repository.
LeechCore API:
Please specify the acquisition device type in LC_CONFIG.szDevice when calling LcCreate. The acquisition device type is qemu.
PCILeech / MemProcFS:
Please specify the device type in the -device option to PCILeech/MemProcFS.
Options:
- shm= Shared memory device to use (under /dev/shm).
- qmp= QEMU QMP unix socket used to retrieve the guest VM memory map (optional).
Examples:
-device qemu://shm=qemu-win7.mem
-device qemu://shm=qemu-win7.mem,qmp=/tmp/qmp-win7.sock
No special requirements. If running as a user (not as root) the user must have read/write access to the shared memory device under /dev/shm/ and to the QMP socket (see the example below).
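A quick way to check access and, if needed, grant it to the current user is sketched below. The file names match the examples in this guide and the use of setfacl is an assumption - adapt to your own setup:
# verify that the current user can read and write the shared memory file and the QMP socket
ls -l /dev/shm/qemu-win7.mem /tmp/qmp-win7.sock
# one possible option (assumption): grant the invoking user rw access via ACLs instead of running as root
sudo setfacl -m u:$USER:rw /dev/shm/qemu-win7.mem
sudo setfacl -m u:$USER:rw /tmp/qmp-win7.sock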
The QMP socket is optional - but strongly recommended. If not used, an alternative is to specify a manual memory map (given by the command line option -memmap). For information about how to retrieve the memory map from the VM using Sysinternals RAMMap from Microsoft please see the [LeechCore wiki](https://github.com/ufrisk/LeechCore/wiki/Device_FPGA_AMD_Thunderbolt).
The following example shows MemProcFS being connected to the qemu shared memory device at /dev/shm/qemu-win7.mem, with the guest VM physical memory map retrieved from the QMP socket at /tmp/qmp-win7.sock. The VM memory is analyzed live by MemProcFS in the mounted FUSE file system at /mnt/.
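A command line matching this scenario is sketched below; the memprocfs binary name/path is an assumption for this example:
# mount the live guest memory analysis as a FUSE file system at /mnt/
./memprocfs -device qemu://shm=qemu-win7.mem,qmp=/tmp/qmp-win7.sock -mount /mnt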
PCILeech has also been run against the live memory (simultaneously with MemProcFS being connected). Memory has been "dumped" at very high speeds (4GB/s) and winlogon.exe has been patched to allow spawning a system command shell by pressing left shift 5 times (stickykeys).
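For reference, a plain memory dump over the same device could be taken roughly as sketched below; the pcileech binary name/path and the output file name are assumptions:
# dump guest VM physical memory to a file on the host
./pcileech dump -device qemu://shm=qemu-win7.mem,qmp=/tmp/qmp-win7.sock -out qemu-win7.raw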
Below is a short guide on how to configure the QEMU VM on Ubuntu 22.04. In addition to this, one may have to download and install libssl1.1 on Ubuntu 22.04 to get PCILeech/MemProcFS to run.
If using virt-manager, edit the VM domain XML config to create the shared memory device and optionally also the QMP socket (for reading the VM physical memory map). XML editing may first have to be enabled in the virt-manager menu Edit > Preferences > General > Enable XML editing.
Open the VM XML configuration. Note that the <qemu:commandline> element requires the QEMU XML namespace to be declared on the root element, i.e. <domain xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0' type='kvm'>. Before the closing </domain> XML tag, add (Ubuntu 22.04):
<qemu:commandline>
<qemu:arg value="-object"/>
<qemu:arg value="memory-backend-file,id=qemu-ram,size=6144M,mem-path=/dev/shm/qemu-win7.mem,share=on"/>
<qemu:arg value="-machine"/>
<qemu:arg value="memory-backend=qemu-ram"/>
<qemu:arg value="-qmp"/>
<qemu:arg value="unix:/tmp/qmp-win7.sock,server,nowait"/>
</qemu:commandline>
Open the VM XML configuration. Before the closing </domain> XML tag, add (Ubuntu 20.04):
<qemu:commandline>
<qemu:arg value="-object"/>
<qemu:arg value="memory-backend-file,id=qemu-ram,size=6144M,mem-path=/dev/shm/qemu-win7.mem,share=on"/>
<qemu:arg value="-numa"/>
<qemu:arg value="node,memdev=qemu-ram"/>
<qemu:arg value="-qmp"/>
<qemu:arg value="unix:/tmp/qmp-win7.sock,server,nowait"/>
</qemu:commandline>
In the above config the shared memory device will be called qemu-win7.mem. It must be stored at /dev/shm.
The configured size must exactly match the amount of memory configured for the VM. In this example VM the memory is 6GB (6144MB). Change the memory size at size=6144M.
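If the VM is managed by libvirt, one way to double-check the amount of memory configured for the guest is shown below (the domain name win7 is an assumption for this example):
# show the configured maximum memory of the guest (value reported in KiB)
virsh dominfo win7 | grep -i 'max memory'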
A QMP unix socket is also created at /tmp/qmp-win7.sock. This socket is queried by the QEMU plugin to retrieve the VM physical memory map.
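A rough connectivity check of the QMP socket is sketched below; it assumes socat is installed and is only a quick sanity check, not how the QEMU plugin talks to QMP:
# connect to the QMP socket, negotiate capabilities and query the VM run state
printf '{"execute":"qmp_capabilities"}\n{"execute":"query-status"}\n' | socat - UNIX-CONNECT:/tmp/qmp-win7.sock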
Ubuntu deploys a security feature called AppArmor that must be configured to allow QEMU to create the shared memory device and the QMP socket.
Open the file /etc/apparmor.d/local/abstractions/libvirt-qemu and add the lines below (modify the paths if your devices are named differently):
/dev/shm/qemu-win7.mem rw,
/tmp/qmp-win7.sock rw,
After AppArmor has been configured restart the libvirt service by running:
systemctl restart libvirtd.service
It's also possible to run this directly with QEMU without using libvirt / virt-manager. Example below:
qemu-system-x86_64 -kernel vmlinuz.x86_64 -m 512 -drive format=raw,file=debian.img,if=virtio,aio=native,cache.direct=on \
-enable-kvm -append "root=/dev/mapper/cl-root console=ttyS0 earlyprintk=serial,ttyS0,115200 nokaslr" \
-initrd initramfs.x86_64.img \
-object memory-backend-file,id=mem,size=512M,mem-path=/dev/shm/qemu-win7.mem,share=on \
-machine memory-backend=mem \
-qmp unix:/tmp/qmp-win7.sock,server,nowait
- Ensure the shared memory file and the QMP socket are both readable and writable by the current user running PCILeech/MemProcFS.
- Use a hex editor (such as the command-line hexedit) to open the shared memory file after the VM has been started to ensure it contains memory data (and not all nulls/zeroes).
- PCILeech currently depends on libssl1.1, which may have to be separately installed on the Ubuntu system (see the example below).
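Ubuntu 22.04 no longer ships libssl1.1 in its standard repositories. One commonly used workaround is to install the package from the Ubuntu 20.04 (focal) archive as sketched below; the exact package file name is an assumption and may have changed:
# download and install libssl1.1 from the focal archive (check the archive for the current file name)
wget http://archive.ubuntu.com/ubuntu/pool/main/o/openssl/libssl1.1_1.1.1f-1ubuntu2_amd64.deb
sudo dpkg -i libssl1.1_1.1.1f-1ubuntu2_amd64.deb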
Sponsor PCILeech and MemProcFS:
PCILeech and MemProcFS are free and open source!
I put a lot of time and energy into PCILeech and MemProcFS and related research to make this happen. Some aspects of the projects relate to hardware and I have put quite some money into my projects and related research. If you think PCILeech and/or MemProcFS are awesome tools and/or if you have had a use for them it's now possible to contribute by becoming a sponsor!
If you like what I've created with PCILeech and MemProcFS with regards to DMA, Memory Analysis and Memory Forensics and would like to give something back to support future development please consider becoming a sponsor at: https://github.com/sponsors/ufrisk
Thank You 💖