Device_QEMU
The LeechCore library supports reading and writing live QEMU Guest VM memory from the host at very high speeds.
Facts in short:
- Tested on Ubuntu 20.04 and 22.04 (qemu + libvirt).
- Only works on Linux hosts, Windows hosts are not supported (Windows guests are supported).
- No kernel modules or changes to QEMU - only configuration required!
- Memory is acquired via either the shared memory approach or the HugePages approach.
- Acquires memory in read/write mode.
- Acquired memory is assumed to be volatile.
The QEMU plugin and the shared memory implementation are contributed by Mathieu Renard - www.h2lab.org - @H2lab_org.
The QEMU hugepages implementation is contributed by Aodzip.
The QEMU plugin is available as a separate plugin in the LeechCore-plugins repository. The QEMU plugin is pre-packaged in Linux x64 release builds of LeechCore, PCILeech and MemProcFS by default.
LeechCore API:
Please specify the acquisition device type in LC_CONFIG.szDevice when calling LcCreate. The acquisition device type is qemu.
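As a minimal sketch of what this looks like in code (assuming the standard LeechCore C API from leechcore.h; the device string follows the examples further down and error handling is kept minimal), opening the QEMU device and reading one page of guest physical memory could be done along these lines:

#include <leechcore.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    LC_CONFIG cfg;
    HANDLE hLC;
    BYTE pb[0x1000];
    memset(&cfg, 0, sizeof(cfg));
    cfg.dwVersion = LC_CONFIG_VERSION;
    // qemu device string: shared memory file under /dev/shm + optional QMP socket (example names)
    strncpy(cfg.szDevice, "qemu://shm=qemu-win7.mem,qmp=/tmp/qmp-win7.sock", sizeof(cfg.szDevice) - 1);
    hLC = LcCreate(&cfg);
    if(!hLC) {
        printf("LcCreate failed\n");
        return 1;
    }
    // read one 4kB page of guest physical memory at address 0x1000
    if(LcRead(hLC, 0x1000, sizeof(pb), pb)) {
        printf("read ok, first byte: %02x\n", pb[0]);
    }
    LcClose(hLC);
    return 0;
}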
PCILeech / MemProcFS:
Please specify the device type in the -device option to PCILeech/MemProcFS.
Options:
- shm= : Shared memory device to use (under /dev/shm). Required if the shared memory method is used.
- hugepage-pid= : libvirt / QEMU process id to target. Required if the huge pages method is used.
- qmp= : QEMU QMP unix socket used to retrieve the guest VM memory map (optional).
- delay-latency-ns= : Delay in ns applied once per read request (optional).
- delay-readpage-ns= : Delay in ns applied per read page (optional).
Examples:
Connect to QEMU shared memory qemu-win7.mem:
-device qemu://shm=qemu-win7.mem
Connect to QEMU shared memory qemu-win7.mem and retrieve the memory map via QMP:
-device qemu://shm=qemu-win7.mem,qmp=/tmp/qmp-win7.sock
Connect to QEMU shared memory qemu-win7.mem, retrieve the memory map via QMP and emulate a PCILeech FPGA device speed-wise:
-device qemu://shm=qemu-win7.mem,qmp=/tmp/qmp-win7.sock,delay-latency-ns=150000,delay-readpage-ns=20000
Connect to the QEMU huge pages backend for libvirt process id (PID) 5421 and retrieve the memory map via QMP:
-device qemu://hugepage-pid=5421,qmp=/tmp/qmp-win7.sock
The LeechCore plugin leechcore_device_qemu.so from the LeechCore-plugins repository should reside in the same directory as leechcore.so. The QEMU plugin is pre-packaged in Linux x64 release builds of LeechCore, PCILeech and MemProcFS by default.
If running as a user (not as root), the user must have read/write access to the shared memory device under /dev/shm/ and to the QMP socket.
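As a quick sketch (using the example file names from this page; setfacl requires the acl package), access can be checked and granted like this:

ls -l /dev/shm/qemu-win7.mem /tmp/qmp-win7.sock
sudo setfacl -m u:$USER:rw /dev/shm/qemu-win7.mem /tmp/qmp-win7.sock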
The QMP socket is optional but strongly recommended. If it is not used, an alternative is to specify a manual memory map (via the command line option -memmap). For information about how to retrieve the memory map from the running VM using Sysinternals RAMMap from Microsoft, please see the LeechCore wiki.
The following example shows MemProcFS connected to the QEMU shared memory device at /dev/shm/qemu-win7.mem, with the guest VM physical memory map retrieved from the QMP socket at /tmp/qmp-win7.sock. The VM memory is analyzed live by MemProcFS in the FUSE file system mounted at /mnt/.
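A command line along the following lines would give that setup (a sketch; the memprocfs binary location and mount point depend on your installation):

./memprocfs -device qemu://shm=qemu-win7.mem,qmp=/tmp/qmp-win7.sock -mount /mnt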
PCILeech has also been run against the live memory (simultaneously with MemProcFS being connected). Memory has been "dumped" at very high speeds (4GB/s) and winlogon.exe has been patched to allow spawning a system command shell by pressing left shift 5 times (stickykeys).
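A corresponding memory dump with PCILeech could look like this (a sketch, using the same example device string):

./pcileech dump -device qemu://shm=qemu-win7.mem,qmp=/tmp/qmp-win7.sock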
Below is a guide on how to configure the QEMU VM for shared memory (SHM) and QMP on Ubuntu 20.04 and 22.04.
If using virt-manager, edit the VM domain XML config to create the shared memory device and optionally also the QMP socket (for reading the VM physical memory map). XML editing may first have to be enabled in the virt-manager menu Edit > Preferences > General > Enable XML editing.
Open the VM XML configuration. Before the closing </domain> XML tag add (Ubuntu 22.04):
<qemu:commandline>
<qemu:arg value="-object"/>
<qemu:arg value="memory-backend-file,id=qemu-ram,size=6144M,mem-path=/dev/shm/qemu-win7.mem,share=on"/>
<qemu:arg value="-machine"/>
<qemu:arg value="memory-backend=qemu-ram"/>
<qemu:arg value="-qmp"/>
<qemu:arg value="unix:/tmp/qmp-win7.sock,server,nowait"/>
</qemu:commandline>
Open the VM XML configuration. Before the closing </domain> XML tag add (Ubuntu 20.04):
<qemu:commandline>
<qemu:arg value="-object"/>
<qemu:arg value="memory-backend-file,id=qemu-ram,size=6144M,mem-path=/dev/shm/qemu-win7.mem,share=on"/>
<qemu:arg value="-numa"/>
<qemu:arg value="node,memdev=qemu-ram"/>
<qemu:arg value="-qmp"/>
<qemu:arg value="unix:/tmp/qmp-win7.sock,server,nowait"/>
</qemu:commandline>
If adding the <qemu:commandline> does not work (i.e. the changes are removed once clicking apply), then also modify the initial <domain> XML tag to:
<domain xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0" type="qemu">
In the above config the shared memory device will be called qemu-win7.mem. It must be stored at /dev/shm.
The shared memory size must match exactly the amount of memory configured for the VM. In this example the VM has 6GB (6144MB) of memory. Change the memory size at size=6144M as needed.
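As a quick sanity check (a sketch, using the file name from this example), verify that the backing file exists and that its size matches the configured VM memory once the VM has been started:

ls -lh /dev/shm/qemu-win7.mem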
A QMP unix socket is also created at /tmp/qmp-win7.sock. This socket is queried by the QEMU plugin to retrieve the VM physical memory map.
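The socket can also be tested manually, for example with socat (a sketch; socat may need to be installed first, and query-status is just an arbitrary QMP command used for the test):

printf '{"execute":"qmp_capabilities"}\n{"execute":"query-status"}\n' | socat - UNIX-CONNECT:/tmp/qmp-win7.sock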
Ubuntu deploys a security feature called AppArmor which must be configured to allow QEMU to create the shared memory device and the QMP socket.
Open the file /etc/apparmor.d/local/abstractions/libvirt-qemu and add the lines below (modify to what your devices are called if needed):
/dev/shm/qemu-win7.mem rw,
/tmp/qmp-win7.sock rw,
After AppArmor has been configured, restart the libvirt service by running:
systemctl restart libvirtd.service
It's also possible to run this directly with QEMU, without using libvirt / virt-manager. Example below:
qemu-system-x86_64 -kernel vmlinuz.x86_64 -m 512 -drive format=raw,file=debian.img,if=virtio,aio=native,cache.direct=on \
-enable-kvm -append "root=/dev/mapper/cl-root console=ttyS0 earlyprintk=serial,ttyS0,115200 nokaslr" \
-initrd initramfs.x86_64.img \
-object memory-backend-file,id=mem,size=512M,mem-path=/dev/shm/qemu.mem,share=on \
-machine memory-backend=mem \
-qmp unix:/tmp/qmp.sock,server,nowait
- Ensure the shared memory file and the QMP socket are both readable and writable by the user running PCILeech/MemProcFS.
- Use a hex editor (such as the command-line hexedit) to open the shared memory file after the VM has been started to ensure it contains memory data (and not all nulls/zeroes).
Below is a guide on how to configure the QEMU VM for huge pages and optionally QMP on Ubuntu 22.04.
Note that the huge pages approach may require the user to run the memory acquisition program (PCILeech or MemProcFS) as root due to file permissions. If this is not desirable, the shared memory approach may be used instead.
Before configuring QEMU ensure hugepages are enabled on the Linux system:
grep Huge /proc/meminfo
Hugepagesize is usually set to 2048kB (2MB) on most Intel systems, but may sometimes be set to 1GB as well.
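Example output on a system with the default 2MB huge page size where no huge pages have been allocated yet (exact lines and values vary per system):

HugePages_Total:       0
HugePages_Free:        0
Hugepagesize:       2048 kB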
Allocate a number of huge pages (in multiples of Hugepagesize). In the example below 8192 2MB pages are allocated (16GB):
sysctl -w vm.nr_hugepages=8192
The above is just a short example of HugePages configuration; search for more information, including how to make the allocation persistent across reboots.
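As a hedged example of making the allocation persistent (a sketch; the file name under /etc/sysctl.d is arbitrary):

echo 'vm.nr_hugepages=8192' | sudo tee /etc/sysctl.d/99-hugepages.conf
sudo sysctl --system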
If using virt-manager, edit the VM domain XML config to make the VM use hugepages and optionally also the QMP socket (for reading the VM physical memory map). XML editing may first have to be enabled in the virt-manager menu Edit > Preferences > General > Enable XML editing.
Open the VM XML configuration. After the </currentMemory> XML tag add:
<memoryBacking>
<hugepages>
<page size="2048" unit="KiB"/>
</hugepages>
<access mode="shared"/>
</memoryBacking>
The above will make the VM use huge memory with a page size of 2MB. Adjust accordingly if the huge page size is 1GB. Also there must be huge pages allocated in the system as described in "Example configuration of HugePages on Ubuntu".
It is also recommended to add a QMP socket for easy retrieval of the VM memory map. Alternatively, the memory map may be specified manually to PCILeech / MemProcFS.
Open the VM XML configuration. Before the closing </domain> XML tag add (Ubuntu 22.04):
<qemu:commandline>
<qemu:arg value="-qmp"/>
<qemu:arg value="unix:/tmp/qmp-vm-hugepage.sock,server,nowait"/>
</qemu:commandline>
If adding the <qemu:commandline> does not work (i.e. the changes are removed once clicking apply), then also modify the initial <domain> XML tag to:
<domain xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0" type="qemu">
Ubuntu deploys a security feature called AppArmor which must be configured to allow QEMU to create the QMP socket.
Open the file /etc/apparmor.d/local/abstractions/libvirt-qemu and add the line below (modify to what your socket is called if needed):
/tmp/qmp-vm-hugepage.sock rw,
After AppArmor has been configured, restart the libvirt service by running:
systemctl restart libvirtd.service
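Once the VM is running, the QEMU process id to pass to the hugepage-pid= option can be found with, for example (a sketch; filter on your VM name as needed):

pgrep -a qemu-system-x86_64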
The method described in this wiki entry uses QEMU shared memory to access QEMU guest memory directly. There is an alternative method which uses an emulated QEMU PCIe device to access memory via QEMU virtualized PCIe DMA. This wiki entry does not describe that alternative method; more information is however available in the LeechCore plugin repo - leechcore_device_qemu_pcileech.
Sponsor PCILeech and MemProcFS:
PCILeech and MemProcFS are free and open source!
I put a lot of time and energy into PCILeech and MemProcFS and related research to make this happen. Some aspects of the projects relate to hardware and I put quite some money into my projects and related research. If you think PCILeech and/or MemProcFS are awesome tools and/or if you have had a use for them, it's now possible to contribute by becoming a sponsor!
If you like what I've created with PCILeech and MemProcFS with regards to DMA, Memory Analysis and Memory Forensics and would like to give something back to support future development, please consider becoming a sponsor at: https://github.com/sponsors/ufrisk
Thank You 💖