Fix incorrect handling of brk syscall when shrinking the heap #2776
Conversation
Won't this mask some potential bugs that access mem out of the page? I guess they could be found with QAsan or similar, so may not be a huge issue, just pointing it out.
Hey @domenukk Hmmm, I don't see how this could mask potential bugs. QEMU is responsible for intercepting the target's brk syscall and making the heap segment grow and shrink by calling into the LibAFL side; we are just complying with QEMU's behaviour to keep the snapshot healthy: we intercept brk syscalls and make the snapshotted brk interval grow or shrink.
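For a concrete picture of that division of labour, here is a minimal, self-contained sketch; qemu_apply_brk and SnapshotBrk are made-up names for illustration only, not the real QEMU or SnapshotModule API:

```rust
/// Simplified model of QEMU's brk decision: the break can move up or down,
/// but requests below the initial break are ignored (illustrative only).
fn qemu_apply_brk(initial_brk: u64, current_brk: u64, requested: u64) -> u64 {
    if requested < initial_brk {
        current_brk // QEMU leaves the heap untouched
    } else {
        requested // grow or shrink to the requested value
    }
}

/// Hypothetical snapshot-side tracker: it only mirrors what QEMU applied.
struct SnapshotBrk {
    initial_brk: u64,
    current_brk: u64,
}

impl SnapshotBrk {
    fn on_brk(&mut self, applied_brk: u64) {
        // Follow QEMU in both directions so the snapshot stays healthy.
        self.current_brk = applied_brk;
    }
}

fn main() {
    let initial = 0x4000_0000u64;
    let mut snap = SnapshotBrk { initial_brk: initial, current_brk: initial };

    for req in [0x4000_9000u64, 0x4000_3000, 0x3fff_0000] {
        let applied = qemu_apply_brk(snap.initial_brk, snap.current_brk, req);
        snap.on_brk(applied);
    }
    // Grown, then shrunk, then the below-initial request was ignored.
    assert_eq!(snap.current_brk, 0x4000_3000);
}
```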
Well you don't shrink the page, right? Ergo, anything accessing data in the shrunk part of the page will continue to work in our case, but crash in a real target, or am I missing something?
Ah qemu never shrinks anyway? Well ... |
why? yes, QEMU shrinks: https://github.com/AFLplusplus/qemu-libafl-bridge/blob/b01a0bc334cf11bfc5e8f121d9520ef7f47dbcd1/linux-user/syscall.c#L843 And that's the point, we weren't shrinking on the snapshot side. Before, the snapshot hook only ever grew the tracked brk interval, while QEMU was handling the shrink correctly. Do you see the error now? Now we keep the snapshotted brk interval in sync with QEMU, shrinking it as well when the heap shrinks.
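Roughly, the old assumption versus the fix looks like this; old_update and fixed_update are made-up names for illustration, not the actual hook code:

```rust
// Old behaviour (sketch): the hook assumed the break only ever moved upward,
// so a shrinking brk left the snapshot with a stale, too-large interval.
fn old_update(tracked_end: &mut u64, new_brk: u64) {
    if new_brk > *tracked_end {
        *tracked_end = new_brk; // grow only; shrinks were silently dropped
    }
}

// Fixed behaviour (sketch): mirror QEMU and move the tracked end both ways,
// as long as the request does not fall below the initial break.
fn fixed_update(tracked_end: &mut u64, initial_brk: u64, new_brk: u64) {
    if new_brk >= initial_brk {
        *tracked_end = new_brk;
    }
}

fn main() {
    let initial = 0x1000_0000u64;
    let (mut old_end, mut fixed_end) = (initial, initial);

    for brk in [0x1000_8000u64, 0x1000_2000] {
        old_update(&mut old_end, brk);
        fixed_update(&mut fixed_end, initial, brk);
    }

    // After the shrink, the old hook still reports the larger, stale interval.
    assert_eq!(old_end, 0x1000_8000);
    assert_eq!(fixed_end, 0x1000_2000);
}
```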
Yes we are shrinking now, but I don't fully understand your "will continue working in our case". If a target was accessing an unmapped region (the part that shrunk), it would crash and LibAFL would have caught the crash and reported it. Here we are talking about snapshot state though, so this problem caused the snapshot to go out of sync with the target's actual heap. I hope I could explain myself better this time. I'm here if you have other questions :)
looks ok to me.
@cube0x8 is right imo, we need to take brk updates into account when the heap shrinks as well.
```diff
@@ -11,7 +11,7 @@ use crate::cargo_add_rpath;

 pub const QEMU_URL: &str = "https://github.com/AFLplusplus/qemu-libafl-bridge";
 pub const QEMU_DIRNAME: &str = "qemu-libafl-bridge";
-pub const QEMU_REVISION: &str = "b01a0bc334cf11bfc5e8f121d9520ef7f47dbcd1";
+pub const QEMU_REVISION: &str = "e558cafe7cf565465e9850ccb310c9d40eecc723";
```
can you update this to 06bf8facec33548840413fba1b20858f58e38e2d please?
Previously, the QEMU snapshot hook for the `brk` syscall only handled cases where the heap was growing. Each `brk` call would incrementally update the mapping in the snapshot, assuming that the new brk value was always larger than the previous one. This is not always the case though, because the heap can also shrink.

Now:
- `initial_target_brk` is stored in `SnapshotModule`.
- When the snapshotted mapping is updated (via `change_mapped`), `initial_target_brk` is used as `start_addr`, and the new brk value as `end_addr`, so the interval can both grow and shrink (see the sketch after this list).
- If the new brk value is lower than `initial_target_brk`, the mapping remains unchanged, aligning with QEMU's behavior in such cases.
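A minimal sketch of the resulting hook logic, under assumed placeholder signatures; brk_hook and the change_mapped shown here are illustrative stand-ins, not the real SnapshotModule API:

```rust
/// Placeholder for the snapshot-side mapping update; in the real module this
/// would be SnapshotModule's change_mapped, here it just records the range.
fn change_mapped(start_addr: u64, end_addr: u64, mapped: &mut (u64, u64)) {
    *mapped = (start_addr, end_addr);
}

/// Sketch of the brk hook: the snapshotted heap interval always spans from the
/// stored initial_target_brk to the latest brk value, whether the heap grew or
/// shrank. Requests below initial_target_brk are ignored, matching QEMU.
fn brk_hook(initial_target_brk: u64, new_brk: u64, mapped: &mut (u64, u64)) {
    if new_brk < initial_target_brk {
        return; // QEMU does not move the break below its initial value
    }
    change_mapped(initial_target_brk, new_brk, mapped);
}

fn main() {
    let initial_target_brk = 0x2000_0000u64;
    let mut mapped = (initial_target_brk, initial_target_brk);

    brk_hook(initial_target_brk, 0x2000_6000, &mut mapped); // heap grows
    brk_hook(initial_target_brk, 0x2000_1000, &mut mapped); // heap shrinks
    brk_hook(initial_target_brk, 0x1fff_0000, &mut mapped); // ignored

    assert_eq!(mapped, (0x2000_0000, 0x2000_1000));
}
```

Anchoring the mapping at `initial_target_brk` means a single range update covers both growth and shrinkage, instead of the previous incremental, grow-only updates.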