AWS SDK/S3 RangeError: Out of memory failure randomly if streamed chunks are very large (> 500KB) #7820
Comments
Related (but not the same): #7428
This is crashing as well.
Interesting, considering AWS CDK also broke in 1.0.15. I thought this was a stream bug, but maybe they are connected. The other lib is only for uploads, so it won't fix the reported issue above.
Ran the test using a locally built bun-profile. In this example RSS was only 110MB prior to the allocation, and the sum of all chunks was only ~15MB.
Verified the #8039 PR does not resolve this, but it does appear segfaults are no longer possible, and instead result in
Met the same issue today when trying to upload a large file to AWS S3 using Bun.serve(... fetch(req, server) ...) directly, using a binary upload instead of a multipart form file:

RangeError: Out of memory
  at anonymous (native)
  at readableByteStreamControllerPull (:1:11)
  at readMany (:1:11)

I must say this is really weird.
My code example:

```js
Bun.serve({
  async fetch(req, server) {
    const promises = [];

    // 1. init: S3 multipart upload (@aws-sdk/client-s3)
    await CreateMultipartUploadCommand(...)

    // 2. upload: stream the binary request body, one part per chunk
    for await (const chunk of req.body) {
      promises.push(UploadPartCommand(...)) // async S3 UploadPartCommand
    }

    // 3. finish the S3 upload
    await Promise.all(promises)
    await CompleteMultipartUploadCommand(...)
  },
  port: 8000,
  maxRequestBodySize: Number.MAX_SAFE_INTEGER, // 9007199254740991 (same issue with 999916545 + 10000)
});
```

Note: for performance reasons I push promises in step 2 and await them all in step 3. This results in the OOM issue, even though my cloud (AWS) dashboard shows memory usage isn't high: no more than 200MB used out of 2GB total.

Update: another memory leak, not sure if it's related: Bun.serve: Memory leak HTTP POST with Body
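For reference, here is a rough, self-contained sketch of the multipart-upload flow this comment describes, using @aws-sdk/client-s3 v3 inside Bun.serve. The region, bucket, key, port, and the 5 MiB buffering threshold are assumptions for illustration (S3 rejects parts smaller than 5 MiB except the last one); this sketch shows the pattern only and does not work around the crash itself.

```ts
import {
  S3Client,
  CreateMultipartUploadCommand,
  UploadPartCommand,
  CompleteMultipartUploadCommand,
} from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" }); // assumed region
const Bucket = "example-bucket";                      // assumed bucket name
const MIN_PART = 5 * 1024 * 1024; // S3 minimum size for every part except the last

Bun.serve({
  port: 8000,
  maxRequestBodySize: Number.MAX_SAFE_INTEGER,
  async fetch(req) {
    const Key = "upload.bin"; // assumed object key
    const { UploadId } = await client.send(
      new CreateMultipartUploadCommand({ Bucket, Key })
    );

    const parts: { ETag?: string; PartNumber: number }[] = [];
    const uploads: Promise<void>[] = [];
    let buffered: Uint8Array[] = [];
    let bufferedBytes = 0;
    let nextPart = 1;

    // Upload whatever is currently buffered as one part, in the background.
    const flush = () => {
      if (bufferedBytes === 0) return;
      const Body = Buffer.concat(buffered);
      buffered = [];
      bufferedBytes = 0;
      const PartNumber = nextPart++;
      uploads.push(
        client
          .send(new UploadPartCommand({ Bucket, Key, UploadId, PartNumber, Body }))
          .then(({ ETag }) => { parts.push({ ETag, PartNumber }); })
      );
    };

    // 2. Buffer incoming request chunks until a part is large enough, then upload it.
    for await (const chunk of req.body!) {
      buffered.push(chunk);
      bufferedBytes += chunk.byteLength;
      if (bufferedBytes >= MIN_PART) flush();
    }
    flush(); // the final part may be smaller than 5 MiB

    // 3. Wait for all parts, then complete the multipart upload in part order.
    await Promise.all(uploads);
    parts.sort((a, b) => a.PartNumber - b.PartNumber);
    await client.send(
      new CompleteMultipartUploadCommand({
        Bucket,
        Key,
        UploadId,
        MultipartUpload: { Parts: parts },
      })
    );
    return new Response("uploaded");
  },
});
```

In the reports above the failure happens while iterating the request or response stream itself, so buffering parts like this changes memory behavior but is not claimed to avoid the RangeError.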
Ahh yes, I am running into this as well. I'm trying to make a reproduction repo, but it's tough. Will try downgrading Bun, @ShadowLeiPolestar, since you say 1.1.0 worked sometimes. Also seeing 800MB of RAM free on my server, and I never see this locally on macOS.

Edit: can confirm, downgrading works. I am sending large JSON bodies to Bun.serve, and after the downgrade there is no issue, at least with that one. However, even with the downgrade I see memory usage continue increasing on that version :( I may have to switch to Node for the deadline I have to hit.
What version of Bun is running?
1.0.20+09d51486e
What platform is your computer?
Linux 6.2.0-39-generic x86_64 x86_64
What steps can reproduce the bug?
I've spent a lot of time trying to create a simple repro example, but no luck yet. However, on a gigabit network I'm able to reproduce this failure maybe 20% of the time by downloading a 59MB file from S3 using the latest @aws-sdk/client-s3 S3 client. It seems to only happen on high-bandwidth interfaces, like in AWS.

Edit: Not the best repro example, but I was able to narrow it down a bit. It relies on a hacky custom fetch handler because of another bug in Bun, but here it is:
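The snippet the author refers to does not appear above. As a rough, hypothetical stand-in for the kind of download loop being described, the sketch below streams an object with @aws-sdk/client-s3 and tracks chunk sizes; the region, bucket, and key are placeholders, not values from the report.

```ts
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" }); // placeholder region

const { Body } = await client.send(
  new GetObjectCommand({ Bucket: "example-bucket", Key: "large-59mb-file.bin" }) // placeholders
);
if (!Body) throw new Error("empty response body");

// Depending on the runtime/handler, Body is a Node Readable or a web
// ReadableStream; both are async-iterable in Bun. Track how large the
// individual chunks get, since the crash correlates with >500KB chunks.
let total = 0;
let largest = 0;
for await (const chunk of Body as unknown as AsyncIterable<Uint8Array>) {
  total += chunk.byteLength;
  largest = Math.max(largest, chunk.byteLength);
}
console.log(`downloaded ${total} bytes, largest chunk ${largest} bytes`);
```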
What is the expected behavior?
Doesn't crash.
What do you see instead?
Additional information
Most of the time chunks come in consistently at around 16KB. But about 20% of the time chunk sizes go through the roof, ranging up to 540KB, and 100% of the time these large chunks come in they cause this failure. To be clear, I'm using a small amount of memory, and this is in fact running on a machine with 128GB of RAM, so this is not a system memory issue. This is a production issue I'm troubleshooting locally.
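Since the error surfaces in readableByteStreamControllerPull / readMany while chunks are being pulled, a small hypothetical helper like the one below can be pointed at any web ReadableStream to record the chunk sizes being observed (the function name and logging format are made up for illustration).

```ts
// Hypothetical helper: reads a web ReadableStream to completion and records the
// size of every chunk pulled, to spot the >500KB chunks that precede the failure.
async function logChunkSizes(stream: ReadableStream<Uint8Array>): Promise<number[]> {
  const reader = stream.getReader();
  const sizes: number[] = [];
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    sizes.push(value.byteLength);
  }
  console.log(`chunks: ${sizes.length}, largest: ${Math.max(...sizes)} bytes`);
  return sizes;
}
```

Usage would look like `await logChunkSizes(response.Body as ReadableStream<Uint8Array>)` on a GetObject response, or on `req.body` inside a Bun.serve fetch handler.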