Problem
Empirically, my target directory for each project seems to grow continuously. If old artifacts are being deleted, that is not happening for me, or not reliably, or not aggressively enough.
Steps
Unfortunately I can't provide a sensible repro. It is easy to repro that the target/ directory can contain old build objects, but that is expected. I don't have a reliable repro for target/ becoming obviously overly large, as this seems to happen gradually.
My observations are along these lines:
I pick up a project and start working on it.
I do a lot of development: many runs of cargo build, cargo build --workspace, cargo check, and so on, while constantly changing the source code and sometimes adding or removing dependent crates.
Eventually (a matter of days or perhaps weeks) I run out of disk space.
I use cargo update very sparingly and generally maintain a constant lockfile and use --locked. I'm not 100% sure, but I suspect this "giant target directory" problem can occur without me making changes to dependency versions, and, I think more confidently, without me making many such changes.
Most recently I found that the target directory for the primary tree on my laptop of my personal project otter, which takes 6G for a clean build, had got to 114G. That would seem to imply that cargo had cached around 20 old versions of at least some build artifacts.
Instead of providing a repro recipe, I could very easily wait for this to happen again and then provide some kind of summary or inventory of what is in the target, if someone would give me the right runes to type to extract the relevant metadata.
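For what it's worth, a rough inventory along those lines can be produced with standard tools. This is only a sketch assuming a Unix-like system with coreutils; `inventory` is a made-up helper name, not anything cargo provides:

```shell
# Rough inventory of a target directory: total size first, then the
# largest immediate subtrees, biggest first (sizes in MB).
# `inventory` is an illustrative helper name; pass the target directory path.
inventory() {
  du -sh "$1"                                      # overall size
  du -m "$1"/* 2>/dev/null | sort -rn | head -20   # top 20 subtrees
}
```

Running something like `inventory target` once the directory has ballooned would at least show whether the growth is concentrated in one profile directory (e.g. `target/debug`) or spread across the tree.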
A wrinkle in my workflow that may be unusual is that I do most of my builds "out of tree", i.e. using --manifest-path to point to source code elsewhere. (The source code is not writable by the build user.) I have found that many Rust ecosystem packages' build and test scripts do not work properly out-of-tree, in which case I resort to linkfarming, but this ever-growing target seems to happen either way.
I also notice that, despite all this caching, changes to the Rust compiler version or to flags set in the config cause a complete rebuild of everything, so if I need to work on a project which wants to test with multiple rustc versions, I end up doing a lot of rebuilding or having to use two working trees. But I think this latter is a separate issue.
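One stopgap for that multi-toolchain churn (a workaround sketch, not a cargo feature) is to key the target directory on the compiler version, so each toolchain keeps its own cache instead of clobbering the other's. `target_dir_for` is a hypothetical helper name; `CARGO_TARGET_DIR` itself is a real cargo environment variable:

```shell
# Map a `rustc --version` string to a per-toolchain target directory,
# so switching compilers doesn't invalidate the other toolchain's cache.
# `target_dir_for` is a made-up helper name for illustration.
target_dir_for() {
  printf 'target/%s\n' "$(printf '%s' "$1" | tr ' ' '-' | tr -d '()')"
}

# Intended use (assumes rustc and cargo on PATH):
#   export CARGO_TARGET_DIR=$(target_dir_for "$(rustc --version)")
#   cargo build --locked
```

This trades disk (one full cache per toolchain) for not rebuilding the world on every switch, which is arguably the same trade-off cargo makes already, just made explicit.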
Possible Solution(s)
Put a limit on the number of old versions of things to be kept. (Keep the newest, or use a biased random cache eviction algorithm.)
Allow the user to provide a configuration option limiting the maximum size of a target directory, and when that is reached, do some kind of more aggressive cleanup.
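Pending such an option in cargo itself, the size-cap idea can be approximated externally. This is a hedged sketch: `maybe_clean` and the limit are made up for illustration, and it falls back to a full `cargo clean` rather than anything smarter:

```shell
# If the given target directory exceeds limit_mb, wipe it with `cargo clean`.
# `maybe_clean` is an illustrative name, not a cargo command.
maybe_clean() {
  dir="$1"; limit_mb="$2"
  used_mb=$(du -sm "$dir" 2>/dev/null | cut -f1)
  used_mb=${used_mb:-0}
  if [ "$used_mb" -gt "$limit_mb" ]; then
    echo "target is ${used_mb}M (over ${limit_mb}M): cleaning"
    cargo clean --target-dir "$dir"
  else
    echo "target is ${used_mb}M (within ${limit_mb}M): leaving it alone"
  fi
}
```

Invoked as e.g. `maybe_clean target 20000` from a cron job or a pre-build hook; occasionally losing the whole cache is still cheaper than a 114G directory.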
Notes
This has been happening to me for some time, possibly "forever". I tend to run multiple Rust versions.
In the most recent case this happened, I think I was predominantly (if not entirely) using the Rust version (and therefore the cargo version) I quote below.
Thanks for your attention.

Version

Yeah, I'm going to close this as a duplicate of those issues. We've been looking at using a database to track cache directory contents and to be able to age out unused files.