RFC-0001-economic-dataloader.md #69

Open: wants to merge 201 commits into master

Conversation

@yoadbs commented Sep 27, 2024

This RFC proposes a new multiprocessing pipeline design for the dataloader. The pipeline splits the task of batch generation between two types of workers: item-generating workers and batch-generating workers. The design is intended to significantly reduce random-access memory (RAM) usage without any significant reduction in throughput.
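
For concreteness, here is a minimal sketch of what such a two-stage pipeline could look like, built on plain multiprocessing queues. Everything in it (`item_worker`, `batch_worker`, the queue sizes) is a hypothetical illustration under assumptions about the design, not the RFC's actual proposal or an existing `torch.utils.data` API:

```python
import multiprocessing as mp

import torch
from torch.utils.data import default_collate


def item_worker(dataset, index_q, item_q):
    # Stage 1: fetch/transform individual samples. Each worker holds at
    # most one decoded item at a time, rather than assembling whole
    # batches, which is where the RAM saving would come from.
    while (idx := index_q.get()) is not None:
        item_q.put(dataset[idx])
    item_q.put(None)  # tell the batch worker this item worker is done


def batch_worker(item_q, batch_q, batch_size, num_item_workers):
    # Stage 2: collate the streamed items into batches.
    finished, batch = 0, []
    while finished < num_item_workers:
        item = item_q.get()
        if item is None:
            finished += 1
            continue
        batch.append(item)
        if len(batch) == batch_size:
            batch_q.put(default_collate(batch))
            batch = []
    if batch:  # flush a final partial batch
        batch_q.put(default_collate(batch))
    batch_q.put(None)  # signal end of stream to the consumer


if __name__ == "__main__":
    dataset = [torch.randn(3, 32, 32) for _ in range(64)]  # toy stand-in
    num_item_workers, batch_size = 4, 8

    index_q = mp.Queue()
    item_q = mp.Queue(maxsize=16)  # bounds RAM held by in-flight items
    batch_q = mp.Queue(maxsize=2)  # bounds RAM held by finished batches

    procs = [
        mp.Process(target=item_worker, args=(dataset, index_q, item_q))
        for _ in range(num_item_workers)
    ]
    procs.append(
        mp.Process(
            target=batch_worker,
            args=(item_q, batch_q, batch_size, num_item_workers),
        )
    )
    for p in procs:
        p.start()

    for idx in range(len(dataset)):
        index_q.put(idx)
    for _ in range(num_item_workers):
        index_q.put(None)  # one stop sentinel per item worker

    while (batch := batch_q.get()) is not None:
        print(batch.shape)  # e.g. torch.Size([8, 3, 32, 32])

    for p in procs:
        p.join()
```

Shuffling, sample ordering across workers, multiple batch workers, and pinned-memory transfer are all omitted here; the point is only that decoded items flow through a small bounded queue, so no item worker ever has to hold a whole batch in RAM.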

@facebook-github-bot (Contributor) commented:

Hi @yoadbs!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g., your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!

@albanD (Contributor) commented Sep 27, 2024

cc @andrewkho

@andrewkho commented:

Hi @yoadbs, thank you for this thoughtful RFC! I have only had a quick look, but this looks like it would be covered by some of our plans in torchdata to allow more modular parallelism: https://github.com/pytorch/data/issues/1318. I know it's long, but I believe it should cover your use case as well; please let me know if it doesn't.

Some general thoughts; these apply to both your RFC and the one in pytorch/data's #1318:

  • This is going to be more relevant for large batch sizes and larger data, e.g., HD video.
  • Introducing more IPC between worker pools might slow things down.
  • It requires tuning multiple worker pools/prefetch buffers; a sketch of those knobs follows this list.
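
As a rough illustration of that last point, here is what the knobs of such a two-stage design might look like; the names below are hypothetical, not an existing torchdata or `torch.utils.data` API:

```python
# Hypothetical knobs for a two-stage loader (not an existing API).
# Each queue boundary adds an IPC hop, and the settings interact:
# too few item workers starves the batchers, while oversized buffers
# give back the RAM the split was meant to save.
two_stage_config = dict(
    num_item_workers=8,   # stage-1 parallelism (decode/augment)
    num_batch_workers=2,  # stage-2 parallelism (collation)
    item_queue_size=32,   # max items in flight between the stages
    batch_prefetch=2,     # finished batches buffered ahead of the model
)
```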
