ResourceLoader.add(...) does not operate like a queue #155

Open
iyobo opened this issue Feb 15, 2021 · 4 comments
@iyobo

iyobo commented Feb 15, 2021

Hello,
According to the docs, the purpose of the loader.add(...) function is to enqueue something for loading.

What is the point of "enqueueing" something when doing so multiple times throws an error like
Error: Cannot add resources while the loader is running. Why should the loader being in an active processing state prevent enqueueing?

The word enqueue suggests buffering the work items until the processor is ready, after which the processor pops (i.e. array.shift()) the next work item off the list, and on it goes.

My use case is that I am using Pixi.js, which uses this library, to load textures somewhat ad hoc, as needed.

Could you shed more light on this "Error: Cannot add resources while the loader is running." error? I'd like to learn why it was implemented this way so I know whether to make a PR to improve on this behavior.

Thanks for the tool.

@themoonrat

I believe the way it works right now, where you can only add to a loader that is not already active, is because there are tons of edge cases involved in adding to a loader already in progress, so disallowing it simplifies things a lot. For a very simple example: the loader sends out an event saying progress is at 99%, but then items are added and the progress drops back below 90%. There are much more complex edge cases than this! It's much simpler to just allow each loader to be queued up, started, and then allowed to end.

You can re-use a loader that has finished by first calling reset on it before adding more stuff to the queue.
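
A minimal sketch of that reset-and-reuse pattern, assuming resource-loader's add/load/reset API; the names and URLs are just illustrative:

```js
import { Loader } from 'resource-loader';

const loader = new Loader();

loader
  .add('hero', 'images/hero.png')
  .add('map', 'data/map.json')
  .load((ldr, resources) => {
    console.log('first batch done:', Object.keys(resources));

    // The loader has finished, so it can be reset and reused for the next batch.
    ldr.reset();
    ldr.add('boss', 'images/boss.png')
       .load(() => console.log('second batch done'));
  });
```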

In your case though, you want to add things on an ad-hoc basis. In this scenario, I'd recommend creating your own small 'getLoader' functionality that manages an internal pool of loaders. If you have a free loader, just grab it and use it. If all loaders are busy, create a new one, add items to it, and start it. Once a loader has finished its queue, call reset on it and move it back into the pool for the next time you try.
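
A rough sketch of that pool idea, assuming only resource-loader's add/load/reset API; getLoader, releaseLoader and loadBatch are hypothetical helpers, not part of the library:

```js
import { Loader } from 'resource-loader';

const pool = [];

function getLoader() {
  // Reuse an idle loader if one exists, otherwise create a new one.
  return pool.pop() || new Loader();
}

function releaseLoader(loader) {
  loader.reset();    // clear the finished queue and its resources
  pool.push(loader); // make it available for the next caller
}

// Ad-hoc usage: grab a loader, load a batch, then return it to the pool.
function loadBatch(files, onDone) {
  const loader = getLoader();
  files.forEach(({ name, url }) => loader.add(name, url));
  loader.load((ldr, resources) => {
    onDone(resources);
    releaseLoader(ldr);
  });
}
```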

So I agree that a bit of the wording in the docs could make things a little clearer, but in terms of behaviour being changed, that's up to the project maintainer! :)

@iyobo
Author

iyobo commented Feb 15, 2021

@themoonrat thanks for the response.

It does raise several other questions, however:

  • I'm confused as to why it keeps track of loader.resources (equivalent type Hashtable<string, Resource>) if the design sensibility of the loader is to only load one thing and be reset or discarded. That's something to be expected from a resource manager, not a one-shot loader.

  • I can definitely see the case for progress representation from both sides. A user might also need to know that the time scope has changed with the addition of new things to load, in the case of ad-hoc texture loading.

From my estimation, it seems like the current implementation is trying to mash one type of solution into the design language of another, like trying to fit a full Tree implementation into a TreeNode, hence some of the issues encountered.

Would it make sense to have two distinct types of objects?
A ResourceManager that does proper enqueueing and caching of resources, and a simple, lean, promisified, truly one-shot ResourceLoader whereby all it does is take a path and return a loaded resource?

I think this separation of concerns will serve far more use cases effortlessly.

@themoonrat

Hmmm, you say 'if the design sensibility of the loader is to only load one thing and be reset or discarded', but you can add loads and loads of things to a loader. Then you call load to start loading everything on that list, and when it's finished, you can reset to start from the beginning again.

Might just be the wording, or semantics, but you make it sound like it's one loader per one file, whereas it's one loader for a batch of files. You want to load a second batch of files and the first hasn't finished yet? Create a second loader!

As well as the 'pool' of loaders, I'd recommend setting each loader's callback to check whether any other loaders have stuff in their queue, and if so, start that load going. It depends on how many simultaneous queues you think might be going, I guess :) and whether you only want one loader going at once or are happy to have some in parallel.

There definitely is a case for a new wrapper, as you describe, that handles these multiple loaders. Of course, then you may come across 'how do I handle loading percentage across multiple loaders' and some of the same edge cases that this library is trying to avoid in the first place... but if it concentrated on a simple promise-based API to keep chucking files in and getting them back when they've loaded, it'd definitely help. You're not the first person to ask about re-using loaders :)
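
A hypothetical promise-based wrapper along those lines, assuming the standard add/load API and the per-resource error field; loadResources is an illustrative name, not part of the library:

```js
import { Loader } from 'resource-loader';

function loadResources(urls) {
  return new Promise((resolve, reject) => {
    const loader = new Loader();
    urls.forEach((url) => loader.add(url));

    loader.load((_ldr, resources) => {
      // Individual failures are reported on each resource's `error` field.
      const failed = Object.values(resources).find((res) => res.error);
      if (failed) {
        reject(failed.error);
      } else {
        resolve(resources);
      }
    });
  });
}

// Usage: keep chucking files in, get the loaded resources back.
loadResources(['images/a.png', 'images/b.png'])
  .then((resources) => console.log('loaded:', Object.keys(resources)));
```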

@englercj
Owner

Like @themoonrat mentioned, the purpose of not having a single loader object handle getting more loads while already in progress is to avoid the issues with progress that come from doing that. There is one exception to this rule which is child resources. Generally this comes from the post-processing of a resource that spawns a load for another resource. For example, downloading a JSON texture sheet file which references an image to go fetch. In these cases we can divide the progress values assigned to the parent resources amongst the children to maintain correct progress values. Adding child resources during loading is allowed, as long as the parent hasn't yet completed (finished all middleware).
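
For illustration, a hedged sketch of that child-resource pattern, assuming resource-loader's use() middleware hook and the parentResource add option; the data.imagePath field is made up for the example:

```js
import { Loader } from 'resource-loader';

const loader = new Loader();

// After-middleware runs once a resource has downloaded; if the JSON sheet
// references an image, queue it as a child of that resource.
loader.use((resource, next) => {
  if (resource.data && resource.data.imagePath) {
    // Adding with `parentResource` is allowed mid-load; the child's progress
    // is folded into the parent's share of the overall progress.
    loader.add(
      `${resource.name}_image`,
      resource.data.imagePath,
      { parentResource: resource },
      () => next()
    );
  } else {
    next();
  }
});
```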

The word enqueue suggests buffering the work items until the processor is ready, after which the processor pops (i.e. array.shift()) the next work item off the list, and on it goes.

This is basically how the loader works. You call add() 1..N times to buffer which loads you want to do, then call .load() to start processing that buffer.

The design philosophy here is that one loader is one load operation. That operation can be for 1 file, or it can be for 100 files. The idea is that when you hit a loading screen, you add all the files you want to load to the loader, kick it off, display the progress on your loading screen, and then when it completes move past the loading screen.
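
A minimal sketch of that loading-screen flow, assuming the signal-style onProgress event of recent resource-loader versions; updateProgressBar and showGame are hypothetical app functions:

```js
import { Loader } from 'resource-loader';

const loader = new Loader();

// 1. Queue everything the loading screen needs.
loader
  .add('ui', 'images/ui.png')
  .add('level1', 'data/level1.json')
  .add('music', 'audio/theme.mp3');

// 2. Drive the progress bar from the loader's progress (0..100).
loader.onProgress.add((ldr) => updateProgressBar(ldr.progress));

// 3. Kick it off and move past the loading screen once everything is in.
loader.load((ldr, resources) => showGame(resources));
```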

I'm confused as to why it keeps track of loader.resources (equivalent type Hashtable<string, Resource>) if the design sensibility of the loader is to only load one thing and be reset or discarded. That's something to be expected from a resource manager, not a one-shot loader.

Loader.resources exists so you can access the resources that were loaded once loading completes. Without this map, you'd have to monitor progress events and accumulate the resources yourself. The map is a convenient way to access the loaded resources, but not the only way. Calling .reset() will clear this map (and other internal data) to prepare the loader for reuse. Again, it is one loader per load operation, not (necessarily) one loader per file.
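
A short sketch of both access patterns, assuming the signal-style onLoad event; the resource names are illustrative:

```js
import { Loader } from 'resource-loader';

const loader = new Loader();
loader.add('hero', 'images/hero.png');

// Accumulate resources yourself from per-file load events...
const mine = {};
loader.onLoad.add((_ldr, resource) => {
  mine[resource.name] = resource;
});

// ...or simply read the built-in map once loading completes.
loader.load((ldr) => {
  const hero = ldr.resources.hero; // Resource: .data, .url, .error, ...
  console.log('loaded', hero.url);
});
```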

A ResourceManager that does proper enqueueing and caching of resources, and a simple, lean, promisified, truly one-shot ResourceLoader whereby all it does is take a path and return a loaded resource?

This project is a simple, lean, truly one-shot resource loader: all it does is take a path and return a loaded resource. It can just do more than one at a time, because often you have a list of things to load, not just one. As it says in the readme:

your project should have a Resource Manager that stores resources and manages data lifetime. When it decides something needs to be loaded from a remote source, only then does it create a loader and load them.
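
A hypothetical ResourceManager along those lines, caching by URL and only spinning up a loader for things that aren't already cached; none of this is part of the library:

```js
import { Loader } from 'resource-loader';

class ResourceManager {
  constructor() {
    this.cache = new Map(); // url -> loaded Resource
  }

  get(urls) {
    const missing = urls.filter((url) => !this.cache.has(url));
    if (missing.length === 0) {
      return Promise.resolve(urls.map((url) => this.cache.get(url)));
    }

    return new Promise((resolve) => {
      // Only now does the manager create a loader and fetch what's missing.
      const loader = new Loader();
      missing.forEach((url) => loader.add(url));
      loader.load((_ldr, resources) => {
        Object.values(resources).forEach((res) => this.cache.set(res.url, res));
        resolve(urls.map((url) => this.cache.get(url)));
      });
    });
  }
}
```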

The primary use-cases I've used this for in the past are:

  1. A loading screen with a progress bar. I know all the things to load, and I load them with one loader and display the progress.
  2. Streaming in resources as the player is playing. On this one I don't really need to display progress since usually the player isn't even aware I'm loading. The loader just acts as a workhorse for pre-loading things I'm likely to need in the future.

In both these cases the loader is a low-level piece behind my resource system, which manages data lifetime and streaming in the game. This project is not meant to be a replacement for that system.
