In `HeterogeneousEDProducer` the work in each EDM stream is assigned to a device in a "static" way, i.e. each producer on each EDM stream will always use the same device.

In #100 this logic stays (for backwards compatibility), but after all producers have been migrated from `HeterogeneousEDProducer` to the model of #100, we could try more clever load balancing between the devices.
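For illustration, a minimal sketch of what the "static" assignment amounts to, assuming a simple round-robin mapping from EDM stream id to device id (`deviceForStream` and `setDeviceForStream` are hypothetical helpers, not CMSSW API):

```cpp
#include <cuda_runtime.h>

// Hypothetical helper: pick a fixed device for a given EDM stream.
// Every producer running on this stream then always uses the same device.
int deviceForStream(int edmStreamId) {
  int numDevices = 0;
  cudaGetDeviceCount(&numDevices);
  return edmStreamId % numDevices;  // round-robin, decided once per stream
}

// Called at the beginning of each produce() on that stream.
void setDeviceForStream(int edmStreamId) {
  cudaSetDevice(deviceForStream(edmStreamId));
}
```

More clever load balancing would instead pick the device per event (or per producer), e.g. based on the current occupancy of each device.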
One question is whether we should schedule each event entirely on one device, or allow the producers of a single event to use (or put their output products in the memory of) different devices.

The former is certainly simpler to start with.

The latter needs a model for reading a data product from the memory of another device. This can be achieved trivially with unified memory (#85). In #100 (comment) @fwyzard also noted:
> Often, we could let the `currentDevice_` read the memory of the `dataDevice` over the PCI-e or NVLINK bus.
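For reference, a minimal sketch of what such cross-device reads could look like with the plain CUDA runtime API when unified memory is not used; `enablePeerRead` and `copyProduct` are hypothetical helpers, and the device names mirror the quote above:

```cpp
#include <cstddef>
#include <cuda_runtime.h>

// Let kernels running on currentDevice dereference pointers that were
// allocated on dataDevice, going over PCI-e/NVLink.
void enablePeerRead(int currentDevice, int dataDevice) {
  int canAccess = 0;
  cudaDeviceCanAccessPeer(&canAccess, currentDevice, dataDevice);
  if (canAccess) {
    cudaSetDevice(currentDevice);
    cudaDeviceEnablePeerAccess(dataDevice, 0);
  }
}

// Fallback when peer access is not available: copy the product explicitly
// from one device's memory to the other's.
void copyProduct(void* dst, int dstDevice, const void* src, int srcDevice, std::size_t bytes) {
  cudaMemcpyPeer(dst, dstDevice, src, srcDevice, bytes);
}
```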
Just for the record, I'm trying to think of what benefits we could find in running parts of an event on different devices (of the same type).
A simple use case (that we are very far from) is when a single event is enough to saturate, or even exceed, the capacity of a single device.

Another is that, if we end up being memory-limited, we could keep some conditions (e.g. pixel calibrations) on one GPU and others (e.g. calorimeter calibrations) on a second one, and run the relevant algorithms on the corresponding device.
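A minimal sketch of that second use case, assuming a hypothetical `Conditions` struct that records which GPU holds the calibration payload:

```cpp
#include <cuda_runtime.h>

// Hypothetical conditions wrapper: the payload lives on exactly one GPU.
struct Conditions {
  int device;         // GPU where the payload was allocated
  const float* data;  // device pointer, cudaMalloc'ed on `device`
};

// Hypothetical kernel consuming the calibration payload.
__global__ void applyCalibration(const float* calib) { /* ... */ }

// Run the algorithm on whichever device holds its conditions.
void runOn(const Conditions& cond) {
  cudaSetDevice(cond.device);
  applyCalibration<<<1, 128>>>(cond.data);
}
```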
This issue is about operating multiple GPUs.