Currently it is not possible to move devices or tensors across threads.

dfdx/dfdx-core/src/tensor/cuda/device.rs, lines 22 to 33 at 4476b5e:
pub struct Cuda {
    // ... preceding fields elided in this excerpt ...
    /// A second stream for kernels to optionally execute on.
    pub(crate) par_stream: Arc<CudaStream>,
    pub(crate) workspace: Arc<Mutex<CudaSlice<u8>>>,
    pub(crate) cache: Arc<TensorCache<CUdeviceptr>>,
}
Despite everything in the device being wrapped in an Arc, the Cuda device is not actually Send or Sync, almost certainly because some *mut pointers are nested in there for FFI. As a result it is rather burdensome to implement various functionality. For example, in a pipeline where tensors are prepared on thread A while inference runs on thread B, tensors can't be moved across threads without copying to the host and back (serializing to a Vec), and device methods can't be called from both threads.

I am not familiar enough with the underlying implementations to know whether the device can implement Sync, but it would be great if, at a minimum, the device could be sent across threads. As of right now I could certainly just create a device per thread, but I am not sure how much overhead that carries, or what the implications of not sharing the caches across instances would be.
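For context on why the Arc wrapping doesn't help: Arc<T> is only Send/Sync when T itself is, and any struct with a raw pointer field is automatically neither. A minimal sketch of that mechanism, with hypothetical types standing in for the real FFI handles (not dfdx's actual internals):

use std::ffi::c_void;
use std::sync::Arc;

// Stand-in for an FFI handle like the ones cudarc wraps.
struct RawHandle {
    ptr: *mut c_void, // raw pointers are neither Send nor Sync
}

// Stand-in for the device: Arc<T> is Send/Sync only if T is Send + Sync,
// so wrapping the handle in an Arc changes nothing.
struct Device {
    handle: Arc<RawHandle>,
}

fn assert_send<T: Send>() {}

fn main() {
    // assert_send::<Device>();
    // ^ error[E0277]: `*mut c_void` cannot be sent between threads safely

    // If the handle really is safe to move between threads, the upstream
    // fix would be an explicit opt-in:
    // unsafe impl Send for RawHandle {}
}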
I am realizing that for the example I described, you wouldn't actually get any parallelism from the additional thread, since the copy (assuming it's synchronous) and the inference kernels would be interleaved on the same CUDA stream.

Nevertheless, it would be great if something like the following pseudo-code were possible:
fn inference_server(results: Sender<Tensor>) -> Sender<Request> {
    let dev = dfdx::AutoDevice::default();
    let model = dev.build_module::<ResNet, f32>();
    let (tx, rx) = tokio::sync::mpsc::channel(256);
    let inferencer = ReceiverStream::new(rx) // tokio_stream::wrappers::ReceiverStream
        .map(|data| preprocess(data))
        .ready_chunks(32)                    // batch up to 32 preprocessed inputs
        .map(|batch_vec| dev.tensor(batch_vec))
        .for_each(|tensor| {
            // Moving `model` (and the device it holds) into the blocking
            // task is exactly what requires the device to be Send.
            tokio::task::spawn_blocking(|| results.send(model.forward(tensor)))
        });
    tokio::spawn(inferencer);
    tx
}
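In the meantime, the device-per-thread workaround mentioned above would look roughly like the sketch below. This is a sketch, not a recommendation: it assumes dfdx's tensor_from_vec/as_vec host-copy APIs and a fixed batch shape, the model is elided, and each thread's device keeps its own separate caches.

use std::{sync::mpsc, thread};
use dfdx::prelude::*;

// Each thread builds its own device; data crosses the thread
// boundary only as a host-side Vec<f32>.
fn spawn_inference_thread(results: mpsc::Sender<Vec<f32>>) -> mpsc::Sender<Vec<f32>> {
    let (tx, rx) = mpsc::channel::<Vec<f32>>();
    thread::spawn(move || {
        // This device lives and dies on this thread; its caches are not
        // shared with any device the producer thread creates.
        let dev = AutoDevice::default();
        for host_batch in rx {
            // Re-upload: host Vec -> device tensor (assumed [32, 3, 224, 224]).
            let x = dev.tensor_from_vec(host_batch, (32, 3, 224, 224));
            // ... run the model here, e.g. `model.forward(x)` ...
            // Serialize back to the host so the result can leave this thread
            // (here just echoing the input, since the model is elided).
            results.send(x.as_vec()).unwrap();
        }
    });
    tx
}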