Hub provides the fastest access to state-of-the-art datasets for deep learning, enabling data scientists to manage them, build scalable data pipelines, and connect to PyTorch and TensorFlow.
Through our experience working with deep learning companies and researchers, we identified a few recurring problems in today's data management workflows:

- Data scientists and ML researchers spend most of their time managing and preprocessing data instead of modeling.
- Deep learning often involves large datasets, which can grow to terabytes or even petabytes and are hard to store, access, and version-control.
- Downloading the data and linking it to training or inference code is time-consuming, and there is no easy way to access just a chunk of a dataset, let alone visualize it.

Wouldn’t it be more convenient to have large datasets stored and version-controlled as a single numpy-like array on the cloud, accessible from any machine at scale?
We’ve talked the talk; now let’s walk through how it works:
pip3 install hub
You can access public datasets with a few lines of code.
import hub
mnist = hub.load("mnist/mnist")
mnist["data"][0:1000].compute()
Load the data and train your model directly with PyTorch:
import hub
import torch
mnist = hub.load("mnist/mnist")
mnist = mnist.to_pytorch(lambda x: (x["data"], x["labels"]))  # wrap as a PyTorch-compatible dataset
train_loader = torch.utils.data.DataLoader(mnist, batch_size=1, num_workers=0)
for image, label in train_loader:
    # Training loop here
    pass
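To make the loop concrete, here is a minimal sketch of one way to fill it in, reusing the train_loader defined above. The model, optimizer, and loss are illustrative placeholders, not part of Hub, and the 28x28 image shape is assumed from standard MNIST:

import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier, illustrative only
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

for image, label in train_loader:
    optimizer.zero_grad()
    output = model(image.float())                    # cast to float for the linear layer
    loss = criterion(output, label.long().view(-1))  # flatten labels to class indices
    loss.backward()
    optimizer.step()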
- Register a free account at Activeloop and authenticate locally:
hub register
hub login
- Then create a dataset and upload it:
import numpy as np
from hub import tensor, dataset
images = tensor.from_array(np.zeros((4, 512, 512)))  # placeholder arrays; replace with your data
labels = tensor.from_array(np.zeros((4, 512, 512)))
ds = dataset.from_tensors({"images": images, "labels": labels})
ds.store("username/basic")
- Access it from anywhere in the world, on any machine with a command line:
import hub
ds = hub.load("username/basic")
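As a quick round-trip check, you can read a slice back from the ds loaded above with the same indexing shown earlier:

print(ds["images"][0:1].compute().shape)  # expected (1, 512, 512) for the arrays stored above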
For more advanced data pipelines, like uploading large datasets or applying many transformations, please see the docs.
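As a taste of what such a pipeline can look like, here is a minimal sketch that preprocesses arrays with plain NumPy before storing them. The normalization step and the dataset name are illustrative; the full transformation API is what the docs cover:

import numpy as np
from hub import tensor, dataset

raw = np.random.randint(0, 256, size=(4, 512, 512)).astype("float32")
images = tensor.from_array(raw / 255.0)  # normalize to [0, 1] before upload
labels = tensor.from_array(np.zeros((4,), dtype="int64"))
ds = dataset.from_tensors({"images": images, "labels": labels})
ds.store("username/normalized")  # hypothetical dataset name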
- Store large datasets with version control
- Collaborate as in Google Docs: multiple data scientists working on the same data in sync, with no interruptions
- Access from multiple machines simultaneously
- Integrate with your ML tools like NumPy, Dask, PyTorch, or TensorFlow (see the sketch after this list)
- Create arrays as big as you want
- Take a quick look at your data in a matter of seconds, without redundant downloads or manipulations
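For the TensorFlow side of that integration, a hypothetical sketch, assuming Hub exposes a to_tensorflow counterpart to the to_pytorch call shown above (check the docs for the exact API):

import hub

mnist = hub.load("mnist/mnist")
tf_ds = mnist.to_tensorflow()  # assumed analog of to_pytorch, yielding a tf.data.Dataset
for sample in tf_ds.batch(32).take(1):
    print(sample["data"].shape, sample["labels"].shape)  # field names assumed from the dataset above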
- Aerial images: Satellite and drone imagery
- Medical images: Volumetric images such as MRI or X-ray
- Self-Driving Cars: Radar, 3D LIDAR, Point Cloud, Semantic Segmentation, Video Objects
- Retail: Self-checkout datasets
- Media: Images, Video, Audio storage
Activeloop’s Hub format lets you achieve faster inference at a lower cost. Test out the datasets we’ve converted into Hub format and see for yourself!
Similar to other dataset management packages, Hub is a utility library that downloads and prepares public datasets. We do not host or distribute these datasets, vouch for their quality or fairness, or claim that you have a license to use them. It is your responsibility to determine whether you have permission to use a dataset under its license.
If you're a dataset owner and wish to update any part of it (description, citation, etc.), or do not want your dataset to be included in this library, please get in touch through a GitHub issue. Thanks for your contribution to the ML community!