diff --git a/assets/hub/datvuthanh_hybridnets.ipynb b/assets/hub/datvuthanh_hybridnets.ipynb index 95b1f0f302fc..411e0b19d841 100644 --- a/assets/hub/datvuthanh_hybridnets.ipynb +++ b/assets/hub/datvuthanh_hybridnets.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "a03f3c27", + "id": "7461d39d", "metadata": {}, "source": [ "### This notebook is optionally accelerated with a GPU runtime.\n", @@ -24,7 +24,7 @@ { "cell_type": "code", "execution_count": null, - "id": "c53374d0", + "id": "642e295f", "metadata": {}, "outputs": [], "source": [ @@ -34,7 +34,7 @@ }, { "cell_type": "markdown", - "id": "187d33f9", + "id": "51c2ea62", "metadata": {}, "source": [ "## Model Description\n", @@ -93,7 +93,7 @@ { "cell_type": "code", "execution_count": null, - "id": "fe220c4b", + "id": "bda8c68e", "metadata": {}, "outputs": [], "source": [ @@ -109,7 +109,7 @@ }, { "cell_type": "markdown", - "id": "e86df1d3", + "id": "468d5dda", "metadata": {}, "source": [ "### Citation\n", @@ -120,7 +120,7 @@ { "cell_type": "code", "execution_count": null, - "id": "ae665090", + "id": "7159f81c", "metadata": { "attributes": { "classes": [ diff --git a/assets/hub/facebookresearch_WSL-Images_resnext.ipynb b/assets/hub/facebookresearch_WSL-Images_resnext.ipynb index 4458d8b9fbe3..e487b7b1703f 100644 --- a/assets/hub/facebookresearch_WSL-Images_resnext.ipynb +++ b/assets/hub/facebookresearch_WSL-Images_resnext.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "7f6bc74a", + "id": "b80018b5", "metadata": {}, "source": [ "### This notebook is optionally accelerated with a GPU runtime.\n", @@ -22,7 +22,7 @@ { "cell_type": "code", "execution_count": null, - "id": "d7c1aa68", + "id": "774b6486", "metadata": {}, "outputs": [], "source": [ @@ -39,7 +39,7 @@ }, { "cell_type": "markdown", - "id": "1be946c9", + "id": "047a209f", "metadata": {}, "source": [ "All pre-trained models expect input images normalized in the same way,\n", @@ -53,7 +53,7 @@ { "cell_type": "code", "execution_count": null, - "id": "a2cc7b85", + "id": "a5d58f1d", "metadata": {}, "outputs": [], "source": [ @@ -67,7 +67,7 @@ { "cell_type": "code", "execution_count": null, - "id": "46f1c52b", + "id": "dead0b79", "metadata": {}, "outputs": [], "source": [ @@ -99,7 +99,7 @@ }, { "cell_type": "markdown", - "id": "b597bf8a", + "id": "f9fac4b3", "metadata": {}, "source": [ "### Model Description\n", diff --git a/assets/hub/facebookresearch_pytorch-gan-zoo_dcgan.ipynb b/assets/hub/facebookresearch_pytorch-gan-zoo_dcgan.ipynb index 373b9f403c34..227dfecaefb3 100644 --- a/assets/hub/facebookresearch_pytorch-gan-zoo_dcgan.ipynb +++ b/assets/hub/facebookresearch_pytorch-gan-zoo_dcgan.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "eb6ab35a", + "id": "7876bcf2", "metadata": {}, "source": [ "### This notebook is optionally accelerated with a GPU runtime.\n", @@ -22,7 +22,7 @@ { "cell_type": "code", "execution_count": null, - "id": "69a21a14", + "id": "c13b03dd", "metadata": {}, "outputs": [], "source": [ @@ -34,7 +34,7 @@ }, { "cell_type": "markdown", - "id": "ec84b9cd", + "id": "15f8d6ce", "metadata": {}, "source": [ "The input to the model is a noise vector of shape `(N, 120)` where `N` is the number of images to be generated.\n", @@ -45,7 +45,7 @@ { "cell_type": "code", "execution_count": null, - "id": "33e4b4ca", + "id": "7f0dc223", "metadata": {}, "outputs": [], "source": [ @@ -63,7 +63,7 @@ }, { "cell_type": "markdown", - "id": "01fe5ab4", + "id": "dc33614e", "metadata": {}, "source": [ "You should see an image similar 
to the one on the left.\n", diff --git a/assets/hub/facebookresearch_pytorch-gan-zoo_pgan.ipynb b/assets/hub/facebookresearch_pytorch-gan-zoo_pgan.ipynb index abece3bf99a7..3dcb30f3be38 100644 --- a/assets/hub/facebookresearch_pytorch-gan-zoo_pgan.ipynb +++ b/assets/hub/facebookresearch_pytorch-gan-zoo_pgan.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "4f6f6eee", + "id": "0288a894", "metadata": {}, "source": [ "### This notebook is optionally accelerated with a GPU runtime.\n", @@ -24,7 +24,7 @@ { "cell_type": "code", "execution_count": null, - "id": "902dd745", + "id": "bbb7654f", "metadata": {}, "outputs": [], "source": [ @@ -44,7 +44,7 @@ }, { "cell_type": "markdown", - "id": "51481d60", + "id": "4a1157ab", "metadata": {}, "source": [ "The input to the model is a noise vector of shape `(N, 512)` where `N` is the number of images to be generated.\n", @@ -55,7 +55,7 @@ { "cell_type": "code", "execution_count": null, - "id": "de9b54ff", + "id": "cdacbf60", "metadata": {}, "outputs": [], "source": [ @@ -74,7 +74,7 @@ }, { "cell_type": "markdown", - "id": "e3401543", + "id": "1005f373", "metadata": {}, "source": [ "You should see an image similar to the one on the left.\n", diff --git a/assets/hub/facebookresearch_pytorchvideo_resnet.ipynb b/assets/hub/facebookresearch_pytorchvideo_resnet.ipynb index c41f48941890..6cf5c4db107f 100644 --- a/assets/hub/facebookresearch_pytorchvideo_resnet.ipynb +++ b/assets/hub/facebookresearch_pytorchvideo_resnet.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "c6e3e109", + "id": "00619deb", "metadata": {}, "source": [ "# 3D ResNet\n", @@ -22,7 +22,7 @@ { "cell_type": "code", "execution_count": null, - "id": "35df0999", + "id": "d595788b", "metadata": {}, "outputs": [], "source": [ @@ -33,7 +33,7 @@ }, { "cell_type": "markdown", - "id": "565c3dd7", + "id": "102a8d2a", "metadata": {}, "source": [ "Import remaining functions:" @@ -42,7 +42,7 @@ { "cell_type": "code", "execution_count": null, - "id": "b4a5ff33", + "id": "ba23ffd1", "metadata": {}, "outputs": [], "source": [ @@ -64,7 +64,7 @@ }, { "cell_type": "markdown", - "id": "71b77d71", + "id": "4a60936e", "metadata": {}, "source": [ "#### Setup\n", @@ -75,7 +75,7 @@ { "cell_type": "code", "execution_count": null, - "id": "d61f1edb", + "id": "92a7855b", "metadata": { "attributes": { "classes": [ @@ -94,7 +94,7 @@ }, { "cell_type": "markdown", - "id": "3eb1ebd5", + "id": "44c93646", "metadata": {}, "source": [ "Download the id to label mapping for the Kinetics 400 dataset on which the torch hub models were trained. This will be used to get the category label names from the predicted class ids." 
@@ -103,7 +103,7 @@ { "cell_type": "code", "execution_count": null, - "id": "494f8f29", + "id": "e4849ccb", "metadata": {}, "outputs": [], "source": [ @@ -116,7 +116,7 @@ { "cell_type": "code", "execution_count": null, - "id": "53f8397e", + "id": "b8dde08a", "metadata": {}, "outputs": [], "source": [ @@ -131,7 +131,7 @@ }, { "cell_type": "markdown", - "id": "50b0697f", + "id": "ea6b33f1", "metadata": {}, "source": [ "#### Define input transform" @@ -140,7 +140,7 @@ { "cell_type": "code", "execution_count": null, - "id": "e59baf46", + "id": "71441349", "metadata": {}, "outputs": [], "source": [ @@ -174,7 +174,7 @@ }, { "cell_type": "markdown", - "id": "4a7866fd", + "id": "ce350472", "metadata": {}, "source": [ "#### Run Inference\n", @@ -185,7 +185,7 @@ { "cell_type": "code", "execution_count": null, - "id": "342b6d09", + "id": "d74d9827", "metadata": {}, "outputs": [], "source": [ @@ -197,7 +197,7 @@ }, { "cell_type": "markdown", - "id": "2c98b756", + "id": "2f515992", "metadata": {}, "source": [ "Load the video and transform it to the input format required by the model." @@ -206,7 +206,7 @@ { "cell_type": "code", "execution_count": null, - "id": "a39b1e5e", + "id": "a420c700", "metadata": {}, "outputs": [], "source": [ @@ -231,7 +231,7 @@ }, { "cell_type": "markdown", - "id": "d17b186a", + "id": "fb5970b0", "metadata": {}, "source": [ "#### Get Predictions" @@ -240,7 +240,7 @@ { "cell_type": "code", "execution_count": null, - "id": "df270bdb", + "id": "81c54fe7", "metadata": {}, "outputs": [], "source": [ @@ -259,7 +259,7 @@ }, { "cell_type": "markdown", - "id": "186a7bf1", + "id": "563cb067", "metadata": {}, "source": [ "### Model Description\n", diff --git a/assets/hub/facebookresearch_pytorchvideo_slowfast.ipynb b/assets/hub/facebookresearch_pytorchvideo_slowfast.ipynb index 7a4247619c46..25762f5e9571 100644 --- a/assets/hub/facebookresearch_pytorchvideo_slowfast.ipynb +++ b/assets/hub/facebookresearch_pytorchvideo_slowfast.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "e74405d0", + "id": "f5265f05", "metadata": {}, "source": [ "# SlowFast\n", @@ -22,7 +22,7 @@ { "cell_type": "code", "execution_count": null, - "id": "e89e8bdf", + "id": "84f49515", "metadata": {}, "outputs": [], "source": [ @@ -33,7 +33,7 @@ }, { "cell_type": "markdown", - "id": "d9086dd0", + "id": "6d9ce844", "metadata": {}, "source": [ "Import remaining functions:" @@ -42,7 +42,7 @@ { "cell_type": "code", "execution_count": null, - "id": "f6f1f1e9", + "id": "e6760dcb", "metadata": {}, "outputs": [], "source": [ @@ -65,7 +65,7 @@ }, { "cell_type": "markdown", - "id": "2aab1608", + "id": "70efabae", "metadata": {}, "source": [ "#### Setup\n", @@ -76,7 +76,7 @@ { "cell_type": "code", "execution_count": null, - "id": "e8f2b0b8", + "id": "d0545729", "metadata": { "attributes": { "classes": [ @@ -95,7 +95,7 @@ }, { "cell_type": "markdown", - "id": "4b59e82d", + "id": "633a6ac5", "metadata": {}, "source": [ "Download the id to label mapping for the Kinetics 400 dataset on which the torch hub models were trained. This will be used to get the category label names from the predicted class ids." 
@@ -104,7 +104,7 @@ { "cell_type": "code", "execution_count": null, - "id": "d5b7fb63", + "id": "11a1b79b", "metadata": {}, "outputs": [], "source": [ @@ -117,7 +117,7 @@ { "cell_type": "code", "execution_count": null, - "id": "c73e5c0c", + "id": "8657850f", "metadata": {}, "outputs": [], "source": [ @@ -132,7 +132,7 @@ }, { "cell_type": "markdown", - "id": "e4c57b33", + "id": "e8eb7c42", "metadata": {}, "source": [ "#### Define input transform" @@ -141,7 +141,7 @@ { "cell_type": "code", "execution_count": null, - "id": "04975b98", + "id": "4f439c50", "metadata": {}, "outputs": [], "source": [ @@ -198,7 +198,7 @@ }, { "cell_type": "markdown", - "id": "d1c84123", + "id": "48608bbf", "metadata": {}, "source": [ "#### Run Inference\n", @@ -209,7 +209,7 @@ { "cell_type": "code", "execution_count": null, - "id": "b9739a5d", + "id": "2069277e", "metadata": {}, "outputs": [], "source": [ @@ -221,7 +221,7 @@ }, { "cell_type": "markdown", - "id": "4119c9c6", + "id": "78b1fdea", "metadata": {}, "source": [ "Load the video and transform it to the input format required by the model." @@ -230,7 +230,7 @@ { "cell_type": "code", "execution_count": null, - "id": "8d3449ce", + "id": "7d188df7", "metadata": {}, "outputs": [], "source": [ @@ -255,7 +255,7 @@ }, { "cell_type": "markdown", - "id": "2efbee16", + "id": "a27b93d8", "metadata": {}, "source": [ "#### Get Predictions" @@ -264,7 +264,7 @@ { "cell_type": "code", "execution_count": null, - "id": "cb294087", + "id": "bc675197", "metadata": {}, "outputs": [], "source": [ @@ -283,7 +283,7 @@ }, { "cell_type": "markdown", - "id": "fd8ba8ee", + "id": "1b31216a", "metadata": {}, "source": [ "### Model Description\n", diff --git a/assets/hub/facebookresearch_pytorchvideo_x3d.ipynb b/assets/hub/facebookresearch_pytorchvideo_x3d.ipynb index a98ebd735e90..8dd5bf754b6c 100644 --- a/assets/hub/facebookresearch_pytorchvideo_x3d.ipynb +++ b/assets/hub/facebookresearch_pytorchvideo_x3d.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "e19d28a0", + "id": "576813ae", "metadata": {}, "source": [ "# X3D\n", @@ -22,7 +22,7 @@ { "cell_type": "code", "execution_count": null, - "id": "c9fa2761", + "id": "259c298f", "metadata": {}, "outputs": [], "source": [ @@ -34,7 +34,7 @@ }, { "cell_type": "markdown", - "id": "dac9e117", + "id": "bf673450", "metadata": {}, "source": [ "Import remaining functions:" @@ -43,7 +43,7 @@ { "cell_type": "code", "execution_count": null, - "id": "f6241353", + "id": "23051509", "metadata": {}, "outputs": [], "source": [ @@ -65,7 +65,7 @@ }, { "cell_type": "markdown", - "id": "e6a2fdac", + "id": "8f2e3a79", "metadata": {}, "source": [ "#### Setup\n", @@ -76,7 +76,7 @@ { "cell_type": "code", "execution_count": null, - "id": "afee2379", + "id": "78337995", "metadata": {}, "outputs": [], "source": [ @@ -88,7 +88,7 @@ }, { "cell_type": "markdown", - "id": "05cb6e66", + "id": "4f55500f", "metadata": {}, "source": [ "Download the id to label mapping for the Kinetics 400 dataset on which the torch hub models were trained. This will be used to get the category label names from the predicted class ids." 
@@ -97,7 +97,7 @@ { "cell_type": "code", "execution_count": null, - "id": "5eae5cf3", + "id": "edeed9df", "metadata": {}, "outputs": [], "source": [ @@ -110,7 +110,7 @@ { "cell_type": "code", "execution_count": null, - "id": "cbf69d49", + "id": "0593cc20", "metadata": {}, "outputs": [], "source": [ @@ -125,7 +125,7 @@ }, { "cell_type": "markdown", - "id": "25361693", + "id": "92bb9c5f", "metadata": {}, "source": [ "#### Define input transform" @@ -134,7 +134,7 @@ { "cell_type": "code", "execution_count": null, - "id": "6afab030", + "id": "e8031608", "metadata": {}, "outputs": [], "source": [ @@ -187,7 +187,7 @@ }, { "cell_type": "markdown", - "id": "079f69ae", + "id": "320da8f0", "metadata": {}, "source": [ "#### Run Inference\n", @@ -198,7 +198,7 @@ { "cell_type": "code", "execution_count": null, - "id": "1e0e11e3", + "id": "f0da4568", "metadata": {}, "outputs": [], "source": [ @@ -210,7 +210,7 @@ }, { "cell_type": "markdown", - "id": "7e121db9", + "id": "792f057e", "metadata": {}, "source": [ "Load the video and transform it to the input format required by the model." @@ -219,7 +219,7 @@ { "cell_type": "code", "execution_count": null, - "id": "0f23b088", + "id": "f42cc905", "metadata": {}, "outputs": [], "source": [ @@ -244,7 +244,7 @@ }, { "cell_type": "markdown", - "id": "11193277", + "id": "e4feab1d", "metadata": {}, "source": [ "#### Get Predictions" @@ -253,7 +253,7 @@ { "cell_type": "code", "execution_count": null, - "id": "decdc9ca", + "id": "dd707356", "metadata": {}, "outputs": [], "source": [ @@ -272,7 +272,7 @@ }, { "cell_type": "markdown", - "id": "08b07033", + "id": "2bb38b1b", "metadata": {}, "source": [ "### Model Description\n", diff --git a/assets/hub/facebookresearch_semi-supervised-ImageNet1K-models_resnext.ipynb b/assets/hub/facebookresearch_semi-supervised-ImageNet1K-models_resnext.ipynb index 19e3a24dc09d..34f7d11451a1 100644 --- a/assets/hub/facebookresearch_semi-supervised-ImageNet1K-models_resnext.ipynb +++ b/assets/hub/facebookresearch_semi-supervised-ImageNet1K-models_resnext.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "2bd415fd", + "id": "066e7b52", "metadata": {}, "source": [ "### This notebook is optionally accelerated with a GPU runtime.\n", @@ -22,7 +22,7 @@ { "cell_type": "code", "execution_count": null, - "id": "6db71247", + "id": "73f3121b", "metadata": {}, "outputs": [], "source": [ @@ -47,7 +47,7 @@ }, { "cell_type": "markdown", - "id": "27049554", + "id": "f106ffda", "metadata": {}, "source": [ "All pre-trained models expect input images normalized in the same way,\n", @@ -61,7 +61,7 @@ { "cell_type": "code", "execution_count": null, - "id": "800e4183", + "id": "a3356bdb", "metadata": {}, "outputs": [], "source": [ @@ -75,7 +75,7 @@ { "cell_type": "code", "execution_count": null, - "id": "7fc07bb7", + "id": "f1ced582", "metadata": {}, "outputs": [], "source": [ @@ -107,7 +107,7 @@ }, { "cell_type": "markdown", - "id": "0a4f2406", + "id": "c599ed7f", "metadata": {}, "source": [ "### Model Description\n", @@ -144,7 +144,7 @@ { "cell_type": "code", "execution_count": null, - "id": "ede96ff1", + "id": "25ceb02a", "metadata": {}, "outputs": [], "source": [ diff --git a/assets/hub/huggingface_pytorch-transformers.ipynb b/assets/hub/huggingface_pytorch-transformers.ipynb index c009b087d336..35373e65b60b 100644 --- a/assets/hub/huggingface_pytorch-transformers.ipynb +++ b/assets/hub/huggingface_pytorch-transformers.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "feb9a1a4", + "id": "37965285", "metadata": {}, 
"source": [ "### This notebook is optionally accelerated with a GPU runtime.\n", @@ -43,7 +43,7 @@ { "cell_type": "code", "execution_count": null, - "id": "4b9306fb", + "id": "a9571698", "metadata": {}, "outputs": [], "source": [ @@ -53,7 +53,7 @@ }, { "cell_type": "markdown", - "id": "1047ea9b", + "id": "0e963ef8", "metadata": {}, "source": [ "# Usage\n", @@ -86,7 +86,7 @@ { "cell_type": "code", "execution_count": null, - "id": "26876dcb", + "id": "9ab3e8e4", "metadata": { "attributes": { "classes": [ @@ -104,7 +104,7 @@ }, { "cell_type": "markdown", - "id": "d20e1fe7", + "id": "1b0b00fa", "metadata": {}, "source": [ "## Models\n", @@ -115,7 +115,7 @@ { "cell_type": "code", "execution_count": null, - "id": "1c6f8a3d", + "id": "d99a57e2", "metadata": { "attributes": { "classes": [ @@ -138,7 +138,7 @@ }, { "cell_type": "markdown", - "id": "c6974f27", + "id": "0a9e83c9", "metadata": {}, "source": [ "## Models with a language modeling head\n", @@ -149,7 +149,7 @@ { "cell_type": "code", "execution_count": null, - "id": "049aeb77", + "id": "578b75b1", "metadata": { "attributes": { "classes": [ @@ -172,7 +172,7 @@ }, { "cell_type": "markdown", - "id": "0cb29c87", + "id": "b62b4cd8", "metadata": {}, "source": [ "## Models with a sequence classification head\n", @@ -183,7 +183,7 @@ { "cell_type": "code", "execution_count": null, - "id": "bcde8abd", + "id": "fe9fd312", "metadata": { "attributes": { "classes": [ @@ -206,7 +206,7 @@ }, { "cell_type": "markdown", - "id": "43c7d719", + "id": "ee1dbd19", "metadata": {}, "source": [ "## Models with a question answering head\n", @@ -217,7 +217,7 @@ { "cell_type": "code", "execution_count": null, - "id": "5f06db16", + "id": "cf5eeb36", "metadata": { "attributes": { "classes": [ @@ -240,7 +240,7 @@ }, { "cell_type": "markdown", - "id": "986eead4", + "id": "a9d1f66d", "metadata": {}, "source": [ "## Configuration\n", @@ -251,7 +251,7 @@ { "cell_type": "code", "execution_count": null, - "id": "2a1f29c1", + "id": "9b2d9fc6", "metadata": { "attributes": { "classes": [ @@ -282,7 +282,7 @@ }, { "cell_type": "markdown", - "id": "4cefd2c7", + "id": "96d26a0e", "metadata": {}, "source": [ "# Example Usage\n", @@ -295,7 +295,7 @@ { "cell_type": "code", "execution_count": null, - "id": "31bd825e", + "id": "f0282ba1", "metadata": {}, "outputs": [], "source": [ @@ -311,7 +311,7 @@ }, { "cell_type": "markdown", - "id": "cc2d7660", + "id": "ca3c2355", "metadata": {}, "source": [ "## Using `BertModel` to encode the input sentence in a sequence of last layer hidden-states" @@ -320,7 +320,7 @@ { "cell_type": "code", "execution_count": null, - "id": "309c1bfb", + "id": "fada3e9c", "metadata": {}, "outputs": [], "source": [ @@ -339,7 +339,7 @@ }, { "cell_type": "markdown", - "id": "78391617", + "id": "2c85b954", "metadata": {}, "source": [ "## Using `modelForMaskedLM` to predict a masked token with BERT" @@ -348,7 +348,7 @@ { "cell_type": "code", "execution_count": null, - "id": "bcf73823", + "id": "369cbf00", "metadata": {}, "outputs": [], "source": [ @@ -370,7 +370,7 @@ }, { "cell_type": "markdown", - "id": "856c6063", + "id": "6cbf8a17", "metadata": {}, "source": [ "## Using `modelForQuestionAnswering` to do question answering with BERT" @@ -379,7 +379,7 @@ { "cell_type": "code", "execution_count": null, - "id": "0dc9b52c", + "id": "cc88ba35", "metadata": {}, "outputs": [], "source": [ @@ -409,7 +409,7 @@ }, { "cell_type": "markdown", - "id": "e468f5af", + "id": "a163d516", "metadata": {}, "source": [ "## Using `modelForSequenceClassification` to do paraphrase 
classification with BERT" @@ -418,7 +418,7 @@ { "cell_type": "code", "execution_count": null, - "id": "708fdf05", + "id": "11a8db50", "metadata": {}, "outputs": [], "source": [ diff --git a/assets/hub/hustvl_yolop.ipynb b/assets/hub/hustvl_yolop.ipynb index a055f6124c23..093c85fc96b4 100644 --- a/assets/hub/hustvl_yolop.ipynb +++ b/assets/hub/hustvl_yolop.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "cb19e97a", + "id": "477755f1", "metadata": {}, "source": [ "### This notebook is optionally accelerated with a GPU runtime.\n", @@ -23,7 +23,7 @@ { "cell_type": "code", "execution_count": null, - "id": "21a92ae1", + "id": "131b5528", "metadata": {}, "outputs": [], "source": [ @@ -33,7 +33,7 @@ }, { "cell_type": "markdown", - "id": "cd23fcdb", + "id": "4699c8a0", "metadata": {}, "source": [ "## YOLOP: You Only Look Once for Panoptic driving Perception\n", @@ -132,7 +132,7 @@ { "cell_type": "code", "execution_count": null, - "id": "42678f69", + "id": "e7b461bb", "metadata": {}, "outputs": [], "source": [ @@ -148,7 +148,7 @@ }, { "cell_type": "markdown", - "id": "728d3643", + "id": "cd43ef31", "metadata": {}, "source": [ "### Citation\n", diff --git a/assets/hub/intelisl_midas_v2.ipynb b/assets/hub/intelisl_midas_v2.ipynb index d0c8cb273ea8..7add3bce0c47 100644 --- a/assets/hub/intelisl_midas_v2.ipynb +++ b/assets/hub/intelisl_midas_v2.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "c157b72c", + "id": "bd9ab2f2", "metadata": {}, "source": [ "### This notebook is optionally accelerated with a GPU runtime.\n", @@ -32,7 +32,7 @@ { "cell_type": "code", "execution_count": null, - "id": "2d26ee63", + "id": "9330a71f", "metadata": { "attributes": { "classes": [ @@ -48,7 +48,7 @@ }, { "cell_type": "markdown", - "id": "9fc975a6", + "id": "7e6364b8", "metadata": {}, "source": [ "### Example Usage\n", @@ -59,7 +59,7 @@ { "cell_type": "code", "execution_count": null, - "id": "a1e9b69c", + "id": "2eefb6f2", "metadata": {}, "outputs": [], "source": [ @@ -75,7 +75,7 @@ }, { "cell_type": "markdown", - "id": "7af23620", + "id": "12e7f1b3", "metadata": {}, "source": [ "Load a model (see [https://github.com/intel-isl/MiDaS/#Accuracy](https://github.com/intel-isl/MiDaS/#Accuracy) for an overview)" @@ -84,7 +84,7 @@ { "cell_type": "code", "execution_count": null, - "id": "6a7615a7", + "id": "1ab5a484", "metadata": {}, "outputs": [], "source": [ @@ -97,7 +97,7 @@ }, { "cell_type": "markdown", - "id": "e04aab33", + "id": "5832d501", "metadata": {}, "source": [ "Move model to GPU if available" @@ -106,7 +106,7 @@ { "cell_type": "code", "execution_count": null, - "id": "6ec96ca7", + "id": "1b885053", "metadata": {}, "outputs": [], "source": [ @@ -117,7 +117,7 @@ }, { "cell_type": "markdown", - "id": "3f2ff58e", + "id": "2a5972df", "metadata": {}, "source": [ "Load transforms to resize and normalize the image for large or small model" @@ -126,7 +126,7 @@ { "cell_type": "code", "execution_count": null, - "id": "a373d812", + "id": "247f74a8", "metadata": {}, "outputs": [], "source": [ @@ -140,7 +140,7 @@ }, { "cell_type": "markdown", - "id": "fc930c88", + "id": "a8b2f344", "metadata": {}, "source": [ "Load image and apply transforms" @@ -149,7 +149,7 @@ { "cell_type": "code", "execution_count": null, - "id": "18b14e45", + "id": "87257110", "metadata": {}, "outputs": [], "source": [ @@ -161,7 +161,7 @@ }, { "cell_type": "markdown", - "id": "45a75b2d", + "id": "b8043f04", "metadata": {}, "source": [ "Predict and resize to original resolution" @@ -170,7 +170,7 @@ { "cell_type": 
"code", "execution_count": null, - "id": "e8771d83", + "id": "a6fc05a1", "metadata": {}, "outputs": [], "source": [ @@ -189,7 +189,7 @@ }, { "cell_type": "markdown", - "id": "becc084f", + "id": "2d3d0b97", "metadata": {}, "source": [ "Show result" @@ -198,7 +198,7 @@ { "cell_type": "code", "execution_count": null, - "id": "d64cde3e", + "id": "68d9f090", "metadata": {}, "outputs": [], "source": [ @@ -208,7 +208,7 @@ }, { "cell_type": "markdown", - "id": "fa9500de", + "id": "e15edae4", "metadata": {}, "source": [ "### References\n", @@ -222,7 +222,7 @@ { "cell_type": "code", "execution_count": null, - "id": "961fd18f", + "id": "28f08e21", "metadata": { "attributes": { "classes": [ @@ -244,7 +244,7 @@ { "cell_type": "code", "execution_count": null, - "id": "96344668", + "id": "d90e9990", "metadata": { "attributes": { "classes": [ diff --git a/assets/hub/mateuszbuda_brain-segmentation-pytorch_unet.ipynb b/assets/hub/mateuszbuda_brain-segmentation-pytorch_unet.ipynb index ad0e25556013..e3357f74af93 100644 --- a/assets/hub/mateuszbuda_brain-segmentation-pytorch_unet.ipynb +++ b/assets/hub/mateuszbuda_brain-segmentation-pytorch_unet.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "4d8401aa", + "id": "36854897", "metadata": {}, "source": [ "### This notebook is optionally accelerated with a GPU runtime.\n", @@ -22,7 +22,7 @@ { "cell_type": "code", "execution_count": null, - "id": "e3805838", + "id": "adc84183", "metadata": {}, "outputs": [], "source": [ @@ -33,7 +33,7 @@ }, { "cell_type": "markdown", - "id": "27a6f447", + "id": "4f4eedad", "metadata": {}, "source": [ "Loads a U-Net model pre-trained for abnormality segmentation on a dataset of brain MRI volumes [kaggle.com/mateuszbuda/lgg-mri-segmentation](https://www.kaggle.com/mateuszbuda/lgg-mri-segmentation)\n", @@ -57,7 +57,7 @@ { "cell_type": "code", "execution_count": null, - "id": "92e2721e", + "id": "c0b5d491", "metadata": {}, "outputs": [], "source": [ @@ -71,7 +71,7 @@ { "cell_type": "code", "execution_count": null, - "id": "92ec8e4f", + "id": "80fe56cd", "metadata": {}, "outputs": [], "source": [ @@ -100,7 +100,7 @@ }, { "cell_type": "markdown", - "id": "568b174e", + "id": "703f2552", "metadata": {}, "source": [ "### References\n", diff --git a/assets/hub/nicolalandro_ntsnet-cub200_ntsnet.ipynb b/assets/hub/nicolalandro_ntsnet-cub200_ntsnet.ipynb index 7a829611014d..e6b834a8bb62 100644 --- a/assets/hub/nicolalandro_ntsnet-cub200_ntsnet.ipynb +++ b/assets/hub/nicolalandro_ntsnet-cub200_ntsnet.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "7ff24817", + "id": "80d0fa2b", "metadata": {}, "source": [ "### This notebook is optionally accelerated with a GPU runtime.\n", @@ -22,7 +22,7 @@ { "cell_type": "code", "execution_count": null, - "id": "7b480b1e", + "id": "254cd796", "metadata": {}, "outputs": [], "source": [ @@ -33,7 +33,7 @@ }, { "cell_type": "markdown", - "id": "df97dfed", + "id": "c8758251", "metadata": {}, "source": [ "### Example Usage" @@ -42,7 +42,7 @@ { "cell_type": "code", "execution_count": null, - "id": "6fd63b9f", + "id": "f8a4b4c9", "metadata": {}, "outputs": [], "source": [ @@ -78,7 +78,7 @@ }, { "cell_type": "markdown", - "id": "d25ce4ac", + "id": "c397c709", "metadata": {}, "source": [ "### Model Description\n", @@ -91,7 +91,7 @@ { "cell_type": "code", "execution_count": null, - "id": "241f4f8f", + "id": "2f858e49", "metadata": { "attributes": { "classes": [ diff --git a/assets/hub/nvidia_deeplearningexamples_efficientnet.ipynb 
b/assets/hub/nvidia_deeplearningexamples_efficientnet.ipynb index 53264f190950..a228e9765d2b 100644 --- a/assets/hub/nvidia_deeplearningexamples_efficientnet.ipynb +++ b/assets/hub/nvidia_deeplearningexamples_efficientnet.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "06236c8b", + "id": "16dd12b5", "metadata": {}, "source": [ "### This notebook requires a GPU runtime to run.\n", @@ -42,7 +42,7 @@ { "cell_type": "code", "execution_count": null, - "id": "47684356", + "id": "579f6903", "metadata": {}, "outputs": [], "source": [ @@ -52,7 +52,7 @@ { "cell_type": "code", "execution_count": null, - "id": "054e88c8", + "id": "e54670be", "metadata": {}, "outputs": [], "source": [ @@ -73,7 +73,7 @@ }, { "cell_type": "markdown", - "id": "8eeca6df", + "id": "18ccdf10", "metadata": {}, "source": [ "Load the model pretrained on ImageNet dataset.\n", @@ -93,7 +93,7 @@ { "cell_type": "code", "execution_count": null, - "id": "33d10c55", + "id": "97e36c99", "metadata": {}, "outputs": [], "source": [ @@ -105,7 +105,7 @@ }, { "cell_type": "markdown", - "id": "6a0d43fe", + "id": "7a276050", "metadata": {}, "source": [ "Prepare sample input data." @@ -114,7 +114,7 @@ { "cell_type": "code", "execution_count": null, - "id": "1a31651e", + "id": "4d22e75d", "metadata": {}, "outputs": [], "source": [ @@ -132,7 +132,7 @@ }, { "cell_type": "markdown", - "id": "c333fd59", + "id": "cdeee9c4", "metadata": {}, "source": [ "Run inference. Use `pick_n_best(predictions=output, n=topN)` helper function to pick N most probable hypotheses according to the model." @@ -141,7 +141,7 @@ { "cell_type": "code", "execution_count": null, - "id": "f63e272c", + "id": "bdf0c853", "metadata": {}, "outputs": [], "source": [ @@ -153,7 +153,7 @@ }, { "cell_type": "markdown", - "id": "6f454bf4", + "id": "3252cf2f", "metadata": {}, "source": [ "Display the result." @@ -162,7 +162,7 @@ { "cell_type": "code", "execution_count": null, - "id": "7a1f1090", + "id": "8a69b900", "metadata": {}, "outputs": [], "source": [ @@ -176,7 +176,7 @@ }, { "cell_type": "markdown", - "id": "ccee0adc", + "id": "a8426e40", "metadata": {}, "source": [ "### Details\n", diff --git a/assets/hub/nvidia_deeplearningexamples_fastpitch.ipynb b/assets/hub/nvidia_deeplearningexamples_fastpitch.ipynb index d657eb56bb08..6bde9566629b 100644 --- a/assets/hub/nvidia_deeplearningexamples_fastpitch.ipynb +++ b/assets/hub/nvidia_deeplearningexamples_fastpitch.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "8335cef2", + "id": "35398358", "metadata": {}, "source": [ "### This notebook requires a GPU runtime to run.\n", @@ -51,7 +51,7 @@ { "cell_type": "code", "execution_count": null, - "id": "99b310cd", + "id": "d62e722c", "metadata": {}, "outputs": [], "source": [ @@ -66,7 +66,7 @@ { "cell_type": "code", "execution_count": null, - "id": "2d5bcfb6", + "id": "9ccea2b4", "metadata": {}, "outputs": [], "source": [ @@ -82,7 +82,7 @@ }, { "cell_type": "markdown", - "id": "b331fb57", + "id": "6117141c", "metadata": {}, "source": [ "Download and setup FastPitch generator model." @@ -91,7 +91,7 @@ { "cell_type": "code", "execution_count": null, - "id": "f0f9a48c", + "id": "1178aeaa", "metadata": {}, "outputs": [], "source": [ @@ -100,7 +100,7 @@ }, { "cell_type": "markdown", - "id": "2fedf1d3", + "id": "fed393a2", "metadata": {}, "source": [ "Download and setup vocoder and denoiser models." 
@@ -109,7 +109,7 @@ { "cell_type": "code", "execution_count": null, - "id": "774f291c", + "id": "10853661", "metadata": {}, "outputs": [], "source": [ @@ -118,7 +118,7 @@ }, { "cell_type": "markdown", - "id": "86ac124c", + "id": "9a21ef3f", "metadata": {}, "source": [ "Verify that generator and vocoder models agree on input parameters." @@ -127,7 +127,7 @@ { "cell_type": "code", "execution_count": null, - "id": "2601ca60", + "id": "7cc0a869", "metadata": {}, "outputs": [], "source": [ @@ -147,7 +147,7 @@ }, { "cell_type": "markdown", - "id": "6298ddfb", + "id": "52e11f95", "metadata": {}, "source": [ "Put all models on available device." @@ -156,7 +156,7 @@ { "cell_type": "code", "execution_count": null, - "id": "2352292b", + "id": "a69596a4", "metadata": {}, "outputs": [], "source": [ @@ -167,7 +167,7 @@ }, { "cell_type": "markdown", - "id": "63676a62", + "id": "be021ea2", "metadata": {}, "source": [ "Load text processor." @@ -176,7 +176,7 @@ { "cell_type": "code", "execution_count": null, - "id": "b794808d", + "id": "98fef381", "metadata": {}, "outputs": [], "source": [ @@ -185,7 +185,7 @@ }, { "cell_type": "markdown", - "id": "7c671847", + "id": "136c21df", "metadata": {}, "source": [ "Set the text to be synthetized, prepare input and set additional generation parameters." @@ -194,7 +194,7 @@ { "cell_type": "code", "execution_count": null, - "id": "9ecd2f4d", + "id": "61f8192d", "metadata": {}, "outputs": [], "source": [ @@ -204,7 +204,7 @@ { "cell_type": "code", "execution_count": null, - "id": "c12420f6", + "id": "c17d1989", "metadata": {}, "outputs": [], "source": [ @@ -214,7 +214,7 @@ { "cell_type": "code", "execution_count": null, - "id": "fa51b9ba", + "id": "8253d94e", "metadata": {}, "outputs": [], "source": [ @@ -228,7 +228,7 @@ { "cell_type": "code", "execution_count": null, - "id": "9415c6b1", + "id": "4d4a1a71", "metadata": {}, "outputs": [], "source": [ @@ -242,7 +242,7 @@ }, { "cell_type": "markdown", - "id": "4e6d7312", + "id": "e6f32706", "metadata": {}, "source": [ "Plot the intermediate spectorgram." @@ -251,7 +251,7 @@ { "cell_type": "code", "execution_count": null, - "id": "970d4994", + "id": "a889b6b3", "metadata": {}, "outputs": [], "source": [ @@ -265,7 +265,7 @@ }, { "cell_type": "markdown", - "id": "ad54977d", + "id": "5edfccfa", "metadata": {}, "source": [ "Syntesize audio." @@ -274,7 +274,7 @@ { "cell_type": "code", "execution_count": null, - "id": "1b9e64fe", + "id": "8c8e1fda", "metadata": {}, "outputs": [], "source": [ @@ -284,7 +284,7 @@ }, { "cell_type": "markdown", - "id": "f4a3cfd4", + "id": "cb917a83", "metadata": {}, "source": [ "Write audio to wav file." 
@@ -293,7 +293,7 @@ { "cell_type": "code", "execution_count": null, - "id": "7189d7a5", + "id": "a20af121", "metadata": {}, "outputs": [], "source": [ @@ -303,7 +303,7 @@ }, { "cell_type": "markdown", - "id": "7723b2eb", + "id": "35e1c107", "metadata": {}, "source": [ "### Details\n", diff --git a/assets/hub/nvidia_deeplearningexamples_gpunet.ipynb b/assets/hub/nvidia_deeplearningexamples_gpunet.ipynb index 6126b1125513..63603a3d08df 100644 --- a/assets/hub/nvidia_deeplearningexamples_gpunet.ipynb +++ b/assets/hub/nvidia_deeplearningexamples_gpunet.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "51f776e6", + "id": "94c28869", "metadata": {}, "source": [ "### This notebook requires a GPU runtime to run.\n", @@ -34,7 +34,7 @@ { "cell_type": "code", "execution_count": null, - "id": "b6265e7d", + "id": "6a56c86d", "metadata": {}, "outputs": [], "source": [ @@ -45,7 +45,7 @@ { "cell_type": "code", "execution_count": null, - "id": "f58a929c", + "id": "7e5cd61c", "metadata": {}, "outputs": [], "source": [ @@ -73,7 +73,7 @@ }, { "cell_type": "markdown", - "id": "05bafba9", + "id": "82d85a57", "metadata": {}, "source": [ "### Load Pretrained model\n", @@ -97,7 +97,7 @@ { "cell_type": "code", "execution_count": null, - "id": "bec84ee3", + "id": "dbadc515", "metadata": {}, "outputs": [], "source": [ @@ -113,7 +113,7 @@ }, { "cell_type": "markdown", - "id": "dd7c8a91", + "id": "de80ac7e", "metadata": {}, "source": [ "### Prepare inference data\n", @@ -123,7 +123,7 @@ { "cell_type": "code", "execution_count": null, - "id": "ecc48106", + "id": "1d61f625", "metadata": {}, "outputs": [], "source": [ @@ -146,7 +146,7 @@ }, { "cell_type": "markdown", - "id": "1252a354", + "id": "6b536cfa", "metadata": {}, "source": [ "### Run inference\n", @@ -156,7 +156,7 @@ { "cell_type": "code", "execution_count": null, - "id": "5972ad6f", + "id": "c31a78ac", "metadata": {}, "outputs": [], "source": [ @@ -168,7 +168,7 @@ }, { "cell_type": "markdown", - "id": "687f85fa", + "id": "6741336f", "metadata": {}, "source": [ "### Display result" @@ -177,7 +177,7 @@ { "cell_type": "code", "execution_count": null, - "id": "716238dd", + "id": "e405aa9f", "metadata": {}, "outputs": [], "source": [ @@ -191,7 +191,7 @@ }, { "cell_type": "markdown", - "id": "76a61b49", + "id": "4b5c39bb", "metadata": {}, "source": [ "### Details\n", diff --git a/assets/hub/nvidia_deeplearningexamples_hifigan.ipynb b/assets/hub/nvidia_deeplearningexamples_hifigan.ipynb index ea51692dedcb..c383d5a6fe8f 100644 --- a/assets/hub/nvidia_deeplearningexamples_hifigan.ipynb +++ b/assets/hub/nvidia_deeplearningexamples_hifigan.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "4a175da6", + "id": "b7e24248", "metadata": {}, "source": [ "### This notebook requires a GPU runtime to run.\n", @@ -44,7 +44,7 @@ { "cell_type": "code", "execution_count": null, - "id": "a1e63852", + "id": "fe720bfd", "metadata": {}, "outputs": [], "source": [ @@ -59,7 +59,7 @@ { "cell_type": "code", "execution_count": null, - "id": "ac17f8e1", + "id": "bd68badf", "metadata": {}, "outputs": [], "source": [ @@ -75,7 +75,7 @@ }, { "cell_type": "markdown", - "id": "0c58ed30", + "id": "69c9817c", "metadata": {}, "source": [ "Download and setup FastPitch generator model." 
@@ -84,7 +84,7 @@ { "cell_type": "code", "execution_count": null, - "id": "25ef3004", + "id": "b42fa7c0", "metadata": {}, "outputs": [], "source": [ @@ -93,7 +93,7 @@ }, { "cell_type": "markdown", - "id": "5b2a5481", + "id": "343d174e", "metadata": {}, "source": [ "Download and setup vocoder and denoiser models." @@ -102,7 +102,7 @@ { "cell_type": "code", "execution_count": null, - "id": "a6ae5a69", + "id": "749745de", "metadata": {}, "outputs": [], "source": [ @@ -111,7 +111,7 @@ }, { "cell_type": "markdown", - "id": "228da38c", + "id": "c5f94c55", "metadata": {}, "source": [ "Verify that generator and vocoder models agree on input parameters." @@ -120,7 +120,7 @@ { "cell_type": "code", "execution_count": null, - "id": "11c7a53a", + "id": "26c363bd", "metadata": {}, "outputs": [], "source": [ @@ -140,7 +140,7 @@ }, { "cell_type": "markdown", - "id": "652177b7", + "id": "d3185f09", "metadata": {}, "source": [ "Put all models on available device." @@ -149,7 +149,7 @@ { "cell_type": "code", "execution_count": null, - "id": "e5bb7ebc", + "id": "9d0af800", "metadata": {}, "outputs": [], "source": [ @@ -160,7 +160,7 @@ }, { "cell_type": "markdown", - "id": "c420d0ee", + "id": "95edfb46", "metadata": {}, "source": [ "Load text processor." @@ -169,7 +169,7 @@ { "cell_type": "code", "execution_count": null, - "id": "085aed06", + "id": "3f161fbe", "metadata": {}, "outputs": [], "source": [ @@ -178,7 +178,7 @@ }, { "cell_type": "markdown", - "id": "cbcc85d2", + "id": "94372288", "metadata": {}, "source": [ "Set the text to be synthetized, prepare input and set additional generation parameters." @@ -187,7 +187,7 @@ { "cell_type": "code", "execution_count": null, - "id": "a781eeed", + "id": "2b920e08", "metadata": {}, "outputs": [], "source": [ @@ -197,7 +197,7 @@ { "cell_type": "code", "execution_count": null, - "id": "95570589", + "id": "a6329134", "metadata": {}, "outputs": [], "source": [ @@ -207,7 +207,7 @@ { "cell_type": "code", "execution_count": null, - "id": "bff1d381", + "id": "27d3e565", "metadata": {}, "outputs": [], "source": [ @@ -221,7 +221,7 @@ { "cell_type": "code", "execution_count": null, - "id": "8b40325a", + "id": "fe1f0bb5", "metadata": {}, "outputs": [], "source": [ @@ -235,7 +235,7 @@ }, { "cell_type": "markdown", - "id": "6d3b756e", + "id": "5e6e9027", "metadata": {}, "source": [ "Plot the intermediate spectorgram." @@ -244,7 +244,7 @@ { "cell_type": "code", "execution_count": null, - "id": "4a3888b8", + "id": "e7afd2e8", "metadata": {}, "outputs": [], "source": [ @@ -258,7 +258,7 @@ }, { "cell_type": "markdown", - "id": "eb74472a", + "id": "056a517a", "metadata": {}, "source": [ "Syntesize audio." @@ -267,7 +267,7 @@ { "cell_type": "code", "execution_count": null, - "id": "825b2587", + "id": "0084610f", "metadata": {}, "outputs": [], "source": [ @@ -277,7 +277,7 @@ }, { "cell_type": "markdown", - "id": "bcbad8d7", + "id": "bb4df622", "metadata": {}, "source": [ "Write audio to wav file." 
@@ -286,7 +286,7 @@ { "cell_type": "code", "execution_count": null, - "id": "3402ccbc", + "id": "d4f72102", "metadata": {}, "outputs": [], "source": [ @@ -296,7 +296,7 @@ }, { "cell_type": "markdown", - "id": "ca8d169d", + "id": "37b219e7", "metadata": {}, "source": [ "### Details\n", diff --git a/assets/hub/nvidia_deeplearningexamples_resnet50.ipynb b/assets/hub/nvidia_deeplearningexamples_resnet50.ipynb index 0c7d42833924..fd7208324fd5 100644 --- a/assets/hub/nvidia_deeplearningexamples_resnet50.ipynb +++ b/assets/hub/nvidia_deeplearningexamples_resnet50.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "2904159f", + "id": "dc10fa2d", "metadata": {}, "source": [ "### This notebook requires a GPU runtime to run.\n", @@ -44,7 +44,7 @@ { "cell_type": "code", "execution_count": null, - "id": "db2d989b", + "id": "4b7e4c74", "metadata": {}, "outputs": [], "source": [ @@ -54,7 +54,7 @@ { "cell_type": "code", "execution_count": null, - "id": "749ad4c2", + "id": "0a38e856", "metadata": {}, "outputs": [], "source": [ @@ -75,7 +75,7 @@ }, { "cell_type": "markdown", - "id": "76ceaf8b", + "id": "dcb057cf", "metadata": {}, "source": [ "Load the model pretrained on ImageNet dataset." @@ -84,7 +84,7 @@ { "cell_type": "code", "execution_count": null, - "id": "5714c3c1", + "id": "3ba613fa", "metadata": {}, "outputs": [], "source": [ @@ -96,7 +96,7 @@ }, { "cell_type": "markdown", - "id": "7ad98d14", + "id": "64d06fce", "metadata": {}, "source": [ "Prepare sample input data." @@ -105,7 +105,7 @@ { "cell_type": "code", "execution_count": null, - "id": "1d5f30bf", + "id": "06f24989", "metadata": {}, "outputs": [], "source": [ @@ -123,7 +123,7 @@ }, { "cell_type": "markdown", - "id": "23547f43", + "id": "99c597ac", "metadata": {}, "source": [ "Run inference. Use `pick_n_best(predictions=output, n=topN)` helper function to pick N most probably hypothesis according to the model." @@ -132,7 +132,7 @@ { "cell_type": "code", "execution_count": null, - "id": "09eee7e5", + "id": "0dc8a1a9", "metadata": {}, "outputs": [], "source": [ @@ -144,7 +144,7 @@ }, { "cell_type": "markdown", - "id": "a8a9b75d", + "id": "5965f6ed", "metadata": {}, "source": [ "Display the result." @@ -153,7 +153,7 @@ { "cell_type": "code", "execution_count": null, - "id": "8641346b", + "id": "b1dea786", "metadata": {}, "outputs": [], "source": [ @@ -167,7 +167,7 @@ }, { "cell_type": "markdown", - "id": "1aab4df2", + "id": "5b83f14b", "metadata": {}, "source": [ "### Details\n", diff --git a/assets/hub/nvidia_deeplearningexamples_resnext.ipynb b/assets/hub/nvidia_deeplearningexamples_resnext.ipynb index 461fbf53aa01..c4b1f0bec70c 100644 --- a/assets/hub/nvidia_deeplearningexamples_resnext.ipynb +++ b/assets/hub/nvidia_deeplearningexamples_resnext.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "1ededb6c", + "id": "e72c9bda", "metadata": {}, "source": [ "### This notebook requires a GPU runtime to run.\n", @@ -53,7 +53,7 @@ { "cell_type": "code", "execution_count": null, - "id": "51aea367", + "id": "a7510258", "metadata": {}, "outputs": [], "source": [ @@ -63,7 +63,7 @@ { "cell_type": "code", "execution_count": null, - "id": "a7a6068a", + "id": "17f9db6f", "metadata": {}, "outputs": [], "source": [ @@ -84,7 +84,7 @@ }, { "cell_type": "markdown", - "id": "214e06ed", + "id": "4926bab0", "metadata": {}, "source": [ "Load the model pretrained on ImageNet dataset." 
@@ -93,7 +93,7 @@ { "cell_type": "code", "execution_count": null, - "id": "015395b9", + "id": "6a3b38bd", "metadata": {}, "outputs": [], "source": [ @@ -105,7 +105,7 @@ }, { "cell_type": "markdown", - "id": "d51738a3", + "id": "b6ef4884", "metadata": {}, "source": [ "Prepare sample input data." @@ -114,7 +114,7 @@ { "cell_type": "code", "execution_count": null, - "id": "337fa0bc", + "id": "7de2ce79", "metadata": {}, "outputs": [], "source": [ @@ -133,7 +133,7 @@ }, { "cell_type": "markdown", - "id": "108e2242", + "id": "9dab7a61", "metadata": {}, "source": [ "Run inference. Use `pick_n_best(predictions=output, n=topN)` helper function to pick N most probably hypothesis according to the model." @@ -142,7 +142,7 @@ { "cell_type": "code", "execution_count": null, - "id": "8acc61bf", + "id": "ed2224e0", "metadata": {}, "outputs": [], "source": [ @@ -154,7 +154,7 @@ }, { "cell_type": "markdown", - "id": "6cabcb50", + "id": "14ca87c6", "metadata": {}, "source": [ "Display the result." @@ -163,7 +163,7 @@ { "cell_type": "code", "execution_count": null, - "id": "9177ac0f", + "id": "c3a2823b", "metadata": {}, "outputs": [], "source": [ @@ -177,7 +177,7 @@ }, { "cell_type": "markdown", - "id": "425a843e", + "id": "e45813fe", "metadata": {}, "source": [ "### Details\n", diff --git a/assets/hub/nvidia_deeplearningexamples_se-resnext.ipynb b/assets/hub/nvidia_deeplearningexamples_se-resnext.ipynb index 4d9ea90377c2..0736fca96a8c 100644 --- a/assets/hub/nvidia_deeplearningexamples_se-resnext.ipynb +++ b/assets/hub/nvidia_deeplearningexamples_se-resnext.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "7bdab1c5", + "id": "aef46189", "metadata": {}, "source": [ "### This notebook requires a GPU runtime to run.\n", @@ -53,7 +53,7 @@ { "cell_type": "code", "execution_count": null, - "id": "85b54891", + "id": "7b4d66fc", "metadata": {}, "outputs": [], "source": [ @@ -63,7 +63,7 @@ { "cell_type": "code", "execution_count": null, - "id": "dd058a88", + "id": "ad33b836", "metadata": {}, "outputs": [], "source": [ @@ -84,7 +84,7 @@ }, { "cell_type": "markdown", - "id": "abcfcc55", + "id": "84aad8fc", "metadata": {}, "source": [ "Load the model pretrained on ImageNet dataset." @@ -93,7 +93,7 @@ { "cell_type": "code", "execution_count": null, - "id": "f6841514", + "id": "2fdb6af5", "metadata": {}, "outputs": [], "source": [ @@ -105,7 +105,7 @@ }, { "cell_type": "markdown", - "id": "f0f23ebe", + "id": "cdb34ad0", "metadata": {}, "source": [ "Prepare sample input data." @@ -114,7 +114,7 @@ { "cell_type": "code", "execution_count": null, - "id": "7669db84", + "id": "da929c1d", "metadata": {}, "outputs": [], "source": [ @@ -133,7 +133,7 @@ }, { "cell_type": "markdown", - "id": "1dded722", + "id": "082f27a7", "metadata": {}, "source": [ "Run inference. Use `pick_n_best(predictions=output, n=topN)` helper function to pick N most probable hypotheses according to the model." @@ -142,7 +142,7 @@ { "cell_type": "code", "execution_count": null, - "id": "5bf3fb72", + "id": "0bb62c4c", "metadata": {}, "outputs": [], "source": [ @@ -154,7 +154,7 @@ }, { "cell_type": "markdown", - "id": "ff01b742", + "id": "b60137fc", "metadata": {}, "source": [ "Display the result." 
@@ -163,7 +163,7 @@ { "cell_type": "code", "execution_count": null, - "id": "045b1371", + "id": "d02b1810", "metadata": {}, "outputs": [], "source": [ @@ -177,7 +177,7 @@ }, { "cell_type": "markdown", - "id": "40629204", + "id": "741f6dd4", "metadata": {}, "source": [ "### Details\n", diff --git a/assets/hub/nvidia_deeplearningexamples_ssd.ipynb b/assets/hub/nvidia_deeplearningexamples_ssd.ipynb index 669a05963e08..cd5c71ff6687 100644 --- a/assets/hub/nvidia_deeplearningexamples_ssd.ipynb +++ b/assets/hub/nvidia_deeplearningexamples_ssd.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "00fc29ac", + "id": "a7fd2953", "metadata": {}, "source": [ "### This notebook requires a GPU runtime to run.\n", @@ -56,7 +56,7 @@ { "cell_type": "code", "execution_count": null, - "id": "216fd60c", + "id": "310e0f77", "metadata": {}, "outputs": [], "source": [ @@ -66,7 +66,7 @@ }, { "cell_type": "markdown", - "id": "e3598938", + "id": "393309f2", "metadata": {}, "source": [ "Load an SSD model pretrained on COCO dataset, as well as a set of utility methods for convenient and comprehensive formatting of input and output of the model." @@ -75,7 +75,7 @@ { "cell_type": "code", "execution_count": null, - "id": "5d01faeb", + "id": "411f2e80", "metadata": {}, "outputs": [], "source": [ @@ -86,7 +86,7 @@ }, { "cell_type": "markdown", - "id": "6d4590bb", + "id": "d3d2f691", "metadata": {}, "source": [ "Now, prepare the loaded model for inference" @@ -95,7 +95,7 @@ { "cell_type": "code", "execution_count": null, - "id": "f3cc878d", + "id": "b28ad4a9", "metadata": {}, "outputs": [], "source": [ @@ -105,7 +105,7 @@ }, { "cell_type": "markdown", - "id": "4e211633", + "id": "0e9187a6", "metadata": {}, "source": [ "Prepare input images for object detection.\n", @@ -115,7 +115,7 @@ { "cell_type": "code", "execution_count": null, - "id": "b5520438", + "id": "722223e7", "metadata": {}, "outputs": [], "source": [ @@ -128,7 +128,7 @@ }, { "cell_type": "markdown", - "id": "952427fe", + "id": "d8d0c93f", "metadata": {}, "source": [ "Format the images to comply with the network input and convert them to tensor." @@ -137,7 +137,7 @@ { "cell_type": "code", "execution_count": null, - "id": "11a99b99", + "id": "00f422fb", "metadata": {}, "outputs": [], "source": [ @@ -147,7 +147,7 @@ }, { "cell_type": "markdown", - "id": "f3d89308", + "id": "dfe4b433", "metadata": {}, "source": [ "Run the SSD network to perform object detection." 
@@ -156,7 +156,7 @@ { "cell_type": "code", "execution_count": null, - "id": "9489953f", + "id": "5e358933", "metadata": {}, "outputs": [], "source": [ @@ -166,7 +166,7 @@ }, { "cell_type": "markdown", - "id": "62d5b47b", + "id": "e4c6c76b", "metadata": {}, "source": [ "By default, raw output from SSD network per input image contains\n", @@ -177,7 +177,7 @@ { "cell_type": "code", "execution_count": null, - "id": "897ec8f1", + "id": "6f2d74a6", "metadata": {}, "outputs": [], "source": [ @@ -187,7 +187,7 @@ }, { "cell_type": "markdown", - "id": "5649c506", + "id": "c5af6c78", "metadata": {}, "source": [ "The model was trained on COCO dataset, which we need to access in order to translate class IDs into object names.\n", @@ -197,7 +197,7 @@ { "cell_type": "code", "execution_count": null, - "id": "33096f43", + "id": "05aba236", "metadata": {}, "outputs": [], "source": [ @@ -206,7 +206,7 @@ }, { "cell_type": "markdown", - "id": "324c5a71", + "id": "85c5a626", "metadata": {}, "source": [ "Finally, let's visualize our detections" @@ -215,7 +215,7 @@ { "cell_type": "code", "execution_count": null, - "id": "5b12929b", + "id": "de00fb67", "metadata": {}, "outputs": [], "source": [ @@ -240,7 +240,7 @@ }, { "cell_type": "markdown", - "id": "96276363", + "id": "358e1ae2", "metadata": {}, "source": [ "### Details\n", diff --git a/assets/hub/nvidia_deeplearningexamples_tacotron2.ipynb b/assets/hub/nvidia_deeplearningexamples_tacotron2.ipynb index b922dbc7a436..4fffc9591ab0 100644 --- a/assets/hub/nvidia_deeplearningexamples_tacotron2.ipynb +++ b/assets/hub/nvidia_deeplearningexamples_tacotron2.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "1d2a5fd6", + "id": "e0e3d3c4", "metadata": {}, "source": [ "### This notebook requires a GPU runtime to run.\n", @@ -41,7 +41,7 @@ { "cell_type": "code", "execution_count": null, - "id": "c8e99b9c", + "id": "f041e30b", "metadata": {}, "outputs": [], "source": [ @@ -53,7 +53,7 @@ }, { "cell_type": "markdown", - "id": "767c188d", + "id": "9c3811b8", "metadata": {}, "source": [ "Load the Tacotron2 model pre-trained on [LJ Speech dataset](https://keithito.com/LJ-Speech-Dataset/) and prepare it for inference:" @@ -62,7 +62,7 @@ { "cell_type": "code", "execution_count": null, - "id": "13d5b887", + "id": "1882fd02", "metadata": {}, "outputs": [], "source": [ @@ -74,7 +74,7 @@ }, { "cell_type": "markdown", - "id": "b58c0c88", + "id": "b983374e", "metadata": {}, "source": [ "Load pretrained WaveGlow model" @@ -83,7 +83,7 @@ { "cell_type": "code", "execution_count": null, - "id": "a5451a54", + "id": "1f2d1c60", "metadata": {}, "outputs": [], "source": [ @@ -95,7 +95,7 @@ }, { "cell_type": "markdown", - "id": "7eba6377", + "id": "7cbddd69", "metadata": {}, "source": [ "Now, let's make the model say:" @@ -104,7 +104,7 @@ { "cell_type": "code", "execution_count": null, - "id": "e4b76dc5", + "id": "0aff3bf8", "metadata": {}, "outputs": [], "source": [ @@ -113,7 +113,7 @@ }, { "cell_type": "markdown", - "id": "2c6bed8a", + "id": "fc8b5e40", "metadata": {}, "source": [ "Format the input using utility methods" @@ -122,7 +122,7 @@ { "cell_type": "code", "execution_count": null, - "id": "e838d14d", + "id": "2473c538", "metadata": {}, "outputs": [], "source": [ @@ -132,7 +132,7 @@ }, { "cell_type": "markdown", - "id": "63801402", + "id": "f7e694c7", "metadata": {}, "source": [ "Run the chained models:" @@ -141,7 +141,7 @@ { "cell_type": "code", "execution_count": null, - "id": "19b611fa", + "id": "874ecd1e", "metadata": {}, "outputs": [], "source": [ @@ -154,7 +154,7 
@@ }, { "cell_type": "markdown", - "id": "77262942", + "id": "f80ca3ec", "metadata": {}, "source": [ "You can write it to a file and listen to it" @@ -163,7 +163,7 @@ { "cell_type": "code", "execution_count": null, - "id": "3eb8e532", + "id": "b6cfedfb", "metadata": {}, "outputs": [], "source": [ @@ -173,7 +173,7 @@ }, { "cell_type": "markdown", - "id": "a2592a3b", + "id": "4e0c3d14", "metadata": {}, "source": [ "Alternatively, play it right away in a notebook with IPython widgets" @@ -182,7 +182,7 @@ { "cell_type": "code", "execution_count": null, - "id": "b22c7a44", + "id": "cac99f65", "metadata": {}, "outputs": [], "source": [ @@ -192,7 +192,7 @@ }, { "cell_type": "markdown", - "id": "0fea6073", + "id": "e4e7d656", "metadata": {}, "source": [ "### Details\n", diff --git a/assets/hub/nvidia_deeplearningexamples_waveglow.ipynb b/assets/hub/nvidia_deeplearningexamples_waveglow.ipynb index 11533bee3866..bafefc93a500 100644 --- a/assets/hub/nvidia_deeplearningexamples_waveglow.ipynb +++ b/assets/hub/nvidia_deeplearningexamples_waveglow.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "c2e68cca", + "id": "4737738d", "metadata": {}, "source": [ "### This notebook requires a GPU runtime to run.\n", @@ -39,7 +39,7 @@ { "cell_type": "code", "execution_count": null, - "id": "762e0c43", + "id": "16144d7f", "metadata": {}, "outputs": [], "source": [ @@ -51,7 +51,7 @@ }, { "cell_type": "markdown", - "id": "c38795ea", + "id": "9058df91", "metadata": {}, "source": [ "Load the WaveGlow model pre-trained on [LJ Speech dataset](https://keithito.com/LJ-Speech-Dataset/)" @@ -60,7 +60,7 @@ { "cell_type": "code", "execution_count": null, - "id": "d860aa0e", + "id": "cfb19ffe", "metadata": {}, "outputs": [], "source": [ @@ -70,7 +70,7 @@ }, { "cell_type": "markdown", - "id": "83d10fab", + "id": "4161c0ad", "metadata": {}, "source": [ "Prepare the WaveGlow model for inference" @@ -79,7 +79,7 @@ { "cell_type": "code", "execution_count": null, - "id": "20b47dd6", + "id": "7bc84f49", "metadata": {}, "outputs": [], "source": [ @@ -90,7 +90,7 @@ }, { "cell_type": "markdown", - "id": "a091647e", + "id": "4d8154f0", "metadata": {}, "source": [ "Load a pretrained Tacotron2 model" @@ -99,7 +99,7 @@ { "cell_type": "code", "execution_count": null, - "id": "a342ba38", + "id": "45b294a9", "metadata": {}, "outputs": [], "source": [ @@ -110,7 +110,7 @@ }, { "cell_type": "markdown", - "id": "98c82f9d", + "id": "0e97546e", "metadata": {}, "source": [ "Now, let's make the model say:" @@ -119,7 +119,7 @@ { "cell_type": "code", "execution_count": null, - "id": "589d76ba", + "id": "e72c24e6", "metadata": {}, "outputs": [], "source": [ @@ -128,7 +128,7 @@ }, { "cell_type": "markdown", - "id": "db5e0026", + "id": "bd2667fd", "metadata": {}, "source": [ "Format the input using utility methods" @@ -137,7 +137,7 @@ { "cell_type": "code", "execution_count": null, - "id": "1f74382e", + "id": "b1b8d5ad", "metadata": {}, "outputs": [], "source": [ @@ -147,7 +147,7 @@ }, { "cell_type": "markdown", - "id": "2f5b4896", + "id": "303943c1", "metadata": {}, "source": [ "Run the chained models" @@ -156,7 +156,7 @@ { "cell_type": "code", "execution_count": null, - "id": "7be3aca9", + "id": "2615435e", "metadata": {}, "outputs": [], "source": [ @@ -169,7 +169,7 @@ }, { "cell_type": "markdown", - "id": "ad8869e1", + "id": "c270bfc3", "metadata": {}, "source": [ "You can write it to a file and listen to it" @@ -178,7 +178,7 @@ { "cell_type": "code", "execution_count": null, - "id": "15a0c6d0", + "id": "26825978", "metadata": {}, 
"outputs": [], "source": [ @@ -188,7 +188,7 @@ }, { "cell_type": "markdown", - "id": "c1771622", + "id": "14484a89", "metadata": {}, "source": [ "Alternatively, play it right away in a notebook with IPython widgets" @@ -197,7 +197,7 @@ { "cell_type": "code", "execution_count": null, - "id": "51e1dcd7", + "id": "4cc5f8f8", "metadata": {}, "outputs": [], "source": [ @@ -207,7 +207,7 @@ }, { "cell_type": "markdown", - "id": "4cfb5ba4", + "id": "427cf6bf", "metadata": {}, "source": [ "### Details\n", diff --git a/assets/hub/pytorch_fairseq_roberta.ipynb b/assets/hub/pytorch_fairseq_roberta.ipynb index 196068448cee..7ae7b041e31e 100644 --- a/assets/hub/pytorch_fairseq_roberta.ipynb +++ b/assets/hub/pytorch_fairseq_roberta.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "31ad7aa7", + "id": "94a60a92", "metadata": {}, "source": [ "### This notebook is optionally accelerated with a GPU runtime.\n", @@ -43,7 +43,7 @@ { "cell_type": "code", "execution_count": null, - "id": "121498f6", + "id": "862010ad", "metadata": {}, "outputs": [], "source": [ @@ -53,7 +53,7 @@ }, { "cell_type": "markdown", - "id": "d425fe97", + "id": "3aee90ba", "metadata": {}, "source": [ "### Example\n", @@ -64,7 +64,7 @@ { "cell_type": "code", "execution_count": null, - "id": "454e72ca", + "id": "d17bf5e4", "metadata": {}, "outputs": [], "source": [ @@ -75,7 +75,7 @@ }, { "cell_type": "markdown", - "id": "360318ab", + "id": "b3dd17f8", "metadata": {}, "source": [ "##### Apply Byte-Pair Encoding (BPE) to input text" @@ -84,7 +84,7 @@ { "cell_type": "code", "execution_count": null, - "id": "5d179e52", + "id": "21c10e8a", "metadata": {}, "outputs": [], "source": [ @@ -95,7 +95,7 @@ }, { "cell_type": "markdown", - "id": "6b8994a9", + "id": "75dd8336", "metadata": {}, "source": [ "##### Extract features from RoBERTa" @@ -104,7 +104,7 @@ { "cell_type": "code", "execution_count": null, - "id": "d57b6500", + "id": "53830c5b", "metadata": {}, "outputs": [], "source": [ @@ -120,7 +120,7 @@ }, { "cell_type": "markdown", - "id": "85168502", + "id": "22d492d5", "metadata": {}, "source": [ "##### Use RoBERTa for sentence-pair classification tasks" @@ -129,7 +129,7 @@ { "cell_type": "code", "execution_count": null, - "id": "9a220ccb", + "id": "bccc692e", "metadata": {}, "outputs": [], "source": [ @@ -151,7 +151,7 @@ }, { "cell_type": "markdown", - "id": "f6f34c54", + "id": "3fc1a321", "metadata": {}, "source": [ "##### Register a new (randomly initialized) classification head" @@ -160,7 +160,7 @@ { "cell_type": "code", "execution_count": null, - "id": "fda20e6e", + "id": "338f0cce", "metadata": {}, "outputs": [], "source": [ @@ -170,7 +170,7 @@ }, { "cell_type": "markdown", - "id": "4190649b", + "id": "e2d53b85", "metadata": {}, "source": [ "### References\n", diff --git a/assets/hub/pytorch_fairseq_translation.ipynb b/assets/hub/pytorch_fairseq_translation.ipynb index 62a04e2d49df..12fc64602aaf 100644 --- a/assets/hub/pytorch_fairseq_translation.ipynb +++ b/assets/hub/pytorch_fairseq_translation.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "03a70c24", + "id": "a0ef562f", "metadata": {}, "source": [ "### This notebook is optionally accelerated with a GPU runtime.\n", @@ -37,7 +37,7 @@ { "cell_type": "code", "execution_count": null, - "id": "20bcbe45", + "id": "11769181", "metadata": {}, "outputs": [], "source": [ @@ -47,7 +47,7 @@ }, { "cell_type": "markdown", - "id": "10a36215", + "id": "d8d3f273", "metadata": {}, "source": [ "### English-to-French Translation\n", @@ -59,7 +59,7 @@ { "cell_type": 
"code", "execution_count": null, - "id": "ce42e2ad", + "id": "8c417e7d", "metadata": {}, "outputs": [], "source": [ @@ -101,7 +101,7 @@ }, { "cell_type": "markdown", - "id": "8409f70a", + "id": "8479dcc8", "metadata": {}, "source": [ "### English-to-German Translation\n", @@ -123,7 +123,7 @@ { "cell_type": "code", "execution_count": null, - "id": "70c5ad1a", + "id": "9da22e93", "metadata": {}, "outputs": [], "source": [ @@ -142,7 +142,7 @@ }, { "cell_type": "markdown", - "id": "f625ff49", + "id": "36b4666a", "metadata": {}, "source": [ "We can also do a round-trip translation to create a paraphrase:" @@ -151,7 +151,7 @@ { "cell_type": "code", "execution_count": null, - "id": "49c2eb96", + "id": "bf89c380", "metadata": {}, "outputs": [], "source": [ @@ -172,7 +172,7 @@ }, { "cell_type": "markdown", - "id": "a85bc1a5", + "id": "bcb08298", "metadata": {}, "source": [ "### References\n", diff --git a/assets/hub/pytorch_vision_alexnet.ipynb b/assets/hub/pytorch_vision_alexnet.ipynb index 36f035736f7e..f4c622962c42 100644 --- a/assets/hub/pytorch_vision_alexnet.ipynb +++ b/assets/hub/pytorch_vision_alexnet.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "16626dc1", + "id": "3c885dd5", "metadata": {}, "source": [ "### This notebook is optionally accelerated with a GPU runtime.\n", @@ -24,7 +24,7 @@ { "cell_type": "code", "execution_count": null, - "id": "95d8ebe4", + "id": "36e46ac5", "metadata": {}, "outputs": [], "source": [ @@ -35,7 +35,7 @@ }, { "cell_type": "markdown", - "id": "c2953f18", + "id": "abf97661", "metadata": {}, "source": [ "All pre-trained models expect input images normalized in the same way,\n", @@ -49,7 +49,7 @@ { "cell_type": "code", "execution_count": null, - "id": "308fc9cb", + "id": "65d9054a", "metadata": {}, "outputs": [], "source": [ @@ -63,7 +63,7 @@ { "cell_type": "code", "execution_count": null, - "id": "682c4582", + "id": "e624f28d", "metadata": {}, "outputs": [], "source": [ @@ -97,7 +97,7 @@ { "cell_type": "code", "execution_count": null, - "id": "163b43f5", + "id": "e2a34643", "metadata": {}, "outputs": [], "source": [ @@ -108,7 +108,7 @@ { "cell_type": "code", "execution_count": null, - "id": "d5883fd0", + "id": "617e0700", "metadata": {}, "outputs": [], "source": [ @@ -123,7 +123,7 @@ }, { "cell_type": "markdown", - "id": "449f01a3", + "id": "c21ae5c4", "metadata": {}, "source": [ "### Model Description\n", diff --git a/assets/hub/pytorch_vision_deeplabv3_resnet101.ipynb b/assets/hub/pytorch_vision_deeplabv3_resnet101.ipynb index 633e2c71cd2b..91144ad0f438 100644 --- a/assets/hub/pytorch_vision_deeplabv3_resnet101.ipynb +++ b/assets/hub/pytorch_vision_deeplabv3_resnet101.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "acd9bbb8", + "id": "091cbcb7", "metadata": {}, "source": [ "### This notebook is optionally accelerated with a GPU runtime.\n", @@ -24,7 +24,7 @@ { "cell_type": "code", "execution_count": null, - "id": "74540f95", + "id": "ce0f1a10", "metadata": {}, "outputs": [], "source": [ @@ -38,7 +38,7 @@ }, { "cell_type": "markdown", - "id": "3faa89a5", + "id": "64e97cbc", "metadata": {}, "source": [ "All pre-trained models expect input images normalized in the same way,\n", @@ -54,7 +54,7 @@ { "cell_type": "code", "execution_count": null, - "id": "fc820f6a", + "id": "4401a5ee", "metadata": {}, "outputs": [], "source": [ @@ -68,7 +68,7 @@ { "cell_type": "code", "execution_count": null, - "id": "d5a87afa", + "id": "92a05765", "metadata": {}, "outputs": [], "source": [ @@ -97,7 +97,7 @@ }, { "cell_type": "markdown", - 
"id": "ab84a767", + "id": "35151109", "metadata": {}, "source": [ "The output here is of shape `(21, H, W)`, and at each location, there are unnormalized probabilities corresponding to the prediction of each class.\n", @@ -109,7 +109,7 @@ { "cell_type": "code", "execution_count": null, - "id": "c59aae4b", + "id": "83d14a8a", "metadata": {}, "outputs": [], "source": [ @@ -129,7 +129,7 @@ }, { "cell_type": "markdown", - "id": "13df020f", + "id": "a60195d1", "metadata": {}, "source": [ "### Model Description\n", diff --git a/assets/hub/pytorch_vision_densenet.ipynb b/assets/hub/pytorch_vision_densenet.ipynb index 818a9867b715..48c3160bd82e 100644 --- a/assets/hub/pytorch_vision_densenet.ipynb +++ b/assets/hub/pytorch_vision_densenet.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "0f8e983f", + "id": "76708fd0", "metadata": {}, "source": [ "### This notebook is optionally accelerated with a GPU runtime.\n", @@ -24,7 +24,7 @@ { "cell_type": "code", "execution_count": null, - "id": "ac57fe2c", + "id": "cf3fe541", "metadata": {}, "outputs": [], "source": [ @@ -39,7 +39,7 @@ }, { "cell_type": "markdown", - "id": "3a13e61a", + "id": "dcc580e7", "metadata": {}, "source": [ "All pre-trained models expect input images normalized in the same way,\n", @@ -53,7 +53,7 @@ { "cell_type": "code", "execution_count": null, - "id": "7796387d", + "id": "8512c10f", "metadata": {}, "outputs": [], "source": [ @@ -67,7 +67,7 @@ { "cell_type": "code", "execution_count": null, - "id": "5c75871f", + "id": "2ba264fe", "metadata": {}, "outputs": [], "source": [ @@ -101,7 +101,7 @@ { "cell_type": "code", "execution_count": null, - "id": "6ccdd13c", + "id": "2df44bc5", "metadata": {}, "outputs": [], "source": [ @@ -112,7 +112,7 @@ { "cell_type": "code", "execution_count": null, - "id": "5ced19d3", + "id": "8e45515e", "metadata": {}, "outputs": [], "source": [ @@ -127,7 +127,7 @@ }, { "cell_type": "markdown", - "id": "225e1c26", + "id": "06de3acb", "metadata": {}, "source": [ "### Model Description\n", diff --git a/assets/hub/pytorch_vision_fcn_resnet101.ipynb b/assets/hub/pytorch_vision_fcn_resnet101.ipynb index 23cd72afec3a..019df15255b9 100644 --- a/assets/hub/pytorch_vision_fcn_resnet101.ipynb +++ b/assets/hub/pytorch_vision_fcn_resnet101.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "65c8d9c9", + "id": "ef2032c1", "metadata": {}, "source": [ "### This notebook is optionally accelerated with a GPU runtime.\n", @@ -24,7 +24,7 @@ { "cell_type": "code", "execution_count": null, - "id": "3ba547ac", + "id": "6b86c3e5", "metadata": {}, "outputs": [], "source": [ @@ -37,7 +37,7 @@ }, { "cell_type": "markdown", - "id": "5b7fde87", + "id": "0a5de13c", "metadata": {}, "source": [ "All pre-trained models expect input images normalized in the same way,\n", @@ -53,7 +53,7 @@ { "cell_type": "code", "execution_count": null, - "id": "1853b36e", + "id": "3e479d46", "metadata": {}, "outputs": [], "source": [ @@ -67,7 +67,7 @@ { "cell_type": "code", "execution_count": null, - "id": "4b80c169", + "id": "d7e777d9", "metadata": {}, "outputs": [], "source": [ @@ -96,7 +96,7 @@ }, { "cell_type": "markdown", - "id": "7436a26c", + "id": "901192f4", "metadata": {}, "source": [ "The output here is of shape `(21, H, W)`, and at each location, there are unnormalized probabilities corresponding to the prediction of each class.\n", @@ -108,7 +108,7 @@ { "cell_type": "code", "execution_count": null, - "id": "7cd0b22c", + "id": "5773dc62", "metadata": {}, "outputs": [], "source": [ @@ -128,7 +128,7 @@ }, { 
"cell_type": "markdown", - "id": "9e77c06f", + "id": "c7a9fb19", "metadata": {}, "source": [ "### Model Description\n", diff --git a/assets/hub/pytorch_vision_ghostnet.ipynb b/assets/hub/pytorch_vision_ghostnet.ipynb index 9acaf46345e3..d43d92ce3b10 100644 --- a/assets/hub/pytorch_vision_ghostnet.ipynb +++ b/assets/hub/pytorch_vision_ghostnet.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "161def7a", + "id": "a295a3ca", "metadata": {}, "source": [ "### This notebook is optionally accelerated with a GPU runtime.\n", @@ -22,7 +22,7 @@ { "cell_type": "code", "execution_count": null, - "id": "17689f73", + "id": "8b08a9ba", "metadata": {}, "outputs": [], "source": [ @@ -33,7 +33,7 @@ }, { "cell_type": "markdown", - "id": "ee748fa7", + "id": "090c64d8", "metadata": {}, "source": [ "All pre-trained models expect input images normalized in the same way,\n", @@ -47,7 +47,7 @@ { "cell_type": "code", "execution_count": null, - "id": "ec8611d2", + "id": "dc642178", "metadata": {}, "outputs": [], "source": [ @@ -61,7 +61,7 @@ { "cell_type": "code", "execution_count": null, - "id": "2f1d5893", + "id": "581701ff", "metadata": {}, "outputs": [], "source": [ @@ -95,7 +95,7 @@ { "cell_type": "code", "execution_count": null, - "id": "bb1dba4d", + "id": "3cb0ee89", "metadata": {}, "outputs": [], "source": [ @@ -106,7 +106,7 @@ { "cell_type": "code", "execution_count": null, - "id": "ebf31050", + "id": "ef7e4a50", "metadata": {}, "outputs": [], "source": [ @@ -121,7 +121,7 @@ }, { "cell_type": "markdown", - "id": "c29b10c6", + "id": "fb9a2922", "metadata": {}, "source": [ "### Model Description\n", diff --git a/assets/hub/pytorch_vision_googlenet.ipynb b/assets/hub/pytorch_vision_googlenet.ipynb index 8b23f90001ca..f2624747e375 100644 --- a/assets/hub/pytorch_vision_googlenet.ipynb +++ b/assets/hub/pytorch_vision_googlenet.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "f1a7b60a", + "id": "265124d8", "metadata": {}, "source": [ "### This notebook is optionally accelerated with a GPU runtime.\n", @@ -24,7 +24,7 @@ { "cell_type": "code", "execution_count": null, - "id": "02901d68", + "id": "0814b75a", "metadata": {}, "outputs": [], "source": [ @@ -35,7 +35,7 @@ }, { "cell_type": "markdown", - "id": "efb92e8a", + "id": "1800f10c", "metadata": {}, "source": [ "All pre-trained models expect input images normalized in the same way,\n", @@ -49,7 +49,7 @@ { "cell_type": "code", "execution_count": null, - "id": "c620d5b5", + "id": "a64461f1", "metadata": {}, "outputs": [], "source": [ @@ -63,7 +63,7 @@ { "cell_type": "code", "execution_count": null, - "id": "918dd743", + "id": "730ca989", "metadata": {}, "outputs": [], "source": [ @@ -97,7 +97,7 @@ { "cell_type": "code", "execution_count": null, - "id": "5b376b6c", + "id": "18e3d059", "metadata": {}, "outputs": [], "source": [ @@ -108,7 +108,7 @@ { "cell_type": "code", "execution_count": null, - "id": "7fad7b71", + "id": "182cbdbc", "metadata": {}, "outputs": [], "source": [ @@ -123,7 +123,7 @@ }, { "cell_type": "markdown", - "id": "6f2ef850", + "id": "c383b4bf", "metadata": {}, "source": [ "### Model Description\n", diff --git a/assets/hub/pytorch_vision_hardnet.ipynb b/assets/hub/pytorch_vision_hardnet.ipynb index 6424d5154b15..6cd8a709f1e7 100644 --- a/assets/hub/pytorch_vision_hardnet.ipynb +++ b/assets/hub/pytorch_vision_hardnet.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "6a281ce0", + "id": "38cc7d97", "metadata": {}, "source": [ "### This notebook is optionally accelerated with a GPU runtime.\n", 
@@ -24,7 +24,7 @@ { "cell_type": "code", "execution_count": null, - "id": "907b9af6", + "id": "9cc02677", "metadata": {}, "outputs": [], "source": [ @@ -39,7 +39,7 @@ }, { "cell_type": "markdown", - "id": "9af0f110", + "id": "479bc5da", "metadata": {}, "source": [ "All pre-trained models expect input images normalized in the same way,\n", @@ -53,7 +53,7 @@ { "cell_type": "code", "execution_count": null, - "id": "3fa2cf0e", + "id": "b636068c", "metadata": {}, "outputs": [], "source": [ @@ -67,7 +67,7 @@ { "cell_type": "code", "execution_count": null, - "id": "44b7eeb5", + "id": "2d962bfc", "metadata": {}, "outputs": [], "source": [ @@ -101,7 +101,7 @@ { "cell_type": "code", "execution_count": null, - "id": "3799c18e", + "id": "bf29afeb", "metadata": {}, "outputs": [], "source": [ @@ -112,7 +112,7 @@ { "cell_type": "code", "execution_count": null, - "id": "e5f4b725", + "id": "d117ce33", "metadata": {}, "outputs": [], "source": [ @@ -127,7 +127,7 @@ }, { "cell_type": "markdown", - "id": "abf47544", + "id": "906f320d", "metadata": {}, "source": [ "### Model Description\n", diff --git a/assets/hub/pytorch_vision_ibnnet.ipynb b/assets/hub/pytorch_vision_ibnnet.ipynb index 382524a87a4d..8b5d86fa25f8 100644 --- a/assets/hub/pytorch_vision_ibnnet.ipynb +++ b/assets/hub/pytorch_vision_ibnnet.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "4f13635a", + "id": "60422909", "metadata": {}, "source": [ "### This notebook is optionally accelerated with a GPU runtime.\n", @@ -22,7 +22,7 @@ { "cell_type": "code", "execution_count": null, - "id": "56a4dcb3", + "id": "066eedcc", "metadata": {}, "outputs": [], "source": [ @@ -33,7 +33,7 @@ }, { "cell_type": "markdown", - "id": "c398f0ba", + "id": "a57b711a", "metadata": {}, "source": [ "All pre-trained models expect input images normalized in the same way,\n", @@ -47,7 +47,7 @@ { "cell_type": "code", "execution_count": null, - "id": "fd070cbd", + "id": "e8b72a02", "metadata": {}, "outputs": [], "source": [ @@ -61,7 +61,7 @@ { "cell_type": "code", "execution_count": null, - "id": "2d54e05b", + "id": "98283199", "metadata": {}, "outputs": [], "source": [ @@ -95,7 +95,7 @@ { "cell_type": "code", "execution_count": null, - "id": "cd7ea50b", + "id": "cb6ff6a8", "metadata": {}, "outputs": [], "source": [ @@ -106,7 +106,7 @@ { "cell_type": "code", "execution_count": null, - "id": "75345899", + "id": "897b2490", "metadata": {}, "outputs": [], "source": [ @@ -121,7 +121,7 @@ }, { "cell_type": "markdown", - "id": "326578b9", + "id": "84cd3475", "metadata": {}, "source": [ "### Model Description\n", diff --git a/assets/hub/pytorch_vision_inception_v3.ipynb b/assets/hub/pytorch_vision_inception_v3.ipynb index 92c99f2af61f..715ee584fb8e 100644 --- a/assets/hub/pytorch_vision_inception_v3.ipynb +++ b/assets/hub/pytorch_vision_inception_v3.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "f2c74b7c", + "id": "dc51d9d5", "metadata": {}, "source": [ "### This notebook is optionally accelerated with a GPU runtime.\n", @@ -22,7 +22,7 @@ { "cell_type": "code", "execution_count": null, - "id": "84dbf1bd", + "id": "4df2a343", "metadata": {}, "outputs": [], "source": [ @@ -33,7 +33,7 @@ }, { "cell_type": "markdown", - "id": "a6388089", + "id": "85dc5701", "metadata": {}, "source": [ "All pre-trained models expect input images normalized in the same way,\n", @@ -47,7 +47,7 @@ { "cell_type": "code", "execution_count": null, - "id": "69feb11a", + "id": "9bca7a4a", "metadata": {}, "outputs": [], "source": [ @@ -61,7 +61,7 @@ { "cell_type": "code", 
"execution_count": null, - "id": "2c57f09c", + "id": "1be3ccb3", "metadata": {}, "outputs": [], "source": [ @@ -95,7 +95,7 @@ { "cell_type": "code", "execution_count": null, - "id": "9877fa26", + "id": "c084cb06", "metadata": {}, "outputs": [], "source": [ @@ -106,7 +106,7 @@ { "cell_type": "code", "execution_count": null, - "id": "435fd085", + "id": "df58bd8b", "metadata": {}, "outputs": [], "source": [ @@ -121,7 +121,7 @@ }, { "cell_type": "markdown", - "id": "a6a360d3", + "id": "925cc221", "metadata": {}, "source": [ "### Model Description\n", diff --git a/assets/hub/pytorch_vision_meal_v2.ipynb b/assets/hub/pytorch_vision_meal_v2.ipynb index b6ced128e15e..dc69aa6646ac 100644 --- a/assets/hub/pytorch_vision_meal_v2.ipynb +++ b/assets/hub/pytorch_vision_meal_v2.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "78518d9d", + "id": "69e7c335", "metadata": {}, "source": [ "### This notebook requires a GPU runtime to run.\n", @@ -27,7 +27,7 @@ { "cell_type": "code", "execution_count": null, - "id": "db36ecf4", + "id": "478cec56", "metadata": {}, "outputs": [], "source": [ @@ -38,7 +38,7 @@ { "cell_type": "code", "execution_count": null, - "id": "4647ee28", + "id": "144635e0", "metadata": {}, "outputs": [], "source": [ @@ -51,7 +51,7 @@ }, { "cell_type": "markdown", - "id": "667b27b2", + "id": "379d9d7f", "metadata": {}, "source": [ "All pre-trained models expect input images normalized in the same way,\n", @@ -65,7 +65,7 @@ { "cell_type": "code", "execution_count": null, - "id": "58111d27", + "id": "331e41b6", "metadata": {}, "outputs": [], "source": [ @@ -79,7 +79,7 @@ { "cell_type": "code", "execution_count": null, - "id": "edb71391", + "id": "8434e911", "metadata": {}, "outputs": [], "source": [ @@ -113,7 +113,7 @@ { "cell_type": "code", "execution_count": null, - "id": "af62952b", + "id": "726b8b7b", "metadata": {}, "outputs": [], "source": [ @@ -124,7 +124,7 @@ { "cell_type": "code", "execution_count": null, - "id": "f5eeb7f8", + "id": "dc36f79f", "metadata": {}, "outputs": [], "source": [ @@ -139,7 +139,7 @@ }, { "cell_type": "markdown", - "id": "05b99954", + "id": "f2a96be3", "metadata": {}, "source": [ "### Model Description\n", @@ -167,7 +167,7 @@ { "cell_type": "code", "execution_count": null, - "id": "9b70eb90", + "id": "58b0bd08", "metadata": {}, "outputs": [], "source": [ @@ -181,7 +181,7 @@ }, { "cell_type": "markdown", - "id": "99f86567", + "id": "85824670", "metadata": {}, "source": [ "@inproceedings{shen2019MEAL,\n", diff --git a/assets/hub/pytorch_vision_mobilenet_v2.ipynb b/assets/hub/pytorch_vision_mobilenet_v2.ipynb index 802303469a2a..1ea6f0f67d17 100644 --- a/assets/hub/pytorch_vision_mobilenet_v2.ipynb +++ b/assets/hub/pytorch_vision_mobilenet_v2.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "7953a328", + "id": "26aa9f97", "metadata": {}, "source": [ "### This notebook is optionally accelerated with a GPU runtime.\n", @@ -24,7 +24,7 @@ { "cell_type": "code", "execution_count": null, - "id": "317ec13f", + "id": "4423fb1f", "metadata": {}, "outputs": [], "source": [ @@ -35,7 +35,7 @@ }, { "cell_type": "markdown", - "id": "ef53f37a", + "id": "d951fc23", "metadata": {}, "source": [ "All pre-trained models expect input images normalized in the same way,\n", @@ -49,7 +49,7 @@ { "cell_type": "code", "execution_count": null, - "id": "4588a23a", + "id": "d6a078f9", "metadata": {}, "outputs": [], "source": [ @@ -63,7 +63,7 @@ { "cell_type": "code", "execution_count": null, - "id": "380ccc04", + "id": "66d7f82b", "metadata": {}, "outputs": 
[], "source": [ @@ -97,7 +97,7 @@ { "cell_type": "code", "execution_count": null, - "id": "60e2d2cd", + "id": "5cdc2653", "metadata": {}, "outputs": [], "source": [ @@ -108,7 +108,7 @@ { "cell_type": "code", "execution_count": null, - "id": "b1be69f6", + "id": "6d95262c", "metadata": {}, "outputs": [], "source": [ @@ -123,7 +123,7 @@ }, { "cell_type": "markdown", - "id": "61eef719", + "id": "8a13bc6b", "metadata": {}, "source": [ "### Model Description\n", diff --git a/assets/hub/pytorch_vision_once_for_all.ipynb b/assets/hub/pytorch_vision_once_for_all.ipynb index 3b8a36b12698..cb468dcf8164 100644 --- a/assets/hub/pytorch_vision_once_for_all.ipynb +++ b/assets/hub/pytorch_vision_once_for_all.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "9daeccba", + "id": "39d74515", "metadata": {}, "source": [ "### This notebook is optionally accelerated with a GPU runtime.\n", @@ -29,7 +29,7 @@ { "cell_type": "code", "execution_count": null, - "id": "e2920106", + "id": "915406d2", "metadata": {}, "outputs": [], "source": [ @@ -45,7 +45,7 @@ }, { "cell_type": "markdown", - "id": "fa6bf8c3", + "id": "a6b4baf5", "metadata": {}, "source": [ "| OFA Network | Design Space | Resolution | Width Multiplier | Depth | Expand Ratio | kernel Size | \n", @@ -62,7 +62,7 @@ { "cell_type": "code", "execution_count": null, - "id": "44318a1a", + "id": "1fbb128a", "metadata": {}, "outputs": [], "source": [ @@ -77,7 +77,7 @@ }, { "cell_type": "markdown", - "id": "ab3fae3c", + "id": "fa1c85fa", "metadata": {}, "source": [ "### Get Specialized Architecture" @@ -86,7 +86,7 @@ { "cell_type": "code", "execution_count": null, - "id": "97fe0aac", + "id": "141ce42e", "metadata": {}, "outputs": [], "source": [ @@ -101,7 +101,7 @@ }, { "cell_type": "markdown", - "id": "5c941f15", + "id": "62131db8", "metadata": {}, "source": [ "More models and configurations can be found in [once-for-all/model-zoo](https://github.com/mit-han-lab/once-for-all#evaluate-1)\n", @@ -111,7 +111,7 @@ { "cell_type": "code", "execution_count": null, - "id": "2d577955", + "id": "3e2ce7a1", "metadata": {}, "outputs": [], "source": [ @@ -122,7 +122,7 @@ }, { "cell_type": "markdown", - "id": "f4490eab", + "id": "4c775b82", "metadata": {}, "source": [ "The model's prediction can be evalutaed by" @@ -131,7 +131,7 @@ { "cell_type": "code", "execution_count": null, - "id": "db41275c", + "id": "66ce5d71", "metadata": {}, "outputs": [], "source": [ @@ -173,7 +173,7 @@ }, { "cell_type": "markdown", - "id": "95dbd248", + "id": "e01481b9", "metadata": {}, "source": [ "### Model Description\n", @@ -189,7 +189,7 @@ { "cell_type": "code", "execution_count": null, - "id": "a46493e4", + "id": "2ba18e84", "metadata": {}, "outputs": [], "source": [ diff --git a/assets/hub/pytorch_vision_proxylessnas.ipynb b/assets/hub/pytorch_vision_proxylessnas.ipynb index 5fb18284a6bf..fb3148f49baa 100644 --- a/assets/hub/pytorch_vision_proxylessnas.ipynb +++ b/assets/hub/pytorch_vision_proxylessnas.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "5373c917", + "id": "a391a344", "metadata": {}, "source": [ "### This notebook is optionally accelerated with a GPU runtime.\n", @@ -22,7 +22,7 @@ { "cell_type": "code", "execution_count": null, - "id": "4e082662", + "id": "99601745", "metadata": {}, "outputs": [], "source": [ @@ -35,7 +35,7 @@ }, { "cell_type": "markdown", - "id": "c3c04d33", + "id": "2325f09c", "metadata": {}, "source": [ "All pre-trained models expect input images normalized in the same way,\n", @@ -49,7 +49,7 @@ { "cell_type": "code", 
"execution_count": null, - "id": "9e5fc0f1", + "id": "4dae43ad", "metadata": {}, "outputs": [], "source": [ @@ -63,7 +63,7 @@ { "cell_type": "code", "execution_count": null, - "id": "7de307ea", + "id": "5ccf8eaf", "metadata": {}, "outputs": [], "source": [ @@ -97,7 +97,7 @@ { "cell_type": "code", "execution_count": null, - "id": "9815a194", + "id": "e1ea05a3", "metadata": {}, "outputs": [], "source": [ @@ -108,7 +108,7 @@ { "cell_type": "code", "execution_count": null, - "id": "4444854e", + "id": "17f6fe00", "metadata": {}, "outputs": [], "source": [ @@ -123,7 +123,7 @@ }, { "cell_type": "markdown", - "id": "88f51718", + "id": "646e2bfb", "metadata": {}, "source": [ "### Model Description\n", diff --git a/assets/hub/pytorch_vision_resnest.ipynb b/assets/hub/pytorch_vision_resnest.ipynb index 194fc5c96d4a..26f8ecc7eca5 100644 --- a/assets/hub/pytorch_vision_resnest.ipynb +++ b/assets/hub/pytorch_vision_resnest.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "abca1a5a", + "id": "f6ad132d", "metadata": {}, "source": [ "### This notebook is optionally accelerated with a GPU runtime.\n", @@ -22,7 +22,7 @@ { "cell_type": "code", "execution_count": null, - "id": "a2e95264", + "id": "5941a9cc", "metadata": {}, "outputs": [], "source": [ @@ -36,7 +36,7 @@ }, { "cell_type": "markdown", - "id": "befe843f", + "id": "8463e678", "metadata": {}, "source": [ "All pre-trained models expect input images normalized in the same way,\n", @@ -50,7 +50,7 @@ { "cell_type": "code", "execution_count": null, - "id": "4dcfce53", + "id": "cc3c3c8c", "metadata": {}, "outputs": [], "source": [ @@ -64,7 +64,7 @@ { "cell_type": "code", "execution_count": null, - "id": "5dab18e5", + "id": "2f979882", "metadata": {}, "outputs": [], "source": [ @@ -98,7 +98,7 @@ { "cell_type": "code", "execution_count": null, - "id": "63b96316", + "id": "95be068b", "metadata": {}, "outputs": [], "source": [ @@ -109,7 +109,7 @@ { "cell_type": "code", "execution_count": null, - "id": "eae10c0e", + "id": "5ac52421", "metadata": {}, "outputs": [], "source": [ @@ -124,7 +124,7 @@ }, { "cell_type": "markdown", - "id": "5a3e8db4", + "id": "e9ac388e", "metadata": {}, "source": [ "### Model Description\n", diff --git a/assets/hub/pytorch_vision_resnet.ipynb b/assets/hub/pytorch_vision_resnet.ipynb index 4a19f442667e..36629aad573c 100644 --- a/assets/hub/pytorch_vision_resnet.ipynb +++ b/assets/hub/pytorch_vision_resnet.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "d8087c15", + "id": "1c0aaf86", "metadata": {}, "source": [ "### This notebook is optionally accelerated with a GPU runtime.\n", @@ -22,7 +22,7 @@ { "cell_type": "code", "execution_count": null, - "id": "41a2a36a", + "id": "bc98d2fd", "metadata": {}, "outputs": [], "source": [ @@ -38,7 +38,7 @@ }, { "cell_type": "markdown", - "id": "92097d39", + "id": "80a2cf64", "metadata": {}, "source": [ "All pre-trained models expect input images normalized in the same way,\n", @@ -52,7 +52,7 @@ { "cell_type": "code", "execution_count": null, - "id": "c14501e7", + "id": "79ced132", "metadata": {}, "outputs": [], "source": [ @@ -66,7 +66,7 @@ { "cell_type": "code", "execution_count": null, - "id": "9028552b", + "id": "94b15881", "metadata": {}, "outputs": [], "source": [ @@ -100,7 +100,7 @@ { "cell_type": "code", "execution_count": null, - "id": "67497b46", + "id": "fbecef47", "metadata": {}, "outputs": [], "source": [ @@ -111,7 +111,7 @@ { "cell_type": "code", "execution_count": null, - "id": "29fa3c75", + "id": "f0271a30", "metadata": {}, "outputs": [], 
"source": [ @@ -126,7 +126,7 @@ }, { "cell_type": "markdown", - "id": "49085582", + "id": "3647fd75", "metadata": {}, "source": [ "### Model Description\n", diff --git a/assets/hub/pytorch_vision_resnext.ipynb b/assets/hub/pytorch_vision_resnext.ipynb index 7987db9b398a..bbed5e9bc0c5 100644 --- a/assets/hub/pytorch_vision_resnext.ipynb +++ b/assets/hub/pytorch_vision_resnext.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "35e27b7b", + "id": "24d91fcc", "metadata": {}, "source": [ "### This notebook is optionally accelerated with a GPU runtime.\n", @@ -22,7 +22,7 @@ { "cell_type": "code", "execution_count": null, - "id": "1a2a41b3", + "id": "d0214938", "metadata": {}, "outputs": [], "source": [ @@ -35,7 +35,7 @@ }, { "cell_type": "markdown", - "id": "01946ced", + "id": "628cd867", "metadata": {}, "source": [ "All pre-trained models expect input images normalized in the same way,\n", @@ -49,7 +49,7 @@ { "cell_type": "code", "execution_count": null, - "id": "cac096b4", + "id": "d3d0cc3c", "metadata": {}, "outputs": [], "source": [ @@ -63,7 +63,7 @@ { "cell_type": "code", "execution_count": null, - "id": "40d0f659", + "id": "d693cdf1", "metadata": {}, "outputs": [], "source": [ @@ -97,7 +97,7 @@ { "cell_type": "code", "execution_count": null, - "id": "55a9a1d8", + "id": "4060cee4", "metadata": {}, "outputs": [], "source": [ @@ -108,7 +108,7 @@ { "cell_type": "code", "execution_count": null, - "id": "f3866e62", + "id": "27a812c9", "metadata": {}, "outputs": [], "source": [ @@ -125,7 +125,7 @@ }, { "cell_type": "markdown", - "id": "040ccd09", + "id": "18460454", "metadata": {}, "source": [ "### Model Description\n", diff --git a/assets/hub/pytorch_vision_shufflenet_v2.ipynb b/assets/hub/pytorch_vision_shufflenet_v2.ipynb index 2d2beca20add..5e25f34a6b66 100644 --- a/assets/hub/pytorch_vision_shufflenet_v2.ipynb +++ b/assets/hub/pytorch_vision_shufflenet_v2.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "95ff55c3", + "id": "2a7af770", "metadata": {}, "source": [ "### This notebook is optionally accelerated with a GPU runtime.\n", @@ -24,7 +24,7 @@ { "cell_type": "code", "execution_count": null, - "id": "fc271b95", + "id": "b0c9c160", "metadata": {}, "outputs": [], "source": [ @@ -35,7 +35,7 @@ }, { "cell_type": "markdown", - "id": "22e21ca4", + "id": "670d3296", "metadata": {}, "source": [ "All pre-trained models expect input images normalized in the same way,\n", @@ -49,7 +49,7 @@ { "cell_type": "code", "execution_count": null, - "id": "fbce3848", + "id": "acb2705f", "metadata": {}, "outputs": [], "source": [ @@ -63,7 +63,7 @@ { "cell_type": "code", "execution_count": null, - "id": "59c8d3fb", + "id": "dfb380fe", "metadata": {}, "outputs": [], "source": [ @@ -97,7 +97,7 @@ { "cell_type": "code", "execution_count": null, - "id": "7c543e92", + "id": "da43c9ac", "metadata": {}, "outputs": [], "source": [ @@ -108,7 +108,7 @@ { "cell_type": "code", "execution_count": null, - "id": "7e173469", + "id": "bef76571", "metadata": {}, "outputs": [], "source": [ @@ -123,7 +123,7 @@ }, { "cell_type": "markdown", - "id": "5e5ccceb", + "id": "3ebbabcd", "metadata": {}, "source": [ "### Model Description\n", diff --git a/assets/hub/pytorch_vision_snnmlp.ipynb b/assets/hub/pytorch_vision_snnmlp.ipynb index 9601cc96206a..2e98cdfb5f81 100644 --- a/assets/hub/pytorch_vision_snnmlp.ipynb +++ b/assets/hub/pytorch_vision_snnmlp.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "87c35091", + "id": "26001b9c", "metadata": {}, "source": [ "### This notebook is 
optionally accelerated with a GPU runtime.\n", @@ -22,7 +22,7 @@ { "cell_type": "code", "execution_count": null, - "id": "a9dc31a1", + "id": "8aefcce7", "metadata": {}, "outputs": [], "source": [ @@ -37,7 +37,7 @@ }, { "cell_type": "markdown", - "id": "33554d77", + "id": "6e3587dc", "metadata": {}, "source": [ "All pre-trained models expect input images normalized in the same way,\n", @@ -51,7 +51,7 @@ { "cell_type": "code", "execution_count": null, - "id": "ddc30dfe", + "id": "05c10bb6", "metadata": {}, "outputs": [], "source": [ @@ -65,7 +65,7 @@ { "cell_type": "code", "execution_count": null, - "id": "c51f09db", + "id": "c548b9d3", "metadata": {}, "outputs": [], "source": [ @@ -97,7 +97,7 @@ }, { "cell_type": "markdown", - "id": "10f333eb", + "id": "61afa7bd", "metadata": {}, "source": [ "### Model Description\n", @@ -121,7 +121,7 @@ { "cell_type": "code", "execution_count": null, - "id": "288c4bed", + "id": "b070a0d3", "metadata": {}, "outputs": [], "source": [ diff --git a/assets/hub/pytorch_vision_squeezenet.ipynb b/assets/hub/pytorch_vision_squeezenet.ipynb index 1f3178a8d53b..8eeee053cea2 100644 --- a/assets/hub/pytorch_vision_squeezenet.ipynb +++ b/assets/hub/pytorch_vision_squeezenet.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "990b17b6", + "id": "e3cf10c0", "metadata": {}, "source": [ "### This notebook is optionally accelerated with a GPU runtime.\n", @@ -22,7 +22,7 @@ { "cell_type": "code", "execution_count": null, - "id": "6eec995d", + "id": "6b0f04f5", "metadata": {}, "outputs": [], "source": [ @@ -35,7 +35,7 @@ }, { "cell_type": "markdown", - "id": "ef7589d3", + "id": "14301e0d", "metadata": {}, "source": [ "All pre-trained models expect input images normalized in the same way,\n", @@ -49,7 +49,7 @@ { "cell_type": "code", "execution_count": null, - "id": "d455b779", + "id": "bd13e568", "metadata": {}, "outputs": [], "source": [ @@ -63,7 +63,7 @@ { "cell_type": "code", "execution_count": null, - "id": "d382dc58", + "id": "18675fde", "metadata": {}, "outputs": [], "source": [ @@ -97,7 +97,7 @@ { "cell_type": "code", "execution_count": null, - "id": "1172d23b", + "id": "d4329710", "metadata": {}, "outputs": [], "source": [ @@ -108,7 +108,7 @@ { "cell_type": "code", "execution_count": null, - "id": "a19b4f86", + "id": "4755ebbf", "metadata": {}, "outputs": [], "source": [ @@ -123,7 +123,7 @@ }, { "cell_type": "markdown", - "id": "cdab898c", + "id": "7b7d6c59", "metadata": {}, "source": [ "### Model Description\n", diff --git a/assets/hub/pytorch_vision_vgg.ipynb b/assets/hub/pytorch_vision_vgg.ipynb index 1f7567aa42a5..94082489f2d8 100644 --- a/assets/hub/pytorch_vision_vgg.ipynb +++ b/assets/hub/pytorch_vision_vgg.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "b7bf779d", + "id": "24dbd7fd", "metadata": {}, "source": [ "### This notebook is optionally accelerated with a GPU runtime.\n", @@ -22,7 +22,7 @@ { "cell_type": "code", "execution_count": null, - "id": "cf95fa3b", + "id": "0f90b7a9", "metadata": {}, "outputs": [], "source": [ @@ -41,7 +41,7 @@ }, { "cell_type": "markdown", - "id": "a09fce9d", + "id": "94461774", "metadata": {}, "source": [ "All pre-trained models expect input images normalized in the same way,\n", @@ -55,7 +55,7 @@ { "cell_type": "code", "execution_count": null, - "id": "e4cc1824", + "id": "cb9c4d28", "metadata": {}, "outputs": [], "source": [ @@ -69,7 +69,7 @@ { "cell_type": "code", "execution_count": null, - "id": "6d1cf177", + "id": "624987be", "metadata": {}, "outputs": [], "source": [ @@ -103,7 +103,7 @@ 
{ "cell_type": "code", "execution_count": null, - "id": "497e676e", + "id": "3b8897e4", "metadata": {}, "outputs": [], "source": [ @@ -114,7 +114,7 @@ { "cell_type": "code", "execution_count": null, - "id": "14d11348", + "id": "5d69095a", "metadata": {}, "outputs": [], "source": [ @@ -129,7 +129,7 @@ }, { "cell_type": "markdown", - "id": "e61007b6", + "id": "03f74824", "metadata": {}, "source": [ "### Model Description\n", diff --git a/assets/hub/pytorch_vision_wide_resnet.ipynb b/assets/hub/pytorch_vision_wide_resnet.ipynb index 542a219382ab..cf161c963d87 100644 --- a/assets/hub/pytorch_vision_wide_resnet.ipynb +++ b/assets/hub/pytorch_vision_wide_resnet.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "2d21a3e8", + "id": "d366fb24", "metadata": {}, "source": [ "### This notebook is optionally accelerated with a GPU runtime.\n", @@ -22,7 +22,7 @@ { "cell_type": "code", "execution_count": null, - "id": "ad9c5b1e", + "id": "d1a86f6e", "metadata": {}, "outputs": [], "source": [ @@ -36,7 +36,7 @@ }, { "cell_type": "markdown", - "id": "051c5f23", + "id": "62fc0b0b", "metadata": {}, "source": [ "All pre-trained models expect input images normalized in the same way,\n", @@ -50,7 +50,7 @@ { "cell_type": "code", "execution_count": null, - "id": "7c5780de", + "id": "5edc1040", "metadata": {}, "outputs": [], "source": [ @@ -64,7 +64,7 @@ { "cell_type": "code", "execution_count": null, - "id": "e42019aa", + "id": "a86c792c", "metadata": {}, "outputs": [], "source": [ @@ -98,7 +98,7 @@ { "cell_type": "code", "execution_count": null, - "id": "ff656268", + "id": "f09c00c9", "metadata": {}, "outputs": [], "source": [ @@ -109,7 +109,7 @@ { "cell_type": "code", "execution_count": null, - "id": "d739c287", + "id": "4f257dba", "metadata": {}, "outputs": [], "source": [ @@ -124,7 +124,7 @@ }, { "cell_type": "markdown", - "id": "4c073a20", + "id": "52ade214", "metadata": {}, "source": [ "### Model Description\n", diff --git a/assets/hub/sigsep_open-unmix-pytorch_umx.ipynb b/assets/hub/sigsep_open-unmix-pytorch_umx.ipynb index 110765165d8d..8b4ceede5f16 100644 --- a/assets/hub/sigsep_open-unmix-pytorch_umx.ipynb +++ b/assets/hub/sigsep_open-unmix-pytorch_umx.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "b82b10f0", + "id": "6a1a65cd", "metadata": {}, "source": [ "### This notebook is optionally accelerated with a GPU runtime.\n", @@ -22,7 +22,7 @@ { "cell_type": "code", "execution_count": null, - "id": "44aa00e6", + "id": "f9790a9b", "metadata": {}, "outputs": [], "source": [ @@ -34,7 +34,7 @@ { "cell_type": "code", "execution_count": null, - "id": "7649dafc", + "id": "47fe7b1f", "metadata": {}, "outputs": [], "source": [ @@ -59,7 +59,7 @@ }, { "cell_type": "markdown", - "id": "12bb0cfc", + "id": "1982e29b", "metadata": {}, "source": [ "### Model Description\n", @@ -94,7 +94,7 @@ { "cell_type": "code", "execution_count": null, - "id": "2ce82f72", + "id": "fb5c980f", "metadata": {}, "outputs": [], "source": [ @@ -104,7 +104,7 @@ }, { "cell_type": "markdown", - "id": "138c8662", + "id": "55c65701", "metadata": {}, "source": [ "### References\n", diff --git a/assets/hub/simplenet.ipynb b/assets/hub/simplenet.ipynb index 3fef62c5553f..892b473529f1 100644 --- a/assets/hub/simplenet.ipynb +++ b/assets/hub/simplenet.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "40c42022", + "id": "b2a471c2", "metadata": {}, "source": [ "### This notebook is optionally accelerated with a GPU runtime.\n", @@ -22,7 +22,7 @@ { "cell_type": "code", "execution_count": null, - 
"id": "99af1e5b", + "id": "ad86a684", "metadata": {}, "outputs": [], "source": [ @@ -41,7 +41,7 @@ }, { "cell_type": "markdown", - "id": "fef7ea73", + "id": "d7daf7cb", "metadata": {}, "source": [ "All pre-trained models expect input images normalized in the same way,\n", @@ -55,7 +55,7 @@ { "cell_type": "code", "execution_count": null, - "id": "d8df08b5", + "id": "cc0ab575", "metadata": {}, "outputs": [], "source": [ @@ -69,7 +69,7 @@ { "cell_type": "code", "execution_count": null, - "id": "efd8576d", + "id": "a6a16cfb", "metadata": {}, "outputs": [], "source": [ @@ -103,7 +103,7 @@ { "cell_type": "code", "execution_count": null, - "id": "88d04e45", + "id": "ee1319ea", "metadata": {}, "outputs": [], "source": [ @@ -114,7 +114,7 @@ { "cell_type": "code", "execution_count": null, - "id": "fd2932fd", + "id": "daebb731", "metadata": {}, "outputs": [], "source": [ @@ -129,7 +129,7 @@ }, { "cell_type": "markdown", - "id": "10c955ff", + "id": "c79fc11b", "metadata": {}, "source": [ "### Model Description\n", diff --git a/assets/hub/snakers4_silero-models_stt.ipynb b/assets/hub/snakers4_silero-models_stt.ipynb index c04b82edd7ec..c93ee85be423 100644 --- a/assets/hub/snakers4_silero-models_stt.ipynb +++ b/assets/hub/snakers4_silero-models_stt.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "860cb313", + "id": "eb575053", "metadata": {}, "source": [ "### This notebook is optionally accelerated with a GPU runtime.\n", @@ -24,7 +24,7 @@ { "cell_type": "code", "execution_count": null, - "id": "ed8ffa2a", + "id": "3a9d262f", "metadata": {}, "outputs": [], "source": [ @@ -36,7 +36,7 @@ { "cell_type": "code", "execution_count": null, - "id": "ed535a3f", + "id": "8dd1d901", "metadata": {}, "outputs": [], "source": [ @@ -69,7 +69,7 @@ }, { "cell_type": "markdown", - "id": "b82b835c", + "id": "50caba0b", "metadata": {}, "source": [ "### Model Description\n", diff --git a/assets/hub/snakers4_silero-models_tts.ipynb b/assets/hub/snakers4_silero-models_tts.ipynb index 685de7507119..5a7178430ac3 100644 --- a/assets/hub/snakers4_silero-models_tts.ipynb +++ b/assets/hub/snakers4_silero-models_tts.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "f0a6d820", + "id": "0db6016b", "metadata": {}, "source": [ "### This notebook is optionally accelerated with a GPU runtime.\n", @@ -20,7 +20,7 @@ { "cell_type": "code", "execution_count": null, - "id": "223e020b", + "id": "e643ea69", "metadata": {}, "outputs": [], "source": [ @@ -32,7 +32,7 @@ { "cell_type": "code", "execution_count": null, - "id": "1ca8e914", + "id": "1b0fffba", "metadata": {}, "outputs": [], "source": [ @@ -55,7 +55,7 @@ }, { "cell_type": "markdown", - "id": "037a5256", + "id": "d1a169dc", "metadata": {}, "source": [ "### Model Description\n", diff --git a/assets/hub/snakers4_silero-vad_vad.ipynb b/assets/hub/snakers4_silero-vad_vad.ipynb index db0eeb3b2fa5..83ec083b978d 100644 --- a/assets/hub/snakers4_silero-vad_vad.ipynb +++ b/assets/hub/snakers4_silero-vad_vad.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "bb0fd16b", + "id": "a4268e07", "metadata": {}, "source": [ "### This notebook is optionally accelerated with a GPU runtime.\n", @@ -22,7 +22,7 @@ { "cell_type": "code", "execution_count": null, - "id": "212d3342", + "id": "e073be9d", "metadata": {}, "outputs": [], "source": [ @@ -34,7 +34,7 @@ { "cell_type": "code", "execution_count": null, - "id": "27ef74aa", + "id": "8adf2b00", "metadata": {}, "outputs": [], "source": [ @@ -63,7 +63,7 @@ }, { "cell_type": "markdown", - "id": 
"965c01c1", + "id": "6963bff7", "metadata": {}, "source": [ "### Model Description\n", diff --git a/assets/hub/ultralytics_yolov5.ipynb b/assets/hub/ultralytics_yolov5.ipynb index 873a64ac9276..fa39fb4d897c 100644 --- a/assets/hub/ultralytics_yolov5.ipynb +++ b/assets/hub/ultralytics_yolov5.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "a0180524", + "id": "1c32e755", "metadata": {}, "source": [ "### This notebook is optionally accelerated with a GPU runtime.\n", @@ -29,7 +29,7 @@ { "cell_type": "code", "execution_count": null, - "id": "99c50335", + "id": "1ef68660", "metadata": {}, "outputs": [], "source": [ @@ -39,7 +39,7 @@ }, { "cell_type": "markdown", - "id": "3209a0a5", + "id": "953b653d", "metadata": {}, "source": [ "## Model Description\n", @@ -82,7 +82,7 @@ { "cell_type": "code", "execution_count": null, - "id": "2c778684", + "id": "c3dd5584", "metadata": {}, "outputs": [], "source": [ @@ -112,7 +112,7 @@ }, { "cell_type": "markdown", - "id": "fc6c1715", + "id": "d328eadf", "metadata": {}, "source": [ "## Citation\n", @@ -125,7 +125,7 @@ { "cell_type": "code", "execution_count": null, - "id": "a4d290d8", + "id": "14ad32c4", "metadata": { "attributes": { "classes": [ @@ -150,7 +150,7 @@ }, { "cell_type": "markdown", - "id": "2845b328", + "id": "9329814d", "metadata": {}, "source": [ "## Contact\n", diff --git a/case_studies/amazon-ads.html b/case_studies/amazon-ads.html index afb1691e229c..19fe8366a880 100644 --- a/case_studies/amazon-ads.html +++ b/case_studies/amazon-ads.html @@ -310,7 +310,7 @@
November 07, 2024
+November 08, 2024
November 07, 2024
+November 08, 2024
November 07, 2024
+November 08, 2024
PFRL is a deep reinforcement learning library that implements various state-of-the-art deep reinforcement learning algorithms in Python using PyTorch.
+PyTorch Geometric is a library for deep learning on irregular input data such as graphs, point clouds, and manifolds.
A complete and open-sourced solution for injecting domain-specific knowledge into pre-trained LLM.
+ML Prediction, Planning and Simulation for Self-Driving built on PyTorch.
Ray is a fast and simple framework for building and running distributed applications.
+TorchOpt is a PyTorch-based library for efficient differentiable optimization.
A modular framework for vision & language multimodal research from Facebook AI Research (FAIR).
+Determined is a platform that helps deep learning teams train models more quickly, easily share GPU resources, and effectively collaborate.
depyf is a tool to help users understand and adapt to the PyTorch compiler torch.compile.
+A complete and open-sourced solution for injecting domain-specific knowledge into pre-trained LLM.
Intel® Neural Compressor provides unified APIs for network compression technologies for faster inference
+AllenNLP is an open-source research library built on PyTorch for designing and evaluating deep learning models for NLP.
NeMo: a toolkit for conversational AI.
+Flower - A Friendly Federated Learning Framework
A PyTorch framework for deep learning on point clouds.
+A deep learning library for video understanding research. Hosts various video-focused models, datasets, training pipelines and more.
baal (bayesian active learning) aims to implement active learning using metrics of uncertainty derived from approximations of bayesian posteriors in neural networks.
+Lightly is a computer vision framework for self-supervised learning.
Determined is a platform that helps deep learning teams train models more quickly, easily share GPU resources, and effectively collaborate.
+Kornia is a differentiable computer vision library that consists of a set of routines and differentiable modules to solve generic CV problems.
MONAI provides domain-optimized foundational capabilities for developing healthcare imaging training workflows.
+PennyLane is a library for quantum ML, automatic differentiation, and optimization of hybrid quantum-classical computations.
The easiest way to use deep metric learning in your application. Modular, flexible, and extensible.
+TorchIO is a set of tools to efficiently read, preprocess, sample, augment, and write 3D medical images in deep learning applications written in PyTorch.
AllenNLP is an open-source research library built on PyTorch for designing and evaluating deep learning models for NLP.
+Detectron2 is FAIR's next-generation platform for object detection and segmentation.
State-of-the-art Natural Language Processing for PyTorch.
+MONAI provides domain-optimized foundational capabilities for developing healthcare imaging training workflows.
BoTorch is a library for Bayesian Optimization. It provides a modular, extensible interface for composing Bayesian optimization primitives.
+Data-centric declarative deep learning framework
A Python package for improving PyTorch performance on Intel platforms
+Substra is a federated learning Python library to run federated learning experiments at scale on real distributed data.
DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective.
+Colossal-AI is a Unified Deep Learning System for Big Model Era
Lightly is a computer vision framework for self-supervised learning.
+Renate is a library providing tools for re-training pytorch models over time as new data becomes available.
A lightweight declarative PyTorch wrapper for context switching between devices, distributed modes, mixed-precision, and PyTorch extensions.
+baal (bayesian active learning) aims to implement active learning using metrics of uncertainty derived from approximations of bayesian posteriors in neural networks.
skorch is a high-level library for PyTorch that provides full scikit-learn compatibility.
+ClearML is a full system ML / DL experiment manager, versioning and ML-Ops solution.
The PopTorch interface library is a simple wrapper for running PyTorch programs directly on Graphcore IPUs.
+PyTorch/TorchScript compiler for NVIDIA GPUs using TensorRT
higher is a library which facilitates the implementation of arbitrarily complex gradient-based meta-learning algorithms and nested optimisation loops with near-vanilla PyTorch.
+FairScale is a PyTorch extension library for high performance and large scale training on one or multiple machines/nodes.
library of algorithms to speed up neural network training
+A generalizable application framework for segmentation, regression, and classification using PyTorch
Ignite is a high-level library for training neural networks in PyTorch. It helps with writing compact, but full-featured training loops.
+ParlAI is a unified platform for sharing, training, and evaluating dialog models across many tasks.
PennyLane is a library for quantum ML, automatic differentiation, and optimization of hybrid quantum-classical computations.
+Fast and extensible image augmentation library for different CV tasks like classification, segmentation, object detection and pose estimation.
CrypTen is a framework for Privacy Preserving ML. Its goal is to make secure computing techniques accessible to ML practitioners.
+DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective.
ClearML is a full system ML / DL experiment manager, versioning and ML-Ops solution.
+CrypTen is a framework for Privacy Preserving ML. Its goal is to make secure computing techniques accessible to ML practitioners.
An open source framework for deep learning on satellite and aerial imagery.
+Deep Graph Library (DGL) is a Python package built for easy implementation of graph neural network model family, on top of PyTorch and other frameworks.
SimulAI is a toolkit with pipelines for physics-informed machine learning.
+A library for state-of-the-art self-supervised learning
Detectron2 is FAIR's next-generation platform for object detection and segmentation.
+RoMa is a standalone library to handle rotation representations with PyTorch (rotation matrices, quaternions, rotation vectors, etc). It aims for robustness, ease-of-use, and efficiency.
Train PyTorch models with Differential Privacy
+Poutyne is a Keras-like framework for PyTorch and handles much of the boilerplating code needed to train neural networks.
A framework for elegantly configuring complex applications.
+ONNX Runtime is a cross-platform inferencing and training accelerator.
ML Prediction, Planning and Simulation for Self-Driving built on PyTorch.
+TorchDrift is a data and concept drift library for PyTorch. It lets you monitor your PyTorch models to see if they operate within spec.
PyPose is a robotics-oriented, PyTorch-based library that combines deep perceptual models with physics-based optimization techniques, so that users can focus on their novel applications.
+Datasets, transforms, and models for geospatial data
The Unified Machine Learning Framework
+depyf is a tool to help users understand and adapt to the PyTorch compiler torch.compile.
TorchOpt is a PyTorch-based library for efficient differentiable optimization.
+Forte is a toolkit for building NLP pipelines featuring composable components, convenient data interfaces, and cross-task interaction.
PyTorch Geometric is a library for deep learning on irregular input data such as graphs, point clouds, and manifolds.
+An open source hyperparameter optimization framework to automate hyperparameter search.
Flower - A Friendly Federated Learning Framework
+Flair is a very simple framework for state-of-the-art natural language processing (NLP).
Horovod is a distributed training library for deep learning frameworks. Horovod aims to make distributed DL fast and easy to use.
+PyTorch Lightning is a Keras-like ML library for PyTorch. It leaves core training and validation logic to you and automates the rest.
Minimalist Neural Machine Translation toolkit for educational purposes
+PyTorch Geometric Temporal is a temporal (dynamic) extension library for PyTorch Geometric.
PyKale is a PyTorch library for multimodal learning and transfer learning with deep learning and dimensionality reduction on graphs, images, texts, and videos.
+Train PyTorch models with Differential Privacy
Basic Utilities for PyTorch Natural Language Processing (NLP).
+A unified ensemble framework for PyTorch to improve the performance and robustness of your deep learning model.
TorchDrift is a data and concept drift library for PyTorch. It lets you monitor your PyTorch models to see if they operate within spec.
+Avalanche: an End-to-End Library for Continual Learning
A generalizable application framework for segmentation, regression, and classification using PyTorch
+Horovod is a distributed training library for deep learning frameworks. Horovod aims to make distributed DL fast and easy to use.
Catalyst helps you write compact, but full-featured deep learning and reinforcement learning pipelines with a few lines of code.
+A Python package for improving PyTorch performance on Intel platforms
TensorLy is a high level API for tensor methods and deep tensorized neural networks in Python that aims to make tensor learning simple.
+SimulAI is a toolkit with pipelines for physics-informed machine learning.
A PyTorch-based knowledge distillation toolkit for natural language processing
+AdaptDL is a resource-adaptive deep learning training and scheduling framework.
Substra is a federated learning Python library to run federated learning experiments at scale on real distributed data.
+NeMo: a toolkit for conversational AI.
PyTorch/TorchScript compiler for NVIDIA GPUs using TensorRT
+PyTorch3D provides efficient, reusable components for 3D Computer Vision research with PyTorch.
A library for state-of-the-art self-supervised learning
+A powerful and flexible machine learning platform for drug discovery.
A powerful and flexible machine learning platform for drug discovery.
+PySyft is a Python library for encrypted, privacy preserving deep learning.
A toolbox for adversarial robustness research. It contains modules for generating adversarial examples and defending against attacks.
+torchdistill is a coding-free framework built on PyTorch for reproducible deep learning and knowledge distillation studies.
Framework for reproducible classification of Alzheimer's Disease
+Ray is a fast and simple framework for building and running distributed applications.
Diffusers provides pretrained diffusion models across multiple modalities, such as vision and audio, and serves as a modular toolbox for inference and training of diffusion models.
+The Unified Machine Learning Framework
An open source hyperparameter optimization framework to automate hyperparameter search.
+Polyaxon is a platform for building, training, and monitoring large-scale deep learning applications.
TorchIO is a set of tools to efficiently read, preprocess, sample, augment, and write 3D medical images in deep learning applications written in PyTorch.
FuseMedML is a Python framework accelerating ML-based discovery in the medical field by encouraging code reuse
Deep Graph Library (DGL) is a Python package built for easy implementation of graph neural network model family, on top of PyTorch and other frameworks.
TorchQuantum is a quantum-classical simulation framework based on PyTorch. It supports statevector simulation, density matrix simulation, and pulse simulation on different hardware platforms such as CPUs and GPUs.
🚀 A simple way to train and use PyTorch models with multi-GPU, TPU, mixed-precision
+Diffusers provides pretrained diffusion models across multiple modalities, such as vision and audio, and serves as a modular toolbox for inference and training of diffusion models.
Pipeline Abstractions for Deep Learning in PyTorch
+PyPose is a robotics-oriented, PyTorch-based library that combines deep perceptual models with physics-based optimization techniques, so that users can focus on their novel applications.
FairScale is a PyTorch extension library for high performance and large scale training on one or multiple machines/nodes.
+A runtime fault injection tool for PyTorch.
AdaptDL is a resource-adaptive deep learning training and scheduling framework.
+Pyro is a universal probabilistic programming language (PPL) written in Python and supported by PyTorch on the backend.
Kornia is a differentiable computer vision library that consists of a set of routines and differentiable modules to solve generic CV problems.
+BoTorch is a library for Bayesian Optimization. It provides a modular, extensible interface for composing Bayesian optimization primitives.
PyTorch3D provides efficient, reusable components for 3D Computer Vision research with PyTorch.
+State-of-the-art Natural Language Processing for PyTorch.
ONNX Runtime is a cross-platform inferencing and training accelerator.
+octoml-profile is a python library and cloud service designed to provide a simple experience for assessing and optimizing the performance of PyTorch models.
octoml-profile is a python library and cloud service designed to provide a simple experience for assessing and optimizing the performance of PyTorch models.
+A PyTorch framework for deep learning on point clouds.
TIAToolbox provides an easy-to-use API where researchers can use, adapt and create models for CPath.
+Machine learning metrics for distributed, scalable PyTorch applications.
Stable Baselines3 (SB3) is a set of reliable implementations of reinforcement learning algorithms in PyTorch.
+USB is a Pytorch-based Python package for Semi-Supervised Learning (SSL). It is easy-to-use/extend, affordable to small groups, and comprehensive for developing and evaluating SSL algorithms.
Flexible and powerful tensor operations for readable and reliable code.
+Basic Utilities for PyTorch Natural Language Processing (NLP).
GPyTorch is a Gaussian process library implemented using PyTorch, designed for creating scalable, flexible Gaussian process models.
+PFRL is a deep reinforcement learning library that implements various state-of-the-art deep reinforcement learning algorithms in Python using PyTorch.
torchdistill is a coding-free framework built on PyTorch for reproducible deep learning and knowledge distillation studies.
+The easiest way to use deep metric learning in your application. Modular, flexible, and extensible.
A Python toolbox for data mining on Partially-Observed Time Series (POTS) that helps engineers focus more on the core problems rather than the missing parts in their data.
+fastai is a library that simplifies training fast and accurate neural nets using modern best practices.
Polyaxon is a platform for building, training, and monitoring large-scale deep learning applications.
+GPyTorch is a Gaussian process library implemented using PyTorch, designed for creating scalable, flexible Gaussian process models.
pystiche is a framework for Neural Style Transfer (NST) built upon PyTorch.
+Captum (“comprehension” in Latin) is an open source, extensible library for model interpretability built on PyTorch.
Colossal-AI is a Unified Deep Learning System for Big Model Era
+The PopTorch interface library is a simple wrapper for running PyTorch programs directly on Graphcore IPUs.
PyTorch Lightning is a Keras-like ML library for PyTorch. It leaves core training and validation logic to you and automates the rest.
+skorch is a high-level library for PyTorch that provides full scikit-learn compatibility.
PyTorch Geometric Temporal is a temporal (dynamic) extension library for PyTorch Geometric.
+A framework for elegantly configuring complex applications.
Avalanche: an End-to-End Library for Continual Learning
+Framework for reproducible classification of Alzheimer's Disease
Hummingbird compiles trained ML models into tensor computation for faster inference.
+A Python toolbox for data mining on Partially-Observed Time Series (POTS) that helps engineers focus more on the core problems rather than the missing parts in their data.
fastai is a library that simplifies training fast and accurate neural nets using modern best practices.
+A lightweight declarative PyTorch wrapper for context switching between devices, distributed modes, mixed-precision, and PyTorch extensions.
Renate is a library providing tools for re-training PyTorch models over time as new data becomes available.
+TIAToolbox provides an easy-to-use API where researchers can use, adapt, and create models for computational pathology (CPath).
Datasets, transforms, and models for geospatial data
+Stable Baselines3 (SB3) is a set of reliable implementations of reinforcement learning algorithms in PyTorch.
FuseMedML is a Python framework that accelerates ML-based discovery in the medical field by encouraging code reuse.
+A PyTorch-based knowledge distillation toolkit for natural language processing
Fast and extensible image augmentation library for different CV tasks like classification, segmentation, object detection and pose estimation.
+Catalyst helps you write compact, but full-featured deep learning and reinforcement learning pipelines with a few lines of code.
PySyft is a Python library for encrypted, privacy-preserving deep learning.
+Minimalist Neural Machine Translation toolkit for educational purposes
Forte is a toolkit for building NLP pipelines featuring composable components, convenient data interfaces, and cross-task interaction.
+🚀 A simple way to train and use PyTorch models with multi-GPU, TPU, mixed-precision
RoMa is a standalone library to handle rotation representations with PyTorch (rotation matrices, quaternions, rotation vectors, etc). It aims for robustness, ease-of-use, and efficiency.
+A modular framework for vision & language multimodal research from Facebook AI Research (FAIR).
Captum (“comprehension” in Latin) is an open source, extensible library for model interpretability built on PyTorch.
+pystiche is a framework for Neural Style Transfer (NST) built upon PyTorch.
TorchQuantum is a quantum-classical simulation framework based on PyTorch. It supports statevector, density matrix, and pulse simulation on different hardware platforms such as CPUs and GPUs.
+A library of algorithms to speed up neural network training.
A unified ensemble framework for PyTorch to improve the performance and robustness of your deep learning model.
+TensorLy is a high level API for tensor methods and deep tensorized neural networks in Python that aims to make tensor learning simple.
USB is a PyTorch-based Python package for Semi-Supervised Learning (SSL). It is easy to use and extend, affordable for small groups, and comprehensive for developing and evaluating SSL algorithms.
+Flexible and powerful tensor operations for readable and reliable code.
Pyro is a universal probabilistic programming language (PPL) written in Python and supported by PyTorch on the backend.
+A toolbox for adversarial robustness research. It contains modules for generating adversarial examples and defending against attacks.
Data-centric declarative deep learning framework
+Ignite is a high-level library for training neural networks in PyTorch. It helps with writing compact, but full-featured training loops.
ParlAI is a unified platform for sharing, training, and evaluating dialog models across many tasks.
+Intel® Neural Compressor provides unified APIs for network compression technologies to speed up inference.
A deep learning library for video understanding research. Hosts various video-focused models, datasets, training pipelines and more.
+Pipeline Abstractions for Deep Learning in PyTorch
Flair is a very simple framework for state-of-the-art natural language processing (NLP).
+Hummingbird compiles trained ML models into tensor computation for faster inference.
Machine learning metrics for distributed, scalable PyTorch applications.
+OpenMMLab covers a wide range of computer vision research topics including classification, detection, segmentation, and super-resolution.
Poutyne is a Keras-like framework for PyTorch that handles much of the boilerplate code needed to train neural networks.
+PyKale is a PyTorch library for multimodal learning and transfer learning with deep learning and dimensionality reduction on graphs, images, texts, and videos.
OpenMMLab covers a wide range of computer vision research topics including classification, detection, segmentation, and super-resolution.
+An open source framework for deep learning on satellite and aerial imagery.
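To make the division of labor described in the PyTorch Lightning entry above more concrete, here is a minimal sketch (not part of the original listing): you implement the model, `training_step`, and optimizer, while `Trainer` supplies the loop, device placement, and logging. The module, class, and dataloader names below are illustrative placeholders, not anything referenced by this catalog.

```python
import torch
from torch import nn
import pytorch_lightning as pl

class LitClassifier(pl.LightningModule):
    """Toy classifier: Lightning only asks for the model-specific pieces."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

    def training_step(self, batch, batch_idx):
        # You write the per-batch logic; Lightning runs the loop around it.
        x, y = batch
        return nn.functional.cross_entropy(self.net(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

# trainer = pl.Trainer(max_epochs=1)  # Trainer automates devices, checkpointing, logging
# trainer.fit(LitClassifier(), train_dataloaders=train_loader)  # train_loader: placeholder DataLoader
```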
According to statistics from MIT Sloan, 75% of top executives believe AI will help their organizations grow and gain a competitive edge. Since 2020, there has been a 14X increase in the number of active AI startups, and venture capitalist-funded startups have increased by 6X. The PwC Global Artificial Intelligence Study indicates that AI has the potential to contribute $15.7 trillion to the global economy by 2030, with 45% of the total economic gains coming from product enhancements that stimulate consumer demand.
By joining the PyTorch Foundation, you can help build and shape the future of end-to-end machine learning frameworks alongside your industry peers. PyTorch offers a user-friendly front-end, distributed training, and an ecosystem of tools and libraries that enable fast, flexible experimentation and efficient production.
As a member of the PyTorch Foundation, you'll have access to resources that allow you to be stewards of stable, secure, and long-lasting codebases. You can collaborate on training, local and regional events, open-source developer tooling, academic research, and guides to help new users and contributors have a productive experience.