
Commit

Deploy to GitHub Pages on master [ci skip]
facebook-circleci-bot committed Nov 8, 2024
1 parent 010dcd2 commit bc7d112
Showing 58 changed files with 986 additions and 962 deletions.
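Every change in this commit is the same mechanical edit: each notebook cell's random `id` is swapped for a fresh one, the signature of a build step that regenerates the `.ipynb` files from source on every deploy (an inference from the diff below, not something the commit message states). A minimal stdlib-only sketch of minting such 8-character hex ids — the exact generator pytorch.org's pipeline uses is an assumption here:

```python
import uuid

def fresh_cell_id() -> str:
    # 8 lowercase hex characters, matching the id style in this diff
    # ("a03f3c27" -> "7461d39d"); illustrative, not the pipeline's actual code.
    return uuid.uuid4().hex[:8]

def regenerate_ids(notebook: dict) -> dict:
    # Give every cell a brand-new id, leaving all other fields untouched.
    for cell in notebook["cells"]:
        cell["id"] = fresh_cell_id()
    return notebook

nb = {"cells": [{"cell_type": "markdown", "id": "a03f3c27",
                 "metadata": {}, "source": ["## Model Description\n"]}]}
regenerate_ids(nb)
```

Because the ids are random, two otherwise-identical builds diff on every cell-id line — exactly the repeated N-additions/N-deletions pattern across the 58 files here.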
12 changes: 6 additions & 6 deletions assets/hub/datvuthanh_hybridnets.ipynb
@@ -2,7 +2,7 @@
  "cells": [
  {
   "cell_type": "markdown",
-  "id": "a03f3c27",
+  "id": "7461d39d",
   "metadata": {},
   "source": [
    "### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -24,7 +24,7 @@
  {
   "cell_type": "code",
   "execution_count": null,
-  "id": "c53374d0",
+  "id": "642e295f",
   "metadata": {},
   "outputs": [],
   "source": [
@@ -34,7 +34,7 @@
  },
  {
   "cell_type": "markdown",
-  "id": "187d33f9",
+  "id": "51c2ea62",
   "metadata": {},
   "source": [
    "## Model Description\n",
@@ -93,7 +93,7 @@
  {
   "cell_type": "code",
   "execution_count": null,
-  "id": "fe220c4b",
+  "id": "bda8c68e",
   "metadata": {},
   "outputs": [],
   "source": [
@@ -109,7 +109,7 @@
  },
  {
   "cell_type": "markdown",
-  "id": "e86df1d3",
+  "id": "468d5dda",
   "metadata": {},
   "source": [
    "### Citation\n",
@@ -120,7 +120,7 @@
  {
   "cell_type": "code",
   "execution_count": null,
-  "id": "ae665090",
+  "id": "7159f81c",
   "metadata": {
    "attributes": {
     "classes": [
12 changes: 6 additions & 6 deletions assets/hub/facebookresearch_WSL-Images_resnext.ipynb
@@ -2,7 +2,7 @@
  "cells": [
  {
   "cell_type": "markdown",
-  "id": "7f6bc74a",
+  "id": "b80018b5",
   "metadata": {},
   "source": [
    "### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
  {
   "cell_type": "code",
   "execution_count": null,
-  "id": "d7c1aa68",
+  "id": "774b6486",
   "metadata": {},
   "outputs": [],
   "source": [
@@ -39,7 +39,7 @@
  },
  {
   "cell_type": "markdown",
-  "id": "1be946c9",
+  "id": "047a209f",
   "metadata": {},
   "source": [
    "All pre-trained models expect input images normalized in the same way,\n",
@@ -53,7 +53,7 @@
  {
   "cell_type": "code",
   "execution_count": null,
-  "id": "a2cc7b85",
+  "id": "a5d58f1d",
   "metadata": {},
   "outputs": [],
   "source": [
@@ -67,7 +67,7 @@
  {
   "cell_type": "code",
   "execution_count": null,
-  "id": "46f1c52b",
+  "id": "dead0b79",
   "metadata": {},
   "outputs": [],
   "source": [
@@ -99,7 +99,7 @@
  },
  {
   "cell_type": "markdown",
-  "id": "b597bf8a",
+  "id": "f9fac4b3",
   "metadata": {},
   "source": [
    "### Model Description\n",
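The WSL-Images hunks above keep the hub page's note that all pre-trained models expect input images normalized in the same way. The full recipe is collapsed in this diff, but the standard ImageNet normalization (mean/std values assumed from the usual torchvision recipe, not read from this commit) can be sketched per pixel with no torch dependency:

```python
# Standard ImageNet per-channel statistics (assumed, not visible in this diff).
IMAGENET_MEAN = [0.485, 0.456, 0.406]
IMAGENET_STD = [0.229, 0.224, 0.225]

def normalize_pixel(rgb):
    """Scale 0-255 RGB to [0, 1], then apply per-channel mean/std normalization."""
    return [((c / 255.0) - m) / s
            for c, m, s in zip(rgb, IMAGENET_MEAN, IMAGENET_STD)]

normalize_pixel((124, 116, 104))  # roughly the ImageNet mean pixel -> near zero
```

In the real notebooks this is applied tensor-wide (e.g. via `torchvision.transforms.Normalize`); the per-pixel form above just makes the arithmetic explicit.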
10 changes: 5 additions & 5 deletions assets/hub/facebookresearch_pytorch-gan-zoo_dcgan.ipynb
@@ -2,7 +2,7 @@
  "cells": [
  {
   "cell_type": "markdown",
-  "id": "eb6ab35a",
+  "id": "7876bcf2",
   "metadata": {},
   "source": [
    "### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
  {
   "cell_type": "code",
   "execution_count": null,
-  "id": "69a21a14",
+  "id": "c13b03dd",
   "metadata": {},
   "outputs": [],
   "source": [
@@ -34,7 +34,7 @@
  },
  {
   "cell_type": "markdown",
-  "id": "ec84b9cd",
+  "id": "15f8d6ce",
   "metadata": {},
   "source": [
    "The input to the model is a noise vector of shape `(N, 120)` where `N` is the number of images to be generated.\n",
@@ -45,7 +45,7 @@
  {
   "cell_type": "code",
   "execution_count": null,
-  "id": "33e4b4ca",
+  "id": "7f0dc223",
   "metadata": {},
   "outputs": [],
   "source": [
@@ -63,7 +63,7 @@
  },
  {
   "cell_type": "markdown",
-  "id": "01fe5ab4",
+  "id": "dc33614e",
   "metadata": {},
   "source": [
    "You should see an image similar to the one on the left.\n",
10 changes: 5 additions & 5 deletions assets/hub/facebookresearch_pytorch-gan-zoo_pgan.ipynb
@@ -2,7 +2,7 @@
  "cells": [
  {
   "cell_type": "markdown",
-  "id": "4f6f6eee",
+  "id": "0288a894",
   "metadata": {},
   "source": [
    "### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -24,7 +24,7 @@
  {
   "cell_type": "code",
   "execution_count": null,
-  "id": "902dd745",
+  "id": "bbb7654f",
   "metadata": {},
   "outputs": [],
   "source": [
@@ -44,7 +44,7 @@
  },
  {
   "cell_type": "markdown",
-  "id": "51481d60",
+  "id": "4a1157ab",
   "metadata": {},
   "source": [
    "The input to the model is a noise vector of shape `(N, 512)` where `N` is the number of images to be generated.\n",
@@ -55,7 +55,7 @@
  {
   "cell_type": "code",
   "execution_count": null,
-  "id": "de9b54ff",
+  "id": "cdacbf60",
   "metadata": {},
   "outputs": [],
   "source": [
@@ -74,7 +74,7 @@
  },
  {
   "cell_type": "markdown",
-  "id": "e3401543",
+  "id": "1005f373",
   "metadata": {},
   "source": [
    "You should see an image similar to the one on the left.\n",
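The two pytorch-gan-zoo hunks above describe the model input as a noise vector of shape `(N, 120)` for DCGAN and `(N, 512)` for PGAN, where `N` is the number of images to generate. A dependency-free sketch of sampling such a latent batch (the real notebooks presumably use `torch.randn`, which is an assumption here):

```python
import random

def sample_noise(n, dim, seed=None):
    """Sample an n x dim batch of standard-normal noise (the GAN latent input)."""
    rng = random.Random(seed)
    return [[rng.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(n)]

batch = sample_noise(4, 512)  # PGAN latent batch; DCGAN would use dim=120
```

Each row is one latent vector, so generating `N` images means drawing `N` independent rows.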
36 changes: 18 additions & 18 deletions assets/hub/facebookresearch_pytorchvideo_resnet.ipynb
@@ -2,7 +2,7 @@
  "cells": [
  {
   "cell_type": "markdown",
-  "id": "c6e3e109",
+  "id": "00619deb",
   "metadata": {},
   "source": [
    "# 3D ResNet\n",
@@ -22,7 +22,7 @@
  {
   "cell_type": "code",
   "execution_count": null,
-  "id": "35df0999",
+  "id": "d595788b",
   "metadata": {},
   "outputs": [],
   "source": [
@@ -33,7 +33,7 @@
  },
  {
   "cell_type": "markdown",
-  "id": "565c3dd7",
+  "id": "102a8d2a",
   "metadata": {},
   "source": [
    "Import remaining functions:"
@@ -42,7 +42,7 @@
  {
   "cell_type": "code",
   "execution_count": null,
-  "id": "b4a5ff33",
+  "id": "ba23ffd1",
   "metadata": {},
   "outputs": [],
   "source": [
@@ -64,7 +64,7 @@
  },
  {
   "cell_type": "markdown",
-  "id": "71b77d71",
+  "id": "4a60936e",
   "metadata": {},
   "source": [
    "#### Setup\n",
@@ -75,7 +75,7 @@
  {
   "cell_type": "code",
   "execution_count": null,
-  "id": "d61f1edb",
+  "id": "92a7855b",
   "metadata": {
    "attributes": {
     "classes": [
@@ -94,7 +94,7 @@
  },
  {
   "cell_type": "markdown",
-  "id": "3eb1ebd5",
+  "id": "44c93646",
   "metadata": {},
   "source": [
    "Download the id to label mapping for the Kinetics 400 dataset on which the torch hub models were trained. This will be used to get the category label names from the predicted class ids."
@@ -103,7 +103,7 @@
  {
   "cell_type": "code",
   "execution_count": null,
-  "id": "494f8f29",
+  "id": "e4849ccb",
   "metadata": {},
   "outputs": [],
   "source": [
@@ -116,7 +116,7 @@
  {
   "cell_type": "code",
   "execution_count": null,
-  "id": "53f8397e",
+  "id": "b8dde08a",
   "metadata": {},
   "outputs": [],
   "source": [
@@ -131,7 +131,7 @@
  },
  {
   "cell_type": "markdown",
-  "id": "50b0697f",
+  "id": "ea6b33f1",
   "metadata": {},
   "source": [
    "#### Define input transform"
@@ -140,7 +140,7 @@
  {
   "cell_type": "code",
   "execution_count": null,
-  "id": "e59baf46",
+  "id": "71441349",
   "metadata": {},
   "outputs": [],
   "source": [
@@ -174,7 +174,7 @@
  },
  {
   "cell_type": "markdown",
-  "id": "4a7866fd",
+  "id": "ce350472",
   "metadata": {},
   "source": [
    "#### Run Inference\n",
@@ -185,7 +185,7 @@
  {
   "cell_type": "code",
   "execution_count": null,
-  "id": "342b6d09",
+  "id": "d74d9827",
   "metadata": {},
   "outputs": [],
   "source": [
@@ -197,7 +197,7 @@
  },
  {
   "cell_type": "markdown",
-  "id": "2c98b756",
+  "id": "2f515992",
   "metadata": {},
   "source": [
    "Load the video and transform it to the input format required by the model."
@@ -206,7 +206,7 @@
  {
   "cell_type": "code",
   "execution_count": null,
-  "id": "a39b1e5e",
+  "id": "a420c700",
   "metadata": {},
   "outputs": [],
   "source": [
@@ -231,7 +231,7 @@
  },
  {
   "cell_type": "markdown",
-  "id": "d17b186a",
+  "id": "fb5970b0",
   "metadata": {},
   "source": [
    "#### Get Predictions"
@@ -240,7 +240,7 @@
  {
   "cell_type": "code",
   "execution_count": null,
-  "id": "df270bdb",
+  "id": "81c54fe7",
   "metadata": {},
   "outputs": [],
   "source": [
@@ -259,7 +259,7 @@
  },
  {
   "cell_type": "markdown",
-  "id": "186a7bf1",
+  "id": "563cb067",
   "metadata": {},
   "source": [
    "### Model Description\n",
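The pytorchvideo hunks above walk through a load-model, Kinetics-400 id-to-label mapping, input-transform, and prediction pipeline, though the code bodies are collapsed in this diff. The final step, turning predicted class ids into label names, can be sketched on its own; the mapping below is a placeholder, not the real Kinetics label file:

```python
def topk_labels(scores, id_to_label, k=5):
    """Return the k highest-scoring class labels, best first."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return [id_to_label[i] for i in ranked[:k]]

# Placeholder mapping standing in for the downloaded Kinetics-400 id->label file.
id_to_label = {0: "archery", 1: "bowling", 2: "juggling"}
topk_labels([0.1, 0.7, 0.2], id_to_label, k=2)  # -> ['bowling', 'juggling']
```

In the notebook the scores come from the model's post-softmax output and `k=5` gives the usual top-5 predictions.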
