From 03864ceaf0ac495d1b0afd1cd4fc19330bb9c5f1 Mon Sep 17 00:00:00 2001
From: Rajeev Rao
Date: Mon, 2 Aug 2021 02:22:51 -0700
Subject: [PATCH] Fix documentation errors in sample_int8

Signed-off-by: Rajeev Rao
---
 samples/README.md               | 2 +-
 samples/sampleINT8/README.md    | 2 +-
 samples/sampleINT8API/README.md | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/samples/README.md b/samples/README.md
index 470403aa..df83eab0 100644
--- a/samples/README.md
+++ b/samples/README.md
@@ -9,7 +9,7 @@
 | [sampleDynamicReshape](sampleDynamicReshape) | C++ | ONNX | Digit Recognition With Dynamic Shapes In TensorRT |
 | [sampleFasterRCNN](sampleFasterRCNN) | C++ | Caffe | Object Detection With Faster R-CNN |
 | [sampleGoogleNet](sampleGoogleNet) | C++ | Caffe | Building And Running GoogleNet In TensorRT |
-| [sampleINT8](sampleINT8) | C++ | Caffe | Building And Running GoogleNet In TensorRT |
+| [sampleINT8](sampleINT8) | C++ | Caffe | Performing Inference In INT8 Using Custom Calibration |
 | [sampleINT8API](sampleINT8API) | C++ | Caffe | Performing Inference In INT8 Precision |
 | [sampleMLP](sampleMLP) | C++ | INetwork | “Hello World” For Multilayer Perceptron (MLP) |
 | [sampleMNIST](sampleMNIST) | C++ | Caffe | “Hello World” For TensorRT |

diff --git a/samples/sampleINT8/README.md b/samples/sampleINT8/README.md
index 9aca3d8f..1fc21757 100644
--- a/samples/sampleINT8/README.md
+++ b/samples/sampleINT8/README.md
@@ -27,7 +27,7 @@

 This sample, sampleINT8, performs INT8 calibration and inference.

-Specifically, this sample demonstrates how to perform inference in 8-bit integer (INT8). INT8 inference is available only on GPUs with compute capability 6.1 or 7.x. After the network is calibrated for execution in INT8, output of the calibration is cached to avoid repeating the process. You can then reproduce your own experiments with any deep learning framework in order to validate your results on ImageNet networks.
+Specifically, this sample demonstrates how to perform inference in 8-bit integer (INT8). INT8 inference is available only on GPUs with compute capability 6.1 or newer. After the network is calibrated for execution in INT8, output of the calibration is cached to avoid repeating the process. You can then reproduce your own experiments with any deep learning framework in order to validate your results on ImageNet networks.

 ## How does this sample work?

diff --git a/samples/sampleINT8API/README.md b/samples/sampleINT8API/README.md
index adee6661..a8ea4107 100644
--- a/samples/sampleINT8API/README.md
+++ b/samples/sampleINT8API/README.md
@@ -21,7 +21,7 @@
 ## Description

-This sample, sampleINT8API, performs INT8 inference without using the INT8 calibrator; using the user provided per activation tensor dynamic range. INT8 inference is available only on GPUs with compute capability 6.1 or 7.x and supports Image Classification ONNX models such as ResNet-50, VGG19, and MobileNet.
+This sample, sampleINT8API, performs INT8 inference without using the INT8 calibrator; using the user provided per activation tensor dynamic range. INT8 inference is available only on GPUs with compute capability 6.1 or newer and supports Image Classification ONNX models such as ResNet-50, VGG19, and MobileNet.

 Specifically, this sample demonstrates how to:
 - Use `nvinfer1::ITensor::setDynamicRange` to set per tensor dynamic range