Fix documentation errors in sample_int8
Signed-off-by: Rajeev Rao <[email protected]>
rajeevsrao committed Aug 5, 2021
1 parent 8c1e9c6 commit 03864ce
Showing 3 changed files with 3 additions and 3 deletions.
samples/README.md (1 addition, 1 deletion)
@@ -9,7 +9,7 @@
| [sampleDynamicReshape](sampleDynamicReshape) | C++ | ONNX | Digit Recognition With Dynamic Shapes In TensorRT |
| [sampleFasterRCNN](sampleFasterRCNN) | C++ | Caffe | Object Detection With Faster R-CNN |
| [sampleGoogleNet](sampleGoogleNet) | C++ | Caffe | Building And Running GoogleNet In TensorRT |
-| [sampleINT8](sampleINT8) | C++ | Caffe | Building And Running GoogleNet In TensorRT |
+| [sampleINT8](sampleINT8) | C++ | Caffe | Performing Inference In INT8 Using Custom Calibration |
| [sampleINT8API](sampleINT8API) | C++ | Caffe | Performing Inference In INT8 Precision |
| [sampleMLP](sampleMLP) | C++ | INetwork | “Hello World” For Multilayer Perceptron (MLP) |
| [sampleMNIST](sampleMNIST) | C++ | Caffe | “Hello World” For TensorRT |
samples/sampleINT8/README.md (1 addition, 1 deletion)
@@ -27,7 +27,7 @@

This sample, sampleINT8, performs INT8 calibration and inference.

-Specifically, this sample demonstrates how to perform inference in 8-bit integer (INT8). INT8 inference is available only on GPUs with compute capability 6.1 or 7.x. After the network is calibrated for execution in INT8, output of the calibration is cached to avoid repeating the process. You can then reproduce your own experiments with any deep learning framework in order to validate your results on ImageNet networks.
+Specifically, this sample demonstrates how to perform inference in 8-bit integer (INT8). INT8 inference is available only on GPUs with compute capability 6.1 or newer. After the network is calibrated for execution in INT8, output of the calibration is cached to avoid repeating the process. You can then reproduce your own experiments with any deep learning framework in order to validate your results on ImageNet networks.

## How does this sample work?

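The corrected paragraph above refers to caching the calibration output so that later engine builds can skip calibration. As context, here is a minimal sketch of that cache round-trip using TensorRT's `nvinfer1::IInt8EntropyCalibrator2` interface. The class name, cache-path handling, and the stubbed `getBatch` are hypothetical, and the `noexcept` signatures follow the TensorRT 8 headers; only the interface itself comes from TensorRT.

```cpp
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

#include <NvInfer.h>

// Hypothetical calibrator illustrating the cache round-trip: TensorRT calls
// readCalibrationCache() first and skips calibration when a cache is found;
// otherwise it calibrates and hands the result to writeCalibrationCache().
class CachingCalibrator : public nvinfer1::IInt8EntropyCalibrator2
{
public:
    explicit CachingCalibrator(std::string cachePath) : mCachePath(std::move(cachePath)) {}

    int32_t getBatchSize() const noexcept override { return 1; }

    // A real calibrator would copy the next batch of preprocessed inputs to
    // device memory here; returning false signals that calibration data is exhausted.
    bool getBatch(void* bindings[], char const* names[], int32_t nbBindings) noexcept override
    {
        return false; // placeholder: no batches supplied in this sketch
    }

    void const* readCalibrationCache(std::size_t& length) noexcept override
    {
        mCache.clear();
        std::ifstream in(mCachePath, std::ios::binary);
        if (in)
        {
            mCache.assign(std::istreambuf_iterator<char>(in), std::istreambuf_iterator<char>{});
        }
        length = mCache.size();
        return mCache.empty() ? nullptr : mCache.data();
    }

    void writeCalibrationCache(void const* cache, std::size_t length) noexcept override
    {
        std::ofstream out(mCachePath, std::ios::binary);
        out.write(static_cast<char const*>(cache), length);
    }

private:
    std::string mCachePath;
    std::vector<char> mCache;
};
```

Returning `nullptr` from `readCalibrationCache` forces a fresh calibration run, after which TensorRT passes the serialized scales to `writeCalibrationCache` for reuse on subsequent builds.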
samples/sampleINT8API/README.md (1 addition, 1 deletion)
@@ -21,7 +21,7 @@

## Description

-This sample, sampleINT8API, performs INT8 inference without using the INT8 calibrator; using the user provided per activation tensor dynamic range. INT8 inference is available only on GPUs with compute capability 6.1 or 7.x and supports Image Classification ONNX models such as ResNet-50, VGG19, and MobileNet.
+This sample, sampleINT8API, performs INT8 inference without using the INT8 calibrator; using the user provided per activation tensor dynamic range. INT8 inference is available only on GPUs with compute capability 6.1 or newer and supports Image Classification ONNX models such as ResNet-50, VGG19, and MobileNet.

Specifically, this sample demonstrates how to:
- Use `nvinfer1::ITensor::setDynamicRange` to set per tensor dynamic range
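Since the hunk above mentions `nvinfer1::ITensor::setDynamicRange`, a short sketch of how user-provided per-tensor ranges might be applied follows. The `setAllDynamicRanges` helper and the `rangeOf` lookup table are hypothetical; only `ITensor::setDynamicRange` and the `INetworkDefinition` traversal calls are TensorRT API.

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>

#include <NvInfer.h>

// Hypothetical helper: apply user-provided per-tensor dynamic ranges to every
// network input and layer output, as the sample's description suggests.
void setAllDynamicRanges(nvinfer1::INetworkDefinition& network,
                         std::unordered_map<std::string, float> const& rangeOf)
{
    auto apply = [&rangeOf](nvinfer1::ITensor* tensor) {
        auto it = rangeOf.find(tensor->getName());
        if (it != rangeOf.end())
        {
            // TensorRT expects a symmetric range [-max, max] for INT8.
            tensor->setDynamicRange(-it->second, it->second);
        }
    };

    for (int32_t i = 0; i < network.getNbInputs(); ++i)
    {
        apply(network.getInput(i));
    }
    for (int32_t i = 0; i < network.getNbLayers(); ++i)
    {
        nvinfer1::ILayer* layer = network.getLayer(i);
        for (int32_t j = 0; j < layer->getNbOutputs(); ++j)
        {
            apply(layer->getOutput(j));
        }
    }
}
```

For the ranges to take effect, the builder configuration must also enable INT8 mode via `nvinfer1::BuilderFlag::kINT8`.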
