Add support for fp16 and int8 in accuracy_checker to check quantization for models (resnet, bert, gpt2). #637

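The quantization check described by the title could be exercised roughly as follows. This is a minimal sketch, assuming the MIGraphX Python API (parse_onnx, quantize_fp16, quantize_int8, get_target); the check_quantized helper, the random inputs, the empty int8 calibration list, and the tolerance values are illustrative assumptions, not the PR's actual accuracy_checker logic.

```python
import numpy as np
import migraphx

def check_quantized(onnx_file, mode="fp16", rtol=1e-2, atol=1e-2):
    # fp32 baseline program
    ref = migraphx.parse_onnx(onnx_file)
    ref.compile(migraphx.get_target("gpu"))

    # quantized copy of the same model
    quant = migraphx.parse_onnx(onnx_file)
    if mode == "fp16":
        migraphx.quantize_fp16(quant)
    elif mode == "int8":
        # int8 normally needs real calibration data; an empty list is a placeholder
        migraphx.quantize_int8(quant, migraphx.get_target("gpu"), [])
    quant.compile(migraphx.get_target("gpu"))

    # Random inputs shaped from the model's parameters. float32 data fits an
    # image model like resnet; bert/gpt2 would instead need integer token ids.
    params = {}
    for name, shape in ref.get_parameter_shapes().items():
        params[name] = migraphx.argument(
            np.random.rand(*shape.lens()).astype(np.float32))

    # Compare the last output of the baseline and the quantized program
    out_ref = np.array(ref.run(params)[-1])
    out_quant = np.array(quant.run(params)[-1])
    return np.allclose(out_ref, out_quant, rtol=rtol, atol=atol)

# e.g. check_quantized("resnet50.onnx", mode="fp16")
```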
name: Add items to GH project
on:
  pull_request:
    types:
      - opened
  issues:
    types:
      - opened

jobs:
  add-to-project:
    name: Add PRs and issues to MIGX project
    runs-on: ubuntu-latest
    steps:
      - uses: actions/[email protected]
        with:
          project-url: https://github.com/orgs/ROCmSoftwarePlatform/projects/26
          github-token: ${{ secrets.TEST_PR_WORKFLOW }}