iccv21-mfr

InsightFace Track of ICCV21-MFR

NEWS

2021-07-16 Implicit batch inference is prohibited. For example, inserting data-dependent OPs into the onnx graph to enable an automatic flip-test (or similar ideas) is not allowed. We will check for this after submission closes, to ensure fairness.

2021-06-17 Participants are now ranked by their combined score across two datasets, TAR@Mask and TAR@MR-All, using the formula 0.25 * TAR@Mask + 0.75 * TAR@MR-All.

Introduction

The Masked Face Recognition Challenge & Workshop (MFR) will be held in conjunction with the International Conference on Computer Vision (ICCV) 2021.

Workshop-Homepage.

This workshop has two tracks: the InsightFace track (described here) and the Webface260M track (with a larger training set).

Submission server link: http://iccv21-mfr.com/

An alternative submission server for Non-Chinese users: http://124.156.136.55/

Discussion group:

WeChat: mfr_group (QR code image)

QQ Group: 711302608, answer: github

Online issue discussion: deepinsight#1564

Test sets for the InsightFace track

In this challenge, we will evaluate accuracy on the following test sets:

  • Accuracy between masked and non-masked faces.
  • Accuracy among children (2~16 years old).
  • Accuracy on globalised multi-racial benchmarks.

We ensure that there is no overlap between these test sets and publicly available training datasets, as they were not collected from online celebrities.

Our test datasets mainly come from IFRT.

Mask test-set:

The Mask test set contains 6,964 identities, 6,964 masked images and 13,928 non-masked images, giving 13,928 positive pairs and 96,983,824 negative pairs in total.

Click to check the sample images (manually blurred here to protect privacy): ifrtsample

Children test-set:

The Children test set contains 14,344 identities and 157,280 images, giving 1,773,428 positive pairs and 24,735,067,692 negative pairs in total.

Click to check the sample images (manually blurred here to protect privacy): ifrtsample

Multi-racial test-set (MR in short):

The globalised multi-racial testset contains 242,143 identities and 1,624,305 images.

| Race-Set | Identities | Images | Positive Pairs | Negative Pairs |
| --- | --- | --- | --- | --- |
| African | 43,874 | 298,010 | 870,091 | 88,808,791,999 |
| Caucasian | 103,293 | 697,245 | 2,024,609 | 486,147,868,171 |
| Indian | 35,086 | 237,080 | 688,259 | 56,206,001,061 |
| Asian | 59,890 | 391,970 | 1,106,078 | 153,638,982,852 |
| ALL | 242,143 | 1,624,305 | 4,689,037 | 2,638,360,419,683 |
Click to check the sample images (manually blurred here to protect privacy): ifrtsample

Evaluation Metric

For the Mask set, TAR is measured on the mask-to-nonmask 1:1 protocol, at FAR = 1e-4.

For the Children set, TAR is measured on the all-to-all 1:1 protocol, at FAR = 1e-4.

For the other sets, TAR is measured on the all-to-all 1:1 protocol, at FAR = 1e-6.

Similar to FRVT, participants are ranked by their combined score across two datasets, TAR@Mask and TAR@MR-All, using the formula 0.25 * TAR@Mask + 0.75 * TAR@MR-All.
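
The ranking formula can be sketched in code. This is only an illustrative reimplementation, not the official evaluation code: `tar_at_far` shows one common way to compute TAR at a fixed FAR by thresholding negative-pair scores, and `final_score` applies the weighting above.

```python
import numpy as np

def tar_at_far(pos_scores, neg_scores, far):
    """TAR at a given FAR: pick a threshold so that at most a `far`
    fraction of negative pairs is accepted, then report the fraction
    of positive pairs above that threshold."""
    neg = np.sort(np.asarray(neg_scores, dtype=np.float64))[::-1]  # descending
    k = int(far * len(neg))
    thresh = neg[k] if k < len(neg) else neg[-1]
    return float(np.mean(np.asarray(pos_scores, dtype=np.float64) > thresh))

def final_score(tar_mask, tar_mr_all):
    """Ranking formula: 0.25 * TAR@Mask + 0.75 * TAR@MR-All."""
    return 0.25 * tar_mask + 0.75 * tar_mr_all
```

Note that the MR-All TARs are measured at the stricter FAR = 1e-6, so the final score mixes TARs evaluated at different operating points.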

Baselines

| Backbone | Dataset | Method | Mask | Children | African | Caucasian | South Asian | East Asian | MR-All | size (MB) | infer (ms) | link |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| R100 | Casia | ArcFace | 26.623 | 30.359 | 39.666 | 53.933 | 47.807 | 21.572 | 42.735 | 248.904 | 7.073 | download |
| R100 | MS1MV2 | ArcFace | 65.767 | 60.496 | 79.117 | 87.176 | 85.501 | 55.807 | 80.725 | 248.904 | 7.028 | download |
| R18 | MS1MV3 | ArcFace | 47.853 | 41.047 | 62.613 | 75.125 | 70.213 | 43.859 | 68.326 | 91.658 | 1.856 | download |
| R34 | MS1MV3 | ArcFace | 58.723 | 55.834 | 71.644 | 83.291 | 80.084 | 53.712 | 77.365 | 130.245 | 3.054 | download |
| R50 | MS1MV3 | ArcFace | 63.850 | 60.457 | 75.488 | 86.115 | 84.305 | 57.352 | 80.533 | 166.305 | 4.262 | download |
| R100 | MS1MV3 | ArcFace | 69.091 | 66.864 | 81.083 | 89.040 | 88.082 | 62.193 | 84.312 | 248.590 | 7.031 | download |
| R18 | Glint360K | ArcFace | 53.317 | 48.113 | 68.230 | 80.575 | 75.852 | 47.831 | 72.074 | 91.658 | 2.013 | download |
| R34 | Glint360K | ArcFace | 65.106 | 65.454 | 79.907 | 88.620 | 86.815 | 60.604 | 83.015 | 130.245 | 3.044 | download |
| R50 | Glint360K | ArcFace | 70.233 | 69.952 | 85.272 | 91.617 | 90.541 | 66.813 | 87.077 | 166.305 | 4.340 | download |
| R100 | Glint360K | ArcFace | 75.567 | 75.202 | 89.488 | 94.285 | 93.434 | 72.528 | 90.659 | 248.590 | 7.038 | download |
| - | Private | insightface-000 of frvt | 97.760 | 93.358 | 98.850 | 99.372 | 99.058 | 87.694 | 97.481 | - | - | - |

(MS1M-V2 means MS1M-ArcFace, MS1M-V3 means MS1M-RetinaFace).

Inference time was measured on a Tesla V100 GPU using onnxruntime-gpu==1.6.

Rules

  1. We have two sub-tracks, determined by the training dataset and the inference time limit:
  • Sub-Track A: Use MS1M-V3 as the training set (download: ref-link); feature length must be <= 512, and inference time must be <= 10 ms on a Tesla V100 GPU.
  • Sub-Track B: Use Glint360K as the training set (download: ref-link); feature length must be <= 1024, and inference time must be <= 20 ms on a Tesla V100 GPU.
  2. Training and test images are both aligned to 112x112; re-alignment is prohibited.
  3. Mask data augmentation is allowed, such as this. The applied mask-augmentation tool should be reproducible.
  4. External datasets and pretrained models are both prohibited.
  5. Participants submit an onnx model and then get scores from our online evaluation. Test images are invisible.
  6. The matching score is measured by cosine similarity.
  7. Model size must be <= 1 GB.
  8. The input shape of the submitted model must equal 3x112x112 (RGB order).
  9. The online evaluation server uses onnxruntime-gpu==1.6, cuda==10.2, cudnn==8.0.5.
  10. Float-16 model weights are prohibited, as they lead to incorrect model size estimation.
  11. Please use onnx_helper.py to check whether your model is valid.
  12. Participants are ranked by their combined score across two datasets, TAR@Mask and TAR@MR-All, using the formula 0.25 * TAR@Mask + 0.75 * TAR@MR-All.
  13. Top-ranked participants must provide their solutions and code after submission closes, to ensure validity.
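
The matching score in rule 6 is plain cosine similarity between two feature vectors. A minimal sketch (the `cosine_similarity` helper below is illustrative, not part of the evaluation code):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors, as used for the
    1:1 matching score: dot product divided by the norms."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Because the score is scale-invariant, L2-normalising your embeddings beforehand does not change any pair's score or ranking.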

Tutorial

  1. ArcFace-PyTorch (with Partial-FC), code, tutorial-cn
  2. OneFlow, code
  3. MXNet, code

Submission Guide

  1. Participants must package the onnx model for submission using zip xxx.zip model.onnx.
  2. Each participant can submit at most three times per day.
  3. Please sign up with your real organization name. You can hide the organization name in our system if you like.
  4. You can choose which submission is displayed on the leaderboard by clicking the 'Set Public' button.
  5. Please click 'sign-in' on the submission server if you find you're not logged in.
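
Step 1 uses the `zip` CLI; for reference, the same packaging can be done with Python's standard library (`model.onnx` and `submission.zip` are placeholder names):

```python
import zipfile

def package_submission(model_path="model.onnx", out_path="submission.zip"):
    """Equivalent of `zip submission.zip model.onnx`: a zip archive
    containing the onnx model at its root."""
    with zipfile.ZipFile(out_path, "w", compression=zipfile.ZIP_DEFLATED) as zf:
        # arcname keeps only the file name, so the archive contains
        # model.onnx directly rather than a nested directory path.
        zf.write(model_path, arcname="model.onnx")
    return out_path
```

Keeping the model at the archive root (no directories) matches the `zip xxx.zip model.onnx` command in step 1.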

Server link: http://iccv21-mfr.com/

Timelines

  • 1 June - Release of the training data, baseline solutions and testing leader-board
  • 1 October - Stop leader-board submission (11:59 PM Pacific Time)
  • 7 October - Winners notification

Sponsors

(in alphabetical order)

DeepGlint

Kiwi Tech

OneFlow

[More]

Bonus Share

| Place | Sub-Track A | Sub-Track B |
| --- | --- | --- |
| 1st place | 30% | 30% |
| 2nd place | 15% | 15% |
| 3rd place | 5% | 5% |