Simple script for face detection, alignment, and embedding extraction. It uses pretrained models from the insightface project, converted to PyTorch format with the pytorch-insightface project, and the MTCNN detector from the FaceDetector project. Face alignment is implemented with PyTorch tensor operations, based on the original insightface NumPy implementation.
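Landmark-based alignment of this kind typically estimates a least-squares similarity transform (Umeyama's method) from the detected 5-point landmarks onto a fixed template, then warps the face crop with it. The sketch below shows the idea in NumPy; the template coordinates are the widely published ArcFace 112x112 reference landmarks, and this is an illustration of the technique, not a copy of this repository's code.

```python
import numpy as np

# Commonly used 5-point ArcFace template for a 112x112 crop
# (left eye, right eye, nose, left mouth corner, right mouth corner).
# Assumed reference values, not taken from this repo.
ARCFACE_TEMPLATE = np.array([
    [38.2946, 51.6963],
    [73.5318, 51.5014],
    [56.0252, 71.7366],
    [41.5493, 92.3655],
    [70.7299, 92.2041],
], dtype=np.float64)

def similarity_transform(src, dst):
    """Estimate the 2x3 similarity transform (scale, rotation, translation)
    mapping src landmarks onto dst, via Umeyama's least-squares method."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    # Cross-covariance between the centered point sets.
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    # Guard against a reflection solution.
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = dst_mean - scale * R @ src_mean
    # 2x3 matrix, directly usable with cv2.warpAffine(image, M, (112, 112)).
    return np.hstack([scale * R, t[:, None]])
```

The resulting matrix is applied to the whole image so the landmarks land on the template positions before the crop is fed to the embedder.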
- Load submodules:
$ git submodule update --init
or clone the project with its submodules in one step:
$ git clone --recursive https://github.com/MZHI/arcface-embedder.git
- Build the FaceDetector submodule, following the instructions in FaceDetector/README.md:
$ cd FaceDetector/
$ pip install opencv-python numpy easydict Cython progressbar2 torch tensorboardX
$ python setup.py build_ext --inplace
$ python setup.py install
- Install insightface module:
$ pip install git+https://github.com/nizhib/pytorch-insightface
- If you want to load the embedder weights locally, you need to convert the weights from the insightface model zoo (models 3.1, 3.2, and 3.3), following the instructions in
pytorch-insightface/README.md
: download the original insightface zoo weights and place the *.params and *.json files into pytorch-insightface/resource/{model}. Then run
$ python pytorch-insightface/scripts/convert.py
to convert and test the PyTorch weights.
You can run the script without any parameters; all input parameters will be set to their default values (see the next section for details):
$ python3 run.py
To run with a specific image path:
$ python3 run.py --image-path [image name]
The following input parameters are available:
- --image-path: path to image to be processed. Default: ./images/office5.jpg
- --is-local-weights: whether to use local weights (1) or download them from the remote server (0). Default: 0
- --weights-base-path: root path to the insightface weights converted to PyTorch format. Only used when --is-local-weights == 1. Default: pytorch-insightface/resource
- --show-face: whether to show cropped and aligned face or not. Default: 0
- --align-torch: whether to use the PyTorch (1) or NumPy (0) implementation of face alignment. Default: 1
- --arch: architecture of embedder: iresnet34|iresnet50|iresnet100. Default: iresnet100
To compare two feature vectors, the L1 norm is used:
numpy.linalg.norm(v1 - v2, 1)
where v1 and v2 are feature vectors of size 512.
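A minimal sketch of that comparison; the random vectors here are stand-ins for real 512-dimensional embeddings produced by the embedder:

```python
import numpy as np

# Stand-in embeddings; in practice these come from the iresnet embedder.
rng = np.random.default_rng(42)
v1 = rng.standard_normal(512)
v2 = rng.standard_normal(512)

# L1 distance between the two feature vectors, as described above.
# A smaller distance means the two faces are more similar.
distance = np.linalg.norm(v1 - v2, ord=1)
print(distance)
```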