Object Detection toolkit based on PaddlePaddle. It supports object detection, instance segmentation, multiple object tracking and real-time multi-person keypoint detection.


English | 简体中文

Product news

Introduction

PaddleDetection is an end-to-end object detection development kit based on PaddlePaddle. It implements a wide range of mainstream object detection, instance segmentation, tracking and keypoint detection algorithms in a modular design, with configurable components such as network architectures, data augmentations and losses, and releases many SOTA industrial-practice models. It also integrates model compression and cross-platform high-performance deployment capabilities, aiming to help developers through the whole end-to-end development process in a faster and better way.

PaddleDetection provides image processing capabilities such as object detection, instance segmentation, multi-object tracking, keypoint detection and more.
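
To make the configuration-driven workflow described above concrete, below is a minimal training sketch in Python. It assumes a PaddleDetection 2.x-style installation; the module paths (ppdet.core.workspace, ppdet.engine) and the example config path follow the patterns used by the toolkit's tools/train.py script, so verify them against your installed version.

    # Minimal sketch, assuming a PaddleDetection 2.x-style install.
    # Verify module paths and config names against your version.
    from ppdet.core.workspace import load_config
    from ppdet.engine import Trainer

    # Every model is described by a YAML config that wires together the
    # architecture, backbone, reader pipeline, losses and optimizer.
    cfg = load_config('configs/yolov3/yolov3_darknet53_270e_coco.yml')

    trainer = Trainer(cfg, mode='train')        # builds model, dataloader and optimizer from the config
    trainer.load_weights(cfg.pretrain_weights)  # start from the pre-trained weights referenced in the config
    trainer.train()                             # run the training loop described by the config

The toolkit's evaluation and inference scripts follow the same pattern, swapping only the mode and config, which is what keeps the workflow end to end.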

Features

  • Rich Models: PaddleDetection provides a rich set of models, including 100+ pre-trained models for object detection, instance segmentation, face detection and more, covering a variety of global competition champion schemes.

  • Highly Flexible: Components are designed to be modular. Model architectures, as well as data preprocessing pipelines and optimization strategies, can be easily customized with simple configuration changes (see the sketch after this list).

  • Production Ready: The whole pipeline, from data augmentation and model construction to training, compression and deployment, is covered end to end, with complete support for multi-architecture and multi-device deployment on cloud and edge devices.

  • High Performance: Built on the high-performance core of PaddlePaddle, the toolkit has clear advantages in training speed and memory footprint. FP16 training and multi-machine training are supported as well.
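
As a sketch of the configuration-driven flexibility mentioned above, the snippet below overrides a few fields of a released config from Python instead of editing the YAML file. The key names (epoch, LearningRate.base_lr, TrainReader.batch_size) follow the conventions of the released configs but should be treated as assumptions and checked against the config you actually use.

    # Hedged sketch: customizing a released config without editing the YAML.
    # Key names below are assumptions based on released config conventions.
    from ppdet.core.workspace import load_config, merge_config
    from ppdet.engine import Trainer

    cfg = load_config('configs/ppyolo/ppyolo_r50vd_dcn_1x_coco.yml')

    # Override selected fields; everything else (architecture, backbone,
    # losses, augmentations) keeps the values defined in the YAML config.
    merge_config({
        'epoch': 50,
        'LearningRate': {'base_lr': 0.005},
        'TrainReader': {'batch_size': 8},
    })

    trainer = Trainer(cfg, mode='train')
    trainer.train()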

Overview of Kit Structures

Architectures
  • Object Detection
    • Faster RCNN
    • FPN
    • Cascade-RCNN
    • Libra RCNN
    • Hybrid Task RCNN
    • PSS-Det
    • RetinaNet
    • YOLOv3
    • YOLOv4
    • PP-YOLOv1/v2
    • PP-YOLO-Tiny
    • SSD
    • CornerNet-Squeeze
    • FCOS
    • TTFNet
    • PP-PicoDet
    • DETR
    • Deformable DETR
    • Swin Transformer
    • Sparse RCNN
  • Instance Segmentation
    • Mask RCNN
    • SOLOv2
  • Face Detection
    • FaceBoxes
    • BlazeFace
    • BlazeFace-NAS
  • Multi-Object-Tracking
    • JDE
    • FairMOT
    • DeepSORT
  • KeyPoint-Detection
    • HRNet
    • HigherHRNet

Backbones
  • ResNet(&vd)
  • ResNeXt(&vd)
  • SENet
  • Res2Net
  • HRNet
  • Hourglass
  • CBNet
  • GCNet
  • DarkNet
  • CSPDarkNet
  • VGG
  • MobileNetv1/v3
  • GhostNet
  • EfficientNet
  • BlazeNet

Components
  • Common
    • Sync-BN
    • Group Norm
    • DCNv2
    • Non-local
  • KeyPoint
    • DarkPose
  • FPN
    • BiFPN
    • BFP
    • HRFPN
    • ACFPN
  • Loss
    • Smooth-L1
    • GIoU/DIoU/CIoU
    • IoUAware
  • Post-processing
    • SoftNMS
    • MatrixNMS
  • Speed
    • FP16 training
    • Multi-machine training

Data Augmentation (see the composition sketch after this list)
  • Resize
  • Lighting
  • Flipping
  • Expand
  • Crop
  • Color Distort
  • Random Erasing
  • Mixup
  • Mosaic
  • Cutmix
  • Grid Mask
  • Auto Augment
  • Random Perspective
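
The augmentation operators listed above are composed in the reader section of a model's config. Below is a hedged Python sketch that swaps in a specific set of sample-level transforms via a config override; the operator names and the sample_transforms key mirror the released reader configs, but the exact fields are assumptions to check against your version.

    # Hedged sketch: selecting sample-level augmentations via a config override.
    # Operator names and the sample_transforms key mirror released configs;
    # treat the exact fields as assumptions for your version.
    from ppdet.core.workspace import load_config, merge_config

    cfg = load_config('configs/ppyolo/ppyolo_r50vd_dcn_1x_coco.yml')

    # Each entry is one augmentation operator applied to a sample, in order.
    merge_config({
        'TrainReader': {
            'sample_transforms': [
                {'Decode': {}},                          # read image bytes into an array
                {'Mixup': {'alpha': 1.5, 'beta': 1.5}},  # blend two samples and their boxes
                {'RandomDistort': {}},                   # random color distortion
                {'RandomFlip': {'prob': 0.5}},           # random horizontal flip
            ]
        }
    })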

Overview of Model Performance

The relationship between COCO mAP and FPS on Tesla V100 for representative server-side models of each architecture and backbone.

NOTE:

  • CBResNet stands for Cascade-Faster-RCNN-CBResNet200vd-FPN, which achieves the highest COCO mAP, 53.3%.

  • Cascade-Faster-RCNN stands for Cascade-Faster-RCNN-ResNet50vd-DCN, which has been optimized in PaddleDetection to 20 FPS inference speed at a COCO mAP of 47.8%.

  • PP-YOLO achieves a COCO mAP of 45.9% at 72.9 FPS on Tesla V100, surpassing YOLOv4 in both precision and speed.

  • PP-YOLOv2 is an optimized version of PP-YOLO, reaching a mAP of 49.5% at 68.9 FPS on Tesla V100.

  • All of these models are available in the Model Zoo.

The relationship between COCO mAP and FPS on Qualcomm Snapdragon 865 for representative mobile-side models.

NOTE:

  • All data were tested on a Qualcomm Snapdragon 865 (4*A77 + 4*A55) processor with a batch size of 1 and 4 CPU threads, using the NCNN library; the benchmark scripts are published at MobileDetBenchmark.
  • PP-PicoDet and PP-YOLO-Tiny are developed and released by PaddleDetection; the other models are not provided by PaddleDetection.

Tutorials

Get Started

Advanced Tutorials

Model Zoo

Applications

Updates

For updates, please refer to the change log for details.

License

PaddleDetection is released under the Apache 2.0 license.

Contributing

Contributions are highly welcome and we would really appreciate your feedback!

  • Thanks to Mandroide for cleaning up the code and unifying some function interfaces.
  • Thanks to FL77N for contributing the code of the Sparse-RCNN model.
  • Thanks to Chen-Song for contributing the code of the Swin Faster-RCNN model.
  • Thanks to yangyudong and hchhtc123 for contributing the PP-Tracking GUI interface.
  • Thanks to Shigure19 for contributing the PP-TinyPose fitness APP.

Citation

@misc{ppdet2019,
  title={PaddleDetection, Object detection and instance segmentation toolkit based on PaddlePaddle.},
  author={PaddlePaddle Authors},
  howpublished = {\url{https://github.com/PaddlePaddle/PaddleDetection}},
  year={2019}
}
