This repository has been archived by the owner on Mar 19, 2023. It is now read-only.

Merge pull request #86 from robmarkcole/use-camera-name
Use entity name in saved images
robmarkcole authored Nov 12, 2019
2 parents b2684be + 707bd1a commit f0df622
Showing 4 changed files with 37 additions and 68 deletions.
15 changes: 9 additions & 6 deletions README.md
@@ -28,7 +28,9 @@ docker run -e VISION-DETECTION=True -e API-KEY="Mysecretkey" -v localstorage:/da
```

## Usage of this component
The `deepstack_object` component adds an `image_processing` entity whose state is the total number of `target` objects above a `confidence` threshold, which has a default value of 80%. The time of the last detection of the `target` object is in the `last detection` attribute. The type and number of objects (of any confidence) are listed in the `summary` attributes. Optionally the processed image can be saved to disk. If `save_file_folder` is configured, two images are created: one with a filename of the format `deepstack_latest_{target}.jpg`, which is overwritten on each new detection of the `target`, and another with a unique filename including the timestamp. An event `image_processing.object_detected` is fired for each object detected. If you are a power user with advanced needs, such as zoning detections or tracking multiple object types, you will need to use the `image_processing.object_detected` events.
The `deepstack_object` component adds an `image_processing` entity whose state is the total number of `target` objects above a `confidence` threshold, which has a default value of 80%. The time of the last detection of the `target` object is in the `last detection` attribute. The type and number of objects (of any confidence) are listed in the `summary` attributes. Optionally the processed image can be saved to disk. If `save_file_folder` is configured, two images are created: one with a filename of the format `deepstack_object_{source name}_latest_{target}.jpg`, which is overwritten on each new detection of the `target`, and another with a unique filename including the timestamp. An event `image_processing.object_detected` is fired for each object detected. If you are a power user with advanced needs, such as zoning detections or tracking multiple object types, you will need to use the `image_processing.object_detected` events.
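As an illustrative sketch, an automation could listen for this event; the notify service and the `object` event-data key here are assumptions for illustration, not confirmed by this diff:

```yaml
automation:
  - alias: Notify on person detection
    trigger:
      - platform: event
        event_type: image_processing.object_detected
    condition:
      - condition: template
        value_template: "{{ trigger.event.data.object == 'person' }}"
    action:
      - service: notify.notify
        data_template:
          message: "Detected a {{ trigger.event.data.object }}"
```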

**Note** that by default the component will **not** automatically scan images, but requires you to call the `image_processing.scan` service e.g. using an automation triggered by motion. Alternatively, periodic scanning can be enabled by configuring a `scan_interval`.
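A motion-triggered scan might look like the following sketch; the motion sensor entity is a hypothetical name for illustration:

```yaml
automation:
  - alias: Scan on motion
    trigger:
      - platform: state
        entity_id: binary_sensor.hall_motion  # hypothetical motion sensor
        to: "on"
    action:
      - service: image_processing.scan
        entity_id: image_processing.deepstack_person_detector
```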

## Home Assistant setup
Place the `custom_components` folder in your configuration directory (or add its contents to an existing `custom_components` folder). Then configure object detection. **Important:** It is necessary to configure only a single camera per `deepstack_object` entity. If you want to process multiple cameras, you will therefore need multiple `deepstack_object` `image_processing` entities. **Note** that we can use `scan_interval` to (optionally) limit computation, [as described here](https://www.home-assistant.io/components/image_processing/#scan_interval-and-optimising-resources).
@@ -41,10 +43,11 @@ image_processing:
    ip_address: localhost
    port: 5000
    api_key: Mysecretkey
    save_file_folder: /config/www/deepstack_person_images
    # scan_interval: 30 # Optional, in seconds
    save_file_folder: /config/www/
    source:
      - entity_id: camera.local_file
        name: person_detector
        name: deepstack_person_detector
```
Configuration variables:
@@ -121,12 +124,12 @@ The `box` coordinates and the box center (`centroid`) can be used to determine where the object is within the image
* The centroid is in `(x,y)` coordinates where `(0,0)` is the top left hand corner of the image and `(1,1)` is the bottom right corner of the image.
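As an illustrative sketch (not the component's own code), the centroid of a `box` given as `(y_min, x_min, y_max, x_max)` in relative coordinates can be computed like this:

```python
def box_centroid(box):
    """Return the (x, y) centroid of a (y_min, x_min, y_max, x_max) box
    whose coordinates are relative floats in [0.0, 1.0], with (0, 0) the
    top-left corner of the image and (1, 1) the bottom-right corner."""
    y_min, x_min, y_max, x_max = box
    return (round((x_min + x_max) / 2, 2), round((y_min + y_max) / 2, 2))

# A box covering the top-left quadrant has its centroid at (0.25, 0.25).
print(box_centroid((0.0, 0.0, 0.5, 0.5)))
```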


## Displaying the `deepstack_latest_{target}.jpg` file
It is easy to display the `deepstack_latest_{target}.jpg` image with a [local_file](https://www.home-assistant.io/components/local_file/) camera. An example configuration is:
## Displaying the deepstack latest jpg file
It is easy to display the `deepstack_object_{source name}_latest_{target}.jpg` image with a [local_file](https://www.home-assistant.io/components/local_file/) camera. An example configuration is:
```yaml
camera:
  - platform: local_file
    file_path: /config/www/deepstack/deepstack_latest_person.jpg
    file_path: /config/www/deepstack_object_local_file_latest_person.jpg
    name: deepstack_latest_person
```

90 changes: 28 additions & 62 deletions custom_components/deepstack_object/image_processing.py
@@ -7,45 +7,47 @@
import base64
import datetime
import io
from typing import Tuple
import json
import logging
import os

from PIL import Image, ImageDraw
from datetime import timedelta
from typing import Tuple

import requests
import voluptuous as vol
from PIL import Image, ImageDraw

import deepstack.core as ds

import homeassistant.util.dt as dt_util
from homeassistant.const import ATTR_ENTITY_ID, ATTR_NAME
from homeassistant.core import split_entity_id
import homeassistant.helpers.config_validation as cv
import homeassistant.util.dt as dt_util
import voluptuous as vol
from homeassistant.components.image_processing import (
    PLATFORM_SCHEMA,
    ImageProcessingEntity,
    ATTR_CONFIDENCE,
    CONF_SOURCE,
    CONF_ENTITY_ID,
    CONF_NAME,
    CONF_SOURCE,
    DOMAIN,
    PLATFORM_SCHEMA,
    ImageProcessingEntity,
    draw_box,
)
from homeassistant.const import (
    ATTR_ENTITY_ID,
    ATTR_NAME,
    CONF_IP_ADDRESS,
    CONF_PORT,
    HTTP_BAD_REQUEST,
    HTTP_OK,
    HTTP_UNAUTHORIZED,
)
from homeassistant.core import split_entity_id

_LOGGER = logging.getLogger(__name__)

CONF_API_KEY = "api_key"
CONF_TARGET = "target"
CONF_TIMEOUT = "timeout"
CONF_SAVE_FILE_FOLDER = "save_file_folder"
DATETIME_FORMAT = "%Y-%m-%d %H:%M:%S"
DEFAULT_API_KEY = ""
DEFAULT_TARGET = "person"
DEFAULT_TIMEOUT = 10
@@ -55,6 +57,8 @@
CENTROID = "centroid"
FILE = "file"
OBJECT = "object"
RED = (255, 0, 0)
SCAN_INTERVAL = timedelta(days=365) # NEVER SCAN.


PLATFORM_SCHEMA = PLATFORM_SCHEMA.extend(
@@ -102,41 +106,6 @@ def get_box_centroid(box: Tuple) -> Tuple:
    return centroid


def draw_box(
    draw: ImageDraw,
    box: Tuple[float, float, float, float],
    img_width: int,
    img_height: int,
    text: str = "",
    color: Tuple[int, int, int] = (255, 255, 0),
) -> None:
    """
    Draw a bounding box on an image.
    The bounding box is defined by the tuple (y_min, x_min, y_max, x_max)
    where the coordinates are floats in the range [0.0, 1.0] and
    relative to the width and height of the image.
    For example, if an image is 100 x 200 pixels (height x width) and the bounding
    box is `(0.1, 0.2, 0.5, 0.9)`, the upper-left and bottom-right coordinates of
    the bounding box will be `(40, 10)` to `(180, 50)` (in (x,y) coordinates).
    """

    line_width = 5
    y_min, x_min, y_max, x_max = box
    (left, right, top, bottom) = (
        x_min * img_width,
        x_max * img_width,
        y_min * img_height,
        y_max * img_height,
    )
    draw.line(
        [(left, top), (left, bottom), (right, bottom), (right, top), (left, top)],
        width=line_width,
        fill=color,
    )
    if text:
        draw.text((left + line_width, abs(top - line_width)), text, fill=color)


def setup_platform(hass, config, add_devices, discovery_info=None):
    """Set up the classifier."""
    ip_address = config.get(CONF_IP_ADDRESS)
@@ -192,7 +161,7 @@ def __init__(
            self._name = name
        else:
            camera_name = split_entity_id(camera_entity)[1]
            self._name = "{} {}".format(CLASSIFIER, camera_name)
            self._name = "deepstack_object_{}".format(camera_name)
        self._state = None
        self._targets_confidences = []
        self._predictions = {}
@@ -231,7 +200,7 @@ def process_image(self, image):
            )
        )
        if self._state > 0:
            self._last_detection = dt_util.now()
            self._last_detection = dt_util.now().strftime(DATETIME_FORMAT)
        self._summary = ds.get_objects_summary(self._predictions)
        self.fire_prediction_events(self._predictions, self._confidence)
        if hasattr(self, "_save_file_folder") and self._state > 0:
@@ -257,20 +226,19 @@ def save_image(self, image, predictions, target, directory):
                box,
                self._image_width,
                self._image_height,
                str(prediction_confidence),
                text=str(prediction_confidence),
                color=RED,
            )

        latest_save_path = directory + "deepstack_latest_{}.jpg".format(target)
        timestamp_save_path = directory + "deepstack_{}_{}.jpg".format(
            target, self._last_detection.strftime("%Y-%m-%d-%H-%M-%S")
        latest_save_path = directory + "{}_latest_{}.jpg".format(self._name, target)
        timestamp_save_path = directory + "{}_{}_{}.jpg".format(
            self._name, target, self._last_detection
        )
        try:
            img.save(latest_save_path)
            img.save(timestamp_save_path)
            self.fire_saved_file_event(timestamp_save_path)
            _LOGGER.info("Saved bounding box image to %s", timestamp_save_path)
        except Exception as exc:
            _LOGGER.error("Error saving bounding box image : %s", exc)

        img.save(latest_save_path)
        img.save(timestamp_save_path)
        self.fire_saved_file_event(timestamp_save_path)
        _LOGGER.info("Saved bounding box image to %s", timestamp_save_path)

def fire_prediction_events(self, predictions, confidence):
"""Fire events based on predictions if above confidence threshold."""
@@ -323,8 +291,6 @@ def device_state_attributes(self):
"""Return device specific state attributes."""
attr = {}
if self._last_detection:
attr[
"last_{}_detection".format(self._target)
] = self._last_detection.strftime("%Y-%m-%d %H:%M:%S")
attr["last_{}_detection".format(self._target)] = self._last_detection
attr["summary"] = self._summary
return attr
Binary file modified docs/object_detail.png
Binary file modified docs/object_usage.png
