From 64efd542751566a71811e2b9a01521c420cfd669 Mon Sep 17 00:00:00 2001
From: "github-actions[bot]" <41898282+github-actions[bot]@users.noreply.github.com>
Date: Wed, 8 May 2024 06:39:13 +0000
Subject: [PATCH] Deployed aa87db6 with MkDocs version: 1.5.3
CSI Camera V2, as of Sat 23 Mar 08:47:12 UTC 2024.
+Example for older operating systems (those with command raspistill
):
use csi-legacy.dist as .env if you want to use Raspberry Pi camera
in .env replace token-change-me with the value of the token you copied
in .env replace fingerprint-change-me with some random value, which is alphanumeric and has at least 16 chars (and max of 40 chars), for example set it to fingerprint-myprinter-camera-1
Next, test the .env config.
+Some older Rpi 3 with older Debian with basic cam:
+PRINTER_ADDRESS=127.0.0.1
+PRUSA_CONNECT_CAMERA_TOKEN=token-change-me
+PRUSA_CONNECT_CAMERA_FINGERPRINT=trash-cam-night-video-wide-1
+CAMERA_DEVICE=/dev/video0
+CAMERA_COMMAND=raspistill
+CAMERA_COMMAND_EXTRA_PARAMS="--nopreview --mode 640:480 -o"
+
CSI Camera V2, as of Sat 23 Mar 08:47:12 UTC 2024.
+Example for newer operating systems (commands libcamera
or rpicam-still
):
use csi.dist as .env if you want to use Raspberry Pi camera
in .env replace token-change-me with the value of the token you copied
in .env replace fingerprint-change-me with some random value, which is alphanumeric and has at least 16 chars (and max of 40 chars), for example set it to fingerprint-myprinter-camera-1
Next, test the .env config.
+My Rpi Zero W with Raspberry Pi Camera v2 with +maximum resolution available:
+ +PRINTER_ADDRESS=192.168.1.25
+PRUSA_CONNECT_CAMERA_TOKEN=redacted
+PRUSA_CONNECT_CAMERA_FINGERPRINT=c10eb887-f107-41a4-900e-2c38ea12a11c
+CAMERA_DEVICE=/dev/video0
+CAMERA_COMMAND=rpicam-still
+CAMERA_COMMAND_EXTRA_PARAMS="--immediate --nopreview --mode 2592:1944:12:P --lores-width 0 --lores-height 0 --thumb none -o"
+
With esphome camera with snapshot we can use the ultimate power of curl
+command to fetch the image from the camera.
Configure esphome device:
+esp32_camera
and esp32_camera_web_server
with
+ snapshot
modules:
+esp32_camera:
+... (skipped due to the fact there are different modules)
+
+esp32_camera_web_server:
+ - port: 8081
+ mode: snapshot
+
Flash the device and wait until it boots and is available.
use esphome-snapshot.dist as .env
in .env replace token-change-me with the value of the token you copied
in .env replace fingerprint-change-me with some random value, which is alphanumeric and has at least 16 chars (and max of 40 chars), for example set it to fingerprint-myprinter3-camera-3
in .env replace your esphome device address and port in CAMERA_COMMAND_EXTRA_PARAMS
Next, test the .env config.
+I have esp32-wrover-dev board with camera + esphome + web ui for camera exposing
+snapshot frame on port 8081
.
We can use curl to fetch it.
+PRINTER_ADDRESS=127.0.0.1
+PRUSA_CONNECT_CAMERA_TOKEN=redacted
+PRUSA_CONNECT_CAMERA_FINGERPRINT=06f47777-f179-4025-bd80-9e4cb8db2aed
+CAMERA_DEVICE=/dev/null
+CAMERA_COMMAND=curl
+CAMERA_COMMAND_EXTRA_PARAMS=http://esp32-wrover-0461c8.local:8081/ -o
+
With esphome camera stream we can use the ffmpeg
to fetch the image from the
+camera stream. It requires a bit more computing power from esp device and the
+host that runs the image processing.
Notice that this is not the recommended way due to the amount of resources consumed.
+Configure esphome device:
+esp32_camera
and esp32_camera_web_server
with
+ stream
modules:
+esp32_camera:
+... (skipped due to the fact there are different modules)
+
+esp32_camera_web_server:
+ - port: 8080
+ mode: stream
+
Flash the device and wait until it boots and is available.
use esphome-stream.dist as .env
in .env replace token-change-me with the value of the token you copied
in .env replace fingerprint-change-me with some random value, which is alphanumeric and has at least 16 chars (and max of 40 chars), for example set it to fingerprint-myprinter3-camera-3
in .env replace your esphome device address and port in CAMERA_COMMAND_EXTRA_PARAMS; -update 1 may not be needed in certain ffmpeg versions
Next, test the .env config.
+The same ESP device with stream, notice different port (8080
).
PRINTER_ADDRESS=127.0.0.1
+PRUSA_CONNECT_CAMERA_TOKEN=token-change-me
+PRUSA_CONNECT_CAMERA_FINGERPRINT=f68336b-8dab-42cd-8729-6abd8855ff63
+CAMERA_DEVICE=/dev/null
+CAMERA_COMMAND=ffmpeg
+CAMERA_COMMAND_EXTRA_PARAMS="-y -i 'http://esp32-wrover-0461c8.local:8080/' -vframes 1 -q:v 1 -f image2 -update 1 "
+
This processing requires ffmpeg package.
Most standalone webcams are actually mjpg cams: they send an infinite motion JPEG stream over a specific URL.
The best option to find the URL is the camera manual; alternatively, open the web UI of the camera, and when you see the stream image, right click on the image and select Inspect to see the URL for the image - copy that URL.
+You should be able to test the stream locally with ffplay
command.
For example, if your camera is reachable over address 192.168.0.20
and port 8000
+under endpoint /ipcam/mjpeg.cgi
then below command should show the stream:
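+ffplay http://192.168.0.20:8000/ipcam/mjpeg.cgi
+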
There may be some user and password in the URL.
+If that works, then configuration should be pretty straightforward:
use ffmpeg-mjpg-stream.dist as .env
in .env replace token-change-me with the value of the token you copied
in .env replace fingerprint-change-me with some random value, which is alphanumeric and has at least 16 chars (and max of 40 chars), for example set it to fingerprint-myprinter4-camera-4
in .env replace your RTSP device address raspberry-pi, port and stream id in CAMERA_COMMAND_EXTRA_PARAMS if needed
Next, test the .env config.
+Beagle Camera stream - if I remember correctly, then camera url to the stream
+is something like http://192.168.2.92/ipcam/mjpeg.cgi
Replace 192.168.2.92
with your address in the example below.
PRINTER_ADDRESS=127.0.0.1
+PRUSA_CONNECT_CAMERA_TOKEN=token-change-me
+PRUSA_CONNECT_CAMERA_FINGERPRINT=fingerprint-change-me
+CAMERA_DEVICE=/dev/null
+CAMERA_COMMAND=ffmpeg
+CAMERA_COMMAND_EXTRA_PARAMS="-y -i 'http://192.168.2.92/ipcam/mjpeg.cgi' -vframes 1 -q:v 1 -f image2 -update 1 "
+
But it is better to use a snapshot instead of stream if available, +see here.
+ +DO NOT use VLC to test streams, there are unfortunately problems with it.
+Please use ffplay
from ffmpeg
package.
You have some options such as TCP or UDP stream (whatever..). +This should work with any other camera (usually there is a different port per stream)
+You should be able to test the stream locally with ffplay
command.
For example, if your camera is reachable over address 192.168.0.20
and port 8000
+under endpoint /stream
then below command should show the stream:
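+ffplay rtsp://192.168.0.20:8000/stream
+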
If that works, then configuration should be pretty straightforward:
use ffmpeg-mediamtx-rtsp-tcp.dist as .env
in .env replace token-change-me with the value of the token you copied
in .env replace fingerprint-change-me with some random value, which is alphanumeric and has at least 16 chars (and max of 40 chars), for example set it to fingerprint-myprinter4-camera-4
in .env replace your RTSP device address raspberry-pi, port and stream id in CAMERA_COMMAND_EXTRA_PARAMS if needed
in .env you can try with UDP, but you may not get it ;-)
Next, test the .env config.
Another of my Rpi Zero W devices, named hormex,
has two cameras:
and I'm running a mediamtx
server to convert those to RTSP streams.
+More about mediamtx is here.
So I can have two configs:
+.stream-csi
over UDP:
PRINTER_ADDRESS=127.0.0.1
+PRUSA_CONNECT_CAMERA_TOKEN=redacted
+PRUSA_CONNECT_CAMERA_FINGERPRINT=62e8ab72-9766-4ad5-b8b1-174d389fc0d3
+CAMERA_DEVICE=/dev/null
+CAMERA_COMMAND=ffmpeg
+CAMERA_COMMAND_EXTRA_PARAMS="-loglevel error -y -rtsp_transport udp -i "rtsp://hormex:8554/cam" -f image2 -vframes 1 -pix_fmt yuvj420p "
+
.stream-endo
over TCP:
PRINTER_ADDRESS=127.0.0.1
+PRUSA_CONNECT_CAMERA_TOKEN=redacted
+PRUSA_CONNECT_CAMERA_FINGERPRINT=01a67af8-86a3-45c7-b6e2-39e9d086b367
+CAMERA_DEVICE=/dev/null
+CAMERA_COMMAND=ffmpeg
+CAMERA_COMMAND_EXTRA_PARAMS="-loglevel error -y -rtsp_transport tcp -i "rtsp://hormex:8554/endoscope" -f image2 -vframes 1 -pix_fmt yuvj420p "
+
Some cameras expose a single image snapshot under a specific URL.
We can use the ultimate power of the curl
command to fetch the image from the camera.
This is the preferred way to use web cams, because right now Prusa Connect does not support streams, and thus there is no point in wasting CPU on that.
The best option to find the URL is the camera manual; alternatively, open the web UI of the camera, and when you see the still image, right click on the image and select Inspect to see the URL for the image - copy that URL.
You should be able to test the snapshot locally with the curl command.
For example, if your camera is reachable over address 192.168.0.20
and port 8001
+under endpoint /snap.jpg
then below command should show the image:
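+curl -vvv http://192.168.0.20:8001/snap.jpg -o snap.jpg
+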
If you then see in the output something like Content-Type: image/jpeg, you are good - see snap.jpg in the folder where you executed the command.
use snapshot.dist as .env
in .env replace token-change-me with the value of the token you copied
in .env replace fingerprint-change-me with some random value, which is alphanumeric and has at least 16 chars (and max of 40 chars), for example set it to fingerprint-myprinter3-camera-3
in .env replace your camera device address and port in CAMERA_COMMAND_EXTRA_PARAMS
Next, test the .env config.
+For more in-depth details see esphome snapshot.
+I have esp32-wrover-dev board with camera + esphome + web ui for camera exposing
+snapshot frame on port 8081
.
We can use curl to fetch it.
+PRINTER_ADDRESS=127.0.0.1
+PRUSA_CONNECT_CAMERA_TOKEN=redacted
+PRUSA_CONNECT_CAMERA_FINGERPRINT=06f47777-f179-4025-bd80-9e4cb8db2aed
+CAMERA_DEVICE=/dev/null
+CAMERA_COMMAND=curl
+CAMERA_COMMAND_EXTRA_PARAMS=http://esp32-wrover-0461c8.local:8081/ -o
+
This is not tested - I do not own such a camera, so it is hard to tell if this is right.
+Camera URL for snapshot http://192.168.2.92/images/snapshot0.jpg
so the config
+should be like below:
PRINTER_ADDRESS=127.0.0.1
+PRUSA_CONNECT_CAMERA_TOKEN=redacted
+PRUSA_CONNECT_CAMERA_FINGERPRINT=06f47777-f179-4025-bd80-9e4cb8db2aed
+CAMERA_DEVICE=/dev/null
+CAMERA_COMMAND=curl
+CAMERA_COMMAND_EXTRA_PARAMS=http://192.168.2.92/images/snapshot0.jpg -o
+
This should work on any linux distro with any sane camera that you have.
+Run v4l2-ctl --list-devices
.
This should show a list of devices to use, where /dev/video0 is a device name.
Notice that not every device is an actual camera.
The quick all-in-one output for camera /dev/video0 is:
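+v4l2-ctl -d /dev/video0 --all
+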
For more details about formats it is better to use
+v4l2-ctl --list-formats-ext -d /dev/video0
use usb.dist as .env
in .env replace token-change-me with the value of the token you copied
in .env replace fingerprint-change-me with some random value, which is alphanumeric and has at least 16 chars (and max of 40 chars), for example set it to fingerprint-myprinter2-camera-2
in .env replace /dev/video0 with the desired device in CAMERA_DEVICE
Next, test the .env config.
+Raspberry Pi Zero W with endoscope camera over USB, registered as /dev/video1
:
PRINTER_ADDRESS=192.168.1.25
+PRUSA_CONNECT_CAMERA_TOKEN=redacted
+PRUSA_CONNECT_CAMERA_FINGERPRINT=7054ba85-bc19-4eb9-badc-6129575d9651
+CAMERA_DEVICE=/dev/video1
+CAMERA_COMMAND=fswebcam
+CAMERA_COMMAND_EXTRA_PARAMS="--resolution 1280x960 --no-banner"
+
PRUSA_CONNECT_CAMERA_TOKEN
should be taken from earlier step.
PRUSA_CONNECT_CAMERA_FINGERPRINT
should be unique and set only once for each camera.
Fingerprint can be easily generated using command:
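+uuidgen
+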
or via an online website - just copy/paste the output as the fingerprint value into the config.
Do not change the fingerprint after launching the script - the camera is already registered under it, so you would need to revert the change, or delete the camera, add it again and start from scratch.
+Other env vars are set depending on the camera device we want to use.
+Next, test config.
Camera configuration is passed to the script as environment variables (env vars).
+SLEEP
- sleep time in seconds between image captures,
+ notice that PrusaConnect accepts images at most every 10s or slower.
+ Default value 10
.
PRINTER_ADDRESS
- Printer address to ping, if address is unreachable there
+ is no point in sending an image. Set to 127.0.0.1
to always send images.
+ Set to empty value to disable ping check and always send images.
+ Default value 127.0.0.1
PRUSA_CONNECT_CAMERA_TOKEN
- required, PrusaConnect API key
PRUSA_CONNECT_CAMERA_FINGERPRINT
- required, PrusaConnect camera fingerprint,
+ use for example cli uuidgen
or web
+ to generate it, it must be at least 16 alphanumeric chars, 40 max.
+ Remember not to change this if it was already set, otherwise you need to
+ remove and add the camera again.
CAMERA_DEVICE
- camera device to use, if you use Raspberry Pi camera
+ attached to the CSI via camera ribbon then leave as is
+ Default /dev/video0
which points to first detected camera.
CAMERA_SETUP_COMMAND
- camera setup command and params executed before
+ taking image, default value is empty, because some cameras do not support it,
+ in general you want to use something like v4l2-ctl parameters,
+ so for example
+ setup_command=v4l2-ctl --set-ctrl brightness=10,gamma=120 -d $CAMERA_DEVICE
+ will translate to:
+ v4l2-ctl --set-ctrl brightness=10,gamma=120 -d /dev/video0
CAMERA_COMMAND
- command used to invoke image capture,
+ default is rpicam-still
+ available options:
anything else will be processed directly, so for example you could use + 'ffmpeg' in here
+CAMERA_COMMAND_EXTRA_PARAMS
- extra params passed to the camera program,
+ passed directly as <command> <extra-params> <output_file>
+ example values per specific camera:
+
+
libcamera (rpicam-still)
+ --immediate --nopreview --mode 2592:1944:12:P --lores-width 0 --lores-height 0 --thumb none -o
--nopreview --mode 2592:1944:12:P -o
--resolution 1280x960 --no-banner
-f v4l2 -y -i /dev/video0 -f image2 -vframes 1 -pix_fmt yuvj420p
TARGET_DIR
- directory where to save camera images, image per camera will
+ be overwritten per image capture,
+ default value /dev/shm
so that we do not write to microSD cards or read only
+ filesystems/containers. /dev/shm
is a shared memory space. If you have more
+ printers you may need to increase its size on the system level.
CURL_EXTRA_PARAMS
- extra params to curl when pushing an image,
+ default empty value, but you could for example add additional params if needed
+ such as -k
if using tls proxy with self-signed certificate
PRUSA_CONNECT_URL
- Prusa Connect endpoint where to post images,
+ default value https://webcam.connect.prusa3d.com/c/snapshot
+ You could put here Prusa Connect Proxy if you use one.
For more in-depth details (no need to repeat them here) please see the top of +the prusa-connect-camera.sh.
Camera configuration is passed to the script as environment variables (env vars).
+The most important env vars are:
+PRUSA_CONNECT_CAMERA_TOKEN
PRUSA_CONNECT_CAMERA_FINGERPRINT
CAMERA_COMMAND
CAMERA_COMMAND_EXTRA_PARAMS
Those env vars will be filled in in the next steps.
+Full list of env vars can be seen here
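For orientation, a minimal .env could look roughly like the sketch below; the values are placeholders, and the actual CAMERA_COMMAND and CAMERA_COMMAND_EXTRA_PARAMS depend on your camera, as described in the per-camera pages.
+PRUSA_CONNECT_CAMERA_TOKEN=token-change-me
+PRUSA_CONNECT_CAMERA_FINGERPRINT=fingerprint-change-me
+CAMERA_COMMAND=fswebcam
+CAMERA_COMMAND_EXTRA_PARAMS="--resolution 1280x960 --no-banner"
+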
+ +Short overview of actions:
+Assuming you already have a working camera with basic setup, we can tune it further.
+Below steps depend on the camera capabilities, thus your mileage may vary.
Notice that Prusa Connect has a file size limit of about 8 MB for the uploaded image, so there may be no point in grabbing images with super high resolutions.
+Use v4l2-ctl
to get the list of available resolutions that camera provides
+and then update it in the env var configs.
Run v4l2-ctl --list-formats-ext -d /dev/video0
where /dev/video0
is a device
+listed from command above.
Example output:
+v4l2-ctl --list-formats-ext -d /dev/video1
+ioctl: VIDIOC_ENUM_FMT
+ Type: Video Capture
+
+ [0]: 'MJPG' (Motion-JPEG, compressed)
+ Size: Discrete 640x480
+ Interval: Discrete 0.033s (30.000 fps)
+ Interval: Discrete 0.033s (30.000 fps)
+ Size: Discrete 640x360
+ Interval: Discrete 0.033s (30.000 fps)
+ Size: Discrete 352x288
+ Interval: Discrete 0.033s (30.000 fps)
+ Size: Discrete 320x240
+ Interval: Discrete 0.033s (30.000 fps)
+ Size: Discrete 176x144
+ Interval: Discrete 0.033s (30.000 fps)
+ Size: Discrete 160x120
+ Interval: Discrete 0.033s (30.000 fps)
+ Size: Discrete 800x600
+ Interval: Discrete 0.033s (30.000 fps)
+ Size: Discrete 1280x720
+ Interval: Discrete 0.033s (30.000 fps)
+ Size: Discrete 1280x960
+ Interval: Discrete 0.033s (30.000 fps)
+ Size: Discrete 640x480
+ Interval: Discrete 0.033s (30.000 fps)
+ Interval: Discrete 0.033s (30.000 fps)
+ [1]: 'YUYV' (YUYV 4:2:2)
+ Size: Discrete 640x480
+ Interval: Discrete 0.033s (30.000 fps)
+ Interval: Discrete 0.033s (30.000 fps)
+ Size: Discrete 640x360
+ Interval: Discrete 0.033s (30.000 fps)
+ Size: Discrete 352x288
+ Interval: Discrete 0.033s (30.000 fps)
+ Size: Discrete 320x240
+ Interval: Discrete 0.033s (30.000 fps)
+ Size: Discrete 176x144
+ Interval: Discrete 0.033s (30.000 fps)
+ Size: Discrete 160x120
+ Interval: Discrete 0.033s (30.000 fps)
+ Size: Discrete 800x600
+ Interval: Discrete 0.200s (5.000 fps)
+ Size: Discrete 1280x720
+ Interval: Discrete 0.200s (5.000 fps)
+ Size: Discrete 1280x960
+ Interval: Discrete 0.200s (5.000 fps)
+ Size: Discrete 640x480
+ Interval: Discrete 0.033s (30.000 fps)
+ Interval: Discrete 0.033s (30.000 fps)
+
As you can see, if I set the video to YUYV with a resolution higher than 800x600 I would get only 5 frames per second. For still images this is not a problem, but for video streaming that could be too low and I would have to switch to MJPG (or actually mjpeg in ffmpeg).
+For Raspberry Cam v2 you could use csi.dist
as source and add
+--mode 2592:1944:12:P
to the CAMERA_COMMAND_EXTRA_PARAMS
.
For certain USB cameras (such as Tracer Endoscope) you should use usb.dist
and
+you should be able to add --resolution 1280x960
to the CAMERA_COMMAND_EXTRA_PARAMS
.
Video controls are things like brightness, auto white balance (awb), +exposure and so on.
Get device capabilities, especially User controls:
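+v4l2-ctl -d /dev/video0 -l
+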
and set the parameters you want accordingly in the CAMERA_SETUP_COMMAND env var, for example:
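+CAMERA_SETUP_COMMAND="v4l2-ctl --set-ctrl brightness=64,gamma=300 -d $CAMERA_DEVICE"
+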
Remember to restart the given camera service afterwards.
+You can try to use guvcview
desktop application to check params in real time.
You can pass on params to rpicam-still or fswebcam as you want.
+See rpicam-still --help
--hflip Read out with horizontal mirror
+ --vflip Read out with vertical flip
+ --rotation Use hflip and vflip to create the given rotation <angle>
+
so for example:
+CAMERA_COMMAND=rpicam-still
+CAMERA_COMMAND_EXTRA_PARAMS="--rotation 90 --immediate --nopreview --thumb none -o"
+
See fswebcam --help
--flip <direction> Flips the image. (h, v)
+ --crop <size>[,<offset>] Crop a part of the image.
+ --scale <size> Scales the image.
+ --rotate <angle> Rotates the image in right angles.
+
so for example:
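+CAMERA_COMMAND=fswebcam
+CAMERA_COMMAND_EXTRA_PARAMS="--flip v --resolution 640x480 --no-banner"
+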
+ +When curl is not enough and you don't really want to physically rotate your camera, +then use ffmpeg for post processing. +You can process static images with it, load v4l2 devices... whatever.
+With ffmpeg you can do interesting things with filters, it will just require +more computing power.
+v4l2
can be used as alias for video4linux2
.
You can pass video4linux options to ffmpeg on device initialization, for example:
+ +ffmpeg -f v4l2 -pix_fmt mjpeg -video_size 1280x960 -framerate 30 -i /dev/video1 \
+ -c:v libx264 -preset ultrafast -b:v 6000k -f rtsp rtsp://localhost:$RTSP_PORT/$MTX_PATH
+
would instruct ffmpeg to use video4linux, talk to the camera under /dev/video1, and force the mjpeg encoder, resolution and framerate.
+This command above is directly taken from mediamtx.
+For more params, see official ffmpeg docs.
+Just remember to pass them before defining input (-i /dev/video1
).
See here +for basic ones.
+You probably want to use -vf "transpose=1"
to rotate image 90 degrees clockwise:
CAMERA_COMMAND=ffmpeg
+CAMERA_COMMAND_EXTRA_PARAMS="-y -i 'http://esp32-wrover-0461c8.local:8080/' -vf 'transpose=1' -vframes 1 -q:v 1 -f image2 -update 1 "
+
Frankly speaking you can do anything you want with ffmpeg, for example
+-vf transpose=1,shufflepixels=m=block:height=16:width=16
Why? why not :D
This project aims to make it easier to use any camera as a Prusa Connect camera.
+Rpi Zero W or older devices may have CPU limitations to process remote streams + or multiple cameras at once
+I was not able to test EVERY setting so this may still have some bugs
Install system packages - assuming Debian based distros like Raspberry Pi OS, which also come with some pre-installed packages.
+Below commands should be executed in shell/terminal (on the Raspberry Pi).
+For most Raspberry Pi Cameras (CSI/USB):
+sudo apt-get update
+sudo apt-get install -y curl libcamera0 fswebcam git iputils-ping v4l-utils uuid-runtime
+
Additional packages for remote cameras - especially the ones that are used for streaming:
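+sudo apt-get install -y ffmpeg
+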
+ +Download this script:
+mkdir -p /home/pi/src
+cd /home/pi/src
+git clone https://github.com/nvtkaszpir/prusa-connect-camera-script.git
+cd prusa-connect-camera-script
+
Raspberry Pi Zero W is able to process CSI camera + (Rpi Cam v2) and USB 2k camera + but it has load average about 1.4, and CPU is quite well utilized, so you may + need to decrease resolution per camera to see how + it goes.
+for webcams it is always better to choose snapshot + because it requires less computing both on camera and on the host, + otherwise we need to use ffmpeg
+ffmpeg is usually noticeably slow and cpu intensive, especially if you do more + complex operations
+Printer
Camera
Add new other camera
Token
, this is needed later as
+ PRUSA_CONNECT_CAMERA_TOKEN
env varPhysical host or virtual machine or container:
+Camera such as:
+ +esp32_camera_web_server
with snapshot
moduleesp32_camera_web_server
with stream
module using ffmpeg
ffmpeg
Linux operating system. +Debian based preferred, for example Raspberry Pi OS Lite if you run Raspberry Pi. +I use also laptop with Ubuntu 22.04, but I believe with minor tweaks it should +work on most distributions (mainly package names are different).
+Below list uses Debian package names.
+bash
5.x (what year is it?)git
(just to install scripts from this repo)curl
iputils-ping
uuid-runtime
to make generation of camera fingerprint easierv4l-utils
- to detect camera capabilitieslibcamera0
- for Rpi CSI cameraslibraspberrypi-bin
or rpicam-apps-lite
for Rpi CSI cameras
+ (should be already installed on Rpi OS)fswebcam
- for generic USB camerasffmpeg
- for custom commands for capturing remote streamsThis project aims to make it easier to use any camera to be used as Prusa Connect camera.
"},{"location":"#features","title":"Features","text":"Rpi Zero W or older devices may have CPU limitations to process remote streams or multiple cameras at once
I was not able to test EVERY setting so this may still have some bugs
CSI Camera V2 as of Sat 23 Mar 08:47:12 UTC 2024.
Example for older operating systems (those with command raspistill
):
csi-legacy.dist
as .env
if you want to use Raspberry Pi camera.env
replace token-change-me
with the value of the token you copied.env
replace fingerprint-change-me
with some random value, which is alphanumeric and has at least 16 chars (and max of 40 chars), for example set it to fingerprint-myprinter-camera-1
.env
Next, test config.
"},{"location":"config.for.camera.csi.legacy/#real-world-scenario","title":"Real world scenario","text":"Some older Rpi 3 with older Debian with basic cam:
PRINTER_ADDRESS=127.0.0.1\nPRUSA_CONNECT_CAMERA_TOKEN=token-change-me\nPRUSA_CONNECT_CAMERA_FINGERPRINT=trash-cam-night-video-wide-1\nCAMERA_DEVICE=/dev/video0\nCAMERA_COMMAND=raspistill\nCAMERA_COMMAND_EXTRA_PARAMS=\"--nopreview --mode 640:480 -o\"\n
"},{"location":"config.for.camera.csi.libcamera/","title":"CSI camera on Raspberry Pi","text":"CSI Camera V2 as of Sat 23 Mar 08:47:12 UTC 2024.
Example for newer operating systems (commands libcamera
or rpicam-still
):
csi.dist
as .env
if you want to use Raspberry Pi camera.env
replace token-change-me
with the value of the token you copied.env
replace fingerprint-change-me
with some random value, which is alphanumeric and has at least 16 chars (and max of 40 chars), for example set it to fingerprint-myprinter-camera-1
.env
Next, test config.
"},{"location":"config.for.camera.csi.libcamera/#real-example","title":"Real example","text":"My Rpi Zero W with Raspberry Pi Camera v2 with maximum resolution available:
PRINTER_ADDRESS=192.168.1.25\nPRUSA_CONNECT_CAMERA_TOKEN=redacted\nPRUSA_CONNECT_CAMERA_FINGERPRINT=c10eb887-f107-41a4-900e-2c38ea12a11c\nCAMERA_DEVICE=/dev/video0\nCAMERA_COMMAND=rpicam-still\nCAMERA_COMMAND_EXTRA_PARAMS=\"--immediate --nopreview --mode 2592:1944:12:P --lores-width 0 --lores-height 0 --thumb none -o\"\n
"},{"location":"config.for.camera.esphome.snapshot/","title":"ESPHome camera snapshot","text":"With esphome camera with snapshot we can use the ultimate power of curl
command to fetch the image from the camera.
Configure esphome device:
esp32_camera
and esp32_camera_web_server
with snapshot
modules:esp32_camera:\n... (skipped due to the fact there are different modules)\n\nesp32_camera_web_server:\n - port: 8081\n mode: snapshot\n
Flash the device and wait until it boots and is available.
"},{"location":"config.for.camera.esphome.snapshot/#create-config-for-script","title":"Create config for script","text":"esphome-snapshot.dist
as .env
.env
replace token-change-me
with the value of the token you copied.env
replace fingerprint-change-me
with some random value, which is alphanumeric and has at least 16 chars (and max of 40 chars), for example set it to fingerprint-myprinter3-camera-3
.env
replace your esphome device address and port in CAMERA_COMMAND_EXTRA_PARAMS
.env
Next, test config.
"},{"location":"config.for.camera.esphome.snapshot/#real-world-example","title":"Real world example","text":"I have esp32-wrover-dev board with camera + esphome + web ui for camera exposing snapshot frame on port 8081
.
We can use curl to fetch it.
PRINTER_ADDRESS=127.0.0.1\nPRUSA_CONNECT_CAMERA_TOKEN=redacted\nPRUSA_CONNECT_CAMERA_FINGERPRINT=06f47777-f179-4025-bd80-9e4cb8db2aed\nCAMERA_DEVICE=/dev/null\nCAMERA_COMMAND=curl\nCAMERA_COMMAND_EXTRA_PARAMS=http://esp32-wrover-0461c8.local:8081/ -o\n
"},{"location":"config.for.camera.esphome.stream/","title":"ESPHome camera stream","text":"With esphome camera stream we can use the ffmpeg
to fetch the image from the camera stream. It requires a bit more computing power from esp device and the host that runs the image processing.
Notice that this is not recommended way due to the amount of consumed resources.
"},{"location":"config.for.camera.esphome.stream/#prepare-esphome-device","title":"Prepare esphome device","text":"Configure esphome device:
esp32_camera
and esp32_camera_web_server
with stream
modules:esp32_camera:\n... (skipped due to the fact there are different modules)\n\nesp32_camera_web_server:\n - port: 8080\n mode: stream\n
Flash the device and wait until it boots and is available.
"},{"location":"config.for.camera.esphome.stream/#create-config-for-script","title":"Create config for script","text":"esphome-stream.dist
as .env
.env
replace token-change-me
with the value of the token you copied.env
replace fingerprint-change-me
with some random value, which is alphanumeric and has at least 16 chars (and max of 40 chars), for example set it to fingerprint-myprinter3-camera-3
.env
replace your esphome device address and port in CAMERA_COMMAND_EXTRA_PARAMS
-update 1
may not be needed in certain ffmpeg versions.env
Next, test config.
"},{"location":"config.for.camera.esphome.stream/#real-world-example","title":"Real world example","text":"The same ESP device with stream, notice different port (8080
).
PRINTER_ADDRESS=127.0.0.1\nPRUSA_CONNECT_CAMERA_TOKEN=token-change-me\nPRUSA_CONNECT_CAMERA_FINGERPRINT=f68336b-8dab-42cd-8729-6abd8855ff63\nCAMERA_DEVICE=/dev/null\nCAMERA_COMMAND=ffmpeg\nCAMERA_COMMAND_EXTRA_PARAMS=\"-y -i 'http://esp32-wrover-0461c8.local:8080/' -vframes 1 -q:v 1 -f image2 -update 1 \"\n
"},{"location":"config.for.camera/","title":"Create config for prusa-connect-camera-script env vars","text":""},{"location":"config.for.camera/#prusa-camera-token","title":"Prusa Camera Token","text":"PRUSA_CONNECT_CAMERA_TOKEN
should be taken from earlier step.
PRUSA_CONNECT_CAMERA_FINGERPRINT
should be uniqe and set only once for each camera.
Fingerprint can be easily generated using command:
uuidgen\n
or via online website, just copy/paste the output as fingerprint value into the config.
Do not change fingerprint after launching the script - thus camera is registered and you may need to revert the change or delete and readd camera again and start from scratch.
"},{"location":"config.for.camera/#example-devices","title":"Example devices","text":"Other env vars are set depending on the camera device we want to use.
"},{"location":"config.for.camera/#locally-connected","title":"Locally connected","text":"Next, test config.
"},{"location":"config.for.camera.mjpg/","title":"Web Cam - MJPG stream","text":"This processing requires ffmpeg package.
Most standalone webcams are actually mjpg cams, they send infinite motion jpeg stream over specific URL.
The best option to check what is the URL is in the camera manual, or if you open web UI of the camera and see the stream image then right click on the image and select Inspect to see the URL for the image - copy that URL.
You should be able to test the stream locally with ffplay
command.
For example, if your camera is reachable over address 192.168.0.20
and port 8000
under endpoint /ipcam/mjpeg.cgi
then below command should show the stream:
ffplay http://192.168.0.20:8000/ipcam/mjpeg.cgi\n
There may be some user and password in the URL.
If that works, then configuration should be pretty straightforward:
ffmpeg-mjpg-stream.dist
as .env
.env
replace token-change-me
with the value of the token you copied.env
replace fingerprint-change-me
with some random value, which is alphanumeric and has at least 16 chars (and max of 40 chars), for example set it to fingerprint-myprinter4-camera-4
.env
replace your RTSP device address raspberry-pi
, port and stream id in CAMERA_COMMAND_EXTRA_PARAMS
if needed.env
Next, test config.
"},{"location":"config.for.camera.mjpg/#unverified-example","title":"Unverified example","text":"Beagle Camera stream - if I remember correctly, then camera url to the stream is something like http://192.168.2.92/ipcam/mjpeg.cgi
Replace 192.168.2.92
with your address in the example below.
PRINTER_ADDRESS=127.0.0.1\nPRUSA_CONNECT_CAMERA_TOKEN=token-change-me\nPRUSA_CONNECT_CAMERA_FINGERPRINT=fingerprint-change-me\nCAMERA_DEVICE=/dev/null\nCAMERA_COMMAND=ffmpeg\nCAMERA_COMMAND_EXTRA_PARAMS=\"-y -i 'http://192.168.2.92/ipcam/mjpeg.cgi' -vframes 1 -q:v 1 -f image2 -update 1 \"\n
But it is better to use a snapshot instead of stream if available, see here.
"},{"location":"config.for.camera.rtsp/","title":"Web Cam - RTSP stream","text":""},{"location":"config.for.camera.rtsp/#caution","title":"Caution","text":"DO NOT use VLC to test streams, there are unfortunately problems with it. Please use ffplay
from ffmpeg
package.
You have some options such as TCP or UDP stream (whatever..). This should work with any other camera (usually there is a different port per stream)
You should be able to test the stream locally with ffplay
command.
For example, if your camera is reachable over address 192.168.0.20
and port 8000
under endpoint /stream
then below command should show the stream:
ffplay rtsp://192.168.0.20:8000/stream\n
If that works, then configuration should be pretty straightforward:
ffmpeg-mediamtx-rtsp-tcp.dist
as .env
.env
replace token-change-me
with the value of the token you copied.env
replace fingerprint-change-me
with some random value, which is alphanumeric and has at least 16 chars (and max of 40 chars), for example set it to fingerprint-myprinter4-camera-4
.env
replace your RTSP device address raspberry-pi
, port and stream id in CAMERA_COMMAND_EXTRA_PARAMS
if needed.env
You can try with UDP
, but you may not get it ;-)
Next, test config.
"},{"location":"config.for.camera.rtsp/#real-world-example","title":"Real world example","text":"My another Rpi Zero W named hormex
has two cameras:
and I'm running mediamtx
server to conver those to RTSP streams. More about mediamtx is here.
So I can have two configs:
.stream-csi
over UDP:
PRINTER_ADDRESS=127.0.0.1\nPRUSA_CONNECT_CAMERA_TOKEN=redacted\nPRUSA_CONNECT_CAMERA_FINGERPRINT=62e8ab72-9766-4ad5-b8b1-174d389fc0d3\nCAMERA_DEVICE=/dev/null\nCAMERA_COMMAND=ffmpeg\nCAMERA_COMMAND_EXTRA_PARAMS=\"-loglevel error -y -rtsp_transport udp -i \"rtsp://hormex:8554/cam\" -f image2 -vframes 1 -pix_fmt yuvj420p \"\n
.stream-endo
over TCP:
PRINTER_ADDRESS=127.0.0.1\nPRUSA_CONNECT_CAMERA_TOKEN=redacted\nPRUSA_CONNECT_CAMERA_FINGERPRINT=01a67af8-86a3-45c7-b6e2-39e9d086b367\nCAMERA_DEVICE=/dev/null\nCAMERA_COMMAND=ffmpeg\nCAMERA_COMMAND_EXTRA_PARAMS=\"-loglevel error -y -rtsp_transport tcp -i \"rtsp://hormex:8554/endoscope\" -f image2 -vframes 1 -pix_fmt yuvj420p \"\n
"},{"location":"config.for.camera.snapshot/","title":"Web Cam - snapshot","text":"Some cameras expose single image snapshot under specific URL. we can use the ultimate power of curl
command to fetch the image from the camera.
This is the preferred way to use web cams because right now Prusa Connect do not support streams, and thus there is no point in wasting CPU on that.
The best option to check what is the URL is in the camera manual, or if you open web UI of the camera and see the still image then right click on the image and select Inspect to see the URL for the image - copy that URL.
You should be able to test the stream locally with ffplay
command.
For example, if your camera is reachable over address 192.168.0.20
and port 8001
under endpoint /snap.jpg
then below command should show the image:
curl -vvv http://another-cam.local:8081/snap.jpg -o snap.jpg\n
then you should see in the output something like Content-Type: image/jpeg
, then you are good - see snap.jpg
in the folder you executed the command.
snapshot.dist
as .env
.env
replace token-change-me
with the value of the token you copied.env
replace fingerprint-change-me
with some random value, which is alphanumeric and has at least 16 chars (and max of 40 chars), for example set it to fingerprint-myprinter3-camera-3
.env
replace your esphome device address and port in CAMERA_COMMAND_EXTRA_PARAMS
.env
Next, test config.
"},{"location":"config.for.camera.snapshot/#real-world-example","title":"Real world example","text":""},{"location":"config.for.camera.snapshot/#esp32-with-esphome","title":"esp32 with esphome","text":"For more in-depth details see esphome snapshot.
I have esp32-wrover-dev board with camera + esphome + web ui for camera exposing snapshot frame on port 8081
.
We can use curl to fetch it.
PRINTER_ADDRESS=127.0.0.1\nPRUSA_CONNECT_CAMERA_TOKEN=redacted\nPRUSA_CONNECT_CAMERA_FINGERPRINT=06f47777-f179-4025-bd80-9e4cb8db2aed\nCAMERA_DEVICE=/dev/null\nCAMERA_COMMAND=curl\nCAMERA_COMMAND_EXTRA_PARAMS=http://esp32-wrover-0461c8.local:8081/ -o\n
"},{"location":"config.for.camera.snapshot/#beagle-camera","title":"Beagle Camera","text":"This is not tested, I do not own such camera so hard to tell if this is right.
Camera URL for snapshot http://192.168.2.92/images/snapshot0.jpg
so the config should be like below:
PRINTER_ADDRESS=127.0.0.1\nPRUSA_CONNECT_CAMERA_TOKEN=redacted\nPRUSA_CONNECT_CAMERA_FINGERPRINT=06f47777-f179-4025-bd80-9e4cb8db2aed\nCAMERA_DEVICE=/dev/null\nCAMERA_COMMAND=curl\nCAMERA_COMMAND_EXTRA_PARAMS=http://192.168.2.92/images/snapshot0.jpg -o\n
"},{"location":"config.for.camera.usb/","title":"USB camera","text":"This should work on any linux distro with any sane camera that you have.
"},{"location":"config.for.camera.usb/#how-to-get-info-which-cameras-are-available","title":"How to get info which cameras are available?","text":"Run v4l2-ctl --list-devices
.
This should show list of devices to use, where /dev/video0
is a device name.
Notice that not every device is an actual camera.
"},{"location":"config.for.camera.usb/#how-to-get-what-modes-are-available-for-the-camera","title":"How to get what modes are available for the camera?","text":"The quick all-in one output for camera /dev/video0
is
v4l2-ctl -d /dev/video0 --all\n
For more details about formats it is better to use v4l2-ctl --list-formats-ext -d /dev/video0
usb.dist
as .env
.env
replace token-change-me
with the value of the token you copied.env
replace fingerprint-change-me
with some random value, which is alphanumeric and has at least 16 chars (and max of 40 chars), for example set it to fingerprint-myprinter2-camera-2
.env
replace /dev/video0
with desired device in CAMERA_DEVICE
.env
Next, test config.
"},{"location":"config.for.camera.usb/#real-world-example","title":"Real world example","text":"Raspberry Pi Zero W with endoscope camera over USB, registered as /dev/video1
:
PRINTER_ADDRESS=192.168.1.25\nPRUSA_CONNECT_CAMERA_TOKEN=redacted\nPRUSA_CONNECT_CAMERA_FINGERPRINT=7054ba85-bc19-4eb9-badc-6129575d9651\nCAMERA_DEVICE=/dev/video1\nCAMERA_COMMAND=fswebcam\nCAMERA_COMMAND_EXTRA_PARAMS=\"--resolution 1280x960 --no-banner\"\n
"},{"location":"configuration.env.full/","title":"Configuration Env Vars","text":"Config for camera is to the script as environment variables (env vars).
SLEEP
- sleep time in seconds between image captures, notice that PrusaConnect accepts images at most every 10s or slower. Default value 10
.
PRINTER_ADDRESS
- Printer address to ping, if address is unreachable there is no point in sending an image. Set to 127.0.0.1
to always send images. Set to empty value to disable ping check and always send images. Default value 127.0.0.1
PRUSA_CONNECT_CAMERA_TOKEN
- required, PrusaConnect API key
PRUSA_CONNECT_CAMERA_FINGERPRINT
- required, PrusaConnect camera fingerprint, use for example cli uuidgen
or web to generate it, it must be at least 16 alphanumeric chars, 40 max. Remember not to change this if it was already set, otherwise you need to remove and add the camera again.
CAMERA_DEVICE
- camera device to use, if you use Raspberry Pi camera attached to the CSI via camera ribbon then leave as is Default /dev/video0
which points to first detected camera.
CAMERA_SETUP_COMMAND
- camera setup command and params executed before taking image, default value is empty, because some cameras do not support it, in general you want to use something like v4l2-ctl parameters, so so for example setup_command=v4l2-ctl --set-ctrl brightness=10,gamma=120 -d $CAMERA_DEVICE
will translate to: v4l2-ctl --set-ctrl brightness=10,gamma=120 -d /dev/video0
CAMERA_COMMAND
- command used to invoke image capture, default is rpicam-still
available options:
anything else will be processed directly, so for example you could use 'ffmpeg' in here
CAMERA_COMMAND_EXTRA_PARAMS
-extra params passed to the camera program, passed directly as <command> <extra-params> <output_file>
example values per specific camera:
libcamera (rpicam-still) --immediate --nopreview --mode 2592:1944:12:P --lores-width 0 --lores-height 0 --thumb none -o
--nopreview --mode 2592:1944:12:P -o
--resolution 1280x960 --no-banner
-f v4l2 -y -i /dev/video0 -f image2 -vframes 1 -pix_fmt yuvj420p
TARGET_DIR
- directory where to save camera images, image per camera will be overwritten per image capture, default value /dev/shm
so that we do not write to microSD cards or read only filesystems/containers. /dev/shm
is a shared memory space. if you have more printers you may need to increase this value on system level.
CURL_EXTRA_PARAMS
- extra params to curl when pushing an image, default empty value, but you could for example add additional params if needed such as -k
if using tls proxy with self-signed certificate
PRUSA_CONNECT_URL
- Prusa Connect endpoint where to post images, default value https://webcam.connect.prusa3d.com/c/snapshot
You could put here Prusa Connect Proxy if you use one.
For more in-depth details (no need to repeat them here) please see the top of the prusa-connect-camera.sh.
"},{"location":"configuration.env/","title":"Configuration Env Vars","text":""},{"location":"configuration.env/#minimum-required-env-vars","title":"Minimum required env vars","text":"Config for camera is to the script as environment variables (env vars).
The most important env vars are:
PRUSA_CONNECT_CAMERA_TOKEN
PRUSA_CONNECT_CAMERA_FINGERPRINT
CAMERA_COMMAND
CAMERA_COMMAND_EXTRA_PARAMS
Those env vars will be filled in in the next steps.
Full list of env vars can be seen here
"},{"location":"configuration.overview/","title":"Configuration Overview","text":"Short overview of actions:
Assuming you already have a working camera with basic setup, we can tune it further.
Below steps depend on the camera capabilities, thus your mileage may vary.
Notice that Prusa Connect has file size limit something about 8MB of the image uploaded, so there may be no point in getting images with super high resolutions.
"},{"location":"configuration.tuning/#getting-higher-quality-camera-images","title":"Getting higher quality camera images","text":"Use v4l2-ctl
to get the list of available resolutions that camera provides and then update it in the env var configs.
Run v4l2-ctl --list-formats-ext -d /dev/video0
where /dev/video0
is a device listed from command above.
Example output:
v4l2-ctl --list-formats-ext -d /dev/video1\nioctl: VIDIOC_ENUM_FMT\n Type: Video Capture\n\n [0]: 'MJPG' (Motion-JPEG, compressed)\n Size: Discrete 640x480\n Interval: Discrete 0.033s (30.000 fps)\n Interval: Discrete 0.033s (30.000 fps)\n Size: Discrete 640x360\n Interval: Discrete 0.033s (30.000 fps)\n Size: Discrete 352x288\n Interval: Discrete 0.033s (30.000 fps)\n Size: Discrete 320x240\n Interval: Discrete 0.033s (30.000 fps)\n Size: Discrete 176x144\n Interval: Discrete 0.033s (30.000 fps)\n Size: Discrete 160x120\n Interval: Discrete 0.033s (30.000 fps)\n Size: Discrete 800x600\n Interval: Discrete 0.033s (30.000 fps)\n Size: Discrete 1280x720\n Interval: Discrete 0.033s (30.000 fps)\n Size: Discrete 1280x960\n Interval: Discrete 0.033s (30.000 fps)\n Size: Discrete 640x480\n Interval: Discrete 0.033s (30.000 fps)\n Interval: Discrete 0.033s (30.000 fps)\n [1]: 'YUYV' (YUYV 4:2:2)\n Size: Discrete 640x480\n Interval: Discrete 0.033s (30.000 fps)\n Interval: Discrete 0.033s (30.000 fps)\n Size: Discrete 640x360\n Interval: Discrete 0.033s (30.000 fps)\n Size: Discrete 352x288\n Interval: Discrete 0.033s (30.000 fps)\n Size: Discrete 320x240\n Interval: Discrete 0.033s (30.000 fps)\n Size: Discrete 176x144\n Interval: Discrete 0.033s (30.000 fps)\n Size: Discrete 160x120\n Interval: Discrete 0.033s (30.000 fps)\n Size: Discrete 800x600\n Interval: Discrete 0.200s (5.000 fps)\n Size: Discrete 1280x720\n Interval: Discrete 0.200s (5.000 fps)\n Size: Discrete 1280x960\n Interval: Discrete 0.200s (5.000 fps)\n Size: Discrete 640x480\n Interval: Discrete 0.033s (30.000 fps)\n Interval: Discrete 0.033s (30.000 fps)\n
As you can see if I set video to YUYV and with resolution higher than 800x600 I would get only 5 frames per second. For still images this is not a problem, but for video streaming that could be too low and I would have to switch to MJPG (or actually mjpeg in ffmpeg)
For Raspberry Cam v2 you could use csi.dist
as source and add --mode 2592:1944:12:P
to the CAMERA_COMMAND_EXTRA_PARAMS
.
For certain USB cameras (such as Tracer Endoscope) you should use usb.dist
and you should be able to add --resolution 1280x960
to the CAMERA_COMMAND_EXTRA_PARAMS
.
Video controls are things like brightness, auto white balance (awb), exposure and so on.
Get device capabilities, especially User controls
:
v4l2-ctl -d /dev/video0 -l\n
and set accordingly parameters you want in CAMERA_SETUP_COMMAND
env var, for example:
CAMERA_SETUP_COMMAND=\"v4l2-ctl --set-ctrl brightness=64,gamma=300 -d $CAMERA_DEVICE\"\n
remember to restart given camera service.
You can try to use guvcview
desktop application to check prams in realtime.
You can pass on params to rpicam-still or fswebcam as you want.
"},{"location":"configuration.tuning/#rpicam-still","title":"rpicam-still","text":"See rpicam-still --help
--hflip Read out with horizontal mirror\n --vflip Read out with vertical flip\n --rotation Use hflip and vflip to create the given rotation <angle>\n
so for example:
CAMERA_COMMAND=rpicam-still\nCAMERA_COMMAND_EXTRA_PARAMS=\"--rotation 90 --immediate --nopreview --thumb none -o\"\n
"},{"location":"configuration.tuning/#fswebcam","title":"fswebcam","text":"See fswebcam --help
--flip <direction> Flips the image. (h, v)\n --crop <size>[,<offset>] Crop a part of the image.\n --scale <size> Scales the image.\n --rotate <angle> Rotates the image in right angles.\n
so for example:
CAMERA_COMMAND=fswebcam\nCAMERA_COMMAND_EXTRA_PARAMS=\"--flip v --resolution 640x480 --no-banner\"\n
"},{"location":"configuration.tuning/#ffmpeg","title":"ffmpeg","text":"When curl is not enough and you don't really want to physically rotate your camera, then use ffmpeg for post processing. You can process static images with it, load v4l2 devices... whatever.
With ffmpeg you can do interesting things with filters, it will just require more computing power.
"},{"location":"configuration.tuning/#adding-v4l2-options","title":"Adding v4l2 options","text":"v4l2
can be used as alias for video4linux2
.
You can pass video4linux options to ffmpeg on device initialization, for example:
ffmpeg -f v4l2 -pix_fmt mjpeg -video_size 1280x960 -framerate 30 -i /dev/video1 \\\n -c:v libx264 -preset ultrafast -b:v 6000k -f rtsp rtsp://localhost:$RTSP_PORT/$MTX_PATH\n
would instruct ffmpeg to use video4linux and force it to talk to the camera under /dev/video1 and forcing mjpeg encoder, resolution and framerate.
This command above is directly taken from mediamtx.
For more params, see official ffmpeg docs. Just remember to pass them before defining input (-i /dev/video1
).
See here for basic ones.
You probably want to use -vf \"transpose=1\"
to rotate image 90 degrees clockwise:
CAMERA_COMMAND=ffmpeg\nCAMERA_COMMAND_EXTRA_PARAMS=\"-y -i 'http://esp32-wrover-0461c8.local:8080/' -vf 'transpose=1' -vframes 1 -q:v 1 -f image2 -update 1 \"\n
"},{"location":"configuration.tuning/#other-processing","title":"Other processing","text":"Frankly speaking you can do anything you want with ffmpeg, for example
-vf transpose=1,shufflepixels=m=block:height=16:width=16
Why? why not :D
"},{"location":"installation/","title":"Installation","text":"Install system packages - assuming Debian based distros on Raspberry Pi OS, which also come in with some pre-installed packages.
Below commands should be executed in shell/terminal (on the Raspberry Pi).
For most Raspberry Pi Cameras (CSI/USB):
sudo apt-get update\nsudo apt-get install -y curl libcamera0 fswebcam git iputils-ping v4l-utils uuid-runtime\n
Additional packages for remote cameras - especially the one that are used for streaming:
sudo apt-get install -y ffmpeg\n
Download this script:
mkdir -p /home/pi/src\ncd /home/pi/src\ngit clone https://github.com/nvtkaszpir/prusa-connect-camera-script.git\ncd prusa-connect-camera-script\n
"},{"location":"performance/","title":"Performance","text":"Raspberry Pi Zero W is able to process CSI camera (Rpi Cam v2) and USB 2k camera but it has load average about 1.4, and CPU is quite well utilized, so you may need to decrease resolution per camera to see how it goes.
for webcams it is always better to choose snapshot because it requires less computing both on camera and on the host, otherwise we need to use ffmpeg
ffmpeg is usually noticeably slow and cpu intensive, especially if you do more complex operations
Printer
Camera
Add new other camera
Token
, this is needed later as PRUSA_CONNECT_CAMERA_TOKEN
env varPhysical host or virtual machine or container:
Camera such as:
esp32_camera_web_server
with snapshot
moduleesp32_camera_web_server
with stream
module using ffmpeg
ffmpeg
Linux operating system. Debian based preferred, for example Raspberry Pi OS Lite if you run Raspberry Pi. I use also laptop with Ubuntu 22.04, but I believe with minor tweaks it should work on most distributions (mainly package names are different).
Below list uses Debian package names.
"},{"location":"requirements/#generic-system-packages","title":"Generic system packages","text":"bash
5.x (what year is it?)git
(just to install scripts from this repo)curl
iputils-ping
uuid-runtime
to make generation of camera fingerprint easierv4l-utils
- to detect camera capabilitieslibcamera0
- for Rpi CSI cameraslibraspberrypi-bin
or rpicam-apps-lite
for Rpi CSI cameras (should be already installed on Rpi OS)fswebcam
- for generic USB camerasffmpeg
- for custom commands for capturing remote streamsYou can run the app as container.
Multi-platform images are available at quay.io/kaszpir/prusa-connect-script.
Currently available platforms:
Install docker on Debian.
Optional - you may want to make sure current user is in docker group so it is possible to run containers without using sudo
:
sudo usermod -a -G docker $(whoami)\n
logout and login again, or reboot Raspberry Pi.
"},{"location":"service.docker/#preparation-of-env-files-for-docker-command","title":"Preparation of env files for docker command","text":"Notice - you may not have to do it if you use docker-compose (I think...).
If you use docker
command directly you need to edit env files and remove quotation marks from the files (this is a limitation of the Docker)
For example:
CAMERA_COMMAND_EXTRA_PARAMS=\"--immediate --nopreview --thumb none -o\"\n
becomes
CAMERA_COMMAND_EXTRA_PARAMS=--immediate --nopreview --thumb none -o\n
"},{"location":"service.docker/#raspberry-pi-csi-or-usb-camera","title":"Raspberry Pi CSI or USB camera","text":"We assume that .csi
is an env file with the example variables already edited, it is possible to run the command below and have snapshots sent to Prusa Connect.
docker run --env-file .csi -v /run/udev:/run/udev:ro -v /dev/:/dev/ --device /dev:/dev --read-only quay.io/kaszpir/prusa-connect-script:03c4886\n
"},{"location":"service.docker/#raspberry-pi-and-remote-cams","title":"Raspberry Pi and remote cams","text":"If you use remote camera you can make command even shorter:
docker run --env-file .esp32 --read-only quay.io/kaszpir/prusa-connect-script:03c4886\n
"},{"location":"service.docker/#other-examples","title":"Other examples","text":"docker run --env-file .docker-csi --device /dev:/dev -v /dev/:/dev/ -v /run/udev:/run/udev:ro -it quay.io/kaszpir/prusa-connect-script:03c4886-arm64\n\ndocker run --env-file .docker-esphome-snapshot --read-only quay.io/kaszpir/prusa-connect-script:03c4886-amd64\ndocker run --env-file .docker-video0 --device /dev:/dev -v /dev/:/dev/ -v /run/udev:/run/udev:ro -it quay.io/kaszpir/prusa-connect-script:03c4886\n
"},{"location":"service.docker/#running-multiple-cameras-at-once","title":"Running multiple cameras at once","text":"Create env file per camera and run each container separately.
"},{"location":"service.docker/#docker-compose","title":"docker-compose","text":"Instead of running single command per container, you can manage them using docker-compose. Example docker-compose.yaml
contains some examples. Some sections are commented out, though.
Notice they still require proper env files to work, for example copy usb.dist as .usb, edit its parameters and run docker-compose up
Notice that you may need to change remote cameras addresses from hostnames to IP addresses.
Also note that sharing /dev/
or /dev/shm
across different containers with different architectures may be problematic.
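As a minimal sketch (the service name and env file are illustrative, the image tag and mounts are taken from the docker run examples above), one entry in docker-compose.yaml could look roughly like this:
services:\n  usb-cam:\n    image: quay.io/kaszpir/prusa-connect-script:03c4886\n    env_file: .usb\n    devices:\n      - /dev:/dev\n    volumes:\n      - /dev/:/dev/\n      - /run/udev:/run/udev:ro\n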
Depending on the distro there are various options to configure the scripts as a service.
Other - not implemented, do it on your own.
"},{"location":"service.systemd/","title":"Install script as systemd service","text":"Depending on the distro there are various options to configure scripts as service. On newer distros Raspberry Pi runs systemd, we will use that.
cd /home/pi/src/prusa-connect-camera-script\nsudo cp -f prusa-connect-camera@.service /etc/systemd/system/prusa-connect-camera@.service\nsudo systemctl daemon-reload\n
"},{"location":"service.systemd/#configuring-single-camera","title":"Configuring single camera","text":"Assuming that /home/pi/src/prusa-connect-camera-script/.env
file was created in previous steps, we use that .env
file as an example camera config.
Notice there is no dot before env
in the commands below!
sudo systemctl enable prusa-connect-camera@env.service\nsudo systemctl start prusa-connect-camera@env.service\nsudo systemctl status prusa-connect-camera@env.service\n
The above commands will enable the given service on device restart (reboot), start the service, and show its current status.
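To follow the logs of that unit you can additionally use journalctl (standard systemd tooling, see also the troubleshooting page):
sudo journalctl -f -u prusa-connect-camera@env.service\n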
"},{"location":"service.systemd/#configure-multiple-cameras","title":"Configure multiple cameras","text":"This project allows spawning multiple systemd units. The suffix after @
defines which env file to load from the given path. For example, if you set the unit file name to prusa-connect-camera@csi.service
then systemd will load the env vars from the file under the path /home/pi/src/prusa-connect-camera-script/.csi
So in short: copy csi.dist as .csi and edit it, then use prusa-connect-camera@.service as prusa-connect-camera@csi.service
cd /home/pi/src/prusa-connect-camera-script/\ncp csi.dist .csi\n# edit .csi and set custom command params, token and fingerprint etc...\nsudo systemctl enable prusa-connect-camera@csi.service\nsudo systemctl start prusa-connect-camera@csi.service\nsudo systemctl status prusa-connect-camera@csi.service\n
For another camera, let's say one attached over USB:
cd /home/pi/src/prusa-connect-camera-script/\ncp usb.dist .usb1\n# edit .usb1 and set device, token and fingerprint etc...\nsudo systemctl enable prusa-connect-camera@usb1.service\nsudo systemctl start prusa-connect-camera@usb1.service\nsudo systemctl status prusa-connect-camera@usb1.service\n
For an esphome camera, using static images:
cd /home/pi/src/prusa-connect-camera-script/\ncp esphome-snapshot.dist .esphome1\n# edit .esphome1 and set device, token and fingerprint etc...\nsudo systemctl enable prusa-connect-camera@esphome1.service\nsudo systemctl start prusa-connect-camera@esphome1.service\nsudo systemctl status prusa-connect-camera@esphome1.service\n
I hope you get the idea...
"},{"location":"service.systemd/#uninstall-systemd-service","title":"Uninstall systemd service","text":"Just run two commands per camera (where csi
is a camera config):
sudo systemctl stop prusa-connect-camera@csi.service\nsudo systemctl disable prusa-connect-camera@csi.service\n
After removing all cameras, remove the systemd service definition and reload the daemon:
sudo rm -f /etc/systemd/system/prusa-connect-camera@.service\nsudo systemctl daemon-reload\n
"},{"location":"stream.mediamtx/","title":"mediamtx","text":"Use mediamtx on another Raspberry Pi to create RTSP camera stream for test.
Assuming you run mediamtx with a Raspberry Pi CSI camera, that raspberry-pi
is the hostname of your device and that you expose two cams:
so your mediamtx.yml
has a config fragment such as:
paths:\n cam:\n source: rpiCamera\n\n endoscope:\n runOnInit: ffmpeg -f v4l2 -pix_fmt mjpeg -video_size 1280x960 -framerate 30 -i /dev/video1 -c:v libx264 -preset ultrafast -b:v 6000k -f rtsp rtsp://localhost:$RTSP_PORT/$MTX_PATH\n runOnInitRestart: yes\n
Start mediamtx server:
./mediamtx\n
This should allow us to reach two streams; replace rpi-address
with your Raspberry Pi hostname or IP address. The ports are the mediamtx defaults.
ffplay rtsp://rpi-address:8554/cam\nffplay rtsp://rpi-address:8554/endoscope\n
Or you could watch them in a web browser under endpoints such as:
http://rpi-address:8889/cam\nhttp://rpi-address:8889/endoscope\n
"},{"location":"stream.mediamtx/#example-with-single-camera-over-usb","title":"Example with single camera over USB","text":"Raspberry Pi Zero 2 + Logitech C920, thanks to user [&] undso.io for working example.
Allows to have a camera live stream and prusa camera script to use that stream as source of the images to send to PrusaConnect.
mediamtx config fragment
paths:\n cam:\n runOnInit: ffmpeg -f v4l2 -i /dev/video0 -pix_fmt yuv420p -video_size 1920x1080 -framerate 30 -preset ultrafast -c:v libx264 -b:v 6000k -f rtsp rtsp://localhost:$RTSP_PORT/$MTX_PATH\n runOnInitRestart: yes\n
env file for the prusa connect script; remember to replace [rpizero-ip]
with the device address (or try 127.0.0.1
or 0.0.0.0
if the script runs on the same host as mediamtx)
PRINTER_ADDRESS=...\nPRUSA_CONNECT_CAMERA_TOKEN=...\nPRUSA_CONNECT_CAMERA_FINGERPRINT=...\nCAMERA_DEVICE=/dev/null\nCAMERA_COMMAND=ffmpeg\nCAMERA_COMMAND_EXTRA_PARAMS=\"-loglevel error -y -rtsp_transport udp -i 'rtsp://[rpizero-ip]:8554/cam' -f image2 -vframes 1 -pix_fmt yuv420p \"\n
"},{"location":"test.config/","title":"Test the config","text":".env
is the camera config we defined earlier.
set -o allexport; source .env; set +o allexport\n./prusa-connect-camera.sh\n
The above commands will load the env vars and start the script. At the beginning the script prints some of the commands that will be executed, for example the command to fetch the image from the camera; an example log line:
Camera capture command: fswebcam -d /dev/video0 --resolution 640x480 --no-banner /dev/shm/camera_87299de9-ea57-45be-b6ea-4d388a52c954.jpg\n
so you should run:
fswebcam -d /dev/video0 --resolution 640x480 --no-banner /dev/shm/camera_87299de9-ea57-45be-b6ea-4d388a52c954.jpg\n
and check the output from the command; it should also write an image.
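To confirm that the image file was really written you can list the files in /dev/shm (the path pattern comes from the log line above):
ls -la /dev/shm/camera_*\n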
Check for errors, if any; if everything is OK you should see a lot of 204
responses, one every 10s.
If not, see troubleshooting, copy the logs and raise an issue on GitHub.
"},{"location":"troubleshooting/","title":"Troubleshooting","text":"Things to check if it does not work.
"},{"location":"troubleshooting/#general","title":"General","text":"check /dev/shm/camera_*.stdout
and /dev/shm/camera_*.stderr
files for more details - if they still say that 'everything is okay' then you probably have permission issues from running the script a second time (see below)
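A quick way to dump both files at once:
cat /dev/shm/camera_*.stdout /dev/shm/camera_*.stderr\n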
if you use the feature to ping the printer then ensure the printer is up and running and responds to ping, or just disable the feature (set PRINTER_ADDRESS=\"\"
or to PRINTER_ADDRESS=127.0.0.1
)
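For example, to verify the printer responds to ping (assuming the env file is already sourced as shown on the test page, so PRINTER_ADDRESS is set):
ping -c 3 \"$PRINTER_ADDRESS\"\n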
check if the camera actually works - check that the cables are not damaged and are properly plugged in, and that the camera connects to the network...
check if the camera supports the passed parameters such as resolution and codec, especially after replacing the camera - see tuning for how to use v4l2-ctl
to see available camera options.
check that no other app is accessing the camera - local cameras in particular get locked by other processes.
Unfortunately only one app can access the camera at a time, so if another application is using it you must decide which app to run.
This means that if you have something like Klipper/Obico/PrusaLink/motioneye/frigate (and so on) accessing the device directly attached to the Raspberry Pi then it will not work.
In such a case you can try to find the process using the fuser
command (from the psmisc package), assuming /dev/video0
is your camera:
sudo apt install -y psmisc\nfuser /dev/video0\n
See StackOverflow for more details.
In general you could create a loopback camera device, but that is quite a lot of work.
check the IP/domain names for the remote camera - verify that you can access the camera over its IP address; otherwise you have DNS issues.
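For example, to test an esp32 snapshot endpoint directly by IP (the address below is only a placeholder for your camera's IP):
curl -fsS -o /dev/null http://192.168.1.60:8080/ && echo camera reachable\n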
file permissions - check files under /dev/shm/camera*
and /dev/video0
ls -la /dev/shm/camera* /dev/video*\n
and compare them with the current user executing the script, or the user running docker (see below) or the systemd service (see section below).
The quickest fix is just to delete the files under /dev/shm/camera_*
to resolve the specific permission issues:
sudo systemctl stop prusa-connect-camera@env.service\nsudo rm -f /dev/shm/camera_*\nsudo systemctl start prusa-connect-camera@env.service\n
and see if the issue is resolved.
If you still have issues due to accessing /dev/video*
then ensure the user is added to video
group.
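Adding the current user to the video group follows the same pattern as the docker group earlier:
sudo usermod -a -G video $(whoami)\n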
dockerized script - ensure you restart the Pi after adding docker, and check the user permissions for the mounted files and devices (unfortunately this can get very messy with direct access to devices and files on the host)
check the IP/domain names for the remote camera - ensure that you can access the camera over its IP address (or a fully qualified domain name), because .local
or .lan
domains are not resolved. Another option is to reconfigure docker to use proper local DNS servers and not generic 8.8.8.8
.
You can also try to run the container with --add-host or extra_hosts in docker-compose.
Another option is to run the container with --network=\"host\" or network_mode: \"host\" in docker-compose.
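For example, a variant of the earlier remote-camera command using host networking (image tag from the examples above):
docker run --network=\"host\" --env-file .esp32 --read-only quay.io/kaszpir/prusa-connect-script:03c4886\n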
"},{"location":"troubleshooting/#systemd-troubleshooting","title":"Systemd troubleshooting","text":""},{"location":"troubleshooting/#get-systemd-logs","title":"Get systemd logs","text":"If the script runs locally but service is not running then you can get the logs like below, ensure to replace env
with the name your camera is using:
sudo systemctl stop prusa-connect-camera@env.service\n
sudo journalctl -f -u prusa-connect-camera\n
and keep it open
sudo systemctl start prusa-connect-camera@env.service\nsleep 10\nsudo systemctl stop prusa-connect-camera@env.service\n
go back to the terminal with journalctl running, read the logs and look carefully at the errors described there
copy the output from one starting
to the next starting
command and paste it on GitHub
Open the service file with the nano
editor:
sudo nano /etc/systemd/system/prusa-connect-camera@.service\n
and replace User=pi
and Group=pi
with the current user and group, then reload systemd and start the service again:
sudo systemctl daemon-reload\nsudo systemctl start prusa-connect-camera@env.service\n
This way it will use your user account to access the camera device and write the files.
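For reference, after the edit the relevant fragment of the unit file would look roughly like this (myuser is only a placeholder for your actual account):
[Service]\nUser=myuser\nGroup=myuser\n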
"}]} \ No newline at end of file diff --git a/service.docker/index.html b/service.docker/index.html new file mode 100644 index 0000000..585adc5 --- /dev/null +++ b/service.docker/index.html @@ -0,0 +1,1137 @@ + + + + + + + + + + + + + + + + + + + + + + + +You can run the app as container.
+Multi-platform images are available at quay.io/kaszpir/prusa-connect-script.
+Currently available platforms:
+Install docker on Debian.
+Optional - you may want to make sure current user is in docker group so it is possible
+to run containers without using sudo
:
logout and login again, or reboot Raspberry Pi.
+Notice - you may not have to do it if you use docker-compose (I think...).
+If you use docker
command directly you need to edit env files
+and remove quotation marks from the files (this is a limitation of the Docker)
For example:
+ +becomes
+ +We assume that .csi
is a env file with example variables after edit, it is
+possible to run below command and have screenshots sent to the Prusa Connect.
docker run --env-file .csi -v /run/udev:/run/udev:ro -v /dev/:/dev/ --device /dev:/dev --read-only quay.io/kaszpir/prusa-connect-script:03c4886
+
If you use remote camera you can make command even shorter:
+ +docker run --env-file .docker-csi --device /dev:/dev -v /dev/:/dev/ -v /run/udev:/run/udev:ro -it quay.io/kaszpir/prusa-connect-script:03c4886-arm64
+
+docker run --env-file .docker-esphome-snapshot --read-only quay.io/kaszpir/prusa-connect-script:03c4886-amd64
+docker run --env-file .docker-video0 --device /dev:/dev -v /dev/:/dev/ -v /run/udev:/run/udev:ro -it quay.io/kaszpir/prusa-connect-script:03c4886
+
Create env file per camera and run each container separately.
+Instead of running single command per container, you can manage them using
+docker-compose. Example docker-compose.yaml
contains some examples.
+Some sections are commented out, though.
Notice they still require proper env files to work, for example
+copy usb.dist as .usb, edit its parameters and run docker-compose up
Notice that you may need to change remote cameras addresses from hostnames +to IP addresses.
+Another notice that sharing /dev/
or /dev/shm
across different containers
+with different architectures may be problematic.
Depending on the distro there are various options to configure scripts as service. +On newer distros Raspberry Pi runs systemd, we will use that.
+cd /home/pi/src/prusa-connect-camera-script
+sudo cp -f prusa-connect-camera@.service /etc/systemd/system/prusa-connect-camera@.service
+sudo systemctl daemon-reload
+
Assuming that /home/pi/src/prusa-connect-camera-script/.env
file was created in
+previous steps, we use that .env
file as example camera config.
Notice there is no dot before env
in the commands below!
sudo systemctl enable prusa-connect-camera@env.service
+sudo systemctl start prusa-connect-camera@env.service
+sudo systemctl status prusa-connect-camera@env.service
+
Above commands will enable given service on device restart (reboot), +start the service and show current status.
+This project allows spawning multiple systemd units.
+The suffix after @
defines what env file to load from given path.
+For example if you set unit file name to prusa-connect-camera@csi.service
+then systemd will load env vars from the file under path
+/home/pi/src/prusa-connect-camera-script/.csi
So in short:
+csi.dist
as .csi
and edit itprusa-connect-camera@.service
as prusa-connect-camera@csi.service
cd /home/pi/src/prusa-connect-camera-script/
+cp csi.dist .csi
+# edit .csi and set custom command params, token and fingerprint etc...
+sudo systemctl enable prusa-connect-camera@csi.service
+sudo systemctl start prusa-connect-camera@csi.service
+sudo systemctl status prusa-connect-camera@csi.service
+
For another camera, let say for another camera attached over USB
+cd /home/pi/src/prusa-connect-camera-script/
+cp usb.dist .usb1
+# edit .usb1 and set device, token and fingerprint etc...
+sudo systemctl enable prusa-connect-camera@usb1.service
+sudo systemctl start prusa-connect-camera@usb1.service
+sudo systemctl status prusa-connect-camera@usb1.service
+
For esphome camera, for static images:
+cd /home/pi/src/prusa-connect-camera-script/
+cp esphome-snapshot.dist .esphome1
+# edit .esphome1 and set device, token and fingerprint etc...
+sudo systemctl enable prusa-connect-camera@esphome1.service
+sudo systemctl start prusa-connect-camera@esphome1.service
+sudo systemctl status prusa-connect-camera@esphome1.service
+
I hope you get the idea...
+Just run two commands per camera (where csi
is a camera config):
sudo systemctl stop prusa-connect-camera@csi.service
+sudo systemctl disable prusa-connect-camera@csi.service
+
After removing all cameras remove systemd service definition and reload daemon:
+ + +Depending on the distro there are various options to configure scripts as service.
+ +Other - not implemented, do it on your own.
+ +