GStreamer libcamerasrc sensor-mode support #222
Indeed this can be quite troublesome. Pinging @kbingham to see if this patch can be progressed.
Taking a look!
So it looks like it never made its way in because no one continued the discussion or the implementation. It's a tricky subject because not all of our pipelines support RAW roles and a processed role at the same time - so the 'mechanism' wasn't possible back then. Since then we added a SensorConfiguration structure, however, which I think could better describe this and be tied into the GStreamer component - but it's still an area with rough edges that needs someone to look at in more detail, I fear.
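For reference, a rough sketch of what a SensorConfiguration request can look like from application code (field names as in the libcamera headers; the surrounding camera setup is elided, and whether a given pipeline handler honours the request varies):

```cpp
// Sketch only: assumes `camera` is an acquired libcamera::Camera.
// Ask the ISP stream for 960x540 while requesting the sensor's full
// 1920x1080 12-bit readout, instead of the cropped 720p mode.
std::unique_ptr<libcamera::CameraConfiguration> config =
    camera->generateConfiguration({ libcamera::StreamRole::Viewfinder });
config->at(0).size = libcamera::Size(960, 540);

libcamera::SensorConfiguration sensorConfig;
sensorConfig.outputSize = libcamera::Size(1920, 1080);
sensorConfig.bitDepth = 12;
config->sensorConfig = sensorConfig;

if (config->validate() == libcamera::CameraConfiguration::Invalid) {
    /* the pipeline handler rejected the sensor configuration */
}
camera->configure(config.get());
```

This is exactly the part that has no plumbing through to libcamerasrc yet, which is what the patch series above was trying to address.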
What I am trying to accomplish: I have an imx290 (variant...) which offers a few different modes: 1920x1080 and 1280x720, at 12 and 10 bits. The 1280x720 mode is unfortunately a crop. I'd like to arrange a stream where the input from the camera is 1920x1080 at 12 bits and the output is 960x540 (half), with the resizing done inside the Pi ISP. I can arrange this with rpicam-vid (or hello) fairly trivially, but as soon as I ask libcamerasrc for a lower resolution, it picks the smaller camera mode and introduces cropping. I can of course ask for the full resolution and then resize with videoconvertscale, but that uses CPU resources. Especially as I ultimately want two 960x540 streams, one YUV and one GRAY, I'd like to avoid the CPU until the last possible moment for reasons of resource management.
Until this makes its way through the development process, is there an alternative suggestion for how to set up something like what I want? Alternatively, is this an area where I could somehow help?
P.S. On the Pi 5, it seems that v4l2transform got thrown out along with the H.264 hardware, so there doesn't seem to be a hardware-accelerated way to downscale unless you use the ISP. This renders computer-vision applications nearly unusable with higher-resolution sensors if the first step is a CPU resize.
Confirming: when I patch libcamera to always choose the larger source format rather than the smaller one and let the ISP do the resizing, CPU usage drops to roughly a third (from 15% of one core to 5% of one core).
I find myself wanting to override the automatic sensor-mode selection performed by libcamera, as it seems to always pick the not-right one ;)
I found this patchwork from 2023, but it doesn't seem to have made its way into the project: https://patchwork.libcamera.org/cover/18458/
Any idea why, or whether there is a timeline or a better way to accomplish this functionality?