Camera timing out during acquisitions #758

Open
clara-not-claire opened this issue May 31, 2024 · 1 comment

Comments


clara-not-claire commented May 31, 2024

Describe what you want to implement and what the issue & the steps to reproduce it are:

System Goal: We want 2-4 cameras connected simultaneously, capturing images that are displayed on a screen. For each image, the image is displayed, each camera acquires a frame in sequence, and then the next image is displayed; this repeats in a loop over all the images we want to capture. Ultimately we want to capture a ~50k-image dataset, and the system should at least be able to capture 2000 images in one sitting without timing out or requiring a manual reset.

Current Issue: Currently, we can only capture between 90 and 400 images before the cameras time out with the following error:

  File "/Users/clarahung/repos/lensless-dataset/main_loop.py", line 213, in <module>
    with cam.RetrieveResult(2000, py.TimeoutHandling_ThrowException) as res:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/clarahung/opt/anaconda3/envs/diffuser_cam/lib/python3.11/site-packages/pypylon/pylon.py", line 3598, in RetrieveResult
    return _pylon.InstantCamera_RetrieveResult(self, *args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
_genicam.TimeoutException: Grab timed out. Possible reasons are: The image transport from the camera device is not working properly, e.g., all GigE network packets for streaming are dropped; The camera uses explicit triggering (see TriggerSelector for more information) and has not been triggered; Single frame acquisition mode is used and one frame has already been acquired; The acquisition has not been started or has been stopped. : TimeoutException thrown (file 'InstantCameraImpl.h', line 1034)

How we've debugged: We've swapped out the cables and checked our hardware to rule out a cabling issue. On the software side, we followed the Basler tutorials when writing our code, tried extending the timeout, manually set the frame rate and buffer count, and added garbage collection inside the loop to rule out a buffer issue, yet the same timeout persists.

Support: I've attached our original code below. We use the pygame package to display our images. Are we implementing our capture loop correctly? Is there a better, more elegant way to do this that doesn't result in a timeout? I've looked through the existing support requests and couldn't find anything similar. We aren't trying to do hardware triggering and shouldn't need software triggering(?), though a sketch of what that might look like is included below for reference. I've read about event handlers in other requests but am not quite sure how they fit in. Any help with the code, best practices, and resolving the issue is appreciated!
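
For reference, here is a minimal sketch of how I understand a software-triggered capture loop would look with pypylon (untested; it assumes the cam_array, py, start_idx, and NUM_IMG defined in our code below, the node names follow the Basler SFNC / pylon samples, and the timeouts are placeholders rather than values from our actual code):

# Hedged sketch, not our current code: switch the cameras to software triggering
# so a frame is only produced when we explicitly request one per displayed image.
# (Configure after cam_array.Open(), before StartGrabbing.)
for cam in cam_array:
    cam.TriggerSelector.SetValue("FrameStart")
    cam.TriggerMode.SetValue("On")
    cam.TriggerSource.SetValue("Software")

cam_array.StartGrabbing(py.GrabStrategy_OneByOne, py.GrabLoop_ProvidedByUser)
for i in range(start_idx, NUM_IMG):
    # ... display the i-th image with pygame as in the code below ...
    for cam in cam_array:
        # wait until this camera can accept a frame trigger, then fire it
        cam.WaitForFrameTriggerReady(1000, py.TimeoutHandling_ThrowException)
        cam.ExecuteSoftwareTrigger()
        with cam.RetrieveResult(5000, py.TimeoutHandling_ThrowException) as res:
            if res.GrabSucceeded():
                pass  # save / inspect the frame as in our loop below
cam_array.StopGrabbing()

Our current code: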

import numpy as np
import matplotlib.pyplot as plt
import cv2
import os
import sys
import platform
import timeit
import datetime, pytz
import time
import pygame as pg
from time import sleep
from natsort import natsorted
import json
import gc
from pypylon import pylon as py  # pypylon import; missing from the snippet as pasted but required below

"""
NUM_IMG: total num imgs
NUM_CAMERAS: total num cameras

frames_to_grab: num frames each camera grabs
frame_counts: array storing frame counts of each camera
"""

recon = True
capture = True
SERIAL_ARR = ['40270065', '40270082', '40412531']
CAPTURE_FORMAT = "Mono8" 

## PATH VARIABLES
ARGS = sys.argv
CWD = ARGS[3] if len(ARGS) > 3 else os.getcwd()
DATETIME = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S")  # assumed definition; DATETIME was not defined in the snippet as pasted
DESTINATION = f"{CWD}/dataset/{DATETIME}"
SOURCE = ARGS[4] if len(ARGS) > 4 else f"./mirflickr_subset/" # debug
GT_PATH = f"{DESTINATION}/ground_truth"
RML_PATH = f"{DESTINATION}/rml"
DC_PATH = f"{DESTINATION}/diffusercam"
PATH_ARR = [DC_PATH, RML_PATH, GT_PATH]

os.makedirs(DESTINATION)
os.makedirs(GT_PATH)
os.makedirs(RML_PATH)
os.makedirs(DC_PATH)

## CAMERA VARIABLES
NUM_CAMERAS = 2
NUM_IMG = int(ARGS[1])
start_idx = int(ARGS[2])
frame_counts = [0]*NUM_CAMERAS
metadata = {"Failed Images": []}  # assumed definition; metadata is used in the failure branch but was not defined in the snippet as pasted
img = py.PylonImage()

# PYLON_CAMEMU makes pylon enumerate emulated test cameras in addition to any real devices
os.environ["PYLON_CAMEMU"] = f"{NUM_CAMERAS}"
tlf = py.TlFactory.GetInstance()
devices = tlf.EnumerateDevices()
print("Cameras detected: ")
for d in devices:
    print(d.GetModelName(), d.GetSerialNumber())

# create array to store and attach cameras
cam_array = py.InstantCameraArray(NUM_CAMERAS)
for idx, cam in enumerate(cam_array):
    cam.Attach(tlf.CreateDevice(devices[idx]))

# store a unique number for each camera to identify the incoming images
for _, cam in enumerate(cam_array):
    camera_serial = cam.DeviceInfo.GetSerialNumber()
    #TODO: USE SERIAL NUMBER TO DEFINE THE CAMERA!!!
    if camera_serial == SERIAL_ARR[0]:
        idx = 0
    if camera_serial == SERIAL_ARR[1]:
        idx = 1
    if camera_serial == SERIAL_ARR[2]:
        idx = 2
    cam.SetCameraContext(idx)
    print(f"set context {idx} for camera {camera_serial}")

exposure_times = [17000, 84000]

cam_array.Open()

for idx, cam in enumerate(cam_array):
    camera_serial = cam.DeviceInfo.GetSerialNumber()

    # set the exposure time for each camera
    print(f"set Exposuretime {idx} for camera {camera_serial}")
    cam.ExposureTime = exposure_times[idx]

    # set the gain for each camera - zero gain
    print(f"set Gain {idx} for camera {camera_serial}")
    cam.Gain = 0.0 

    # set pixel format for each camera 
    cam.PixelFormat.SetValue(CAPTURE_FORMAT) #"RGB8"
    print(f"set PixelFormat {idx} for camera {camera_serial} as ", cam.PixelFormat.GetValue())

    # Manually set framerate for each camera
    cam.AcquisitionFrameRateEnable.Value = True
    cam.AcquisitionFrameRate.Value = 10.0
    print(f"set FrameRate {idx} for camera {camera_serial} as ", cam.AcquisitionFrameRate.GetValue())

    # Manually set max buffer value:
    cam.MaxNumBuffer.Value = 15
    print(f"set MaxNumBuffer {idx} for camera {camera_serial} as ", cam.MaxNumBuffer.GetValue())

## INIT DISPLAY
pg.init()
screen_info = pg.display.Info()
print(pg.display.Info())
width, height = screen_info.current_w, screen_info.current_h
screen = pg.display.set_mode((width, height), pg.FULLSCREEN, display=1) # set according to your system
black_color = (0, 0, 0) # modifying in case different
screen.fill(black_color)
pg.display.flip()

source_imgs = os.listdir(SOURCE)
source_imgs = natsorted(source_imgs) # does natural sorting 

## GRAB LOOP
print("starting capture...")
if capture:
    cam_array.StartGrabbing(py.GrabStrategy_LatestImageOnly, py.GrabLoop_ProvidedByUser)
    for i in range(start_idx, NUM_IMG):
        filename = source_imgs[i]
        
        for event in pg.event.get():
            if event.type == pg.QUIT or event.type == pg.KEYDOWN:
                pg.quit()
                raise SystemExit
    
        # Fill screen black and display the next source image.
        screen.fill("black")
        image = pg.image.load(SOURCE + filename)
        img_size = image.get_size() # (width,height)
        crop = pg.Surface((1200, 1200))

        if img_size[0] < img_size[1]:
            image = pg.transform.flip(image, True , False)

        crop.blits(((image, (65, 150), (0, 0, 300, 300)), (image, (235, 890), (0, 0, 300, 300))))
        screen.blit(pg.transform.scale(crop, (500, 500)), (400, 0))
        pg.display.flip()
        sleep(2)

        for cam in cam_array:
            # Wait (blocking, up to 1 s) for the next grab result from this camera;
            # a TimeoutException is raised if no frame arrives in time.
            # print("Resulting Framerate:", cam.ResultingFrameRate.GetValue())
            with cam.RetrieveResult(1000, py.TimeoutHandling_ThrowException) as res:
                img_nr = res.ImageNumber
                cam_id = res.GetCameraContext()
                if res.GrabSucceeded():
                    cam_path = PATH_ARR[cam_id]
                    frame_counts[cam_id] = img_nr

                    print(f"Captured Image #{img_nr} using Cam #{cam_id}", '\n')

                    img.AttachGrabResultBuffer(res)
                    # print maximum value of image
                    array_value = img.GetArray()
                    print(f"Max value: {np.max(array_value)}, Min value: {np.min(array_value)}, Mean value: {np.mean(array_value)}")
                    
                    filename = f"{cam_path}/img_{i}_cam_{cam_id}.tiff"
                    img.Save(py.ImageFileFormat_Tiff, filename)
                    img.Release()
                else:
                    print(f"Failed: Image #{img_nr} of Cam #{cam_id}")
                    metadata["Failed Images"].append((img_nr, filename, cam_id))

    cam_array.StopGrabbing()
    cam_array.Close()
    pg.quit()

    print("exited normally: "+ SOURCE)

Is your camera operational in Basler pylon viewer on your platform?

Yes

Hardware setup & camera model(s) used

Two Basler dart daA1920-160uc cameras connected to a USB hub, which is connected to an M1 MacBook Air/Pro.
A tablet monitor is also connected to the laptop via HDMI.

Runtime information:

python: 3.11.5 | packaged by conda-forge | (main, Aug 27 2023, 03:35:23) [Clang 15.0.7 ]
platform: darwin/x86_64/23.4.0
pypylon: 3.0.1 / 7.3.1.9

HighImp (Contributor) commented Jun 7, 2024

Hi, just an idea:
Can you please try reducing the DeviceLinkThroughputLimit, or better, use the pylon Bandwidth Manager to check whether you can reach a stable state?

And can you explain why you wait 2 seconds before the RetrieveResult?
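
Something along these lines, as an untested sketch that reuses the cam_array from your code (the node names are the standard USB3 Vision bandwidth features on Basler cameras; the 80 MB/s value is only an example for splitting the hub bandwidth between two cameras):

# Hedged sketch: cap each camera's USB bandwidth before StartGrabbing
for cam in cam_array:
    cam.DeviceLinkThroughputLimitMode.SetValue("On")
    cam.DeviceLinkThroughputLimit.SetValue(80_000_000)  # bytes per second; tune to your hub's real budget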
