Video Capture Stage

This stage connects to a video device (using V4L2) and publishes the arriving images to the rest of the system.


Use the VideoCaptureStageConfiguration structure to configure the stage. This lets you set the V4L2 device to open (for example, /dev/video0), a desired frame rate, and an image format.

These settings vary from camera to camera. For example, some cameras support hardware JPEG compression; set the format to mjpeg to receive JPEG-compressed frames. Alternatively, some cameras provide YUYV, RGB, or greyscale images.

See the VideoBufferFormat enum for all supported formats.
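
As a sketch, configuring the stage from C++ might look like the following. The field names here are assumptions inferred from the YAML keys shown later on this page, not taken from the actual definition; check VideoCaptureStageConfiguration for the authoritative names.

    // Hedged sketch: field names are inferred from the YAML keys on this
    // page, not from the real VideoCaptureStageConfiguration definition.
    VideoCaptureStageConfiguration config;
    config.camera_path = "/dev/video0";   // V4L2 device to open
    config.image_format = "mjpeg";        // string form as in the YAML examples;
                                          // the struct may instead take a
                                          // VideoBufferFormat enum value
    config.desired_frame_rate = 30;       // frames per second to request
    config.width = 1920;
    config.height = 1080;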

You can also configure 'controls' to be set on your capture device. These let you set camera properties such as gain and exposure directly from configuration. For example:

      path: "ark/image/stages/video_capture_stage_config.yml"
        camera_path: "/dev/video0"
        image_format: "BayerRg10"
        desired_frame_rate: 0
        width: 1920
        height: 1080
        buffer_count: 5
          override_enable: 1
          bypass_mode: 0
          sensor_mode: 5
          frame_rate: 21000000
          exposure: 47619
          gain: 170
          comms_namespace: "/nanocamera/forward/color"

The controls in the pre_format_controls block shown above are applied before format settings are sent to the capture device. There is also a post_format_controls block for applying controls after the format has been set.
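
Each control ultimately maps onto the standard V4L2 control interface. As a rough illustration of what applying one control means at the driver level (this is plain V4L2, not this stage's internal code, and the stage presumably issues something equivalent per entry):

    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/videodev2.h>
    #include <cstdio>

    // Plain V4L2: set the gain control on a capture device.
    int main() {
        int fd = open("/dev/video0", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        v4l2_control ctrl{};
        ctrl.id = V4L2_CID_GAIN;  // standard control id; vendor controls use custom ids
        ctrl.value = 170;         // matches the "gain" entry in the example above
        if (ioctl(fd, VIDIOC_S_CTRL, &ctrl) < 0) {
            perror("VIDIOC_S_CTRL");
        }
        close(fd);
        return 0;
    }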

You can also set the thread_priority and thread_processor_affinity config options to control the scheduling priority and CPU affinity of the capture thread that the stage spawns.
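
These two options correspond to standard POSIX scheduling controls. As an illustration of what they govern, here is a sketch using raw pthreads on Linux; this is not the stage's actual implementation:

    #ifndef _GNU_SOURCE
    #define _GNU_SOURCE   // for CPU_SET and pthread_setaffinity_np
    #endif
    #include <pthread.h>
    #include <sched.h>

    // Sketch of what thread_priority and thread_processor_affinity govern.
    void apply_thread_settings(pthread_t thread, int priority, int cpu) {
        sched_param param{};
        param.sched_priority = priority;   // e.g. thread_priority: 10
        // SCHED_FIFO requires appropriate privileges (e.g. CAP_SYS_NICE)
        pthread_setschedparam(thread, SCHED_FIFO, &param);

        cpu_set_t cpuset;
        CPU_ZERO(&cpuset);
        CPU_SET(cpu, &cpuset);             // e.g. thread_processor_affinity: 2
        pthread_setaffinity_np(thread, sizeof(cpu_set_t), &cpuset);
    }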


Each camera frame read from the video device is packaged into a CameraImage struct and published over the image channel.
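
A downstream stage receives these frames by subscribing to the image channel. The handler below is a hypothetical sketch; the actual subscription API and CameraImage fields should be taken from the framework headers:

    #include <iostream>

    // Hypothetical handler sketch -- the signature and the width/height
    // fields referenced here are assumptions for illustration only.
    void handle_image(const CameraImage &image) {
        // A CameraImage carries the frame bytes plus format/size metadata,
        // so consumers can decode or forward it without touching V4L2.
        std::cout << "received frame " << image.width << "x" << image.height << "\n";
    }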


The stage publishes VideoDeviceStatistics periodically (at 1 Hz) on the statistics channel. These statistics include the number of times the stage has reconnected to the device, a count of frames received, and the number of frames dropped.
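
For example, a monitoring component could watch the statistics channel and alert on drops. A hedged sketch, assuming the counter fields are named after the quantities described above:

    #include <iostream>

    // Hedged sketch: reconnect_count, frames_received and frames_dropped are
    // assumed field names matching the counters described above.
    void handle_statistics(const VideoDeviceStatistics &stats) {
        if (stats.frames_dropped > 0) {
            std::cerr << "dropped " << stats.frames_dropped << " of "
                      << stats.frames_received << " frames after "
                      << stats.reconnect_count << " reconnects\n";
        }
    }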