Feature Request: Support Video Input for Frame-by-Frame Inference Using 2D Models #115

@jpylvanainen

Description

Hi,

I'm working with live cell imaging data. My typical segmentation workflow involves training a 2D segmentation model on a set of static images, and then applying that trained model to segment cells in videos, frame by frame. I've previously used the ZeroCostDL4Mic Colab notebooks, which support this approach.

I was trying something similar using the BiaPy 2D instance segmentation notebook. I successfully trained a model, but when I tried to use the inference notebook on a video file, it understandably didn't work, since the notebook/configuration file expects individual image files, not videos.

Would it be possible to add support to the inference notebook for loading a video file directly, processing it frame by frame using a trained 2D model, and then saving the segmented output as a video (multiframe TIFF)? This would make it much easier to work with time-lapse data and batch-process multiple videos, without needing to manually extract frames beforehand.
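In the meantime, the requested loop can be approximated outside the notebook. Below is a minimal sketch, assuming the time-lapse has already been loaded as a `(T, H, W)` NumPy array (e.g. from a multiframe TIFF); `segment_frame` is a hypothetical placeholder standing in for the trained BiaPy 2D model, not BiaPy's actual API:

```python
import numpy as np

def segment_frame(frame: np.ndarray) -> np.ndarray:
    """Placeholder for per-frame inference with a trained 2D model.

    A real workflow would call the trained BiaPy model here; a simple
    threshold is used only to keep the sketch self-contained.
    """
    return (frame > frame.mean()).astype(np.uint16)

def segment_stack(stack: np.ndarray) -> np.ndarray:
    """Apply 2D inference to each frame of a (T, H, W) stack."""
    return np.stack([segment_frame(f) for f in stack], axis=0)

# Example: a small synthetic time-lapse stack.
rng = np.random.default_rng(0)
stack = rng.random((5, 32, 32))
masks = segment_stack(stack)
print(masks.shape)  # (5, 32, 32)

# With tifffile installed, the per-frame masks could then be saved
# back as a multiframe (ImageJ-style) TIFF:
# import tifffile
# tifffile.imwrite("masks.tif", masks, imagej=True)
```

Batch-processing multiple videos would then just be a loop over files calling `segment_stack` on each loaded array.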
