Improvements to segmentation-based volume creation #436

@Mo-Sc

Description

Hi, I recently worked on segmentation-based volume creation. Here are some features I was missing (PR incoming):

Updating the segmentation volume for the MSOTAcuityEcho

The device twins have an update_settings_for_use_of_model_based_volume_creator method that adapts the volume to fit the transducer and adds mediprene, heavy water, and US gel. However, it is only implemented for model-based volume creation, so I added an update_settings_for_use_of_segmentation_based_volume_creator for the MSOTAcuityEcho:

  • Matches the model-based variant
  • Always adds heavy water and mediprene; US gel is optional via Tags.US_GEL (consistent with model-based creation)
  • Pads the segmentation mask along the x and z axes as needed and repositions the detection and illumination geometries
  • New labels are selected dynamically (max(existing_labels) + 1) to avoid collisions with user-defined segmentation classes.
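A minimal sketch of the padding and dynamic label-selection idea (the function name, signature, and pad layout are illustrative, not the actual PR code):

```python
import numpy as np

# Hypothetical sketch: pad a 3D label map [X, Y, Z] along the x and z
# axes and pick a collision-free label for a newly added material layer.
def pad_and_add_layer(seg: np.ndarray, pad_x: int, pad_z: int):
    # Pad x (axis 0) on both sides and z (axis 2) on top only;
    # y (axis 1) stays unchanged. Background label 0 fills the padding.
    padded = np.pad(seg, ((pad_x, pad_x), (0, 0), (pad_z, 0)), constant_values=0)
    # New labels are chosen dynamically so they never clash with
    # user-defined segmentation classes.
    new_label = int(seg.max()) + 1
    return padded, new_label

seg = np.zeros((4, 4, 4), dtype=int)
seg[1:3, 1:3, 1:3] = 5  # user-defined class with label 5
padded, label = pad_and_add_layer(seg, pad_x=2, pad_z=3)
print(padded.shape)  # (8, 4, 7)
print(label)         # 6
```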

Partial volumes for segmentation based volume creation

The model-based volume creation can simulate partial volume effects via Tags.CONSIDER_PARTIAL_VOLUME, but there is no equivalent for the segmentation-based adapter. So I implemented boundary smoothing for the segmentation masks via scipy.ndimage.uniform_filter in the SegmentationBasedAdapter. The intensity can be controlled via the new tag Tags.PARTIAL_VOLUME_KERNEL_SIZE.
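To illustrate the smoothing idea on a toy example (this is a minimal sketch, not the adapter code): blurring a one-hot class channel with a uniform box filter turns hard boundaries into fractional class membership.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Each one-hot class channel is blurred with a uniform (box) filter so
# voxels near class boundaries get fractional membership. Kernel size 3
# matches the stated default of Tags.PARTIAL_VOLUME_KERNEL_SIZE.
kernel_size = 3
hard = np.zeros((1, 8, 1), dtype=float)  # toy volume of shape [X, Y, Z]
hard[:, 4:, :] = 1.0                     # hard class boundary between y=3 and y=4

smooth = uniform_filter(hard, size=kernel_size, mode="nearest")
print(smooth[0, :, 0])  # boundary voxels become 1/3 and 2/3
```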

Now the SegmentationBasedAdapter can receive hard segmentations (a 3D integer label map of shape [X, Y, Z]) like before, and optionally applies PV effects. But for my application, I also wanted to be able to pass precomputed 4D fraction maps of shape [C, X, Y, Z], where each voxel holds the 0-1 fraction of each class. This is helpful when more complex class blending was performed beforehand, or when the segmentation map is a neural network output containing class probabilities. To accommodate both options, I modified the SegmentationBasedAdapter to always work on 4D fraction maps; if a 3D label map is passed, it is simply one-hot encoded into 4D. This creates some memory overhead but reduces code complexity. Alternatives would be separate 3D + 4D paths, or somehow processing 4D maps on the fly.
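The 3D-to-4D conversion described above can be sketched like this (hedged example; the helper name and exact encoding are illustrative, not the adapter's API):

```python
import numpy as np

# A 3D integer label map [X, Y, Z] is one-hot encoded into a 4D
# fraction map [C, X, Y, Z], after which both input types share a
# single code path.
def to_fraction_map(seg: np.ndarray) -> np.ndarray:
    if seg.ndim == 4:
        return seg.astype(float)  # already a precomputed fraction map
    labels = np.unique(seg)       # classes present in the label map
    # One channel per class; a voxel is 1.0 in its own class channel.
    return (seg[None, ...] == labels[:, None, None, None]).astype(float)

seg = np.array([[[0, 1], [2, 2]]])  # shape [1, 2, 2], three classes
frac = to_fraction_map(seg)
print(frac.shape)        # (3, 1, 2, 2)
print(frac.sum(axis=0))  # all ones: fractions already sum to 1 per voxel
```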

  • 3D label maps are one-hot encoded into a 4D tensor
  • If Tags.CONSIDER_PARTIAL_VOLUME is set, boundary fractions are smoothed (kernel size controlled by the new Tags.PARTIAL_VOLUME_KERNEL_SIZE, defaults to 3).
  • Fractions are normalised to sum to 1 at each voxel (this should already be the case for one-hot encoded 3D maps, but can be helpful for precomputed 4D maps) and blended according to the class properties
  • For the resulting segmentation map property, the class with the highest fraction at each voxel determines the label.
  • Oxygenation is blended only where blood is present, and set to NaN where bvf is zero.
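The blending rules above can be sketched on two toy voxels (all names and property values here are made up for illustration, not SIMPA's API): normalise the fractions, blend a generic property linearly, take the argmax for the label map, and blend oxygenation only where blood is present.

```python
import numpy as np

fractions = np.array([[0.2, 1.0],        # [C=2 classes, N=2 voxels],
                      [0.6, 0.0]])       # deliberately unnormalised
fractions /= fractions.sum(axis=0, keepdims=True)  # per-voxel sum -> 1

mua = np.array([1.0, 5.0])   # per-class absorption (made-up values)
bvf = np.array([0.0, 1.0])   # class 1 is blood, class 0 is not
oxy = np.array([np.nan, 0.8])  # oxygenation defined only for blood

# Linear blend of a generic property by class fractions.
blended_mua = (fractions * mua[:, None]).sum(axis=0)
# Highest fraction determines the label in the segmentation map.
labels = fractions.argmax(axis=0)

# Oxygenation: bvf-weighted average, NaN where no blood is present.
voxel_bvf = (fractions * bvf[:, None]).sum(axis=0)
with np.errstate(invalid="ignore"):
    voxel_oxy = np.where(
        voxel_bvf > 0,
        (fractions * bvf[:, None] * np.nan_to_num(oxy)[:, None]).sum(axis=0)
        / voxel_bvf,
        np.nan,
    )
print(blended_mua)  # voxel 0 is a 25/75 mix, voxel 1 is pure class 0
print(voxel_oxy)    # oxygenation only where bvf > 0, NaN elsewhere
```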

The implementation of the update_settings_for_use_of_segmentation_based_volume_creator should also be able to handle both cases.

I also added / adapted an example and test script:

  • SegmentationLoader.py: Similar to the already present test case for the RSOMExplorer. I added a test case for the Acuity that also checks partial volume effects and 4D segmentation maps. The Acuity case also runs update_settings_for_use_of_segmentation_based_volume_creator.
  • segmentation_loader.py: Similar to the already present example for the RSOMExplorer. Runs a simple optical pipeline with switchable devices (RSOMExplorer and Acuity) and optional PV effects.

If you think the 4D segmentation mask feature is not necessary, I can also create a new PR that only includes updating the segmentation volume for the MSOTAcuityEcho and 3D partial volume effects.

Cheers,
Moritz
