Releases: neuroinformatics-unit/movement

v0.16.0

23 Apr 08:18
b4341a9


To update movement to the latest version, see the update guide.

What's Changed

⚡️ Highlight 1: draw, save, and load Regions of Interest in the GUI

  • Widget for drawing Regions of Interest (ROIs) as napari Shapes by @niksirbi in #617
  • Add conversion between napari Shapes and movement RoIs by @niksirbi in #927

You can now define Regions of Interest (RoIs) interactively in the movement napari GUI, export them to a GeoJSON file, and load them back into Python for analysis — all without writing any coordinates by hand.

(Demo video: Demo_EPM-rois_2026-04-01_3x.mp4)

The Define regions of interest menu lets you draw shapes directly on the video frames, name them, and save them via the Save layer button. The saved .geojson file can then be loaded with movement.roi.load_rois():

from movement.roi import load_rois

rois = load_rois("my_regions.geojson")
[roi.name for roi in rois]  # ['arena', 'nest', 'corridor']

# use in analysis — e.g. check if the animal is inside a region
is_in_nest = rois[1].contains_point(ds.position)

You can also load a .geojson file back into napari via the Load layer button to review or edit your regions.

See the Define regions of interest section of the GUI guide and the updated boundary_angles example for a full walkthrough.

⚡️ Highlight 2: automatic detection of source software in load_dataset

Thanks to a first contribution from @M0hammed-Reda, load_dataset() can now infer source_software automatically from the file format. When you know which software produced your file, we still recommend passing it explicitly, but automatic inference can be a convenient fallback.

from movement.io import load_dataset

# recommended: explicit is clearer and faster
ds = load_dataset("path/to/file.h5", source_software="DeepLabCut", fps=30)

# convenient fallback: automatic inference from file format
ds = load_dataset("path/to/file.h5", fps=30)

You can also call infer_source_software() directly to check what movement would infer for a given file.
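At its simplest, inference of this kind boils down to a lookup from file suffix to candidate software. The sketch below is purely illustrative — it is not movement's actual logic, which may also inspect file contents — but it shows why explicit `source_software` remains the recommendation: some suffixes are ambiguous.

```python
from pathlib import Path

# Illustrative suffix -> software table; NOT movement's actual mapping.
SUFFIX_TO_SOFTWARE = {
    ".slp": "SLEAP",
    ".nwb": "NWB",
}

def sketch_infer(path):
    """Guess the source software from the file suffix, or None if ambiguous.

    Suffixes like .h5 and .csv are shared by several tools (e.g. both
    DeepLabCut and SLEAP write .h5), so a pure suffix lookup cannot
    resolve them.
    """
    return SUFFIX_TO_SOFTWARE.get(Path(path).suffix)

print(sketch_infer("session1.slp"))  # SLEAP
print(sketch_infer("session1.h5"))   # None (ambiguous by suffix alone)
```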

🚀 Performance improvements

  • Speed up and improve memory use when loading VIA tracks file by @sfmig in #769

@sfmig has substantially sped up loading of VIA-tracks bounding box files. Loading a 34 MB file now takes ~0.5 s (down from ~30 s), and a 100 MB file takes ~1.7 s (down from ~1-2 min). Memory use at peak is now comparable to the final in-memory size of the dataset.

⚠️ Breaking changes

transforms.scale() no longer has a default value for the factor parameter. Previously, omitting factor silently multiplied the data by 1.0 (equivalent to no scaling). If you were omitting factor, add it explicitly:

# Before (silently did nothing)
ds_scaled = scale(ds)

# After
ds_scaled = scale(ds, factor=0.01)  # e.g. convert pixels to centimetres

🛠️ Refactoring

@roaldarbol has been a long-time collaborator and a constant source of ideas for movement — and this is his first, but not last, PR! He migrated the CLI from argparse to Typer, giving the movement command automatic shell completion support and a cleaner foundation for future CLI additions.

📚 Documentation

  • Update and add missing docstrings by @lochhh in #885
  • Added link to TheBehaviourForum virtual workshop talk by @niksirbi in #951
  • Add acknowledgements to examples by @niksirbi in #952
  • Replace fixed contributor table with responsive grid by @lochhh in #954
  • Update pandas URL in intersphinx config by @lochhh in #895

🧹 Housekeeping and dependencies

New Contributors

Full Changelog: v0.15.0...v0.16.0

v0.15.0

09 Mar 12:11
e0fbb2b


To update movement to the latest version, see the update guide.

What's Changed

⚡️ Highlight 1: new example for scaling pose tracks to real-world coordinates

  • Add example for scaling pose tracks to real-world coordinates by @HollyMorley in #827

This new gallery example by @HollyMorley walks through a complete workflow for converting pixel-based pose data to real-world units using a known reference distance. It demonstrates pre-processing (confidence filtering, interpolation, smoothing), measuring a reference distance in napari, applying movement.transforms.scale(), and computing real-world body measurements and inter-limb distances.

(Screenshot: napari_scale_draw-and-measure)

Holly is working on a few more examples for our gallery, so keep an eye out for those in the coming releases!

⚡️ Highlight 2: Save and load RoIs via GeoJSON

  • Add ROI save/load functionality via GeoJSON by @niksirbi in #773

Regions of interest (RoIs) can now be saved to and loaded from GeoJSON files using movement.roi.save_rois() and movement.roi.load_rois(). Each RoI is stored as a GeoJSON Feature containing its geometry and properties (name and RoI type), within a FeatureCollection. This makes it easy to persist ROI definitions across sessions and share them with collaborators.

from movement.roi import LineOfInterest, PolygonOfInterest, save_rois, load_rois

# Create 2 RoIs: a polygon and a line
square = PolygonOfInterest([(0, 0), (1, 0), (1, 1), (0, 1)], name="square")
diagonal = LineOfInterest([(0, 0), (1, 1)], name="diagonal")

# Save them to a GeoJSON file
save_rois([square, diagonal], "rois.geojson")

# Load RoIs from a GeoJSON file
rois = load_rois("rois.geojson")
[roi.name for roi in rois]  # returns ["square", "diagonal"]
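For reference, the on-disk layout described above — one Feature per RoI inside a FeatureCollection — can be sketched in plain Python. The exact property keys movement writes are an assumption here:

```python
import json

# Sketch of the FeatureCollection layout described above; the exact
# property keys movement writes are an assumption.
feature_collection = {
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "geometry": {
                "type": "Polygon",
                # GeoJSON polygon rings are closed: first point == last
                "coordinates": [[[0, 0], [1, 0], [1, 1], [0, 1], [0, 0]]],
            },
            "properties": {"name": "square"},
        },
        {
            "type": "Feature",
            "geometry": {"type": "LineString", "coordinates": [[0, 0], [1, 1]]},
            "properties": {"name": "diagonal"},
        },
    ],
}

# Round-trip through JSON text, as save_rois/load_rois do via the file
text = json.dumps(feature_collection, indent=2)
names = [f["properties"]["name"] for f in json.loads(text)["features"]]
print(names)  # ['square', 'diagonal']
```

Because the format is standard GeoJSON, the saved files can also be opened with generic GIS tooling or shared with collaborators who don't use movement.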

⚡️ Highlight 3: derive bounding boxes from pose tracks

  • Adds poses_to_bboxes() to movement.transforms to generate 2D bounding boxes by @Edu92337 in #757

Thanks to @Edu92337, you can now compute bounding boxes from sets of keypoints (poses). The new movement.transforms.poses_to_bboxes() function finds the minimum and maximum coordinates across all keypoints for each individual at each time point. The resulting bounding boxes are represented by their centroid (center point position) and shape (width and height).

Suppose poses_ds is a movement poses dataset with a "position" variable containing keypoint coordinates.

from movement.transforms import poses_to_bboxes

# Compute bounding boxes with zero padding
bbox_centroid, bbox_shape = poses_to_bboxes(poses_ds["position"])

# Compute bounding boxes with 10 pixels of padding
bbox_centroid, bbox_shape = poses_to_bboxes(poses_ds["position"], padding=10)
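The min/max logic described above can be sketched on a plain NumPy array — an illustrative re-implementation, not the movement function itself:

```python
import numpy as np

# Illustrative re-implementation of the min/max logic on an array of
# shape (time, keypoints, 2) — not the movement function itself.
pos = np.array([[[0.0, 0.0], [2.0, 0.0], [2.0, 4.0]]])  # 1 frame, 3 keypoints

lo = pos.min(axis=1)  # per-frame minimum over keypoints
hi = pos.max(axis=1)  # per-frame maximum over keypoints
# (padding, when given, would expand lo/hi outward before these steps)

centroid = (lo + hi) / 2  # bbox centre
shape = hi - lo           # bbox width and height
print(centroid[0], shape[0])  # [1. 2.] [2. 4.]
```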

🐛 Bug fixes

📚 Documentation

  • Updated conda installation instructions with pyqt6 by @niksirbi in #825
  • Add "See Also" to load_multiview_dataset docstring by @niksirbi in #824
  • Remove docstring types and use annotations for docs by @ParthChatupale in #727
  • Draft roadmap for 2026 by @niksirbi in #809
  • fix: correct loader param name, path assignment, and spelling in docs by @dhruv1955 in #857

🧹 Housekeeping

  • Remove OSSS announcement banner by @adamltyson in #828
  • Make hide_pooch_hash_logs internal by renaming to _hide_pooch_hash_logs by @Shreecharana24 in #842
  • Use idiomatic xarray coordinate comparison in test_load_poses by @Kayd-06 in #876
  • [pre-commit.ci] pre-commit autoupdate by @pre-commit-ci[bot] in #861
  • Contributors-Readme-Action: Update contributors list by @github-actions[bot] in #856
  • Bump actions/download-artifact from 7 to 8 by @dependabot[bot] in #859
  • Contributors-Readme-Action: Update contributors list by @github-actions[bot] in #880

New Contributors

Full Changelog: v0.14.0...v0.15.0

v0.14.0

17 Feb 10:16
447492e


To update movement to the latest version, see the update guide.

What's Changed

⚡️ Highlight: unified data loading interface

Thanks to @lochhh's tireless efforts, loading poses and bounding boxes is now handled by a single entry point, movement.io.load_dataset.

The new load_dataset function works for all our supported third-party formats. For example:

from movement.io import load_dataset

# DeepLabCut -> poses dataset
ds = load_dataset("path/to/file.h5", source_software="DeepLabCut", fps=30)

# SLEAP -> poses dataset
ds = load_dataset("path/to/file.slp", source_software="SLEAP")

# NWB -> poses dataset
ds = load_dataset("path/to/file.nwb", source_software="NWB")

# VGG Image Annotator tracks -> bounding boxes dataset
ds = load_dataset("path/to/file.csv", source_software="VIA-tracks")

Similarly, movement.io.load_multiview_dataset replaces the old movement.io.load_poses.from_multiview_files, with added support for bounding boxes:

from movement.io import load_multiview_dataset

# Load LightningPose pose predictions from two cameras.
ds = load_multiview_dataset(
    {"cam1": "path/to/cam1.csv", "cam2": "path/to/cam2.csv"},
    source_software="LightningPose",
    fps=30,
)

Software-specific loaders (e.g. load_poses.from_dlc_file, load_bboxes.from_via_tracks_file) remain available for users who want full control over the loading process.

Warning

The following functions are deprecated and will be removed in a future release:

  • load_poses.from_file → use load_dataset instead
  • load_bboxes.from_file → use load_dataset instead
  • load_poses.from_multiview_files → use load_multiview_dataset instead

Migrating from deprecated functions:

# Before
from movement.io import load_poses, load_bboxes

ds = load_poses.from_file("file.h5", source_software="DeepLabCut", fps=30)
ds = load_bboxes.from_file("file.csv", source_software="VIA-tracks", fps=30)
ds = load_poses.from_multiview_files(
    {"cam1": "cam1.csv", "cam2": "cam2.csv"},
    source_software="LightningPose", fps=30,
)

# After
from movement.io import load_dataset, load_multiview_dataset

ds = load_dataset("file.h5", source_software="DeepLabCut", fps=30)
ds = load_dataset("file.csv", source_software="VIA-tracks", fps=30)
ds = load_multiview_dataset(
    {"cam1": "cam1.csv", "cam2": "cam2.csv"},
    source_software="LightningPose", fps=30,
)

🐛 Bug fixes

  • Adapt savgol_filter for compatibility with Scipy >= 1.17 by @niksirbi in #761
  • Fix coordinate assignment for elem2 in _cdist by @HARSHDIPSAHA in #776
  • Hide _factorized properties from Points layer tooltips by @niksirbi in #781

🤝 Improving the contributor experience

📚 Documentation

🧹 Housekeeping

  • Restrict sphinx version to < 9 by @niksirbi in #768
  • Unpin sphinx and pin ablog>=0.11.13 by @niksirbi in #811
  • Fix deprecated license syntax by @sfmig in #549
  • Ignore Docutils URL in linkcheck by @lochhh in #793
  • Ignore ISO URL in linkcheck by @lochhh in #815
  • Authenticate with GitHub token during linkcheck by @niksirbi in #800
  • [pre-commit.ci] pre-commit autoupdate by @pre-commit-ci[bot] in #795
  • Contributors-Readme-Action: Update contributors list by @github-actions[bot] in #787
  • Bump conda-incubator/setup-miniconda from 3.2.0 to 3.3.0 by @dependabot[bot] in #788

New Contributors

Full Changelog: v0.13.0...v0.14.0

v0.13.0

13 Jan 17:56
2faa2ab


What's Changed

⚡️ New feature: loading netCDF files in the GUI

Thanks to @Sparshr04, netCDF files saved with movement can now be loaded into our GUI. See the relevant documentation for more information.

  • Enable loading netCDF files in the napari widget by @Sparshr04 in #689

🛠️ Refactoring

The changes in this section are primarily relevant to movement developers and contributors.

@lochhh has carried out a substantial refactor of our dataset validators. These have been renamed (ValidPosesDataset -> ValidPosesInputs, ValidBboxesDataset -> ValidBboxesInputs) and significant code duplication has been removed. This work is part of a broader effort to simplify and unify the interface for loading motion-tracking data into movement, and will make it easier to add support for additional formats going forward.

She has also simplified and cleaned up the parsing and validation logic for DeepLabCut CSV files.

  • Refactor DeepLabCut CSV read and validation by @lochhh in #715

📚 Documentation

The Community section of the website has been expanded and refined, including a new page collecting shared resources. Feel free to use and share these when communicating about movement, whether to promote it, teach it, or acknowledge it in your own work.

Moreover, the ability to execute our examples via Binder had been broken for some time. This functionality should now be restored with this release.

🧹 Housekeeping

This release also includes a number of maintenance and infrastructure updates aimed at improving dependency management, CI robustness, and the overall development workflow.

New Contributors

Full Changelog: v0.12.0...v0.13.0

v0.12.0

27 Nov 12:30
c63db8b


What's Changed

⚡️ Work begins on coordinate system transforms

@animeshsasan has kicked off our efforts in this area.

He has added a new function for computing homography-based transforms between sets of points on a plane. This is the first step towards supporting multi-session alignment in a common coordinate system, with further work planned for upcoming releases (see issue #565).

  • Add function to estimate perspective transform between two planes by @animeshsasan in #696
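For readers curious about the underlying technique: a planar homography can be estimated from point correspondences with the direct linear transform (DLT). The sketch below is a generic NumPy implementation of that idea, not movement's actual function or API:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography mapping src -> dst via the DLT.

    src, dst: (N, 2) arrays of corresponding planar points, N >= 4.
    """
    rows = []
    for (x, y), (u, v) in zip(np.asarray(src, float), np.asarray(dst, float)):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography entries form the null vector of this system, i.e.
    # the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(rows))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Apply H to (N, 2) points, with perspective division."""
    pts = np.asarray(pts, float)
    mapped = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# Map the unit square onto a shifted, stretched quadrilateral
src = np.array([(0, 0), (1, 0), (1, 1), (0, 1)], float)
dst = np.array([(2, 1), (4, 1), (4, 5), (2, 5)], float)
H = estimate_homography(src, dst)
assert np.allclose(apply_homography(H, src), dst)
```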

🧹 Housekeeping

Full Changelog: v0.11.1...v0.12.0

v0.11.1

10 Nov 17:52
757e109


This release patches some minor issues with our website and welcomes one new contributor to the project.

The actual package functionality is not affected.

What's Changed

New Contributors

Full Changelog: v0.11.0...v0.11.1

v0.11.0

07 Nov 16:57
6bb9712


Summary

movement v0.11.0 redefines displacement vectors, simplifies installation via pip and uv, and introduces multi-version documentation. It also brings support for 3D DeepLabCut files and welcomes many new contributors 🎉.

What's Changed

↗️ Refined displacement vectors

@carlocastoldi has overhauled how movement computes displacement vectors, making the definitions more explicit, flexible, and intuitive for users. You can read all about these changes in our latest blog post, written by Carlo.

(Figure: displacement_old_vs_new)

⚡️ Simplified installation via pip and uv

We’ve streamlined the dependency setup so that movement can now be installed directly with pip or uv—no extra steps required.

Early adopters can try:

uv pip install movement

See our updated installation guide for full details.

📚 Multi-version documentation

Thanks to @animeshsasan, the documentation now includes a version switcher! This means that, going forward, you will be able to browse docs for multiple versions of movement, making it much easier to follow along with future updates and interface changes.

📂 Improved handling of DeepLabCut files

Through a collaboration between @Akseli-Ilmanen and @lochhh, movement can now load and save DeepLabCut files with 3D coordinates (x, y, z).

In addition, @CeliaLrt clarified the documentation to make explicit that movement currently requires all keypoints to be shared across individuals.

  • Support loading and saving 3D DeepLabCut poses by @lochhh in #686
  • Add documentation to specify Movement require identical keypoints (#150) by @CeliaLrt in #658

🧹 Housekeeping

🧑‍💻 New Contributors

Full Changelog: v0.10.0...v0.11.0

v0.10.0

19 Sep 14:24
98e3ec5


What's Changed

✨ Highlight ✨ movement meets FreeMoCap

A new example demonstrating how to read 3D data from FreeMoCap recordings as movement poses datasets. Thanks to @maxstaras for collecting the data and writing this example! 🚀

New feature: export a bounding boxes dataset as a VIA-tracks .csv file

Documentation

  • Add missing contributors by @niksirbi in #665
  • Mention typical timing of community calls on website and README by @niksirbi in #672

Housekeeping

  • Pin napari-video >= 0.2.13 by @niksirbi in #662
  • Contributors-Readme-Action: Update contributors list by @github-actions[bot] in #671
  • [pre-commit.ci] pre-commit autoupdate by @pre-commit-ci[bot] in #673
  • Bump actions/download-artifact from 4 to 5 by @dependabot[bot] in #674

New Contributors

Full Changelog: v0.9.0...v0.10.0

v0.9.0

06 Aug 16:34
e1c67b2


What's Changed

New feature: compute kinetic energy

  • feat: compute_kinetic_energy for per-individual KE decomposition (#228) by @vtushar06 in #623

The compute_kinetic_energy function is the newest addition to the movement.kinematics module. It can be used to derive the total kinetic energy per individual, or decompose it into "translational" (centre-of-mass motion) and "internal" components (motion of keypoints relative to individual's centre-of-mass). Thanks to @vtushar06 for driving this forward.
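The decomposition described above follows König's theorem: total kinetic energy equals the energy of the centre-of-mass motion plus the energy of motion relative to the centre of mass. A NumPy sketch of the idea — with equal unit masses per keypoint as an illustrative assumption, not necessarily what compute_kinetic_energy does:

```python
import numpy as np

# Keypoint positions over time for one individual: (time, keypoints, 2).
# Equal unit masses per keypoint are an illustrative assumption.
rng = np.random.default_rng(0)
pos = np.cumsum(rng.normal(size=(100, 5, 2)), axis=0)

vel = np.gradient(pos, axis=0)             # per-keypoint velocity
com_vel = vel.mean(axis=1, keepdims=True)  # centre-of-mass velocity

n_keypoints = pos.shape[1]
ke_total = 0.5 * (vel**2).sum(axis=(1, 2))
ke_translational = 0.5 * n_keypoints * (com_vel[:, 0] ** 2).sum(axis=-1)
ke_internal = 0.5 * ((vel - com_vel) ** 2).sum(axis=(1, 2))

# König's theorem: the two components sum exactly to the total,
# because the relative velocities average to zero by construction.
assert np.allclose(ke_total, ke_translational + ke_internal)
```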

Improvements to documentation

Spearheaded by @lochhh.

  • Replace external links to movement docs with explicit targets by @lochhh in #642
  • Aggregate API docs for flat modules by @lochhh in #648
  • Add gallery of examples in API documentation by @lochhh in #644

Housekeeping

  • Contributors-Readme-Action: Update contributors list by @github-actions[bot] in #650
  • Bump akhilmhdh/contributors-readme-action from 2.3.10 to 2.3.11 by @dependabot[bot] in #651
  • Add downloads badge by @adamltyson in #652
  • [pre-commit.ci] pre-commit autoupdate by @pre-commit-ci[bot] in #653

Full Changelog: v0.8.2...v0.9.0

v0.8.2

22 Jul 16:45
5de6097


A hotfix to redeploy the website

Full Changelog: v0.8.1...v0.8.2