Getting Started

This guide explains the end-to-end workflow and links to the relevant APIs. It is intentionally minimal and focuses on the "what" and "where"; the short code sketches below are illustrative only.

For detailed methods and syntax examples, see the API reference.

For full code pipeline examples, see the Examples page.

Core objects

  • Tracking: store and preprocess single-view tracked keypoints, with auto-generated metadata.
  • TrackingMV: like Tracking, but multi-view with calibration (for 3D).
  • Features: generated from a Tracking object; derives per-frame signals (e.g. speeds, distances, location booleans, behavioural clusters).
  • Summary: generated from a Features object; derives scalar/statistical results (e.g. average speeds, total distances, time in location, behavioural flow analysis stats).
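
A minimal sketch of how these three objects relate for a single recording. The import line and the single-object constructors shown here (Tracking.from_dlc_file, Features.from_tracking, Summary.from_features) are hypothetical placeholders, modelled on the collection-level methods named later in this guide; see the API reference for the real names.

```python
# Hypothetical single-recording flow -- the import path and constructor names
# are placeholders, not the documented API.
from yourpackage import Tracking, Features, Summary  # placeholder package name

trk = Tracking.from_dlc_file("mouse01_DLC.h5")   # hypothetical loader for one DLC file
feats = Features.from_tracking(trk)              # per-frame signals: speeds, distances, ...
summ = Summary.from_features(feats)              # scalar results: average speed, time in location, ...
```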

Collections

  • TrackingCollection: batch load/process and group Tracking (or TrackingMV) objects.
  • FeaturesCollection: generated from a TrackingCollection; batch process and group Features objects, and perform whole-dataset operations (e.g. behavioural clustering from time-series embeddings).
  • SummaryCollection: generated from a FeaturesCollection; a mapping of handle → Summary with grouping/batch helpers.
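
The collections follow the same lineage in batch form, using the constructors named in this guide; the import line and the folder path are assumptions.

```python
from yourpackage import TrackingCollection, FeaturesCollection, SummaryCollection  # placeholder package name

tracks = TrackingCollection.from_dlc_folder("data/dlc_output")    # batch-load DLC files (path is illustrative)
features = FeaturesCollection.from_tracking_collection(tracks)    # one Features per Tracking, groupings preserved
summaries = SummaryCollection.from_features_collection(features)  # handle -> Summary mapping
```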

Collection helpers

  • .groupby and .flatten allow dynamic reorganisation of collections based on arbitrary subsets of the tags assigned to individual elements. Groupings persist when generating, e.g., a FeaturesCollection from a TrackingCollection.
  • Flexible collection indexing by handle (e.g. coll['recording1']), integer (e.g. coll[0]) and slice (e.g. coll[0:2]).
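
Continuing from the collection built above, a sketch of indexing and regrouping; the handle 'recording1', the tag name 'genotype' and the exact .groupby/.flatten signatures are assumptions.

```python
rec = tracks['recording1']       # index by handle (illustrative handle)
first = tracks[0]                # integer index
pair = tracks[0:2]               # slice

grouped = tracks.groupby('genotype')  # regroup by a tag (signature is an assumption)
flat = grouped.flatten()              # collapse back to an ungrouped collection
```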

General helpers

  • All core objects and collections have .save and .load methods that preserve all data, metadata and groupings.
  • Tracking, TrackingCollection, Features and FeaturesCollection objects support .loc[] and .iloc[] for (batch) slicing and indexing of their underlying DataFrames.
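
For example (the file path, whether .load is a classmethod, and the return type of a batch .iloc slice are all assumptions):

```python
tracks.save("tracking_collection")                       # persists data, metadata and groupings
tracks = TrackingCollection.load("tracking_collection")  # restore it later

first_100 = tracks.iloc[:100]   # batch slice: first 100 rows of each recording's DataFrame
```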

Typical workflow

  1. Load a dataset of single-view tracking files from DeepLabCut: TrackingCollection.from_dlc_folder

  2. Add tags to each recording (e.g. 'treatment', 'genotype'): TrackingCollection.add_tags_from_csv

  3. Group the collection by any subset of tags: TrackingCollection.groupby (grouping persists upon FeaturesCollection and SummaryCollection generation)

  4. Perform various QA checks:

    1. ensure recording length is as expected: Tracking.time_as_expected
    2. plot tracked point trajectories: TrackingCollection.plot

  5. Perform various batch pre-processing steps:

    1. remove low-likelihood tracked points: Tracking.filter_likelihood
    2. smooth data: Tracking.smooth_all
    3. interpolate gaps: Tracking.interpolate
    4. rescale pixels to metres: Tracking.rescale_by_known_distance
    5. trim the start/end of the recording: Tracking.trim

  6. Generate a FeaturesCollection object from the TrackingCollection: FeaturesCollection.from_tracking_collection

  7. Calculate various features and store them with auto-generated names and metadata (see the API reference for the available Features methods).

  8. Generate a SummaryCollection object from the FeaturesCollection: SummaryCollection.from_features_collection

  9. Generate summary statistics and export the results (see the end-to-end sketch below).
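
Putting the steps together, an end-to-end sketch might look like the following. The class and method names are the ones listed above; the import line, all arguments (paths, thresholds, windows, distances) and the ability to iterate over a collection are assumptions, so check the API reference for the real signatures.

```python
from yourpackage import TrackingCollection, FeaturesCollection, SummaryCollection  # placeholder package name

# 1. Load single-view DeepLabCut tracking files (folder path is illustrative)
tracks = TrackingCollection.from_dlc_folder("data/dlc_output")

# 2. Attach per-recording tags such as 'treatment' or 'genotype' (file name is illustrative)
tracks.add_tags_from_csv("recording_tags.csv")

# 3. Group by any subset of tags; the grouping persists downstream
tracks = tracks.groupby('genotype')

# 4. QA: plot trajectories for the whole collection
tracks.plot()

# 4-5. Per-recording QA and pre-processing (all arguments below are assumptions)
for trk in tracks:                          # iterating a collection is an assumption
    trk.time_as_expected(600)               # e.g. expect ~600 s of data (signature is an assumption)
    trk.filter_likelihood(0.9)              # drop low-likelihood points
    trk.smooth_all(window=5)                # smooth all keypoint traces
    trk.interpolate()                       # fill the resulting gaps
    trk.rescale_by_known_distance(0.3)      # e.g. a 0.3 m reference object
    trk.trim(start=0, end=600)              # keep the first 10 minutes

# 6-8. Derive per-frame features, then scalar summaries
features = FeaturesCollection.from_tracking_collection(tracks)
summaries = SummaryCollection.from_features_collection(features)

# 7 and 9. Feature calculations, summary statistics and export calls are
# package-specific; see the API reference and the Examples page.
```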