Getting Started
This guide explains the end-to-end workflow and links to the relevant APIs. It is intentionally minimal and focuses on "what" and "where", not code.

- For detailed methods and syntax examples, see API.
- For code pipeline examples, see Examples.
Core objects

- `Tracking`: store and preprocess single-view tracked keypoints, with auto-generated metadata.
- `TrackingMV`: like `Tracking` but multi-view, plus calibration (for 3D).
- `Features` (generated from a `Tracking` object): derive per-frame signals from a single `Tracking` (e.g. speeds, distances, location booleans, behavioural clusters).
- `Summary` (generated from a `Features` object): derive scalar/statistical results from a single `Features` (e.g. average speeds, total distances, time in a location, behavioural flow analysis stats).
Collections

- `TrackingCollection`: batch load/process and group `Tracking` (or `TrackingMV`) objects.
- `FeaturesCollection` (generated from a `TrackingCollection` object): batch process and group `Features` objects, and perform whole-dataset operations (e.g. behavioural clustering from time-series embeddings).
- `SummaryCollection` (generated from a `FeaturesCollection` object): a mapping of handle → `Summary`, with grouping/batch helpers.
Collection helpers

- `.groupby` and `.flatten` allow dynamic reorganisation of collections based on arbitrary subsets of `tags` assigned to individual elements. Groupings persist when generating e.g. a `FeaturesCollection` from a `TrackingCollection`.
- Flexible collection indexing by handle (e.g. `coll['recording1']`), integer (e.g. `coll[0]`) and slice (e.g. `coll[0:2]`).
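This indexing behaviour can be sketched with a minimal stand-in class (plain Python, illustrative only; `MiniCollection` is not part of the library):

```python
class MiniCollection:
    """Toy stand-in showing handle, integer and slice indexing."""

    def __init__(self, items):
        # items: mapping of handle -> object, insertion-ordered
        self._items = dict(items)

    def __getitem__(self, key):
        if isinstance(key, str):            # handle, e.g. coll['recording1']
            return self._items[key]
        handles = list(self._items)
        if isinstance(key, slice):          # slice, e.g. coll[0:2]
            return [self._items[h] for h in handles[key]]
        return self._items[handles[key]]    # integer, e.g. coll[0]


coll = MiniCollection({'recording1': 'data1', 'recording2': 'data2', 'recording3': 'data3'})
print(coll['recording1'])  # data1
print(coll[0])             # data1
print(coll[0:2])           # ['data1', 'data2']
```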
General helpers

- All core objects and collections have `.save` and `.load` methods that preserve all data/metadata/groupings.
- Objects and collections allow `.loc[]` and `.iloc[]` (batch) slicing and indexing of the DataFrames in all `Tracking`, `TrackingCollection`, `Features` and `FeaturesCollection` objects.
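Since these objects wrap standard pandas DataFrames, `.loc[]`/`.iloc[]` follow normal pandas semantics. As a quick refresher (plain pandas, not the library itself; the column names are invented):

```python
import pandas as pd

# A toy per-frame tracking DataFrame: one row per frame.
df = pd.DataFrame(
    {"nose_x": [10.0, 11.0, 12.0, 13.0], "nose_y": [5.0, 5.5, 6.0, 6.5]},
    index=[0, 1, 2, 3],  # frame numbers
)

print(df.iloc[0:2])          # positional: the first two frames
print(df.loc[2:, "nose_x"])  # label-based: frames 2 onwards, one column
```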
Typical workflow
1. Load a dataset of single-view tracking files from DeepLabCut: `TrackingCollection.from_dlc_folder`.
2. Add tags to each recording (e.g. 'treatment', 'genotype'): `TrackingCollection.add_tags_from_csv`.
3. Group the collection by any subset of tags: `TrackingCollection.groupby` (grouping persists upon `FeaturesCollection` and `SummaryCollection` generation).
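Conceptually, grouping by an arbitrary subset of tags amounts to building a key from the chosen tag values; a minimal sketch in plain Python (toy data, not the library's implementation):

```python
from collections import defaultdict

# Toy recordings with tags (illustrative; the real objects carry tags as metadata).
recordings = {
    "rec1": {"treatment": "drug", "genotype": "wt"},
    "rec2": {"treatment": "saline", "genotype": "wt"},
    "rec3": {"treatment": "drug", "genotype": "ko"},
}


def group_by_tags(items, tag_keys):
    """Group handles by the values of an arbitrary subset of tags."""
    groups = defaultdict(list)
    for handle, tags in items.items():
        key = tuple(tags[k] for k in tag_keys)
        groups[key].append(handle)
    return dict(groups)


by_treatment = group_by_tags(recordings, ["treatment"])
print(by_treatment)  # {('drug',): ['rec1', 'rec3'], ('saline',): ['rec2']}
```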
4. Perform various QA checks:
   - Ensure recording length is as expected: `Tracking.time_as_expected`
   - Plot tracked point trajectories: `TrackingCollection.plot`
5. Perform various batch pre-processing steps:
   - Remove low-likelihood tracked points: `Tracking.filter_likelihood`
   - Smooth data: `Tracking.smooth_all`
   - Interpolate gaps: `Tracking.interpolate`
   - Rescale pixels to metres: `Tracking.rescale_by_known_distance`
   - Trim the start/end of the recording: `trim`
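The operations behind these pre-processing steps can be sketched with plain pandas on a toy single-keypoint track; the likelihood threshold, smoothing window and pixel-to-metre factor below are arbitrary assumptions, not the library's defaults:

```python
import numpy as np
import pandas as pd

# Toy single-keypoint track with per-frame likelihoods (as in DLC output);
# frame 2 is a low-confidence jump.
df = pd.DataFrame({
    "x":          [10.0, 10.5, 50.0, 11.5, 12.0, 12.5],
    "y":          [ 5.0,  5.2, 40.0,  5.6,  5.8,  6.0],
    "likelihood": [0.99, 0.98, 0.10, 0.97, 0.99, 0.98],
})

# 1) Remove low-likelihood points (threshold is an arbitrary assumption).
low = df["likelihood"] < 0.9
df.loc[low, ["x", "y"]] = np.nan

# 2) Interpolate the gaps left behind.
df[["x", "y"]] = df[["x", "y"]].interpolate(limit_direction="both")

# 3) Smooth with a small centred rolling mean.
df[["x", "y"]] = df[["x", "y"]].rolling(3, center=True, min_periods=1).mean()

# 4) Rescale pixels to metres using a known distance (here: 100 px == 0.5 m).
df[["x", "y"]] *= 0.5 / 100.0
```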
6. Generate a `FeaturesCollection` object from the `TrackingCollection`: `FeaturesCollection.from_tracking_collection`.
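As an illustration of the kind of per-frame signal a `Features` object derives (e.g. speed from positions), a plain-NumPy sketch on a toy trajectory; the frame rate and coordinates are invented:

```python
import numpy as np

fps = 30.0  # assumed frame rate

# Toy keypoint trajectory in metres, one row per frame.
xy = np.array([[0.0, 0.0], [0.0, 0.1], [0.1, 0.1], [0.3, 0.1]])

step = np.linalg.norm(np.diff(xy, axis=0), axis=1)  # per-frame displacement (m)
speed = step * fps                                  # per-frame speed (m/s)
print(speed)  # [3. 3. 6.]
```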
7. Calculate various features and store them with auto-generated names and metadata:
   - Distance and movement
   - Boundaries and locations:
     - Define a static boundary from tracked points: `Features.define_boundary`
     - Static membership: `Features.within_boundary_static`
     - Dynamic membership: `Features.within_boundary_dynamic`
     - Boundary area: `Features.area_of_boundary`
     - Distance to boundary (static/dynamic): `Features.distance_to_boundary_static`, `Features.distance_to_boundary_dynamic`
   - Orientation
   - Thresholds
   - Embeddings and clustering:
     - Build time-shifted embeddings: `Features.embedding_df`
     - Batch k-means on embeddings: `FeaturesCollection.cluster_embedding`
     - Assign to precomputed centroids: `Features.assign_clusters_by_centroids`
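The embedding-and-clustering idea can be sketched in plain NumPy: build a lagged (time-shifted) matrix from a per-frame signal, then cluster its rows. The mini k-means below is illustrative only, not the library's implementation, and the signal is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-frame speed signal: a slow phase followed by a fast phase.
speed = np.concatenate([rng.normal(1.0, 0.1, 100), rng.normal(5.0, 0.1, 100)])

# Time-shifted embedding: row t holds the signal at frames t, t+1, ..., t+4.
lags = 5
emb = np.stack([speed[i : len(speed) - lags + 1 + i] for i in range(lags)], axis=1)

# Minimal k-means on the embedding rows (deterministic init: one seed per phase).
k = 2
centroids = emb[[0, -1]].copy()
for _ in range(20):
    labels = np.argmin(((emb[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
    centroids = np.stack([emb[labels == j].mean(axis=0) for j in range(k)])

# Frames from the slow and fast phases fall into different clusters.
print(labels[0], labels[-1])
```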
8. Generate a `SummaryCollection` object from the `FeaturesCollection`: `SummaryCollection.from_features_collection`.
9. Generate summary statistics and export:
   - Per-recording metrics (batched over the collection): `Summary.time_true`, `Summary.time_false`, `Summary.total_distance`, `Summary.transition_matrix`, `Summary.count_state_onsets`, `Summary.time_in_state`
   - Collate scalar outputs into a tidy table: `SummaryCollection.to_df`
   - Behaviour Flow Analysis on grouped collections: `SummaryCollection.bfa`, with post-processing stats via `SummaryCollection.bfa_stats`
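To make the per-recording metrics concrete, a plain-NumPy sketch of three of them (time a boolean is true, total distance, and a state transition matrix) on toy per-frame outputs; the frame rate and data are invented:

```python
import numpy as np

fps = 30.0  # assumed frame rate

# Toy per-frame outputs: a boolean location feature, state labels, positions (m).
in_zone = np.array([True, True, False, False, True, True, True, False])
states = np.array([0, 0, 1, 1, 0, 0, 2, 2])
xy = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, 1.0],
               [1.0, 0.0], [2.0, 0.0], [2.0, 2.0], [2.0, 2.0]])

# Seconds for which the boolean feature is True.
time_true = in_zone.sum() / fps

# Total path length over the recording.
total_distance = np.linalg.norm(np.diff(xy, axis=0), axis=1).sum()

# Frame-to-frame state transition counts (rows: from-state, cols: to-state).
n = states.max() + 1
trans = np.zeros((n, n), dtype=int)
np.add.at(trans, (states[:-1], states[1:]), 1)
```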