Assessing animal behavior with machine learning: New DeepLabCut tutorial

1 minute read

I have added a hands-on tutorial to the Assessing Animal Behavior lecture. The tutorial covers the GUI-based use of DeepLabCut, a popular open-source software package for markerless pose estimation of animals. DeepLabCut uses deep learning to track the movements of animals in videos, enabling the study of behavior in a high-throughput and multi-modal fashion.

DeepLabCut logo. Source: deeplabcut.github.io

The tutorial provides a step-by-step guide through the default workflow: setting up a project, labeling training data, training a neural network, and analyzing the results. It aims to introduce students to cutting-edge methods for assessing animal behavior and to give them practical experience with machine learning tools for behavioral analysis.
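For readers who prefer scripting over the GUI, the same default workflow maps onto DeepLabCut's documented top-level Python functions. The sketch below is only an outline: the project name, experimenter, and video paths are placeholders, and optional parameters are omitted.

```python
# Sketch of the default DeepLabCut workflow via the Python API
# (the tutorial itself uses the GUI; names and paths are placeholders).
import deeplabcut

# 1. Set up a project; returns the path to the project's config.yaml.
config_path = deeplabcut.create_new_project(
    "demo-project", "student", ["videos/mouse_session1.mp4"],
    copy_videos=True,
)

# 2. Pick frames to label and annotate body parts in the labeling GUI.
deeplabcut.extract_frames(config_path, mode="automatic", algo="kmeans")
deeplabcut.label_frames(config_path)

# 3. Create the training dataset, train, and evaluate the network.
deeplabcut.create_training_dataset(config_path)
deeplabcut.train_network(config_path)
deeplabcut.evaluate_network(config_path)

# 4. Analyze new videos and plot the results.
new_videos = ["videos/mouse_session2.mp4"]
deeplabcut.analyze_videos(config_path, new_videos)
deeplabcut.filterpredictions(config_path, new_videos)
deeplabcut.plot_trajectories(config_path, new_videos)
deeplabcut.create_labeled_video(config_path, new_videos)
```

Each call corresponds to one step of the tutorial, so the GUI and the script can be mixed freely, for example labeling frames in the GUI and running training and analysis from a script.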

The target audience is neuroscience students with little or no programming experience. The tutorial will enable them to use DeepLabCut for their own research projects and to gain insights into the possibilities and limitations of machine learning-based behavioral analysis.

Please feel free to share the tutorial with students or colleagues who might be interested in using DeepLabCut for their own projects.

Example applications of DeepLabCut: multi-animal arena tracking, mouse whiskers tracking, mouse pupil tracking, mouse paw tracking, and mouse locomotion tracking. Of course, other animals and body parts can be tracked as well (even human body parts). Source: deeplabcut.github.io

Plots from one example “plot-poses” folder: the trajectories of the tracked body parts, a histogram of the tracked body parts’ x- and y-coordinates, the likelihood of the predictions as a function of frame index, and the filtered x- and y-coordinates of the tracked body parts (points whose likelihood falls below the pcutoff threshold are set to missing).
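As an aside for the scripting-minded: assuming the standard single-animal output layout (one pandas DataFrame per analyzed video with a scorer/bodyparts/coords column hierarchy, where coords holds x, y, and likelihood), the pcutoff filtering shown in these plots boils down to masking low-likelihood predictions. A minimal sketch, with the file name and threshold chosen purely for illustration:

```python
import numpy as np
import pandas as pd

# Placeholder file name; DeepLabCut writes one .h5 per analyzed video.
df = pd.read_hdf("mouse_session2DLC_resnet50_demo.h5")

pcutoff = 0.6  # plays the same role as the pcutoff value in config.yaml

scorer = df.columns.get_level_values("scorer")[0]
bodyparts = df[scorer].columns.get_level_values("bodyparts").unique()

# Set x/y to missing wherever the network's likelihood is below pcutoff,
# mirroring what the filtered trajectory plots show.
for bp in bodyparts:
    low_confidence = df[(scorer, bp, "likelihood")] < pcutoff
    df.loc[low_confidence, [(scorer, bp, "x"), (scorer, bp, "y")]] = np.nan
```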

Further readings

