AI for Wildlife Conservation: how animal pose estimation with DeepLabCut can help

DeepLabCut Blog
6 min read · Feb 26, 2022


Animal diversity is declining at unprecedented rates, raising concerns about wildlife conservation. Acquiring data and monitoring wildlife at scale is necessary to understand animal behavior, migration patterns, and habitat selection, as well as to protect animals from illegal trafficking and hunting. Today, this is becoming possible with advances in hardware and deep learning. Also see this new piece in The Guardian on how AI is helping with conservation efforts.

Traditionally, the arduous, labor-intensive, and dangerous task of data acquisition was done by human field workers. Advances in sensor technology have led to the widespread adoption of cheap, easy-to-use, widely available devices such as cameras, accelerometers, and GPS tags to collect data from wild animals. Satellite imaging and drones have also enabled data collection at scales previously impossible. This development has resulted in an explosive increase in available data. Nevertheless, wildlife researchers still lack the tools they need to process this data and extract information and knowledge from it. Such knowledge is necessary to take informed actions that could reverse our planet's decline in animal diversity, richness, and abundance.

Our recently published paper in Nature Communications, with collaborators from various world-leading institutions, discusses the open challenges in wildlife conservation, as well as how Machine Learning (ML) is currently being used to help solve them. We discuss the sensors used for collecting data, issues with current ML approaches, and future perspectives on solving these issues. Our key takeaway is the urgent need for greater collaboration between ecologists and computer scientists, so that the rich domain knowledge ecologists hold is integrated into ML models, resulting in more robust, accurate, and reliable algorithms.

Current uses of Machine Learning in wildlife conservation

ML tools are already being adopted by ecologists, enabling them to monitor, understand, and protect animals at an unprecedented scale.

Some use cases include:

  • Animal detection and counting, which helps monitor populations and their migration patterns, as well as detect poachers and notify rangers.
  • Identification of individual animals from their fur/skin patterns or scars, which helps track and protect species that are near extinction, and study social relationships among animals. ML algorithms are already much better and faster at this task than human experts.
  • 3D shape recovery of animals, which helps determine their health, age, and reproductive status.
  • Reconstructing the pose of animals (the position and rotation of their limbs), which helps monitor and better understand their locomotion and behavior.
  • Environment reconstruction, which gives context on the studied animals that is vital for understanding their behavior, migration patterns, habitat preferences, etc.
  • Algorithms that model species diversity, abundance, and interactions, which can help us understand causal relationships across species as well as their environment.

MegaDetector is one such example of using ML in the wild:

DeepLabCut™️ for pose estimation and DLC2Kinematics for kinematic analysis of wild animals

Part of DeepLabCut's mission is to democratize the use of Artificial Intelligence (AI) tools by providing an accessible, easy-to-use tool for animal pose estimation. The resulting data can then be used for behavioral and kinematic analyses in any kind of scenario. We, as the developer team, put a lot of effort into making DeepLabCut usable by non-ML experts, enabling ecologists and behavioral researchers to analyze data they collect in the wild.

Here is an example of using DeepLabCut to perform pose estimation on a wild elephant. Detailed guides and explanations on how to install and use DeepLabCut for single and multiple animals are available in our documentation.

First, we can run DeepLabCut entirely from a graphical user interface (GUI) if desired, by launching it from the command line (i.e., within a terminal) using:

python -m deeplabcut

This opens the GUI software:

DLC GUI landing page

Then we can create a new project:

GUI project management tab
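
If you prefer to script this step, the GUI's project creation corresponds to a single Python call (a minimal sketch; the project name, experimenter, and video path below are placeholders):

import deeplabcut

# creates the project folder and its config.yaml,
# and returns the path to that config file
config = deeplabcut.create_new_project(
    "elephant-ai-blog-post",    # project name (placeholder)
    "your-name",                # experimenter (placeholder)
    ["/path/to/elephant.mp4"],  # video(s) to label (placeholder)
    copy_videos=True,           # copy the videos into the project folder
)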

To set the customized keypoints you want to label in your project, edit the config (configuration) file!

In the `config.yaml` file, we define the bodyparts we want to track, such as:

bodyparts:
- head-top
- trunk-base
- trunk-mid
- trunk-end
- trunk-nose
- front-left-leg-start
- front-left-leg-mid
- front-left-leg-foot
- front-right-leg-start
- front-right-leg-mid
- front-right-leg-foot
- back-left-leg-start
- back-left-leg-mid
- back-left-leg-foot
- back-right-leg-start
- back-right-leg-mid
- back-right-leg-foot
- tail-base
- tail-mid
- tail-end

*ProTip: don’t add spaces to the bodypart (aka keypoint) names.

Following the guided interface of the software, we extract and label frames to let the ML algorithm know what we are interested in detecting, then create a training dataset and train a deep neural network to detect the bodyparts.

The labeling interface, with all the user-defined bodyparts of a frame labeled.
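
For reference, these guided steps correspond to the following Python calls (a minimal sketch using the config path from above; all other arguments are left at their defaults):

import deeplabcut

# extract frames to label (k-means picks visually diverse frames)
deeplabcut.extract_frames(config)
# open the labeling GUI to annotate the bodyparts defined in config.yaml
deeplabcut.label_frames(config)
# bundle the labeled frames into a training dataset
deeplabcut.create_training_dataset(config)
# train the network; this can take a while, and a GPU is strongly recommended
deeplabcut.train_network(config)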

Now you have a trained model you can deploy on new videos!

To do so, we analyze the videos, and create a new video which shows the generated pose data:
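
In code, these steps look like this (a minimal sketch; the video path is a placeholder). We also filter the predictions, which we will use for plotting below:

import deeplabcut

videos = ["/path/to/elephant.mp4"]  # placeholder path

# run the trained network on new videos; writes the pose predictions to an .h5 file
deeplabcut.analyze_videos(config, videos)
# median-filter the predictions to smooth out jitter
deeplabcut.filterpredictions(config, videos)
# render a copy of the video with the predicted keypoints drawn on top
deeplabcut.create_labeled_video(config, videos, filtered=True)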

We can also plot the position and trajectories of each bodypart of the animal. Here we only plot three bodyparts of the elephant’s trunk:

This can be done through the user interface, but for those interested, here is the Python command to do so:

deeplabcut.plot_trajectories(config, videos, filtered=True, displayedbodyparts=["trunk-mid", "trunk-end", "trunk-base"])

Notice how we used the filtered data for the plotting.

The DeepLabCut Model Zoo: using pretrained networks in the Wild

Apart from training your own models for specific animals, we offer pre-trained models for several animals and specific use cases in our Model Zoo, which you can use on your videos via our Google Colab notebook! Anyone can contribute to the Model Zoo by labeling frames in our online web app. Sharing our knowledge and resources with each other is necessary to advance technology fast enough to save our ecosystem.
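
If you prefer to run a Model Zoo model locally rather than in Colab, DeepLabCut 2.x exposes this as create_pretrained_project (a sketch; check the docs for your version, as the exact arguments may differ):

import deeplabcut

# create a project pre-loaded with Model Zoo weights,
# analyze the given video, and render a labeled copy of it
deeplabcut.create_pretrained_project(
    "macaque-demo",            # project name (placeholder)
    "your-name",               # experimenter (placeholder)
    ["/path/to/macaque.mp4"],  # video(s) to analyze (placeholder)
    model="full_macaque",      # a Model Zoo model name
    analyzevideo=True,
    createlabeledvideo=True,
)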

Here is an example result of a macaque model generously contributed to the Model Zoo by Labuguen et al.:

MacaquePose in the DeepLabCut Model Zoo

You can also check out our video tutorial on YouTube, which covers the DeepLabCut process thoroughly, as well as our docs for even more information.

Getting kinematic information using the Mathis Labs' new package, DLC2Kinematics 📈

Deep learning powers a motion-tracking revolution — Nature 2019

DLC2Kinematics is a new helper toolkit with several functions for analyzing the pose data predicted by DeepLabCut. For this part, you need some Python knowledge to follow along (we also have a demo Jupyter Notebook available; read more about how we use these computational notebooks in Nature).

Using the notebook or a Python interpreter such as IPython, we first load the DeepLabCut data:

import dlc2kinematics

# path to the h5 file produced after analyzing a video
h5_file = "/home/mammoth/elephant-in-the-wild/elephant-trimmedDLC_resnet50_elephant-ai-blog-postFeb15shuffle1_15000_filtered.h5"
data, bodyparts, scorer = dlc2kinematics.load_data(h5_file)

Then we can compute the velocities and accelerations of the parts we are interested in:

# filter_window and order are optional arguments,
# but they improve the results in our case, so why not use them!
velocities = dlc2kinematics.compute_velocity(data, bodyparts=["trunk-mid"], filter_window=3, order=1)
accelerations = dlc2kinematics.compute_acceleration(data, bodyparts=["all"])

We can also compute joint information, such as angle:

joint_angles = dlc2kinematics.compute_joint_angles(data,joints_dict)
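
The joints_dict argument maps each named joint to the three keypoints that define it, with the angle measured at the middle keypoint. Here is a minimal sketch using the elephant bodyparts from above (the joint name is our own label):

# the 'knee' angle is measured at front-left-leg-mid, between the
# segments to front-left-leg-start and front-left-leg-foot
joints_dict = {
    "front-left-knee": ["front-left-leg-start", "front-left-leg-mid", "front-left-leg-foot"],
}
joint_angles = dlc2kinematics.compute_joint_angles(data, joints_dict)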

Plotting the data can be done like so:

dlc2kinematics.plot_velocities(data[scorer]["trunk-mid"], velocities)

Elephant Trunk Velocity

You can find more information on using DLC2Kinematics in our GitHub repo, https://github.com/AdaptiveMotorControlLab/DLC2Kinematics, and stay tuned for exciting new developments!

See you next time,

Your DeepLabCut Dev Team
🪐🐘 aka Aristotelis, the DeepLabCut Community Manager!
