The new DeepLabCut GUI is released. Ready to jump in?

Getting started in the new DLC GUI: a primer on how we made Spooky DeepLabCut 🎃👻

DeepLabCut Blog
8 min read · Dec 6, 2022

By Timokleia Kousi, Research Scientist

If you are 🆕 to DeepLabCut (DLC) and would like to experiment🔬 with it before applying it to your research, this blog post is for you✨!

Choose a short, fun video 📼, save it in a folder 📁 on your Desktop in .mp4 or .mov format, and let's get started 💯!

For your first try, choose a video with one animal/creature👻. Also, make sure the key points you want to track are easy to spot. It will decrease your labeling time and complexity!

I will use the same video we used on Halloween from the animated movie “The Nightmare Before Christmas” by Tim Burton. I call my video Mr. Jack.

Check it out on Twitter🐥!!

First, you need to install DLC on your PC 🖥️. You can find detailed instructions HERE. There are different ways to do it.

Tip💡: If you prefer to try it out before installing it, there is a Colab example with sample data HERE.

For this demo, I created a DLC Conda environment. My operating system is Windows 10, and my GPU is an NVIDIA GeForce GTX 1060 6GB. If you don’t have a GPU, you can always use your CPU. However, remember that you will need much more time👴 to train the model.

Let’s launch the GUI!

Open the program Anaconda Prompt and type:

conda env list

Now you can see a list of all your Conda environments. If you followed the installation process correctly, there should be one called DEEPLABCUT. We need to activate it 💣!

Type:

conda activate DEEPLABCUT

You should now see (DEEPLABCUT) on the left side of your command line.

Then, to get the latest GUI, run:

##for windows
pip install --upgrade --force-reinstall "deeplabcut[gui,tf]"==2.3rc3

##for linux
pip install --upgrade --force-reinstall 'deeplabcut[gui,tf]'==2.3rc3

To start the new GUI, type:

python -m deeplabcut

The GUI is launched 🎉🎉🎉!

Tip💡: From now on, we will do most of the work in the GUI. However, make sure to check your Anaconda Prompt often. It will help you understand what’s going on and spot and fix errors that might appear!

Now let’s create our first project. Click on Create New Project, and fill in the needed information:

I will name my project Halloween, add myself, Tim, as the experimenter👩‍🔬, and create the project on the Desktop. Then I will load the folder that contains my video Mr. Jack and press Create.

Fantastic! A new project has been created. Check your desktop folders. There must be a new folder with the name of your project.
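Tip💡: Everything the GUI does is also available from DeepLabCut's Python API. If you prefer a script, a minimal sketch of this step looks roughly like this (the paths below are placeholders, not the exact ones from my demo):

import deeplabcut

# Creates the project folder on the Desktop and returns the path to its config.yaml
config_path = deeplabcut.create_new_project(
    "Halloween",                                     # project name
    "Tim",                                           # experimenter
    [r"C:\Users\YourName\Desktop\Mr_Jack.mov"],      # list of video paths (placeholder)
    working_directory=r"C:\Users\YourName\Desktop",  # placeholder
    copy_videos=True,
)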

Before extracting frames to create our labeling dataset, we need to edit the configuration file. First, open the previously created project folder. There will be a file called config.yaml. Let's check it out; double-click on it.

You can play🕹️ with and edit most parameters in this file based on the result you are aiming for. Use your creativity! Learn more HERE!

As this is a kick-start demo, we will edit only the necessary ones to create a stunning result. 😍

First, watch your video and decide on the key points 🔑 you want to track and the connections between them in pairs to create a cool skeleton at the end💀.

For Mr. Jack, I listed eleven key points 🔑 (right eye, left eye, nose, mouth, neck, right hand, left hand, right elbow, left elbow, right arm, left arm) and nine connections. In addition, I added the eleven key points under bodyparts:

Then, I changed the skeleton plotting configuration based on the connections that fit my key points. I also set skeleton_color to white as it will look spookier👻, but for your video, black might be better.
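To give you an idea, the relevant sections of my config.yaml look roughly like this (the skeleton pairs below are only example connections; list whichever pairs make sense for your own key points):

bodyparts:
- right eye
- left eye
- nose
- mouth
- neck
- right hand
- left hand
- right elbow
- left elbow
- right arm
- left arm
skeleton:
- [right eye, nose]          # example pair
- [left eye, nose]           # example pair
- [right elbow, right hand]  # example pair
skeleton_color: white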

Fantastic🎉! Now save and close the configuration file. Let’s move on🐌.

On the menu bar, click on Extract Frames. In the new dialogue, press Extract Frames.

We could change some attributes in this dialogue, but that is out of the scope of this demo. For today, we will keep the pre-filled options.
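For reference, the Extract Frames button corresponds roughly to this API call (default-style settings shown):

import deeplabcut

# Automatically picks a varied set of frames from the video using k-means clustering
deeplabcut.extract_frames(
    config_path,         # path to your project's config.yaml
    mode="automatic",
    algo="kmeans",
    userfeedback=False,  # don't ask for per-video confirmation
)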

The following dialogue is called Label frames. Press Label Frames and select the folder that contains the frames you extracted in the previous step. It will be inside the labeled-data folder inside your project folder on the Desktop.

The path will look like this, where Halloween DLC-Tim 2022-11-08 is replaced by your own project name, experimenter, and date:

The selection will trigger the launch of the DLC napari window, the tool we will use to annotate our data.

You can learn more about our DLC napari plugin HERE! Also, find out more about napari HERE!

Let’s examine the tool ⚙️ a bit closer!

These are the important components:

Let’s start labeling 🎈! The goal is to create a key-points layer on top of our data. You can see the data layers on the left side of the window. There are two layers: one called images, which contains your data, and one called CollectedData, which will save your annotations. The CollectedData layer must be selected (blue) while labeling and saving.

Go to the toolbox 🛠️ at the top left, select the add points icon, and choose the size of the label points (use the slider). You are ready to add your labels. Check which body part is displayed at the bottom right of the window, then left-click on the image to place its label. Continue till you finish all your key points. Double-check that the colors match the selected body parts (there is a color reference at the bottom left of the window). You can also move or delete labels if they don't fit.

I would suggest you go through and play🤾‍♀️ with all the tools available in the window.

This is the result when I finished labeling the first frame:

When happy with the result, use the slider under the image to move to the next frame. Continue till you finish all the extracted frames. In my case, I had to label 19 images. At the 🔚, open the File menu and save the selected layers. The work is done!

Let’s move to the next dialogue, Create Training Dataset. As before, we will keep it simple! Press Create Training Dataset!

Success!! We have created our training dataset. You can see in the bottom left corner that the dataset was created successfully.
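If you are scripting instead of clicking, this step is a single call, roughly:

import deeplabcut

# Splits the labeled frames into train/test sets and writes the model configuration
deeplabcut.create_training_dataset(config_path)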

We are ready to train 🚂 the model!

The following dialogue, called Train, will use the labeled dataset we created to teach the model how to recognize the body parts we are tracking and apply the labels correctly to all the frames of our video.

The displayed attributes describe the training process. As before, we can keep the pre-filled options. However, if you are using a CPU, you might want to decrease the maximum iterations so training doesn't take too long. So go ahead and press OK!
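From a script, the equivalent call looks roughly like this (maxiters below is just an example value):

import deeplabcut

deeplabcut.train_network(
    config_path,
    displayiters=100,   # print a loss line every 100 iterations
    saveiters=10000,    # save a snapshot every 10,000 iterations
    maxiters=200000,    # example value; reduce it if you are training on a CPU
)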

The training process will take a while! So you will need to be patient 🙏 !

I let the model train overnight.

You can also use this time to learn more about DLC ❤️! HERE you can find our paper published in Nature Neuroscience to understand what is going on behind the scenes.

You can check the training process in the Anaconda Prompt window:

When the training process is finished, there will be a line like this:

The next step is to evaluate our trained model. At this step, we can see how well the model understood the labeled data and can label the rest of the video for us. So go ahead and try it out!
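The scripting equivalent, as a rough sketch:

import deeplabcut

# Compares the model's predictions against your hand labels and plots them for inspection
deeplabcut.evaluate_network(config_path, plotting=True)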

When you finish the evaluation, let’s move on to the analysis. First, in the Analyze videos dialogue, choose your video, which we saved on the Desktop at the beginning of the project, and press Analyze videos.

The analysis will trigger four pop-up windows with plots to appear. So go ahead and examine them🔭✨!
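If you prefer code, the analysis and the trajectory plots map roughly onto these two calls (the video path is a placeholder for wherever you saved yours):

import deeplabcut

video = r"C:\Users\YourName\Desktop\Mr_Jack.mov"  # placeholder path

# Runs the trained network over the whole video and saves the predictions
deeplabcut.analyze_videos(config_path, [video], save_as_csv=True)

# Generates trajectory plots for every tracked body part
deeplabcut.plot_trajectories(config_path, [video])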

Let’s check together my favorite one❤️!

Every body part we track is represented on the plot by a different color in (x,y) coordinates. You can see all the position changes of each one of the body parts throughout the video. 🆒

Try to play with the attributes in the analyze videos dialogue and create more plots😎!

Now let’s see the tracking on the original video. First, move to the Create videos dialogue. Here, choose your video, specify the video type (in my case, .mov), and press RUN 🏃‍♀️. It might take a few minutes... Don't forget to monitor the progress in the Anaconda Prompt window. When the process hits 100%, go to the videos folder inside your project folder and find your labeled video.
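Roughly, the Create videos step corresponds to:

import deeplabcut

# Overlays the predicted key points (and the skeleton) on the original video
deeplabcut.create_labeled_video(
    config_path,
    [video],
    videotype=".mov",    # match your own file type
    draw_skeleton=True,  # draw the skeleton connections defined in config.yaml
)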

If the result is different from what you are aiming for, you can go back and retrain your model for longer, or you can extract outliers (possible mistakes), correct them, and retrain your model on the newly created dataset.
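That refinement loop is also scriptable; a rough sketch of the sequence:

import deeplabcut

# 1. Pull out frames where the network's predictions look suspicious
deeplabcut.extract_outlier_frames(config_path, [video])

# 2. Correct those labels in the napari labeling window
deeplabcut.refine_labels(config_path)

# 3. Merge the corrections into the dataset and retrain
deeplabcut.merge_datasets(config_path)
deeplabcut.create_training_dataset(config_path)
deeplabcut.train_network(config_path)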

Please share your work with us! #DeepLabCut

Don’t forget to follow us on twitter🐥!!

