Using Kinect for behavioral analysis of primates

  • Question

  • Hi,

    Is it theoretically possible to use Kinect to analyze limb movements in an animal such as a monkey? If so, how would one go about doing it? Thank you.

    Thursday, July 16, 2015 1:16 PM

Answers

All replies

  • Hi,

    Yes, this is theoretically possible.  While Kinect's body tracking is tuned for human proportions, you can use the depth stream to train an algorithm to track other primates.

    Thanks,

    David

    Thursday, July 16, 2015 5:56 PM
    Moderator
  • Yes, this is theoretically possible.  While Kinect's body tracking is tuned for human proportions, you can use the depth stream to train an algorithm to track other primates.

    In this sense, tracking anything "big enough" (relative to the Kinect's depth accuracy) would be theoretically possible. If the tracking algorithm has to be built from depth data, I would say tracking primates is no different from tracking felines.
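    A back-of-the-envelope sketch of the "big enough" question (my own illustration, not Kinect SDK code): roughly how many depth pixels would an object of a given size cover at a given distance, assuming the commonly cited Kinect v2 depth camera specs (512x424 pixels, about a 70.6-degree horizontal field of view)?

```python
import math

# Assumed Kinect v2 depth camera specs (commonly cited figures,
# not constants taken from the SDK).
H_PIXELS = 512       # horizontal depth resolution
H_FOV_DEG = 70.6     # horizontal field of view in degrees

def pixels_across(object_m, distance_m):
    """Approximate horizontal pixel footprint of an object of width
    object_m (meters) seen from distance_m (meters)."""
    scene_width_m = 2 * distance_m * math.tan(math.radians(H_FOV_DEG / 2))
    return H_PIXELS * object_m / scene_width_m

# A 10 cm limb seen from 2 m covers roughly 18 pixels horizontally,
# which gives a feel for how small a trackable feature can be.
print(round(pixels_across(0.10, 2.0)))
```

    A cat- or monkey-sized subject at typical indoor distances should cover hundreds of pixels, which is why whole-body segmentation from depth is plausible for both.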

    The interesting point would be, is it possible to track primates with the CURRENT body detection algorithm? Or in other words, how different from "standard human" can an animal be and still get recognized?

    Friday, July 17, 2015 12:21 PM
  • Thanks for your helpful replies.

    How difficult is it to modify the body detection algorithms in order to track primates? Also, I understand there are full-body and seated modes. Are there ways to modify the settings to look for and detect only a limb, especially if the primate is seated in a chair without the full skeleton exposed? Thanks again.

    Friday, July 17, 2015 4:17 PM
  • You can't just change settings on the body tracker.

    These types of trackers are based on machine learning algorithms and are trained on many thousands of labeled input frames; that is how they can detect body parts in novel frames.

    To give you an idea, this is the paper explaining how it works, in case you want to recreate something similar from scratch: http://research.microsoft.com/pubs/145347/BodyPartRecognition.pdf
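    For a flavor of what that paper does: its core building block is a depth-comparison feature, f(I, x) = d(x + u/d(x)) - d(x + v/d(x)), where the pixel offsets u and v are scaled by the depth at x so the feature is roughly invariant to how far the subject stands from the sensor; thousands of such features feed a random decision forest. A minimal sketch of just the feature (my own illustration, not the paper's code):

```python
import numpy as np

BACKGROUND = 10_000.0  # large constant returned for probes off the image

def depth_feature(depth, x, u, v):
    """Depth-comparison feature at pixel x=(row, col) with offsets u, v,
    each offset divided by the depth at x for depth invariance."""
    def probe(offset):
        r = int(x[0] + offset[0] / depth[x])
        c = int(x[1] + offset[1] / depth[x])
        if 0 <= r < depth.shape[0] and 0 <= c < depth.shape[1]:
            return depth[r, c]
        return BACKGROUND       # out-of-bounds probes read as "far away"
    return probe(u) - probe(v)

d = np.full((100, 100), 2.0)   # flat wall 2 m away
d[60, 50] = 1.0                # one nearer pixel below the query point
print(depth_feature(d, (50, 50), (20, 0), (-20, 0)))  # prints -1.0
```

    The paper's contribution is not this feature by itself but training a forest over huge labeled datasets so that such simple comparisons, in aggregate, classify each pixel into a body part.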


    Brekel

    Friday, July 17, 2015 6:23 PM
    Moderator
    I have read several research articles where labs are using the Kinect to analyze primate behavior, so it certainly seems possible; I am just wondering how one would go about doing it. Also, here is a YouTube video of someone tracking their lizard:

    https://www.youtube.com/watch?v=HQuHZNK4gUM

    How was this done? Thanks a lot.

    Sunday, July 19, 2015 10:54 PM
  • How difficult is it to modify the body detection algorithms in order to track primates?

    A new tracking algorithm will take a significant amount of work. Have a look at the paper Brekel linked. Note also that the lizard is being tracked as a "whole", not with body-part detection (as you intend to do with primates), so it is a much simpler task. But in general it requires a procedure similar to the one described in the paper: using depth information to train a detection algorithm.

    Also, I understand there is full body and seated modes. Are there ways to modify the settings only to look for and detect a limb, especially if the primate is seated in a chair without exposure to the full skeleton? Thanks again.

    No: Kinect v1 had these modes, but Kinect v2 removed them; now there is only one mode. For humans (I'm not a zoologist, but we are also primates, aren't we?) the requirement for body tracking is that the sensor can see the head and the shoulders of the user.
    Monday, July 20, 2015 7:50 AM
  • Thanks for the response.

    I apologize for not being very clear about the upper-limb detection. We do not intend to view the whole primate and have the program pick the upper limb out of the rest of the body. Instead, we want only the upper limb exposed and to track its trajectory; so in a sense it will be the "whole" limb rather than a limb picked out of the body. Would this be simpler? Thank you.

    Monday, July 20, 2015 8:39 PM
    I still don't fully understand the settings of your scenario, but yes, maybe that would be "simpler". However, building a new detection algorithm is not simple in any case. As I (and Brekel) already told you, this is not something you can get out of the box from the Kinect SDK or by tweaking a demo built for another purpose.
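    To illustrate why the isolated-limb setup could be "simpler": if the limb is the only thing inside a known depth range, each frame reduces to a depth threshold plus a centroid, and the per-frame centroids give a trajectory. A toy sketch with synthetic frames (not Kinect SDK code; real frames would come from the depth stream):

```python
import numpy as np

def limb_centroid(depth, near_mm, far_mm):
    """(row, col) centroid of pixels inside [near_mm, far_mm], else None."""
    mask = (depth >= near_mm) & (depth <= far_mm)
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return float(rows.mean()), float(cols.mean())

def track(frames, near_mm=500, far_mm=1200):
    """One centroid per frame; None where the limb is not visible."""
    return [limb_centroid(f, near_mm, far_mm) for f in frames]

# Synthetic sequence: a small near blob moving right over a far background.
frames = []
for step in range(3):
    f = np.full((424, 512), 4000, dtype=np.uint16)   # background at 4 m
    f[200:210, 100 + 20 * step:110 + 20 * step] = 800  # "limb" at 0.8 m
    frames.append(f)
print(track(frames))   # the column coordinate advances frame by frame
```

    With a real setup the hard parts are choosing the depth window, handling occlusions, and smoothing the trajectory, but the per-frame computation stays this cheap.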

    Maybe if we knew your programming background we could help you better. I mean, I would not explain the difficulties of a project in the same words (or with the same level of detail) to a zoologist with no programming knowledge who is thinking about paying software developers to build him detection software as to a recently graduated engineer who is thinking about starting a PhD in computer vision (just two examples off the top of my head).

    Tuesday, July 21, 2015 9:02 AM
  • Hi, thanks for your response.

    I have an engineering background and some basic programming skills in Matlab and C (from many years ago). Ideally, it would be great to collaborate with someone interested in customizing the Kinect for these purposes, as I imagine many labs around the country would greatly benefit: behavioral analysis is becoming more and more important for results, and the current commercial products are far too expensive. In the meantime, would you recommend purchasing a Kinect and starting to fiddle with it on my own? I am eager to learn, but if the learning curve to customize it myself is prohibitively steep, I am not sure I want to embark on a lost cause.

    Tuesday, July 21, 2015 4:03 PM
    I think Kinect is not an expensive technology (though that depends on your budget), so if you can afford it, the first thing I would try is testing primate tracking with the current (human) body tracking algorithm. Remember to check the Kinect system requirements and whether your current computer meets them.

    The learning curve for Kinect itself is not super-steep, but (if the previous approach does not work) you would not be able to use the Kinect for anything more than obtaining the depth images. Then you will have two options:

    1. Use an existing feature-tracking algorithm (from another library) and train it. This is not trivial; you will have to experiment with your scenario and build appropriate training sets. Matlab has a computer vision toolbox (but I only know it exists; I have never worked with it). OpenCV is another popular (and open-source) computer vision library.

    2. Develop your own feature-tracking algorithm (and train it as well). This will involve a lot of time (research, implementation, testing...).
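    To make the workflow both options share concrete, here is a toy stand-in (not OpenCV, and not the Matlab toolbox): extract features from labeled depth data, fit a classifier, then classify pixels in new frames. The "feature" here is just a pixel's raw depth and the classifier a nearest-class-mean rule; a real system would use richer features (e.g. the depth-comparison features from the paper Brekel linked) and a decision forest.

```python
import numpy as np

def train(features, labels):
    """Fit a nearest-class-mean model: mean feature value per label."""
    return {lab: features[labels == lab].mean() for lab in np.unique(labels)}

def predict(model, feature):
    """Assign the label whose class mean is closest to the feature."""
    return min(model, key=lambda lab: abs(model[lab] - feature))

# Toy labeled training set: 'limb' pixels are near (in mm), 'bg' pixels far.
feats = np.array([800, 850, 790, 3900, 4100, 4000], dtype=float)
labs = np.array(['limb', 'limb', 'limb', 'bg', 'bg', 'bg'])
model = train(feats, labs)
print(predict(model, 900), predict(model, 3500))   # limb bg
```

    The expensive part in practice is exactly what the replies above warn about: collecting and labeling enough depth data for training, not the classifier code itself.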

    I think this is an interesting project, but some expertise is also needed (or the time to acquire it), and that is something I think you will not find "for free".

    Wednesday, July 22, 2015 11:51 AM
    Hey, I'm wondering whether you would be able to send me, or post here, the research articles that you've mentioned; I am very interested in conducting similar research and would like to know more. Also, is there any more information about the lizard tracking video? I have tried to contact the poster without luck.

    Thank you very much, 

    much appreciated, Brian

    Thursday, June 30, 2016 3:43 PM