Beginner pondering about starting with simple face tracking

  • General discussion

  • Hello everyone!

    Backstory: I do not own a Kinect for Windows v2 yet, but I recently had an idea: could I control my second monitor's video player just by watching it, so that it would, for example, pause when I'm too busy to watch? I'm often playing in fullscreen mode on my main monitor, so the play/pause buttons are not quickly available. I thought about setting up a regular webcam to track my face/eyes on one axis to control this, or even just to trigger a hotkey, but then I remembered there's the new Kinect!

    I've been tempted to get one to help with 3D animating as well, and with its infrared sensor it could even track my face/eyes in the dark. If this could be implemented fairly easily, I could perhaps go on to expand the implementation to voice commands, for example for a music player.

    As a student, a 200€ investment is a bit risky, especially when I do not know whether my idea would be too complicated to implement without previous experience, so here I am asking for opinions.


    Main point:

    I'm familiar with Java, with 3 years of university experience, plus hobby scripting with AutoIt and game scripting with C# in Unity3D, so...

    1. Would you consider it hard to start by making a simple program that only gets the angle of my face/eyes on one axis? I'd use this value from another programming language to implement the control of my computer.

    2. Can you think of any traps that are challenging for beginners in a case like this?

    3. Is there sample code for simple face or eye angle reading?

    4. As it would be for PC usage, what's the minimum range at which the Kinect can recognize a face? Would 70cm be too close?

    5. Approximately how much of my PC's CPU and GPU does the Kinect use for this kind of activity, or does it run purely on its own hardware?



    Saturday, September 13, 2014 4:01 AM

All replies

  • I think a lot of what you want can be achieved with the latest sensor and the tech that's available. Instead of tracking the eyes, you can abstract one level higher and look for head direction using the face tracking APIs. There is an additional property, "isEngaged", that may be of use: it can be queried and is true when you are facing the sensor.
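    On the angle question: the face tracking result reports head rotation as a quaternion rather than as angles, so you convert it yourself. A minimal sketch of that conversion (in Python for illustration, since the hotkey side could live in any language; assumes a unit quaternion in x, y, z, w order):

    ```python
    import math

    def quaternion_to_pitch_yaw(x, y, z, w):
        """Convert a unit rotation quaternion to (pitch, yaw) in degrees.

        Pitch (nod up/down) and yaw (turn left/right) are usually the
        axes you care about for "is he looking at the screen or not".
        """
        pitch = math.degrees(math.atan2(2.0 * (y * z + w * x),
                                        w * w - x * x - y * y + z * z))
        yaw = math.degrees(math.asin(2.0 * (w * y - x * z)))
        return pitch, yaw
    ```

    Your host program would read the quaternion each face frame, run it through something like this, and fire the play/pause hotkey when the yaw leaves a threshold you pick.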

    Have a look at the face samples in the SDK Browser. You don't need a sensor installed to look at the code. If you know someone who has a sensor, they can record a Kinect Studio clip that you can play back as a way to test.

    The Kinect's effective range is 0.5 to 4.5 meters, but depth can go out to 8 meters.

    If you have a relatively good DX11-based GPU, the load will depend mainly on the CPU and on data throughput from the USB 3.0 bus. Have a look at the common issues people have when getting started (USB 3.0 host controller and GPU/DX11 support).


    Carmine Sirignano - MSFT

    Monday, September 15, 2014 6:22 PM