Scope and Possibilities of the Kinect SDK NUI API and Programming

  • General discussion

  • Hi All,


    I just started exploring the possibilities of the Kinect SDK. I wanted to know whether it is possible to simulate a car mechanic workshop environment, where the mechanic is the person facing the Kinect sensor and he replaces the tyres of the car or opens the door of the car, very similar to a real car mechanic workshop.


    Regards


    • Changed type Tomar2009 Wednesday, August 31, 2011 11:02 AM
    Wednesday, August 31, 2011 8:01 AM

All replies

  • Tomar,

    Could you be a little more specific and describe your application in more detail? Do you mean that the application would render 3D models of cars, and that users would interact with those models via gestures detected by the Kinect device? If so, this sounds like a very doable proposition, but you will probably need to perform some amount of skeleton and posture recognition yourself, directly from the raw depth stream, because some of the positions that mechanics get into are not the kinds of positions that the Kinect skeleton-tracking algorithms are currently optimized to recognize.

    If you mean something else, please explain.

    Hope this helps,
    Eddy


    I'm here to help
    Friday, September 2, 2011 1:57 AM
  • Thanks, Eddy, for the response. It is very similar to this: the actor will be a human skeleton detected by the sensor, and it will interact with the 3D models of the objects. I just need to know how to make the skeleton interact with the 3D objects for some common actions, such as:

    1) opening the door by twisting the door knob, or

    2) picking up an object.


    Regards,


    Friday, September 2, 2011 6:12 AM
  • 1) The human posture when someone is trying to open a car door is something that Kinect should be able to recognize, except that the Kinect SDK Beta will only give you 3D positions (X, Y and Z coordinates, in meters) for joints such as the wrist, hand, elbow, etc. (20 joints in total). It won't give you information such as hand rotation or twisting, so if you don't want to implement your own skeletal tracker, the best approach would be to implement easy-to-perform gestures recognized from the skeleton joint positions, for example triggering an action when a joint such as the hand is near an object of interest.

    2) For many cases of picking up an object, the default skeletal recognition will probably be enough, but you'll have to experiment a bit and see which postures are recognized well and which are not. When many joints are occluded, skeletal tracking performs worse, because it has to guess where those joints are.
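    The proximity-based trigger described in (1) can be sketched without any SDK calls. This is a minimal illustration, not Kinect SDK code: the Joint struct, the handle position, and the 0.15 m radius are all assumptions standing in for the joint positions the skeleton stream would deliver.

    ```cpp
    #include <cmath>

    // Hypothetical stand-in for a tracked joint position as the skeleton
    // stream would deliver it (camera space, meters).
    struct Joint { float x, y, z; };

    // Euclidean distance between a joint and a point of interest in the scene.
    float Distance(const Joint& a, const Joint& b) {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz);
    }

    // Trigger the "open door" action when the hand joint comes within a
    // small radius of the (known) door-handle position in the 3D scene.
    bool IsHandOnHandle(const Joint& hand, const Joint& handle,
                        float radiusMeters = 0.15f) {
        return Distance(hand, handle) <= radiusMeters;
    }
    ```

    In a real application you would probably also require the hand to dwell near the handle for a few consecutive frames before triggering, so the action doesn't fire as the arm merely sweeps past.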

    To get started with this, look at the SkeletalViewer sample, which is installed to C:\Users\Public\Documents\Microsoft Research KinectSDK Samples\NUI\SkeletalViewer when you install the SDK.
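    For the occlusion issue in (2), one simple mitigation is to hold on to the last confidently tracked position of a joint and reuse it while the joint is only being guessed. This is a sketch, not SDK code: the TrackingState enum merely mirrors the tracked/inferred states the skeleton stream reports per joint, and JointFilter is a hypothetical helper.

    ```cpp
    // Mirrors the per-joint tracking states the skeleton stream reports.
    enum class TrackingState { NotTracked, Inferred, Tracked };

    struct Position { float x, y, z; };

    // Remembers the last fully tracked position of one joint and falls back
    // to it while the joint is occluded (only inferred, or not tracked).
    class JointFilter {
    public:
        Position Update(Position p, TrackingState state) {
            if (state == TrackingState::Tracked) {
                last_ = p;      // trust and remember a fully tracked position
                hasLast_ = true;
            }
            // Otherwise reuse the last good position, if we ever had one.
            return hasLast_ ? last_ : p;
        }
    private:
        Position last_{};
        bool hasLast_ = false;
    };
    ```

    Holding the last good position keeps interactions stable during brief occlusions; for longer ones you would eventually want to time the fallback out rather than freeze the joint indefinitely.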

    Good luck!
    Eddy


    I'm here to help
    Friday, September 2, 2011 6:08 PM