Beta Testing Feedback


  • 1. DirectX interoperability; NuiImageBuffer could be a DirectX interface, for example.  The content from each sensor stream would be streamed efficiently by the kernel-mode device driver to a graphics card, with the ability to use it from there via CUDA, DirectX and other techniques.  An option to have the driver stream the audio to the graphics card as well could enable new research into GPGPU and speech recognition.

    2. Techniques to obtain the physical dimensions and positions of the screens relative to the sensor.  6D SLAM is one option; another is an interactive sequence in which the user, with their hands or eyes, creates multiple overlapping pyramids or frustums that indicate the screen planes in physical coordinates relative to the sensor.

    3. Head tracking built in.  I'd like head tracking to be separable from skeleton tracking in the initialization options.  Head tracking seems like a major feature in its own right, and it can be implemented efficiently on its own or in conjunction with skeleton tracking.  It offers a new means of delivering new user experiences and combines with point 2, above, in interesting ways.  The tracked data could include the midpoint between the eyes, each eye individually, or other options.

    4. Access to the raw data formats from each of the sensor streams, including the infrared stream.

    Wednesday, June 22, 2011 5:39 PM

All replies

  • Adam,

    For issue #2 could you clarify what you mean by "screens" for which you want positions relative to the sensor?

    About issue #3, we have heard lots of feedback about supporting forms of tracking other than just the skeleton tracking we do now, and we really appreciate it.

    About the other issues, I've made a note of them.

    Thanks for your interest,
    Eddy


    I'm here to help
    Friday, June 24, 2011 12:49 AM
  • Eddy,

    Sure, by "screens" I mean the rendering outputs; each rectangle of pixels connected to the graphics card(s) and otherwise the hardware rendering surfaces.  With the spatial coordinates of each of the four corners of each "screen" (the position, dimensions and orientation) some new 3D effects can be created or enhanced.

    Thanks.  In my opinion, these new sensors enhance the PC platform and are exciting to develop for.


    Cheers,

    Adam

    Friday, June 24, 2011 2:19 AM
  • Adam,

    Skeleton joint positions are in fact provided in meters (what we call "Skeleton Space" on pages 22-23 of the programming guide: http://research.microsoft.com/en-us/um/redmond/projects/kinectsdk/docs/ProgrammingGuide_KinectSDK.pdf). And it is possible to map the depth stream into spatial coordinates using SkeletonEngine.DepthImageToSkeleton, as described in question 10 of the FAQ (http://social.msdn.microsoft.com/Forums/en-US/kinectsdknuiapi/thread/4da8c75e-9aad-4dc3-bd83-d77ab4cd2f82). It is also possible to map from depth coordinates into color video coordinates by using Camera.GetColorPixelCoordinatesFromDepthPixel, as described in question 11 of the same FAQ.
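
    For illustration, here is a minimal C# sketch of both mappings using the beta managed API (Microsoft.Research.Kinect.Nui). The sample pixel, the 320x240 depth resolution and the depth-value packing below are illustrative assumptions; the exact stream formats are described in the programming guide.

        using Microsoft.Research.Kinect.Nui;

        class CoordinateMappingSketch
        {
            static void Main()
            {
                Runtime nui = new Runtime();
                nui.Initialize(RuntimeOptions.UseDepthAndPlayerIndex |
                               RuntimeOptions.UseSkeletalTracking |
                               RuntimeOptions.UseColor);

                // Illustrative pixel of a 320x240 depth-and-player-index frame.
                // The value is assumed to be packed as in that stream: player
                // index in the low 3 bits, depth in millimeters above them.
                int x = 160, y = 120;
                short packedDepth = unchecked((short)(800 << 3)); // ~0.8 m, assumed packing

                // Depth pixel -> skeleton ("physical") space, in meters.
                // DepthImageToSkeleton takes depth coordinates normalized to [0, 1].
                Vector p = nui.SkeletonEngine.DepthImageToSkeleton(
                    x / 320f, y / 240f, packedDepth);

                // Depth pixel -> color image coordinates.
                int colorX, colorY;
                nui.NuiCamera.GetColorPixelCoordinatesFromDepthPixel(
                    ImageResolution.Resolution640x480, new ImageViewArea(),
                    x, y, packedDepth, out colorX, out colorY);

                System.Console.WriteLine(
                    "Skeleton space: ({0}, {1}, {2}) m; color pixel: ({3}, {4})",
                    p.X, p.Y, p.Z, colorX, colorY);

                nui.Uninitialize();
            }
        }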

    Hopefully this addresses most of your needs.

    About providing a more direct way to correlate color video stream to physical space, I'll make note of your feedback.

    Thanks.
    Eddy


    I'm here to help
    Thursday, June 30, 2011 5:41 PM