Multiple Kinect configurations

Answers

  • Ian,
    The Kinect SDK does provide support for multiple Kinects. To use multiple Kinects for audio:

    using Microsoft.Research.Kinect.Audio;

    void YourMethod()
    {
        KinectAudioSource source = new KinectAudioSource();
        IEnumerable<AudioDeviceInfo> devices = source.FindCaptureDevices();
        foreach (AudioDeviceInfo device in devices)
        {
            KinectAudioSource deviceSpecificSource = new KinectAudioSource();
            deviceSpecificSource.MicrophoneIndex = (short)device.DeviceIndex;

            // your code here
        }
    }
    

    And for the NUI (video and skeletal tracking) API:

    using Microsoft.Research.Kinect.Nui;

    void YourMethod()
    {
        Device devices = new Device();
        for (int i = 0; i < devices.Count; i++)
        {
            Runtime nuiDevice = new Runtime(i);
            // Your code here
        }
    }

    Note that there are limitations for the video API, such as that skeletal tracking and depth + player index streams will only work for the default device at index 0. These are described in the programming guide: http://bit.ly/KinectSDKProgrammingGuide
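    Concretely, that limitation means initialization has to differ by device index. A minimal sketch, assuming the Beta API shown above (the `Runtime(i)` constructor and the `RuntimeOptions` flags); treat the exact flag combination as an assumption if your SDK version differs:

```csharp
// Sketch only: give skeletal tracking and depth + player index to device 0,
// and plain depth streams to any additional devices.
using Microsoft.Research.Kinect.Nui;

void InitializeDevices()
{
    Device devices = new Device();
    for (int i = 0; i < devices.Count; i++)
    {
        Runtime nui = new Runtime(i);
        if (i == 0)
        {
            // Default device only: skeletal tracking and depth + player index.
            nui.Initialize(RuntimeOptions.UseDepthAndPlayerIndex |
                           RuntimeOptions.UseSkeletalTracking);
        }
        else
        {
            // Additional devices: plain depth (add UseColor if needed).
            nui.Initialize(RuntimeOptions.UseDepth);
        }
    }
}
```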
     
    Hope this helps,
    Eddy

    I'm here to help

    Thursday, June 16, 2011 5:19 PM
    Owner

All replies

  • Kinect for Windows SDK Beta supports multiple Kinects. See details at http://bit.ly/KinectSDKProgrammingGuide under the "The NUI API/NUI API Initialization" section.

    Thanks, Rob

    Thursday, June 16, 2011 5:12 PM
    Owner
  • If I understand you correctly, the SDK exposes data for multiple sensors, but skeletal tracking currently does not integrate sensor data from multiple Kinect sensors. And if I read into this further, I can't exactly do this myself, because skeletal tracking is only provided for device 0. That is, since skeletal tracking is only provided for the default device, I couldn't have a simplistic implementation with, say, 3 sensors all tracking and have them effectively "vote" on bone positions, so that if one sensor got confused by occlusion, the other two, which might still agree, would override the bad data.

    While I am at it, I am often at a loss to understand or explain why bones seem to manifest random noise when not visible. For example, if the legs are not visible below the knee, the foot can seem to jitter and turn in positions that are biomechanically impossible. At the point bones are presumed invisible...or even when visible...why aren't they constrained to biomechanically feasible ranges and/or positions or filtered to reduce noise? Is it just that filtering adds delay or that this "layer" is for others to implement on top of the raw skeletal data?

    Much thanks,

    Dan

    Thursday, June 30, 2011 8:51 PM
  • I don't believe you can use multiple Kinects to view the same volume, only different volumes. It's sending out rapid pulses of IR light and then measuring how long a surface is illuminated during the same duration. The percentage of the time it was illuminated versus how long it was lit says how far away it is (actually, the fractional part of the multiple of the beam length). If there's a second source of illumination that becomes meaningless: how long it's lit no longer has anything to do with how far away it is. They would have to run on separate wavelengths of IR. I don't know the details of the hardware capabilities, but being able to select a frequency to work in tandem is unlikely.

     


    PS: Make that how long it was lit by the strobe versus seen to be lit by the sensor.
    Friday, July 01, 2011 12:04 AM
  • Dan,

    You are correct in that data from multiple sensors will not be integrated into a single skeleton-tracking process: the current algorithms run an independent skeleton-tracking pipeline for each Kinect device, with the further limitation that skeleton tracking is only functional for device 0.

    The random noise, jittering, and biomechanical impossibilities you mention happen because the Kinect SDK Beta only supports full-body skeleton tracking. We hope to provide an improved experience with every release, and are actively listening to comments, so thanks for your feedback.

    Eddy


    I'm here to help
    Friday, July 01, 2011 2:25 AM
    Owner
  • Kinect depth measurement is based on structured light, triangulating between the dot pattern emitted and the one captured by the IR CMOS sensor, not on time of flight. This allows you to use multiple Kinects with relatively low interference at ambiguous points.
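    For illustration, the triangulation idea can be sketched numerically. The formula below is the generic stereo-triangulation relation, not Kinect's actual calibration model, and the constants are made-up round numbers:

```csharp
// Illustrative only: simplified structured-light triangulation. Depth Z
// follows from focal length f (pixels), projector-to-camera baseline b
// (meters), and the measured dot disparity d (pixels): Z = f * b / d.
using System;

class TriangulationSketch
{
    static double DepthFromDisparity(double fPx, double bM, double dPx)
    {
        return fPx * bM / dPx;   // larger disparity => closer surface
    }

    static void Main()
    {
        // e.g. f = 580 px, b = 0.075 m, disparity = 20 px
        Console.WriteLine(DepthFromDisparity(580, 0.075, 20)); // 2.175 (m)
    }
}
```

    A second projector casting dots into the same scene corrupts the disparity match, not the math, which is why the interference question below matters.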

    Now, is it technically possible to strobe the IR laser fast enough that multiple Kinects could be synchronized and get rid of the interference without losing fluidity in the depth images?

    Friday, July 01, 2011 4:21 AM
  • If you want to go that deep into the concepts, maybe you should go to the site of the company that created the device, which is http://www.primesense.com/

    There you can find deeper information about the device.

    Friday, July 01, 2011 8:09 AM
  • I didn't find much depth at PrimeSense. I found other references explaining a bit about how it works. It seems you still have the same problem, just for a different reason: you can't tell which Kinect a dot came from, rather than which Kinect a pulse came from. Have you actually used two of these to illuminate the same volume? They are fairly cheap for what they do, but I would hate to pay for another one only to find out it doesn't really work.

    Friday, July 01, 2011 10:43 AM
  • Thanks much, Eddy, for the reply. It's what I suspected, although a little different from what I heard from some old contacts in Microsoft Research (I'm ex-msft). This could be my misunderstanding of what I was told, or the difference between what's shipped and what's internal, or even what's planned vs. what's happening near-term.

    My follow-on question is just to confirm whether it's possible (or not) to use two Kinect sensors to operate over the same volume. If it were possible, then I could use multiple sensors and integrate the results myself. I suspect it's not possible, though. It's my impression that when structured light cast by two sensors overlaps on objects in a volume, the patterns would interfere with each other, so it would not be possible to easily use two (or more) sensors to capture the same volume without additional work.

    Friday, July 01, 2011 11:19 PM
  • Searching the web, it does seem to be possible. It's clear you don't lose many readings outright, but it isn't clear what interference does to the readings you do get. It found a dot, but whether it was the right dot is the question.

    Saturday, July 02, 2011 4:28 AM
  • Multiple kinects can be pointed at the same volume with very little interference. If you are looking to test what's possible, search the openkinect google group, we've been at this for a while now. http://groups.google.com/group/openkinect/?pli=1
    Monday, July 04, 2011 7:14 AM
  • The issue I'm talking about is whether one Kinect picks up depth where it had shadows by itself once you turn on the other Kinect. If so, then it's using the wrong dots. Most of what I've seen posted only makes it clear that it can pick out a dot to use for depth measurement, not whether it's actually using the right one.

    Monday, July 04, 2011 9:20 PM
  • Nope, the dots have to fit an almost unique pattern. They can interfere, giving the Kinect ambiguous or invalid data when reading certain spots (dots too close together or overlapped?), but the chances of reading the other Kinect's dots are very low. You can check that by blocking the IR laser of one Kinect: the depth image of that Kinect will stay black, even while the other one is measuring correctly.
    Tuesday, July 05, 2011 12:39 AM
  • Well, I broke down and bought a second Kinect. So far no luck, but I still have stuff to try. When I plug the second one in, the camera fails to start, and then I can't initialize the runtime on either. They both work individually, but not plugged in at the same time yet. I did manage to get my first BSOD under Windows 7, though. That's an accomplishment.


    PS: I got them working. With tracked players I make a pass to find the max/min and then scale the colors across that range. That gives a fairly detailed depth image. I can't make out my nose, but I can make out my pot belly. The illumination of the second Kinect seems to have little impact on that. The readings it does get seem to be reasonably accurate. None of the things I saw gave any indication of what happens to the accuracy, just how many depth readings you lose, i.e. the black spots.
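    That max/min pass is just a linear rescale of valid depth readings onto gray levels. A sketch (the method name and the "0 means no reading" convention are assumptions here, not from the SDK docs):

```csharp
// Find min/max over valid depth readings, then linearly map each reading
// onto 0..255 gray levels; 0 (shadow/interference) stays black.
using System;

class DepthGraySketch
{
    static byte[] DepthToGray(ushort[] depthMm)
    {
        ushort min = ushort.MaxValue, max = ushort.MinValue;
        foreach (ushort d in depthMm)
        {
            if (d == 0) continue;          // no reading at this pixel
            if (d < min) min = d;
            if (d > max) max = d;
        }
        int range = Math.Max(1, max - min);
        byte[] gray = new byte[depthMm.Length];
        for (int i = 0; i < depthMm.Length; i++)
            gray[i] = depthMm[i] == 0 ? (byte)0
                    : (byte)(255 * (depthMm[i] - min) / range);
        return gray;
    }
}
```

    Stretching across the observed range rather than the sensor's full range is what makes small relief (like a pot belly) visible in the image.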

    Tuesday, July 05, 2011 3:01 AM
  • How were you able to get both Kinects working at the same time? I tried many combinations, plugging one into the front and one into the back, but I always get "This device cannot start. (Code 10)" in Device Manager for the second Kinect that I plug in. I am not sure if each one is on a separate hub, as was mentioned in another post; using a separate PCI USB 2.0 card was also mentioned as a solution.

    Rob

    Tuesday, July 19, 2011 2:01 AM
  • I'm using Windows 7, but Device Manager doesn't seem to have changed much. Under Universal Serial Bus controllers in Device Manager you should see an entry with "Host Controller" ending in four hex digits. Mine is:

    Intel(R) 5 Series/3400 Series Chipset Family USB Enhanced Host Controller - 3B34

    What name you'll see depends upon who made the controller, i.e. Intel, Marvell, etc. The easiest way to tell which port goes to which controller is to plug the Kinect into it and use Performance Monitor to watch the USB Isochronous Transfer rate. When you start streaming data off the Kinect, that will shoot up to 10+ million bytes per second; how high depends upon which streams and formats you requested. When you select Add Counter, then USB, then Isochronous, you'll get a list of instances. One is the Kinect Camera, but it also shows up for the host controllers. So add all the host controllers, then see which one the transfers show up on when you start streaming data off the Kinect.

    Tuesday, July 19, 2011 4:18 AM
  • So is there no solution to the problem besides installing an additional PCI USB card? Has anyone tried this successfully?

    I know it was asked already, but how rapidly can the projector component of the Kinect be deactivated (strobed) to allow more devices for greater coverage without interference, at the cost of a reduced framerate?

    Tuesday, July 19, 2011 8:21 AM
  • I didn't have any luck just plugging them into the same controller. It doesn't sound like you have either, so it sounds like you're going to have to physically plug and unplug them to "strobe" them.
    Tuesday, July 19, 2011 5:17 PM