Synchronizing x,y positions with the depth and color streams

  • Question

  • Hello peeps, I've been trying to work with the depth and color streams together recently, and I've noticed that the color image looks almost 'compressed' compared to the depth image in terms of its displayed FOV. Let me demonstrate:

     

    [Screenshots: the same scene in the depth and color streams, annotated with the left boundary offset and the top boundary offset]


    Notice that in both examples, the depth image shows my hand at the edge of the screen, while there is still some distance to go in the color image. What's worse is that the problem is not as apparent at the bottom or right boundary, meaning a simple linear translation of the color or depth image would just shift the problem to the opposite boundary. Adjusting the stream resolution does not seem to make a difference either.


    Can someone explain why this happens, and if there is a correct way to adjust for this? 



    • Edited by AleksMA Saturday, February 4, 2012 12:01 PM Grammar and layout
    Saturday, February 4, 2012 12:00 PM


All replies

  • The reason is that there are different optics for each of the cameras, which produce different fields of view. In addition, there is some small piece-to-piece variation in the alignment of the two cameras. Both of these can be corrected by performing a calibration - which is exactly what the Kinect SDK does. There is a set of functions built into the API that can tell you which color pixel corresponds to a particular depth pixel, taking these calibration and geometric properties of each camera into account.

    Take a look in the SDK documentation for the NuiImageGetColorPixelCoordinatesFromDepthPixel function in C++ to see how to do the mapping. Please note that this mapping changes with each depth pixel value, so there is no way to create a static mapping between the depth and color image streams...
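
    In the managed (C#) SDK the same mapping is exposed as a method on the sensor instance rather than a free function. A minimal sketch, assuming an initialized KinectSensor named sensor, depth-pixel coordinates x and y, and the raw short depthValue read from the depth frame at that pixel (all of those names are placeholders):

```csharp
// Sketch only - 'sensor', 'x', 'y', and 'depthValue' are assumed to exist in scope.
// Maps one depth pixel to its corresponding coordinates in the color image.
ColorImagePoint colorPoint = sensor.MapDepthToColorImagePoint(
    DepthImageFormat.Resolution320x240Fps30,     // format of the depth stream
    x, y,                                        // pixel coordinates in the depth image
    depthValue,                                  // raw depth value at that pixel
    ColorImageFormat.RgbResolution640x480Fps30); // format of the color stream

int colorX = colorPoint.X; // corresponding column in the 640x480 color image
int colorY = colorPoint.Y; // corresponding row in the 640x480 color image
```

    Note that the method is called on an existing sensor object - you do not declare or define it yourself; check the SDK reference for the exact signature in your SDK version.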

    Sunday, February 5, 2012 8:24 AM
  • Hey Jason, I did see this since posting the topic and have been trying to get it to work. I'm using C# and the new SDK, so I'm using the MapDepthToColorImagePoint method; however, I'm getting errors during implementation.
    public ColorImagePoint MapDepthToColorImagePoint(
            int depthX,
            int depthY,
            short depthPixelValue,
            ColorImageFormat colorImageFormat,
            DepthImageFormat depthImageFormat)
            {
                return MapDepthToColorImagePoint(depthX, depthY, depthPixelValue, colorImageFormat, depthImageFormat);
            }
    
     private byte[] GenerateColoredBytes(DepthImageFrame depthFrame)
            {
                 short[] rawDepthData = new short[depthFrame.PixelDataLength];
                depthFrame.CopyPixelDataTo(rawDepthData);
                 
                for (int depthIndex = 0, colorIndex = 0;
                    depthIndex < rawDepthData.Length && colorIndex < pixels.Length;
                    depthIndex++, colorIndex += 4)
                {
    
                    int depth = rawDepthData[depthIndex] >> DepthImageFrame.PlayerIndexBitmaskWidth;
                    ...
                                                    
                        //map x,y coordinates
                        int x = (depthIndex % depthFrame.Width);
                        int y = ((int)depthIndex / depthFrame.Width);
    
    
                        ColorImagePoint colorPoint = MapDepthToColorImagePoint(x, y, rawDepthData[depthIndex], ColorImageFormat.RgbResolution640x480Fps30, DepthImageFormat.Resolution320x240Fps30);
                        int colorx = colorPoint.X;
                        int colory = colorPoint.Y;
                  ...
    
                }
                
    

     I keep getting a stack overflow error with this code, inside the MapDepthToColorImagePoint function. Any ideas as to why?
                    
    Sunday, February 5, 2012 2:57 PM
  • I don't know much about C#, but where does the code break when the error occurs?  I would suggest just using a nested for-loop for the x and y coordinates (there is no benefit to calculating them directly with the divide and mod operations), but other than that I don't see anything that jumps out at me.

    Can you post the full method and indicate where the error is generated (i.e. where is the break point)?

    Sunday, February 5, 2012 4:15 PM
  • The error comes in the MapDepthToColorImagePoint method body itself, with the error being on the return statement. I'm not a pro or anything; am I setting up the function right? In the documentation they don't define a body for the method, just this part:

     

    public ColorImagePoint MapDepthToColorImagePoint(
            int depthX,
            int depthY,
            short depthPixelValue,
            ColorImageFormat colorImageFormat,
            DepthImageFormat depthImageFormat)

    I assumed I'd have to add the curly brackets, and when I did that I got an error which said the method must have a body, so I added the line (which is where the error pops up):

     return MapDepthToColorImagePoint(depthX, depthY, depthPixelValue, colorImageFormat, depthImageFormat);

    I don't think the [x,y] calc makes much of a difference, and I put a breakpoint right before the call to the MapDepthToColorImagePoint method to make sure all the parameters were there (which they were). Besides, I only want to save [x,y]'s in the depth boundary. The rest of the GenerateColoredBytes method just has to do with setting up the color data based on a depth filter I have:

     


     //loop through all distances
                //pick a RGB color based on distance
                for (int depthIndex = 0, colorIndex = 0;
                    depthIndex < rawDepthData.Length && colorIndex < pixels.Length;
                    depthIndex++, colorIndex += 4)
                {
                    int depth = rawDepthData[depthIndex] >> DepthImageFrame.PlayerIndexBitmaskWidth;
                   
                    if (depth > 1000 && depth < 1500)
                    {
    
                        //we are in boundary
                        byte a = (byte)(((depth - 1000) / 500f) * 255);
                        byte b = (byte)(((1500 - depth) / 500f) * 255);
    
                        pixels[colorIndex + BlueIndex] = a;
                        pixels[colorIndex + GreenIndex] = b;
                        pixels[colorIndex + RedIndex] = 0;
                        //map x,y coordinates
                        int x = (depthIndex % depthFrame.Width);
                        int y = ((int)depthIndex / depthFrame.Width);
    
                       
                        ColorImagePoint colorPoint = MapDepthToColorImagePoint(x, y, rawDepthData[depthIndex], ColorImageFormat.RgbResolution640x480Fps30, DepthImageFormat.Resolution320x240Fps30);
                        int colorx = colorPoint.X;
                        int colory = colorPoint.Y;
    
                        depthboundary[boundaryIndex, 0] = x;
                        depthboundary[boundaryIndex, 1] = y;
    
                        boundaryIndex++;
                    }
                    else
                    {
                        pixels[colorIndex + BlueIndex] = 0;
                        pixels[colorIndex + GreenIndex] = 0;
                        pixels[colorIndex + RedIndex] = 0;
                    }
                }
                return pixels;
            }
    

     





    • Edited by AleksMA Sunday, February 5, 2012 4:44 PM
    Sunday, February 5, 2012 4:41 PM
  • Oh - that is the problem.  You are defining a function that calls itself, so as soon as you call that function once then it will repeatedly call itself recursively until you get the stack overflow error.

    To use the method that you want, you just call the method - you don't need to redeclare it or anything like that.  The function itself is provided by the library in the SDK (my terminology may not be right with C#, but the idea is the same).  So if you remove the declaration part and just directly call the method, then you should be good to go.

    • Proposed as answer by Jason Zink Sunday, February 5, 2012 7:10 PM
    • Marked as answer by AleksMA Monday, February 6, 2012 12:06 AM
    Sunday, February 5, 2012 6:09 PM
  • Thanks so much dude, works perfectly, and now I get how to use all these other functions =D

    for (int depthIndex = 0, colorIndex = 0;
                    depthIndex < rawDepthData.Length && colorIndex < pixels.Length;
                    depthIndex++, colorIndex += 4)
                {
    
    ...
    
    ColorImagePoint colorPoint = kinectSensorChooser1.Kinect.MapDepthToColorImagePoint(DepthImageFormat.Resolution320x240Fps30, x, y, rawDepthData[depthIndex], ColorImageFormat.RgbResolution640x480Fps30);
                        int colorx = colorPoint.X;
                        int colory = colorPoint.Y;
    
    ...
    
    }

    Sunday, February 5, 2012 6:36 PM
  • There is also an API that maps a whole depth frame to the color frame, if you want to quickly process an entire frame at once.
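
    In the managed SDK the whole-frame variant is, if I recall correctly, KinectSensor.MapDepthFrameToColorFrame. A rough sketch, assuming an initialized KinectSensor named sensor and the short[] rawDepthData copied out of a 320x240 depth frame as in the code above (check the SDK documentation for the exact signature):

```csharp
// Sketch only - 'sensor' and 'rawDepthData' are assumed to exist in scope.
// Maps every depth pixel to its color-image coordinates in one call,
// instead of one MapDepthToColorImagePoint lookup per pixel.
ColorImagePoint[] colorPoints = new ColorImagePoint[rawDepthData.Length];
sensor.MapDepthFrameToColorFrame(
    DepthImageFormat.Resolution320x240Fps30,    // format of the depth stream
    rawDepthData,                               // the raw depth pixel data
    ColorImageFormat.RgbResolution640x480Fps30, // format of the color stream
    colorPoints);                               // filled with one point per depth pixel

// colorPoints[depthIndex].X / .Y now give the color-image coordinates
// for the depth pixel at depthIndex.
```

    This avoids the per-pixel call overhead inside the depth-processing loop.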
    Monday, February 6, 2012 4:33 PM