Camera and Depth Calibration Support

  • Question

  • Hello,

    I am getting started with using the Kinect, and I plan to write a simple scanner application.  After reading up a little bit on the topic, I have found several methods for calibrating the individual cameras, as well as finding the mapping between the two images.  Most of the calibration techniques use OpenCV to determine the intrinsic parameters of the color and depth cameras separately, with a checkerboard pattern.  These methods all use the non-Microsoft drivers for the Kinect.

    Is this camera calibration necessary with the Microsoft SDK?  What I mean is, are the images that we receive through the SDK already undistorted, or should we be applying our own correction?  It is my understanding that there is factory-supplied calibration information, so it would be great if this information is already taken into account - it would also be a reason for people to use the Microsoft SDK as opposed to the other ones floating around out there.

    Thank you in advance for any insights you can provide!

    Jason Zink

    DirectX MVP

    Thursday, December 29, 2011 8:53 PM

All replies

  • In some instances you can use the direct camera feed from the Kinect device, as in Scott's thread, in which he has created a camera filter that exposes the stream to any application that can recognize cameras on the computer.

    Here's the link:

    http://social.msdn.microsoft.com/Forums/en-US/kinectsdk/thread/4ee6e7ca-123d-4838-82b6-e5816bf6529c

    Note: you must experiment with it to get it working with apps other than those listed in the thread. Done correctly, it works independently of the code you use in your own app.

    Also, you can access the camera operations in code, but the above file provides easy access to the Kinect's raw color camera feed without requiring much programming.

     P.S. Scott also has the depth feed working, if you would like a link.

     As far as calibration goes, I don't think you will need to do any, because with a little programming you can detect if the person is too close to the computer and tell them on screen to back up. You might need it for advanced things like login, but maybe not even then, because the Kinect can detect this and send the information back to the computer.

     

    If this is for Xbox: the Kinect for Windows SDK readme clearly states that you cannot convert Windows projects to Xbox projects at the moment.


     

    "Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth." - Sherlock Holmes. "Speak softly and carry a big stick." - Theodore Roosevelt. "Fear leads to anger, anger leads to hate, hate leads to suffering." - Yoda




    • Edited by The Thinker Tuesday, January 3, 2012 12:45 AM
    Tuesday, January 3, 2012 12:38 AM
  • Hello,

    Thank you for the reply.  I am not sure who Scott is, but if there is some sample code I would be happy to take a look at it.  I am working on this project in the PC space with the Kinect Beta 2 SDK, and I already have color and depth data being read (in fact, the code is available in my open source rendering engine Hieroglyph 3 if anyone wants to take a look).  I am also starting to blog a little bit about my experiences with the Kinect here.

    Regarding the calibration, I am more specifically referring to the removal of distortion from the images (i.e. radial or tangential distortion).  This can be achieved fairly easily with OpenCV, but if that step is already being done in the SDK / driver then I will skip adding it to my pipeline.  Likewise, if rectification between the two images is already being done in the SDK / driver, then I will skip that step too - but I need to know whether these are currently being done.
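    For concreteness, the distortion removal I mean is just the standard Brown-Conrady model that OpenCV uses; here is a minimal sketch of the forward model (the coefficient values are made-up placeholders for illustration, not actual Kinect factory calibration data):

    ```cpp
    #include <cstdio>

    // Brown-Conrady lens distortion applied to a normalized image point.
    // k1, k2: radial coefficients; p1, p2: tangential coefficients.
    // NOTE: the values used below are placeholders, not real Kinect data.
    struct Distortion { double k1, k2, p1, p2; };

    // Map an ideal normalized point (x, y) to its distorted position (xd, yd).
    void distort(const Distortion& d, double x, double y, double& xd, double& yd)
    {
        double r2 = x * x + y * y;                          // squared radius
        double radial = 1.0 + d.k1 * r2 + d.k2 * r2 * r2;   // radial term
        xd = x * radial + 2.0 * d.p1 * x * y + d.p2 * (r2 + 2.0 * x * x);
        yd = y * radial + d.p1 * (r2 + 2.0 * y * y) + 2.0 * d.p2 * x * y;
    }

    int main()
    {
        Distortion d{ -0.1, 0.01, 0.0, 0.0 };   // placeholder coefficients
        double xd = 0.0, yd = 0.0;
        distort(d, 0.5, 0.5, xd, yd);
        printf("(0.5, 0.5) -> (%f, %f)\n", xd, yd);
        return 0;
    }
    ```

    Undistortion is the inverse of this map (OpenCV solves it iteratively), so if the SDK already applies it there is no point doing it twice.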

    The 'other' drivers all seem to use the raw data and hence require the calibration and rectification.  I am simply asking if either of these two features are performed in the current SDK.

    Thanks in advance for any help that someone can provide.

    - Jason

    Tuesday, January 3, 2012 6:35 AM
  • I think to some degree it happens automatically, but you can apply a certain amount of motion smoothing for better handling of a human figure, which they do a little bit in the Kinect mouse sample. The sample's website is http://kinectmouse.codeplex.com (or search for "kinect mouse" in CodePlex's search bar).  You can see that when detecting the person with the Kinect, some smoothing is applied to the joints in the sample so that mouse movement is smooth and does not jump from one side of the screen to the other. Since it uses the DirectX SDK to draw the skeleton, I would say capture the stream from the Kinect and apply your own distortion removal. It does not detect or handle distorted skeleton figures very well closer than 5 feet or beyond a certain distance (my guess is 4 meters, but I wonder where the correct information on that can be found).
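    The kind of joint smoothing I mean can be as simple as an exponential filter on each tracked position; here is a rough sketch (the smoothing factor is a made-up placeholder, not the value the mouse sample actually uses):

    ```cpp
    #include <cstdio>

    // Exponential smoothing of a 2D joint/cursor position.
    // alpha near 1.0 follows the raw data closely; near 0.0 smooths harder.
    // The 0.3 used below is a placeholder, not the sample's real value.
    struct Smoother {
        double alpha, x, y;
        bool started;
        void update(double nx, double ny) {
            if (!started) { x = nx; y = ny; started = true; return; }
            x += alpha * (nx - x);   // move a fraction toward the new sample
            y += alpha * (ny - y);
        }
    };

    int main() {
        Smoother s{ 0.3, 0.0, 0.0, false };
        double raw[][2] = { {0, 0}, {10, 0}, {10, 0}, {10, 0} };  // noisy jump
        for (auto& p : raw) {
            s.update(p[0], p[1]);
            printf("%.3f %.3f\n", s.x, s.y);   // eases toward 10 over frames
        }
        return 0;
    }
    ```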

    There might be some parameters you can set to get the Kinect to remove distortion, if they are available.

     Here's an enum holding image resolutions from the help docs:

    Resolution options.

    Syntax

    C++ 
    typedef enum _NUI_IMAGE_RESOLUTION
    {
        NUI_IMAGE_RESOLUTION_INVALID = -1,
        NUI_IMAGE_RESOLUTION_80x60 = 0,
        NUI_IMAGE_RESOLUTION_320x240,
        NUI_IMAGE_RESOLUTION_640x480,
        NUI_IMAGE_RESOLUTION_1280x1024
    } NUI_IMAGE_RESOLUTION;

    Constants

      Constant                        Description
      NUI_IMAGE_RESOLUTION_INVALID    The image resolution is invalid.
      NUI_IMAGE_RESOLUTION_80x60      The image resolution is 80 x 60.
      NUI_IMAGE_RESOLUTION_320x240    The image resolution is 320 x 240.
      NUI_IMAGE_RESOLUTION_640x480    The image resolution is 640 x 480.
      NUI_IMAGE_RESOLUTION_1280x1024  The image resolution is 1280 x 1024.
    Also, this might help you:
    An ImageResolution value that specifies the image resolution.

    Namespace: Microsoft.Kinect.Nui
    Assembly: Microsoft.Kinect.Nui (in microsoft.kinect.nui.dll)

     

    Opens a color or depth image stream.

    Namespace: Microsoft.Kinect.Nui
    Assembly: Microsoft.Kinect.Nui (in microsoft.kinect.nui.dll)

    Syntax

    C# 
    public void Open (
             ImageStreamType streamType,
             int poolSize,
             ImageResolution resolution,
             ImageType image
    )
    Visual Basic (Declaration) 
    Public Sub Open ( _
             streamType As ImageStreamType, _
             poolSize As Integer, _
             resolution As ImageResolution, _
             image As ImageType _
    )

    Parameters

    streamType
    Type: ImageStreamType
    An ImageStreamType value that specifies the stream type.
    poolSize
    Type: Int32
    The number of frames that the NUI runtime should buffer. Beyond this value, the runtime drops un-retrieved frames. The maximum is currently 4; a value of 2 is sufficient for most applications.
    resolution
    Type: ImageResolution
    An ImageResolution value that specifies the resolution.
    image
    Type: ImageType
    The image type.

    These talk about changing the depth or color stream resolutions to whatever you want, limited to the capabilities of the Kinect SDK.

     


     

     




    • Edited by The Thinker Tuesday, January 3, 2012 6:46 PM
    Tuesday, January 3, 2012 6:30 PM
  • Hello,

    I think we are talking about different things - I am looking for someone to indicate if the current color and depth images returned by the SDK are already calibrated and undistorted.  It should be a very simple question for someone from Microsoft to answer!  Camera calibration is not that complicated, but if it is already performed by the SDK then I will skip it.

    I read through those enums that you posted, but I don't see anything about calibration or distortion.  Can you point me to the part that you are referring to?  Also, I think the mouse cursor sample you pointed to simply uses skeletal tracking rather than the depth image directly - right?  Nevertheless, it is still an interesting example - thank you for posting it.

    - Jason

    Tuesday, January 3, 2012 9:12 PM
  • I think the general answer would be no for the depth/color images (but I don't know what the source of Scott's filter file above is, so I can't tell you for sure about any calibration). I think you would have to do manual calibration because, as far as I know from reading posts since the Kinect SDK first came out, there is no function to automatically calibrate the Kinect. I would try searching the Kinect SDK help files for "depth" (which is how I found the functions I posted above); it talks about how depth data is handled, how players are detected in the depth stream, and so on, but it was too much to post here (about the fifth to eighth help article down).


    • Edited by The Thinker Tuesday, January 3, 2012 10:37 PM
    Tuesday, January 3, 2012 10:36 PM
  • I am aware that there is no mention of calibration in the SDK, but there are calibration cards in use for the Kinect that could be used for a factory-based calibration.  My assumption is that the Xbox usage of the Kinect takes the calibration into account, but I am not sure.

    Once again, I am looking for a simple and clear answer: do the SDK images use the factory calibration to undistort the resulting images?

    Wednesday, January 4, 2012 6:07 AM
  • No, as far as I know calibration is not supported in the beta version. It may be supported in the commercial version of the SDK, which the Kinect team has said will have features different from the normal SDK version. Only the Xbox version of the Kinect supports calibration, as far as I know.

     

    The only thing I know is that they will allow you to redistribute Kinect programs for commercial purposes in the commercial version, so I assume they will allow you to package drivers or make an installer.






    • Edited by The Thinker Wednesday, January 4, 2012 2:18 PM
    Wednesday, January 4, 2012 2:07 PM
  • Thank you for sharing your opinion.

    Are there any Microsoft representatives hanging around these forums who can clue us in for sure?

    Wednesday, January 4, 2012 7:29 PM
  • Did you ever find a method?

    I'm stuck at the same point: the raw SDK depth data isn't correct.
    I compared my Kinect to a tape measure and saw the Kinect's error quickly build up.
    So far I have only tested the midpoint, frontal view.
    And so far my best estimate is to multiply the Kinect depth by 1.0063.

    However, I don't think this really nails it: it doesn't tell me the error rate, and it still doesn't match up to a real-world handyman's tape measure. Looking at the error rates, I rather doubt the error is linear (I think there is some curve here).

    I'm not looking forward to including huge libraries (Matlab/OpenCV/AForge/etc.) just to correct this.
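    A tiny least-squares fit over a handful of tape-measure samples avoids pulling in any big library; here is a sketch (the sample pairs are made-up placeholders, not real measurements):

    ```cpp
    #include <cstdio>
    #include <cstddef>

    // Fit z_true = a * z_kinect + b by ordinary least squares.
    void fit_linear(const double* zk, const double* zt, size_t n,
                    double& a, double& b)
    {
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (size_t i = 0; i < n; ++i) {
            sx  += zk[i];           sy  += zt[i];
            sxx += zk[i] * zk[i];   sxy += zk[i] * zt[i];
        }
        a = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        b = (sy - a * sx) / n;
    }

    int main()
    {
        // Placeholder calibration pairs: Kinect depth vs. tape measure (mm).
        double zk[] = { 1000, 1500, 2000, 2500, 3000 };
        double zt[] = { 1006, 1509, 2013, 2516, 3019 };
        double a = 0, b = 0;
        fit_linear(zk, zt, 5, a, b);
        printf("z_true = %f * z_kinect + %f\n", a, b);  // roughly a=1.0066, b=-0.6
        return 0;
    }
    ```

    If the error really is curved, the same idea extends to fitting a quadratic in z_kinect, still without any external library.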


    Tuesday, September 15, 2015 9:39 PM