Pixellation in the RGB images and firmware calibration parameters

  • Question

  • Hi,

I think there are two pressing issues that make the SDK pretty much unusable for a lot of applications (such as mine):

    1. Pixellation on the edges in the RGB image. I see no reason why it should be there:

The second image is a detail of the first one near the door handle. Note that this is not an artifact of my magnification software; the image actually looks like this. And no, please don't tell me to "blur your image" or something like that. This is just plain horrible if one wants to use the SDK to capture images to train object recognition algorithms, do visual tracking, etc. Why is this happening? Can't you just supply a normal image like the N million webcam applications do?

    2. Getting the calibration parameters from the Kinect firmware. "Read a few values and interpolate yourself" is not an answer. The values are there in the firmware; it would take less than a day to add functions to the SDK to read them out. This is dead easy in OpenNI, yet the solutions offered here are plain wrong. I am working on a crowdsourced Kinect data set for research. I could just store a few floating-point numbers corresponding to the calibration parameters and align depth & RGB as a post-processing step, on demand.

    But now I have to resort to doing two things:

    2.1 Query the color-from-depth mapping function for a few samples for each Kinect user that connects to my website and do linear interpolation of some sort (ughhhh!!!)

    2.2 Save raw RGB images as well as RGB images aligned with depth, which pretty much doubles the data that needs to be uploaded by the users.
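For what it's worth, the workaround in 2.1 can be sketched roughly as follows. This is a minimal NumPy illustration with made-up sample correspondences and a hypothetical affine-plus-inverse-depth model; it is not the SDK's actual calibration model, just the kind of cheap fit one is forced into:

```python
import numpy as np

# Hypothetical sampled correspondences: (x, y, depth_mm) in the depth image,
# and the (u, v) color-pixel coordinates the SDK's mapping call returned for them.
depth_pts = np.array([
    [100, 100, 1500],
    [500, 100, 1500],
    [100, 400, 2000],
    [500, 400, 2000],
    [320, 240, 1750],
], dtype=float)
color_pts = np.array([
    [112,  96],
    [508,  97],
    [109, 398],
    [505, 401],
    [317, 238],
], dtype=float)

# Fit [u, v] ~ A @ [x, y, 1/z, 1] by least squares; the 1/z term is a crude
# stand-in for depth-dependent disparity between the two cameras.
features = np.column_stack([
    depth_pts[:, 0],
    depth_pts[:, 1],
    1.0 / depth_pts[:, 2],
    np.ones(len(depth_pts)),
])
A, *_ = np.linalg.lstsq(features, color_pts, rcond=None)

def depth_to_color(x, y, z):
    """Approximate the color pixel for depth pixel (x, y) at depth z mm."""
    return np.array([x, y, 1.0 / z, 1.0]) @ A
```

Only the fitted coefficients would then need to be uploaded per user, instead of doubled image data, but the fit is only as good as the sampled points and the assumed model.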

    I believe solving these issues should be pretty simple, especially the second issue. 

     Will these features ever be included, and when? Please answer this so that we know whether to use OpenNI instead.

    • Edited by Alper Aydemir Monday, March 19, 2012 3:28 PM better wording
    Sunday, March 18, 2012 11:47 AM

All replies

  • Hello Alper,

    For your first issue -- bad image quality -- it looks like you are using the default 30 FPS Bayer image format.  Have you tried using one of the YUV formats instead?  In order to achieve 30 FPS of image and depth data we use a Bayer image format; the "gridding" you see is most likely related to the bilinear interpolation demosaicing of the Bayer data.  Can you try the YUV format and see if it's a better match for your needs?
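To illustrate the mechanism (this is an illustrative NumPy sketch of generic bilinear demosaicing on an RGGB mosaic, not the sensor's actual pipeline): reconstructing a synthetic gray edge from a Bayer mosaic leaves false color on the pixels next to the edge, which is the kind of gridding/zipper artifact described above.

```python
import numpy as np

def conv3(img, k):
    """3x3 convolution with edge-replicated borders."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + h, dx:dx + w]
    return out

def demosaic_bilinear(mosaic):
    """Bilinear demosaic of an RGGB Bayer mosaic (R at even/even, B at odd/odd)."""
    h, w = mosaic.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0   # average 4-neighbor greens
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0   # average nearest reds/blues
    return np.dstack([conv3(mosaic * r_mask, k_rb),
                      conv3(mosaic * g_mask, k_g),
                      conv3(mosaic * b_mask, k_rb)])

# A gray scene with a sharp vertical edge: columns 0-3 are 50, columns 4-7 are 200.
# Since the scene is gray, its Bayer mosaic equals the scene itself.
scene = np.full((8, 8), 50.0)
scene[:, 4:] = 200.0
rgb = demosaic_bilinear(scene)
```

In the flat regions the three reconstructed channels agree, but at pixels adjacent to the edge they disagree (e.g. R is pulled toward the bright side while G and B are not), producing exactly the colored stair-stepping along edges that better demosaicing algorithms try to suppress.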

    There's one more alternative which will probably give the best results but is the most work: Use the High Resolution (10/12 FPS) mode of the sensor, which will give you either a 1280x1024 (for earlier XBox devices) or a 1280x960 (for Kinect for Windows devices) resolution Bayer image.  You can then downsample this image to 640x480 using your choice of filtering options.
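The downsample step itself is straightforward; as one example (a minimal NumPy sketch, with a simple 2x2 area average standing in for your filtering choice of preference), taking a 1280x960 frame down to 640x480:

```python
import numpy as np

def downsample_2x2(img):
    """Average each 2x2 block: (H, W, C) -> (H/2, W/2, C). H and W must be even."""
    h, w, c = img.shape
    return img.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

frame = np.zeros((960, 1280, 3))   # stand-in for a high-resolution RGB frame
small = downsample_2x2(frame)      # 640x480 result
```

A fancier filter (Gaussian, Lanczos, etc.) will trade sharpness against aliasing differently, but even plain averaging suppresses most of the per-pixel demosaicing noise.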

    We can also look into providing better processing of the Bayer signal using other demosaicing algorithms, but these algorithms may introduce extra processing cost, and we need to stay within a tight CPU budget to maintain 30 FPS video feeds across multiple Kinects while still leaving enough time for the user to do something useful with the data.

    Here is what I see with each of these methods:

    I'm following up with some other folks regarding your second issue -- exposing calibration parameters.


    -- Jon

    Monday, March 19, 2012 7:00 PM
  • Hi Alper,

    I spoke with someone working on the runtime side for your second issue -- the calibration parameters.  They have received this feedback from other users and it is being considered.


    -- Jon

    Monday, March 19, 2012 8:07 PM
  • Any update on releasing those calibration parameters?  

    Wednesday, December 19, 2012 6:03 PM
  • Hi Jon,

    I have been waiting for this information forever.  It shouldn't be that hard to provide access to those calibration parameters.  Now I have to do my work all over again using PLC.  Oh right, and I can't use my brand new 4 Kinect for Windows sensors with that!

    Thanks Microsoft

    Monday, January 7, 2013 7:58 PM
  • Have you tried the 1.6 SDK and played with the Exposure and Color settings that are now available (see Kinect Explorer-WPF)?
    Saturday, January 12, 2013 3:57 AM