Kinect v2 RGB and Depth Registration

  • Question

  • I was using MATLAB to get depth and RGB images from a Kinect v2. When I do, MATLAB returns 1080 × 1920 RGB data as well as 1080 × 1920 depth data. So can I simply resize both the depth and RGB data to obtain lower resolutions, and thus lower-resolution point clouds (roughly the approach sketched below)? Or will doing so lead to alignment issues?
    Wednesday, August 29, 2018 10:23 PM
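
    For reference, the "resize both" approach in question would look roughly like this in MATLAB (a sketch assuming the Image Acquisition Toolbox Kinect support package; the device indices and scale factor are illustrative):

        colorVid = videoinput('kinect', 1);            % Kinect color stream
        depthVid = videoinput('kinect', 2);            % Kinect depth stream
        rgb   = getsnapshot(colorVid);
        depth = getsnapshot(depthVid);

        % Naive uniform resize of both images before building a point cloud.
        rgbSmall   = imresize(rgb, 0.5);               % bilinear by default
        depthSmall = imresize(depth, 0.5, 'nearest');  % nearest avoids inventing depth values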

All replies

  • You probably can, but before doing so, make sure to map the depth to the color so that the depth has the same resolution as the color image. That way the two are already aligned before you uniformly downscale them, so the result will also be aligned (see the sketch after this reply).
    Thursday, August 30, 2018 8:53 AM
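
    In MATLAB terms, that could look like the following sketch (pcfromkinect and pcdownsample are real Computer Vision Toolbox functions; depthDevice, depthImage, and colorImage are assumed to come from the acquisition above, and the grid step is an assumption):

        % Align depth to the color frame first, so both share one geometry.
        ptCloud = pcfromkinect(depthDevice, depthImage, colorImage, 'colorCentric');

        % Only then reduce resolution, here with a grid filter on the cloud itself.
        ptCloudLow = pcdownsample(ptCloud, 'gridAverage', 0.01);  % 1 cm cells (assumed)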
  • Hi, 

    Thank you. I tried that, but when I collect data using the "colorCentric" option of the "alignment" argument of "pcfromkinect" in MATLAB, downscaling causes scattering of the points. The resulting point cloud is more or less satisfactory, but not good enough for high-accuracy applications. Doing the same with the "depthCentric" option instead, the results are much better and cleaner, but with an obvious loss of resolution (see the sketch at the end of this post). I don't know why this effect is happening. Any insight into it would help me increase my operating resolution. Thanks a lot in advance.

    Kind Regards

    Pranjal Biswas

    Monday, September 3, 2018 5:19 PM
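
    For reference, the comparison described above (the calls are the documented pcfromkinect signature; the side-by-side visualization is just one way to inspect the difference):

        % colorCentric: depth is mapped onto the 1080x1920 color grid.
        ptColor = pcfromkinect(depthDevice, depthImage, colorImage, 'colorCentric');

        % depthCentric: color is mapped onto the 424x512 depth grid.
        ptDepth = pcfromkinect(depthDevice, depthImage, colorImage, 'depthCentric');

        % Inspect both organized clouds side by side.
        pcshowpair(ptColor, ptDepth);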
  • Sorry, but I haven't used the MATLAB API; I'm only familiar with the C++/C# versions.

    So you have both color and depth at 1920 × 1080 resolution and you are downscaling both of them? Can you describe the scattering? Do you mean that there's more space between the points?

    I'm thinking this has to do with the mapping of color to depth. The API has its own mapper, which is definitely not straightforward (it takes various factors into account). On the other hand, if you upscale depth to 1920 × 1080, it has to duplicate data to make that happen, and then you are downscaling it in a more straightforward way than the CoordinateMapper of the API does. One transformation takes depth into account, while the other is a depth-agnostic 2D downscale. So it doesn't seem too weird to see things like this happening.

    Also, if you are downscaling, you are mostly discarding data. So if you discard data in a uniform way, it will probably make the result sparser than intended (sketch below).
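
    In MATLAB terms (I haven't used that API, so treat this as a sketch from the docs; pcdownsample is a real Computer Vision Toolbox call, the parameters are guesses):

        % Uniform/random discarding just thins the cloud and widens the gaps...
        thinned  = pcdownsample(ptCloud, 'random', 0.25);        % keep ~25% of points
        % ...whereas a grid filter averages neighbors inside each cell instead.
        averaged = pcdownsample(ptCloud, 'gridAverage', 0.005);  % 5 mm cells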

    Not sure if there's anything else you can do for that.

    The only thing that comes to mind is to use different distance offsets between the points in the point cloud: at high resolution, use the normal offset; at a lower resolution, use a smaller offset to keep the points closer together.

    You'll have to try it out I guess.

    Monday, September 3, 2018 6:14 PM
  • Thank you for your input. Apparently I made a very trivial error. I calculated the K (intrinsic parameter) matrix using IR (depth camera) images, while I was attempting to create the point cloud from depth images registered onto images from the RGB camera, which changes the principal point and focal length, i.e. the K matrix. This is also why the depth image came out at the same (higher) resolution as the RGB image. After registering the RGB image onto the depth image instead, the point clouds were exactly as expected. Although this reduces the resolution of the RGB image to that of the depth image, it preserves the actual spatial geometry of the point cloud without any unexpected scattering of points (see the sketch below).
    Tuesday, September 4, 2018 5:13 PM
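
    As a sketch of what this fix amounts to (the intrinsics below are placeholders, not calibrated values, and rgbOnDepth stands for the RGB image registered onto the depth grid): with K estimated from the IR/depth camera, each depth pixel (u, v) with depth Z back-projects as X = (u - cx) * Z / fx and Y = (v - cy) * Z / fy.

        % Placeholder Kinect v2 depth-camera intrinsics; use your own calibration.
        fx = 365.0; fy = 365.0; cx = 256.0; cy = 212.0;

        Z = double(depthImage) / 1000;                 % raw depth in mm -> meters
        [u, v] = meshgrid(0:size(Z,2)-1, 0:size(Z,1)-1);
        X = (u - cx) .* Z / fx;
        Y = (v - cy) .* Z / fy;

        % Organized point cloud on the depth grid, colored by the registered RGB.
        ptCloud = pointCloud(cat(3, X, Y, Z), 'Color', rgbOnDepth);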