Upsampling the Kinect depth image

  • Question

  • Hello, 

    I am looking for a way to (cheaply) upsample the Kinect depth image. I am using recorded 3D point clouds in a virtual reality scenario, and the 512 by 424 pixels provided by the camera aren't enough. 

    I created a custom file format and wrote myself some CLI tools for inserting interpolated points into the stream, yet was wondering if it could be done at the source - like blowing up the depth image prior to mapping its points to the video stream, maybe by using "GetDepthFrameToCameraSpaceTable" (https://msdn.microsoft.com/en-us/library/microsoft.kinect.coordinatemapper.getdepthframetocameraspacetable.aspx)?
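
    For reference, here is my understanding of what that table provides - a minimal C# sketch; the rawDepth array is just a stand-in for one depth frame's data:

        // Minimal sketch (Kinect SDK v2, C#): the table has one PointF entry
        // per depth pixel (512 x 424). Multiplying an entry by the depth in
        // meters gives the camera-space X and Y; the depth itself becomes Z.
        using Microsoft.Kinect;

        KinectSensor sensor = KinectSensor.GetDefault();
        sensor.Open();
        PointF[] table = sensor.CoordinateMapper.GetDepthFrameToCameraSpaceTable();

        int index = (424 / 2) * 512 + (512 / 2);      // center depth pixel
        float depthMeters = rawDepth[index] / 1000f;  // rawDepth: ushort[] in mm (stand-in)
        var p = new CameraSpacePoint
        {
            X = table[index].X * depthMeters,
            Y = table[index].Y * depthMeters,
            Z = depthMeters
        };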

    Any help is much appreciated, 

    Christophe Leske

    multimedial.de


    KINECT V2

    Saturday, May 16, 2015 12:36 PM

Answers

  • have a look at: https://social.msdn.microsoft.com/Forums/en-US/f29a202a-fa27-4cfc-9079-5addad0906e0/how-can-i-map-a-depth-frame-to-camera-space-without-having-a-kinect-on-hand?forum=kinectv2sdk.


    Carmine Sirignano - MSFT

    Wednesday, May 20, 2015 9:38 PM

All replies

  • So are you wanting to expand the depth stream and map a higher resolution of color to the expanded depth stream?

    The depth-frame-to-camera-space table is for mapping color and depth together; to my understanding, that's what the coordinate mapper uses.

    Assuming that's what you want, I'd recommend evenly expanding the depth stream (adding a 3D point in between each pair of existing 3D points) and then using the current mapper. Say it reports that

    depth index 1 is color coordinate 127.25, 104.34

    and index 2 is 127.20, 105.44,

    then the color coordinate for the new pixel in between those should be 127.225, 104.89.
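
    In code, that averaging is just a linear interpolation; a minimal C# sketch, assuming a colorPoints array filled by CoordinateMapper.MapDepthFrameToColorSpace (the names here are illustrative):

        using Microsoft.Kinect;

        // Hypothetical helper: average the mapped color coordinates of two
        // neighboring depth pixels to get the coordinate for an inserted point.
        ColorSpacePoint Midpoint(ColorSpacePoint a, ColorSpacePoint b)
        {
            return new ColorSpacePoint
            {
                X = (a.X + b.X) * 0.5f, // e.g. (127.25 + 127.20) / 2 = 127.225
                Y = (a.Y + b.Y) * 0.5f  // e.g. (104.34 + 105.44) / 2 = 104.89
            };
        }

        // Color coordinate for a point inserted between depth pixels 1 and 2:
        ColorSpacePoint mid = Midpoint(colorPoints[1], colorPoints[2]);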

    Wednesday, May 20, 2015 12:07 AM
  • Hello Jacob, thanks for your reply, which makes total sense, but it is more or less what I am doing already. I was hoping to get the interpolation for free by just blowing up the depth image resolution - so instead of having to calculate the interpolated points myself, just submitting a bigger depth image to the method - but apparently this is not enough. The thing is, I'd ideally like to do this upsampling in real time. Another option I am looking at is using compute shaders to interpolate points and colors for me at runtime.

    KINECT V2

    Thursday, May 21, 2015 5:21 AM
  • Hello Carmine, thank you very much for your answer as well. It is extremely useful, as it allows for a transportable format. I am not sure what William Legare is after, but he seems to have a similar scenario in mind as I have.

    Carmine, you are difficult to get hold of - I tried to hunt down your email address in order to send you some questions. As stated before, I am working on a 3D point cloud recording system for use in virtual reality scenarios and have some questions about that, like how to properly fuse multiple Kinect recordings into one output in order to increase the resolution. I am currently looking at getting 4 Kinects hooked up to 4 Intel NUCs, networked together and synced over the network for the recording, in order to increase the space and resolution being recorded.

    It is my opinion that there is a need for software tools for properly recording, editing and postprocessing 3D videos, including side aspects such as geometry reconstruction and effects. I seem to be the only one with this opinion, however, which is why I am looking for help and funding in this area. Are there any plans or specs already available for a Kinect 3?

    I also built myself a mobile 3D recording rig using an external battery pack and a laptop so that I could record anything anywhere; the results are quite promising. Seriously, you guys should advertise the Kinect for VR much more aggressively and provide tools for it accordingly. Or, as some journalists have put it, the Kinect will stay a device in search of an application, which would be a shame given its great capabilities.

    KINECT V2

    Thursday, May 21, 2015 5:32 AM
  • Have you also seen the RoomAlive Toolkit? This is a project that maps multiple Kinects and projectors together to create a fully immersive 3D space. It was released as open source and is available on GitHub:

    http://channel9.msdn.com/coding4fun/kinect/RoomAlive-Toolkit--Hacking-Augmented-Reality-with-Kinect

    https://github.com/Kinect/RoomAliveToolkit


    Carmine Sirignano - MSFT

    Tuesday, May 26, 2015 4:39 PM
  • Hello Carmine, 

    thanks for your answer. Yes, I am aware of the RoomAlive Toolkit; I saw it at the time it was released. But I have no projectors - I am more interested in virtual reality scenarios involving HMDs and multiple Kinects. :-)

    I have a couple of questions: 

    - Is there any application that allows me to record the depth stream of the Kinect side by side with the color information coming from the camera to a video file? Or even just display both streams simultaneously?

    I am also trying to get in touch with Steve Sullivan, who is apparently now working on something similar on the HoloLens project: http://www.technologyreview.com/news/537651/microsofts-hololens-will-put-realistic-3-d-people-in-your-living-room/

    I would like to show him my results with the Kinects, if he is interested; I just can't figure out how to get in touch with him directly. 

    Is HoloLens using the current Kinect in its setup? Will there be a Kinect 3 anytime soon?

    Thank you for being available for questions on this forum, 

    Christophe Leske - multimedial.de


    KINECT V2


    Wednesday, May 27, 2015 12:15 PM
  • Hi Christophe,

    You are looking to do exactly what I would like to do, but I'm only just starting (and limited by spare time!). How much progress have you made? I would be really interested in your project. I am slowly collecting Kinects and looking at the lowest-spec PC that will drive them.

    Max

    Friday, July 31, 2015 10:44 AM
  • Hello Max, 

    we have made quite some progress - see my site http://pointcloud.multimedial.de, especially the demo of the 4x interpolation at http://pointcloud.multimedial.de/verbesserte-darstellung/

    As for the lowest-spec PC that will drive them: we use Intel NUCs here. A USB 3.0 port will suffice in general. You'll add one Kinect's worth of resolution for about 350€ (NUC + SSD + RAM + Kinect), which in turn means that you can get a 1K 3D resolution Kinect rig with 4 cams for under 1500€.

    Regarding interpolation: 

    the demo provided on my site is CPU-based (a C# script in Unity doing the interpolation at runtime). I also have GPU-based interpolation working, which is much faster (real time), but I have no screenshot of it available. Plus, it only works horizontally for the moment, but I am on it. 

    Additionally, I can give you the hint that you can read out the depth table from the camera. If you assign its values to an image and scale that up, you get a scalable (= interpolated) depth table for arbitrary resolutions... :-) A rough sketch of the idea follows below. The CoordinateMapper API also has methods for converting single points: see https://msdn.microsoft.com/en-us/library/windowspreview.kinect.coordinatemapper.aspx, especially the last two methods.
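
    A minimal C# sketch of that scaling idea, assuming the Kinect SDK v2 and plain bilinear interpolation (the class and method names are illustrative, not from the SDK):

        using System;
        using Microsoft.Kinect;

        static class TableUpscaler
        {
            // Bilinearly upscale the 512x424 depth-to-camera-space table by an
            // integer factor, yielding an interpolated table for a "virtual"
            // higher-resolution depth image.
            public static PointF[] UpscaleTable(PointF[] table, int w, int h, int factor)
            {
                int W = w * factor, H = h * factor;
                var result = new PointF[W * H];
                for (int y = 0; y < H; y++)
                {
                    float sy = (float)y / factor;             // source row
                    int y0 = Math.Min((int)sy, h - 2);
                    float fy = Math.Min(sy - y0, 1f);         // clamp at border
                    for (int x = 0; x < W; x++)
                    {
                        float sx = (float)x / factor;         // source column
                        int x0 = Math.Min((int)sx, w - 2);
                        float fx = Math.Min(sx - x0, 1f);
                        PointF p00 = table[y0 * w + x0],       p10 = table[y0 * w + x0 + 1];
                        PointF p01 = table[(y0 + 1) * w + x0], p11 = table[(y0 + 1) * w + x0 + 1];
                        result[y * W + x] = new PointF
                        {
                            X = Lerp(Lerp(p00.X, p10.X, fx), Lerp(p01.X, p11.X, fx), fy),
                            Y = Lerp(Lerp(p00.Y, p10.Y, fx), Lerp(p01.Y, p11.Y, fy == 0 ? fx : fx), fy)
                        };
                    }
                }
                return result;
            }

            static float Lerp(float a, float b, float t) { return a + (b - a) * t; }
        }

    Feeding an interpolated depth value through an entry of the upscaled table (X and Y multiplied by the depth in meters, Z = the depth itself) should then give a camera-space point at the higher resolution.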

    We are also considering C++ AMP (https://msdn.microsoft.com/de-de/library/Hh265137.aspx).


    KINECT V2


    Friday, July 31, 2015 11:38 AM
  • Many thanks, I'll take a look at the demo on my DK2.

    I'm trying to work out the lowest-cost system for a Kinect 'node' (PC with USB 3.0 + Kinect 2 + adapter), since this is a hobby project and I can't keep buying PCs! 

    Your stuff looks very interesting. I'll try to keep up with your developments. I assume you're trying to do an environment capture for VR using the multi-Kinect setup (as am I). Have you built a chassis to hold the Kinects?

    I also hope that one day my STEM system turns up so I can place a sensor on the rig to track its position.

    Good luck.

    Max



    Friday, July 31, 2015 3:50 PM
  • Just found the blog of a guy doing fantastic work on point clouds and fusing them, sometimes working with a Kinect. Not sure if people have seen this before.

    http://www.thomaswhelan.ie/

    He now works at Oculus in their research department (very jealous!).

    Max

    Tuesday, August 11, 2015 1:38 PM