Changing Color frame resolution to enable faster processing

  • Question

  • Hi,

    Is there any way of reducing the color frame resolution from 1920x1080 to a lower resolution to enable faster processing?


    Monday, May 11, 2015 4:49 PM

All replies

  • By faster processing do you mean have your app run faster? Or reduce the bandwidth of the Kinect's data stream?

    The Kinect is a read-only device, and it does not seem like Microsoft will ever allow that hardware to be controlled through the Windows SDK. However, you can reduce the amount of processing your application does by simply processing less data.

    So instead of processing a color frame with:

    for (int y = 0; y < 1080; y++)
    {
        for (int x = 0; x < 1920; x++)
        {
            int index = y * 1920 + x;
            // Modify data at index
        }
    }

    You could do a 720p extract like so:

    for (int y = 180; y < 900; y++)
    {
        for (int x = 320; x < 1600; x++)
        {
            int index = y * 1920 + x;
            // Modify data at index
        }
    }

    or even a qHD 960x540 image, which would maintain the full field of view:

    for (int y = 0; y < 1080; y += 2)
    {
        for (int x = 0; x < 1920; x += 2)
        {
            int index = y * 1920 + x;
            // Modify data at index
        }
    }

    You would have to place the data that you process into a new, smaller byte array. Keep in mind that because you are copying pixels here, if all you need to do is display a color image, this would actually be slower than just displaying the full 1920x1080 image.
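    As a sketch of that "smaller byte array" idea, here is a hedged example of the qHD extract above done as a copy into a half-resolution buffer. The function name and the 4-byte BGRA pixel layout are my assumptions (the Kinect v2 color frame is commonly converted to BGRA); this is illustrative, not SDK code:

    ```cpp
    #include <cstdint>
    #include <vector>

    // Copy every other pixel (2x decimation) of a BGRA image into a
    // half-resolution buffer. srcWidth and srcHeight are assumed even.
    std::vector<uint8_t> downsampleHalf(const std::vector<uint8_t>& src,
                                        int srcWidth, int srcHeight)
    {
        const int bpp = 4; // bytes per pixel (BGRA)
        const int dstWidth = srcWidth / 2;
        const int dstHeight = srcHeight / 2;
        std::vector<uint8_t> dst(static_cast<size_t>(dstWidth) * dstHeight * bpp);

        for (int y = 0; y < srcHeight; y += 2)
        {
            for (int x = 0; x < srcWidth; x += 2)
            {
                int srcIndex = (y * srcWidth + x) * bpp;
                int dstIndex = ((y / 2) * dstWidth + (x / 2)) * bpp;
                for (int c = 0; c < bpp; c++)
                    dst[dstIndex + c] = src[srcIndex + c];
            }
        }
        return dst;
    }
    ```

    For a 1920x1080 frame this yields the 960x540 qHD image described above.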

    But if you are doing significant processing on the color data, you can reduce the number of pixels you are processing and speed up your application.

    There may even be a way to reduce the resolution on the GPU by using compute shaders, but as far as I know this is currently the fastest way to get a lower-resolution image.

    Hope that helps!

    Monday, May 11, 2015 5:41 PM
  • I see! Okay, so would I be able to pass the 720p color frame into the coordinate mapper function?
    Tuesday, May 12, 2015 12:25 AM
  • As far as I know, you don't pass any color information into the coordinate mapper. The color image has no depth data, so there is no way to map any of it.

    Instead, you "project" the information from the depth camera onto the color image. Then you either modify the color image in some way, or take the color from the projected points to color the depth image as a point cloud or 3D model. (There are functions such as "MapColorFrameToDepthSpace", but these just provide lookup tables for where the depth and 3D coordinate points "project" onto the color frame.)
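    To illustrate the lookup-table idea, here is a hedged sketch. The DepthSpacePoint struct mirrors the SDK's, but the function is my own and the table here is synthetic; a real one would come from MapColorFrameToDepthSpace, which produces one depth-space coordinate per color pixel and marks unmappable pixels with -infinity:

    ```cpp
    #include <cmath>
    #include <cstdint>
    #include <vector>

    // Mirrors the SDK's DepthSpacePoint: depth-image coordinates in pixels.
    struct DepthSpacePoint { float X; float Y; };

    // For each color pixel, use the lookup table to fetch the depth value
    // it projects onto. Unmapped entries (non-finite coordinates) get 0.
    // Illustrative sketch, not SDK code.
    std::vector<uint16_t> depthForColorPixels(
        const std::vector<DepthSpacePoint>& table,  // one entry per color pixel
        const std::vector<uint16_t>& depthFrame,
        int depthWidth, int depthHeight)
    {
        std::vector<uint16_t> out(table.size(), 0);
        for (size_t i = 0; i < table.size(); i++)
        {
            float fx = table[i].X, fy = table[i].Y;
            if (!std::isfinite(fx) || !std::isfinite(fy)) continue;
            int dx = static_cast<int>(fx + 0.5f);
            int dy = static_cast<int>(fy + 0.5f);
            if (dx < 0 || dx >= depthWidth || dy < 0 || dy >= depthHeight) continue;
            out[i] = depthFrame[dy * depthWidth + dx];
        }
        return out;
    }
    ```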

    All of the functions of the SDK itself are more or less "fixed" in terms of performance cost. What you need to do is find the fastest way in your application to process this data, or process less of it, to boost performance.

    If it's the coordinate mapping that is causing performance issues, you could try "MapDepthPointsToColorSpace" and see if it is faster than "MapDepthFrameToColorSpace". In the former you can specify a smaller array and process less data. However, in creating that smaller array you might end up taking more of a performance hit, depending on what the rest of your application is doing.
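    As a sketch of that "smaller array" idea, this hedged helper builds an every-Nth-pixel subset of the depth frame's coordinates and values; the DepthSpacePoint struct and the 512x424 depth resolution match the Kinect v2, but the helper name is mine and it is not part of the SDK:

    ```cpp
    #include <cstdint>
    #include <utility>
    #include <vector>

    struct DepthSpacePoint { float X; float Y; };

    // Build subsampled lists of depth-pixel coordinates and depth values,
    // suitable for handing to a point-list mapping call (such as
    // MapDepthPointsToColorSpace) instead of mapping the full frame.
    std::pair<std::vector<DepthSpacePoint>, std::vector<uint16_t>>
    subsampleDepth(const std::vector<uint16_t>& depthFrame,
                   int width, int height, int step)
    {
        std::vector<DepthSpacePoint> points;
        std::vector<uint16_t> depths;
        for (int y = 0; y < height; y += step)
        {
            for (int x = 0; x < width; x += step)
            {
                points.push_back({static_cast<float>(x), static_cast<float>(y)});
                depths.push_back(depthFrame[y * width + x]);
            }
        }
        return {points, depths};
    }
    ```

    With step = 2 on the Kinect v2's 512x424 depth frame this maps 256x212 points, a quarter of the work.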

    Tuesday, May 12, 2015 5:15 PM