Extracting hand region from an image using its depth information derived from Kinect


  • Hello everyone!
    I have asked my question on Stack Overflow too, but it seems it was not the right place to ask, so I decided to post it here.

    I have a "Dataset of Leap Motion and Microsoft Kinect hand acquisitions" dataset. It contains acquisitions:

    1. depth.png: Kinect depth map (640 x 480).

    2. depth.bin: raw Kinect depth map (640 x 480, 16-bit short; 0 means no valid value).

    3. rgb.png: Kinect color map (1280 x 960).

    4. leap_motion.csv: Leap Motion parameters.

    The color images (rgb.png) show a person performing hand gestures.

    What I want is to extract only the hand region of each of those images and save it as a separate image. The problem is that the hand is not in exactly the same location in every image. One way might be to use the depth.bin file, since its pixels contain the distance (in millimeters) from the camera plane to the nearest object. The hands are closer to the camera than the body itself, so it should be possible to extract the hand regions by thresholding the depth, but I don't know much about how to do this.
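    For concreteness, something along these lines is what I have in mind (just a rough sketch, not working code: the file names, the 600 mm threshold, and the nearest-neighbor resize to the color resolution are placeholders I made up, and I know the depth and color cameras would probably also need proper registration):

        % Rough sketch: threshold the depth map to isolate the hand region.
        % Assumptions: depth.bin holds 640x480 little-endian 16-bit values in
        % row order, and 600 mm is a made-up cut-off that would need tuning.
        fid = fopen('depth.bin', 'r');
        depth = fread(fid, [640 480], 'uint16');       % 307200 16-bit values
        fclose(fid);
        depth = depth';                                % 480 x 640 (rows x columns)

        handThreshold = 600;                           % assumed distance in mm
        handMask = depth > 0 & depth < handThreshold;  % 0 means "no valid value"

        % The color image is twice the depth resolution (1280 x 960), so the mask
        % has to be resized (and, strictly, registered to the color camera).
        rgb = imread('rgb.png');
        maskBig = imresize(handMask, [size(rgb, 1) size(rgb, 2)], 'nearest');
        handOnly = rgb .* uint8(repmat(maskBig, [1 1 3]));
        imwrite(handOnly, 'hand_only.png');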

    I have read the .bin file in MATLAB and its size is 614400x1. Shouldn't it be 307200x1, since 640 * 480 = 307200? Why is it twice as big? Am I missing something tricky? Please help me get an intuition for these things!
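    If it matters, my guess is that it is related to the precision argument of fread (I may be wrong about this); this is the kind of comparison I mean:

        fid = fopen('depth.bin', 'r');
        asBytes  = fread(fid);                 % default precision is 'uint8', one element per byte
        frewind(fid);
        asShorts = fread(fid, Inf, 'uint16');  % one element per 16-bit value
        fclose(fid);
        numel(asBytes)    % 614400 if read byte-wise (640 * 480 * 2 bytes)
        numel(asShorts)   % 307200 (640 * 480 values)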

    I will explain in more detail if any part of the question is not clear enough!

    Thank you for your patience!

    And the dataset link is http://lttm.dei.unipd.it/downloads/gesture/




