Localizing a region on the depth map, given the bounding box of the corresponding region on the color frame

  • Question

  • Hi guys,

    I'm working with the Kinect v2 for face analysis, using my own 3D face tracker. With the Kinect v1, the color and depth frames have the same resolution and are aligned (via OpenNI), so this was not an issue. With the Kinect v2, however, I haven't found a direct way to map the bounding box of a face detected with OpenCV on the color image onto the depth map. I need the bounding box so I can extract a small point cloud for faster processing.

    Any help will be greatly appreciated!

    Thanks in advance,

    Friday, September 25, 2015 8:16 PM

All replies

  • Well, the issue is that if you're using your own algorithms, you'll have to work out your own correlation coefficients. You're pretty much on your own there.

    One thing to keep in mind is that, on the display side, your display settings also need to be factored into the correlation, especially on Windows 8.1 and Windows 10. DPI scaling can make the face-box region you get from the rendered color frame differ across monitors and displays with different resolutions.

    With all that said, you'll need to use some calibration technique to obtain the coefficients for the offsets between the depth camera and the color camera on your device. Once you have those, you can use simple math to convert the region's coordinates from color space to depth space. Hopefully that helps you out some.
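    Just to illustrate the idea, here is a minimal sketch of what that "simple math" might look like, assuming a purely affine scale-plus-offset model. The struct names and the calibration values are placeholders you'd fill in from your own calibration, and a real color-to-depth mapping also varies with the measured depth (parallax), which this ignores:

    ```cpp
    // Sketch only: affine color-box -> depth-box mapping with calibrated
    // scale and offset. Values below come from your own calibration.
    struct BoundingBox { int x, y, width, height; };

    struct ColorToDepthCalib {
        float scaleX, scaleY;   // resolution ratio between color and depth frames
        float offsetX, offsetY; // pixel offsets found by calibration
    };

    BoundingBox colorBoxToDepthBox(const BoundingBox& colorBox,
                                   const ColorToDepthCalib& c)
    {
        BoundingBox d;
        d.x      = static_cast<int>(colorBox.x * c.scaleX + c.offsetX);
        d.y      = static_cast<int>(colorBox.y * c.scaleY + c.offsetY);
        d.width  = static_cast<int>(colorBox.width  * c.scaleX);
        d.height = static_cast<int>(colorBox.height * c.scaleY);
        return d;
    }
    ```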

    Now, may I ask why you aren't using the CoordinateMapper? You could just grab the color and depth frames from the SDK and pass those arrays to OpenCV for your point cloud.
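    To make that concrete, here is a rough C++ sketch of what the CoordinateMapper route could look like with the SDK 2.0 native API. How you acquire the depth buffer and the face box is assumed, and error handling is trimmed:

    ```cpp
    // Map an OpenCV face box (1920x1080 color coordinates) to a depth-space
    // rectangle using ICoordinateMapper::MapColorFrameToDepthSpace.
    #include <Kinect.h>
    #include <vector>
    #include <algorithm>
    #include <cmath>

    struct DepthRect { int minX, minY, maxX, maxY; };

    DepthRect ColorBoxToDepthRect(ICoordinateMapper* mapper,
                                  const UINT16* depthBuffer,   // 512 * 424 values
                                  int boxX, int boxY, int boxW, int boxH)
    {
        const int colorW = 1920, colorH = 1080;
        const UINT depthSize = 512 * 424;

        // One DepthSpacePoint per color pixel: where that color pixel lands in depth space.
        std::vector<DepthSpacePoint> colorToDepth(colorW * colorH);
        HRESULT hr = mapper->MapColorFrameToDepthSpace(
            depthSize, depthBuffer,
            static_cast<UINT>(colorToDepth.size()), colorToDepth.data());
        if (FAILED(hr))
            return { 0, 0, 0, 0 };

        DepthRect r = { 512, 424, 0, 0 };
        for (int y = std::max(0, boxY); y < boxY + boxH && y < colorH; ++y) {
            for (int x = std::max(0, boxX); x < boxX + boxW && x < colorW; ++x) {
                const DepthSpacePoint& p = colorToDepth[y * colorW + x];
                if (std::isinf(p.X) || std::isinf(p.Y))  // no depth sample for this color pixel
                    continue;
                int dx = static_cast<int>(p.X + 0.5f);
                int dy = static_cast<int>(p.Y + 0.5f);
                r.minX = std::min(r.minX, dx); r.minY = std::min(r.minY, dy);
                r.maxX = std::max(r.maxX, dx); r.maxY = std::max(r.maxY, dy);
            }
        }
        return r;  // depth-space bounding box of the face; crop the depth map here
    }
    ```

    If all you want is the point cloud, MapColorFrameToCameraSpace should work the same way but fills one CameraSpacePoint (in meters) per color pixel, so you could take just the points inside the face box and skip the depth crop entirely.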


    Sr. Enterprise Architect | Trainer | Consultant | MCT | MCSD | MCPD | SharePoint TS | MS Virtual TS |Windows 8 App Store Developer | Linux Gentoo Geek | Raspberry Pi Owner | Micro .Net Developer | Kinect For Windows Device Developer |blog: http://dgoins.wordpress.com

    Sunday, September 27, 2015 4:53 PM