Why do the C++ and C# versions of the same SDK Browser sample produce different results?

  • Question

  • When I open SDK Browser (Kinect for Windows) v2.0 and run the example "Depth Basics-WPF", I get the depth image shown below, and the live video is very slow.

    However, if I run the example "Depth Basics-D2D", it is very fast, but the depth image I get is quite different, as shown below.


    I am curious why there is such a big difference between these two depth images, captured at almost the same time in the same place. I thought "Depth Basics-D2D" and "Depth Basics-WPF" would produce the same depth image. And why does "Depth Basics-D2D" run so much faster than "Depth Basics-WPF"?



    • Edited by icepowder Thursday, March 12, 2015 5:03 PM
    Wednesday, March 11, 2015 7:29 PM

Answers

  • The data shown is the same; it is just displayed differently. The C++ sample wraps the depth from white to black every 256 mm (I think), whereas the C# sample scales the depth from 0.5 m (depth min) to 4.0 m (depth max).

    They did this to show that you can see fine depth detail in the wrapped data, yet still cover large distances.

    The difference in run speed has, I think, been answered elsewhere on the forums, but you can address it by setting your GPU/CPU to always run in maximum power mode, i.e. by turning off adaptive power management.
    • Edited by Phil Noonan Thursday, March 12, 2015 5:10 PM
    • Marked as answer by icepowder Thursday, March 12, 2015 5:26 PM
    Thursday, March 12, 2015 5:07 PM
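    The two visualizations can be sketched as two different depth-to-grayscale mappings. This is only an illustration of the idea, assuming the values quoted in the answer (0.5 m min, 4.0 m max, 256 mm wrap interval); the function names and constants here are made up, not the actual SDK sample code:

    ```cpp
    #include <cassert>
    #include <cstdint>
    #include <cstdio>

    // Assumed reliable depth range, matching the values quoted in the answer.
    constexpr uint16_t kMinDepthMm = 500;   // 0.5 m
    constexpr uint16_t kMaxDepthMm = 4000;  // 4.0 m

    // D2D-style mapping: intensity wraps from white to black every 256 mm,
    // so fine depth detail stays visible at any distance.
    uint8_t WrapIntensity(uint16_t depthMm) {
        return static_cast<uint8_t>(depthMm % 256);
    }

    // WPF-style mapping: the whole 0.5-4.0 m range is scaled onto 0-255,
    // so absolute distance is visible but small steps are compressed.
    uint8_t ScaleIntensity(uint16_t depthMm) {
        if (depthMm < kMinDepthMm) depthMm = kMinDepthMm;
        if (depthMm > kMaxDepthMm) depthMm = kMaxDepthMm;
        return static_cast<uint8_t>(
            (depthMm - kMinDepthMm) * 255 / (kMaxDepthMm - kMinDepthMm));
    }

    int main() {
        // Two points 4 cm apart: the scaled image shows almost the same
        // gray level, while the wrapped image shows a stark difference.
        std::printf("wrap:  %u vs %u\n", WrapIntensity(1500), WrapIntensity(1540));   // 220 vs 4
        std::printf("scale: %u vs %u\n", ScaleIntensity(1500), ScaleIntensity(1540)); // 72 vs 75
        return 0;
    }
    ```

    This is why the two images look so different even though they are built from identical depth frames: the wrapped image shows surface detail as repeating bands, while the scaled image shows overall distance.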

All replies

  • How can I get my account verified? Until the images are uploaded, nobody will be able to figure out what I just asked.
    Wednesday, March 11, 2015 7:42 PM
  • The images have been uploaded now. Can anyone tell me why the results from the C++ and C# code are different? Is a special filter used in either language?

    Thanks,

    Thursday, March 12, 2015 5:00 PM
  • Great answer! Now I understand.

    Thanks,


    Thursday, March 12, 2015 5:27 PM