Set Exposure of Kinect 2 Color Camera RRS feed

  • Question

  • I am aware that there is a method to retrieve the camera settings used for a given color frame, but I can't seem to find a method for changing camera settings (exposure, etc.). How do I do this?



    Tuesday, November 25, 2014 9:47 PM



All replies

  • This has been discussed many times. The Kinect v2 SDK does not provide camera/runtime-level changes for the device because the core principle is a shared sensor between multiple applications: https://social.msdn.microsoft.com/Forums/en-US/home?forum=kinectv2sdk&sort=relevancedesc&brandIgnore=True&searchTerm=exposure

    You will have to implement your own image processing to change whichever properties are specific to your application.
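    For intuition, "your own image processing" here would mean something like applying a software gain to each color frame after capture. A minimal sketch (generic C++, not Kinect-specific; the `ApplyGain` name, BGRA buffer layout, and gain value are assumptions for illustration):

    ```cpp
    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Apply a software "exposure" gain to an 8-bit BGRA frame in place.
    // Note the fundamental limit of post-processing: values the camera
    // already clipped at 255 cannot be recovered by any gain.
    void ApplyGain(std::vector<uint8_t>& bgra, float gain)
    {
        for (std::size_t i = 0; i < bgra.size(); ++i)
        {
            // Skip the alpha channel (every 4th byte in BGRA).
            if (i % 4 == 3) continue;
            float v = bgra[i] * gain;
            bgra[i] = static_cast<uint8_t>(std::min(v, 255.0f));
        }
    }
    ```

    You would run this on a frame copied out of the SDK (e.g., via the color frame's copy methods) before display or recording.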

    Carmine Sirignano - MSFT

    Wednesday, November 26, 2014 6:34 PM
  • I humbly submit that this is really sub-optimal. I have multiple cameras looking at the same scene from different angles, and each one is dynamically adjusting its exposure. It's a mess, and there is no sound reason for it.

    Respectfully, I urge the Kinect 2 SDK team to give us some kind of control over the color camera settings. This sentiment seems to be echoed by all of the posters in the threads you linked.

    • Edited by MikeS27 Wednesday, November 26, 2014 8:28 PM
    Wednesday, November 26, 2014 8:27 PM
  • Agreed, the lack of camera control combined with an overly sensitive 15fps low-light mode makes the v2 sensor inferior to Kinect v1 and competitor sensors for many applications and projects.

    Post-processing is not an option in most cases, as the data simply doesn't exist in an 8-bit image.

    The request/discussion started during pre-release in Dec 2013 and unfortunately hasn't progressed since.


    Wednesday, November 26, 2014 9:19 PM
  • 12 months (and ticking!) is an absurdly long time to wait for a basic and critical feature like this. Not to mention that, as others have indicated, a global camera configuration control panel (effective on all outbound streams) would be simple to implement and would not interfere with multi-app streaming. What is it going to take to get this done? Why put out this amazing piece of hardware and then needlessly put obstacles in the way of developers who are trying to make a living by creating a market for your product?

    • Edited by MikeS27 Thursday, November 27, 2014 8:40 AM
    Thursday, November 27, 2014 8:38 AM
  • Carmine, the rationale that manual control isn't possible because the sensor is shared between apps makes even less sense considering that the v2 SDK already allows manual control of the audio beam angle.

    This is taken from the AudioBasics example:

    // Uncomment these two lines to overwrite the automatic mode of the audio beam.
    // It will change the beam mode to manual and set the desired beam angle.
    // In this example, point it straight forward.
    // Note that setting beam mode and beam angle will only work if the
    // application window is in the foreground.
    // Furthermore, setting these values is an asynchronous operation --
    // it may take a short period of time for the beam to adjust.
    audioSource.AudioBeams[0].AudioBeamMode = AudioBeamMode.Manual;
    audioSource.AudioBeams[0].BeamAngle = 0;

    Am I reading this wrong? There is already a precedent in the SDK for manual control of a feature, so why can't the color camera follow the same paradigm?

    If there is a specific limitation in the hardware itself, that is one thing. But if the user base needs something changed, and it's possible for the SDK team to do it, it seems like a no-brainer.

    Or at the very least please implement an absolute timestamp so we can reliably sync an external camera to the depth and infrared data.

    And again thank you for all your hard work and patience on the forums =)

    Tuesday, December 2, 2014 6:18 AM
  • The color exposure is a fixed function of the camera, so exposure cannot be controlled by software; it would require a firmware change.

    Audio is a different system, and its data sizes are extremely small compared to color; audio processing stays in the low-kbit range. Beam-angle correlation to give you a body is likewise very small and lightweight.

    Any type of time synchronization would require knowledge of a common clock between the two systems. This requires calibration, which you can set up based on your knowledge of both the Kinect system and the camera setup. Personally, I would not run these on the same computer, since Windows is not a real-time operating system: because you cannot control the scheduler, there is no guarantee that time slices will align between the two camera capture systems.
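    The calibration step described here can be sketched as a linear fit between timestamps of events seen on both clocks (e.g., a flash visible to both cameras). This is a generic illustration, not part of any SDK; the `CalibrateClocks` and `ToCameraTime` names are hypothetical:

    ```cpp
    #include <utility>
    #include <vector>

    // Fit cameraTime ~= scale * kinectTime + offset by least squares over
    // event pairs observed on both clocks. The scale term absorbs clock
    // drift; the offset term absorbs the epoch difference.
    struct ClockMap { double scale; double offset; };

    ClockMap CalibrateClocks(const std::vector<std::pair<double, double>>& pairs)
    {
        double n = static_cast<double>(pairs.size());
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (const auto& p : pairs)
        {
            sx += p.first;   sy += p.second;
            sxx += p.first * p.first;   sxy += p.first * p.second;
        }
        double scale = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        double offset = (sy - scale * sx) / n;
        return { scale, offset };
    }

    // Map a Kinect-relative timestamp onto the external camera's clock.
    double ToCameraTime(const ClockMap& m, double kinectTime)
    {
        return m.scale * kinectTime + m.offset;
    }
    ```

    As the reply notes, even with this mapping the OS scheduler adds jitter, so the residual error of the fit is a useful sanity check on how tight the sync can be.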

    Carmine Sirignano - MSFT

    Tuesday, December 2, 2014 6:37 PM
  • Thanks for the additional info Carmine!

    If exposure is fixed, how does the 15fps low light mode work with that?

    And could that in theory be disabled, or the switching point be controlled in some way so it's less sensitive?


    Tuesday, December 2, 2014 7:07 PM
  • Thanks Carmine. It's good to know there is a legitimate reason for manual exposure not being implemented yet. 

    Brekel, it sounds like the 15fps "switch" is part of how the firmware handles auto exposure. Any change to how the camera behaves will probably require a firmware update.

    Hopefully the team will be able to add this to the firmware in the near future. In the meantime I'll investigate post syncing to an external camera. Hopefully it won't be too much of a hassle for the end user.

    Thanks again!

    Tuesday, December 2, 2014 7:48 PM
  • Luckily a firmware update is not impossible :)
    One happened during the pre-release, and I think it now actually auto-updates when needed.


    Tuesday, December 2, 2014 8:35 PM
  • Thanks, Carmine, for your information.

    1. Is there any way to find the brand or other information about the camera module used in the Kinect device? If we can identify it, maybe we can find a basic API or software for that camera that can set the exposure time, etc.

    2. Also, which algorithm or criteria does the auto exposure in the firmware use? If we knew the algorithm, or even the region of interest used for metering, maybe we could find a workaround: adjust the environment lighting so the camera settles near our desired exposure time.
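    The actual Kinect v2 firmware algorithm is undocumented, but a common auto-exposure pattern meters mean scene luminance against a target and applies a damped correction. A generic sketch for intuition only (the `NextExposure` name, target value, and damping factor are assumptions, not the firmware's behavior):

    ```cpp
    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // One step of a mean-luminance auto-exposure controller over an
    // 8-bit grayscale metering region. Not the Kinect firmware algorithm;
    // just the textbook pattern such controllers typically follow.
    double NextExposure(const std::vector<uint8_t>& gray,
                        double currentExposure,
                        double targetMean = 118.0)  // ~18% gray in 8-bit
    {
        if (gray.empty()) return currentExposure;
        double sum = 0;
        for (uint8_t v : gray) sum += v;
        double mean = sum / gray.size();
        // Proportional correction: scene meters dark -> raise exposure.
        double ratio = targetMean / std::max(mean, 1.0);
        // Damp the step so exposure converges instead of oscillating.
        return currentExposure * (1.0 + 0.5 * (ratio - 1.0));
    }
    ```

    If something like this is running in the firmware, then keeping the metered region near the target brightness (e.g., with a gray card or a controlled light in view) would hold exposure near a desired value, which is essentially the lighting trick asked about in question 2.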

    • Edited by Moha Es Saturday, June 17, 2017 4:42 PM
    Saturday, June 17, 2017 4:41 PM
  • If you look at the following project it'll show you how to disable auto exposure and control it manually through code:



    • Marked as answer by BrekelMVP Saturday, June 17, 2017 6:14 PM
    Saturday, June 17, 2017 4:48 PM
  • Thanks for the follow-up, Brekel. I downloaded the code in the repo and tried to compile the NuiSensor project (I'm writing C++) but had some issues with the build tools: I'm using VS2013, and the project seems to require 2015 or even 2017. At any rate, can you confirm that the NuiSensor project will allow me to control the exposure of a Kinect 2 device running on Windows 8? Most of the repo is C#, so I want to make sure there's no miscommunication before I get my hopes up. Thanks.
    • Edited by MikeS27 Thursday, June 29, 2017 8:22 AM
    Thursday, June 29, 2017 8:21 AM
  • I've been happily using the techniques it shows in my own C++ code with VS2015 on Win10.

    I don't see why Win8 wouldn't work; no idea about VS2013.


    Friday, June 30, 2017 7:22 AM