The KinectRegion as currently implemented is broken and unusable

  • Question

  • I am a developer who has spent the last year, since the first batch of sensors was released, working primarily with both the Kinect V1 and the Kinect V2. By and large, the Kinect 2.0 SDK is absolutely fantastic: simple and usable.

    The concept of the KinectRegion is great in that it handles:

    1. The engagement model
    2. The push/click semantics 
    3. The cursor pipeline (look, feel and behavior with controls)

    However, the KinectRegion as it stands is extremely unintuitive for most non-technical users to operate. In our experience, more than 80% of users were unable to engage with the on-screen controls without significant prompting. This occurred because:

    • The acceleration of the pointer is far too sensitive for most users
    • In practice it's preferable to have a smaller usable area than an overly sensitive cursor movement
    • The hand press gives too many false positives
    • The action required to invoke a button press (i.e. pushing forwards) is not apparent to many users.
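    To make the first two points concrete, here is a rough sketch (plain Python, purely illustrative, nothing to do with the actual SDK internals) of the kind of mapping we'd like to be able to configure ourselves: a small physical interaction zone spanning the whole screen, plus simple exponential smoothing instead of an acceleration curve. All names and numbers here are made up for the example:

```python
# Illustrative sketch (NOT the Kinect SDK API): map a hand position from a
# deliberately small physical interaction zone onto the full screen, with
# exponential smoothing instead of velocity-based acceleration.

class CursorMapper:
    def __init__(self, zone_width=0.3, zone_height=0.25, smoothing=0.5,
                 screen_w=1920, screen_h=1080):
        # zone_* : size (metres) of the physical box that spans the whole screen.
        # A smaller zone means less arm travel, which in our experience is more
        # forgiving than an aggressive acceleration curve.
        self.zone_w, self.zone_h = zone_width, zone_height
        self.alpha = smoothing            # 0 = frozen, 1 = raw (jittery)
        self.screen_w, self.screen_h = screen_w, screen_h
        self._x = self._y = None

    def update(self, hand_x, hand_y, center_x, center_y):
        # Normalise the hand position relative to a zone centred on
        # (center_x, center_y), e.g. the user's shoulder, then clamp to [0, 1].
        nx = min(max((hand_x - center_x) / self.zone_w + 0.5, 0.0), 1.0)
        ny = min(max((center_y - hand_y) / self.zone_h + 0.5, 0.0), 1.0)
        sx, sy = nx * self.screen_w, ny * self.screen_h
        if self._x is None:               # first sample: no history to smooth
            self._x, self._y = sx, sy
        else:                             # exponential smoothing
            self._x += self.alpha * (sx - self._x)
            self._y += self.alpha * (sy - self._y)
        return self._x, self._y
```

    Being able to set the zone size and smoothing factor is exactly the kind of knob the KinectRegion keeps locked away.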

    So our first thought was: let's see how we can change the smoothing/acceleration parameters of the KinectRegion, and maybe add click-on-hover. Searching through the KinectCoreWindow and KinectRegion code, it quickly becomes evident that this is not possible. OK, maybe looking through the source will help? The InputPointerManager property of the KinectRegion sounds promising; look, you even provide it as a public class! Oh, wait, I can't set it publicly... And so on it goes.

    In the end you find it's impossible to modify either the cursor acceleration rate or the clicking mechanism. Since just about every user we have tested can't use the KinectRegion as it functions by default (and clearly I am not the only one having this problem), it basically leaves us all in the position of having to re-invent what is actually an extremely complicated wheel that's right in front of us. As you can imagine, this is extremely frustrating!
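    For reference, the click-on-hover model mentioned above is not complicated to express; a minimal dwell-click sketch (illustrative Python, all names and thresholds made up) looks something like this:

```python
# Illustrative dwell-click ("click on hover") sketch, NOT SDK code: fire a
# click when the cursor stays within `radius` pixels of where the dwell began
# for `dwell_ms` milliseconds; any larger movement restarts the timer.

class DwellClicker:
    def __init__(self, dwell_ms=800, radius=40):
        self.dwell_ms = dwell_ms
        self.radius = radius
        self._anchor = None   # (x, y) where the current dwell started
        self._start = None    # timestamp when the current dwell started

    def update(self, x, y, t_ms):
        """Feed cursor samples each frame; returns True once when a dwell completes."""
        if self._anchor is None:
            self._anchor, self._start = (x, y), t_ms
            return False
        ax, ay = self._anchor
        if (x - ax) ** 2 + (y - ay) ** 2 > self.radius ** 2:
            self._anchor, self._start = (x, y), t_ms   # moved: restart dwell
            return False
        if t_ms - self._start >= self.dwell_ms:
            self._start = t_ms + 10 ** 9               # suppress repeat clicks
            return True
        return False
```

    Something this simple, exposed as a pluggable press model, would have saved us weeks.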

    So my question, which I am hoping a Kinect SDK team member can answer, is: what are the future plans for the KinectRegion? Are you opening up the internals to tweaking of cursor parameters and click semantics in future releases? Or should I continue ahead with my attempt at re-writing this functionality, based on the KinectRegion, in my own code?

    PS: I just noticed that in the question linked above one MSFT employee argues that "Hover is not a good interaction model and a lot of user research was put into the new model". I wonder who it was that you tested, because our experience completely disagrees with that statement. Go put your mother in front of a KinectRegion and see how she goes... I sincerely hope some other people post their experiences.

    • Edited by Maxim_G Monday, June 8, 2015 12:10 PM
    Sunday, June 7, 2015 6:57 PM

All replies

  • Similar experience here. Maybe I would not say it is "broken and unusable", but I think tuning the KinectRegion parameters is a feature that many developers need.

    Even if I tried re-building the interaction model, I wouldn't be sure where to start. The "advanced topics" chapter in the jumpstart is supposed to cover the topic, but no sample is provided.

    Monday, June 8, 2015 8:36 AM
  • Fair enough, perhaps "broken and unusable" is a bit of an exaggeration, but if over 50% of people can't use it, you have a serious problem.
    Monday, June 8, 2015 12:00 PM
  • Anyone alive out there? The silence is deafening! Surely this post at least deserves a comprehensive answer on Microsoft's position...

    • Edited by Maxim_G Monday, June 22, 2015 9:36 AM
    Monday, June 22, 2015 9:31 AM
  • Microsoft staff sometimes, but not always, answer posts in this forum. I would say in general that their attention is very irregular (although I actually got some useful answers from them).

    Judging from the support you have received here from other people, I would think that not many developers are actually interested in the KinectRegion component, which is really surprising to me; but the fact is that in a couple of weeks nobody else has written here, and only one user apart from me has voted for your question.

    Sometimes I even wonder where the Kinect developers are, because this forum has relatively low activity and a lot of posts are from people just starting with the SDK and asking basic support questions. Are so few people working with Kinect? Do they get some other kind of (maybe paid) support? Do they have no problems or doubts?

    I would also like somebody to answer this post. At least we could figure out how to build our own interaction...

    Monday, June 22, 2015 2:50 PM
  • Hey mate,

    I've been working on this problem while waiting on a response from MSFT. I've created a simple solution that demonstrates how to create a Canvas-based mouse-overlay control for WPF that functions similarly to the KinectRegion but uses a click-on-hover interaction model.

    Right now it only supports buttons and isn't a complete project (it's missing a little bit of basic plumbing that we don't want to release), but I will try to get a complete version (one that doesn't use any of our proprietary stuff) checked into GitHub for everyone at some point soon.

    For the time being it should give you a head start on implementing something yourself. It shows how to track the hands, create cursors, behave like a mouse (i.e. visual state management) and do all the HitTest work required...
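    The hit-test part, for example, boils down to a point-in-rectangle scan over the controls, topmost first (the WPF version uses VisualTreeHelper.HitTest; this is just the same idea as an illustrative Python sketch, with all names made up):

```python
# Illustrative sketch of the overlay's hit-testing step: given the cursor
# position, find the topmost control whose bounds contain it.

class Control:
    def __init__(self, name, x, y, w, h):
        self.name, self.x, self.y, self.w, self.h = name, x, y, w, h

    def contains(self, px, py):
        # Half-open bounds, so adjacent controls don't both claim an edge pixel.
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

def hit_test(controls, px, py):
    # Later entries in the list are drawn on top, so scan in reverse.
    for c in reversed(controls):
        if c.contains(px, py):
            return c
    return None
```

    Combine that with a dwell timer per control and you have most of the interaction loop.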

    PS: It's nowhere near as good or comprehensive as the KinectRegion (could be) and I am still hoping that MSFT can make the KinectRegion work (or at least open source it).

    PPS: Please excuse any poor code :-)

    • Edited by Maxim_G Wednesday, June 24, 2015 7:22 AM
    Tuesday, June 23, 2015 5:12 PM
  • I just finished watching that video. I'd love that source code too, and it's definitely not in the SDK browser.
    Tuesday, June 23, 2015 7:13 PM
  • Thank you very much for sharing, I'm sure your code will be useful for people trying to develop similar components even if it is not complete. I'll have a look at it as soon as I can find some time.

    Wednesday, June 24, 2015 8:53 AM
  • Hi

    I feel your pain. I also needed to tweak the way KinectRegion works, but since it's completely closed, I left it as it is.

    I understand Microsoft's point of view in trying to keep the interaction model closed, and it's something we as developers should also take into account: a homogeneous experience across applications.

    If each of us develops our own interaction model, it might work for our own subset of users, but whenever one of our users jumps to an application built by someone else, with a slightly different interaction model, the user might feel the application is broken, even if the interaction model of the new app is actually better.

    So I understand Microsoft in keeping the interaction model closed, so the experience is homogeneous across applications.

    On the other hand, it's obvious that there's a lot of room for improvement. My personal priorities are:

    - More "press" models, including the "hold to press" one

    - Two hand support per user (a la multitouch)

    - More control over the PHIZ

    - Better automatic switch between left and right hands

    - Better detection of "intent" to avoid involuntary presses by discarding hands in resting positions.

    The last point is particularly troublesome: many of our users use our application while sitting; they typically rest their hands on their knees or the arms of the chair. Somehow the hands fall within the PHIZ and you can see the hand cursors jumping randomly, so it seems the current SDK is not able to detect that the user's intent is to rest his hands, not to interact with the screen.
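    One possible heuristic for the resting-hands case (an illustrative Python sketch, not SDK behaviour; the joint inputs and margin are made up for the example) would be to discard a hand that hangs below the elbow and sits near hip height, which is typical of hands resting on knees or armrests:

```python
# Illustrative heuristic, NOT SDK behaviour: reject a hand as an interaction
# candidate when the forearm points downwards and the hand sits near hip
# height, the typical posture of a hand resting on a knee or armrest.

def hand_is_resting(hand_y, elbow_y, hip_y, hip_margin=0.15):
    # Inputs are vertical joint positions in camera space (metres, Y up),
    # as Kinect body joints are reported.
    below_elbow = hand_y < elbow_y              # forearm pointing down
    near_hip = abs(hand_y - hip_y) < hip_margin # hand close to hip height
    return below_elbow and near_hip
```

    It's crude, but gating the cursor on a check like this already removes most of the random jumping we see from seated users.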

    It's been quite a while since the SDK 2.0 was released, and there has been no word of updates in the near future, so one might think the SDK is going to stay as it is. At this point I am, like others here, beginning to consider rolling my own KinectRegion.

    Vicente Penades

    Wednesday, October 14, 2015 4:11 PM