Make clickable links with the push gesture through Kinect Hand Cursor

  • Question

  • Hi,

    I'm trying to use the hand cursor provided by the ControlsBasicsWPF example to navigate within my application, which has an integrated browser (CefSharp). My problem is that when the cursor goes over a link, the push gesture should be recognized so navigation can continue. Is there a way to detect objects other than the buttons (KinectTileButton or KinectCircleButton) the cursor moves over?
    Thanks so much

    Thursday, September 7, 2017 11:27 AM

All replies

  • Links are part of the content the integrated browser renders inside its container, so they are in a different scope than Kinect-interactable elements. There's no standard way.

    The problem is that Chromium, and perhaps most software so far, is firmly based on mouse events. Had it offered some way to override input sources, this would have been easier.

    A dead-simple, but not elegant, idea is to extract all links from CefSharp on page load and provide a collapsible/popup list where they are shown as pushable WPF buttons. This is doable.
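
    Something like this, for illustration (an untested sketch; linkPanel is a placeholder for whatever WPF panel hosts the buttons, and it assumes CefSharp's ChromiumWebBrowser plus the toolkit's KinectTileButton):

    browser.FrameLoadEnd += async (s, e) =>
    {
        if (!e.Frame.IsMain) return;
        // Grab every link's text and target from the loaded page.
        var response = await e.Frame.EvaluateScriptAsync(
            "Array.prototype.map.call(document.links, function (a) { return { text: a.innerText, href: a.href }; })");
        if (!response.Success) return;
        // FrameLoadEnd fires off the UI thread, so marshal back to it.
        Application.Current.Dispatcher.Invoke(() =>
        {
            linkPanel.Children.Clear();
            foreach (dynamic link in (IEnumerable<object>)response.Result)
            {
                var button = new KinectTileButton { Label = link.text };
                string href = link.href;
                button.Click += (bs, be) => browser.Load(href);
                linkPanel.Children.Add(button);
            }
        });
    };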

    A more elaborate, but untested, way is to define the integrated browser container as an area you can click anywhere in; whenever you do a push gesture, figure out the screen coordinates and forward them to the browser.

    From what I see, you can run JavaScript code in the integrated browser, so you can probably (perhaps Chromium doesn't support this) do something like:

    document.elementFromPoint(x,y).click();

    Also, since the JS runs inside the browser's container, you'll want to convert the Kinect cursor coordinates to coordinates relative to that container.
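
    For example (untested; handPositionInWindow and mainWindow are placeholder names, with the position coming from your Kinect interaction handler; note that elementFromPoint expects CSS pixels, so DPI scaling or page zoom may need extra handling):

    // On a detected push gesture: convert the hand cursor's window
    // position into the browser control's coordinate space, then
    // forward a click to whatever element sits under that point.
    Point p = mainWindow.TranslatePoint(handPositionInWindow, browser);
    browser.ExecuteScriptAsync(
        $"var el = document.elementFromPoint({(int)p.X}, {(int)p.Y}); if (el) el.click();");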

    Perhaps these forced clicks are enough, but as an additional step you might want to look into rectangle overlap.

    The Kinect cursor is large, so it might overlap two links. If you use the cursor's center as the position to forward to the browser, it might be difficult to actually hit a link. So you could instead pass a rectangle at the cursor's location (again, converted to browser-relative coordinates) and ask the DOM which elements are inside that rectangle (there is a function for this, but again you'd have to check compatibility with Chromium). Then check which one is closer to the rectangle's center and click that.
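
    Sketched out, it could look like this (untested; __clickBestLink is just an illustrative name, and cursorRect stands for the cursor rectangle already converted to browser-relative coordinates):

    const string ClickBestLinkJs = @"
        function __clickBestLink(rx, ry, rw, rh) {
            var cx = rx + rw / 2, cy = ry + rh / 2;
            var best = null, bestDist = Infinity;
            Array.prototype.forEach.call(document.links, function (a) {
                var b = a.getBoundingClientRect();
                // Skip links whose box does not overlap the cursor rectangle.
                if (b.right < rx || b.left > rx + rw ||
                    b.bottom < ry || b.top > ry + rh) return;
                var dx = (b.left + b.width / 2) - cx;
                var dy = (b.top + b.height / 2) - cy;
                var d = dx * dx + dy * dy;
                if (d < bestDist) { bestDist = d; best = a; }
            });
            if (best) best.click();
        }";

    // Inject once (e.g. in FrameLoadEnd), then call it on each push
    // gesture with the cursor rectangle in browser-relative coordinates.
    browser.ExecuteScriptAsync(ClickBestLinkJs);
    browser.ExecuteScriptAsync(
        $"__clickBestLink({(int)cursorRect.X}, {(int)cursorRect.Y}, {(int)cursorRect.Width}, {(int)cursorRect.Height});");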

    PS: You have double-posted. Check if you can delete the duplicate thread.


    Thursday, September 7, 2017 2:02 PM
  • Hi Nikolaos,
    Thanks for the answer. The first proposal came to my mind too, but it's not what I want; the last one seems a good idea, though more complicated.
    Do you think it's possible to update the mouse pointer's position to match the hand cursor's position? That way, when the push gesture is made, you could trigger the mouse click event?

    PS: Thanks for the report, I removed the duplicate thread.
    Thursday, September 7, 2017 2:27 PM
  • Last time I did it, I did everything using C++ interop with user32.dll. But it's very easy to mess things up; it requires a lot of marshaling, several functions, flags, etc.

    There is another way, but again I haven't tested it. The system cursor moves in screen coordinates, while the hand cursor moves in window coordinates. You move the hand cursor through Kinect inside the window, so you can get its window position easily. Then you can convert that to screen coordinates. This conversion should also take your monitor setup into account (how many monitors, extended or duplicated desktop, screen orientation, etc.), or at least most of it. Then all you have to do is something like this, or this. I've only used the second, so that one works.
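
    In C#, the classic route is P/Invoke on user32.dll; a minimal sketch using SetCursorPos plus mouse_event (SendInput is the more modern replacement):

    using System;
    using System.Runtime.InteropServices;
    using System.Windows;

    static class MouseInjector
    {
        [DllImport("user32.dll")]
        static extern bool SetCursorPos(int x, int y);

        [DllImport("user32.dll")]
        static extern void mouse_event(uint dwFlags, uint dx, uint dy,
            uint dwData, UIntPtr dwExtraInfo);

        const uint MOUSEEVENTF_LEFTDOWN = 0x0002;
        const uint MOUSEEVENTF_LEFTUP = 0x0004;

        // windowPos: the hand cursor's position in window coordinates.
        public static void ClickAt(Window window, Point windowPos)
        {
            // PointToScreen converts to physical screen pixels and
            // accounts for DPI and the monitor layout.
            Point screen = window.PointToScreen(windowPos);
            SetCursorPos((int)screen.X, (int)screen.Y);
            mouse_event(MOUSEEVENTF_LEFTDOWN, 0, 0, 0, UIntPtr.Zero);
            mouse_event(MOUSEEVENTF_LEFTUP, 0, 0, 0, UIntPtr.Zero);
        }
    }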

    Imho, it might be a good idea to check the JavaScript approach above as well. Chances are you'll need something else to interact with Kinect from inside the page, so trying out a proof of concept might help later; that is, if you can afford the time cost right now. Another reason is that I eventually removed all of this from my application: I found a way to integrate Kinect with the Unity UI without needing a mouse, and discovered there's a lot of stuff the engine/framework takes care of for you that is far better not to handle yourself. Especially since we needed to support multiple screen setups, any screen orientation, etc., which is no small feat.


    Thursday, September 7, 2017 7:32 PM