PointerType zaniness

  • Question

  • We're trying to get our app to work gracefully on "all in one" computers, which accept both touch and mouse input, and have discovered some inconsistent behavior with the event.pointerType member. During MSPointerMove events, pointerType is undefined on touch-only devices (the Surface). The MS_POINTER_TYPE_* constants are also undefined.

    Here's where it gets weird. On a Windows 8 desktop with a mouse, pointerType is correctly set to 4 (MS_POINTER_TYPE_MOUSE) when you move the mouse without a button pressed, but when you press the mouse button the event looks exactly like a touch event! That is, pointerType becomes undefined.

    Not owning an All In One computer yet, I don't know what the behavior is on a device that supports both types of input, but this seems like an impediment to building apps that respond gracefully to different types of pointers. I don't necessarily want my application to respond the same way to touch as to a mouse. For instance, we utilize crosshairs in our application, and on touch devices we locate the crosshairs a few pixels up and to the left so that they are out of the way of your finger. On a device with a mouse, however, the crosshairs need to be at the exact mouse location. Without the ability to detect the pointer type, our users will receive a suboptimal experience.

    I would be curious to hear if there's a workaround, and I certainly think that Microsoft should address this API discrepancy in a patch.

    Wednesday, January 30, 2013 10:46 PM

All replies

  • Well, I did manage a workaround by using the MSPointerDown event, which *does* consistently return the pointerType. I save the pointerType and mouse-down state so that when MSPointerMove is called I check that either the saved state is "mouse pointer", or pointerType == 4 (the mouse pointer type, but without the mouse button pressed). Note that it is critical to check pointerType == 4, because the way this bug manifests is that the static constant MS_POINTER_TYPE_MOUSE is also not set. So e.pointerType == e.MS_POINTER_TYPE_MOUSE translates to undefined == undefined, which always evaluates to true!
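    The workaround described above might be sketched as follows. This is purely illustrative (the handler and variable names are not from the actual app); the constant 4 is hard-coded precisely because, per the bug, e.MS_POINTER_TYPE_MOUSE can itself be undefined.

```javascript
var MSPOINTER_TYPE_MOUSE = 4; // hard-coded: e.MS_POINTER_TYPE_MOUSE may be undefined

var savedPointerType = null;  // captured on MSPointerDown, where pointerType is reliable
var mouseIsDown = false;

function onPointerDown(e) {
  savedPointerType = e.pointerType;
  mouseIsDown = (e.pointerType === MSPOINTER_TYPE_MOUSE);
}

function onPointerUp(e) {
  mouseIsDown = false;
}

// During MSPointerMove: treat the move as a mouse move if either the saved
// state says "mouse pointer with button down", or the event still correctly
// reports pointerType 4 (mouse moving with no button pressed).
function isMouseMove(e) {
  return (mouseIsDown && savedPointerType === MSPOINTER_TYPE_MOUSE) ||
         e.pointerType === MSPOINTER_TYPE_MOUSE;
}
```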

    Meanwhile, however, I've noticed another frustrating problem. There is a significant amount of latency in the pointer events. For instance, when I click the mouse and then begin moving (such as to draw a line), the MSPointerDown event doesn't get triggered immediately. It fires a fraction of a second late, and by then the pageX/pageY location has changed because the user is moving the mouse.

    Pointer event latency has always been noticeably bad on the Surface, but I assumed it was a hardware issue. Now that this is occurring on a desktop, it must indicate a problem at the software level. So firstly, there's latency; that's a problem. But secondly, the MSPointerDown event should offer a way to get the original coordinates of the event and not the current coordinates of the finger. In my application I only check the first MSPointerDown event, ignoring all subsequent events, so this is definitely a latency issue and not merely a matter of catching a later MSPointerDown event from a different location.

    I should note that there is no such latency issue when using plain old MouseDown. The problem for me is that I'm trying to support "All In One" computers that must be accessible from both touch and mouse. It is quite frustrating that the pointer interface doesn't provide a ready solution, since this was supposed to be one of its benefits. At this point I will need to find a way to parse plain old mouse events and *ignore* mouse-based pointer events.

    I would certainly like to hear from MS regarding these issues, since we would like to provide our Windows users with an experience comparable to our Apple/Android users.

    Thursday, January 31, 2013 11:41 PM
  • Hi Terry,

    Thanks for your post.

    First, I would like to suggest that you read the post at the following link and check whether it helps. The post explains how Web developers can use the new IE10 pointer event model along with the iOS touch event model and the W3C mouse event model (as extended) to create a cross-browser, common-code handler for pointer, touch, and mouse input.

    http://blogs.msdn.com/b/ie/archive/2011/10/19/handling-multi-touch-and-mouse-input-in-all-browsers.aspx
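    In the spirit of that post, the feature-detection step might look like the sketch below. navigator.msPointerEnabled is the IE10 flag for pointer support; the function name and the idea of returning event-name lists are illustrative, not from the linked article.

```javascript
// Given a window-like object, pick which family of input events to register.
function pickEventModel(win) {
  if (win.navigator && win.navigator.msPointerEnabled) {
    // IE10: one pointer model covers mouse, pen, and touch
    return ["MSPointerDown", "MSPointerMove", "MSPointerUp"];
  }
  if ("ontouchstart" in win) {
    // iOS-style touch events
    return ["touchstart", "touchmove", "touchend"];
  }
  // W3C mouse events everywhere else
  return ["mousedown", "mousemove", "mouseup"];
}
```

    A handler registered for whichever list comes back can then normalize the events into one internal representation.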

    Additionally, in order to better help you with your question, I'm trying to involve some senior engineers in the issue, which will take some time. Your patience will be greatly appreciated.

    Best regards,


    Yanping Wang
    MSDN Community Support | Feedback to us
    Develop and promote your apps in Windows Store
    Please remember to mark the replies as answers if they help and unmark them if they provide no help.

    Monday, February 4, 2013 7:26 AM
    Moderator
  • Thanks for looking into this.

    As a relevant aside, I'm familiar with that post. I actually ended up taking the reverse approach, creating a layer that adapts the Windows pointers to the iOS-style touch events by keeping my own internal changedTouches array. While elegant, the pointer approach has a significant weakness in that the individual pointers in a multi-touch gesture are not aware of each other. Under iOS touches, at any point I can get an accurate read of the total state of a multi-touch gesture by looking at both touches[] and changedTouches[]. It is trivial to derive MS pointers from touches, but not so trivial to go the opposite direction.
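    A minimal sketch of such an adapter layer, assuming each pointer is tracked by its pointerId (all names here are illustrative, not the poster's actual code):

```javascript
// Accumulates IE10 pointer events into an iOS-style snapshot: the full set
// of active touches plus the touch that changed on this event.
function TouchAdapter() {
  this.active = {}; // pointerId -> { pageX, pageY }
}

TouchAdapter.prototype.handle = function (type, e) {
  if (type === "down" || type === "move") {
    this.active[e.pointerId] = { pageX: e.pageX, pageY: e.pageY };
  } else if (type === "up") {
    delete this.active[e.pointerId];
  }
  var touches = [];
  for (var id in this.active) touches.push(this.active[id]);
  return {
    touches: touches, // total multi-touch state, as touches[] gives on iOS
    changedTouches: [{ pageX: e.pageX, pageY: e.pageY }]
  };
};
```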

    For instance, one issue the pointer interface gave me trouble with was detecting the difference between a pinch and a two-finger pan. With iOS touches I can at any point decide whether the two fingers are moving in the same or opposite directions. With pointers, however, I only get one event at a time, so from one state to the next it appears to the code that one or the other finger is standing still. The result is an ambiguous state: whether pinching or panning, all the code sees is one finger having moved while the other stands still, and then the same situation reversed, so there is no programmatic way to determine whether the fingers are moving in the same direction.
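    Once both fingers' previous and current positions are tracked, the distinction reduces to the sign of the dot product of the two movement vectors: same direction (pan) gives a positive product, opposing directions (pinch) a negative one. A purely illustrative sketch:

```javascript
// Classify a two-finger gesture from each finger's previous and current
// positions ({x, y} objects).
function classifyTwoFingerGesture(prev1, cur1, prev2, cur2) {
  var dx1 = cur1.x - prev1.x, dy1 = cur1.y - prev1.y;
  var dx2 = cur2.x - prev2.x, dy2 = cur2.y - prev2.y;
  var dot = dx1 * dx2 + dy1 * dy2;
  if (dot > 0) return "pan";   // fingers moving the same way
  if (dot < 0) return "pinch"; // fingers moving apart or together
  return "ambiguous";          // one finger still -- the state pointers get stuck in
}
```

    The "ambiguous" branch is exactly the state the post describes: with one-pointer-at-a-time events, every update looks like one finger moving while the other stands still.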

    Granted, this is not something most programs have to account for, but for our app it is a core function. I was only able to work around it with some fancy state tracking, timers, and such. I should note that in a native context it appears one can traverse up the window tree and gain access to the state of all the touches, but this is not exposed in the web context. While this type of gesture is currently uncommon, I would expect that as touch devices evolve we will see more complex gestures that require such coordination between individual touches. Exposing the complete multi-touch state through the pointer API would address this issue.

    Monday, February 4, 2013 7:51 PM
  • I found a workaround to the issue. I now detect *both* pointer and mouse events. However, I only process mouse events if the button is not pressed. That is, I keep a global flag that toggles on mouse-up and mouse-down events. If the flag is not set (mouse up), I process the mouse move; otherwise I discard the mouse move events, since the pointer interface handles the moves in that state.
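    That gating logic might be sketched like this (names illustrative):

```javascript
var buttonDown = false; // global flag toggled by the mouse button events

function onMouseDown() { buttonDown = true; }
function onMouseUp()   { buttonDown = false; }

// mousemove handlers consult this: process the move only while the button
// is up; while it is down, MSPointerMove carries the moves instead.
function shouldProcessMouseMove() {
  return !buttonDown;
}
```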

    It works, but being able to detect the pointer type would be better, so I would encourage the engineers to address that in a patch release.

    However, while working with pointers on a desktop, I have found that the latencies involved make them essentially unusable with a mouse. MSPointerDown events from mouse clicks are not registering the x/y position that was clicked. They seem to fire a few milliseconds later than the mouse click, by which time a typical mouse user will have begun moving the mouse, so the x/y position that is registered is actually along the vector of mouse movement. I've already noted the pointer event latencies we've experienced on the Surface, and this development indicates that the latencies are in the software layer. I would consider that to be good news.

    For our own app this means back to the drawing board again. There are no latencies associated with mousedown and mouseup, so we'll need to find a way to use those events rather than pointerdown and pointerup. Again, having access to the pointerType during all pointer events would give the flexibility needed to overcome even this issue.

    Monday, February 18, 2013 7:27 PM
  • So finally, to get around the latency issue, I've reversed the strategy. I keep a global flag to determine whether we're in "mouse mode" or "touch mode". I use the pointerType from the MSPointerDown event to determine whether the pointer is a mouse or touch pointer. That event is reliable.

    Then I capture all events, both mouse and pointer events. I discard any pointer events if "mouse mode" and I discard any mouse events if "touch mode".

    Finally, to tie it all together, if there's a mousemove event and no current touches, I automatically switch to mouse mode so as to capture mouse movements without the button pressed.

    This algorithm seems to give the best of both worlds without the latencies affecting the mouse.
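    Putting the three rules together, the whole dispatcher might be sketched as below. All names are illustrative, and MSPOINTER_TYPE_MOUSE is hard-coded to 4 as discussed earlier in the thread.

```javascript
var MSPOINTER_TYPE_MOUSE = 4;

var mode = "mouse";    // "mouse mode" or "touch mode"
var activeTouches = 0; // count of fingers currently down

// Rule 1: the down event's pointerType is reliable, so set the mode there.
function onPointerDown(e) {
  if (e.pointerType === MSPOINTER_TYPE_MOUSE) {
    mode = "mouse";
  } else {
    mode = "touch";
    activeTouches++;
  }
}

function onPointerUp(e) {
  if (e.pointerType !== MSPOINTER_TYPE_MOUSE && activeTouches > 0) activeTouches--;
}

// Rule 3: a mousemove with no fingers down flips back to mouse mode, so
// button-up mouse movement is still captured.
function onMouseMove() {
  if (activeTouches === 0) mode = "mouse";
}

// Rule 2: discard pointer events in mouse mode and mouse events in touch mode.
function acceptPointerEvent() { return mode === "touch"; }
function acceptMouseEvent()   { return mode === "mouse"; }
```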

    Monday, February 18, 2013 11:12 PM