There seems to be a very peculiar behaviour related to a global keyboard hook and the Windows 8.1 "Project to a second screen" panel. I have included the source code (VS 2013, .NET 2.0) and binaries (Debug/Release folders) in the download link. I have also included a README folder containing the original question, which I copy below the link.
So here is my question/problem (find the same contents in README/ReadMe.txt):
We run a global keyboard hook, process the messages, and send out the final output with SendInput. This is a replication of a problem that exists in a more complex codebase; in the linear context presented by the included source code it could be avoided by not using SendInput.
The reason the output is always sent out with SendInput is that in the real-world scenario we process all messages in different threads, and the result is very often a different character (or several of them) than the physical key/scancode that was actually pressed. The number of input keys is relatively large and many of them relate to each other, so any linear processing is not an option.
The goal is to find out exactly why this problem occurs and what we can do to either improve our codebase, help Microsoft identify a potential bug, or learn about a potential anti-injection design of certain Windows 8.1 elements that we are not yet aware of.
Repeat the steps below twice: once normally, and once with the included global hook console running (MSKeyboardHookProblemExample\bin\Release\MSKeyboardHookProblemExample.exe):
1. Open start
2. Type "project to a second screen"
3. Select first option
4. "Project to a second screen" panel opens up (see ProjectToScreenPanel.jpg in README folder - root project folder)
5. With the keyboard (Up/Down arrows) select any option, then press and release the Enter key
What we expect:
The selection is confirmed and the changes are applied.
What happens (with global hook running):
"Enter" KeyDown is received, processed, and successfully injected with SendInput; the hook returns (IntPtr)1 to "suppress" the physical event. The injected KeyDown is then received by the hook, which returns the pointer returned by CallNextHookEx.
"Enter" KeyUp is received, processed, and successfully injected, much the same as the KeyDown was. However, NO injected KeyUp is ever received by the hook, even though we expected to receive it and forward it to the next hook/window.
The selected option stays "popped in", as it does when you hold down Enter without releasing it. No changes get applied, which seems logical: all the information available to the panel points in the direction that the Enter key is still pressed, because the injected KeyUp was never received, not even by us, the issuer.
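For reference, the suppress/forward decision described above can be sketched as follows. This is portable C++ rather than the C# of the sample, and the function and enum names are hypothetical; LLKHF_INJECTED is the real Win32 flag set on events generated via SendInput, which is how the hook tells its own injected events apart from physical ones:

```cpp
#include <cassert>
#include <cstdint>

// Win32 flag set in KBDLLHOOKSTRUCT.flags for events generated by SendInput.
constexpr uint32_t LLKHF_INJECTED = 0x00000010;

enum class HookAction { Suppress, Forward };

// Decision logic the hook callback applies to each keyboard event, as
// described above: suppress physical events (they are re-injected after
// processing) and forward events we injected ourselves.
HookAction HandleKeyEvent(uint32_t flags) {
    if (flags & LLKHF_INJECTED) {
        // Our own SendInput output coming back around: pass it on
        // (i.e., return CallNextHookEx in the real callback).
        return HookAction::Forward;
    }
    // Physical key: return (IntPtr)1 in the real callback to suppress it,
    // then hand it to the processing pipeline for later injection.
    return HookAction::Suppress;
}
```

The symptom above is that for the Enter KeyUp, the injected event never reaches the hook at all, so the Forward branch is never taken.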
The included source code is in no way production quality, and there is no guarantee that it will even work. Use it as you wish, but I would advise caution, as it is the result of me copying a bunch of things together, which often results in various errors. The console outputs the scancode of the pressed key along with some additional information; the scancode for Enter is 28.
Our real-world application has the uiAccess flag set to true, all files are digitally signed, and it is installed in a trusted directory (Program Files). uiAccess permissions are therefore granted to us, and we have the ability to stay on top of the "Modern UI", much like Microsoft's On-Screen Keyboard, which does work as expected in the given case (although it has no need to "suppress" real keyboard events).
This is truly the only hook-related problem we are currently aware of, and we would very much like to solve it. As you can imagine, many potential solutions have already been explored. Thank you all.
- Edited by 0001111111000000 Wednesday, April 02, 2014 6:09 AM extra empty lines removed
This doesn't have anything to do with Accessibility or UI Automation. You may get better responses in a more appropriate forum, though this likely will require deep enough investigation that you'll need to open a support case if you need help from Microsoft on this.
That said, since you did post in the UI Automation forum I'll recommend you remove the hook / SendInput "automation" and use UI Automation directly. If you know semantically what is being automated then this will likely be more efficient and effective. The only case I can think of where I'd generate keyboard input is if you're reflecting what somebody is typing and don't know what is actually being controlled.
I don't think it's related to the problem you're seeing, but you have another major problem with your hook: it isn't possible to properly handle global hooks from managed code. You cannot inject a high-level hook, and you cannot guarantee that a low-level hook will run in a timely manner. A low-level hook which doesn't respond quickly enough will be removed without notice. If this is important, you'll need to rewrite the hook code natively.
Thank you for your advice. What forum or other form of contact with the appropriate people would you recommend? Our application is, among other things, an accessibility application, hence the uiAccess flag in our manifest. But I do agree that it may have been a mistake to publish this specific question under this topic, because the application is not a UI Automation app per se (although it does support custom shortcuts that can be used for automation).
The application does act as keyboard middleware; in fact, every key finally output on the system is output by the app, so in most cases you could say we are reflecting exactly what was typed. We are aware of the potential hook timeout problem, and we solved it with threading (producer/consumer) and returning "immediately", which measured on my system takes 13.9 µs (0.0139 ms, arithmetic mean). If you don't respond within X amount of time (X = 5000 ms on my Windows 8.1 system for the keyboard hook), the system does unhook you, as you pointed out.
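The immediate-return scheme could be sketched as a minimal producer/consumer queue. This sketch is portable C++ rather than our .NET code, the names are illustrative, and the actual message processing and SendInput call are omitted:

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

// A minimal producer/consumer queue: the hook callback enqueues the event
// and returns immediately; a worker thread does the slow processing and
// eventually calls SendInput (omitted here).
class KeyEventQueue {
public:
    // Called from the hook callback; cheap and non-blocking apart from
    // the brief lock, so the callback returns well within the timeout.
    void Push(int scancode) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(scancode);
        }
        cv_.notify_one();
    }

    // Called from the worker thread; blocks until an event is available.
    int Pop() {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return !queue_.empty(); });
        int sc = queue_.front();
        queue_.pop();
        return sc;
    }

private:
    std::queue<int> queue_;
    std::mutex mutex_;
    std::condition_variable cv_;
};
```

In the real hook callback, Push would be essentially the only work done before returning, which keeps the callback far below the system's unhook timeout.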
Thank you again for your effort.