Using Kinect with the MultiPoint Mouse SDK

  • General discussion

  • I would like to use each person/user recognized by the Kinect as a separate mouse for the MultiPoint Mouse SDK. Is there an easy way to program this or not? I'm using it as an experiment in a classroom setting, but I want to make sure an application works first. I want to modify the sample code for the test/quiz sample included in the MultiPoint SDK to test this.

    Thursday, June 16, 2011 7:24 PM

All replies

  • The easiest way to do this is to use P/Invoke to call the Win32 API function in user32.dll that sets the cursor position, similar to:

    [DllImport("user32.dll")]
    static extern bool SetCursorPos(int X, int Y);
    

    You can then pass it the X/Y coordinates of the hands (which can be found in the skeletal tracking API), ensuring they are scaled proportionally to the screen.
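    For reference, here is a minimal sketch of that proportional mapping. This is a hedged example: the -1..1 skeleton-space range and the `CursorMover` helper name are assumptions, not part of the SDK; check the values your sensor actually reports.

```csharp
using System;
using System.Runtime.InteropServices;

static class CursorMover
{
    [DllImport("user32.dll")]
    static extern bool SetCursorPos(int X, int Y);

    // handX/handY: skeleton-space hand coordinates (assumed -1..1 range).
    public static void MoveTo(float handX, float handY, int screenWidth, int screenHeight)
    {
        // Clamp to the assumed range, then scale proportionally to the screen.
        float nx = Math.Max(-1f, Math.Min(1f, handX));
        float ny = Math.Max(-1f, Math.Min(1f, handY));
        int px = (int)((nx + 1f) / 2f * screenWidth);
        int py = (int)((1f - (ny + 1f) / 2f) * screenHeight); // screen Y grows downward
        SetCursorPos(px, py);
    }
}
```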

    I'm not 100% sure how you can create multiple cursors, but I would think it would simply be a case of using what MultiPoint provides: enumerate the cursors and set their positions using the method above.

    One thing to note, though: hands don't replace mice very well (you have two hands versus one mouse; some people use the left hand, some the right), so there is a lot to think about here. Hand gestures are also less accurate than mouse movements.

    The theory certainly works, though, and I've already tested it out!

    Thanks,
    Lewis


    Follow Me on Twitter: @LewisBenge Or check out my blog: http://www.geekswithblogs.com/pointtoshare/
    Friday, June 17, 2011 1:11 AM
  • Yes, test it out on the sample in the MultiPoint SDK, and then post the modified code back if you can, just to get the mice working. I hope this isn't too much to ask. I will still try this myself, and thanks, that puts me on the right track to doing what I needed.

    I think this will work, but can I set up a person to act as a dummy mouse inside my program? Anyone have thoughts on that part? The code above sets where the mouse moves; I just need a dummy mouse device for each recognized person (4 max, to run the quiz program with the MultiPoint SDK).

     

    To put it simply: the hand is the mouse. I want my multi-mouse quiz sample to recognize the hands as mouse devices so I can program them. Has anyone done this, or a similar workaround?

    Monday, June 27, 2011 4:14 AM
  • Overall, what I need is:

    1. Recognize people first.

    2. Get each person's hand recognized as a mouse device (the first hand raised for a period of time becomes the mouse; the MultiPoint SDK relies on recognizing it as a mouse).

    3. The above code looks like it will work! I'm also trying to get the hands to emulate mouse clicks, not just movements. How can that be done?

    4. Finally, can this be used for other multi-mouse scenarios?

    Thanks,

    Jeffery



    Tuesday, July 5, 2011 2:13 PM
  • By the way, thanks Lewis for that tidbit. I want it to do the above.
    Tuesday, July 5, 2011 4:24 PM
  • 3. The above code looks like it will work! I'm also trying to get the hands to emulate mouse clicks, not just movements. How can that be done?


    Hi Jeffery,

    You can use something like this:

    SkeletonFrame allSkeletons = e.SkeletonFrame;
    foreach (SkeletonData s in allSkeletons.Skeletons)
    {
        if (s.TrackingState == SkeletonTrackingState.Tracked)
        {
            var scaledHandRight = s.Joints[JointID.HandRight].ScaleTo(screenWidth, screenHeight, 0.5f, 0.5f);
            int zCoordinate = (int)scaledHandRight.Position.Z;

            // fill the global baseline on the first frame
            if (lastZ == 0)
            {
                lastZ = zCoordinate;
            }

            if (zCoordinate < lastZ)
            {
                MouseHook.SendDown();
            }
            if (zCoordinate >= lastZ)
            {
                MouseHook.SendUp();
            }

            MouseHook.MoveMouse(new System.Drawing.Point((int)scaledHandRight.Position.X, (int)scaledHandRight.Position.Y));
        }
    }
    
    

    Your baseline position is saved in lastZ (for instance lastZ = 2). When you stretch your right hand out in front of you, the Z coordinate becomes lower than the baseline (e.g. 1), and this is recognized as a mouse-button-down event.
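    One caveat with comparing against the raw baseline is that sensor jitter can toggle the button every frame. A hedged sketch of a push-to-click detector with a dead zone (the 150 mm threshold, the unit assumption, and the class name are all illustrative, not part of the sample):

```csharp
// Push-to-click with hysteresis, so small jitter in the depth reading
// does not toggle the button every frame.
class PushClickDetector
{
    const int Threshold = 150;   // assumed: millimetres the hand must move toward the sensor
    int restZ = 0;               // baseline Z, captured on the first frame
    bool isDown = false;

    // zMillimetres: current hand depth. Returns true while "pressed".
    public bool Update(int zMillimetres)
    {
        if (restZ == 0) restZ = zMillimetres;
        if (!isDown && zMillimetres < restZ - Threshold)
            isDown = true;                                   // pushed past the threshold
        else if (isDown && zMillimetres >= restZ - Threshold / 2)
            isDown = false;                                  // released back past half the threshold
        return isDown;
    }
}
```

    The half-threshold release point is the hysteresis: the hand has to retreat noticeably before the button is released, which avoids rapid down/up flapping right at the boundary.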


    MouseHook class:

    public class MouseHook
    {
        [DllImport("user32.dll")]
        private static extern void mouse_event(UInt32 dwFlags, UInt32 dx, UInt32 dy, UInt32 dwData, IntPtr dwExtraInfo);

        private const UInt32 MOUSEEVENTF_LEFTDOWN = 0x0002;
        private const UInt32 MOUSEEVENTF_LEFTUP = 0x0004;

        internal static void SendClick()
        {
            SendDown();
            SendUp();
        }

        internal static void SendUp()
        {
            mouse_event(MOUSEEVENTF_LEFTUP, 0, 0, 0, IntPtr.Zero);
        }

        internal static void SendDown()
        {
            mouse_event(MOUSEEVENTF_LEFTDOWN, 0, 0, 0, IntPtr.Zero);
        }

        internal static void MoveMouse(System.Drawing.Point p)
        {
            System.Windows.Forms.Cursor.Position = p;
        }
    }
    

     

     

     



    Wednesday, July 6, 2011 7:02 AM
  • I see how the above code works.

    I only have one question: what do I have to do to get it to recognize different people/users in the MultiPoint SDK?


    Thanks Raphael for putting it in C#. My C++ skills don't cover Windows Forms, but I can program in it.

    Wednesday, July 6, 2011 12:34 PM
  • You could check whether the tracked skeleton is the first tracked skeleton with something like this:

    SkeletonFrame allSkeleton = e.SkeletonFrame;
    SkeletonData skeleton = (from sk in allSkeleton.Skeletons
                             where sk.TrackingState == SkeletonTrackingState.Tracked
                             select sk).FirstOrDefault();
    if (s == skeleton)
    {
        // control first mouse
    }
    else
    {
        // second mouse
    }


    in combination with the code above

     



    Wednesday, July 6, 2011 12:58 PM
  • One last question: how do I recognize each person as a different mouse? Do I write a simple device driver that recognizes each person as a mouse? Can I do it through the OS? Or does the Kinect offer this functionality? I'm just trying to get feedback on how to get each person recognized as a mouse. If this can't be done here, can you point me to the driver SDK and its forums?

    What I have to do is recognize the people on the Kinect as mice first, before using the MultiPoint SDK.

    P.S. I'm assuming a generic USB mouse driver for each person (Dell or IBM hardware, for reference).

    I suppose I would run Raphael's code above for all four people, iterate through them, and get Windows to recognize each person from the Kinect as a mouse and automatically install the driver; or else I need to modify the MultiPoint code to tie into the Kinect recognition. Either way, I want the Kinect to play well with MultiPoint. If someone could do the first two mice as a demonstration, I can do the rest.

     

    I will mark this answered after that, because I have all the code except the mouse recognition. (Ideally it would show up as installing 4 mice every time the Kinect is plugged in, but if it can just recognize the Kinect users as generic mouse devices and use them, extra points.)


    If you don't know how to get it to work with the MultiPoint SDK, I will consider it answered and move on.
    Remember that MultiPoint needs mice to be recognized, but I can probably trick it into thinking each person recognized by the Kinect is a mouse, and install/use only the generic mouse drivers for them. This is closer to what I'm trying to do.

    Basically, this is probably close to what the code will look like, though I haven't tried it yet (minus the mouse recognition):

    SkeletonFrame allSkeletons = e.SkeletonFrame;
    foreach (SkeletonData s in allSkeletons.Skeletons)
    {
        if (s.TrackingState == SkeletonTrackingState.Tracked)
        {
            SkeletonData skeleton = (from sk in allSkeletons.Skeletons
                                     where sk.TrackingState == SkeletonTrackingState.Tracked
                                     select sk).FirstOrDefault();
            if (s == skeleton)
            {
                // control first mouse by installing the driver and fooling Windows into thinking the person is a mouse here
            }
            else
            {
                // second mouse
            }

            var scaledHandRight = s.Joints[JointID.HandRight].ScaleTo(screenWidth, screenHeight, 0.5f, 0.5f);
            int zCoordinate = (int)scaledHandRight.Position.Z;

            // fill the global baseline on the first frame
            if (lastZ == 0)
            {
                lastZ = zCoordinate;
            }

            if (zCoordinate < lastZ)
            {
                MouseHook.SendDown();
            }
            if (zCoordinate >= lastZ)
            {
                MouseHook.SendUp();
            }

            MouseHook.MoveMouse(new System.Drawing.Point((int)scaledHandRight.Position.X, (int)scaledHandRight.Position.Y));
        }
    }
    
    

    

     

     

    Thursday, July 28, 2011 12:28 PM
  • If anyone has fooled around with device recognition/installation in C#, please post and help me finish this. C++ helps too, but preferably C#.
    Thursday, July 28, 2011 1:23 PM
  • As long as the people are recognized by the MultiPoint SDK as mice, your code will be fine. Sorry, above I meant MultiPoint, if I didn't state that already. Also, can someone please post a full code sample? The application can be simple as long as it works.

    The reason I want it to recognize each person as a mouse is that I don't want to pay for more mice, given the quote below from the MultiPoint Mouse SDK's website:

    MultiPoint Mouse SDK supports USB, PS/2, Bluetooth, trackpad, and wireless mouse devices. For wireless mouse devices, a frequency of 2.4 GHz is recommended, as 27 MHz mouse devices often interfere with each other in close proximity. Other HID devices (such as joysticks and game controllers) are not supported.

    That means I might have to code my own modified mouse driver for the Kinect. Anyone up for it? It would operate like a normal mouse driver, except tied to the Kinect. Using the above, combined with the recognition points, it looks like it could work as a mouse driver (I would just need to distribute the correct DLLs with the driver, or distribute the Kinect driver and use its functions).


    Thursday, July 28, 2011 2:17 PM
  • I used MultiPoint Mouse for Imagine Cup 2009, but I must admit I'm no expert in it. The point is that I don't think you would need to create a driver to move the mouse cursor; just use the SetPosition method. On the other hand, MultiPoint Mouse doesn't give you much except a screen cursor, and the Kinect doesn't give you events like clicking, but you might still want to try it with MultiPoint Mouse; I don't see why not. There is a forum archive for MultiPoint Mouse (a bit hard to find with all the dead links around), and two relevant threads:

    From http://social.msdn.microsoft.com/Forums/en-US/mpttroubleshoot/thread/9d818cb7-fd6a-4000-91fd-e9686f006e18 we see we can set the position of the multipoint mouse like this:

    for (int i = 0; i < 4; i++)
    {
        DeviceInfo mpDeviceInfo = (DeviceInfo)MultiPointSDK.Instance.MouseDeviceList[i];
        MultiPointMouseDevice mpMouseDevice = (MultiPointMouseDevice)mpDeviceInfo.DeviceVisual;
        Point mousePosition = mpMouseDevice.SetPosition();
    }

    Note that the last line is an error; SetPosition should actually be called like:

    mpDevice.SetPosition((int)screenPosition.X,(int)screenPosition.Y);

    as we can see from a big sample code about restricting mouse movement:

    http://social.msdn.microsoft.com/Forums/en-US/mpttroubleshoot/thread/fb85a187-e689-4f83-8c3b-6c83260a2280

    I don't think there is a way to create clicks in MultiPoint version 1.5, but you could work around this by forcing the standard mouse to click at certain locations on the screen. Windows doesn't care which person clicks the mouse; it just raises the mouse event with the screen location. There are a bunch of MultiPoint Mouse tutorials at:

    http://msdnnepal.net/blogs/nutan/archive/2009/11.aspx

    http://msdnnepal.net/blogs/nutan/archive/2009/12.aspx
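    The "force the standard mouse to click at a location" workaround can be sketched like this. Both Win32 calls are standard; the `ClickAt` helper name is made up for this example.

```csharp
using System.Runtime.InteropServices;

static class ClickAt
{
    [DllImport("user32.dll")]
    static extern bool SetCursorPos(int x, int y);

    [DllImport("user32.dll")]
    static extern void mouse_event(uint dwFlags, uint dx, uint dy, uint dwData, System.IntPtr dwExtraInfo);

    const uint MOUSEEVENTF_LEFTDOWN = 0x0002;
    const uint MOUSEEVENTF_LEFTUP = 0x0004;

    // Move the real system cursor to a screen location and synthesise a left click there.
    public static void LeftClick(int screenX, int screenY)
    {
        SetCursorPos(screenX, screenY);
        mouse_event(MOUSEEVENTF_LEFTDOWN, 0, 0, 0, System.IntPtr.Zero);
        mouse_event(MOUSEEVENTF_LEFTUP, 0, 0, 0, System.IntPtr.Zero);
    }
}
```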

    Friday, July 29, 2011 1:48 PM
  • Sorry, I must admit this is experimental, but my boss would be very pleased if I got it working. After all, we could mount the Kinect on the wall and not use mice (mice are more of a liability, because kids and teens throw them at each other). I like the code above, thank you; that's more what I was looking for: a way to trick the MultiPoint SDK into thinking the skeleton is a mouse. Thank you, kind sir. You deserve a medal of honor, but Microsoft will probably overlook this thread until someone else comes up with a program that does exactly the same thing.

    Microsoft, please recognize the people in this thread for their time and effort, whether for the ideas alone or for their actual implementation.

    Tuesday, August 2, 2011 12:27 PM
  • One question, Tom: would I replace this part of Raphael's code:

    internal static void MoveMouse(System.Drawing.Point p)
    {
        System.Windows.Forms.Cursor.Position = new System.Drawing.Point((int)p.X, (int)p.Y);
    }

     with this part:

    internal static void MultipointObject_MouseMove(object sender, RoutedEventArgs e)
    {
        for (int i = 0; i < 4; i++)
        {
            DeviceInfo mpDeviceInfo = (DeviceInfo)MultiPointSDK.Instance.MouseDeviceList[i];
            MultiPointMouseDevice mpMouseDevice = (MultiPointMouseDevice)mpDeviceInfo.DeviceVisual;
            Point mousePosition = mpMouseDevice.SetPosition();
        }
    }

    and it will work fully? It seems that if I mix the two, I have a win-win. Although this is experimental, it would be useful to release an add-on for the commercial version of the Kinect SDK that has this tidbit implemented. I know a lot of educators who would like that.


    Or do I put it directly into the code and it works fine? Has anyone experimented with typing via the Kinect? Although that's a different discussion.

    From looking at the forum post, I believe MultiPoint has its own event handlers for the mouse. If anyone gets a full working sample, please post it here for future research. I could probably figure it out after countless hours, but I don't want to get into it more than I have to. Thanks for the post; I will try combining everyone's suggestions, and it should work.

    For the Kinect SDK part, a full code sample would be appreciated, as I'm at a loss as to the overall picture.


    • Edited by The Thinker Wednesday, August 3, 2011 2:37 PM
    Tuesday, August 2, 2011 12:35 PM
  • I couldn't get your code to work, Tom. I think it has something to do with the way I'm referencing something, but I will post the sample project I want to get it working on, so you can tell me what I'm doing wrong. I have never fooled much with the SDK, but I understand most of the events, because the code is easier to digest than in C++.



    Edit: Never mind. I created variables for each of the hard-coded values, put them on separate lines, and it worked.

    Personal quote: most complex problems can be solved by simple solutions, or by combining simple steps (that's how I passed physics).

    Quote from an IT article in Reader's Digest, "Things your IT guy won't tell you": you forgot to plug in the power cable; turn the computer off and back on. Take note, help guys.



    Still testing, Tom, fingers crossed. If it works, I will post sample code. Anyone wanting sample code, please post here and I will share the full project, as it's just a modification of the quiz sample in the MultiPoint SDK samples.


    I still have one question for Raphael: where do I place your code to move the mouse? I know the mouse move will be implemented by Tom's code and the rest is Raphael's, so you're both correct. I thought you could also do mouse down and up events with the MultiPoint SDK? Can I set the mouse hook to invoke the already-defined event handlers for them inside my program instead?
    • Edited by The Thinker Wednesday, August 3, 2011 2:44 PM
    Wednesday, August 3, 2011 2:12 PM
  • The Thinker, please send me your MultiPoint project work.
    Wednesday, August 3, 2011 2:37 PM
  • Here is the code as it is right now. Just add Raphael's code and it will be done. If I figure out where to put Raphael's code, I will repost the updated version. Somewhere there is a variable to change the number of people/users connected, but I would stick with the default of 4; once I get it working, I can help anyone scale to a whole classroom.
    I need to put it on my Live drive first, then I will post the link here. Link to the project as it is right now:

    https://skydrive.live.com/?cid=28bae91ca075b1e5#!/?cid=28bae91ca075b1e5&sc=documents&id=28BAE91CA075B1E5%21127


    Is it possible to zip and post the project on SkyDrive correctly?
    Wednesday, August 3, 2011 3:04 PM
  • Okay, I need some help getting the project to work with your code, Raphael. You could call some code in MultiPoint using Tom's links above; it has mouse events, so I can tell MultiPoint to fire the mouse event every time the Kinect check above evaluates to true. The trouble is I need the code to run somewhere where I can check constantly for Kinect movement and then move the mouse. Anyone have any ideas which event handler or subroutine to put Raphael's Kinect code into?
    Anyone who can modify the quiz sample project to do the bidding of the Kinect should receive fame. I plan on actually deploying this at my school if anyone can get it to work.
    Wednesday, August 3, 2011 3:51 PM
  • By the way, the quiz sample is installed automatically with the MultiPoint SDK, which makes this easier. I should mention that I want more than four quiz players: four players on one Kinect, each using the correct window for their player, frozen when that player leaves and resumed when the same player returns (player freezing is optional, since the teacher can already freeze all mice). Another Kinect handles another 4 players, and so on until all 20-25 kids in the class are accounted for.

    For example, kids 1-4 are on Kinect 1 and use box regions 1-4, and kids 5-8 are on Kinect 2; they interface with the MultiPoint Mouse SDK and imitate mice, so that a person can take the quiz without the liability of mice being thrown all over the place. Microsoft, if you could piece this together as an add-on package in the next update of the Kinect SDK, or in the commercial version, that would make some teachers, and myself (the school system's IT department), happy. I know teachers hate anything that causes them more trouble, but this would be a joy, because mice would not be thrown around everywhere.



    Wednesday, August 3, 2011 5:40 PM
  • I have no idea where to put this code to try it out:

    SkeletonFrame allSkeletons = e.SkeletonFrame;
    foreach (SkeletonData s in allSkeletons.Skeletons)
    {
        if (s.TrackingState == SkeletonTrackingState.Tracked)
        {
            var scaledHandRight = s.Joints[JointID.HandRight].ScaleTo(screenWidth, screenHeight, 0.5f, 0.5f);
            int zCoordinate = (int)scaledHandRight.Position.Z;

            // fill the global baseline on the first frame
            if (lastZ == 0)
            {
                lastZ = zCoordinate;
            }

            if (zCoordinate < lastZ)
            {
                MouseHook.SendDown();
            }
            if (zCoordinate >= lastZ)
            {
                MouseHook.SendUp();
            }

            MouseHook.MoveMouse(new System.Drawing.Point((int)scaledHandRight.Position.X, (int)scaledHandRight.Position.Y));
        }
    }

    Can anyone help?

    Tuesday, August 9, 2011 12:57 PM
  • Assuming a program structure like the C# SkeletalViewer sample (installed to C:\Users\Public\Documents\Microsoft Research KinectSDK Samples\NUI\SkeletalViewer\CS), that code would be appropriate for the nui_SkeletonFrameReady method in MainWindow.xaml.cs.

    Hope this helps,
    Eddy


    I'm here to help
    Tuesday, August 9, 2011 11:32 PM
  • If I were programming directly against the Kinect SDK, that would be fine, but I'm taking the quiz sample from the MultiPoint SDK, modifying it, and putting the Kinect code into it. Any more ideas? I think I need a loop where the code runs every so often (maybe in the redraw code for the MultiPoint WPF form; it uses the same form type). I'm not sure where to put it so that it works correctly without affecting the code already in place. I want it to recognize the Kinect as 4 mice, and I know Tom's code does that. I need an event that fires when the Kinect sees movement, or a loop that, instead of recognizing mice, recognizes the Kinect (I'll use Raphael's code if it works correctly).
    Can anyone help me put Raphael's code together with Tom's? Also, can someone post the init and uninit code too, so I can see the overall picture (for Raphael's code)?
    Wednesday, August 10, 2011 12:53 PM
  • I created a CodePlex page with a download link, so anyone can download the source code and help me get this project working. I will share the full source code if I get it working before then, or maybe just publish a compiled build and remove the source code.

    Download link to source code for what i have so far:
    http://kinectmultipoint.codeplex.com/releases/71540/download/268699


    Wednesday, August 10, 2011 1:46 PM
  • Awesome! I will try it out.

    Wednesday, August 10, 2011 2:26 PM
  • My overall problem is: if I put it into the nui_SkeletonFrameReady method, what calls nui_SkeletonFrameReady? Or is it an event that fires when I move my hand? If you want to try to help, the project source code link is above, so you can see what I'm talking about without me talking this post to death.
    Wednesday, August 10, 2011 2:27 PM
  • I don't know if nui_SkeletonFrameReady alone will do it. For anyone wanting the source code, it's too big to list here, so I provided a download link. You can look through it and put Raphael's Kinect code and Tom's MultiPoint code in the correct places, although I'm shooting in the dark as to the proper events: the Kinect needs to be recognized first, and then Tom's code needs to execute. Tom's code is supposed to enumerate through the mice and set properties/emulate a click, if I'm deciphering it correctly. Is this correct?
    Wednesday, August 10, 2011 7:56 PM
  • Hi The Thinker,

    I looked over your code and I think I understand your aims a little better, but I should ask for a bit of clarification. Mouse movement to me usually means I can interact with the underlying operating system. We got (single) mouse movement working, together with switching images and camera feeds, using WinForms at one of Lewis' Kinect hack-a-thon workshops. We even had trouble with the Kinect mouse movements interfering with Visual Studio.

    However, it looks like you might be looking for something simpler, like an e-learning application that lets more than one participant join in. Is it right that accepting mouse movements on the WPF surface is enough? That's different from controlling the computer: within WPF/WinForms we can control all the controls, like buttons. In that case, I think it would be much simpler to build a quiz application and move mouse graphics around the WPF window to match the hand motions. Bringing MultiPoint Mouse into the picture would then not be necessary.
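    That simpler approach can be sketched as repositioning one cursor graphic per player on a WPF Canvas. This is a hedged illustration: the `playerCursors` array, `cursorCanvas`, and `QuizWindow` names are assumptions, not part of either SDK.

```csharp
using System.Windows.Controls;

public partial class QuizWindow
{
    Canvas cursorCanvas;     // assumed: an overlay canvas declared in XAML
    Image[] playerCursors;   // assumed: one arrow graphic per player, children of cursorCanvas

    // x/y: hand position already scaled to canvas pixels (e.g. via ScaleTo).
    void MoveCursorGraphic(int playerIndex, double x, double y)
    {
        Canvas.SetLeft(playerCursors[playerIndex], x);
        Canvas.SetTop(playerCursors[playerIndex], y);
    }
}
```

    Calling this from the skeleton-frame handler, once per tracked skeleton, gives each player an on-screen "mouse" without any real mouse device or driver being involved.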


    Thursday, August 11, 2011 1:52 AM
  • Tom, I want each person the Kinect recognizes to be used as a mouse in the quiz sample from the MultiPoint SDK. But if I don't need the MultiPoint SDK to get a multi-person mouse program running for more than 4 people, please help me. I think it would be easier to program with the MultiPoint SDK, enumerating through the (fake) mice and moving them according to where each person's hand is on the Kinect.

     

    No, I'm not controlling the actual computer; I'm staying within the WPF application, so the MultiPoint SDK is still possible. I just won't be able to use MultiPoint if I decide to go outside the application. Any help or contributions to the source code on the project site would be nice. If you make any changes to the code and upload them to the site, be sure to post them here. The project website is: http://kinectmultipoint.codeplex.com.




    Thursday, August 11, 2011 1:52 PM
  • Anyone wanting to revise the code, please upload it to CodePlex or your own website. I would like to accept mouse movements and click events within the WPF form. Can anyone help? I might be able to get the quiz application working, but I'm still figuring out the proper event to put the Kinect section of code into, so I don't have to recode the whole quiz application by hand (which would take time).

    Sorry if I have posted twice. I sometimes forget something important, and although most of it looks like a repeat, there's usually something I've added.



    I'm confused as to where nui_SkeletonFrameReady would fit in the MultiPoint quiz sample.
    According to this Kinect code, it checks for people, and then Tom's code takes the mice and moves them around, so I don't have to recode the drawing of the mouse graphics on screen or manage the redraw code for objects in the window:

    SkeletonFrame allSkeletons = e.SkeletonFrame;
    foreach (SkeletonData s in allSkeletons.Skeletons)
    {
        if (s.TrackingState == SkeletonTrackingState.Tracked)
        {
            SkeletonData skeleton = (from sk in allSkeletons.Skeletons
                                     where sk.TrackingState == SkeletonTrackingState.Tracked
                                     select sk).FirstOrDefault();
            if (s == skeleton)
            {
                // control first mouse by installing the driver and fooling Windows into thinking the person is a mouse here
            }
            else
            {
                // second mouse
            }

            var scaledHandRight = s.Joints[JointID.HandRight].ScaleTo(screenWidth, screenHeight, 0.5f, 0.5f);
            int zCoordinate = (int)scaledHandRight.Position.Z;

            // fill the global baseline on the first frame
            if (lastZ == 0)
            {
                lastZ = zCoordinate;
            }

            if (zCoordinate < lastZ)
            {
                MouseHook.SendDown();
            }
            if (zCoordinate >= lastZ)
            {
                MouseHook.SendUp();
            }
        }
    }
    I'm wondering how to get this working. I think nui_SkeletonFrameReady only fires when a skeleton frame becomes ready, but I need the MultiPoint code executed at some point, so I don't have to redraw objects and handle their events on screen myself.

    Tuesday, August 16, 2011 12:11 PM
  • As moderator, I won't set the precedent of contributing directly to your codeplex project, or doing the equivalent of creating my own codeplex project just for you, but I can try to help you in the same way I try to help others such that information is searchable within this forum. I finally got around to running your project, and ended up doing the following things:

    1) Created a new MouseHook.cs file to hold only the MouseHook class code. You'll need to add a "using System.Runtime.InteropServices;" to get DllImport to work

    2) Added an x86 build platform configuration for Microsoft.Multipoint.SDK.Samples.Quiz project and set that to be the active build target platform, since otherwise I would get problems trying to load INuiInstanceHelper.dll when using Kinect SDK APIs.

    3) Added following declarations to WindowMain.xaml.cs:

    Microsoft.Research.Kinect.Nui.Runtime nui;
    int lastZ = 0;

    4) Added following code at the end of Window_Loaded method (also in WindowMain.xaml.cs):

    nui = new Microsoft.Research.Kinect.Nui.Runtime();
    nui.Initialize(RuntimeOptions.UseSkeletalTracking);
    nui.SkeletonFrameReady += new EventHandler<SkeletonFrameReadyEventArgs>(nui_SkeletonFrameReady);

    5) Added the following nui_SkeletonFrameReady method definition to WindowMain class:

    void nui_SkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
    {
      SkeletonFrame allSkeletons = e.SkeletonFrame;
      foreach (SkeletonData s in allSkeletons.Skeletons)
      {
        if (s.TrackingState == SkeletonTrackingState.Tracked)
        {
          int zCoordinate = (int)(s.Joints[JointID.HandRight].Position.Z * 300.0);
          //filling global variables
          if (lastZ == 0)
          {
            lastZ = zCoordinate;
          }
    
          if (zCoordinate < lastZ)
          {
            MouseHook.SendDown();
          }
          if (zCoordinate >= lastZ)
          {
            MouseHook.SendUp();
          }
        }
      }
    }

    I just multiplied Position.Z by a hardcoded 300.0 because I didn't find your screenHeight and screenWidth definitions within your project, and I feel like getting those values is a problem for you to figure out in the context of your application, rather than being Kinect-specific.

    After making these modifications, I could see both MouseHook.SendDown and MouseHook.SendUp methods being called as I moved my right hand when I was far enough away from camera so that my entire body fit within camera frame. If you get your program to this state, I hope that you can iterate from there to get closer to your goal.
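    On the screenWidth/screenHeight question, there are standard ways to get real screen dimensions instead of a hardcoded 300. A hedged fragment (note the WPF properties return device-independent units, not necessarily physical pixels):

```csharp
// WPF: device-independent units.
double screenWidth = System.Windows.SystemParameters.PrimaryScreenWidth;
double screenHeight = System.Windows.SystemParameters.PrimaryScreenHeight;

// WinForms equivalent, in pixels.
var bounds = System.Windows.Forms.Screen.PrimaryScreen.Bounds;
int pixelWidth = bounds.Width;
int pixelHeight = bounds.Height;
```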

    Hope this helps,
    Eddy


    I'm here to help
    Wednesday, August 17, 2011 2:18 AM
  • I think 300 is a close approximation to the screen size in the quiz program. Somewhere there is either a file or a piece of code that sets the screen size depending on the number of users present. Also, somewhere you can change the number of users, which is hard-coded to a default of 4. I am going to try the modifications after I finish an Infinite Campus Crystal Reports report card, or try your suggestion while I wait on a response from their forums. As long as nui_SkeletonFrameReady runs every time my hands get the Kinect's attention, it will work. By the way, does that include the mouse movement part? I can call the mouse events, passing the number of the mouse, fooling the OS into thinking I have it there.

    How do I enumerate through the mice? I want each person to be recognized as a mouse, so Tom's code comes into play, with a little more code if necessary. I know there has to be mouse emulation, because I know how to get my program to send keyboard keys.

    My.Computer.SendKeys (available in VBScript and VB.NET, which seem very similar) sends keyboard input to a program like a ghostwriter, e.g. to issue commands in a command prompt, so you don't have to do that task by hand.

    I once thought about making a program/script that sends Telnet commands to the Telnet (dialer) application to program a router or switch automatically; that would be a lot easier, since I know the commands for it.



    • Edited by The Thinker Wednesday, August 17, 2011 5:36 PM
    Wednesday, August 17, 2011 12:12 PM
  • It almost works as of my first test. How do I map the coordinates of the hand to my screen? It does bounds checking, but I need it to call MultipointObject_DeviceArrivalEvent, or add a player to the region, every time a person is detected, instead of waiting for mice. Any ideas on how to do this? Once people are added to the canvas, I can truly test it for bugs and see what doesn't work.

    All in all, I just want the drawing of the window for each player, and the XML file code, to be driven by the Kinect instead of by mice.
    Wednesday, August 17, 2011 1:13 PM
  • Forgot to mention: I'm uploading newly modified code to CodePlex sometime today, so anyone can download the version with some of the Kinect code implemented. Also, the first official executable build to try (32-bit only; I will do a 64-bit build later on my computer at home).

    Wednesday, August 17, 2011 5:55 PM
  • Regarding "BTW, does that include the mouse movement part?": no, the code I pasted above only sends mouse down and mouse up events.

    Regarding "How do i enumerate through the mice?" I don't know. You should look at their developer resources (http://www.microsoft.com/multipoint/mouse-sdk/developer.aspx) which might include a domain-specific forum. In this forum we can help you with Kinect-specific questions and a couple of general development issues enough to get parts of your application up and running, but in the end you own your application, so since it has components from multiple SDKs you will probably need to do a bunch of your own research to connect all the pieces together. I'd suggest playing around with multi-mouse SDK on its own (i.e.: without Kinect) to get some expertise in all the pieces needed to use it well, playing around with kinect SDK on its own, playing around with emulating mouse messages without Kinect and then after playing around with all of these independently start integrating them together. Maybe good intermediate steps are:

    1) an application that uses kinect to emulate single-mouse activity
    2) an application that emulates multi-mouse activity with different groups of alphabetic keys (i.e.: no kinect) to represent each mouse. E.g.: mouse 1 represented by A,S,D,W for direction + Q for representing click, mouse 2 represented by H,J,K,U for direction + Y for representing click, etc.
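    Intermediate step 2 can be sketched as a keyboard-driven fake mouse, with no Kinect involved. This is a hedged illustration: the `FakeMouse` class, the key bindings, and the step size are assumptions for the exercise.

```csharp
class FakeMouse
{
    public int X, Y;                          // current fake-cursor position
    readonly char up, down, left, right;      // this player's direction keys
    const int Step = 10;                      // assumed: pixels moved per key press

    public FakeMouse(char up, char down, char left, char right)
    {
        this.up = up; this.down = down; this.left = left; this.right = right;
    }

    // Call from a KeyDown handler; returns true if the key belonged to this mouse.
    public bool Handle(char key)
    {
        key = char.ToUpperInvariant(key);
        if (key == up) Y -= Step;
        else if (key == down) Y += Step;
        else if (key == left) X -= Step;
        else if (key == right) X += Step;
        else return false;
        return true;
    }
}

// e.g. mouse 1: new FakeMouse('W', 'S', 'A', 'D'); mouse 2: new FakeMouse('U', 'J', 'H', 'K');
```

    Once the application handles several of these independently, swapping the key input for Kinect hand positions is a much smaller step.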

    Good luck,
    Eddy


    I'm here to help
    Wednesday, August 17, 2011 7:36 PM
  • If I had more time at my workplace, I would probably just take out the mouse drawing code and reuse it for each person, eliminating the need for mice. Then I wouldn't need to recognize the mice, but I could find and post that if someone can help me piece together the components and make sense of the huge pile of code. Anyone wanting to help, please post here or on the CodePlex page you downloaded it from. The source code is on the CodePlex project.

     

    Thursday, August 18, 2011 5:24 PM
  • Anyone wanting to help me out, post the important information here and make a copy of the updated source code for CodePlex.
    Tuesday, August 23, 2011 12:04 PM
  • I will stop posting to this thread, but if anyone has any full code samples to contribute, please post the important parts here and the full code in a zip file on the CodePlex website.
    Thursday, August 25, 2011 12:03 PM