Kinect Interactions

  • Question

  • Hi,

    I've been playing with the new SDK (1.7).

    Is there a way of triggering an event when the user's hand does a press or a grip (not over any button/control) without having to have a KinectRegion?

    I just want to be able to know when the user does these actions.

    Any help will be much appreciated.

    ilo_oli

    Tuesday, March 19, 2013 3:47 AM

Answers

  • Your problem is here:

            public InteractionInfo GetInteractionInfoAtLocation(int skeletonTrackingId, InteractionHandType handType, double x, double y)
            {
                //not used
                return null;
            }
    

    You need to supply a real implementation for this method. In fact, for your purposes, you can probably delete all of the other code from this class. Try this code:

            public InteractionInfo GetInteractionInfoAtLocation(int skeletonTrackingId, InteractionHandType handType, double x, double y)
            {
                return new InteractionInfo
                    {
                        IsPressTarget = false,
                        IsGripTarget = true,
                    };
            }
    
    What this effectively says is, "Every possible hand position -- regardless of the skeleton tracking ID or the hand type -- is a grip target (and not a press target)."


    John | Kinect for Windows development team

    Tuesday, March 26, 2013 9:56 PM

All replies

  • Hi,

    Sorry, I can't help you; I'm actually having similar problems trying to get it to work with C++.

    Especially with how to set up the "NuiCreateInteractionStream" function and the INuiInteractionClient pointer it needs.
    It seems it's an abstract class, so it can't simply be instantiated.
    Deriving a custom class from it doesn't seem to work for me (yet), possibly because I don't understand the "interface" behaviour.

    It seems the grip info can eventually be obtained from the INuiInteractionStream::GetNextFrame method,
    then by diving into the NUI_INTERACTION_FRAME, then NUI_USER_INFO, and then the NUI_HANDPOINTER_INFO structs.
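
    If I read the structs correctly, the consuming side would look roughly like this (completely untested on my part, and the variable names are mine):

    NUI_INTERACTION_FRAME interactionFrame = {0};
    if (SUCCEEDED(m_interactionStream->GetNextFrame(0, &interactionFrame)))
    {
        for (int u = 0; u < NUI_SKELETON_COUNT; ++u)
        {
            const NUI_USER_INFO &user = interactionFrame.UserInfos[u];
            if (user.SkeletonTrackingId == 0)
                continue; // slot holds no tracked user

            for (int h = 0; h < NUI_USER_HANDPOINTER_COUNT; ++h)
            {
                const NUI_HANDPOINTER_INFO &hand = user.HandPointerInfos[h];
                if (hand.HandEventType == NUI_HAND_EVENT_TYPE_GRIP)
                {
                    // user.SkeletonTrackingId gripped with this hand
                }
                else if (hand.HandEventType == NUI_HAND_EVENT_TYPE_GRIPRELEASE)
                {
                    // user.SkeletonTrackingId released the grip
                }
            }
        }
    }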


    Any tips or code snippets would be highly appreciated, since there doesn't seem to be a C++ example that covers the new KinectInteraction yet.
    Tuesday, March 19, 2013 1:15 PM
    Moderator
  • I've got no idea how to make this work independently, as I wish to tie it into my own C# project!
    Tuesday, March 19, 2013 3:35 PM
  • The place to get started would be the Kinect Interactions documentation: http://msdn.microsoft.com/en-us/library/dn188671.aspx.

    There are essentially two layers:

    • Interaction Stream (C++ or managed)
    • Interaction Controls (managed only, WPF-specific)

    The WPF controls are implemented in terms of the interaction stream.

    If you are using a UI framework other than WPF, you will need to do the following:

    • Implement the "interaction client" interface. This interface has a single method, GetInteractionInfoAtLocation. This method will be called repeatedly by the interaction stream as it tracks the user's hand movements. Each time it is called, it is your responsibility to return the "interaction info" (InteractionInfo in managed, NUI_INTERACTION_INFO in C++) for the given user, hand, and position. Essentially, this is how the interaction stream performs hit-testing on the controls within your user interface.
    • Create an instance of the interaction stream, supplying it a reference to your interaction client implementation.
    • Start the Kinect sensor's depth and skeleton streams.
    • For each depth and skeleton frame produced by the sensor streams, pass the frame's data to the appropriate method (ProcessDepth or ProcessSkeleton) of the interaction stream.
    • As the interaction stream processes the input frames from the sensor, it will produce interaction frames for your code to consume. In C++, call the interaction stream's GetNextFrame method to retrieve each such frame. In managed code, you can either call OpenNextFrame, or subscribe to the InteractionFrameReady event.
    • Read the data from each interaction frame to find out what the user is doing. Each frame has a timestamp and a collection of user info structures, each of which has a user tracking ID and a collection of hand info structures, which provide information about each hand's position, state, and grip/ungrip events.

    I hope this brief summary gives you a sense of what's involved in using the interaction stream. You should consult the documentation for the rest of the details.
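
    To make this concrete, here is a bare-bones C++ sketch of the steps above. The helper functions and names are only illustrative, error handling is minimal, and it reports every location as a grip target rather than doing real hit-testing; treat it as an illustration, not a definitive implementation:

    #include <NuiApi.h>
    #include <KinectInteraction.h>

    // Step 1: an interaction client whose hit test reports every location
    // as a grip target (and not a press target).
    class GripEverywhereClient : public INuiInteractionClient
    {
    public:
        // The one instance below is statically allocated, so no real COM
        // reference counting is needed.
        STDMETHODIMP_(ULONG) AddRef()  { return 2; }
        STDMETHODIMP_(ULONG) Release() { return 1; }
        STDMETHODIMP QueryInterface(REFIID riid, void **ppv) { *ppv = this; return S_OK; }

        STDMETHODIMP GetInteractionInfoAtLocation(DWORD skeletonTrackingId,
            NUI_HAND_TYPE handType, FLOAT x, FLOAT y, NUI_INTERACTION_INFO *pInfo)
        {
            if (!pInfo) return E_POINTER;
            pInfo->IsPressTarget = FALSE;
            pInfo->IsGripTarget  = TRUE;
            return S_OK;
        }
    };

    GripEverywhereClient   g_client;
    INuiInteractionStream *g_stream;

    // Steps 2-3: one-time setup, once the sensor's depth and skeleton
    // streams have been started.
    HRESULT CreateStream(INuiSensor *sensor)
    {
        return NuiCreateInteractionStream(sensor, &g_client, &g_stream);
    }

    // Steps 4-6: call for each matched pair of depth and skeleton frames.
    // depthBits must point to NUI_DEPTH_IMAGE_PIXEL data, depthLength bytes long.
    HRESULT PumpInteraction(INuiSensor *sensor, UINT depthLength, BYTE *depthBits,
                            LARGE_INTEGER depthTime, NUI_SKELETON_FRAME &skeletonFrame)
    {
        HRESULT hr = g_stream->ProcessDepth(depthLength, depthBits, depthTime);
        if (FAILED(hr)) return hr;

        Vector4 gravity = {0};
        sensor->NuiAccelerometerGetCurrentReading(&gravity);
        hr = g_stream->ProcessSkeleton(NUI_SKELETON_COUNT, skeletonFrame.SkeletonData,
                                       &gravity, skeletonFrame.liTimeStamp);
        if (FAILED(hr)) return hr;

        NUI_INTERACTION_FRAME interactionFrame = {0};
        hr = g_stream->GetNextFrame(0, &interactionFrame);
        // On success, interactionFrame.UserInfos[i].HandPointerInfos[j] describes
        // each hand's position, state, and grip/ungrip events.
        return hr;
    }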


    John | Kinect for Windows development team

    Wednesday, March 20, 2013 12:19 AM
  • Thanks for confirming I was on the right track!

    I'm having some troubles with the first step, the subsequent ones seem easy enough.
    The QueryInterface, AddRef and Release functions got me confused since dealing with interfaces and reference counting is new to me.

    Will dig deeper today and start googling some more.

    Cheers,
    Wednesday, March 20, 2013 9:03 AM
    Moderator
  • Ok, so I've implemented the interaction client like this:

    class interactionClient : public INuiInteractionClient
    {
    public:
        interactionClient()                                                 {}
        ~interactionClient()                                                {}
        STDMETHODIMP_(ULONG)    AddRef()                                    { return 2;     } // instance has static lifetime, so no real ref counting
        STDMETHODIMP_(ULONG)    Release()                                   { return 1;     }
        STDMETHODIMP            QueryInterface(REFIID riid, void **ppv)     { *ppv = this; return S_OK;  }
        HRESULT                 GetInteractionInfoAtLocation(DWORD skeletonTrackingId, NUI_HAND_TYPE handType, FLOAT x, FLOAT y, NUI_INTERACTION_INFO *pInteractionInfo)
        {
            if(pInteractionInfo)
            {
                pInteractionInfo->IsPressTarget         = false;
                pInteractionInfo->PressTargetControlId  = 0;
                pInteractionInfo->PressAttractionPointX = 0.f;
                pInteractionInfo->PressAttractionPointY = 0.f;
                pInteractionInfo->IsGripTarget          = true;
                return S_OK;
            }
            return E_POINTER;
        }
    };

    Then I create the interaction stream like this:

    interactionClient               m_interactionClient;
    INuiInteractionStream*          m_interactionStream;
    NuiCreateInteractionStream(m_nuiSensor, &m_interactionClient, &m_interactionStream);

    Checking the HRESULT, it does not fail, and m_interactionStream is not NULL afterwards.

    Processing the depth, I'm not entirely sure which BYTE buffer to pass; I'm currently doing something like this:

    NUI_IMAGE_FRAME imageFrame;
    m_nuiSensor->NuiImageStreamGetNextFrame(m_pDepthStreamHandle, 0, &imageFrame);
    NUI_LOCKED_RECT     LockedRect;
    INuiFrameTexture*   pTexture = imageFrame.pFrameTexture;
    if(LockedRect.Pitch)
       memcpy(m_depthBuffer->GetBuffer(), PBYTE(LockedRect.pBits), m_depthBuffer->GetBufferSize());
    pTexture->UnlockRect(0);
    hr = m_nuiSensor->NuiImageStreamReleaseFrame(m_pDepthStreamHandle, &imageFrame);

    And then I pass the m_depthBuffer->GetBuffer() buffer to ProcessDepth, which seems to fail.

    When I then call ProcessSkeleton it also fails, possibly because the depth already failed?
    Doing GetNextFrame then naturally results in the interaction frame being NULL as well, which is to be expected.

    I feel like I'm getting closer, but since the manual isn't entirely explicit and there is no example, the devil is probably in the details and I'm probably doing something stupid somewhere :)
    Note that I'm using the depth buffer and skeletons in my application already so I know the setup stuff and data fetching/processing for all that works correctly.


    Any pointers to help solve the puzzle? :)

    Wednesday, March 20, 2013 2:54 PM
    Moderator
  • OK, I think I see the problem... There are actually two different formats for the depth data, and the Interaction Stream requires the "other" one, which can be obtained by using NuiImageFrameGetDepthImagePixelFrameTexture (what a name!).

    Other issues with your code: You need to call LockRect; otherwise, your LockedRect structure will contain garbage data. Also, be sure that m_depthBuffer has a buffer size of exactly LockedRect.size bytes.

    NUI_IMAGE_FRAME imageFrame;
    INuiFrameTexture* pDepthImagePixelFrameTexture;
    BOOL nearMode = FALSE;
    m_nuiSensor->NuiImageStreamGetNextFrame(m_pDepthStreamHandle, 0, &imageFrame);
    m_nuiSensor->NuiImageFrameGetDepthImagePixelFrameTexture(m_pDepthStreamHandle, &imageFrame, &nearMode, &pDepthImagePixelFrameTexture);
    NUI_LOCKED_RECT     LockedRect;
    INuiFrameTexture*   pTexture = pDepthImagePixelFrameTexture;
    pTexture->LockRect(0, &LockedRect, nullptr, 0);
    m_depthBuffer->SetBufferSize(LockedRect.size);
    if(LockedRect.Pitch)
        memcpy(m_depthBuffer->GetBuffer(), PBYTE(LockedRect.pBits), m_depthBuffer->GetBufferSize());
    pTexture->UnlockRect(0);
    hr = m_nuiSensor->NuiImageStreamReleaseFrame(m_pDepthStreamHandle, &imageFrame);


    John | Kinect for Windows development team

    Wednesday, March 20, 2013 5:40 PM
  • Thanks again John!

    Will look into using the "other" depth data format.

    And thanks for the tips on LockRect and buffer sizes, these were just snippets from my otherwise quite long code :)

    Wednesday, March 20, 2013 6:15 PM
    Moderator
  • Hi John,

    I have successfully run the two functions ProcessDepth and ProcessSkeleton, but when I try to retrieve information from the interaction stream, it fails.

    My depth and skeleton update code:

    NUI_IMAGE_FRAME imageFrame;
    HRESULT hr = m_pNuiSensor->NuiImageStreamGetNextFrame(m_pDepthStreamHandle, 0, &imageFrame);
    if ( FAILED( hr ) )
    {
        return false;
    }

    BOOL nearMode = FALSE;
    INuiFrameTexture *frameTexture = 0;
    m_pNuiSensor->NuiImageFrameGetDepthImagePixelFrameTexture( m_pDepthStreamHandle, &imageFrame,
        &nearMode, &frameTexture );

    NUI_LOCKED_RECT LockedRect;
    frameTexture->LockRect( 0, &LockedRect, 0, 0 );

    hr = g_pInteractionStream->ProcessDepth(LockedRect.size, LockedRect.pBits, imageFrame.liTimeStamp);

    m_skeletonFrameTimeStamp = SkeletonFrame.liTimeStamp;
    Vector4 v = {0};
    m_pNuiSensor->NuiAccelerometerGetCurrentReading( &v );
    hr = g_pInteractionStream->ProcessSkeleton(NUI_SKELETON_COUNT, SkeletonFrame.SkeletonData, &v, m_skeletonFrameTimeStamp);

    My interaction stream information retrieval code:

    NUI_INTERACTION_FRAME interactionFrame;
    HRESULT hr = g_pInteractionStream->GetNextFrame(0, &interactionFrame);

    • Edited by Justin.Ko Tuesday, March 26, 2013 8:40 AM
    Monday, March 25, 2013 9:24 AM
  • I'm trying to do the same thing in C#. I think I am very close, but it stops/pauses (see code below).

    Here is a cut-down version of my code; it might help:

    using Microsoft.Kinect.Toolkit;
    using Microsoft.Kinect.Toolkit.Controls;
    using Microsoft.Kinect.Toolkit.Interaction;
    
    private readonly KinectSensorChooser sensorChooser;
    private UserInfo[] userInfos;
    private InteractionStream its;
    private MDKinectInteractions kinIn; //My interface class
    
    public MainWindow()
    {
    
    // initialize the sensor chooser and UI
    this.sensorChooser = new KinectSensorChooser();
    this.sensorChooser.KinectChanged += SensorChooserOnKinectChanged;
    this.sensorChooserUi.KinectSensorChooser = this.sensorChooser;
    this.sensorChooser.Start();
    kinIn = new MDKinectInteractions();
    
    }
    
    private static void SensorChooserOnKinectChanged(object sender, KinectChangedEventArgs args)
            {
                if (args.OldSensor != null)
                {
                    try
                    {
                        args.OldSensor.DepthStream.Range = DepthRange.Default;
                        args.OldSensor.SkeletonStream.EnableTrackingInNearRange = false;
                        args.OldSensor.DepthStream.Disable();
                        args.OldSensor.SkeletonStream.Disable();
                    }
                    catch (InvalidOperationException)
                    {
                        // KinectSensor might enter an invalid state while enabling/disabling streams or stream features.
                        // E.g.: sensor might be abruptly unplugged.
                    }
                }
    
                if (args.NewSensor != null)
                {
                    try
                    {
                        TransformSmoothParameters smooth = new TransformSmoothParameters();
                        
                        // Enable things
                        args.NewSensor.SkeletonStream.EnableTrackingInNearRange = true; // enable returning skeletons while depth is in Near Range
                        args.NewSensor.DepthStream.Range = DepthRange.Near;
                    args.NewSensor.ColorStream.Enable(ColorImageFormat.InfraredResolution640x480Fps30);
                        args.NewSensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
                        args.NewSensor.SkeletonStream.Enable(smooth);
                       
                    }
                    catch (InvalidOperationException)
                    {
                        // KinectSensor might enter an invalid state while enabling/disabling streams or stream features.
                        // E.g.: sensor might be abruptly unplugged.
                    }
                }
            }
    
    private void WindowLoaded(object sender, RoutedEventArgs e)
            {
    this.sensor = this.sensorChooser.Kinect;
    
                if (null != this.sensor)
                {
                                          
                    // Camera
                    // Allocate space to put the pixels we'll receive
                    this.colorPixels = new byte[this.sensor.ColorStream.FramePixelDataLength];
                    // This is the bitmap we'll display on-screen
                    //this.colorBitmap = new WriteableBitmap(this.sensor.ColorStream.FrameWidth, this.sensor.ColorStream.FrameHeight, 96.0, 96.0, PixelFormats.Bgr32, null);
                    this.colorBitmap = new WriteableBitmap(this.sensor.ColorStream.FrameWidth, this.sensor.ColorStream.FrameHeight, 96.0, 96.0, PixelFormats.Gray16, null);
                    // Set the image we display to point to the bitmap where we'll put the image data
                    this.cameraImage.Source = this.colorBitmap;
                    
                    // Depth
                    // Allocate space to put the depth pixels we'll receive
                    this.depthPixels = new short[this.sensor.DepthStream.FramePixelDataLength];
                    
                    // Allocate space to put the color pixels we'll create
                    this.colorDepthPixels = new byte[this.sensor.DepthStream.FramePixelDataLength * sizeof(int)];
                    // This is the bitmap we'll display on-screen
                    this.colorDepthBitmap = new WriteableBitmap(this.sensor.DepthStream.FrameWidth, this.sensor.DepthStream.FrameHeight, 96.0, 96.0, PixelFormats.Bgra32, null); //PixelFormats.Bgr32
                    // Set the image we display to point to the bitmap where we'll put the image data
                    this.depthImage.Source = this.colorDepthBitmap;
    
                    // Events              
                    this.sensor.ColorFrameReady += this.SensorColorFrameReady;
                    this.sensor.DepthFrameReady += this.SensorDepthFrameReady;
                    this.sensor.SkeletonFrameReady += this.SensorSkeletonFrameReady;
    
                    its = new InteractionStream(sensorChooser.Kinect, kinIn);
                    its.InteractionFrameReady += its_InteractionFrameReady;
      }
    }
    
    private void SensorDepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
            {
                 using (DepthImageFrame depthFrame = e.OpenDepthImageFrame())
                {
                    if (depthFrame != null)
                    {
                        // Copy the pixel data from the image to a temporary array
                        depthFrame.CopyPixelDataTo(depthPixels);
                       its.ProcessDepth(depthFrame.GetRawPixelData(), depthFrame.Timestamp);
                    }
                }
            }
    
    private void SensorSkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
            {
                using (SkeletonFrame skeletonFrame = e.OpenSkeletonFrame())
                {
                    if (skeletonFrame != null)
                    {
                        skeletonFrame.CopySkeletonDataTo(skeletons);                 
                        its.ProcessSkeleton(skeletons, sensorChooser.Kinect.AccelerometerGetCurrentReading(), skeletonFrame.Timestamp);
                    }
                }
            }
    
    private void its_InteractionFrameReady(object sender, InteractionFrameReadyEventArgs e)
            {
    ///////////////////////////////////////////////
    //Kinect seems to crash/pause/stop somewhere here, don't know why!!
    ///////////////////////////////////////////////
                using (InteractionFrame frame = e.OpenInteractionFrame())
                {
                    if (frame != null)
                    {
                        if (this.userInfos == null)
                        {
                            this.userInfos = new UserInfo[InteractionFrame.UserInfoArrayLength];
                        }
    
                        frame.CopyInteractionDataTo(this.userInfos);
                    }
                    else
                    {
                        return;
                    }
                }
    
                foreach (UserInfo userInfo in this.userInfos)
                {
                    foreach (InteractionHandPointer handPointer in userInfo.HandPointers)
                    {
                        string action = null;
    
                        switch (handPointer.HandEventType)
                        {
                            case InteractionHandEventType.Grip:
                                action = "gripped";
                                break;
    
                            case InteractionHandEventType.GripRelease:
                                action = "released";
                                break;
                        }
    
                        if (action != null)
                        {
                            string handSide = "unknown";
    
                            switch (handPointer.HandType)
                            {
                                case InteractionHandType.Left:
                                    handSide = "left";
                                    break;
    
                                case InteractionHandType.Right:
                                    handSide = "right";
                                    break;
                            }
    
                            //Update Label
                            Interaction.Content = "User " + userInfo.SkeletonTrackingId + " " + action + " their " + handSide + " hand.";
                        }
                    }
                }
            }

    Also here is my IInteractionClient. I basically took the KinectAdapter from the example and removed the interactionRootElement, as I am not interacting with a FrameworkElement; I just want to know when someone grips and presses.

        using System;
        using System.Collections.Generic;
        using System.Collections.ObjectModel;
        using System.Diagnostics;
        using System.Linq;
        using System.Windows;
        using System.Windows.Media;
    
        using Microsoft.Kinect.Toolkit.Controls;
        using Microsoft.Kinect.Toolkit.Interaction;
    
        /// <summary>
        /// Helper class to provide common conversion operations between 
        /// Physical Interaction Zone and User Interface.
        /// </summary>
    
        /// <summary>
        /// Class to route KinectControls events to WPF controls
        /// </summary>
        public class MDKinectInteractions : IInteractionClient
        {
            private const double PressTargetPointMargin = 0.0001;
    
            /// <summary>
            /// Mapping of the userId and hand to the hand pointer information
            /// </summary>
            private readonly Dictionary<Tuple<int, HandType>, HandPointer> handPointers =
                new Dictionary<Tuple<int, HandType>, HandPointer>();
    
            private readonly ObservableCollection<HandPointer> publicHandPointers = new ObservableCollection<HandPointer>();
    
            private readonly EventArgs emptyEventArgs = new EventArgs();
    
            /// <summary>
            /// Component that maintains and updates the current primary user.
            /// </summary>
            private readonly KinectPrimaryUserTracker kinectPrimaryUserTracker = new KinectPrimaryUserTracker();
    
            /// <summary>
            /// True if request to clear data for tracked hand pointers has not been serviced yet.
            /// </summary>
            private bool isClearRequestPending;
    
            /// <summary>
            /// Initializes a new instance of the <see cref="KinectAdapter"/> class. 
            /// </summary>
            public MDKinectInteractions()
            {
                this.IsInInteractionFrame = false;
    
                this.ReadOnlyPublicHandPointers = new ReadOnlyObservableCollection<HandPointer>(publicHandPointers);
            }
    
            public ReadOnlyObservableCollection<HandPointer> ReadOnlyPublicHandPointers { get; private set; }
    
            public bool IsInInteractionFrame { get; private set; }
            
            public void BeginInteractionFrame()
            {
                Debug.Assert(!this.IsInInteractionFrame, "Call to BeginInteractionFrame made without call to EndInteractionFrame");
    
                this.IsInInteractionFrame = true;
                this.isClearRequestPending = false;
    
                foreach (var handPointer in this.handPointers.Values)
                {
                    handPointer.Updated = false;
                }
            }
    
            /// <summary>
            /// Function that takes the state of the core interaction components
            /// and translates it to RoutedEvents.  Also updates hand pointer states.
            /// </summary>
            /// <param name="data">Data directly from the core interaction components</param>
            public void HandleHandPointerData(InteractionFrameData data)
            {
                Debug.Assert(this.IsInInteractionFrame, "Call to HandleHandPointerData made without call to BeginInteractionFrame");
    
                if (this.isClearRequestPending)
                {
                    // We don't care about new hand pointer data if client requested to clear
                    // all hand pointers while in the middle of interaction frame processing.
                    return;
                }
    
                var id = new Tuple<int, HandType>(data.TrackingId, data.HandType);
    
                HandPointer handPointer;
                if (!this.handPointers.TryGetValue(id, out handPointer))
                {
                    handPointer = new HandPointer
                    {
                        TrackingId = data.TrackingId,
                        PlayerIndex = data.PlayerIndex,
                        HandType = data.HandType,
                        //Owner = this,
                    };
                    this.handPointers[id] = handPointer;
                }
    
                handPointer.Updated = true;
    
                handPointer.TimestampOfLastUpdate = data.TimeStampOfLastUpdate;
                handPointer.HandEventType = data.HandEventType;
    
                bool pressedChanged = handPointer.IsPressed != data.IsPressed;
                handPointer.IsPressed = data.IsPressed;
    
                handPointer.IsTracked = data.IsTracked;
                handPointer.IsActive = data.IsActive;
                handPointer.IsInteractive = data.IsInteractive;
    
                bool primaryHandOfPrimaryUserChanged = handPointer.IsPrimaryHandOfPrimaryUser != (data.IsPrimaryHandOfUser && data.IsPrimaryUser);
                handPointer.IsPrimaryHandOfUser = data.IsPrimaryHandOfUser;
                handPointer.IsPrimaryUser = data.IsPrimaryUser;
    
                bool positionChanged = (data.X != handPointer.X) ||
                                       (data.Y != handPointer.Y) ||
                                       (data.Z != handPointer.PressExtent);
                handPointer.X = data.X;
                handPointer.Y = data.Y;
                handPointer.PressExtent = data.Z;
    
                this.HandleHandPointerChanges(handPointer, pressedChanged, positionChanged, primaryHandOfPrimaryUserChanged, false);
            }
    
            public int ChoosePrimaryUser(long timestamp, int oldPrimaryUser, QueryPrimaryUserTrackingIdCallback callback)
            {
                // Stale hand pointers could confuse clients if they see them as part of callback parameters
                this.RemoveStaleHandPointers();
    
                this.kinectPrimaryUserTracker.Update(this.handPointers.Values, timestamp, callback);
                int newPrimaryUser = this.kinectPrimaryUserTracker.PrimaryUserTrackingId;
    
                if (oldPrimaryUser != newPrimaryUser)
                {
                    // if primary user id has changed, update tracked pointers
                    foreach (var handPointer in this.handPointers.Values)
                    {
                        // If tracking id is valid and matches the new primary user,
                        // this hand pointer corresponds to the new primary user.
                        bool isPrimaryUser = (handPointer.TrackingId == newPrimaryUser) && (newPrimaryUser != KinectPrimaryUserTracker.InvalidUserTrackingId);
    
                        if (handPointer.IsPrimaryUser != isPrimaryUser)
                        {
                            bool oldIsPrimaryHandOfPrimaryUser = handPointer.IsPrimaryHandOfPrimaryUser;
                            handPointer.IsPrimaryUser = isPrimaryUser;
    
                            this.HandleHandPointerChanges(handPointer, false, false, oldIsPrimaryHandOfPrimaryUser != handPointer.IsPrimaryHandOfPrimaryUser, false);
                        }
                    }
                }
    
                return newPrimaryUser;
            }
    
            public void EndInteractionFrame()
            {
                Debug.Assert(this.IsInInteractionFrame, "Call to EndInteractionFrame made without call to BeginInteractionFrame");
    
                this.RemoveStaleHandPointers();
    
                // Update the public list of hand pointers based on hand pointer tracking state
                foreach (var handPointer in this.handPointers.Values)
                {
                    if (handPointer.IsTracked)
                    {
                        if (!this.publicHandPointers.Contains(handPointer))
                        {
                            this.publicHandPointers.Add(handPointer);
                        }
                    }
                    else
                    {
                        this.publicHandPointers.Remove(handPointer);
                    }
                }
    
                this.isClearRequestPending = false;
                this.IsInInteractionFrame = false;
            }
    
            public void ClearHandPointers()
            {
                this.kinectPrimaryUserTracker.Clear();
    
                if (!this.IsInInteractionFrame)
                {
                    // If we're not already processing an interaction frame, we fake
                    // an empty frame so that all hand pointers get cleared out.
                    this.BeginInteractionFrame();
                    this.EndInteractionFrame();
                }
                else
                {
                    // If we're in the middle of processing an interaction frame, we
                    // can't modify all of our data structures immediately, but need to
                    // remember to do so.
                    this.isClearRequestPending = true;
                }
            }
    
            public InteractionInfo GetInteractionInfoAtLocation(int skeletonTrackingId, InteractionHandType handType, double x, double y)
            {
                //not used
                return null;
            }
    
            internal bool CaptureHandPointer(HandPointer handPointer, UIElement element)
            {
                //this.InteractionRootElement.VerifyAccess();
    
                var id = new Tuple<int, HandType>(handPointer.TrackingId, handPointer.HandType);
                HandPointer checkHandPointer;
    
    
                if (!this.handPointers.TryGetValue(id, out checkHandPointer))
                {
                    // No hand pointer to capture
                    return false;
                }
    
                if (!object.ReferenceEquals(handPointer, checkHandPointer))
                {
                    // The HandPointer we have for this hand pointer is not the one
                    // we were handed in.  Caller has an older instance of the HandPointer
                    return false;
                }
    
                if (element != null && handPointer.Captured != null)
                {
                    // Request wasn't to clear capture and some other element has this captured
                    // Maybe this isn't necessary.
                    return false;
                }
    
                SwitchCapture(handPointer, handPointer.Captured, element);
    
                return true;
            }
    
            private static HandPointerEventArgs CreateEventArgs(RoutedEvent routedEvent, UIElement targetElement, HandPointer handPointer)
            {
                return new HandPointerEventArgs(handPointer, routedEvent, targetElement);
            }
    
            private static void SwitchCapture(HandPointer handPointer, UIElement oldElement, UIElement newElement)
            {
                handPointer.Captured = newElement;
    
                if (oldElement != null)
                {
                    var lostArgs = CreateEventArgs(KinectRegion.HandPointerLostCaptureEvent, oldElement, handPointer);
                    oldElement.RaiseEvent(lostArgs);
                }
    
                if (newElement != null)
                {
                    var gotArgs = CreateEventArgs(KinectRegion.HandPointerGotCaptureEvent, newElement, handPointer);
                    newElement.RaiseEvent(gotArgs);
                }
            }
    
            /// <summary>
            /// Applies the PressTargetPointMargin to the press target point.
            /// </summary>
            /// <param name="pressTargetPoint">The press target point we are given.</param>
            /// <returns>The updated press target point with the margin applied.</returns>
            private static Point ApplyControlPressPointMargin(Point pressTargetPoint)
            {
                pressTargetPoint.X = ApplyControlPressPointMargin(pressTargetPoint.X);
                pressTargetPoint.Y = ApplyControlPressPointMargin(pressTargetPoint.Y);
                return pressTargetPoint;
            }
    
            /// <summary>
            /// Takes in a coordinate and forces it into ranges [-Infinity, -PressTargetPointMargin], [PressTargetPointMargin, 1.0 - PressTargetPointMargin], or [PressTargetPointMargin, +Infinity]
            /// </summary>
            /// <param name="coordinate">The original coordinate.</param>
            /// <returns>The updated coordinate.</returns>
            private static double ApplyControlPressPointMargin(double coordinate)
            {
                if (coordinate >= 0 && coordinate <= 1.0)
                {
                    return Math.Max(PressTargetPointMargin, Math.Min(coordinate, 1.0 - PressTargetPointMargin));
                }
    
                return coordinate < 0 ? Math.Min(coordinate, 0 - PressTargetPointMargin) : Math.Max(coordinate, 1.0 + PressTargetPointMargin);
            }
    
            private void HandleHandPointerChanges(
                HandPointer handPointer, bool pressedChanged, bool positionChanged, bool primaryHandOfPrimaryUserChanged, bool removed)
            {
                bool doPress = false;
                bool doRelease = false;
                bool doMove = false;
                bool doLostCapture = false;
                bool doGrip = false;
                bool doGripRelease = false;
    
                if (removed)
                {
                    // Deny the existence of this hand pointer
                    doRelease = handPointer.IsPressed;
                    doLostCapture = handPointer.Captured != null;
                }
                else
                {
                    if (pressedChanged)
                    {
                        doPress = handPointer.IsPressed;
                        doRelease = !handPointer.IsPressed;
                    }
    
                    if (positionChanged)
                    {
                        doMove = true;
                    }
    
                    doGrip = handPointer.HandEventType == HandEventType.Grip;
                    doGripRelease = handPointer.HandEventType == HandEventType.GripRelease;
                }
    
                if (doLostCapture)
                {
                    SwitchCapture(handPointer, handPointer.Captured, null);
                }
    
                /////////////////////////////
                //trigger events
    
                if (doGrip)
                {
    
                }
    
                if (doGripRelease)
                {
    
                }
    
                if (doPress)
                {
    
                }
    
                if (doMove)
                {
    
                }
    
                if (doRelease)
                {
    
                }
            }
    
            private void RemoveStaleHandPointers()
            {
                List<HandPointer> nonUpdatedHandpointers = null;
    
                foreach (var handPointer in this.handPointers.Values)
                {
                    // If we need to stop tracking this hand pointer
                    if (this.isClearRequestPending || !handPointer.Updated)
                    {
                        if (nonUpdatedHandpointers == null)
                        {
                            nonUpdatedHandpointers = new List<HandPointer>();
                        }
    
                        nonUpdatedHandpointers.Add(handPointer);
                    }
                }
    
                if (nonUpdatedHandpointers != null)
                {
                    foreach (var handPointer in nonUpdatedHandpointers)
                    {
                        var pressedChanged = handPointer.IsPressed;
                        const bool PositionChanged = false;
                        var primaryHandOfPrimaryUserChanged = handPointer.IsPrimaryHandOfPrimaryUser;
                        const bool Removed = true;
    
                        handPointer.IsTracked = false;
                        handPointer.IsActive = false;
                        handPointer.IsInteractive = false;
                        handPointer.IsPressed = false;
                        handPointer.IsPrimaryUser = false;
                        handPointer.IsPrimaryHandOfUser = false;
    
                        this.HandleHandPointerChanges(handPointer, pressedChanged, PositionChanged, primaryHandOfPrimaryUserChanged, Removed);
    
                        this.handPointers.Remove(new Tuple<int, HandType>(handPointer.TrackingId, handPointer.HandType));
    
                        this.publicHandPointers.Remove(handPointer);
                    }
                }
            }
    
        }

    I don't really know what all the code in the IInteractionClient means, so the problem might be in there somewhere.

    Any help would be much appreciated.

    Tuesday, March 26, 2013 1:18 AM
  • Your problem is here:

            public InteractionInfo GetInteractionInfoAtLocation(int skeletonTrackingId, InteractionHandType handType, double x, double y)
            {
                //not used
                return null;
            }
    

    You need to supply a real implementation for this method. In fact, for your purposes, you can probably delete all of the other code from this class. Try this code:

            public InteractionInfo GetInteractionInfoAtLocation(int skeletonTrackingId, InteractionHandType handType, double x, double y)
            {
                return new InteractionInfo
                    {
                        IsPressTarget = false,
                        IsGripTarget = true,
                    };
            }
    
    What this effectively says is, "Every possible hand position -- regardless of the skeleton tracking ID or the hand type -- is a grip target (and not a press target)."


    John | Kinect for Windows development team

    Tuesday, March 26, 2013 9:56 PM
  • Thanks John.

    For anyone else trying to do this, here is my class:

    public class MDKinectInteractions : IInteractionClient
        {
            public InteractionInfo GetInteractionInfoAtLocation(int skeletonTrackingId, InteractionHandType handType, double x, double y)
            {
                return new InteractionInfo
                {
                    IsPressTarget = true,
                    IsGripTarget = true,
                };
            }
        }

    Like John said, I was able to delete all the other code.

    Does anyone know if it is possible to make the "press" more or less sensitive?


    Wednesday, March 27, 2013 2:26 AM
  • Good Day All,

    I would like to know how to verify the action when I press. I have tried checking whether the PressExtent variable of the UserPointers is bigger than a value, like 1.0f, but I find that when I PUSH my hand, the PressExtent is only sometimes bigger than 1.0f, so it's not a solution.

    After that, I tried checking whether the difference between adjacent values of PressExtent is bigger than 1.0f. That attempt also sometimes fails to detect a PUSH, so it's not a good method either.

    I am wondering whether the NUI_HANDPOINTER_STATE member NUI_HANDPOINTER_STATE_PRESSED of the userInfos' state would help me. I have tried if( state == NUI_HANDPOINTER_STATE_PRESSED | NUI_HANDPOINTER_STATE_TRACKED), but that also fails.
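
    (Maybe the state needs a bitwise test instead of ==? This is only my guess; I have not verified it:)

    // Guess: NUI_HANDPOINTER_STATE looks like a bit field, so perhaps the
    // flags have to be tested with & rather than compared with ==.
    DWORD state = handPointerInfo.State; // handPointerInfo: a hypothetical NUI_HANDPOINTER_INFO
    if ((state & NUI_HANDPOINTER_STATE_TRACKED) && (state & NUI_HANDPOINTER_STATE_PRESSED))
    {
        // hand is tracked and currently pressed
    }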

      What did I miss?

       Sincerely look forward to your answers!

    Sholee

    Saturday, March 30, 2013 2:10 PM
  • Hey Man,

    Can you post a demo project around this?

    As I'm new to the Kinect environment, I need this project to build upon.

    Thanks

    Sunday, March 31, 2013 5:03 PM
  • Hello, I referred to your code, but it does not work.

    Can you give me some advice?

    I am a Kinect developer beginner.

    This is my code:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Data;
    using System.Windows.Documents;
    using System.Windows.Input;
    using System.Windows.Media;
    using System.Windows.Media.Imaging;
    using System.Windows.Navigation;
    using System.Windows.Shapes;
    using Microsoft.Kinect;
    using Microsoft.Kinect.Toolkit.Interaction;
    using Microsoft.Kinect.Toolkit.Controls;
    using Microsoft.Kinect.Toolkit;

    namespace GripHands_WPF_Demo
    {
        /// <summary>
        /// Interaction logic for MainWindow.xaml
        /// </summary>
        /// 

        public partial class MainWindow : Window
        {
            KinectSensor kinectSensor;
            private byte[] pixelData;
            private short[] depthPixelData;
            private Skeleton[] skeletonData;

            private InteractionStream its;
            private MDKinectInteractions kinIn = new MDKinectInteractions();//My interface class


            public MainWindow()
            {
                InitializeComponent();
            }
            private void Window_Loaded(object sender, RoutedEventArgs e)
            {
                kinectSensor = (from sensor in KinectSensor.KinectSensors where sensor.Status == KinectStatus.Connected select sensor).FirstOrDefault();


                //kinectSensor.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);
                kinectSensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
                kinectSensor.SkeletonStream.Enable();

            kinectSensor.Start();

                //kinectSensor.ColorFrameReady += kinectSensor_ColorFrameReady;

                
                kinectSensor.DepthFrameReady += kinectSensor_DepthFrameReady;
                kinectSensor.SkeletonFrameReady += kinectSensor_SkeletonFrameReady;
            its = new InteractionStream(kinectSensor, kinIn);
            its.InteractionFrameReady += InteractionFrameReady;
            }
            private void Window_Closed(object sender, EventArgs e)
            {
                kinectSensor.Stop();
            }

            UserInfo[] userInfos = null;
            private void InteractionFrameReady(object sender, InteractionFrameReadyEventArgs e)
            {
                MessageBox.Show("gripped");

                using (InteractionFrame frame = e.OpenInteractionFrame())
                {
                    if (frame != null)
                    {
                        if (this.userInfos == null)
                        {
                            this.userInfos = new UserInfo[InteractionFrame.UserInfoArrayLength];
                        }
                        frame.CopyInteractionDataTo(this.userInfos);
                    }
                    else
                    {
                        return;
                    }
                }

                foreach (UserInfo userInfo in this.userInfos)
                {

                    foreach (InteractionHandPointer handPointer in userInfo.HandPointers)
                    {
                        string action = null;

                        switch (handPointer.HandEventType)
                        {
                            case InteractionHandEventType.Grip:
                                action = "gripped";
                                MessageBox.Show("gripped");
                                break;

                            case InteractionHandEventType.GripRelease:
                                action = "released";
                                MessageBox.Show("released");
                                break;
                        }

                        if (action != null)
                        {
                            string handSide = "unknown";

                            switch (handPointer.HandType)
                            {
                                case InteractionHandType.Left:
                                    handSide = "left";
                                    break;

                                case InteractionHandType.Right:
                                    handSide = "right";
                                    break;
                            }

                            //Update Label
                            Interaction.Content = "User " + userInfo.SkeletonTrackingId + " " + action + " their " + handSide + " hand.";
                        }
                    }
                }
            }

            private void kinectSensor_DepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
            {
                using (DepthImageFrame depthImageFrame = e.OpenDepthImageFrame())
                {
                    if (depthImageFrame != null)
                    {
                        depthPixelData = new short[depthImageFrame.PixelDataLength];
                        depthImageFrame.CopyPixelDataTo(depthPixelData);
                        //this.DepthImage.Source =
                        //    BitmapSource.Create(depthImageFrame.Width, depthImageFrame.Height, 96, 96, PixelFormats.Gray16, null, depthPixelData, depthImageFrame.Width * depthImageFrame.BytesPerPixel);

                        its.ProcessDepth(depthImageFrame.GetRawPixelData(), depthImageFrame.Timestamp);
                    }
                }

            }

            private void kinectSensor_SkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
            {

                using (SkeletonFrame skeletonFrame = e.OpenSkeletonFrame())
                {
                    if (skeletonFrame != null)
                    {
                        skeletonData = new Skeleton[kinectSensor.SkeletonStream.FrameSkeletonArrayLength];

                        skeletonFrame.CopySkeletonDataTo(this.skeletonData);
                        Skeleton skeleton = (from s in skeletonData where s.TrackingState == SkeletonTrackingState.Tracked select s).FirstOrDefault();
                        its.ProcessSkeleton(skeletonData, kinectSensor.AccelerometerGetCurrentReading(), skeletonFrame.Timestamp);

                    }
                }

            }
        }
    }

     -----------------------------------------------------------------------------------------------------------------------------------          

    using System;
    using System.Collections.Generic;
    using System.Collections.ObjectModel;
    using System.Diagnostics;
    using System.Linq;
    using System.Windows;
    using System.Windows.Media;

    using Microsoft.Kinect.Toolkit.Controls;
    using Microsoft.Kinect.Toolkit.Interaction;


    namespace GripHands_WPF_Demo
    {
        public class MDKinectInteractions : IInteractionClient
        {
            public InteractionInfo GetInteractionInfoAtLocation(int skeletonTrackingId, InteractionHandType handType, double x, double y)
            {
                return new InteractionInfo
                {
                    IsPressTarget = true,
                    IsGripTarget = true,

                };
            }
        }

    }

    • Edited by Mr.Root Tuesday, April 2, 2013 9:11 AM
    Tuesday, April 2, 2013 8:59 AM
  • Can you be more specific about what does not work?

    John | Kinect for Windows development team

    Tuesday, April 2, 2013 6:33 PM
  • Good Day all,

    Well, I can't get started with the sample code; is there a version that is less complicated?

    Um... like, 3 *.cs files in the solution?

    btw, John is such a great coding guy.

    Saturday, April 6, 2013 7:38 AM
  • Could you send an easier C++ sample of "kinect interaction" to me (my email is: guoming0000@sina.com)? I ran into the same problem, tried what he said, but still hit it (the ProcessDepth function fails). Thank you very much.

    From a Chinese student.

    My code sample for kinect interaction (it fails after ProcessDepth):

    #include "stdafx.h"
    #include <Windows.h>
    //#include <strsafe.h>
    #include <iostream>
    #include <NuiApi.h>
    #include <KinectInteraction.h>
    using namespace	std;
    //----------------------------------------------------
    //#define _WINDOWS
    INuiSensor            *m_pNuiSensor;
    
    INuiInteractionStream *m_nuiIStream;
    class CInteractionClient : public INuiInteractionClient
    {
    public:
    	CInteractionClient()  {}
    	~CInteractionClient() {}
    
    	STDMETHOD(GetInteractionInfoAtLocation)(THIS_ DWORD skeletonTrackingId, NUI_HAND_TYPE handType, FLOAT x, FLOAT y, _Out_ NUI_INTERACTION_INFO *pInteractionInfo)
    	{
    		if(pInteractionInfo)
    		{
    			pInteractionInfo->IsPressTarget         = false;
    			pInteractionInfo->PressTargetControlId  = 0;
    			pInteractionInfo->PressAttractionPointX = 0.f;
    			pInteractionInfo->PressAttractionPointY = 0.f;
    			pInteractionInfo->IsGripTarget          = true;
    			return S_OK;
    		}
    		return E_POINTER;
    	}
    
    	STDMETHODIMP_(ULONG)    AddRef()                                    { return 2;     } // static lifetime: no real COM ref counting
    	STDMETHODIMP_(ULONG)    Release()                                   { return 1;     }
    	STDMETHODIMP            QueryInterface(REFIID riid, void **ppv)     { *ppv = this; return S_OK;  }
    
     };
    
    CInteractionClient m_nuiIClient;
    //--------------------------------------------------------------------
    HANDLE	m_hNextColorFrameEvent;
    HANDLE	m_hNextDepthFrameEvent;
    HANDLE	m_hNextSkeletonEvent;
    HANDLE	m_hNextInteractionEvent;
    HANDLE	m_pColorStreamHandle;
    HANDLE	m_pDepthStreamHandle;
    HANDLE	m_hEvNuiProcessStop;
    //-----------------------------------------------------------------------------------
    
    int DrawColor(HANDLE h)
    {
    	return 0;
    }
    
    int DrawDepth(HANDLE h)
    {
    	NUI_IMAGE_FRAME pImageFrame;
    	INuiFrameTexture* pDepthImagePixelFrame;
    	HRESULT hr = m_pNuiSensor->NuiImageStreamGetNextFrame( h, 0, &pImageFrame );
    	BOOL nearMode = TRUE;
    	m_pNuiSensor->NuiImageFrameGetDepthImagePixelFrameTexture(m_pDepthStreamHandle, &pImageFrame, &nearMode, &pDepthImagePixelFrame);
    	INuiFrameTexture * pTexture = pDepthImagePixelFrame;
    	NUI_LOCKED_RECT LockedRect;  
    	pTexture->LockRect( 0, &LockedRect, NULL, 0 );  
    	if( LockedRect.Pitch != 0 )
    	{
    		HRESULT hr = m_nuiIStream->ProcessDepth(LockedRect.size,PBYTE(LockedRect.pBits),pImageFrame.liTimeStamp);
    		if( FAILED( hr ) )
    		{
    			cout<<"ProcessDepth failed"<<endl;
    		}
    	}
    	pTexture->UnlockRect(0);
    	m_pNuiSensor->NuiImageStreamReleaseFrame( h, &pImageFrame );
    	return 0;
    }
    
    int DrawSkeleton()
    {
    	NUI_SKELETON_FRAME SkeletonFrame = {0};
    	HRESULT hr = m_pNuiSensor->NuiSkeletonGetNextFrame( 0, &SkeletonFrame );
    	if( FAILED( hr ) )
    	{
    		cout<<"Get Skeleton Image Frame Failed"<<endl;
    		return -1;
    	}
    
    	bool bFoundSkeleton = false;
    	for( int i = 0 ; i < NUI_SKELETON_COUNT ; i++ )  
    	{  
    		if( SkeletonFrame.SkeletonData[i].eTrackingState == NUI_SKELETON_TRACKED )  
    		{  
    			bFoundSkeleton = true;  
    			static int static_one_is_enough=0;
    			if(static_one_is_enough==0)
    			{
    				cout<<"find skeleton !"<<endl;
    				static_one_is_enough++;
    			}
    
    			m_pNuiSensor->NuiTransformSmooth(&SkeletonFrame,NULL);  
    
    			Vector4 v;
    			m_pNuiSensor->NuiAccelerometerGetCurrentReading(&v);
    			//	m_nuiIStream->ProcessSkeleton(i,&SkeletonFrame.SkeletonData[i],&v,SkeletonFrame.liTimeStamp);
    			HRESULT hr =m_nuiIStream->ProcessSkeleton(1,&SkeletonFrame.SkeletonData[i],&v,SkeletonFrame.liTimeStamp);
    			if( FAILED( hr ) )
    			{
    				cout<<"ProcessDepth failed"<<endl;
    			}
    		}  
    	}  
    
    	return 0;
    }
    int ShowInteraction()
    {
    	NUI_INTERACTION_FRAME Interaction_Frame;
    	m_nuiIStream->GetNextFrame( 0,&Interaction_Frame );
    	int trackingID = 0;
    	int event=0;
    	cout<<"show Interactions!"<<endl;
    	for(int i=0;i<NUI_SKELETON_COUNT;i++)
    	{
    		trackingID = Interaction_Frame.UserInfos[i].SkeletonTrackingId;
    		event=Interaction_Frame.UserInfos[i].HandPointerInfos->HandEventType;
    		cout<<"id="<<trackingID<<"---------event:"<<event<<endl;
    	}
    
    
    	return 0;
    }
    DWORD WINAPI KinectDataThread(LPVOID pParam)
    {
    	HANDLE hEvents[5] = {m_hEvNuiProcessStop,m_hNextColorFrameEvent,
    		m_hNextDepthFrameEvent,m_hNextSkeletonEvent,m_hNextInteractionEvent};
    	while(1)
    	{
    		int nEventIdx;
    		nEventIdx=WaitForMultipleObjects(sizeof(hEvents)/sizeof(hEvents[0]),
    			hEvents,FALSE,100);
    		if (WAIT_OBJECT_0 == WaitForSingleObject(m_hEvNuiProcessStop, 0))
    		{
    			break;
    		}
    		// Process signal events
    		if (WAIT_OBJECT_0 == WaitForSingleObject(m_hNextColorFrameEvent, 0))
    		{
    			DrawColor(m_pColorStreamHandle);
    		}
    		if (WAIT_OBJECT_0 == WaitForSingleObject(m_hNextDepthFrameEvent, 0))
    		{
    			DrawDepth(m_pDepthStreamHandle);
    		}
    		if (WAIT_OBJECT_0 == WaitForSingleObject(m_hNextSkeletonEvent, 0))
    		{
    			DrawSkeleton();
    		}
    		if (WAIT_OBJECT_0 == WaitForSingleObject(m_hNextInteractionEvent, 0))
    		{
    			ShowInteraction();
    		}
    	}
    	CloseHandle(m_hEvNuiProcessStop);
    	m_hEvNuiProcessStop = NULL;
    	CloseHandle( m_hNextSkeletonEvent );
    	CloseHandle( m_hNextDepthFrameEvent );
    	CloseHandle( m_hNextColorFrameEvent );
    	CloseHandle( m_hNextInteractionEvent );
    	return 0;
    }
    DWORD ConnectKinect()
    {
    	INuiSensor * pNuiSensor;
    	HRESULT hr;
    	int iSensorCount = 0;
    	hr = NuiGetSensorCount(&iSensorCount);
    	if (FAILED(hr))
    	{
    		return hr;
    	}
    	// Look at each Kinect sensor
    	for (int i = 0; i < iSensorCount; ++i)
    	{
    		// Create the sensor so we can check status, if we can't create it, move on to the next
    		hr = NuiCreateSensorByIndex(i, &pNuiSensor);
    		if (FAILED(hr))
    		{
    			continue;
    		}
    		// Get the status of the sensor, and if connected, then we can initialize it
    		hr = pNuiSensor->NuiStatus();
    		if (S_OK == hr)
    		{
    			m_pNuiSensor = pNuiSensor;
    			break;
    		}
    		// This sensor wasn't OK, so release it since we're not using it
    		pNuiSensor->Release();
    	}
    	if (NULL != m_pNuiSensor)
    	{
    		if (SUCCEEDED(hr))
    		{			
    			hr = m_pNuiSensor->NuiInitialize(\
    				NUI_INITIALIZE_FLAG_USES_DEPTH_AND_PLAYER_INDEX|\
    				NUI_INITIALIZE_FLAG_USES_COLOR|\
    				NUI_INITIALIZE_FLAG_USES_SKELETON);
    			if( hr != S_OK )
    			{
    				cout<<"NuiInitialize failed"<<endl;
    				return hr;
    			}
    
    			m_hNextColorFrameEvent	= CreateEvent( NULL, TRUE, FALSE, NULL );
    			m_pColorStreamHandle	= NULL;
    
    			hr = m_pNuiSensor->NuiImageStreamOpen(
    				NUI_IMAGE_TYPE_COLOR,NUI_IMAGE_RESOLUTION_640x480, 0, 2, 
    				m_hNextColorFrameEvent, &m_pColorStreamHandle);
    			if( FAILED( hr ) )
    			{
    				cout<<"Could not open image stream video"<<endl;
    				return hr;
    			}
    
    			m_hNextDepthFrameEvent	= CreateEvent( NULL, TRUE, FALSE, NULL );
    			m_pDepthStreamHandle	= NULL;
    
    			hr = m_pNuiSensor->NuiImageStreamOpen( 
    				NUI_IMAGE_TYPE_DEPTH_AND_PLAYER_INDEX,
    				NUI_IMAGE_RESOLUTION_640x480, 0, 2, 
    				m_hNextDepthFrameEvent, &m_pDepthStreamHandle);
    			if( FAILED( hr ) )
    			{
    				cout<<"Could not open depth stream video"<<endl;
    				return hr;
    			}
    			m_hNextSkeletonEvent = CreateEvent( NULL, TRUE, FALSE, NULL );
    			hr = m_pNuiSensor->NuiSkeletonTrackingEnable( 
    				m_hNextSkeletonEvent, 
    				NUI_SKELETON_TRACKING_FLAG_ENABLE_IN_NEAR_RANGE//|
    				);
    			//0);
    			if( FAILED( hr ) )
    			{
    				cout<<"Could not open skeleton stream video"<<endl;
    				return hr;
    			}
    		}
    	}
    	if (NULL == m_pNuiSensor || FAILED(hr))
    	{
    		cout<<"No ready Kinect found!"<<endl;
    		return E_FAIL;
    	}
    	return hr;
    }
    int main()
    {
    	ConnectKinect();
    	HRESULT hr;
    	m_hNextInteractionEvent = CreateEvent( NULL,TRUE,FALSE,NULL );
    	m_hEvNuiProcessStop = CreateEvent(NULL,TRUE,FALSE,NULL);
    	hr = NuiCreateInteractionStream(m_pNuiSensor,(INuiInteractionClient *)&m_nuiIClient,&m_nuiIStream);
    	if( FAILED( hr ) )
    	{
    		cout<<"Could not open Interation stream video"<<endl;
    		return hr;
    	}
    	//	hr = NuiCreateInteractionStream(m_pNuiSensor,0,&m_nuiIStream);
    	hr = m_nuiIStream->Enable(m_hNextInteractionEvent);
     	if( FAILED( hr ) )
     	{
     		cout<<"Could not open Interation stream video"<<endl;
     		return hr;
     	}
    	HANDLE m_hProcesss = CreateThread(NULL, 0, KinectDataThread, 0, 0, 0);
    	while(1)
    	{
    		Sleep(1);
    	}
    	m_pNuiSensor->NuiShutdown();
    	SafeRelease(m_pNuiSensor);
    	return 0;
    }


    Saturday, April 6, 2013 2:30 PM
  • I think I did everything the way you, ilo_oli and John Elsbree, suggested. But I still get a “System.InvalidOperationException” in “Microsoft.Kinect.Toolkit.Interaction.dll” (I marked the line where it occurred). What is wrong?

    public class InteractionClient : IInteractionClient
    {
        public InteractionInfo GetInteractionInfoAtLocation(int skeletonTrackingId, InteractionHandType handType, double x, double y)
        {
            return new InteractionInfo
            {
                IsPressTarget = true,
                IsGripTarget = true,
            };
        }
    }

    private void WindowLoaded(object sender, RoutedEventArgs e)
    {
        // try to get first Kinect sensor
        sensor = KinectSensor.KinectSensors.FirstOrDefault(s => s.Status == KinectStatus.Connected);
        if (sensor != null)
        {
            // enable skeleton tracking
            sensor.SkeletonStream.Enable(smoothingParameters);
            // allocate skeleton array and add event to process skeleton data
            skeletons = new Skeleton[sensor.SkeletonStream.FrameSkeletonArrayLength];
            sensor.SkeletonFrameReady += SkeletonFrameReady;
    
            // enable depth information
            sensor.DepthStream.Enable();
            // allocate depth pixel array and add event to process depth data
            depthPixels = new DepthImagePixel[this.sensor.DepthStream.FramePixelDataLength];
            sensor.DepthFrameReady += this.DepthFrameReady;
    
            // enable interaction tracking
            // === THE EXCEPTION OCCURS HERE ===
            interactionStream = new InteractionStream(sensor, new InteractionClient());
            // add event to process interactions
            interactionStream.InteractionFrameReady += InteractionFrameReady;
    
            sensor.Start();
        }
    }
    Monday, April 8, 2013 12:12 PM
  • I'm not sure about your problem, but I converted the code in this thread to VB.NET with a few tweaks and got it to work. So recheck your code, and especially that you have the right DLLs referenced.

    I find the SDK documentation very sparse. Can someone explain how to set the interaction region so that I can use the X and Y coordinates and the IsPressed property? Is this relative to the screen boundaries? I suppose I can just scale the RawX and RawY coordinates.
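
    (Editor's note: a minimal sketch of the scaling idea above, written against the native NUI_HANDPOINTER_INFO struct used elsewhere in this thread; the helper name and the assumption that X/Y are normalized roughly 0..1 across the interaction region are mine, not from the SDK docs.)

    // Hypothetical helper: map a hand pointer's UI-relative coordinates
    // to pixels by scaling against the target surface size.
    POINT HandPointerToPixels(const NUI_HANDPOINTER_INFO &hand,
                              int surfaceWidth, int surfaceHeight)
    {
        POINT p;
        p.x = (LONG)(hand.X * surfaceWidth);
        p.y = (LONG)(hand.Y * surfaceHeight);
        return p;
    }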

    Monday, April 8, 2013 9:44 PM
  • Crap, I forgot to include the “KinectInteraction170_*.dll” properly. Now it works. Thanks a lot for the hint, deandob.

    • Proposed as answer by Mr.Root Tuesday, April 9, 2013 1:41 PM
    • Unproposed as answer by Mr.Root Tuesday, April 9, 2013 1:42 PM
    Tuesday, April 9, 2013 9:45 AM
  • Excuse me, I ran your code, but the InteractionFrameReady method never gets called.

    Can you give me some more information? It would be best if you could post some more code.




    • Edited by Mr.Root Tuesday, April 9, 2013 1:59 PM
    Tuesday, April 9, 2013 1:58 PM
  • I seem to have the same problem many are having around here with the C++ code. ProcessDepth fails and I get my error message ("Interaction ProcessDepth error"); everything else seems to be fine. Does anybody have any suggestions on how to make it work?

    This is how I'm consuming the depth frames:

    Just to make it clear, this is how I'm declaring my buffer somewhere else:

    "BYTE depthImagePixelBuffer[320*240*4]; //width * height * sizeof(INuiFrameTexture)"

    NUI_IMAGE_FRAME imageFrame;
    m_pNuiSensor->NuiImageStreamGetNextFrame( m_pDepthStreamHandle, 0, &imageFrame );
    BOOL nearMode = FALSE;
    INuiFrameTexture* pDepthImagePixelFrame = 0;
    m_pNuiSensor->NuiImageFrameGetDepthImagePixelFrameTexture( m_pDepthStreamHandle, &imageFrame, &nearMode, &pDepthImagePixelFrame );
    NUI_LOCKED_RECT LockedRect = {0};
    pDepthImagePixelFrame->LockRect(0, &LockedRect, nullptr, 0);
    if ( 0 != LockedRect.Pitch )
    {
        memcpy(&depthImagePixelBuffer[0], PBYTE(LockedRect.pBits), sizeof(depthImagePixelBuffer));
    }
    else
    {
        printf("Invalid Buffer\n\r");
    }
    pDepthImagePixelFrame->UnlockRect(0);

    HRESULT hr = m_interactionStream->ProcessDepth( sizeof(depthImagePixelBuffer), &depthImagePixelBuffer[0], imageFrame.liTimeStamp );
    if ( FAILED(hr) )
    {
        printf("Interaction ProcessDepth error \n\r");
    }
    m_pNuiSensor->NuiImageStreamReleaseFrame( m_pDepthStreamHandle, &imageFrame );
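
    (Editor's note: as the follow-up replies below work out, the failure here is a resolution/buffer mismatch. A minimal sizing sketch, assuming the depth stream is opened at 640x480; each extended-depth pixel is one NUI_DEPTH_IMAGE_PIXEL, i.e. 4 bytes of depth plus player index:)

    // One NUI_DEPTH_IMAGE_PIXEL per pixel, sized to match the resolution
    // passed to NuiImageStreamOpen (640x480 here).
    BYTE depthImagePixelBuffer[640 * 480 * sizeof(NUI_DEPTH_IMAGE_PIXEL)];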


    Wednesday, April 10, 2013 8:16 PM
  • JohnnyRL, this is how I set up my depth processing, and it works for me.

    DepthManager::DepthManager()
    {
        m_hNextDepthFrameEvent = CreateEvent( NULL, TRUE, FALSE, NULL );
        KinectManager::Instance()->Sensor()->NuiImageStreamOpen(
            NUI_IMAGE_TYPE_DEPTH, NUI_IMAGE_RESOLUTION_640x480, 0, 2,
            m_hNextDepthFrameEvent, &m_pDepthStreamHandle );
        m_pImageFrame = new NUI_IMAGE_FRAME();
    }

    void DepthManager::ReceivedDepthAlert()
    {
        INuiSensor *sensor = KinectManager::Instance()->Sensor();
        if(sensor == nullptr || sensor->NuiStatus() != S_OK)
            return;

        HRESULT hr = sensor->NuiImageStreamGetNextFrame(m_pDepthStreamHandle, 0, m_pImageFrame);
        if(FAILED(hr))
            return;

        if(KinectManager::Instance()->GetInteractionManager() != NULL)
        {
            INuiFrameTexture *pDepthImagePixelFrame = 0;
            hr = sensor->NuiImageFrameGetDepthImagePixelFrameTexture(m_pDepthStreamHandle, m_pImageFrame, FALSE, &pDepthImagePixelFrame);
            NUI_LOCKED_RECT LockedRect = { 0 };
            pDepthImagePixelFrame->LockRect(0, &LockedRect, nullptr, 0);
            if(LockedRect.Pitch != 0)
            {
                INuiInteractionStream *interaction = KinectManager::Instance()->GetInteractionManager()->GetInteractionStream();
                if(interaction != NULL)
                {
                    hr = interaction->ProcessDepth(LockedRect.size, LockedRect.pBits, m_pImageFrame->liTimeStamp);
                    if(FAILED(hr))
                        cout << "Interaction ProcessDepth Error" << endl;
                }
            }
            pDepthImagePixelFrame->UnlockRect(0);
        }
        sensor->NuiImageStreamReleaseFrame(m_pDepthStreamHandle, m_pImageFrame);
    }





    • Edited by Goss84 Wednesday, April 10, 2013 9:46 PM
    Wednesday, April 10, 2013 8:36 PM
  • Thank you very much for your reply, m_goss!!! I had been using my depth stream at 320x240 resolution; once I changed it to 640x480 and corrected my buffer size, it started working.

    Update: ProcessDepth and ProcessSkeleton work, but I can't successfully 'GetNextFrame' from the interaction stream.


    • Edited by JohnnyRL Wednesday, April 10, 2013 10:24 PM
    Wednesday, April 10, 2013 9:33 PM
  • Make sure you have this in your depth processing. 

     NUI_IMAGE_FRAME* m_pImageFrame = new NUI_IMAGE_FRAME();


    Instead of

    NUI_IMAGE_FRAME* m_pImageFrame;
    
    sensor->NuiImageStreamGetNextFrame(m_pDepthStreamHandle, 0, m_pImageFrame );

    Also, use breakpoints to check whether m_pImageFrame actually gets filled with data.


    For example in your code you have this

    NUI_IMAGE_FRAME imageFrame;
    m_pNuiSensor->NuiImageStreamGetNextFrame(
    	m_pDepthStreamHandle,
    	0,
    	&imageFrame
    	);

    Move the imageFrame to your header and set it to new NUI_IMAGE_FRAME(); in your constructor. 
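
    (Editor's note: a minimal sketch of the arrangement described above; the destructor cleanup is an addition of mine.)

    // In the class declaration (header):
    NUI_IMAGE_FRAME *m_pImageFrame;

    // In the constructor:
    m_pImageFrame = new NUI_IMAGE_FRAME();

    // And the matching cleanup in the destructor:
    delete m_pImageFrame;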


    • Edited by Goss84 Wednesday, April 10, 2013 10:31 PM
    Wednesday, April 10, 2013 10:28 PM
  • Your problem has been solved; the cause was in a few lines of code.

    #include <Windows.h>
    //#include <strsafe.h>
    #include <iostream>
    #include <NuiApi.h>
    #include <KinectInteraction.h>
    using namespace std;
    #define SafeRelease(X) if(X) { (X)->Release(); (X) = NULL; } // COM objects must be Release()d, not deleted
    //----------------------------------------------------
    //#define _WINDOWS
    INuiSensor            *m_pNuiSensor;

    INuiInteractionStream *m_nuiIStream;
    class CIneractionClient:public INuiInteractionClient
    {
    public:
     CIneractionClient()
     {;}
     ~CIneractionClient()
     {;}

     STDMETHOD(GetInteractionInfoAtLocation)(THIS_ DWORD skeletonTrackingId, NUI_HAND_TYPE handType, FLOAT x, FLOAT y, _Out_ NUI_INTERACTION_INFO *pInteractionInfo)
     {
      if(pInteractionInfo)
      {
       // Report every queried location as a grip target only;
       // press targeting is left disabled.
       pInteractionInfo->IsPressTarget         = false;
       pInteractionInfo->PressTargetControlId  = 0;
       pInteractionInfo->PressAttractionPointX = 0.f;
       pInteractionInfo->PressAttractionPointY = 0.f;
       pInteractionInfo->IsGripTarget          = true;
       return S_OK;
      }
      return E_POINTER;
     }

     // Minimal COM plumbing for a statically allocated client: the reference
     // counts are dummies, but QueryInterface must still hand back a pointer.
     STDMETHODIMP_(ULONG)    AddRef()  { return 2; }
     STDMETHODIMP_(ULONG)    Release() { return 1; }
     STDMETHODIMP            QueryInterface(REFIID riid, void **ppv)
     {
      if(!ppv) return E_POINTER;
      *ppv = static_cast<INuiInteractionClient *>(this);
      return S_OK;
     }

     };

    CIneractionClient m_nuiIClient;
    //--------------------------------------------------------------------
    HANDLE m_hNextColorFrameEvent;
    HANDLE m_hNextDepthFrameEvent;
    HANDLE m_hNextSkeletonEvent;
    HANDLE m_hNextInteractionEvent;
    HANDLE m_pColorStreamHandle;
    HANDLE m_pDepthStreamHandle;
    HANDLE m_hEvNuiProcessStop;
    //-----------------------------------------------------------------------------------

    int DrawColor(HANDLE h)
    {
     return 0;
    }

    int DrawDepth(HANDLE h)
    {
     NUI_IMAGE_FRAME pImageFrame;
     INuiFrameTexture* pDepthImagePixelFrame;
     HRESULT hr = m_pNuiSensor->NuiImageStreamGetNextFrame( h, 0, &pImageFrame );
     BOOL nearMode = TRUE;
     m_pNuiSensor->NuiImageFrameGetDepthImagePixelFrameTexture(m_pDepthStreamHandle, &pImageFrame, &nearMode, &pDepthImagePixelFrame);
     INuiFrameTexture * pTexture = pDepthImagePixelFrame;
     NUI_LOCKED_RECT LockedRect; 
     pTexture->LockRect( 0, &LockedRect, NULL, 0 ); 
     if( LockedRect.Pitch != 0 )
     {
      HRESULT hr = m_nuiIStream->ProcessDepth(LockedRect.size,PBYTE(LockedRect.pBits),pImageFrame.liTimeStamp);
      if( FAILED( hr ) )
      {
       cout<<"Process Depth failed"<<endl;
      }
     }
     pTexture->UnlockRect(0);
     m_pNuiSensor->NuiImageStreamReleaseFrame( h, &pImageFrame );
     return 0;
    }

    int DrawSkeleton()
    {
     NUI_SKELETON_FRAME SkeletonFrame = {0};
     HRESULT hr = m_pNuiSensor->NuiSkeletonGetNextFrame( 0, &SkeletonFrame );
     if( FAILED( hr ) )
     {
      cout<<"Get Skeleton Image Frame Failed"<<endl;
      return -1;
     }

     static int static_one_is_enough = 0;
     if(static_one_is_enough == 0)
     {
      cout<<"find skeleton !"<<endl; // print once, the first time a frame arrives
      static_one_is_enough++;
     }

     m_pNuiSensor->NuiTransformSmooth(&SkeletonFrame,NULL); 

     Vector4 v;
     m_pNuiSensor->NuiAccelerometerGetCurrentReading(&v);
     // m_nuiIStream->ProcessSkeleton(i,&SkeletonFrame.SkeletonData[i],&v,SkeletonFrame.liTimeStamp);
     hr =m_nuiIStream->ProcessSkeleton(NUI_SKELETON_COUNT,
      SkeletonFrame.SkeletonData,
      &v,
      SkeletonFrame.liTimeStamp);
     if( FAILED( hr ) )
     {
      cout<<"Process Skeleton failed"<<endl;
     }

     return 0;
    }

    int ShowInteraction()
    {
     NUI_INTERACTION_FRAME Interaction_Frame;
     HRESULT hr = m_nuiIStream->GetNextFrame( 0, &Interaction_Frame );
     if( FAILED( hr ) )
     {
      return -1;
     }
     int trackingID = 0;
     int event = 0;
     cout<<"show Interactions!"<<endl;
     for(int i=0;i<NUI_SKELETON_COUNT;i++)
     {
      trackingID = Interaction_Frame.UserInfos[i].SkeletonTrackingId;
      // Note: this reads only the first entry of HandPointerInfos;
      // see the later replies about iterating over both hands.
      event = Interaction_Frame.UserInfos[i].HandPointerInfos->HandEventType;
      cout<<"id="<<trackingID<<"---------event:"<<event<<endl;
     }
     return 0;
    }

    DWORD WINAPI KinectDataThread(LPVOID pParam)
    {
     HANDLE hEvents[5] = {m_hEvNuiProcessStop,m_hNextColorFrameEvent,
      m_hNextDepthFrameEvent,m_hNextSkeletonEvent,m_hNextInteractionEvent};
     
     while(1)
     {
      int nEventIdx;
      nEventIdx=WaitForMultipleObjects(sizeof(hEvents)/sizeof(hEvents[0]),
       hEvents,FALSE,100);
      if (WAIT_OBJECT_0 == WaitForSingleObject(m_hEvNuiProcessStop, 0))
      {
       break;
      }
      // Process signal events
      if (WAIT_OBJECT_0 == WaitForSingleObject(m_hNextColorFrameEvent, 0))
      {
       DrawColor(m_pColorStreamHandle);
      }
      if (WAIT_OBJECT_0 == WaitForSingleObject(m_hNextDepthFrameEvent, 0))
      {
       DrawDepth(m_pDepthStreamHandle);
      }
      if (WAIT_OBJECT_0 == WaitForSingleObject(m_hNextSkeletonEvent, 0))
      {
       DrawSkeleton();
      }
      if (WAIT_OBJECT_0 == WaitForSingleObject(m_hNextInteractionEvent, 0))
      {
       ShowInteraction();
      }
     }

     CloseHandle(m_hEvNuiProcessStop);
     m_hEvNuiProcessStop = NULL;
     CloseHandle( m_hNextSkeletonEvent );
     CloseHandle( m_hNextDepthFrameEvent );
     CloseHandle( m_hNextColorFrameEvent );
     CloseHandle( m_hNextInteractionEvent );
     return 0;
    }

    DWORD ConnectKinect()
    {
     INuiSensor * pNuiSensor;
     HRESULT hr;
     int iSensorCount = 0;
     hr = NuiGetSensorCount(&iSensorCount);
     if (FAILED(hr))
     {
      return hr;
     }
     // Look at each Kinect sensor
     for (int i = 0; i < iSensorCount; ++i)
     {
      // Create the sensor so we can check status, if we can't create it, move on to the next
      hr = NuiCreateSensorByIndex(i, &pNuiSensor);
      if (FAILED(hr))
      {
       continue;
      }
      // Get the status of the sensor, and if connected, then we can initialize it
      hr = pNuiSensor->NuiStatus();
      if (S_OK == hr)
      {
       m_pNuiSensor = pNuiSensor;
       break;
      }
      // This sensor wasn't OK, so release it since we're not using it
      pNuiSensor->Release();
     }
     if (NULL != m_pNuiSensor)
     {
      if (SUCCEEDED(hr))
      {   
       hr = m_pNuiSensor->NuiInitialize(\
        NUI_INITIALIZE_FLAG_USES_DEPTH_AND_PLAYER_INDEX|\
        NUI_INITIALIZE_FLAG_USES_COLOR|\
        NUI_INITIALIZE_FLAG_USES_SKELETON);
       if( hr != S_OK )
       {
        cout<<"NuiInitialize failed"<<endl;
        return hr;
       }

       m_hNextColorFrameEvent = CreateEvent( NULL, TRUE, FALSE, NULL );
       m_pColorStreamHandle = NULL;

       hr = m_pNuiSensor->NuiImageStreamOpen(
        NUI_IMAGE_TYPE_COLOR,NUI_IMAGE_RESOLUTION_640x480, 0, 2,
        m_hNextColorFrameEvent, &m_pColorStreamHandle);
       if( FAILED( hr ) )
       {
        cout<<"Could not open image stream video"<<endl;
        return hr;
       }

       m_hNextDepthFrameEvent = CreateEvent( NULL, TRUE, FALSE, NULL );
       m_pDepthStreamHandle = NULL;

       hr = m_pNuiSensor->NuiImageStreamOpen(
        NUI_IMAGE_TYPE_DEPTH_AND_PLAYER_INDEX,
        NUI_IMAGE_RESOLUTION_640x480, 0, 2,
        m_hNextDepthFrameEvent, &m_pDepthStreamHandle);
       if( FAILED( hr ) )
       {
        cout<<"Could not open depth stream video"<<endl;
        return hr;
       }
       m_hNextSkeletonEvent = CreateEvent( NULL, TRUE, FALSE, NULL );
       hr = m_pNuiSensor->NuiSkeletonTrackingEnable(
        m_hNextSkeletonEvent,
        NUI_SKELETON_TRACKING_FLAG_ENABLE_IN_NEAR_RANGE//|
        );
       //0);
       if( FAILED( hr ) )
       {
        cout<<"Could not open skeleton stream video"<<endl;
        return hr;
       }
      }
     }
     if (NULL == m_pNuiSensor || FAILED(hr))
     {
      cout<<"No ready Kinect found!"<<endl;
      return E_FAIL;
     }
     return hr;
    }

    int main()
    {
     ConnectKinect();
     HRESULT hr;
     m_hNextInteractionEvent = CreateEvent( NULL,TRUE,FALSE,NULL );
     m_hEvNuiProcessStop = CreateEvent(NULL,TRUE,FALSE,NULL);
     hr = NuiCreateInteractionStream(m_pNuiSensor,(INuiInteractionClient *)&m_nuiIClient,&m_nuiIStream);
     if( FAILED( hr ) )
     {
      cout<<"Could not open Interation stream video"<<endl;
      return hr;
     }
     // hr = NuiCreateInteractionStream(m_pNuiSensor,0,&m_nuiIStream);
     hr = m_nuiIStream->Enable(m_hNextInteractionEvent);
      if( FAILED( hr ) )
      {
       cout<<"Could not open Interation stream video"<<endl;
       return hr;
      }
     HANDLE m_hProcesss = CreateThread(NULL, 0, KinectDataThread, 0, 0, 0);
     // Note: this loop never exits as written, so the shutdown code below is
     // unreachable; a real application would break out when m_hEvNuiProcessStop
     // is signaled.
     while(1)
     {
      Sleep(1);
     }
     m_pNuiSensor->NuiShutdown();
     SafeRelease(m_pNuiSensor);
     return 0;
    }

    Thursday, April 11, 2013 4:34 AM
  • My problem turned out to be that I created an event for the interaction stream and I was doing something wrong with its handle (automatic instead of manual reset). Thanks anyway m_goss.
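
    (Editor's note: a minimal sketch of the fix described above; the second argument to CreateEvent is bManualReset, and the interaction stream's event should be manual-reset, matching the depth and skeleton events created elsewhere in this thread.)

    // Manual-reset event (bManualReset = TRUE); an auto-reset event (FALSE)
    // was the cause of the problem described in this post.
    m_hNextInteractionEvent = CreateEvent(NULL, TRUE, FALSE, NULL);
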
    Thursday, April 11, 2013 5:00 AM
  • Thank you very much, DsoTsen.

    It works now, and I will carry on with my Kinect interaction adventure.

    Saturday, April 13, 2013 1:45 PM
  • I tried DsoTsen's code in a Win32 console app, and it works well.

    But after that, I added it to the SkeletonBasics-D2D sample code from the Developer Toolkit Browser v1.7.0 (Kinect for Windows), and now it only enters ProcessInteraction() once at the beginning, while ProcessDepth and ProcessSkeleton work well (I can draw the skeleton on screen).

    Is there any problem in my code? Do I need some additional code when moving from a Win32 console app to a Win32 MFC app?

    Please help me!!!!!

    void CSkeletonBasics::Update()
    {
        if (NULL == m_pNuiSensor)
        {
            return;
        }
    	if (WAIT_OBJECT_0 == WaitForSingleObject(m_hNextDepthFrameEvent, 0))
    	{
    		ProcessDepth();
    	}
    
        if ( WAIT_OBJECT_0 == WaitForSingleObject(m_hNextSkeletonEvent, 0) )
        {
            ProcessSkeleton();
        }
    	if ( WAIT_OBJECT_0 == WaitForSingleObject(m_hNextInteractionEvent, 0) )
        {
    	    ProcessInteraction();
        }
    }


    //------------------------------------------------------------------------------
    // <copyright file="SkeletonBasics.cpp" company="Microsoft">
    //     Copyright (c) Microsoft Corporation.  All rights reserved.
    // </copyright>
    //------------------------------------------------------------------------------
    
    #include "stdafx.h"
    #include <strsafe.h>
    #include "SkeletonBasics.h"
    #include "resource.h"
    #include "SkeletonDef.h"
    static const float g_JointThickness = 3.0f;
    static const float g_TrackedBoneThickness = 6.0f;
    static const float g_InferredBoneThickness = 1.0f;
    
    //HANDLE hFileMapping;
    //KINECTDATAFIG *KinectDataMem;
    //float tiltAngle = 100;
    
    class CIneractionClient:public INuiInteractionClient
    {
    		public:
    		 CIneractionClient()
    		 {;}
    		 ~CIneractionClient()
    		 {;}
    
    
    		 STDMETHOD(GetInteractionInfoAtLocation)(THIS_ DWORD skeletonTrackingId, NUI_HAND_TYPE handType, FLOAT x, FLOAT y, _Out_ NUI_INTERACTION_INFO *pInteractionInfo)
    		 {
    			 if(pInteractionInfo)
    			 {
    				 // Report every queried location as a grip target only;
    				 // press targeting is left disabled.
    				 pInteractionInfo->IsPressTarget         = false;
    				 pInteractionInfo->PressTargetControlId  = 0;
    				 pInteractionInfo->PressAttractionPointX = 0.f;
    				 pInteractionInfo->PressAttractionPointY = 0.f;
    				 pInteractionInfo->IsGripTarget          = true;
    				 return S_OK;
    			 }
    			 return E_POINTER;
    		 }
    
    		 // Minimal COM plumbing for a statically allocated client: the reference
    		 // counts are dummies, but QueryInterface must still hand back a pointer.
    		 STDMETHODIMP_(ULONG)    AddRef()  { return 2; }
    		 STDMETHODIMP_(ULONG)    Release() { return 1; }
    		 STDMETHODIMP            QueryInterface(REFIID riid, void **ppv)
    		 {
    			 if(!ppv) return E_POINTER;
    			 *ppv = static_cast<INuiInteractionClient *>(this);
    			 return S_OK;
    		 }
    
    };
    CIneractionClient m_nuiIClient;
    INuiInteractionStream *m_nuiIStream;
    /// <summary>
    /// Entry point for the application
    /// </summary>
    /// <param name="hInstance">handle to the application instance</param>
    /// <param name="hPrevInstance">always 0</param>
    /// <param name="lpCmdLine">command line arguments</param>
    /// <param name="nCmdShow">whether to display minimized, maximized, or normally</param>
    /// <returns>status</returns>
    int APIENTRY wWinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPWSTR lpCmdLine, int nCmdShow)
    {
        CSkeletonBasics application;
        application.Run(hInstance, nCmdShow);
    }
    
    /// <summary>
    /// Constructor
    /// </summary>
    CSkeletonBasics::CSkeletonBasics() :
        m_pD2DFactory(NULL),
        m_hNextSkeletonEvent(INVALID_HANDLE_VALUE),
        m_pSkeletonStreamHandle(INVALID_HANDLE_VALUE),
        m_bSeatedMode(false),
        m_pRenderTarget(NULL),
        m_pBrushJointTracked(NULL),
        m_pBrushJointInferred(NULL),
        m_pBrushBoneTracked(NULL),
        m_pBrushBoneInferred(NULL),
        m_pNuiSensor(NULL)
    {
        ZeroMemory(m_Points,sizeof(m_Points));
    
    
    }
    
    /// <summary>
    /// Destructor
    /// </summary>
    CSkeletonBasics::~CSkeletonBasics()
    {
        if (m_pNuiSensor)
        {
            m_pNuiSensor->NuiShutdown();
        }
    
        if (m_hNextSkeletonEvent && (m_hNextSkeletonEvent != INVALID_HANDLE_VALUE))
        {
            CloseHandle(m_hNextSkeletonEvent);
        }
    	if (m_hNextInteractionEvent && (m_hNextInteractionEvent != INVALID_HANDLE_VALUE))
        {
    		CloseHandle(m_hNextInteractionEvent);
        }
        // clean up Direct2D objects
        DiscardDirect2DResources();
    
        // clean up Direct2D
        SafeRelease(m_pD2DFactory);
    
        SafeRelease(m_pNuiSensor);
    }
    
    /// <summary>
    /// Creates the main window and begins processing
    /// </summary>
    /// <param name="hInstance">handle to the application instance</param>
    /// <param name="nCmdShow">whether to display minimized, maximized, or normally</param>
    int CSkeletonBasics::Run(HINSTANCE hInstance, int nCmdShow)
    {
        MSG       msg = {0};
        WNDCLASS  wc  = {0};
    
        // Dialog custom window class
        wc.style         = CS_HREDRAW | CS_VREDRAW;
        wc.cbWndExtra    = DLGWINDOWEXTRA;
        wc.hInstance     = hInstance;
        wc.hCursor       = LoadCursorW(NULL, IDC_ARROW);
        wc.hIcon         = LoadIconW(hInstance, MAKEINTRESOURCE(IDI_APP));
        wc.lpfnWndProc   = DefDlgProcW;
        wc.lpszClassName = L"SkeletonBasicsAppDlgWndClass";
    
        if (!RegisterClassW(&wc))
        {
            return 0;
        }
    
        // Create main application window
        HWND hWndApp = CreateDialogParamW(
            hInstance,
            MAKEINTRESOURCE(IDD_APP),
            NULL,
            (DLGPROC)CSkeletonBasics::MessageRouter, 
            reinterpret_cast<LPARAM>(this));
    
        // Show window
        ShowWindow(hWndApp, nCmdShow);
    
        const int eventCount = 1;
        HANDLE hEvents[eventCount];
    
        // Main message loop
        while (WM_QUIT != msg.message)
        {
            hEvents[0] = m_hNextSkeletonEvent;
    
            // Check to see if we have either a message (by passing in QS_ALLEVENTS)
            // Or a Kinect event (hEvents)
            // Update() will check for Kinect events individually, in case more than one are signalled
            DWORD dwEvent = MsgWaitForMultipleObjects(eventCount, hEvents, FALSE, INFINITE, QS_ALLINPUT);
    
            // Check if this is an event we're waiting on and not a timeout or message
            if (WAIT_OBJECT_0 == dwEvent)
            {
                Update();
            }
    
            if (PeekMessageW(&msg, NULL, 0, 0, PM_REMOVE))
            {
                // If a dialog message will be taken care of by the dialog proc
                if ((hWndApp != NULL) && IsDialogMessageW(hWndApp, &msg))
                {
                    continue;
                }
    
                TranslateMessage(&msg);
                DispatchMessageW(&msg);
            }
        }
    
        return static_cast<int>(msg.wParam);
    }
    
    /// <summary>
    /// Main processing function
    /// </summary>
    void CSkeletonBasics::Update()
    {
        if (NULL == m_pNuiSensor)
        {
            return;
        }
    	if (WAIT_OBJECT_0 == WaitForSingleObject(m_hNextDepthFrameEvent, 0))
    	{
    		ProcessDepth();
    	}
    
        if ( WAIT_OBJECT_0 == WaitForSingleObject(m_hNextSkeletonEvent, 0) )
        {
            ProcessSkeleton();
        }
    	if ( WAIT_OBJECT_0 == WaitForSingleObject(m_hNextInteractionEvent, 0) )
        {
    	    ProcessInteraction();
        }
    }
    
    /// <summary>
    /// Handles window messages, passes most to the class instance to handle
    /// </summary>
    /// <param name="hWnd">window message is for</param>
    /// <param name="uMsg">message</param>
    /// <param name="wParam">message data</param>
    /// <param name="lParam">additional message data</param>
    /// <returns>result of message processing</returns>
    LRESULT CALLBACK CSkeletonBasics::MessageRouter(HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
    {
        CSkeletonBasics* pThis = NULL;
    
        if (WM_INITDIALOG == uMsg)
        {
            pThis = reinterpret_cast<CSkeletonBasics*>(lParam);
            SetWindowLongPtr(hWnd, GWLP_USERDATA, reinterpret_cast<LONG_PTR>(pThis));
        }
        else
        {
            pThis = reinterpret_cast<CSkeletonBasics*>(::GetWindowLongPtr(hWnd, GWLP_USERDATA));
        }
    
        if (pThis)
        {
            return pThis->DlgProc(hWnd, uMsg, wParam, lParam);
        }
    
        return 0;
    }
    
    /// <summary>
    /// Handle windows messages for the class instance
    /// </summary>
    /// <param name="hWnd">window message is for</param>
    /// <param name="uMsg">message</param>
    /// <param name="wParam">message data</param>
    /// <param name="lParam">additional message data</param>
    /// <returns>result of message processing</returns>
    LRESULT CALLBACK CSkeletonBasics::DlgProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam)
    {
        switch (message)
        {
        case WM_INITDIALOG:
            {
                // Bind application window handle
                m_hWnd = hWnd;
    
                // Init Direct2D
                D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, &m_pD2DFactory);
    
                // Look for a connected Kinect, and create it if found
                CreateFirstConnected();
            }
            break;
    
            // If the titlebar X is clicked, destroy app
        case WM_CLOSE:
            DestroyWindow(hWnd);
            break;
    
        case WM_DESTROY:
            // Quit the main message pump
            PostQuitMessage(0);
            break;
    
            // Handle button press
        case WM_COMMAND:
            // If it was for the near mode control and a clicked event, change near mode
            if (IDC_CHECK_SEATED == LOWORD(wParam) && BN_CLICKED == HIWORD(wParam))
            {
                // Toggle out internal state for near mode
                m_bSeatedMode = !m_bSeatedMode;
    
                if (NULL != m_pNuiSensor)
                {
                    // Set near mode for sensor based on our internal state
                    m_pNuiSensor->NuiSkeletonTrackingEnable(m_hNextSkeletonEvent, m_bSeatedMode ? NUI_SKELETON_TRACKING_FLAG_ENABLE_SEATED_SUPPORT : 0);
                }
            }
            break;
        }
    
        return FALSE;
    }
    
    /// <summary>
    /// Create the first connected Kinect found 
    /// </summary>
    /// <returns>indicates success or failure</returns>
    HRESULT CSkeletonBasics::CreateFirstConnected()
    {
        INuiSensor * pNuiSensor;
    
        int iSensorCount = 0;
        HRESULT hr = NuiGetSensorCount(&iSensorCount);
        if (FAILED(hr))
        {
            return hr;
        }
    
        // Look at each Kinect sensor
        for (int i = 0; i < iSensorCount; ++i)
        {
            // Create the sensor so we can check status, if we can't create it, move on to the next
            hr = NuiCreateSensorByIndex(i, &pNuiSensor);
            if (FAILED(hr))
            {
                continue;
            }
    
            // Get the status of the sensor, and if connected, then we can initialize it
            hr = pNuiSensor->NuiStatus();
            if (S_OK == hr)
            {
                m_pNuiSensor = pNuiSensor;
                break;
            }
    
            // This sensor wasn't OK, so release it since we're not using it
            pNuiSensor->Release();
        }
    
        if (NULL != m_pNuiSensor)
        {
            // Initialize the Kinect and specify that we'll be using skeleton
            //hr = m_pNuiSensor->NuiInitialize(NUI_INITIALIZE_FLAG_USES_SKELETON); 
            hr = m_pNuiSensor->NuiInitialize(NUI_INITIALIZE_FLAG_USES_DEPTH_AND_PLAYER_INDEX|  NUI_INITIALIZE_FLAG_USES_COLOR|  NUI_INITIALIZE_FLAG_USES_SKELETON); 
            if (SUCCEEDED(hr))
            {
    
    		   m_hNextDepthFrameEvent = CreateEvent( NULL, TRUE, FALSE, NULL );
    		   m_pDepthStreamHandle = NULL;
    
    		   hr = m_pNuiSensor->NuiImageStreamOpen( NUI_IMAGE_TYPE_DEPTH_AND_PLAYER_INDEX, NUI_IMAGE_RESOLUTION_640x480, 0, 2,  m_hNextDepthFrameEvent, &m_pDepthStreamHandle);
    		   if( FAILED( hr ) )
    		   {
    				return hr;
    		   }
                // Create an event that will be signaled when skeleton data is available
                m_hNextSkeletonEvent = CreateEventW(NULL, TRUE, FALSE, NULL);
                // Open a skeleton stream to receive skeleton data
                hr = m_pNuiSensor->NuiSkeletonTrackingEnable(m_hNextSkeletonEvent, NUI_SKELETON_TRACKING_FLAG_ENABLE_IN_NEAR_RANGE); 
    
    			
    			m_hNextInteractionEvent = CreateEvent(NULL, TRUE, FALSE, NULL);
    			hr = NuiCreateInteractionStream(m_pNuiSensor,(INuiInteractionClient *)&m_nuiIClient, &m_nuiIStream);
    			if(FAILED(hr))
    				return hr;
    			hr = m_nuiIStream->Enable(m_hNextInteractionEvent);
    
    			if(FAILED(hr))
    				return hr;
    			
            }
        }
    
        if (NULL == m_pNuiSensor || FAILED(hr))
        {
            SetStatusMessage(L"No ready Kinect found!");
            return E_FAIL;
        }
    
        return hr;
    }
    
    
    void CSkeletonBasics::ProcessInteraction()
    {
    	NUI_INTERACTION_FRAME Interaction_Frame;
    	HRESULT hr =  m_nuiIStream->GetNextFrame( 0,&Interaction_Frame );
    	if (FAILED(hr))
    	{
    		return ;
    	}
    	int trackingID = 0;
    	int event=0;
    	for(int i=0;i<NUI_SKELETON_COUNT;i++)
    	{
    	trackingID = Interaction_Frame.UserInfos[i].SkeletonTrackingId;
    
    	// Note: this reads only the first entry of HandPointerInfos; see the
    	// later replies about iterating over both hands.
    	event = Interaction_Frame.UserInfos[i].HandPointerInfos->HandEventType;
    	if (event > 0)
    	{
    		OutputDebugStringW(L"show Interactions!\n");
    		// Format into a real buffer; the original passed an
    		// uninitialized LPCTSTR to OutputDebugString.
    		WCHAR str[64];
    		swprintf_s(str, L"id=%i---------event:%i\n", trackingID, event);
    		OutputDebugStringW(str);
    	}
    	}
    	/*
    	HRESULT hr;
    	NUI_INTERACTION_FRAME interactionFrame;
    	hr = m_nuiIStream->GetNextFrame(0, &interactionFrame );
    
    	if(FAILED(hr)) return; 
    	
    	for(int i = 0; i < NUI_SKELETON_COUNT; i++)
    	{
    		NUI_USER_INFO user = interactionFrame.UserInfos[i];
    		
    		if (user.SkeletonTrackingId != 0)
    		{
    			for(int j = 0; j < NUI_USER_HANDPOINTER_COUNT; j++)
    			{
    				NUI_HANDPOINTER_INFO hand = user.HandPointerInfos[j];
    				NUI_HANDPOINTER_STATE state  = (NUI_HANDPOINTER_STATE)hand.State;
    
    				if(state & NUI_HANDPOINTER_STATE_INTERACTIVE)
    					printf(	"Interactive: ");				
    					//cout << "Interactive: " << hand.X << " " << hand.Y <<endl;
    				if(state & NUI_HANDPOINTER_STATE_PRESSED)
    					//cout << "Pressed Button" << endl;
    
    				if(hand.State != NUI_HANDPOINTER_STATE_NOT_TRACKED)
    				{
    					//if(m_pInteractionStreamFunction != NULL)
    					//	m_pInteractionStreamFunction(hand);
    					switch(hand.HandEventType)
    					{
    						case 0:
    							//cout << hand.HandType << " NUI_HAND_EVENT_TYPE_NONE" << endl;
    							break;
    						case 1:
    							//cout << hand.HandType << " NUI_HAND_EVENT_TYPE_GRIP" << endl; //This never output
    							break;
    						case 2:
    							//cout << hand.HandType << " NUI_HAND_EVENT_TYPE_GRIPRELEASE" << endl; //This never outputs
    							break;
    					}
    				}
    			}
    		}
    	}
    	*/
    	
    }
    void CSkeletonBasics::ProcessDepth()
    {
    	 NUI_IMAGE_FRAME pImageFrame;
    	 INuiFrameTexture* pDepthImagePixelFrame;
    	 HRESULT hr = m_pNuiSensor->NuiImageStreamGetNextFrame( m_pDepthStreamHandle, 0, &pImageFrame );
    	 BOOL nearMode = TRUE;
    	 m_pNuiSensor->NuiImageFrameGetDepthImagePixelFrameTexture(m_pDepthStreamHandle, &pImageFrame, &nearMode, &pDepthImagePixelFrame);
    	 INuiFrameTexture * pTexture = pDepthImagePixelFrame;
    	 NUI_LOCKED_RECT LockedRect;  
    	 pTexture->LockRect( 0, &LockedRect, NULL, 0 );  
    	 if( LockedRect.Pitch != 0 )
    	 {
    	  HRESULT hr = m_nuiIStream->ProcessDepth(LockedRect.size,PBYTE(LockedRect.pBits),pImageFrame.liTimeStamp);
    	  if( FAILED( hr ) )
    	  {
    		  OutputDebugStringW(L"Process Depth failed\n");
    	  }
    	 }
    	 pTexture->UnlockRect(0);
    	 m_pNuiSensor->NuiImageStreamReleaseFrame(m_pDepthStreamHandle, &pImageFrame );
    }
    
    /// <summary>
    /// Handle new skeleton data
    /// </summary>
    void CSkeletonBasics::ProcessSkeleton()
    {
    
    
        NUI_SKELETON_FRAME skeletonFrame = {0};
    
        HRESULT hr = m_pNuiSensor->NuiSkeletonGetNextFrame(0, &skeletonFrame);
    
    
        if ( FAILED(hr) )
        {
            return;
        }
    
    	// smooth out the skeleton data
        m_pNuiSensor->NuiTransformSmooth(&skeletonFrame, NULL);
    
    	 Vector4 v;
    	 m_pNuiSensor->NuiAccelerometerGetCurrentReading(&v);
    	 hr =m_nuiIStream->ProcessSkeleton(NUI_SKELETON_COUNT, skeletonFrame.SkeletonData, &v, skeletonFrame.liTimeStamp);
    
        // Ensure Direct2D is ready to draw
        hr = EnsureDirect2DResources( );
        if ( FAILED(hr) )
        {
            return;
        }
    
        m_pRenderTarget->BeginDraw();
        m_pRenderTarget->Clear( );
    
        RECT rct;
        GetClientRect( GetDlgItem( m_hWnd, IDC_VIDEOVIEW ), &rct);
        int width = rct.right;
        int height = rct.bottom;
    
        for (int i = 0 ; i < NUI_SKELETON_COUNT; ++i)
        {
            NUI_SKELETON_TRACKING_STATE trackingState = skeletonFrame.SkeletonData[i].eTrackingState;
    
            if (NUI_SKELETON_TRACKED == trackingState)
            {
                // We're tracking the skeleton, draw it
                DrawSkeleton(skeletonFrame.SkeletonData[i], width, height);
            }
            else if (NUI_SKELETON_POSITION_ONLY == trackingState)
            {
                // we've only received the center point of the skeleton, draw that
                D2D1_ELLIPSE ellipse = D2D1::Ellipse(
                    SkeletonToScreen(skeletonFrame.SkeletonData[i].Position, width, height),
                    g_JointThickness,
                    g_JointThickness
                    );
    
                m_pRenderTarget->DrawEllipse(ellipse, m_pBrushJointTracked);
            }
        }
    
    
    
        hr = m_pRenderTarget->EndDraw();
    
        // Device lost, need to recreate the render target
        // We'll dispose it now and retry drawing
        if (D2DERR_RECREATE_TARGET == hr)
        {
            hr = S_OK;
            DiscardDirect2DResources();
        }
    
    
    }
    
    /// <summary>
    /// Draws a skeleton
    /// </summary>
    /// <param name="skel">skeleton to draw</param>
    /// <param name="windowWidth">width (in pixels) of output buffer</param>
    /// <param name="windowHeight">height (in pixels) of output buffer</param>
    void CSkeletonBasics::DrawSkeleton(const NUI_SKELETON_DATA & skel, int windowWidth, int windowHeight)
    {      
        int i;
    
        for (i = 0; i < NUI_SKELETON_POSITION_COUNT; ++i)
        {
            m_Points[i] = SkeletonToScreen(skel.SkeletonPositions[i], windowWidth, windowHeight);
        }
    
        // Render Torso
        DrawBone(skel, NUI_SKELETON_POSITION_HEAD, NUI_SKELETON_POSITION_SHOULDER_CENTER);
        DrawBone(skel, NUI_SKELETON_POSITION_SHOULDER_CENTER, NUI_SKELETON_POSITION_SHOULDER_LEFT);
        DrawBone(skel, NUI_SKELETON_POSITION_SHOULDER_CENTER, NUI_SKELETON_POSITION_SHOULDER_RIGHT);
        DrawBone(skel, NUI_SKELETON_POSITION_SHOULDER_CENTER, NUI_SKELETON_POSITION_SPINE);
        DrawBone(skel, NUI_SKELETON_POSITION_SPINE, NUI_SKELETON_POSITION_HIP_CENTER);
        DrawBone(skel, NUI_SKELETON_POSITION_HIP_CENTER, NUI_SKELETON_POSITION_HIP_LEFT);
        DrawBone(skel, NUI_SKELETON_POSITION_HIP_CENTER, NUI_SKELETON_POSITION_HIP_RIGHT);
    
        // Left Arm
        DrawBone(skel, NUI_SKELETON_POSITION_SHOULDER_LEFT, NUI_SKELETON_POSITION_ELBOW_LEFT);
        DrawBone(skel, NUI_SKELETON_POSITION_ELBOW_LEFT, NUI_SKELETON_POSITION_WRIST_LEFT);
        DrawBone(skel, NUI_SKELETON_POSITION_WRIST_LEFT, NUI_SKELETON_POSITION_HAND_LEFT);
    
        // Right Arm
        DrawBone(skel, NUI_SKELETON_POSITION_SHOULDER_RIGHT, NUI_SKELETON_POSITION_ELBOW_RIGHT);
        DrawBone(skel, NUI_SKELETON_POSITION_ELBOW_RIGHT, NUI_SKELETON_POSITION_WRIST_RIGHT);
        DrawBone(skel, NUI_SKELETON_POSITION_WRIST_RIGHT, NUI_SKELETON_POSITION_HAND_RIGHT);
    
        // Left Leg
        DrawBone(skel, NUI_SKELETON_POSITION_HIP_LEFT, NUI_SKELETON_POSITION_KNEE_LEFT);
        DrawBone(skel, NUI_SKELETON_POSITION_KNEE_LEFT, NUI_SKELETON_POSITION_ANKLE_LEFT);
        DrawBone(skel, NUI_SKELETON_POSITION_ANKLE_LEFT, NUI_SKELETON_POSITION_FOOT_LEFT);
    
        // Right Leg
        DrawBone(skel, NUI_SKELETON_POSITION_HIP_RIGHT, NUI_SKELETON_POSITION_KNEE_RIGHT);
        DrawBone(skel, NUI_SKELETON_POSITION_KNEE_RIGHT, NUI_SKELETON_POSITION_ANKLE_RIGHT);
        DrawBone(skel, NUI_SKELETON_POSITION_ANKLE_RIGHT, NUI_SKELETON_POSITION_FOOT_RIGHT);
    
        // Draw the joints in a different color
        for (i = 0; i < NUI_SKELETON_POSITION_COUNT; ++i)
        {
            D2D1_ELLIPSE ellipse = D2D1::Ellipse( m_Points[i], g_JointThickness, g_JointThickness );
    
            if ( skel.eSkeletonPositionTrackingState[i] == NUI_SKELETON_POSITION_INFERRED )
            {
                m_pRenderTarget->DrawEllipse(ellipse, m_pBrushJointInferred);
            }
            else if ( skel.eSkeletonPositionTrackingState[i] == NUI_SKELETON_POSITION_TRACKED )
            {
                m_pRenderTarget->DrawEllipse(ellipse, m_pBrushJointTracked);
            }
        }
    }
    
    /// <summary>
    /// Draws a bone line between two joints
    /// </summary>
    /// <param name="skel">skeleton to draw bones from</param>
    /// <param name="joint0">joint to start drawing from</param>
    /// <param name="joint1">joint to end drawing at</param>
    void CSkeletonBasics::DrawBone(const NUI_SKELETON_DATA & skel, NUI_SKELETON_POSITION_INDEX joint0, NUI_SKELETON_POSITION_INDEX joint1)
    {
        NUI_SKELETON_POSITION_TRACKING_STATE joint0State = skel.eSkeletonPositionTrackingState[joint0];
        NUI_SKELETON_POSITION_TRACKING_STATE joint1State = skel.eSkeletonPositionTrackingState[joint1];
    
        // If we can't find either of these joints, exit
        if (joint0State == NUI_SKELETON_POSITION_NOT_TRACKED || joint1State == NUI_SKELETON_POSITION_NOT_TRACKED)
        {
            return;
        }
    
        // Don't draw if both points are inferred
        if (joint0State == NUI_SKELETON_POSITION_INFERRED && joint1State == NUI_SKELETON_POSITION_INFERRED)
        {
            return;
        }
    
        // We assume all drawn bones are inferred unless BOTH joints are tracked
        if (joint0State == NUI_SKELETON_POSITION_TRACKED && joint1State == NUI_SKELETON_POSITION_TRACKED)
        {
            m_pRenderTarget->DrawLine(m_Points[joint0], m_Points[joint1], m_pBrushBoneTracked, g_TrackedBoneThickness);
        }
        else
        {
            m_pRenderTarget->DrawLine(m_Points[joint0], m_Points[joint1], m_pBrushBoneInferred, g_InferredBoneThickness);
        }
    }
    
    /// <summary>
    /// Converts a skeleton point to screen space
    /// </summary>
    /// <param name="skeletonPoint">skeleton point to tranform</param>
    /// <param name="width">width (in pixels) of output buffer</param>
    /// <param name="height">height (in pixels) of output buffer</param>
    /// <returns>point in screen-space</returns>
    D2D1_POINT_2F CSkeletonBasics::SkeletonToScreen(Vector4 skeletonPoint, int width, int height)
    {
        LONG x, y;
        USHORT depth;
    
        // Calculate the skeleton's position on the screen
        // NuiTransformSkeletonToDepthImage returns coordinates in NUI_IMAGE_RESOLUTION_320x240 space
        NuiTransformSkeletonToDepthImage(skeletonPoint, &x, &y, &depth);
    
        float screenPointX = static_cast<float>(x * width) / cScreenWidth;
        float screenPointY = static_cast<float>(y * height) / cScreenHeight;
    
        return D2D1::Point2F(screenPointX, screenPointY);
    }
    
    /// <summary>
    /// Ensure necessary Direct2d resources are created
    /// </summary>
    /// <returns>S_OK if successful, otherwise an error code</returns>
    HRESULT CSkeletonBasics::EnsureDirect2DResources()
    {
        HRESULT hr = S_OK;
    
        // If there isn't currently a render target, we need to create one
        if (NULL == m_pRenderTarget)
        {
            RECT rc;
            GetWindowRect( GetDlgItem( m_hWnd, IDC_VIDEOVIEW ), &rc );  
    
            int width = rc.right - rc.left;
            int height = rc.bottom - rc.top;
            D2D1_SIZE_U size = D2D1::SizeU( width, height );
            D2D1_RENDER_TARGET_PROPERTIES rtProps = D2D1::RenderTargetProperties();
            rtProps.pixelFormat = D2D1::PixelFormat( DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_IGNORE);
            rtProps.usage = D2D1_RENDER_TARGET_USAGE_GDI_COMPATIBLE;
    
            // Create a Hwnd render target, in order to render to the window set in initialize
            hr = m_pD2DFactory->CreateHwndRenderTarget(
                rtProps,
                D2D1::HwndRenderTargetProperties(GetDlgItem( m_hWnd, IDC_VIDEOVIEW), size),
                &m_pRenderTarget
                );
            if ( FAILED(hr) )
            {
                SetStatusMessage(L"Couldn't create Direct2D render target!");
                return hr;
            }
    
            //light green
            m_pRenderTarget->CreateSolidColorBrush(D2D1::ColorF(0.27f, 0.75f, 0.27f), &m_pBrushJointTracked);
    
            m_pRenderTarget->CreateSolidColorBrush(D2D1::ColorF(D2D1::ColorF::Yellow, 1.0f), &m_pBrushJointInferred);
            m_pRenderTarget->CreateSolidColorBrush(D2D1::ColorF(D2D1::ColorF::Green, 1.0f), &m_pBrushBoneTracked);
            m_pRenderTarget->CreateSolidColorBrush(D2D1::ColorF(D2D1::ColorF::Gray, 1.0f), &m_pBrushBoneInferred);
        }
    
        return hr;
    }
    
    /// <summary>
    /// Dispose Direct2d resources 
    /// </summary>
    void CSkeletonBasics::DiscardDirect2DResources( )
    {
        SafeRelease(m_pRenderTarget);
    
        SafeRelease(m_pBrushJointTracked);
        SafeRelease(m_pBrushJointInferred);
        SafeRelease(m_pBrushBoneTracked);
        SafeRelease(m_pBrushBoneInferred);
    }
    
    /// <summary>
    /// Set the status bar message
    /// </summary>
    /// <param name="szMessage">message to display</param>
    void CSkeletonBasics::SetStatusMessage(WCHAR * szMessage)
    {
        SendDlgItemMessageW(m_hWnd, IDC_STATUS, WM_SETTEXT, 0, (LPARAM)szMessage);
    }
    
    

    Wednesday, April 17, 2013 3:38 AM
  • Hi,

    what if you remove

    DWORD dwEvent = MsgWaitForMultipleObjects(eventCount, hEvents, FALSE, INFINITE, QS_ALLINPUT);

    I have interaction fully working without it; just keep the WaitForSingleObject calls.


    EDIT: I tested your code, and it works without waiting on multiple events. Also, don't forget to iterate through HandPointerInfos, because it's an array of hands; that way you'll be able to detect interaction on both hands.
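
    (Editor's note: a minimal sketch of the iteration described above, using the native structs from the C++ code earlier in this thread.)

    // Walk every hand pointer for every tracked user, instead of reading
    // only HandPointerInfos[0].
    for (int u = 0; u < NUI_SKELETON_COUNT; ++u)
    {
        const NUI_USER_INFO &user = interactionFrame.UserInfos[u];
        if (user.SkeletonTrackingId == 0)
            continue; // empty skeleton slot
        for (int h = 0; h < NUI_USER_HANDPOINTER_COUNT; ++h)
        {
            const NUI_HANDPOINTER_INFO &hand = user.HandPointerInfos[h];
            if (hand.State == NUI_HANDPOINTER_STATE_NOT_TRACKED)
                continue;
            // hand.HandType distinguishes left/right; hand.HandEventType
            // carries grip and grip-release events.
        }
    }
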
    • Edited by Louliloul Friday, May 3, 2013 4:05 PM Code testing
    • Proposed as answer by King Nguyen Monday, May 6, 2013 2:16 AM
    Tuesday, April 30, 2013 4:05 PM
  • Hi !

    Have you been able to use the GetInteractionInfoAtLocation function successfully? The coordinates I'm getting in this function are skeleton-relative, whereas the doc says they are UI-relative, so I can't really tell where the hand pointer is on the UI.


    "Find a job that you enjoy, and you will never work any more" Confucius

    Friday, May 3, 2013 9:08 AM
  • Hi,

    what if you remove

    DWORD dwEvent = MsgWaitForMultipleObjects(eventCount, hEvents, FALSE, INFINITE, QS_ALLINPUT);

    I have interaction fully working without it; just keep the WaitForSingleObject calls.


    EDIT: I tested your code, and it works without waiting on multiple events. Also, don't forget to iterate through HandPointerInfos, because it's an array of hands; that way you'll be able to detect interaction on both hands.

    I was on holiday for the last 10 days.

    I got it working now; I just removed DWORD dwEvent = MsgWaitForMultipleObjects(eventCount, hEvents, FALSE, INFINITE, QS_ALLINPUT);

    Thanks Louliloul, you saved my life :)

    Monday, May 6, 2013 2:16 AM
  • You're welcome :-)

    Now I think that all of us working with this new SDK will face a new issue. I started a new thread about the coordinates supplied by the GetInteractionInfoAtLocation function here.


    "Find a job that you enjoy, and you will never work any more" Confucius


    • Edited by Louliloul Monday, May 6, 2013 7:00 AM
    Monday, May 6, 2013 7:00 AM
  • Hi:

    When I use the above code, I can't find the right hand; NUI_HAND_TYPE only ever returns NUI_HAND_TYPE_LEFT. Please help me!

    Wednesday, May 22, 2013 8:13 AM
  • Hi:

    When I use the above code, I can't find the right hand; NUI_HAND_TYPE only ever returns NUI_HAND_TYPE_LEFT. Please help me!

    That's strange; I can see both left- and right-hand grips. Here is my code:

    void CSkeletonBasics::ProcessInteraction()
    {
    	HRESULT hr;
    	NUI_INTERACTION_FRAME interactionFrame;
    	hr = m_nuiIStream->GetNextFrame(0, &interactionFrame );
    
    	if(FAILED(hr)) return; 
    	
    	int i = 0;
    	for(int ii = 0; ii < NUI_SKELETON_COUNT; ii++)
    	{
    		NUI_USER_INFO user = interactionFrame.UserInfos[ii];
    		if (user.SkeletonTrackingId != 0)
    		{
    			for(int j = 0; j < NUI_USER_HANDPOINTER_COUNT; j++)
    			{
    				NUI_HANDPOINTER_INFO hand = user.HandPointerInfos[j];
    				NUI_HANDPOINTER_STATE state  = (NUI_HANDPOINTER_STATE)hand.State;
    
    				//if(state & NUI_HANDPOINTER_STATE_INTERACTIVE)
    				//	//cout << "Interactive: " << hand.X << " " << hand.Y <<endl;
    				//if(state & NUI_HANDPOINTER_STATE_PRESSED)
    				//	OutputDebugString(L"Pressed Button");
    				if (i>1) 
    					return;
    				if(hand.State != NUI_HANDPOINTER_STATE_NOT_TRACKED && hand.HandEventType>0)
    				{
    					//if(m_pInteractionStreamFunction != NULL)
    					//	m_pInteractionStreamFunction(hand);
    					CString str = "handpointer=";
    					CString t;
    					t.Format(_T("%d"),j);
    					str += t+ "---------hand:";
    					if (hand.HandType == NUI_HAND_TYPE_LEFT)
    						str += "left";
    					if (hand.HandType == NUI_HAND_TYPE_RIGHT)
    						str += "right";
    					str += "----- event: ";
    
    					if (hand.HandEventType == NUI_HAND_EVENT_TYPE_GRIP)
    						str += "grip";
    					if (hand.HandEventType == NUI_HAND_EVENT_TYPE_GRIPRELEASE)
    						str += "release";
    					str += "\n";
    					OutputDebugString(str);
    				}
    				if(hand.State != NUI_HANDPOINTER_STATE_NOT_TRACKED)
    				{
    					if (hand.HandType == NUI_HAND_TYPE_LEFT)
    					{
    						handLeftEvent[i] = hand.HandEventType;
    					}
    					if (hand.HandType == NUI_HAND_TYPE_RIGHT)
    					{
    						handRightEvent[i] = hand.HandEventType;
    
    					}
    				}
    			}
    			i += 1;
    
    		}
    	}
    	
    	
    }

    Thursday, May 23, 2013 3:25 AM
  • Thank you! I had missed something.
    Thursday, May 23, 2013 5:51 AM
  • Hi

    I have been trying the same thing in C# - but it won't work somehow.

    There are no errors and no warnings - it stops with a "TargetInvocationException".

    Does someone know what I did wrong? Thank you for your help! :-)

    Here is my code:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.Threading.Tasks;
    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Data;
    using System.Windows.Documents;
    using System.Windows.Input;
    using System.Windows.Media;
    using System.Windows.Media.Imaging;
    using System.Windows.Navigation;
    using System.Windows.Shapes;
    using Microsoft.Kinect;
    using Microsoft.Kinect.Toolkit;
    using Microsoft.Kinect.Toolkit.Controls;
    using Microsoft.Kinect.Toolkit.Interaction;
    
    namespace Kinect_0._1
    {
    
        
    
        /// <summary>
        /// Interaction logic for MainWindow.xaml
        /// </summary>
        /// 
    
        public partial class MainWindow : Window
        {
            private readonly KinectSensorChooser sensorChooser;
            
    
            KinectSensor sensor;
            short[] pixelData;
            Skeleton[] skeletons;
    
            private UserInfo[] userInfos;
            private InteractionStream its;
            private CADInteractions cadIn; 
    
        public MainWindow()
        {
            InitializeComponent(); // required for a WPF window; without it, sensorChooserUi below is null

            // initialize the sensor chooser and UI
            this.sensorChooser = new KinectSensorChooser();
            this.sensorChooser.KinectChanged += SensorChooserOnKinectChanged;
            this.sensorChooserUi.KinectSensorChooser = this.sensorChooser;
            this.sensorChooser.Start();
            cadIn = new CADInteractions();
        }
    
            private static void SensorChooserOnKinectChanged(object sender, KinectChangedEventArgs args)
            {
                if (args.OldSensor != null)
                {
                    try
                    {
                        args.OldSensor.DepthStream.Range = DepthRange.Default;
                        args.OldSensor.SkeletonStream.EnableTrackingInNearRange = false;
                        args.OldSensor.DepthStream.Disable();
                        args.OldSensor.SkeletonStream.Disable();
                    }
                    catch (InvalidOperationException)
                    {
                        // KinectSensor might enter an invalid state while enabling/disabling streams or stream features.
                        // E.g.: sensor might be abruptly unplugged.
                    }
                }
    
                if (args.NewSensor != null)
                {
                    try
                    {
                        TransformSmoothParameters smooth = new TransformSmoothParameters();
    
                        // Enable things
                        args.NewSensor.SkeletonStream.EnableTrackingInNearRange = true; // enable returning skeletons while depth is in Near Range
                        args.NewSensor.DepthStream.Range = DepthRange.Near;
                        args.NewSensor.ColorStream.Enable(ColorImageFormat.InfraredResolution640x480Fps30);
                        args.NewSensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
                        args.NewSensor.SkeletonStream.Enable(smooth);
    
                    }
                    catch (InvalidOperationException)
                    {
                        // KinectSensor might enter an invalid state while enabling/disabling streams or stream features.
                        // E.g.: sensor might be abruptly unplugged.
                    }
                }
            }
    
            private void WindowLoaded(object sender, RoutedEventArgs e)
            {
                this.sensor = this.sensorChooser.Kinect;
    
                if (null != this.sensor)
                {
    
    
                    this.sensor.DepthFrameReady += this.SensorDepthFrameReady;
                    this.sensor.SkeletonFrameReady += this.SensorSkeletonFrameReady;
    
                    its = new InteractionStream(sensorChooser.Kinect, cadIn);
                    its.InteractionFrameReady += its_InteractionFrameReady;
                }
            }
    
    
            private void SensorDepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
            {
                using (DepthImageFrame depthFrame = e.OpenDepthImageFrame())
                {
                    if (depthFrame != null)
                    {
                        if (pixelData == null)
                        {
                            pixelData = new short[depthFrame.PixelDataLength];
                        }
                        // Copy the pixel data from the image to a temporary array
                        depthFrame.CopyPixelDataTo(pixelData);
                        its.ProcessDepth(depthFrame.GetRawPixelData(), depthFrame.Timestamp);
                    }
                }
            }
    
            private void SensorSkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
            {
                using (SkeletonFrame skeletonFrame = e.OpenSkeletonFrame())
                {
                    if (skeletonFrame != null)
                    {
                        if (skeletons == null)
                        {
                            skeletons = new Skeleton[skeletonFrame.SkeletonArrayLength];
                        }
                        skeletonFrame.CopySkeletonDataTo(skeletons);
                        its.ProcessSkeleton(skeletons, sensorChooser.Kinect.AccelerometerGetCurrentReading(), skeletonFrame.Timestamp);
                    }
                }
            }
    
            private void its_InteractionFrameReady(object sender, InteractionFrameReadyEventArgs e)
            {
                using (InteractionFrame frame = e.OpenInteractionFrame())
                {
                    if (frame != null)
                    {
                        if (this.userInfos == null)
                        {
                            this.userInfos = new UserInfo[InteractionFrame.UserInfoArrayLength];
                        }
    
                        frame.CopyInteractionDataTo(this.userInfos);
                    }
                    else
                    {
                        return;
                    }
                }
    
                foreach (UserInfo userInfo in this.userInfos)
                {
                    foreach (InteractionHandPointer handPointer in userInfo.HandPointers)
                    {
                        string action = null;
    
                        switch (handPointer.HandEventType)
                        {
                            case InteractionHandEventType.Grip:
                                action = "gripped";
                                break;
    
                            case InteractionHandEventType.GripRelease:
                                action = "released";
                                break;
                        }
    
                        if (action != null)
                        {
                            string handSide = "unknown";
    
                            switch (handPointer.HandType)
                            {
                                case InteractionHandType.Left:
                                    handSide = "left";
                                    break;
    
                                case InteractionHandType.Right:
                                    handSide = "right";
                                    break;
                            }
    
                            //Update Label
                            Befehl.Content = "User " + userInfo.SkeletonTrackingId + " " + action + " their " + handSide + "hand.";
                        }
                    }
                }
            }
    
            public class CADInteractions : IInteractionClient
            {
                public InteractionInfo GetInteractionInfoAtLocation(int skeletonTrackingId, InteractionHandType handType, double x, double y) 
                {
                    return new InteractionInfo
                    {
                        IsPressTarget = true,
                        IsGripTarget = true,
                    };
                }
            }
    
            private void WindowClosing(object sender, System.ComponentModel.CancelEventArgs e)
            {
                this.sensorChooser.Stop();
            }
                
        }
    }
    



    Monday, June 10, 2013 11:49 AM
  • Hello,

    I am a "newish" kinect developer. I have followed this thread and have found it quite useful. I have been developing my application in VB.net.

    After some time converting the C# code posted above, I am still having issues with the MDKinectInteractions class.

    I can't seem to understand what the "IsPressTarget" object is.

    It looks like it is supposed to be a Boolean, but I get an error saying that a Boolean cannot be converted to an InteractionInfo object.

    my Class looks like this:

    Imports Microsoft.Kinect
    Imports Microsoft.Kinect.Toolkit
    Imports Microsoft.Kinect.Toolkit.Controls
    Imports Microsoft.Kinect.Toolkit.Interaction


    Public Class MDKinectInteractions
        Implements IInteractionClient
        
        Public Function GetInteractionInfoAtLocation(skeletonTrackingId As Integer, handType As InteractionHandType, x As Double, y As Double) As InteractionInfo Implements IInteractionClient.GetInteractionInfoAtLocation
            Return New InteractionInfo() {IsPressTarget = True, IsGripTarget = True}
        End Function
    End Class

    Obviously VB.NET doesn't like declaring this "inline", so what should I declare the "IsPressTarget" and "IsGripTarget" as?

    Thank you so much :)

    Ben


    • Edited by chefbennyj Saturday, June 15, 2013 1:25 PM
    Saturday, June 15, 2013 1:25 PM
  • Could you explain what you meant by adding the "KinectInteraction170_*.dll" properly? Where should this DLL be added?
    Saturday, June 15, 2013 2:16 PM
  • Register it; it's a COM component.
    Sunday, June 16, 2013 2:17 PM
  • Okay, this took FOREVER, but I finally got the ball rolling using some of the C# code from this thread.

    So if you want to use VB.NET and get the Kinect Interaction Stream working, here is how you do it.

    Honestly, why is this so difficult!

    Here is a sample from my main Form:

    Public Class IFiguredThisBitchOut

        ''' <summary>
        ''' Microsoft Kinect Sensor
        ''' </summary>
        Public kinect As KinectSensor                          ' Kinect Sensor


        ''' <summary>
        '''  Kinect Interactions
        ''' </summary>
        Dim userInfos() As UserInfo
        Dim its As InteractionStream
        Dim kinIn As MDKinectInteractions = New MDKinectInteractions 


    Public Sub Form1_Load(sender As Object, e As EventArgs) Handles MyBase.Load
        ILoadKinectSensor()
    End Sub

      Public Sub ILoadKinectSensor()

            '// Find the Microsoft Kinect on the computer
            For Each potentialSensor In KinectSensor.KinectSensors
                If potentialSensor.Status = KinectStatus.Connected Then

                    kinect = potentialSensor
                    Exit For
                End If
            Next potentialSensor


            ''/ A Kinect is Connected and ready!
            If kinect IsNot Nothing AndAlso kinect.Status = KinectStatus.Connected Then
                'KinectConnected.Invoke(Sub() KinectConnected.Visible = True)
                'KinectConnected.Invoke(Sub() KinectConnected.Text = "Microsoft Kinect Ready!")

                KinectConnected.Visible = True
                KinectConnected.Text = "Microsoft Kinect Ready!"


                ''/ A Kinect is Connected and ready add parameters to the sensor
                If Nothing IsNot kinect Then
                    Try
                        ''// Turn on Skeleton Stream to receive skeleton frames
                        kinect.SkeletonStream.TrackingMode = SkeletonTrackingMode.Seated
                        Dim smoothingParam As TransformSmoothParameters = New TransformSmoothParameters
                        smoothingParam.Smoothing = 0.7!
                        smoothingParam.Correction = 0.5!
                        smoothingParam.Prediction = 0.7!
                        smoothingParam.JitterRadius = 0.5!
                        smoothingParam.MaxDeviationRadius = 0.5!

                        ' kinect.ColorStream.Enable(ColorImageFormat.InfraredResolution640x480Fps30)
                        kinect.SkeletonStream.Enable(smoothingParam)
                        kinect.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30)
                        ''// Start the sensor!
                        kinect.Start()

                        ' ''//Enable Skeleton And Audio
                        'kinect.SkeletonStream.Enable()

                        ''// Turn on the Audio Stream to receive audio for the sensor
                        kinect.AudioSource.EchoCancellationMode = EchoCancellationMode.CancellationOnly

                        kinect.AudioSource.BeamAngleMode = BeamAngleMode.Adaptive
                        kinect.AudioSource.Start()

                        AddHandler kinect.AudioSource.BeamAngleChanged, AddressOf Me.AudioSourceBeamChanged
                        AddHandler kinect.AudioSource.SoundSourceAngleChanged, AddressOf Me.AudioSourceSoundSourceAngleChanged
                        AddHandler kinect.SkeletonFrameReady, AddressOf SensorSkeletonFrameReady
                        AddHandler kinect.AllFramesReady, AddressOf SensorAllFrameReady
                        AddHandler kinect.DepthFrameReady, AddressOf SensorDepthFramesReady
                        its = New InteractionStream(kinect, kinIn)
                        AddHandler its.InteractionFrameReady, AddressOf its_InteractionFrameReady

                    Catch e1 As IOException
                        ''// Some other application is streaming from the same Kinect sensor
                        kinect = Nothing
                    End Try
                End If
            End If
            If Nothing Is kinect Then
                Dim intResponse As Integer
                intResponse = MsgBox("Please check the connection of the Kinect sensor and try again.", _
                       MsgBoxStyle.Exclamation, "No Microsoft Kinect Sensors Found")
                If intResponse = vbOK Then
                    Me.Dispose()
                End If
                Return
            End If
        End Sub

          

    ''' <summary>
    ''' Kinect Sensor Camera Subroutines
    ''' </summary>
    ''' <param name="sender"></param>
    ''' <param name="e"></param>
    ''' <remarks></remarks>

        Public Sub its_InteractionFrameReady(ByVal sender As Object, ByVal e As InteractionFrameReadyEventArgs)
        Using frame As InteractionFrame = e.OpenInteractionFrame()
            If frame IsNot Nothing Then
                If Me.userInfos Is Nothing Then
                    Me.userInfos = New UserInfo(InteractionFrame.UserInfoArrayLength - 1) {}
                End If
                frame.CopyInteractionDataTo(Me.userInfos)
            Else
                Return
            End If
        End Using
            For Each userInfo As UserInfo In Me.userInfos
                For Each handPointer As InteractionHandPointer In userInfo.HandPointers
                    Dim action As String = Nothing
                    Select Case (handPointer.HandEventType)
                        Case InteractionHandEventType.Grip
                            action = "grip"
                        Case InteractionHandEventType.GripRelease
                            action = "release"
                    End Select
                    If (Not (action) Is Nothing) Then
                        Dim handSide As String = "unknown"
                        Select Case (handPointer.HandType)
                            Case InteractionHandType.Left
                                handSide = "left"
                            Case InteractionHandType.Right
                                handSide = "right"
                        End Select
                        'Update Label
                        GripRelease.Text = "Grip/Release: " & action & " " & handSide & " hand"
                    End If
                Next
            Next
        End Sub

    Public Sub SensorDepthFramesReady(ByVal sender As Object, ByVal e As DepthImageFrameReadyEventArgs)
        Using depthFrame As DepthImageFrame = e.OpenDepthImageFrame()
            If depthFrame IsNot Nothing Then
                ' Hand the raw depth pixel data straight to the interaction stream
                its.ProcessDepth(depthFrame.GetRawPixelData(), depthFrame.Timestamp)
            End If
        End Using
    End Sub
    Public Sub SensorSkeletonFrameReady(ByVal sender As Object, ByVal e As SkeletonFrameReadyEventArgs)

        Dim skeletons(-1) As Skeleton

        Using skeletonFrame As SkeletonFrame = e.OpenSkeletonFrame()
            If skeletonFrame IsNot Nothing Then
                skeletons = New Skeleton(skeletonFrame.SkeletonArrayLength - 1) {}
                skeletonFrame.CopySkeletonDataTo(skeletons)

                Dim accelerometerReading = kinect.AccelerometerGetCurrentReading()
                its.ProcessSkeleton(skeletons, accelerometerReading, skeletonFrame.Timestamp)
            End If
        End Using

        If skeletons.Length <> 0 Then
            ''// Focus on the closest person
            If Not Me.kinect.SkeletonStream.AppChoosesSkeletons Then
                ' Ensure AppChoosesSkeletons is set
                Me.kinect.SkeletonStream.AppChoosesSkeletons = True
            End If

            ' Start with a far enough distance
            Dim closestID As Integer = 0
            Dim closestDistance As Single = 10000.0!

            For Each skel As Skeleton In skeletons
                If skel.TrackingId <> 0 AndAlso skel.Position.Z < closestDistance Then
                    closestID = skel.TrackingId
                    closestDistance = skel.Position.Z
                End If
            Next

            If closestID > 0 Then
                ' Track only the closest skeleton
                Me.kinect.SkeletonStream.ChooseSkeletons(closestID)

                ''// Do some regular Kinect skeleton stream gestures here if ya want to
            End If
        End If

    End Sub

    MDKinectInteractions Class

    Imports System.Collections.Generic
    Imports System.Linq
    Imports System.Text
    Imports System.Threading.Tasks
    Imports Microsoft.Kinect.Toolkit.Interaction

    Public Class MDKinectInteractions

        Implements IInteractionClient

        Public Function GetInteractionInfoAtLocation(skeletonTrackingId As Integer, _
                                                      handType As InteractionHandType, _
                                                      x As Double, y As Double) As InteractionInfo _
                                                  Implements IInteractionClient.GetInteractionInfoAtLocation

            Dim result = New InteractionInfo()
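        ' Every location is both a grip and a press target; the press
        ' attraction point (0.5, 0.5) is the centre of the target area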
            result.IsGripTarget = True
            result.IsPressTarget = True
            result.PressAttractionPointX = 0.5
            result.PressAttractionPointY = 0.5
            result.PressTargetControlId = 1

            Return result
        End Function
    End Class

    This now shows a grip and release in VB.NET.

    Hey Microsoft! Yeah, you! Some of us still code in VB.NET. I know... I know, but we do! Can you give us some more samples, please?

    Tuesday, June 18, 2013 1:32 AM
  • Hey guys,

    I have been learning from this thread so far; however, I cannot get hand events such as grip and grip release, even though I can get the X,Y coordinates of the two hands. My HandEventType is always NUI_HAND_EVENT_TYPE_NONE.

    Below is my code; please help me.

    #include "stdafx.h"
    #include <strsafe.h>
    #include "SkeletonBasics.h"
    #include "resource.h"
    #include <iostream>
    
    using namespace std;
    static const float g_JointThickness = 3.0f;
    static const float g_TrackedBoneThickness = 6.0f;
    static const float g_InferredBoneThickness = 1.0f;
    
    /// <summary>
    /// Entry point for the application
    /// </summary>
    /// <param name="hInstance">handle to the application instance</param>
    /// <param name="hPrevInstance">always 0</param>
    /// <param name="lpCmdLine">command line arguments</param>
    /// <param name="nCmdShow">whether to display minimized, maximized, or normally</param>
    /// <returns>status</returns>
    int APIENTRY wWinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPWSTR lpCmdLine, int nCmdShow)
    {
        CSkeletonBasics application;
        return application.Run(hInstance, nCmdShow);
    }
    
    /// <summary>
    /// Constructor
    /// </summary>
    CSkeletonBasics::CSkeletonBasics() :
        m_pD2DFactory(NULL),
        m_pInteractionStreamHandle(INVALID_HANDLE_VALUE),
        m_hNextInteractionFrameEvent(INVALID_HANDLE_VALUE),
        m_hNextSkeletonEvent(INVALID_HANDLE_VALUE),
        m_pSkeletonStreamHandle(INVALID_HANDLE_VALUE),
        m_bSeatedMode(false),
        m_hNextDepthFrameEvent(INVALID_HANDLE_VALUE),
        m_pDepthStreamHandle(INVALID_HANDLE_VALUE),
        m_bNearMode(false),
        m_pRenderTarget(NULL),
        m_pBrushJointTracked(NULL),
        m_pBrushJointInferred(NULL),
        m_pBrushBoneTracked(NULL),
        m_pBrushBoneInferred(NULL),
        m_pNuiSensor(NULL)
    {
        ZeroMemory(m_Points,sizeof(m_Points));
    	// create heap storage for depth pixel data in RGBX format
        m_depthRGBX = new BYTE[cDepthWidth*cDepthHeight*cBytesPerPixel];
    }
    
    /// <summary>
    /// Destructor
    /// </summary>
    CSkeletonBasics::~CSkeletonBasics()
    {
        if (m_pNuiSensor)
        {
            m_pNuiSensor->NuiShutdown();
        }
    
        if (m_hNextSkeletonEvent && (m_hNextSkeletonEvent != INVALID_HANDLE_VALUE))
        {
            CloseHandle(m_hNextSkeletonEvent);
        }
        if (m_hNextDepthFrameEvent != INVALID_HANDLE_VALUE)
        {
            CloseHandle(m_hNextDepthFrameEvent);
        }
        if (m_hNextInteractionFrameEvent != INVALID_HANDLE_VALUE)
        {
            CloseHandle(m_hNextInteractionFrameEvent);
        }
        // clean up the depth pixel buffer allocated in the constructor
        delete[] m_depthRGBX;

        // clean up Direct2D objects
        DiscardDirect2DResources();
    
        // clean up Direct2D
        SafeRelease(m_pD2DFactory);
    
        SafeRelease(m_pNuiSensor);
    }
    
    /// <summary>
    /// Creates the main window and begins processing
    /// </summary>
    /// <param name="hInstance">handle to the application instance</param>
    /// <param name="nCmdShow">whether to display minimized, maximized, or normally</param>
    int CSkeletonBasics::Run(HINSTANCE hInstance, int nCmdShow)
    {
        MSG       msg = {0};
        WNDCLASS  wc  = {0};
    
        // Dialog custom window class
        wc.style         = CS_HREDRAW | CS_VREDRAW;
        wc.cbWndExtra    = DLGWINDOWEXTRA;
        wc.hInstance     = hInstance;
        wc.hCursor       = LoadCursorW(NULL, IDC_ARROW);
        wc.hIcon         = LoadIconW(hInstance, MAKEINTRESOURCE(IDI_APP));
        wc.lpfnWndProc   = DefDlgProcW;
        wc.lpszClassName = L"SkeletonBasicsAppDlgWndClass";
    
        if (!RegisterClassW(&wc))
        {
            return 0;
        }
    
        // Create main application window
        HWND hWndApp = CreateDialogParamW(
            hInstance,
            MAKEINTRESOURCE(IDD_APP),
            NULL,
            (DLGPROC)CSkeletonBasics::MessageRouter, 
            reinterpret_cast<LPARAM>(this));
    
        // Show window
        ShowWindow(hWndApp, nCmdShow);
    
        const int eventCount = 1;
        HANDLE hEvents[eventCount];
    
        // Main message loop
        while (WM_QUIT != msg.message)
        {
            hEvents[0] = m_hNextSkeletonEvent;
    		//hEvents[1] = m_hNextDepthFrameEvent;
    		//hEvents[2] = m_hNextInteractionFrameEvent;
            // Check to see if we have either a message (by passing in QS_ALLEVENTS)
            // Or a Kinect event (hEvents)
            // Update() will check for Kinect events individually, in case more than one are signalled
           // DWORD dwEvent = MsgWaitForMultipleObjects(eventCount, hEvents, FALSE, INFINITE, QS_ALLINPUT);
    
            // Check if this is an event we're waiting on and not a timeout or message
           // if (WAIT_OBJECT_0 == dwEvent)
           // {
                Update();
           // }
    
            if (PeekMessageW(&msg, NULL, 0, 0, PM_REMOVE))
            {
                // If a dialog message will be taken care of by the dialog proc
                if ((hWndApp != NULL) && IsDialogMessageW(hWndApp, &msg))
                {
                    continue;
                }
    
                TranslateMessage(&msg);
                DispatchMessageW(&msg);
            }
        }
    
        return static_cast<int>(msg.wParam);
    }
    
    /// <summary>
    /// Main processing function
    /// </summary>
    void CSkeletonBasics::Update()
    {
        if (NULL == m_pNuiSensor)
        {
            return;
        }
    
        // Wait for 0ms, just quickly test if it is time to process a skeleton
        if ( WAIT_OBJECT_0 == WaitForSingleObject(m_hNextSkeletonEvent, 0) )
        {
            ProcessSkeleton();
        }
    	if ( WAIT_OBJECT_0 == WaitForSingleObject(m_hNextDepthFrameEvent, 0) )
        {
            ProcessDepth();
        }
    	if ( WAIT_OBJECT_0 == WaitForSingleObject(m_hNextInteractionFrameEvent, 0) )
        {
            ProcessInteraction();
        }
    }
    
    /// <summary>
    /// Handles window messages, passes most to the class instance to handle
    /// </summary>
    /// <param name="hWnd">window message is for</param>
    /// <param name="uMsg">message</param>
    /// <param name="wParam">message data</param>
    /// <param name="lParam">additional message data</param>
    /// <returns>result of message processing</returns>
    LRESULT CALLBACK CSkeletonBasics::MessageRouter(HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
    {
        CSkeletonBasics* pThis = NULL;
    
        if (WM_INITDIALOG == uMsg)
        {
            pThis = reinterpret_cast<CSkeletonBasics*>(lParam);
            SetWindowLongPtr(hWnd, GWLP_USERDATA, reinterpret_cast<LONG_PTR>(pThis));
        }
        else
        {
            pThis = reinterpret_cast<CSkeletonBasics*>(::GetWindowLongPtr(hWnd, GWLP_USERDATA));
        }
    
        if (pThis)
        {
            return pThis->DlgProc(hWnd, uMsg, wParam, lParam);
        }
    
        return 0;
    }
    
    /// <summary>
    /// Handle windows messages for the class instance
    /// </summary>
    /// <param name="hWnd">window message is for</param>
    /// <param name="uMsg">message</param>
    /// <param name="wParam">message data</param>
    /// <param name="lParam">additional message data</param>
    /// <returns>result of message processing</returns>
    LRESULT CALLBACK CSkeletonBasics::DlgProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam)
    {
    	
        switch (message)
        {
        case WM_INITDIALOG:
            {
                // Bind application window handle
                m_hWnd = hWnd;
    
                // Init Direct2D
                D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, &m_pD2DFactory);
    
                // Look for a connected Kinect, and create it if found
                CreateFirstConnected();
    			
            }
            break;
    
            // If the titlebar X is clicked, destroy app
        case WM_CLOSE:
            DestroyWindow(hWnd);
            break;
    
        case WM_DESTROY:
            // Quit the main message pump
            PostQuitMessage(0);
            break;
    
            // Handle button press
        case WM_COMMAND:
            // If it was for the seated mode control and a clicked event, toggle seated mode
            if (IDC_CHECK_SEATED == LOWORD(wParam) && BN_CLICKED == HIWORD(wParam))
            {
                // Toggle our internal state for seated mode
                m_bSeatedMode = !m_bSeatedMode;

                if (NULL != m_pNuiSensor)
                {
                    // Set seated mode for the sensor based on our internal state
                    m_pNuiSensor->NuiSkeletonTrackingEnable(m_hNextSkeletonEvent, m_bSeatedMode ? NUI_SKELETON_TRACKING_FLAG_ENABLE_SEATED_SUPPORT : 0);
                }
            }
            break;
        }
    
        return FALSE;
    }
    
    /// <summary>
    /// Create the first connected Kinect found 
    /// </summary>
    /// <returns>indicates success or failure</returns>
    HRESULT CSkeletonBasics::CreateFirstConnected()
    {
        INuiSensor * pNuiSensor;
    	
        int iSensorCount = 0;
        HRESULT hr = NuiGetSensorCount(&iSensorCount);
        if (FAILED(hr))
        {
            return hr;
        }
    
        // Look at each Kinect sensor
        for (int i = 0; i < iSensorCount; ++i)
        {
            // Create the sensor so we can check status, if we can't create it, move on to the next
            hr = NuiCreateSensorByIndex(i, &pNuiSensor);
            if (FAILED(hr))
            {
                continue;
            }
    
            // Get the status of the sensor, and if connected, then we can initialize it
            hr = pNuiSensor->NuiStatus();
            if (S_OK == hr)
            {
                m_pNuiSensor = pNuiSensor;
                break;
            }
    
            // This sensor wasn't OK, so release it since we're not using it
            pNuiSensor->Release();
        }
    
        if (NULL != m_pNuiSensor)
        {
            // Initialize the Kinect and specify that we'll be using skeleton
            hr = m_pNuiSensor->NuiInitialize(NUI_INITIALIZE_FLAG_USES_SKELETON | NUI_INITIALIZE_FLAG_USES_DEPTH); 
            if (SUCCEEDED(hr))
            {
                // Create an event that will be signaled when skeleton data is available
                m_hNextSkeletonEvent = CreateEventW(NULL, TRUE, FALSE, NULL);
    			// Create an event that will be signaled when depth data is available
                m_hNextDepthFrameEvent = CreateEvent(NULL, TRUE, FALSE, NULL);
    			// Create an event that will be signaled when interaction data is available
                m_hNextInteractionFrameEvent = CreateEvent(NULL, TRUE, FALSE, NULL);
    
    			// Open a depth image stream to receive depth frames
                hr = m_pNuiSensor->NuiImageStreamOpen(
                    NUI_IMAGE_TYPE_DEPTH,
                    NUI_IMAGE_RESOLUTION_640x480,
                    0,
                    2,
                    m_hNextDepthFrameEvent,
                    &m_pDepthStreamHandle);
    		    if (FAILED(hr))
    			{
    				SetStatusMessage(L"cannot open image stream!");
    				return E_FAIL;
    			}
                // Open a skeleton stream to receive skeleton data
                hr = m_pNuiSensor->NuiSkeletonTrackingEnable(m_hNextSkeletonEvent, NUI_SKELETON_TRACKING_FLAG_ENABLE_IN_NEAR_RANGE); 
                if (FAILED(hr))
    			{
    				SetStatusMessage(L"cannot enbale skeletal stream!");
    				return E_FAIL;
    			}			
    			hr = NuiCreateInteractionStream(m_pNuiSensor,(INuiInteractionClient *)&m_nuiIClient,&m_nuiIStream);
    		    if (FAILED(hr))
    			{
    				SetStatusMessage(L"cannot create interaction stream!");
    				return E_FAIL;
    			}
    			hr = m_nuiIStream->Enable(m_hNextInteractionFrameEvent);
    			if (FAILED(hr))
    			{
    				SetStatusMessage(L"cannot enable interaction stream!");
    				return E_FAIL;
    			}
    		}
    
        }
    
        if (NULL == m_pNuiSensor || FAILED(hr))
        {
            SetStatusMessage(L"No ready Kinect found!");
            return E_FAIL;
        }
    
        return hr;
    }
    void CSkeletonBasics::ProcessInteraction()
    {
    	 NUI_INTERACTION_FRAME Interaction_Frame;
    	m_nuiIStream->GetNextFrame( 0,&Interaction_Frame );
    	 int trackingID = 0;
    	 int event=0;
    	 
    	int i = 0;
    	for(int ii = 0; ii < NUI_SKELETON_COUNT; ii++)
    	{
    		NUI_USER_INFO user = Interaction_Frame.UserInfos[ii];
    		if (user.SkeletonTrackingId != 0)
    		{
    			for(int j = 0; j < NUI_USER_HANDPOINTER_COUNT; j++)
    			{
    				NUI_HANDPOINTER_INFO hand = user.HandPointerInfos[j];
    				NUI_HANDPOINTER_STATE state  = (NUI_HANDPOINTER_STATE)hand.State;
    
    
    				if (i>1) 
    					return;
    				if(hand.State != NUI_HANDPOINTER_STATE_NOT_TRACKED && hand.HandEventType>0)
    				{
    					//if(m_pInteractionStreamFunction != NULL)
    					//	m_pInteractionStreamFunction(hand);
    					string str = "handpointer=";
    					string t;
    					
    					str += t;
    					str += "---------hand:";
    					if (hand.HandType == NUI_HAND_TYPE_LEFT)
    						str += "left";
    					if (hand.HandType == NUI_HAND_TYPE_RIGHT)
    						str += "right";
    					str += "----- event: ";
    
    					if (hand.HandEventType == NUI_HAND_EVENT_TYPE_GRIP)
    						str += "grip";
    					if (hand.HandEventType == NUI_HAND_EVENT_TYPE_GRIPRELEASE)
    						str += "release";
    					str += "\n";
    					//OutputDebugString(str);
    					//wstringstream wss;
    					//wss <<hand.X;
    					//SetDlgItemText(this->m_hWnd,IDC_STATUS1, wss.str().c_str());
    					//m_pRenderTarget->DrawTextW(str, length
    				}
    				if(hand.State != NUI_HANDPOINTER_STATE_NOT_TRACKED)
    				{
    					if (hand.HandType == NUI_HAND_TYPE_LEFT)
    					{
    						wstringstream wss;
    						wss <<hand.X;
    						
    						OutputDebugString(wss.str().c_str());
    						OutputDebugString(L", ");
    						wss <<hand.Y;
    						OutputDebugString(wss.str().c_str());
    						OutputDebugString(L"\n");
    						//OutputDebugString(L"yay");
    					}
    					if (hand.HandType == NUI_HAND_TYPE_RIGHT)
    					{
    						//OutputDebugString(L"yay");
    
    					}
    				}
    			}
    			i += 1;
    		}
    	}
    }
    /// <summary>
    /// Handle new skeleton data
    /// </summary>
    void CSkeletonBasics::ProcessSkeleton()
    {
        NUI_SKELETON_FRAME skeletonFrame = {0};
    	HRESULT hr;
        hr = m_pNuiSensor->NuiSkeletonGetNextFrame(0, &skeletonFrame);
        if ( FAILED(hr) )
        {
            return;
        }
    	//skeletonFrame.SkeletonData
        // smooth out the skeleton data
        m_pNuiSensor->NuiTransformSmooth(&skeletonFrame, NULL);
    	Vector4 v;
    	m_pNuiSensor->NuiAccelerometerGetCurrentReading(&v);
    	hr =m_nuiIStream->ProcessSkeleton(NUI_SKELETON_COUNT, 
    	  skeletonFrame.SkeletonData,
    	  &v,
    	  skeletonFrame.liTimeStamp);
    	 if ( FAILED(hr) )
        {
            return;
        }
    	 
        // Ensure Direct2D is ready to draw
        hr = EnsureDirect2DResources( );
        if ( FAILED(hr) )
        {
            return;
        }
    
        m_pRenderTarget->BeginDraw();
        m_pRenderTarget->Clear( );
    
        RECT rct;
        GetClientRect( GetDlgItem( m_hWnd, IDC_VIDEOVIEW ), &rct);
        int width = rct.right;
        int height = rct.bottom;
    
        for (int i = 0 ; i < NUI_SKELETON_COUNT; ++i)
        {
            NUI_SKELETON_TRACKING_STATE trackingState = skeletonFrame.SkeletonData[i].eTrackingState;
    
            if (NUI_SKELETON_TRACKED == trackingState)
            {
                // We're tracking the skeleton, draw it
                DrawSkeleton(skeletonFrame.SkeletonData[i], width, height);
            }
            else if (NUI_SKELETON_POSITION_ONLY == trackingState)
            {
                // we've only received the center point of the skeleton, draw that
                D2D1_ELLIPSE ellipse = D2D1::Ellipse(
                    SkeletonToScreen(skeletonFrame.SkeletonData[i].Position, width, height),
                    g_JointThickness,
                    g_JointThickness
                    );
    
                m_pRenderTarget->DrawEllipse(ellipse, m_pBrushJointTracked);
            }
        }
    
        hr = m_pRenderTarget->EndDraw();
        if (D2DERR_RECREATE_TARGET == hr)
        {
            hr = S_OK;
            DiscardDirect2DResources();
        }
    	
    }
    /// <summary>
    /// Handle new depth data
    /// </summary>
    void CSkeletonBasics::ProcessDepth()
    {
    	HRESULT hr;
    	NUI_IMAGE_FRAME imageFrame;
        hr = m_pNuiSensor->NuiImageStreamGetNextFrame(m_pDepthStreamHandle, 0, &imageFrame);
        if (FAILED(hr))
        {
            return;
        }
    	BOOL nearMode;
        INuiFrameTexture* pTexture;
        // Get the depth image pixel texture
        hr = m_pNuiSensor->NuiImageFrameGetDepthImagePixelFrameTexture(
            m_pDepthStreamHandle, &imageFrame, &nearMode, &pTexture);
        if (FAILED(hr))
        {
            goto ReleaseFrame;
        }
        NUI_LOCKED_RECT LockedRect;
    
        // Lock the frame data so the Kinect knows not to modify it while we're reading it
        pTexture->LockRect(0, &LockedRect, NULL, 0);
    	
        if (LockedRect.Pitch != 0)
        {
            hr = m_nuiIStream->ProcessDepth(LockedRect.size, PBYTE(LockedRect.pBits), imageFrame.liTimeStamp);
            if (FAILED(hr))
            {
                // Don't return early: the texture still needs to be unlocked
                // and the frame released below, even on failure
                SetStatusMessage(L"ProcessDepth failed!");
            }
        }
        pTexture->UnlockRect(0);
    
    
    
    ReleaseFrame:
        // Release the frame
        m_pNuiSensor->NuiImageStreamReleaseFrame(m_pDepthStreamHandle, &imageFrame);
    }
    

    Monday, August 5, 2013 12:22 AM
  • It looks like you missed this:

    http://social.msdn.microsoft.com/Forums/en-US/45eb3409-ad15-44f2-8b15-7a1b8054d31f/kinect-interaction-does-only-track-left-hand#f1b59feb-c8f9-4b90-a54b-a2fc32fe3aea

    class CInteractionClient : public INuiInteractionClient
    {
    public:
        CInteractionClient()  {}
        ~CInteractionClient() {}

        STDMETHOD(GetInteractionInfoAtLocation)(THIS_ DWORD skeletonTrackingId, NUI_HAND_TYPE handType, FLOAT x, FLOAT y, _Out_ NUI_INTERACTION_INFO *pInteractionInfo)
        {
            if (pInteractionInfo)
            {
                // Every location is a grip target (and not a press target)
                pInteractionInfo->IsPressTarget         = false;
                pInteractionInfo->PressTargetControlId  = 0;
                pInteractionInfo->PressAttractionPointX = 0.f;
                pInteractionInfo->PressAttractionPointY = 0.f;
                pInteractionInfo->IsGripTarget          = true;
                return S_OK;
            }
            return E_POINTER;
        }

        // Minimal COM plumbing for a statically allocated client
        STDMETHODIMP_(ULONG) AddRef()                                { return 2; }
        STDMETHODIMP_(ULONG) Release()                               { return 1; }
        STDMETHODIMP         QueryInterface(REFIID riid, void **ppv) { *ppv = this; return S_OK; }
    };
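
    For completeness, a minimal sketch of how such a client could be wired up, assuming the member names from the C++ code earlier in the thread (m_pNuiSensor, m_nuiIStream, m_hNextInteractionFrameEvent):

    // Sketch only: the client object must outlive the interaction stream
    CInteractionClient m_nuiIClient;

    HRESULT hr = NuiCreateInteractionStream(m_pNuiSensor, &m_nuiIClient, &m_nuiIStream);
    if (SUCCEEDED(hr))
    {
        // Signalled whenever a new interaction frame is ready to fetch
        hr = m_nuiIStream->Enable(m_hNextInteractionFrameEvent);
    }

    Note that no cast is needed: since the class derives from INuiInteractionClient, &m_nuiIClient converts to INuiInteractionClient* directly.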


    Monday, August 5, 2013 3:07 AM
  • Hi,

    Does this track only the primary hand at a time?

    Is there any way to track both hands at the same time?

    For example, user 1 gripped their left hand and released their right hand.

    Thanks in advance.

    Tuesday, August 6, 2013 2:18 AM
  • Not only both hands: it can also track two users at the same time.
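
    As a rough sketch (assuming a NUI_INTERACTION_FRAME filled in by m_nuiIStream->GetNextFrame, as in the earlier posts), one pass over the frame reads both hands of every tracked user:

    for (int u = 0; u < NUI_SKELETON_COUNT; ++u)
    {
        const NUI_USER_INFO& user = interactionFrame.UserInfos[u];
        if (user.SkeletonTrackingId == 0)
            continue;   // this slot is not tracking anyone

        for (int h = 0; h < NUI_USER_HANDPOINTER_COUNT; ++h)
        {
            const NUI_HANDPOINTER_INFO& hand = user.HandPointerInfos[h];
            if (hand.State == NUI_HANDPOINTER_STATE_NOT_TRACKED)
                continue;

            // Each user exposes separate left and right hand pointers, so grip
            // events from different hands (and different users) arrive independently
            if (hand.HandEventType == NUI_HAND_EVENT_TYPE_GRIP)
                OutputDebugString(hand.HandType == NUI_HAND_TYPE_LEFT ? L"left grip\n" : L"right grip\n");
            else if (hand.HandEventType == NUI_HAND_EVENT_TYPE_GRIPRELEASE)
                OutputDebugString(hand.HandType == NUI_HAND_TYPE_LEFT ? L"left release\n" : L"right release\n");
        }
    }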
    Tuesday, August 6, 2013 2:56 AM
  • It's such a stupid question, but I have tried all day to calculate the distance between the two hands, abs(leftHand.x - rightHand.x).

    How can I record the coordinates of the 2 hands?

    The code below can only work with one hand at a time:

    foreach (UserInfo userInfo in this.userInfos)
    {
        foreach (InteractionHandPointer handPointer in userInfo.HandPointers)
        {
            x = handPointer.RawX;
            y = handPointer.RawY;
            z = handPointer.RawZ;
        }
    }...

    thank you so much

    Tuesday, August 6, 2013 11:34 AM
  • for(int ii = 0; ii < NUI_SKELETON_COUNT; ii++)
    {
        NUI_USER_INFO user = interactionFrame.UserInfos[ii];
        if (user.SkeletonTrackingId != 0)
        {
            for(int j = 0; j < NUI_USER_HANDPOINTER_COUNT; j++)
            {
                NUI_HANDPOINTER_INFO hand = user.HandPointerInfos[j];
                NUI_HANDPOINTER_STATE state = (NUI_HANDPOINTER_STATE)hand.State;

                //if(state & NUI_HANDPOINTER_STATE_INTERACTIVE)
                //    cout << "Interactive: " << hand.X << " " << hand.Y << endl;
                //if(state & NUI_HANDPOINTER_STATE_PRESSED)
                //    OutputDebugString(L"Pressed Button");

                if (i > 1)
                    return;

                if (hand.State != NUI_HANDPOINTER_STATE_NOT_TRACKED && hand.HandEventType > 0)
                {
                    //if(m_pInteractionStreamFunction != NULL)
                    //    m_pInteractionStreamFunction(hand);
                    CString str = "handpointer=";
                    CString t;
                    t.Format(_T("%d"), j);
                    str += t + "---------hand:";
                    if (hand.HandType == NUI_HAND_TYPE_LEFT)
                        str += "left";
                    if (hand.HandType == NUI_HAND_TYPE_RIGHT)
                        str += "right";
                    str += "----- event: ";
                    if (hand.HandEventType == NUI_HAND_EVENT_TYPE_GRIP)
                        str += "grip";
                    if (hand.HandEventType == NUI_HAND_EVENT_TYPE_GRIPRELEASE)
                        str += "release";
                    str += "\n";
                    OutputDebugString(str);
                }

                if (hand.State != NUI_HANDPOINTER_STATE_NOT_TRACKED)
                {
                    if (hand.HandType == NUI_HAND_TYPE_LEFT)
                    {
                        handLeftEvent[i] = hand.HandEventType;
                    }
                    if (hand.HandType == NUI_HAND_TYPE_RIGHT)
                    {
                        handRightEvent[i] = hand.HandEventType;
                    }
                }
            }
            i += 1;
        }
    }

    This code was already posted above.

    By the way, you can easily get the hand positions inside ProcessSkeleton().
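
    For the distance question, here is a minimal sketch (assuming a NUI_SKELETON_DATA entry from ProcessSkeleton() whose eTrackingState is NUI_SKELETON_TRACKED) that measures the 3D distance between the two hand joints in skeleton space, in meters:

    #include <cmath>

    // Sketch: Euclidean distance between the left and right hand joints
    float HandDistance(const NUI_SKELETON_DATA& skel)
    {
        const Vector4& l = skel.SkeletonPositions[NUI_SKELETON_POSITION_HAND_LEFT];
        const Vector4& r = skel.SkeletonPositions[NUI_SKELETON_POSITION_HAND_RIGHT];
        const float dx = l.x - r.x;
        const float dy = l.y - r.y;
        const float dz = l.z - r.z;
        return sqrtf(dx * dx + dy * dy + dz * dz);
    }

    In managed code the same idea applies: separate the hand pointers by HandType first, then use the RawX/RawY/RawZ of each one.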

    Wednesday, August 7, 2013 2:26 AM
  • Can you give me an example project, please? I am starting to learn Kinect interaction and C++.
    Saturday, December 7, 2013 11:02 AM
  • Can you send me your project, please? My email: hoang.xuantuan5@gmail.com
    Saturday, December 7, 2013 11:04 AM
  • chefbennyj, Just wanted to give you the props you deserve on this!
    • Edited by Dan Pruitt Thursday, January 22, 2015 8:41 PM
    Thursday, January 22, 2015 8:40 PM