Sending Coordinates of tracked object to another process in K4W's FaceTrackingBasics Program

  • Question

  • I am using the Kinect for Windows Developer Toolkit v1.5.2, which includes the Face Tracking Basics-WPF example.

    In the FaceTrackingViewer class, I modified a section of the code so that I can read the coordinates of the tracked frame from the Get3DShape() accessor. I retrieve the x, y, z points and pass these values to a memory-mapped file.
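
    Roughly, the part that pulls out the coordinates looks like the sketch below (written from memory, so the variable names may differ slightly from the toolkit sample):

        // Sketch of my coordinate extraction (names are from memory and may
        // differ slightly from the toolkit sample's frame handler).
        FaceTrackFrame frame = this.faceTracker.Track(
            colorImageFormat, colorImage, depthImageFormat, depthImage, skeletonOfInterest);

        if (frame.TrackSuccessful)
        {
            EnumIndexableCollection<FeaturePoint, Vector3DF> facePoints3D = frame.Get3DShape();

            foreach (Vector3DF point in facePoints3D)
            {
                // point.X, point.Y, point.Z are the values I want to ship out.
            }
        }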

    I am trying to pick up the values written to the memory-mapped file in LabVIEW and use them on a controller (NI myRIO).

    I have the following code:                       

                          
        // cut from here
        using (MemoryStream stream = new MemoryStream())
        {
            var sw = new StreamWriter(stream);
            int n = 121;

            foreach (Vector3DF[] vector in facePoints3D.GetSlices(n))
            {
                var copier = new VectorSerializer();

                // Serialize the slice and round-trip it as a check.
                byte[] bytearray = copier.SerializeVectors(vector);
                Vector3DF[] copiedVectors = copier.DeserializeVectors(bytearray);

                // Initialize unmanaged memory to hold the array.
                int size = Marshal.SizeOf(bytearray[0]) * bytearray.Length;
                IntPtr pnt = Marshal.AllocHGlobal(size);

                // Copy the array to unmanaged memory.
                Marshal.Copy(bytearray, 0, pnt, bytearray.Length);

                // Copy the unmanaged array back to another managed array.
                byte[] bytearray2 = new byte[bytearray.Length];
                Marshal.Copy(pnt, bytearray2, 0, bytearray.Length);

                try
                {
                    using (MemoryMappedFile mmf = MemoryMappedFile.CreateNew(
                        "vectmap", 1024 * 1024, MemoryMappedFileAccess.ReadWrite))
                    {
                        bool mutexCreated;
                        Mutex mutex = new Mutex(true, "vectmapmutex", out mutexCreated);

                        // Copy to the memory-mapped file.
                        using (MemoryMappedViewStream mapstream = mmf.CreateViewStream())
                        {
                            BinaryWriter binwriter = new BinaryWriter(mapstream);
                            binwriter.Write(bytearray2);
                        }

                        mutex.ReleaseMutex();
                        mutex.WaitOne(1000);

                        // Read back to verify the write.
                        using (MemoryMappedViewStream mapstream = mmf.CreateViewStream())
                        {
                            BinaryReader binreader = new BinaryReader(mapstream);
                            // return binreader.ReadBytes((int)mapstream.Length);
                            Console.WriteLine("vectors: {0}", binreader.ReadBoolean());
                        }

                        // mutex.ReleaseMutex();
                    }
                }
                finally
                {
                    // Free the unmanaged memory.
                    Marshal.FreeHGlobal(pnt);
                }

                Console.WriteLine(string.Join(",", bytearray2));
                // sw.WriteLine(string.Join(",", bytearray));
                // sw.Flush();

                stream.Position = 0;  // rewind so the stream can be read from the beginning
            }   // end of foreach over slices
        }       // end of using (MemoryStream)

    I've tried to save each Vector3DF struct into an array, which is where the GetSlices(n) call comes from. I also modified the Vector3DF definition a little bit (see gist.github.com/jsmarsch/d0dcade8c656b94f5c1c).

    Here is the class that defines GetSlices:

        // Define GetSlices: splits a sequence into consecutive arrays of at most n elements.
        public static class Ext
        {
            public static IEnumerable<T[]> GetSlices<T>(this IEnumerable<T> source, int n)
            {
                IEnumerable<T> it = source;
                T[] slice = it.Take(n).ToArray();
                it = it.Skip(n);
                while (slice.Length != 0)
                {
                    yield return slice;
                    slice = it.Take(n).ToArray();
                    it = it.Skip(n);
                }
            }
        }
        //End GetSlices
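
    To be clear about what I expect GetSlices to do, here is a tiny usage example (ints instead of Vector3DF):

        // Sanity check: 5 items sliced in chunks of 2 should print "1,2", "3,4", "5".
        int[] numbers = { 1, 2, 3, 4, 5 };
        foreach (int[] slice in numbers.GetSlices(2))
        {
            Console.WriteLine(string.Join(",", slice));
        }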
    

    The thing is, whenever I try to open the memory-mapped file from LabVIEW, I get a "file does not exist" error.

    I also have issues with the mutex portion of the code: I am getting a threading synchronization error.
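
    For reference, here is a rough sketch of the pattern I suspect I need (not tested): create the map and the mutex once, keep them alive for the lifetime of the application so the map still exists when LabVIEW opens it, and only do the per-frame write inside the loop.

        // Sketch only (untested): the map and mutex live as long as the app,
        // and each frame just acquires the mutex and rewrites the payload.
        private MemoryMappedFile sharedMap;
        private Mutex sharedMutex;

        private void InitSharedMemory()
        {
            this.sharedMap = MemoryMappedFile.CreateNew("vectmap", 1024 * 1024);
            this.sharedMutex = new Mutex(false, "vectmapmutex");
        }

        private void WriteVectors(byte[] payload)
        {
            this.sharedMutex.WaitOne();
            try
            {
                using (MemoryMappedViewStream view = this.sharedMap.CreateViewStream())
                using (BinaryWriter writer = new BinaryWriter(view))
                {
                    writer.Write(payload.Length);   // length prefix so the reader knows how much to read
                    writer.Write(payload);
                }
            }
            finally
            {
                this.sharedMutex.ReleaseMutex();
            }
        }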

    Can you please take a look at my code and see if there is anything you could do to help?

    Monday, January 5, 2015 7:24 PM


All replies

  • The question is not Kinect-specific. The fact that it is Kinect data is irrelevant, since you would have the same issue if you copied any other byte data. You may have better luck getting an answer from someone more familiar with memory-mapped file access and LabVIEW. Can you access the memory from another application?

    http://msdn.microsoft.com/en-us/library/windows/desktop/aa366551(v=vs.85).aspx

    http://msdn.microsoft.com/en-us/library/dd997372(v=vs.110).aspx
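
    As a quick test, a bare-bones console reader along these lines (assuming your writer keeps the "vectmap" map and "vectmapmutex" mutex open) would tell you whether the problem is on the LabVIEW side:

        using System;
        using System.IO;
        using System.IO.MemoryMappedFiles;
        using System.Threading;

        // Minimal test reader: run this while your Kinect app holds "vectmap" open.
        class MapReader
        {
            static void Main()
            {
                using (Mutex mutex = Mutex.OpenExisting("vectmapmutex"))
                using (MemoryMappedFile mmf = MemoryMappedFile.OpenExisting("vectmap"))
                using (MemoryMappedViewStream view = mmf.CreateViewStream())
                {
                    mutex.WaitOne();
                    try
                    {
                        BinaryReader reader = new BinaryReader(view);
                        byte[] head = reader.ReadBytes(16);     // just dump the first few bytes
                        Console.WriteLine(BitConverter.ToString(head));
                    }
                    finally
                    {
                        mutex.ReleaseMutex();
                    }
                }
            }
        }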


    Carmine Sirignano - MSFT


    Monday, January 5, 2015 11:59 PM
  • Hi Carmine,

    I have fixed this; I eventually used UDP to push the data. But I have one more question. The Kinect Face Tracking Basics C# example in the Developer Toolkit Browser v1.5.2 is where I got the snippet of code above, albeit slightly modified by me.
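
    In case it helps anyone else, the UDP side boiled down to something like this (the address and port are only examples; LabVIEW listens on the same port):

        using System.Net;
        using System.Net.Sockets;

        // Sketch of the UDP approach that replaced the memory-mapped file.
        // 127.0.0.1:5005 is only an example endpoint.
        private static readonly UdpClient udp = new UdpClient();
        private static readonly IPEndPoint target =
            new IPEndPoint(IPAddress.Parse("127.0.0.1"), 5005);

        private static void SendVectors(byte[] payload)
        {
            udp.Send(payload, payload.Length, target);
        }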

    I am trying to use the example to track a mannequin head, and I have the skeleton stream turned off (code here).

    So, with the skeleton stream disabled, I can see the meshed triangle vertices on the mannequin head, but I can't get updated frames after the first frame.

    So my question is: will the C# code example not work with a mannequin head the way the C++ code does (as implied by Nikolai's post)? Also, I am using the Xbox One Kinect sensor. Could this be the reason why the depth frames are not being used to generate the distance value?

    I look forward to your reply.

    Thanks!

    Thursday, January 8, 2015 7:56 PM
  • Are you sure you are using the Xbox One sensor and not the 360 version? The v1.8 version of face tracking doesn't work unless a v1 sensor is attached. Additionally, you cannot have the v1 and v2 sensor APIs in the same WPF application. Regardless, the managed and unmanaged versions of the API are identical; the managed version is just a wrapper around the unmanaged one. Not using skeletal hints will affect accuracy and performance; there are other threads on this forum that discuss that further.

    Keep in mind, a mannequin is not a person. There are subtleties in people's faces that the tracker may not find on a mannequin, and therefore it may not detect a face at all. You should try it on a real person first and see whether that works.

    If you want to use the v2 sensor (Xbox One), you need the v2 SDK, which has its own face tracking APIs; they require depth, IR, body, and color information to work, since they are much more robust.


    Carmine Sirignano - MSFT

    Friday, January 9, 2015 7:25 PM
  • Hi Carmine,

    I am actually using the Xbox 360 sensor. My bad! I did not realize they are different. All the same, I see it working when I am tracking my own face.

    So when you say that the "managed and unmanaged versions of the API are identical, actually the managed version is just a wrapper around the unmanaged version", I assume you are implying the managed one is used with the Xbox One sensor while the unmanaged one is for other sensors. Keep in mind that I am using the C# code.

    I would also be interested in links to the other threads you mentioned that discuss using face tracking without skeleton hints.

    I also have one more question about the Xbox One (v2) sensor: does the v2 SDK work well with non-human faces? I was chatting with an online salesperson from Microsoft yesterday and they told me it does. I would be glad to receive your help. Thanks!


    Lex


    • Edited by Lexilighty Friday, January 9, 2015 7:49 PM
    Friday, January 9, 2015 7:48 PM
  • None of the v1 libraries have anything to do with Xbox One. This is technology that came out in 2012; v2 didn't come out until last year (October 2014). Managed code calls down into the native library through managed interop; the technology that does the tracking is all unmanaged code and runs natively on the system.

    None of the Kinect SDK features are designed to work with mannequins. If it does work, it is a byproduct of how the technology works and is by no means supported.

    Kinect v2 is even more effective at filtering out "fake" people and will probably not work in the scenario you are proposing.


    Carmine Sirignano - MSFT


    Monday, January 12, 2015 9:11 PM
  • Hi Carmine,

    I've gotten the Kinect for Windows v2 sensor. One thing, though: I read your article here, and it says I can use the Kinect for Windows sensor in a virtual machine. Here's the thing: my host machine runs Windows 7 and the VM runs Windows 8. I followed the process described in the tutorial, but I could not get my Kinect to be recognized inside the VM. I understand that the SDK for the v2.0 sensor will not work on a Windows 7 machine, and this might be the cause of the problem, since the SDK won't install on the host machine and I won't be able to use the remote desktop approach.

    My question is do you guys have a way around this problem?


    Lex

    Wednesday, January 28, 2015 2:16 AM
  • Kinect for Windows v1 can be used in a VM; Kinect for Windows v2 cannot. We need direct access to a DirectX 11-based GPU and USB 3.0.

    Carmine Sirignano - MSFT


    Wednesday, January 28, 2015 6:50 PM