How to use High Detail Face Points in Kinect v2

  • Question

  • I'm using Kinect v2 on a face recognition project, I have two questions:

    1. Can I use the Kinect 1.8 toolkit with Kinect v2 hardware?
    2. How can I use HighDetailFacePoints in v2 SDK?

    In the v2 demo "HDFaceBasics-WPF", the CalculateVerticesForAlignment function is used with a FaceAlignment to retrieve geometry points on the target's face, but I don't know how to map these points to specific meaningful types such as EyeLeft.


    Regards, Nighting Liu


    • Edited by Nighting Liu Tuesday, September 16, 2014 6:59 AM
    Tuesday, September 16, 2014 6:54 AM

Answers

  • No, v1 and v2 are different SDKs with different APIs, but the fundamental design of how you work with the data should be the same. What specifically do you need from v1? The v2 Face APIs replace v1 face tracking.

    There is currently a bug in the HighDetailFacePoints enum: there should be no LeftEye value, as there is no vertex at that position in the mesh. This will be fixed in the next update. To get the CameraSpacePoint value of a vertex, cast the enum to an int:

    CameraSpacePoint lefteyeOutercorner = vertices[(int)HighDetailFacePoints.LefteyeOutercorner];


    Carmine Sirignano - MSFT

    Tuesday, September 16, 2014 7:17 PM
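
    To answer the original mapping question more generally: since each HighDetailFacePoints value is just an index into the vertex list, all of the named points can be pulled out in one pass. A rough sketch, assuming the Microsoft.Kinect and Microsoft.Kinect.Face assemblies from the v2 SDK and a FaceModel/FaceAlignment pair already being updated from a tracked body as in HDFaceBasics-WPF (the helper name is hypothetical):

    ```csharp
    using System;
    using System.Collections.Generic;
    using Microsoft.Kinect;
    using Microsoft.Kinect.Face;

    // Sketch: collect every named HD face point into a dictionary by
    // iterating the HighDetailFacePoints enum. Each enum value is an
    // index into the vertex list returned by CalculateVerticesForAlignment.
    public static Dictionary<HighDetailFacePoints, CameraSpacePoint>
        GetNamedFacePoints(FaceModel faceModel, FaceAlignment faceAlignment)
    {
        var vertices = faceModel.CalculateVerticesForAlignment(faceAlignment);
        var named = new Dictionary<HighDetailFacePoints, CameraSpacePoint>();

        foreach (HighDetailFacePoints point in
                 Enum.GetValues(typeof(HighDetailFacePoints)))
        {
            named[point] = vertices[(int)point];
        }
        return named;
    }
    ```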

All replies

  • Hi, I am doing a third-year computer science dissertation using the Kinect v2 for 3D facial expression recognition.
    I am currently trying to get hold of the coordinates of every facial point.
    When using the following:

    private void UpdateMesh()
    {
        var vertices = this.currentFaceModel.CalculateVerticesForAlignment(this.currentFaceAlignment);
        CameraSpacePoint lefteyeOutercorner = vertices[(int)HighDetailFacePoints.LefteyeOutercorner];
        Debug.WriteLine(lefteyeOutercorner.X);

        for (int i = 0; i < vertices.Count; i++)
        {
            var vert = vertices[i];
            this.theGeometry.Positions[i] =
                new Point3D(vert.X, vert.Y, -vert.Z);
        }
    }

    I am getting output values such as -0.04695327. Could you please tell me how these values relate to the relative positions of these features, and whether there is a better way of getting these coordinates?
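
    The vertices returned by CalculateVerticesForAlignment are CameraSpacePoints, whose X, Y, and Z components are expressed in meters relative to the sensor, so a value like -0.04695327 is roughly 4.7 cm from the camera origin along that axis. As a minimal sketch (assuming only the Microsoft.Kinect assembly; the helper name is hypothetical), distances between two tracked points can therefore be computed directly in metric units:

    ```csharp
    using System;
    using Microsoft.Kinect;

    // Sketch: Euclidean distance in meters between two camera-space
    // points, e.g. the inner and outer corners of one eye.
    public static float DistanceMeters(CameraSpacePoint a, CameraSpacePoint b)
    {
        float dx = a.X - b.X, dy = a.Y - b.Y, dz = a.Z - b.Z;
        return (float)Math.Sqrt(dx * dx + dy * dy + dz * dz);
    }
    ```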

    private void UpdateMesh()
    {
        var vertices = this.currentFaceModel.CalculateVerticesForAlignment(this.currentFaceAlignment);
        CameraSpacePoint lefteyeOutercorner = vertices[(int)HighDetailFacePoints.LefteyeOutercorner];

        ColorSpacePoint colorPoint = sensor.CoordinateMapper.MapCameraPointToColorSpace(lefteyeOutercorner);

        Debug.WriteLine(lefteyeOutercorner.X);

        for (int i = 0; i < vertices.Count; i++)
        {
            var vert = vertices[i];
            this.theGeometry.Positions[i] =
                new Point3D(vert.X, vert.Y, -vert.Z);
        }
    }

    This now gives coordinates of around 1128.944.

    These look better :)
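
    A ColorSpacePoint is a pixel position in the 1920x1080 color frame, which is why values around 1128.944 are plausible X coordinates. Note that the coordinate mapper produces infinite components for camera points it cannot map, and mapped points can also fall outside the frame, so a validity check is worthwhile before drawing. A hedged sketch (the helper name is hypothetical):

    ```csharp
    using Microsoft.Kinect;

    // Sketch: validate a mapped color-space point before using it as
    // a pixel position in the 1920x1080 color image.
    public static bool IsValidColorPixel(ColorSpacePoint p)
    {
        return !float.IsInfinity(p.X) && !float.IsInfinity(p.Y) &&
               p.X >= 0 && p.X < 1920 &&   // color frame width
               p.Y >= 0 && p.Y < 1080;     // color frame height
    }
    ```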

    • Edited by smithy3993 Wednesday, February 4, 2015 10:38 AM Found how to convert to colour point
    Tuesday, February 3, 2015 1:21 PM