Confused about face tracking, points and animations

    Question

  • I'm a little bit confused about face tracking, so please correct me if I'm wrong.
    Here is what I think right now:
    - ShapeUnits (FaceShapeDeformations) express how much the real user's tracked face differs from the neutral mesh/average human face (in 70 different ways).
    - AnimationUnits (FaceShapeAnimations) recognize 17 basic movements on the neutral face.
    - HighDetailFacePoints are just 36 vertices of the 3D neutral face mesh?
    So is it correct to say that:
    1) There are other "HighDetailFacePoints" that developers don't need to use/know about (because there are more than 1000 HD points/vertices, correct?)
    2) There are only 5 2D points (FacePointType), on which the FaceProperties (which are discrete states) and the FaceShapeAnimations are built?
    3.a) AnimationUnits are built on HighDetailFacePoints?
    3.b) So if I want to code a new custom animation, should I use HighDetailFacePoints and not ShapeUnits?
    4) Is there a FaceTrackingBasics sample/API that renders these vertices or the mesh, like for the Kinect 1 (here
    http://blogs.msdn.com/b/kinectforwindows/archive/2014/01/31/clearing-the-confusion-around-kinect-for-windows-face-tracking-output.aspx)?
    I need to visualize these different points or vertices to understand how to code custom AnimationUnits (just for debugging), e.g. to distinguish a yawn from a surprised open mouth or a screaming open mouth.
    Sorry for so many questions, but every time I try to code a sample I get stuck :(
    Thanks
    Sunday, December 07, 2014 6:25 PM

All replies

  • There are exactly 94 SUs (defined in the FaceShapeDeformations enum), where each deforms the mesh differently. These are non-normalized units and can sometimes be seen as high as +/- 10. SUs specify the face deformations captured by the model builder and are constant while tracking (at least until another build).

    There are exactly 17 AUs (defined in the FaceShapeAnimations enum), where each deforms the mesh differently (and independently of the SUs). These are non-normalized units, but unlike SUs, some are signed while others are unsigned (e.g. JawOpen is mostly positive, but nothing says it cannot go negative). AUs change each frame while tracking. The quality of the AUs is much higher for a built model, but they can still differ between successive builds (i.e. a constant AU is not guaranteed to produce the same expression when a different person's shape (SUs) is applied).
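
    To read the AU weights each frame, you pull them off the FaceAlignment. A minimal C# sketch, assuming a FaceAlignment that you refresh per frame with GetAndRefreshFaceAlignmentResult from a HighDefinitionFaceFrameReader; the helper name DumpAnimationUnits is just for illustration:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Kinect.Face;

static void DumpAnimationUnits(FaceAlignment faceAlignment)
{
    // AnimationUnits maps each of the 17 FaceShapeAnimations values to its
    // current (non-normalized) weight for this frame.
    foreach (KeyValuePair<FaceShapeAnimations, float> au in faceAlignment.AnimationUnits)
    {
        Console.WriteLine("{0} = {1:F3}", au.Key, au.Value);
    }
}
```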

    HighDetailFacePoints are specific vertex indices that have some meaning; they aren't directly used in any API call. Once you get the vertices from CalculateVerticesForAlignment, you use an index such as "LefteyeInnercorner" to get that vertex's value (see the sketch below).

    There are currently 1347 vertices in the model (mesh); the others don’t have an “assigned” meaning, but they can be accessed and used the same way as the HighDetailFacePoints.
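
    To make the indexing concrete, here is a minimal C# sketch under the same assumptions as above (a FaceModel and FaceAlignment that are refreshed each frame, as in the HDFaceBasics sample); the helper name PrintNamedVertex is just for illustration:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Kinect;
using Microsoft.Kinect.Face;

static void PrintNamedVertex(FaceModel faceModel, FaceAlignment faceAlignment)
{
    // The mesh for the current alignment: 1347 CameraSpacePoints (X/Y/Z in meters).
    IReadOnlyList<CameraSpacePoint> vertices =
        faceModel.CalculateVerticesForAlignment(faceAlignment);

    // HighDetailFacePoints values are indices into this vertex list, so any of
    // the 1347 vertices can be read the same way by its raw integer index.
    CameraSpacePoint p = vertices[(int)HighDetailFacePoints.LefteyeInnercorner];
    Console.WriteLine("LefteyeInnercorner: {0:F3} {1:F3} {2:F3}", p.X, p.Y, p.Z);
}
```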

    The FacePointType is part of 2D face tracking, not HDFace. It’s used in calls like GetFacePointsInColorSpace.

    AUs are analogous to SUs in that both deform the mesh. They are only related to HighDetailFacePoints through the mesh. AUs are the primary mechanism to apply to animations. Using these values for custom animations is beyond what the API supports (i.e. there is no way to alter the array and pass it back to the API). The AU values need to be applied to a deformed mesh or converted into a different animation representation (i.e. bone transformations). In these cases neither HighDetailFacePoints nor ShapeUnits apply.

    For an example, review the HDFaceBasics sample provided in the SDK Browser. If you want a wireframe visual, you will have to change the draw calls.


    Carmine Sirignano - MSFT

    Tuesday, December 09, 2014 9:49 PM
    Owner
  • Thanks for your answers, they are very useful.
    Only now do I realize how badly formed my question was (I'm sorry, English is not my first language).
    I don't need to create a custom animation or to animate a mesh; in my scenario I have to detect facial movements (like muscle movements) on the user's face.
    My mental approach is to understand which AUs are "played" on an actor's/user's face when expressing sadness (e.g. LowerlipDepressorLeft, LowerlipDepressorRight, LefteyeClosed, RighteyeClosed are all "true"/positive when the user is sad or crying) and use this "training" information to recognize sadness in other users.
    (Let me insist, just to be sure I have explained myself well despite my language gap.) I'm not interested in somatic trait information or characteristics (shape units), and I don't need to build animations on the mesh.
    Just like the "Happy" FaceProperty that uses the two mouth FacePointTypes, I want to realize a "sad" state/property/function, but for my purpose the 5 FacePointTypes are not enough; I need more information. Maybe the HighDetailFacePoints vertices and the AUs can be useful? How can I get them?
    Can I get some kind of raw HighDetailFacePoints output, like coordinates or positions?
    What do you suggest?
    Thanks for the support
    Wednesday, December 10, 2014 2:14 PM
  • If you want to know what the mesh result is for a certain AU, check for the AU you are looking for and call CalculateVerticesForAlignment. This will always give you the modified mesh result at that moment. We do not provide a mapping from AUs to vertex indices; that can change, since it is an internal implementation detail we do not expose.
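
    A C# sketch of that workflow; the 0.4f threshold is an arbitrary illustration, not an SDK-defined value, and the helper name is hypothetical:

```csharp
using Microsoft.Kinect;
using Microsoft.Kinect.Face;

static void InspectMouthWhenOpen(FaceModel faceModel, FaceAlignment faceAlignment)
{
    // Watch one AU; when it crosses a hand-picked threshold, pull the
    // deformed mesh for that instant.
    float jawOpen = faceAlignment.AnimationUnits[FaceShapeAnimations.JawOpen];
    if (jawOpen > 0.4f)
    {
        // The returned vertices already include the effect of the current AUs.
        var mesh = faceModel.CalculateVerticesForAlignment(faceAlignment);
        // ... inspect vertex positions here, e.g. distances between lip vertices ...
    }
}
```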


    Carmine Sirignano - MSFT

    Wednesday, December 10, 2014 7:54 PM
    Owner
  • I want to create a "Sad" FaceProperty, but I need more points/vertices than the standard 5 FacePointTypes (e.g. a point for the eyebrow). How can I do it? Maybe using HighDetailFacePoints?
    Sunday, December 14, 2014 2:57 PM
  • You also need to consider the difference between a frown and a sad face. What is sadness? Yes, you will need more information, not just a few points on the face: eyes, eyebrows, etc. I would say a non-happy state would be less granular, but the cues may be too subtle in the face to get from what is provided. You may need your own analysis.
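
    If you do build your own analysis, one possible starting point is to threshold a few AUs per frame. In this sketch the AU choices and the 0.3f cutoffs are assumptions for illustration, not SDK-defined values; a real classifier would need training data:

```csharp
using Microsoft.Kinect.Face;

static bool LooksSad(FaceAlignment faceAlignment)
{
    var aus = faceAlignment.AnimationUnits;

    // Mouth corners pulled down on both sides.
    bool mouthCornersDown =
        aus[FaceShapeAnimations.LipCornerDepressorLeft] > 0.3f &&
        aus[FaceShapeAnimations.LipCornerDepressorRight] > 0.3f;

    // Both eyebrows lowered.
    bool browsLowered =
        aus[FaceShapeAnimations.LefteyebrowLowerer] > 0.3f &&
        aus[FaceShapeAnimations.RighteyebrowLowerer] > 0.3f;

    return mouthCornersDown && browsLowered;
}
```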

    Carmine Sirignano - MSFT

    Monday, December 15, 2014 8:10 PM
    Owner
  • Let's say I want to try to build my own analysis: should I use HighDetailFacePoints? How can I use them? (E.g. is there an XYZ coordinate for every point/vertex?)
    Wednesday, December 17, 2014 7:19 PM