# Sign language translation using Kinect v2

• ### Question

• Hey everyone,

I'm currently developing an application that can translate hand gestures (signs) into written words.

I have succeeded in extracting a hand skeleton and obtaining the (x, y) coordinates of each finger.

I did some research, and it seems I now need to transform these coordinates into vectors, since most of these signs involve movement, and then build my database of signs ...

I'm currently stuck at this task. Any help on how this could be done?

Sunday, November 27, 2016 9:56 PM

### All replies

• How did you manage to get all the finger positions? The skeleton has the thumb and the hand tip, not exactly all the fingers.

If you do have all the joints, vector math should be enough (subtracting two position vectors gives you a direction).

One caveat I expect when doing sign language with Kinect is that it needs low to medium velocity in hand movement and gesturing. Kinect loses track of the movement when it's too fast for it (I don't remember the threshold).
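One way to flag gestures that are probably moving too fast to track reliably is to compare the per-frame displacement against a speed threshold. A minimal sketch in Python (the same arithmetic carries over to C#); the threshold value is a made-up placeholder that would need tuning against real recordings:

```python
import math

def hand_speed(prev, curr, dt):
    """Euclidean distance moved per second between two (x, y) samples."""
    dx, dy = curr[0] - prev[0], curr[1] - prev[1]
    return math.hypot(dx, dy) / dt

# Hypothetical threshold in pixels/second; tune it empirically.
MAX_TRACKABLE_SPEED = 900.0

# Two consecutive samples of a hand joint, one frame apart at 30 fps.
speed = hand_speed((177, 209), (195, 230), dt=1 / 30)
too_fast = speed > MAX_TRACKABLE_SPEED
```

Frames where `too_fast` is true could be discarded or interpolated rather than fed into gesture matching.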

UPDATE: Did a bit of googling. You used a convex hull on the depth data, so the vertices were the fingers?
Monday, November 28, 2016 8:13 AM
• I just watched the news and there was a sign language presenter. I noticed that a fair amount of sign gestures involve occluding part of one hand, or the whole hand, with the other. Occluded joints can be a breaking behavior if they are inferred and tend to overshoot into places they shouldn't be. Have you tried viewing the frames in Kinect Studio with someone who uses sign language fluently? Is the data recorded by Kinect reliable enough for you to use like this?
Monday, November 28, 2016 5:02 PM

The output of the library displays five rows per hand, each row containing the X and Y coordinates of the three joints of one finger.

Can you explain to me how vector math would be used here?

PS. I can share the code if it helps

Saturday, December 3, 2016 3:54 PM
• This is a university project that has never been done before, so some degree of error is tolerable; plus, there are better tracking sensors on the market nowadays.

I just need to translate some of the simpler gestures.

Saturday, December 3, 2016 3:56 PM
• You said in the OP that you wanted to convert coordinates to vectors. Given the joint coordinates at any point in time, one thing you can get out of the data from any tracking library is the direction of a bone, by subtracting the bone's starting joint coordinates from its end joint coordinates.
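Concretely, the subtraction looks like this. A small sketch in Python for brevity (the same math applies to C# Vector3 types); the joint coordinates are illustrative values:

```python
import math

def bone_direction(start, end):
    """Direction of a bone: end joint minus start joint, normalized to unit length."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    length = math.hypot(dx, dy)
    if length == 0:
        return (0.0, 0.0)  # degenerate bone; no meaningful direction
    return (dx / length, dy / length)

# e.g. a thumb bone from its base joint to its tip joint
direction = bone_direction((177, 209), (188, 187))
```

Normalizing makes the direction independent of hand size, which helps when comparing gestures across signers.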

You can also use regular trigonometry and the dot product to calculate the angle between two vectors (for example, two consecutive bone vectors).
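The dot-product angle computation can be sketched like this (Python for brevity; the example vectors are arbitrary):

```python
import math

def angle_between(u, v):
    """Angle in radians between two 2-D vectors via the dot product:
    cos(theta) = (u . v) / (|u| |v|)."""
    dot = u[0] * v[0] + u[1] * v[1]
    norm_u = math.hypot(u[0], u[1])
    norm_v = math.hypot(v[0], v[1])
    # Clamp to [-1, 1] to guard against floating-point drift before acos.
    cos_theta = max(-1.0, min(1.0, dot / (norm_u * norm_v)))
    return math.acos(cos_theta)

# Two perpendicular bone vectors, e.g. a sharply bent finger joint
theta = angle_between((1, 0), (0, 1))
```

Joint angles like this are useful gesture features because, unlike raw coordinates, they don't change when the hand translates across the frame.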

PS: Since you are using a third-party library, you should probably refer to its own forum for this kind of thing. After all, it is a tracking library, and your problem is about tracking and how to use the data.

Saturday, December 3, 2016 4:40 PM
• Is getting the vector as simple as doing this: vSourceToDestination = vDestination - vSource; ?

Also, how can I get both source and destination coordinates if it's happening in real time?

PS: Sadly there is no forum for the library, so I'm quite desperate.

Your help is very much appreciated!

Sunday, December 4, 2016 1:50 PM
• Yes, rather simplistic, but it's a first step.

You get both if you cache the data per frame. Consider the following more as pseudocode, just to get the idea across.

```
Vector3[] previous_frame_joint_positions;

.....

void SomeFunctionForPollingOrEvents(SomeData data)
{
    // Here you get the real-time data.
    Vector3[] current_frame_joint_positions = Aiolos.Get....

    // On the very first call there is no previous frame yet, so just cache and wait.
    if (previous_frame_joint_positions == null)
    {
        previous_frame_joint_positions = current_frame_joint_positions;
        return;
    }

    // Here you can use both the previous and current frame data to do all sorts
    // of calculations. You can replace X with any joint type you want.
    Vector3 joint_X_Direction = current_frame_joint_positions[joint_X_index]
                              - previous_frame_joint_positions[joint_X_index];

    // ...perform other calculations using joint_X_Direction.

    // Cache the current frame data as the previous frame data
    // for the next iteration/call of this function.
    previous_frame_joint_positions = current_frame_joint_positions;
}
```

Sorry to hear that about Aiolos... but you can still contact the maintainers/authors, right? They are the ones who know this library better than anyone else.

Sunday, December 4, 2016 3:05 PM

• I managed to get the coordinates of the fingers using the library. But now I need a reference point in each hand, for example:

Reference = ThumbBase - PinkyBase, in order to calculate the distance between the reference and each finger and compare against the sign. I'm having trouble with this part and I hope someone can help:

The output of my current project is something like this:

ThumbBase: 177 ; 209 ThumbMiddle: 183 ; 196  ThumbTip: 188 ; 187    // X and Y coordinates in the frame

IndexBase:  .....   IndexMiddle: .....  IndexTip: .....
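One way to build the comparison described above: pick a reference point (here, hypothetically, the midpoint between ThumbBase and PinkyBase) and compute the distance from it to each fingertip, giving a small feature vector per frame. A Python sketch; the joint dictionary, its coordinate values, and the midpoint choice are all illustrative assumptions, not the project's actual data:

```python
import math

def midpoint(a, b):
    """Midpoint of two (x, y) points, used here as the hand's reference point."""
    return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

def distance(a, b):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Hypothetical joint coordinates keyed by name, in the format printed above.
joints = {
    "ThumbBase": (177, 209), "ThumbTip": (188, 187),
    "PinkyBase": (150, 205), "PinkyTip": (140, 180),
}

reference = midpoint(joints["ThumbBase"], joints["PinkyBase"])
features = {name: distance(reference, pos)
            for name, pos in joints.items() if name.endswith("Tip")}
```

Comparing two signs then reduces to comparing their feature dictionaries, ideally after dividing each distance by some hand-size measure so the features are scale-invariant.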

My code:

```
private static void PrintHand(KinectHand hand, int handNumber, int outputPos)
{
    int firstLine = cursorTop + 3 + (outputPos * 7);
    Console.SetCursorPosition(cursorLeft, firstLine);
    Console.Write("Hand: " + handNumber);
    for (int fingerIdx = 0; fingerIdx < 5; fingerIdx++)
    {
        try
        {
            Color c = Color.FromArgb(180, Engine.FingerColors[fingerIdx]);
            Array f = Enum.GetValues(typeof(Hand.FingerJointType));
            Array fNames = Enum.GetNames(typeof(Hand.FingerJointType));
            int idxInEnum = fingerIdx * 3;
            Microsoft.Kinect.DepthSpacePoint[] p = new Microsoft.Kinect.DepthSpacePoint[3];
            string[] jointNames = new string[3];
            for (int j = 0; j < 3; j++)
            {
                // Map the flat enum index back to this finger's three joints.
                Hand.FingerJointType jt = (Hand.FingerJointType)f.GetValue(idxInEnum + j);
                p[j] = hand.FingerJoints[jt];
                jointNames[j] = (string)fNames.GetValue(idxInEnum + j);
                Console.SetCursorPosition(cursorLeft + j * 25, firstLine + fingerIdx + 1);
                Console.Write(jointNames[j] + ": " + p[j].X.ToString("0.0") + ";" + p[j].Y.ToString("0.0") + "\n");
            }
        }
        catch (Exception)
        {
            // Joint data for this finger was not available this frame; skip it.
        }
    }
}
```

Monday, December 26, 2016 6:55 PM