Confidence values on detection

  • Question

  • Hello, what is the range of the confidence values on detection? [0, 1]?

    In my project, while testing the gestures I get confidence values around 0.04, 0.08, 0.1... and sometimes values around 0.3 or 0.5. Are these values too low? (I have 12 gestures in the database, if that matters.)

    This makes it hard to filter the gestures in the code. How could I solve it?

    Thank you very much

    Tuesday, September 15, 2015 2:38 PM

Answers

  • Hello Alfred,

    The confidence values range from 0 to 1. Typically, the first few frames while the gesture is occurring will have very low confidence; the longer the gesture goes on, the higher the confidence should be. This is why static pose gestures (like a 'T' pose) are easier to train than dynamic gestures (like a 'swipe'). A good target for gesture recognition is to reach 0.6 confidence or higher within a few frames of the gesture being detected.

    You can improve gesture confidence with more training examples (both positive and negative). If the speed of the gesture is not important to detection, also include examples of the gesture performed at a slower rate, so that the ML has more frames to use when learning the motion. Since you're using multiple gestures, be sure to include samples of all the other gestures as negative training examples for the one you are working on.

    Also, try the 'Visual Gesture Builder Viewer' tool, which lets you watch how the confidence values change as you perform each gesture in your database in real time. It is a great tool for identifying whether any of your gestures conflict with each other, so you can filter false positives out based on confidence (see the threshold sketch after this reply), improve your gesture tagging (tag only the portions that are unique to each gesture), or provide more training examples to help decrease false positives/negatives.

    ~Angela


    Friday, September 18, 2015 7:03 PM
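
A minimal sketch of the confidence-threshold filtering suggested in the reply above, assuming the per-frame discrete gesture results are available as a simple mapping from gesture name to a confidence in [0, 1]. `GestureFilter`, `frame_results`, and the threshold values are hypothetical illustrations, not part of the Kinect or Visual Gesture Builder API:

```python
# Sketch of confidence-based gesture filtering; frame_results stands in for
# whatever per-frame discrete gesture results your reader gives you:
# a dict mapping gesture name -> confidence in [0, 1].

from collections import defaultdict

FIRE_THRESHOLD = 0.6   # target confidence suggested in the reply above
MIN_FRAMES = 3         # require the confidence to hold for a few frames

class GestureFilter:
    """Fires a gesture only after its confidence stays above a threshold
    for a minimum number of consecutive frames, which suppresses the very
    low values reported in the first frames of a gesture."""

    def __init__(self, threshold=FIRE_THRESHOLD, min_frames=MIN_FRAMES):
        self.threshold = threshold
        self.min_frames = min_frames
        self.streak = defaultdict(int)   # consecutive frames above threshold

    def update(self, frame_results):
        """frame_results: {gesture_name: confidence}. Returns gestures that fire."""
        fired = []
        for name, confidence in frame_results.items():
            if confidence >= self.threshold:
                self.streak[name] += 1
                if self.streak[name] == self.min_frames:
                    fired.append(name)
            else:
                self.streak[name] = 0
        return fired

# Example: feed one frame of (hypothetical) results per tick.
gesture_filter = GestureFilter()
for frame in [{"Swipe": 0.1}, {"Swipe": 0.7}, {"Swipe": 0.8}, {"Swipe": 0.9}]:
    for gesture in gesture_filter.update(frame):
        print("Detected:", gesture)
```

The `min_frames` streak acts as simple hysteresis: the low confidences seen in the first frames of a gesture never fire on their own, while a sustained run above the threshold does.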

All replies

  • Hello, thanks for your answer. I have checked the training examples (only the core part of each gesture is tagged) and added negative training examples in VGB, and the false positives/negatives have actually improved.

    The problem is that the confidence remains very low (between 0.05 and 0.5). Because the false-positive rate is low it kind of works in practice, but I can't trust these results. One solution could be to turn gestures like "Swipe" into continuous gestures, but before going down that route (it would take a lot of time, which I can't really afford), is there anything else I could try?

    Thank you very much.

    Tuesday, September 29, 2015 11:15 AM
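
As a variation on the earlier sketch, and picking up the point from the accepted answer that confidence tends to rise the longer a gesture is held: one option is to average each gesture's confidence over a short sliding window and compare the smoothed value against a lower threshold, so that a sustained run of moderate values (the 0.3 to 0.5 range described in this reply) is treated differently from an isolated spike. The window length, threshold, and class name below are arbitrary illustrative choices, not anything from the Kinect or VGB API:

```python
# Sketch of sliding-window smoothing of per-frame confidence, assuming the same
# hypothetical {gesture_name: confidence} frame results as the earlier sketch.

from collections import defaultdict, deque

WINDOW = 10             # number of recent frames to average over
SMOOTH_THRESHOLD = 0.3  # lower threshold applied to the smoothed value

class SmoothedGestureFilter:
    """Averages each gesture's confidence over the last WINDOW frames and
    fires when the average crosses the threshold, so sustained moderate
    confidences count for more than single-frame spikes."""

    def __init__(self, window=WINDOW, threshold=SMOOTH_THRESHOLD):
        self.threshold = threshold
        self.history = defaultdict(lambda: deque(maxlen=window))
        self.active = set()   # gestures currently above the smoothed threshold

    def update(self, frame_results):
        """frame_results: {gesture_name: confidence}. Returns gestures that fire."""
        fired = []
        for name, confidence in frame_results.items():
            history = self.history[name]
            history.append(confidence)
            smoothed = sum(history) / len(history)
            if smoothed >= self.threshold and name not in self.active:
                self.active.add(name)
                fired.append(name)
            elif smoothed < self.threshold:
                self.active.discard(name)
        return fired
```

The trade-off is latency: a longer window rejects more single-frame noise but delays detection by roughly the same number of frames.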