VGB training results

  • Question

  • Hi Carmine,
    I'm leading research on a gesture vocabulary, and I'm using VGB as a ground-truth tool to assess the quality of each gesture in terms of detection accuracy (I'm using AdaBoost discrete classifiers).
    At the end of the training phase, I get a report that lists the accuracy in terms of true positives and the error rate in terms of false positives, both at frame level and at gesture level (filtered over a window of frames).
    My question is:

    Are the accuracy and the error rate assessed on the full sample set (as it is composed before being split into training and validation sets), or only on the validation set?
    I need this information because it could simplify my work, sparing me the analysis of a separate test set after the training phase is completed.
    Thanks in advance
    Vito
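
    P.S. For clarity, by "filtered over a window of frames" I mean roughly this kind of windowed majority filter applied to the per-frame detections; this is my own sketch of the idea, not VGB's actual implementation, and the window and threshold values are made up:

        # Sketch: gesture-level filtering of noisy per-frame detections.
        # A frame is reported as a gesture only if enough frames fired
        # inside the trailing window (window/min_hits are illustrative).
        def filter_detections(frames, window=5, min_hits=3):
            out = []
            for i in range(len(frames)):
                start = max(0, i - window + 1)
                hits = sum(frames[start:i + 1])
                out.append(hits >= min_hits)
            return out

        raw = [0, 1, 0, 1, 1, 1, 1, 0, 1, 0]   # noisy per-frame detections
        print(filter_detections(raw))           # isolated spikes are suppressed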




    Thursday, July 23, 2015 12:18 PM

Answers

  • Hello Vito,

    The results that are reported in the output section after each build are only for clips used during training. These numbers should be used as a quick way to verify tagging accuracy (poorly tagged gestures will confuse the ML and make these numbers worse), but are not very useful for judging a gesture detector's accuracy. The gesture recognizer should not have any trouble detecting gestures in clips that it already trained with, so you should expect good results.

    For a better understanding of how a gesture detector will perform in the real world, you need to analyze the detector using clips that are not included in training. To do this, create a separate analysis (.a) project and add new clips to this project that are not included in the corresponding training set. Then right-click on the analysis project, select 'analyze', and select the database that contains the detector you want to test. The results reported after analysis will give you a much better representation of the true/false positives that exist for your gesture detector.
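
    If it helps to see the idea outside the tool, the analysis project plays the same role as a held-out test set in any classifier evaluation. Here is a minimal sketch of the frame-level metrics in plain Python; this is not the VGB API, and the per-frame labels below are made up:

        # Sketch: frame-level TP/FP rates on held-out clips (not the VGB API).
        # ground_truth and detected are per-frame 0/1 flags for one gesture,
        # taken from clips that were NOT part of the training set.
        def frame_metrics(ground_truth, detected):
            tp = sum(1 for g, d in zip(ground_truth, detected) if g and d)
            fp = sum(1 for g, d in zip(ground_truth, detected) if not g and d)
            positives = sum(ground_truth)              # frames tagged as gesture
            negatives = len(ground_truth) - positives  # all remaining frames
            return tp / positives, fp / negatives

        gt  = [0, 0, 1, 1, 1, 1, 0, 0, 0, 0]   # hypothetical tags
        det = [0, 0, 0, 1, 1, 1, 1, 0, 0, 0]   # hypothetical detector output
        tpr, fpr = frame_metrics(gt, det)
        print(f"TP rate: {tpr:.1%}, FP rate: {fpr:.1%}")  # 75.0% / 16.7%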

    ~Angela

    Friday, July 24, 2015 8:11 PM

All replies

  • Dear Angela, thank you very much.
    I supposed the classifier worked this way, but I needed confirmation.
    The strange thing is that while I obtain at least an 88% true positive rate at frame and gesture level, I also get a 60% false positive rate at gesture level on the training set; yet when I use a validation set with the analysis tool, I obtain very good results: about a 100% true positive rate and a 10% false positive rate (in the worst case).
    You have to take into account that among my 4 gestures there are 2 that are very similar, differing only in speed, acceleration and hand pose. As you can see in the following rows, reported from the training phase, I get this kind of result:

     "Testing on Training Data
        Raw Per Frame Results:
    % Accuracy True Positives: 88.590309 % (2011/2270)
    % Error False Positives: 0.583320 % (148/25372)
        Filtered Per Gesture Results:
    % Accuracy True Positives: 100.000000 % (200/200)
    % Error False Positives: 63.000000 % (126/200)"

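    Reading those rows, each percentage is simply the count over the total shown in parentheses; a quick check in Python:

        # Each reported percentage is count/total from the parentheses.
        print(f"{2011 / 2270:.4%}")   # raw per-frame true positives  -> 88.5903%
        print(f"{148 / 25372:.4%}")   # raw per-frame false positives -> 0.5833%
        print(f"{126 / 200:.4%}")     # filtered per-gesture false positives -> 63.0000%
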
    Looking at the following row of the gesture analysis tool results, is it right to say that I get a 100% true positive rate and a 0% false positive rate at gesture recognition level?

     "ID: GesturesRecognitionBiphase_01VocA20.gbd-20150724164149/#Frames:11029/Worst error:1/ Average RMS:  0.18142/ False Positive: 0, False Negative: 0"

    I would like to understand the AdaBoost classifier used by VGB more deeply. Is there any article or white paper that explains in depth the meaning of the parameters used to choose the features, as reported at the end of the training phase, as in the following rows?

     "Top 10 contributing weak classifiers:
    Angles( SpineMid, Head, SpineBase ) using inferred joints, fValue >= 56.000000, alpha = 2.015352"

    I'd like to understand the meaning of alpha and fValue. I've already read the following article, https://msdn.microsoft.com/en-us/magazine/dn166933.aspx?f=255&MSPPError=-2147217396, but I didn't find what I was looking for.
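
    From what I understand of the standard discrete AdaBoost formulation (I'm assuming VGB follows it, which I could not confirm), each weak classifier is a threshold test on a single feature, so fValue would be the stump's threshold and alpha the vote weight the booster derives from the classifier's weighted training error, alpha = 0.5 * ln((1 - err) / err). A minimal sketch of one boosting round under that assumption, with made-up feature values and labels:

        import math

        # Sketch of one discrete AdaBoost round with a decision stump.
        # Assumption (unconfirmed): fValue is the stump threshold and
        # alpha the vote weight; the data below is invented.
        def stump_predict(x, threshold):
            return 1 if x >= threshold else -1   # e.g. "fValue >= 56.0"

        def boost_round(features, labels, weights, threshold):
            # Weighted error of the stump on the current sample weights.
            err = sum(w for x, y, w in zip(features, labels, weights)
                      if stump_predict(x, threshold) != y)
            alpha = 0.5 * math.log((1 - err) / err)   # classifier vote weight
            # Re-weight samples: mistakes gain weight, correct ones lose it.
            new_w = [w * math.exp(-alpha * y * stump_predict(x, threshold))
                     for x, y, w in zip(features, labels, weights)]
            z = sum(new_w)
            return alpha, [w / z for w in new_w]

        features = [40.0, 50.0, 58.0, 60.0, 62.0, 70.0]   # e.g. angle values
        labels   = [-1, -1, -1, 1, 1, 1]                  # gesture yes/no
        weights  = [1 / 6] * 6
        alpha, weights = boost_round(features, labels, weights, threshold=56.0)
        print(f"alpha = {alpha:.4f}")   # lower weighted error -> higher alpha

    If that interpretation is right, an alpha around 2.0 like the one in my output would correspond to a weighted error below 2%, i.e. a weak classifier the booster trusts strongly.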

    Just one more question, even though I suspect the answer will be negative: is there a paper explaining the meaning of each feature that the weak classifiers exploit? And is it possible, via code, to access each individual feature computed by VGB?
    Thanks in advance
    Vito.


    Monday, July 27, 2015 9:42 AM