Cognitive Vision sample doesn't work anymore (ImageTagging)

    Question

  • Hi,

    Today, I ran my U-SQL demo that analyzes images, and it no longer works.

    I tried a simple example from GitHub, and it doesn't work anymore either.

    https://github.com/Azure/usql/blob/master/Examples/Cognitive/HelloWorld_Cognitive_Imaging.usql

    I installed the latest version of the U-SQL Extensions from the Azure Portal in my ADLA account (08/13/2017). However, it works with the extensions from 04/30/2017 retrieved from a previous ADLA account.

    Exception thrown:

    Unhandled exception from user code:

    "Bad type in Set<T> method, column: FileName, index: 0,

    expected: System.String, actual: System.Byte[]"

    The details include more information, including any inner exceptions and the stack trace where the exception was raised.




    http://blog.djeepy1.net


    • Edited by Jean-Pierre Riehl MVP Tuesday, August 15, 2017 9:14 AM Actually, it works on the previous version
    Monday, August 14, 2017 6:59 AM

All replies

  • // Load Assemblies

    REFERENCE ASSEMBLY ImageCommon;
    REFERENCE ASSEMBLY FaceSdk;
    REFERENCE ASSEMBLY ImageEmotion;
    REFERENCE ASSEMBLY ImageTagging;
    REFERENCE ASSEMBLY ImageOcr;
    // Load in images
    @imgs =
        EXTRACT FileName string, ImgData byte[]
        FROM @"/usqlext/samples/cognition/{FileName:*}.jpg"
        USING new Cognition.Vision.ImageExtractor();
    
    

    Extract the tags from each image

    The cognitive ImageTagger function returns information about the visual content found in an image. Use tagging, descriptions, and domain-specific models to identify content and label it with a confidence score. There are two ways in U-SQL to extract the contents of an image and their confidence scores:

    1.  Extract tags from the image using the ImageTagger processor

    @tags =
        PROCESS @imgs 
        PRODUCE FileName,
                NumObjects int,
                Tags SQL.MAP<string, float?>
        READONLY FileName
        USING new Cognition.Vision.ImageTagger();
    
    
    
    
    2.  Extract the tags from each image using the ImageTagsExtractor
    @tags = 
        EXTRACT FileName string,
            NumObjects int,
            Tags SQL.MAP<string, float?>
        FROM @"/usqlext/samples/cognition/{FileName:*}.jpg"
        USING new Cognition.Vision.ImageTagsExtractor();
    
    
    If you want to output the result stored in the @tags rowset, you can serialize the SQL.MAP column as follows:
    
    
    @tags_serialized =
        SELECT FileName,
               NumObjects,
               String.Join(";", Tags.Select(x => String.Format("{0}:{1}", x.Key, x.Value))) AS TagsString
        FROM @tags;
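    To illustrate the serialized format, here is a small Python sketch (an illustrative helper, not part of the U-SQL sample) that parses a TagsString produced by the String.Join expression above back into a dictionary of tag/confidence pairs:

    ```python
    def parse_tags(tags_string):
        """Parse a 'tag:confidence;tag:confidence' string (the format produced
        by the String.Join expression above) back into a dict of floats."""
        result = {}
        for pair in tags_string.split(";"):
            if not pair:
                continue
            key, _, value = pair.partition(":")
            # A null confidence (float?) serializes as an empty value.
            result[key] = float(value) if value else None
        return result

    # Example: a row serialized as "water:0.9996;sky:0.9421"
    print(parse_tags("water:0.9996;sky:0.9421"))
    ```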
    
    
    
    

    Extract emotions from human faces

    The EmotionExtractor/EmotionApplier cognitive functions detect one or more human faces in an image and return a face rectangle indicating where each face is in the image, along with face attributes such as emotion. There are two ways in U-SQL to extract emotions from an image:

    1.  Extract faces and recognize facial expressions using the EmotionExtractor

    The EmotionExtractor function is applied to each image and generates one row per face detected. It returns the number of faces in the image, the index of the face, and its face rectangle, along with the emotion and a confidence score for that emotion.

    @emotions_from_extractor =
        EXTRACT FileName string, 
            NumFaces int, 
            FaceIndex int, 
            RectX float, RectY float, Width float, Height float, 
            Emotion string, 
            Confidence float
        FROM @"/usqlext/samples/cognition/{FileName:*}.jpg"
        USING new Cognition.Vision.EmotionExtractor();
    
    

    2.  Extract faces and recognize facial expressions using the EmotionApplier

    The EmotionApplier function is applied to each image stored as a byte array in a row of the @imgs rowset and generates one row per face detected. It returns the number of faces in the image, the index of the face, and its face rectangle, along with the emotion and a confidence score for that emotion.

    @emotions_from_applier =
        SELECT FileName,
            Details.NumFaces,
            Details.FaceIndex,
            Details.RectX, Details.RectY, Details.Width, Details.Height,
            Details.Emotion,
            Details.Confidence
        FROM @imgs
        CROSS APPLY
            new Cognition.Vision.EmotionApplier() AS Details(
                NumFaces int, 
                FaceIndex int, 
                RectX float, RectY float, Width float, Height float, 
                Emotion string, 
                Confidence float);
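    As a side note, the RectX/RectY/Width/Height values describe the face's bounding box. Assuming they are pixel coordinates of the top-left corner plus a size (an assumption; verify against your image dimensions), a small Python sketch converts one output row into an integer (left, top, right, bottom) crop box, clamped to the image bounds:

    ```python
    def face_crop_box(rect_x, rect_y, width, height, img_w, img_h):
        """Convert a face rectangle (assumed pixel coordinates) into an
        integer (left, top, right, bottom) box clamped to the image size."""
        left = max(0, int(rect_x))
        top = max(0, int(rect_y))
        right = min(img_w, int(rect_x + width))
        bottom = min(img_h, int(rect_y + height))
        return (left, top, right, bottom)

    # E.g. a face at (40.0, 25.0) sized 100x120 in a 640x480 image
    print(face_crop_box(40.0, 25.0, 100.0, 120.0, 640, 480))
    ```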
    
    

    Estimate age and gender for human faces

    Like the EmotionExtractor, the FaceDetectionExtractor cognitive function also detects one or more human faces in an image and returns a face rectangle for each face, along with face attributes such as age and gender. There are two ways in U-SQL to estimate the age and gender of the faces in an image:

    1.  Estimate age and gender using the FaceDetectionExtractor

    The FaceDetectionExtractor function is applied to each image and generates one row per face detected. It returns the number of faces in the image, the index of the current face among all faces recognized in the image, and its face rectangle, along with the estimated age and gender for that face.

    @faces_from_extractor =
        EXTRACT FileName string, 
            NumFaces int, 
            FaceIndex int, 
            RectX float, RectY float, Width float, Height float, 
            FaceAge int, 
            FaceGender string
        FROM @"/usqlext/samples/cognition/{FileName:*}.jpg"
        USING new Cognition.Vision.FaceDetectionExtractor();
    
    

    2.  Estimate age and gender using the FaceDetectionApplier

    The FaceDetectionApplier function is applied to each image and generates one row per face detected. It returns the number of faces in the image, the index of the current face among all faces recognized in the image, and its face rectangle, along with the estimated age and gender for that face.

    @faces_from_applier = 
        SELECT FileName,
            Details.NumFaces,
            Details.FaceIndex,
            Details.RectX, Details.RectY, Details.Width, Details.Height,
            Details.FaceAge,
            Details.FaceGender
        FROM @imgs
        CROSS APPLY
            new Cognition.Vision.FaceDetectionApplier() AS Details(
                NumFaces int, 
                FaceIndex int, 
                RectX float, RectY float, Width float, Height float, 
                FaceAge int, 
                FaceGender string);
    
    

    When to use an extractor vs. a processor cognitive U-SQL function?

    If your images are larger than 4 MB, use an extractor to evaluate them with any of the image-specific cognitive functions: extractors read the image files directly, whereas processors and appliers receive the image as a byte[] column in a rowset, and a U-SQL row is limited to 4 MB.

    Wednesday, August 16, 2017 8:14 PM
  • Thank you Hiraidti. Where did you find this documentation?

    SQL.MAP<string, float?> is the right type for the newer version. It works... but I get only 1 file extracted or processed by the Cognition assemblies.

    Am I missing something?

    DECLARE EXTERNAL @year string = "2017";
    DECLARE EXTERNAL @month string = "08";
    DECLARE EXTERNAL @day string = "12";
    DECLARE EXTERNAL @islocal int = 1;

    REFERENCE ASSEMBLY ImageCommon;
    REFERENCE ASSEMBLY FaceSdk;
    REFERENCE ASSEMBLY ImageEmotion;
    REFERENCE ASSEMBLY ImageTagging;
    REFERENCE ASSEMBLY ImageOcr;

    DECLARE @imgfiles string = @"/images/"+@year+"/"+@month+"/"+@day+"/{FileName}.{extension}";
    DECLARE @out string = @"/analysis/ImageAnalysis-v2-tagging-"+@year+@month+@day+".csv.txt";
    DECLARE @outEmotions string = @"/analysis/ImageAnalysis-v2-emotions-"+@year+@month+@day+".txt";
    DECLARE @outFaces string = @"/analysis/ImageAnalysis-v2-faces-"+@year+@month+@day+".txt";

    @emotions =
        EXTRACT FileName string, extension string,
                NumFaces int, FaceIndex int,
                RectX float, RectY float, Width float, Height float,
                Emotion string,
                Confidence float
        FROM @imgfiles
        USING new Cognition.Vision.EmotionExtractor();

    /* @count = SELECT COUNT( * ) AS nbimages FROM @emotions; //14 rows */

    OUTPUT @emotions
    TO @out
    USING Outputters.Text(); // only 7 tags identified in first image

    PS: The {FileName:*} format is deprecated.


    http://blog.djeepy1.net


    Sunday, August 20, 2017 6:01 PM
  • How do we make it work? Is there any solution?
    Wednesday, November 22, 2017 12:07 PM
    • Edited by CateArcher Wednesday, November 22, 2017 5:52 PM
    Wednesday, November 22, 2017 5:51 PM