Lumia SDK - How do you get from an image file to pixels in a custom filter?

  • Question

  • I'm trying to create my own custom filter for an image file, but I don't understand how to get from the image to manipulating the pixels inside the filter.

    The custom filter class looks like this:

        public class CustomEffect : CustomEffectBase
        {
            public CustomEffect(IImageProvider source)
                : base(source)
            {
            }

            protected override void OnProcess(PixelRegion sourcePixelRegion, PixelRegion targetPixelRegion)
            {
                var sourcePixels = sourcePixelRegion.ImagePixels;
                var targetPixels = targetPixelRegion.ImagePixels;

                sourcePixelRegion.ForEachRow((index, width, position) =>
                {
                    for (int x = 0; x < width; ++x, ++index)
                    {
                        // Custom per-pixel code goes here.
                    }
                });
            }
        }

    And I'm going to apply my filter in the method below. I first get the image file the user selected, create an IImageSource, and then pass it to the CustomEffect filter I created. However, there is an OnProcess method that takes PixelRegion parameters, and it is in this method that I have to write my code to transform the pixels. But I'm not clear on what (if anything) I have to do to get the PixelRegions.

        private async Task<bool> ApplyFilterAsync(StorageFile file)
        {
            // Open a stream for the selected file.
            IRandomAccessStream fileStream = await file.OpenAsync(FileAccessMode.Read);

            string errorMessage = null;

            try
            {
                var source = new RandomAccessStreamImageSource(fileStream);

                CustomEffect customEffect = new CustomEffect(source);

                // This is where I'm stuck: how do I get from here to OnProcess?
            }
            catch (Exception exception)
            {
                errorMessage = exception.Message;
            }

            return errorMessage == null;
        }
    Friday, February 27, 2015 2:32 PM

Answers

  • What is passing the PixelRegions to this method?  Is the source the input image, and target the result that is output from the RenderAsync()?

    When processing starts, CustomEffectBase calls the OnProcess method when appropriate. I don't know how the current implementation (in 2.0) works, but back in 1.x, source and target would have been intermediate buffers (something that becomes clear when writing a custom effect in C++ instead of C#, since there your implementation has to create those buffers itself; CustomEffectBase already takes care of that for you).

    The necessity of buffering between stages becomes clear when you chain several effects behind each other. If, for example, you first crop an image and then apply an effect to the cropped image, you would not want to process all pixels of the source image, but only those that are still there after cropping.
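    To make the buffer round trip concrete: inside OnProcess you read from the source buffer and write to the target buffer. Here is a minimal grayscale sketch, assuming the PixelRegion API shown in the question and that pixels are packed as 0xAARRGGBB uints (an illustration, not taken from the SDK samples):

        protected override void OnProcess(PixelRegion sourcePixelRegion, PixelRegion targetPixelRegion)
        {
            var sourcePixels = sourcePixelRegion.ImagePixels;
            var targetPixels = targetPixelRegion.ImagePixels;

            sourcePixelRegion.ForEachRow((index, width, position) =>
            {
                for (int x = 0; x < width; ++x, ++index)
                {
                    uint pixel = sourcePixels[index];
                    uint a = (pixel >> 24) & 0xFF;
                    uint r = (pixel >> 16) & 0xFF;
                    uint g = (pixel >> 8) & 0xFF;
                    uint b = pixel & 0xFF;

                    // Average the channels for a simple grayscale conversion,
                    // then write the result into the target buffer.
                    uint gray = (r + g + b) / 3;
                    targetPixels[index] = (a << 24) | (gray << 16) | (gray << 8) | gray;
                }
            });
        }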

    Friday, February 27, 2015 9:30 PM

All replies

  • The Lumia Imaging SDK uses a pipeline concept in order to allow filters, etc. to be combined and reused in different ways. Basically, you set up an ImageSource object, put several filters behind each other, and finally pipe that into a Renderer that writes the resulting data in some format (e.g. a WriteableBitmapRenderer if you want the result in a WriteableBitmap, or a JpegRenderer if you want to write the result to a JPEG file immediately).

    In your case you would therefore use a StorageFileImageSource (namespace Lumia.Imaging) and use that as the source for the CustomEffect. Then you use, for example, the WriteableBitmapRenderer to render the result to a WriteableBitmap. In order to create the pipeline, you provide your CustomEffect as the source for the Renderer. Once you call renderer.RenderAsync(), it will take care of the rest (the ImageSource will provide the data to the CustomEffect, the CustomEffect will apply the transformation, and the Renderer will write the data to the destination).

    So it would look something like this (pseudo code to illustrate the principle):

    Source src = new Source("filename.jpg");

    CustomEffect eff = new CustomEffect(src);

    Renderer renderer = new Renderer(eff, destination);

    await renderer.RenderAsync();
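    Filled in with the actual type names from the explanation above, a sketch could look like this (hedged: ApplyCustomEffectAsync and the file/bitmap parameters are placeholder names, and the namespaces involved are Lumia.Imaging, Windows.Storage, and Windows.UI.Xaml.Media.Imaging):

        private async Task ApplyCustomEffectAsync(StorageFile file, WriteableBitmap bitmap)
        {
            using (var source = new StorageFileImageSource(file))
            using (var effect = new CustomEffect(source))
            using (var renderer = new WriteableBitmapRenderer(effect, bitmap))
            {
                // RenderAsync drives the whole pipeline:
                // source -> CustomEffect.OnProcess -> bitmap.
                await renderer.RenderAsync();
            }
        }

    The using blocks are there because the sources, effects, and renderers hold on to native buffers, so disposing them as soon as rendering completes frees that memory promptly.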

    Friday, February 27, 2015 6:26 PM
  • Ok thank you. The thing I'm not clear on is this method, in which I write all my code for the custom effect:

    protected override void OnProcess(PixelRegion sourcePixelRegion, PixelRegion targetPixelRegion)

    What is passing the PixelRegions to this method?  Is the source the input image, and target the result that is output from the RenderAsync()?


    Friday, February 27, 2015 7:35 PM