KinectSensor.MapDepthFrameToColorFrame

  • Question

  • I'm doing a project and I need to map a 2D image to a 2.5D image from the Kinect, pixel by pixel.

    I'm trying to use the "Sensor.MapDepthFrameToColorFrame" method, but I don't know what the output of this function is or how I can display the result as an image.

    Your kind reply would be really appreciated.

    Sunday, July 21, 2013 12:34 PM


All replies

  • Hello,

    KinectSensor.MapDepthFrameToColorFrame is obsolete (deprecated starting with version 1.6); you should use KinectSensor.CoordinateMapper.MapDepthFrameToColorFrame instead (http://msdn.microsoft.com/en-us/library/microsoft.kinect.coordinatemapper_members.aspx).


    There is an example/demo in the Kinect_Component/GreenScreen-WPF directory (you can browse the samples with the Developer Toolkit Browser 1.7).
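
    If it helps, the call itself looks roughly like this (a rough sketch, not tested; depthPixels and colorPoints are placeholder names, and colorPoints needs one element per depth pixel):

    // Sketch: map every depth pixel to its (X, Y) position in the color image.
    DepthImagePixel[] depthPixels = new DepthImagePixel[sensor.DepthStream.FramePixelDataLength];
    ColorImagePoint[] colorPoints = new ColorImagePoint[sensor.DepthStream.FramePixelDataLength];

    // fill depthPixels from a DepthImageFrame with CopyDepthImagePixelDataTo(depthPixels), then:
    sensor.CoordinateMapper.MapDepthFrameToColorFrame(
        DepthImageFormat.Resolution320x240Fps30,    // the format the depth stream was enabled with
        depthPixels,
        ColorImageFormat.RgbResolution640x480Fps30, // the format the color stream was enabled with
        colorPoints);                               // output: one ColorImagePoint (X, Y) per depth pixel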

    Monday, July 22, 2013 12:34 PM
  • Thanks a lot, Vlad_Eduardo.

    I have a problem: the output of the function is only red. Can you see the image below and tell me what may cause the problem?


    Tuesday, July 23, 2013 6:56 AM
  • I don't know.

    This is the example code from Microsoft:

    //------------------------------------------------------------------------------
    // <copyright file="MainWindow.xaml.cs" company="Microsoft">
    //     Copyright (c) Microsoft Corporation.  All rights reserved.
    // </copyright>
    //------------------------------------------------------------------------------
    
    namespace Microsoft.Samples.Kinect.GreenScreen
    {
        using System;
        using System.Diagnostics;
        using System.Globalization;
        using System.IO;
        using System.Windows;
        using System.Windows.Media;
        using System.Windows.Media.Imaging;
        using Microsoft.Kinect;
    
        /// <summary>
        /// Interaction logic for MainWindow.xaml
        /// </summary>
        public partial class MainWindow : Window
        {   
            /// <summary>
            /// Format we will use for the depth stream
            /// </summary>
            private const DepthImageFormat DepthFormat = DepthImageFormat.Resolution320x240Fps30;
    
            /// <summary>
            /// Format we will use for the color stream
            /// </summary>
            private const ColorImageFormat ColorFormat = ColorImageFormat.RgbResolution640x480Fps30;
    
            /// <summary>
            /// Active Kinect sensor
            /// </summary>
            private KinectSensor sensor;
    
            /// <summary>
            /// Bitmap that will hold color information
            /// </summary>
            private WriteableBitmap colorBitmap;
    
            /// <summary>
            /// Bitmap that will hold opacity mask information
            /// </summary>
            private WriteableBitmap playerOpacityMaskImage = null;
    
            /// <summary>
            /// Intermediate storage for the depth data received from the sensor
            /// </summary>
            private DepthImagePixel[] depthPixels;
    
            /// <summary>
            /// Intermediate storage for the color data received from the camera
            /// </summary>
            private byte[] colorPixels;
    
            /// <summary>
            /// Intermediate storage for the green screen opacity mask
            /// </summary>
            private int[] greenScreenPixelData;
    
            /// <summary>
            /// Intermediate storage for the depth to color mapping
            /// </summary>
            private ColorImagePoint[] colorCoordinates;
    
            /// <summary>
            /// Inverse scaling factor between color and depth
            /// </summary>
            private int colorToDepthDivisor;
    
            /// <summary>
            /// Width of the depth image
            /// </summary>
            private int depthWidth;
    
            /// <summary>
            /// Height of the depth image
            /// </summary>
            private int depthHeight;
    
            /// <summary>
            /// Indicates opaque in an opacity mask
            /// </summary>
            private int opaquePixelValue = -1;
    
            /// <summary>
            /// Initializes a new instance of the MainWindow class.
            /// </summary>
            public MainWindow()
            {
                InitializeComponent();
            }
    
            /// <summary>
            /// Execute startup tasks
            /// </summary>
            /// <param name="sender">object sending the event</param>
            /// <param name="e">event arguments</param>
            private void WindowLoaded(object sender, RoutedEventArgs e)
            {
                // Look through all sensors and start the first connected one.
                // This requires that a Kinect is connected at the time of app startup.
                // To make your app robust against plug/unplug, 
                // it is recommended to use KinectSensorChooser provided in Microsoft.Kinect.Toolkit
                foreach (var potentialSensor in KinectSensor.KinectSensors)
                {
                    if (potentialSensor.Status == KinectStatus.Connected)
                    {
                        this.sensor = potentialSensor;
                        break;
                    }
                }
    
                if (null != this.sensor)
                {
                    // Turn on the depth stream to receive depth frames
                    this.sensor.DepthStream.Enable(DepthFormat);
    
                    this.depthWidth = this.sensor.DepthStream.FrameWidth;
    
                    this.depthHeight = this.sensor.DepthStream.FrameHeight;
    
                    this.sensor.ColorStream.Enable(ColorFormat);
    
                    int colorWidth = this.sensor.ColorStream.FrameWidth;
                    int colorHeight = this.sensor.ColorStream.FrameHeight;
    
                    this.colorToDepthDivisor = colorWidth / this.depthWidth;
    
                    // Turn on to get player masks
                    this.sensor.SkeletonStream.Enable();
    
                    // Allocate space to put the depth pixels we'll receive
                    this.depthPixels = new DepthImagePixel[this.sensor.DepthStream.FramePixelDataLength];
    
                    // Allocate space to put the color pixels we'll create
                    this.colorPixels = new byte[this.sensor.ColorStream.FramePixelDataLength];
    
                    this.greenScreenPixelData = new int[this.sensor.DepthStream.FramePixelDataLength];
    
                    this.colorCoordinates = new ColorImagePoint[this.sensor.DepthStream.FramePixelDataLength];
    
                    // This is the bitmap we'll display on-screen
                    this.colorBitmap = new WriteableBitmap(colorWidth, colorHeight, 96.0, 96.0, PixelFormats.Bgr32, null);
    
                    // Set the image we display to point to the bitmap where we'll put the image data
                    this.MaskedColor.Source = this.colorBitmap;
    
                    // Add an event handler to be called whenever there is new depth frame data
                    this.sensor.AllFramesReady += this.SensorAllFramesReady;
    
                    // Start the sensor!
                    try
                    {
                        this.sensor.Start();
                    }
                    catch (IOException)
                    {
                        this.sensor = null;
                    }
                }
    
                if (null == this.sensor)
                {
                    this.statusBarText.Text = Properties.Resources.NoKinectReady;
                }
            }
    
            /// <summary>
            /// Execute shutdown tasks
            /// </summary>
            /// <param name="sender">object sending the event</param>
            /// <param name="e">event arguments</param>
            private void WindowClosing(object sender, System.ComponentModel.CancelEventArgs e)
            {
                if (null != this.sensor)
                {
                    this.sensor.Stop();
                    this.sensor = null;
                }
            }
    
            /// <summary>
            /// Event handler for Kinect sensor's DepthFrameReady event
            /// </summary>
            /// <param name="sender">object sending the event</param>
            /// <param name="e">event arguments</param>
            private void SensorAllFramesReady(object sender, AllFramesReadyEventArgs e)
            {
                // in the middle of shutting down, so nothing to do
                if (null == this.sensor)
                {
                    return;
                }
    
                bool depthReceived = false;
                bool colorReceived = false;
    
                using (DepthImageFrame depthFrame = e.OpenDepthImageFrame())
                {
                    if (null != depthFrame)
                    {
                        // Copy the pixel data from the image to a temporary array
                        depthFrame.CopyDepthImagePixelDataTo(this.depthPixels);
    
                        depthReceived = true;
                    }
                }
    
                using (ColorImageFrame colorFrame = e.OpenColorImageFrame())
                {
                    if (null != colorFrame)
                    {
                        // Copy the pixel data from the image to a temporary array
                        colorFrame.CopyPixelDataTo(this.colorPixels);
    
                        colorReceived = true;
                    }
                }
    
                // do our processing outside of the using block
                // so that we return resources to the kinect as soon as possible
                if (true == depthReceived)
                {
                    this.sensor.CoordinateMapper.MapDepthFrameToColorFrame(
                        DepthFormat,
                        this.depthPixels,
                        ColorFormat,
                        this.colorCoordinates);
    
                    Array.Clear(this.greenScreenPixelData, 0, this.greenScreenPixelData.Length);
    
                    // loop over each row and column of the depth
                    for (int y = 0; y < this.depthHeight; ++y)
                    {
                        for (int x = 0; x < this.depthWidth; ++x)
                        {
                            // calculate index into depth array
                            int depthIndex = x + (y * this.depthWidth);
    
                            DepthImagePixel depthPixel = this.depthPixels[depthIndex];
    
                            int player = depthPixel.PlayerIndex;
    
                            // if we're tracking a player for the current pixel, do green screen
                            if (player > 0)
                            {
                                // retrieve the depth to color mapping for the current depth pixel
                                ColorImagePoint colorImagePoint = this.colorCoordinates[depthIndex];
    
                                // scale color coordinates to depth resolution
                                int colorInDepthX = colorImagePoint.X / this.colorToDepthDivisor;
                                int colorInDepthY = colorImagePoint.Y / this.colorToDepthDivisor;
    
                                // make sure the depth pixel maps to a valid point in color space
                                // check y >= 0 and y < depthHeight to make sure we don't write outside of the array
                                // check x > 0 instead of >= 0 since to fill gaps we set opaque current pixel plus the one to the left
                                // because of how the sensor works it is more correct to do it this way than to set to the right
                                if (colorInDepthX > 0 && colorInDepthX < this.depthWidth && colorInDepthY >= 0 && colorInDepthY < this.depthHeight)
                                {
                                    // calculate index into the green screen pixel array
                                    int greenScreenIndex = colorInDepthX + (colorInDepthY * this.depthWidth);
    
                                    // set opaque
                                    this.greenScreenPixelData[greenScreenIndex] = opaquePixelValue;
    
                                    // compensate for depth/color not corresponding exactly by setting the pixel 
                                    // to the left to opaque as well
                                    this.greenScreenPixelData[greenScreenIndex - 1] = opaquePixelValue;
                                }
                            }
                        }
                    }
                }
    
                // do our processing outside of the using block
                // so that we return resources to the kinect as soon as possible
                if (true == colorReceived)
                {
                    // Write the pixel data into our bitmap
                    this.colorBitmap.WritePixels(
                        new Int32Rect(0, 0, this.colorBitmap.PixelWidth, this.colorBitmap.PixelHeight),
                        this.colorPixels,
                        this.colorBitmap.PixelWidth * sizeof(int),
                        0);
    
                    if (this.playerOpacityMaskImage == null)
                    {
                        this.playerOpacityMaskImage = new WriteableBitmap(
                            this.depthWidth,
                            this.depthHeight,
                            96,
                            96,
                            PixelFormats.Bgra32,
                            null);
    
                        MaskedColor.OpacityMask = new ImageBrush { ImageSource = this.playerOpacityMaskImage };
                    }
    
                    this.playerOpacityMaskImage.WritePixels(
                        new Int32Rect(0, 0, this.depthWidth, this.depthHeight),
                        this.greenScreenPixelData,
                        this.depthWidth * ((this.playerOpacityMaskImage.Format.BitsPerPixel + 7) / 8),
                        0);
                }
            }
    
            /// <summary>
            /// Handles the user clicking on the screenshot button
            /// </summary>
            /// <param name="sender">object sending the event</param>
            /// <param name="e">event arguments</param>
            private void ButtonScreenshotClick(object sender, RoutedEventArgs e)
            {
                if (null == this.sensor)
                {
                    this.statusBarText.Text = Properties.Resources.ConnectDeviceFirst;
                    return;
                }
    
                int colorWidth = this.sensor.ColorStream.FrameWidth;
                int colorHeight = this.sensor.ColorStream.FrameHeight;
    
                // create a render target that we'll render our controls to
                RenderTargetBitmap renderBitmap = new RenderTargetBitmap(colorWidth, colorHeight, 96.0, 96.0, PixelFormats.Pbgra32);
    
                DrawingVisual dv = new DrawingVisual();
                using (DrawingContext dc = dv.RenderOpen())
                {
                    // render the backdrop
                    VisualBrush backdropBrush = new VisualBrush(Backdrop);
                    dc.DrawRectangle(backdropBrush, null, new Rect(new Point(), new Size(colorWidth, colorHeight)));
    
                    // render the color image masked out by players
                    VisualBrush colorBrush = new VisualBrush(MaskedColor);
                    dc.DrawRectangle(colorBrush, null, new Rect(new Point(), new Size(colorWidth, colorHeight)));
                }
    
                renderBitmap.Render(dv);
        
                // create a png bitmap encoder which knows how to save a .png file
                BitmapEncoder encoder = new PngBitmapEncoder();
    
                // create frame from the writable bitmap and add to encoder
                encoder.Frames.Add(BitmapFrame.Create(renderBitmap));
    
                string time = System.DateTime.Now.ToString("hh'-'mm'-'ss", CultureInfo.CurrentUICulture.DateTimeFormat);
    
                string myPhotos = Environment.GetFolderPath(Environment.SpecialFolder.MyPictures);
    
                string path = Path.Combine(myPhotos, "KinectSnapshot-" + time + ".png");
    
                // write the new file to disk
                try
                {
                    using (FileStream fs = new FileStream(path, FileMode.Create))
                    {
                        encoder.Save(fs);
                    }
    
                    this.statusBarText.Text = string.Format("{0} {1}", Properties.Resources.ScreenshotWriteSuccess, path);
                }
                catch (IOException)
                {
                    this.statusBarText.Text = string.Format("{0} {1}", Properties.Resources.ScreenshotWriteFailed, path);
                }
            }
            
            /// <summary>
            /// Handles the checking or unchecking of the near mode combo box
            /// </summary>
            /// <param name="sender">object sending the event</param>
            /// <param name="e">event arguments</param>
            private void CheckBoxNearModeChanged(object sender, RoutedEventArgs e)
            {
                if (this.sensor != null)
                {
                    // will not function on non-Kinect for Windows devices
                    try
                    {
                        if (this.checkBoxNearMode.IsChecked.GetValueOrDefault())
                        {
                            this.sensor.DepthStream.Range = DepthRange.Near;
                        }
                        else
                        {
                            this.sensor.DepthStream.Range = DepthRange.Default;
                        }
                    }
                    catch (InvalidOperationException)
                    {
                    }
                }
            }
        }
    }

    Tuesday, July 23, 2013 8:17 AM
  • Thanks.

    I have read this code. I know that in this function colorCoordinates stores the mapped data. What I want to know is how to display the mapped frame; I mean, I want a single frame with both color and depth.


    Tuesday, July 23, 2013 8:40 AM
  • If you want to superimpose the color and the depth in a single frame, I don't know how to do it; maybe show it as a point cloud (http://pointclouds.org/) with RGB-D (red-green-blue-depth).

    colorCoordinates is an array which contains, for each depth index, the corresponding (X, Y) coordinates in the color image.
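
    For example, here is a minimal, untested sketch of how you could use it to build a color image that is aligned to the depth frame. It reuses the field names from the GreenScreen sample above (depthPixels, colorPixels, colorCoordinates, depthWidth, depthHeight); the method name and the colorWidth/colorHeight parameters are just placeholders.

    // Sketch only: build a Bgr32 image the size of the depth frame, where each pixel
    // takes its color from the mapped (X, Y) stored in colorCoordinates.
    private byte[] BuildColorAlignedToDepth(int colorWidth, int colorHeight)
    {
        // 4 bytes per pixel (Bgr32), one output pixel per depth pixel
        byte[] mapped = new byte[this.depthWidth * this.depthHeight * 4];

        for (int depthIndex = 0; depthIndex < this.depthPixels.Length; ++depthIndex)
        {
            ColorImagePoint p = this.colorCoordinates[depthIndex];

            // skip depth pixels that map outside the color image
            if (p.X < 0 || p.X >= colorWidth || p.Y < 0 || p.Y >= colorHeight)
            {
                continue;
            }

            int colorIndex = (p.X + (p.Y * colorWidth)) * 4;  // source index in the Bgr32 color array
            int mappedIndex = depthIndex * 4;                 // destination index in the depth-sized image

            mapped[mappedIndex] = this.colorPixels[colorIndex];         // B
            mapped[mappedIndex + 1] = this.colorPixels[colorIndex + 1]; // G
            mapped[mappedIndex + 2] = this.colorPixels[colorIndex + 2]; // R

            // the depth itself is still available as this.depthPixels[depthIndex].Depth (millimeters),
            // so the same loop could emit RGB-D points for a point cloud instead
        }

        return mapped;
    }

    You could then show it in a WriteableBitmap sized like the depth frame, e.g. mappedBitmap.WritePixels(new Int32Rect(0, 0, this.depthWidth, this.depthHeight), mapped, this.depthWidth * 4, 0).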

    Hope this helps.

    Tuesday, July 23, 2013 9:28 AM
  • Thanks a lot :)

    I'll try.

    Tuesday, July 23, 2013 11:32 AM