Help...

  • Question

  • Alright, so I have a project going where I'm using the Kinect with a robot to do some mapping / navigating. I have a good amount of Java experience but have never coded in any other language (still a student, just now learning). At the moment I am trying to create a program that will take a depth and RGB image every 30 seconds or so, save it, and then analyze it to make the next decision. Right now I'm pretty much stuck and I'm having a really hard time continuing. I can't figure out how to get ONE depth and ONE color image out of the Kinect. I'm using code I got from a tutorial where it opens up a stream to the Kinect and streams the images to two windows that I open up. Here is what I have so far...
     
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Data;
    using System.Windows.Documents;
    using System.Windows.Input;
    using System.Windows.Media;
    using System.Windows.Media.Imaging;
    using System.Windows.Navigation;
    using System.Windows.Shapes;
    using Microsoft.Research.Kinect.Audio;
    using Microsoft.Research.Kinect.Nui;
    using Coding4Fun.Kinect.Wpf;
    
    namespace WpfApplication1
    {
        /// <summary>
        /// Interaction logic for MainWindow.xaml
        /// </summary>
        public partial class MainWindow : Window
        {
            private int _distance; // given distance to object
            private int _angle; // given angle to object
            private static int WIDTH = 500; // in mm
            private static int HEIGHT = 500; // in mm
    
            public MainWindow()
            {
                Console.WriteLine("Enter the angle of the objective from the Kinect.");
                string angleString = Console.ReadLine();
                _angle = (int)Double.Parse(angleString); // assign to the field, not a shadowing local
                Console.WriteLine("Enter the distance to the desired object in mm.");
                string distanceString = Console.ReadLine();
                _distance = (int)Double.Parse(distanceString); // assign to the field, not a shadowing local
                InitializeComponent();
            }
    
            Runtime nui = Runtime.Kinects[0];
            private void Window_Loaded(object sender, RoutedEventArgs e)
            {
                Boolean done = false;
                while (!done)
                {
                    nui.Initialize(RuntimeOptions.UseColor | RuntimeOptions.UseDepth);
                    nui.DepthFrameReady += new EventHandler<ImageFrameReadyEventArgs>(nui_DepthFrameReady);
                    nui.VideoFrameReady += new EventHandler<ImageFrameReadyEventArgs>(nui_VideoFrameReady);
                    nui.VideoStream.Open(ImageStreamType.Video, 2, ImageResolution.Resolution640x480, ImageType.Color);
                    nui.DepthStream.Open(ImageStreamType.Depth, 2, ImageResolution.Resolution320x240, ImageType.Depth);
                
                }
            }
    
            void nui_VideoFrameReady(object sender, ImageFrameReadyEventArgs e)
            {
                PlanarImage image = e.ImageFrame.Image;
                image1.Source = BitmapSource.Create(image.Width, image.Height,
                    96, 96, PixelFormats.Bgr32, null, image.Bits, image.Width * image.BytesPerPixel);
                //return image;
            }
    
            void nui_DepthFrameReady(object sender, ImageFrameReadyEventArgs e)
            {
                byte[] ColoredBytes = GenerateColoredBytes(e.ImageFrame); 
                // create an image based on the colored bytes 
                PlanarImage image = e.ImageFrame.Image;
                image2.Source = BitmapSource.Create(image.Width, image.Height, 96, 96, PixelFormats.Bgr32, null, ColoredBytes, image.Width * PixelFormats.Bgr32.BitsPerPixel / 8); 
                //return image; 
                
                //image2.Source = e.ImageFrame.ToBitmapSource();
                
            }
    
            private Byte[] GenerateColoredBytes(ImageFrame imageFrame)
            {
                int w = imageFrame.Image.Width;
                int h = imageFrame.Image.Height;
    
                Byte[] depthData = imageFrame.Image.Bits;
                
                Byte[] colorFrame = new byte[h * w * 4];
    
                const int BlueIndex = 0;
                const int GreenIndex = 1;
                const int RedIndex = 2;
    
                var depthIndex = 0;
                for (var y = 0; y < h; y++)
                {
                    var heightOffset = y * w;
    
                    for (var x = 0; x < w; x++)
                    {
                        var index = ((w - x - 1) + heightOffset) * 4;
                        var distance = getDistance(depthData[depthIndex], depthData[depthIndex + 1]);
                        if (distance == 0)
                        {
                            colorFrame[(int)index + BlueIndex] = 0;
                            colorFrame[(int)index + GreenIndex] = 0;
                            colorFrame[(int)index + RedIndex] = 0;
                        }
                        else if (distance <= 900)
                        {
                            colorFrame[(int)index + BlueIndex] = 255;
                            colorFrame[(int)index + GreenIndex] = 0;
                            colorFrame[(int)index + RedIndex] = 0;
                        }
                        else if (distance > 900 && distance < 2000)
                        {
                            colorFrame[(int)index + BlueIndex] = 0;
                            colorFrame[(int)index + GreenIndex] = 255;
                            colorFrame[(int)index + RedIndex] = 0;
                        }
                        else if (distance >= 2000)
                        {
                            colorFrame[(int)index + BlueIndex] = 0;
                            colorFrame[(int)index + GreenIndex] = 0;
                            colorFrame[(int)index + RedIndex] = 255;
                        }
    
                        depthIndex += 2;
                    }
    
                }
                return colorFrame;
            }
    
            private int getDistance(Byte d, Byte d1)
            {
                return (int)(d | d1 << 8);
            }
    
            private void Window_Closed(object sender, EventArgs e)
            {
                nui.Uninitialize();
            }
        }
    }
    
    
    So I take in a distance in mm and an angle from where the center of the Kinect is facing, and I navigate to that point. I already have the algorithm written and whatnot, but I need to get this program to the point where I am getting one RGB and one depth image from the Kinect to save and modify. I was thinking of putting this in a while(!done) loop: each loop it would grab one image, do all the calculations, get a movement, and then sleep for 30 seconds. But I just... have absolutely no idea how to set that up.
    Thank you to anyone who responds, from a student drowning in work.
    Tuesday, January 24, 2012 11:25 PM

All replies

  • Hi,

    The Kinect SDK is event based. This means that when there is some new data ready for you, you will be notified (by having a method called).

    In the Window_Loaded() method you shouldn't initialize the SDK in a while() loop. Initialization and stream opening should be done once. You register your callback methods, and they will be called whenever there's data available for you. As for your problem: you can't really control the frequency of the updates, but you can simply ignore the data you receive and only process it once every 30 seconds.

     

    Runtime nui = Runtime.Kinects[0];
    private void Window_Loaded(object sender, RoutedEventArgs e)
    {
        nui.Initialize(RuntimeOptions.UseColor | RuntimeOptions.UseDepth);
        nui.DepthFrameReady += new EventHandler<ImageFrameReadyEventArgs>(nui_DepthFrameReady);
        nui.VideoFrameReady += new EventHandler<ImageFrameReadyEventArgs>(nui_VideoFrameReady);
        nui.VideoStream.Open(ImageStreamType.Video, 2, ImageResolution.Resolution640x480, ImageType.Color);
        nui.DepthStream.Open(ImageStreamType.Depth, 2, ImageResolution.Resolution320x240, ImageType.Depth);
                
    }
    

     

    Ok, now that we have the Kinect sensor initialized and the callbacks set up (nui_DepthFrameReady and nui_VideoFrameReady), it's time to process some data. The SDK sends you the data as a byte[] array. With this, you can do whatever you want (create a texture, save it to disk, process it as it is).

    This is an example of the RGB callback, which processes an image only once every 30 seconds (note that oldSecond must be a class field so it persists between calls):

    int oldSecond = 0; // declare this as a class field: second-of-minute of the last processed frame

    void nui_VideoFrameReady(object sender, ImageFrameReadyEventArgs e)
    {
        int newSecond = DateTime.Now.Second;
        if ((newSecond - oldSecond + 60) % 60 >= 30)
        {
            oldSecond = newSecond;
            PlanarImage img = e.ImageFrame.Image;
            byte[] imagedata = img.Bits;

            // Here you can process imagedata in any way you want

        }
    }
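
    Since the byte[] can also be saved straight to disk, here is a minimal sketch of writing one Bgr32 frame out as a PNG using WPF's PngBitmapEncoder. The SaveFrame helper name and the file path are illustrative, not part of the Kinect SDK:

    // Illustrative helper: write a raw Bgr32 pixel buffer (e.g. img.Bits) to disk as a PNG.
    // Requires: using System.IO; using System.Windows.Media; using System.Windows.Media.Imaging;
    private void SaveFrame(byte[] pixels, int width, int height, string path)
    {
        // Bgr32 is four bytes per pixel, so the stride (bytes per row) is width * 4.
        BitmapSource bitmap = BitmapSource.Create(width, height, 96, 96,
            PixelFormats.Bgr32, null, pixels, width * 4);

        PngBitmapEncoder encoder = new PngBitmapEncoder();
        encoder.Frames.Add(BitmapFrame.Create(bitmap));
        using (FileStream stream = new FileStream(path, FileMode.Create))
        {
            encoder.Save(stream);
        }
    }

    For example, inside the 30-second branch above you could call SaveFrame(imagedata, img.Width, img.Height, "rgb.png"). The depth image can be saved the same way by passing the colored bytes returned by GenerateColoredBytes. One caveat: a second-of-minute comparison only works for intervals under 60 seconds; for longer intervals, store a full DateTime and compare with a TimeSpan.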
    

    • Edited by reydan Friday, January 27, 2012 8:45 AM
    • Proposed as answer by Robert A. Wlodarczyk Wednesday, February 1, 2012 4:15 PM
    Friday, January 27, 2012 8:44 AM