Cloud points in Kinect Fusion

  • Question

  • I got a point cloud using the point-cloud approach of Kinect Fusion. This is how I get the cloud points:

    Code:

    _points = GetPointCloud();

    public Point3DCollection GetPointCloud()
    {
        // Extract a mesh from the reconstruction volume and convert its
        // vertices into a Point3DCollection.
        using (var m = volume.CalculateMesh(1))
        {
            return Utils.Point3dFromVertCollection(m.GetVertices());
        }
    }

    Please clarify some of my doubts about the point cloud in Kinect Fusion:

    1. How can I get the points of the point cloud where the actual human is standing?

    2. Is there some way I can superimpose the points on the human on screen?


    Thanks in advance

    Sunday, May 12, 2013 3:04 PM


All replies

  • Kinect Fusion provides data for all points in the scene. It doesn't distinguish between human and non-human objects.

    However, it should be possible, using some 3D math, to align the skeleton joint data from the skeleton stream with the data collected by Kinect Fusion. From a skeleton joint position, you should be able to locate nearby surfaces in the Kinect Fusion data, which presumably would be the surfaces of the human to which that joint belongs.
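    A minimal C# sketch of that idea, assuming the Fusion vertices in _points have already been brought into the same camera space as the skeleton (Fusion meshes live in the reconstruction volume's own coordinate frame, so that alignment is the "3D math" step above); first is the tracked skeleton, and the joint and radius are illustrative:

    // Sketch only: collect Fusion surface points near a skeleton joint.
    SkeletonPoint head = first.Joints[JointType.Head].Position;
    const double radius = 0.15;          // metres; tune for your scene
    double r2 = radius * radius;

    var nearHead = new List<Point3D>();
    foreach (Point3D p in _points)
    {
        double dx = p.X - head.X;
        double dy = p.Y - head.Y;
        double dz = p.Z - head.Z;
        if (dx * dx + dy * dy + dz * dz <= r2)
        {
            nearHead.Add(p);             // surface points around the head
        }
    }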


    John | Kinect for Windows development team

    Monday, May 13, 2013 9:28 PM
  • Thanks John :)

    I am a little bit confused about the 3D skeleton points (X, Y, Z) and the point-cloud points (X, Y, Z): are they the same?

    The reason I ask is that when I loop over the point-cloud data and check whether the skeleton 3D (X, Y, Z) is within the point-cloud data, there seems to be no match between the point-cloud data and the skeleton 3D (X, Y, Z).

    Code Snippet:

    double handX = first.Joints[JointType.HandLeft].Position.X;
    double handY = first.Joints[JointType.HandLeft].Position.Y;
    double handZ = first.Joints[JointType.HandLeft].Position.Z;

    foreach (Point3D g in _points)
    {
        // Note: exact floating-point equality between a mesh vertex and
        // a joint coordinate will almost never hold.
        if (g.X == handX)
        {
            skeletonX = g.X;
        }
        if (g.Y == handY)
        {
            skeletonY = g.Y;
        }
        if (g.Z == handZ)
        {
            skeletonZ = g.Z;
        }
    }

    Please suggest some ideas on how to relate the skeleton 3D (X, Y, Z) with the point-cloud 3D (X, Y, Z) of Kinect Fusion.


    Tuesday, May 14, 2013 6:41 AM
  • You could manipulate the depth of the pixels that don't belong to the player and then use this depth image for Kinect Fusion.

    With the right threshold for minimum/maximum depth, you could then isolate the player in Kinect Fusion.
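    A minimal C# sketch of that idea (SDK 1.x), assuming depthPixels is a DepthImagePixel[] already copied from the depth frame; the out-of-range value is illustrative:

    // Sketch only: push every non-player pixel out of Fusion's depth
    // range so that min/max depth thresholding keeps the player alone.
    const short outOfRange = 0;   // treated as unknown/invalid depth

    for (int i = 0; i < depthPixels.Length; i++)
    {
        if (depthPixels[i].PlayerIndex == 0)
        {
            depthPixels[i].Depth = outOfRange;
        }
    }
    // Then hand depthPixels to the Fusion processing with suitable
    // minimum/maximum depth thresholds.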

    Wednesday, May 15, 2013 2:58 PM
  • Thanks arctorx for your answer.

    But my question is not about finding the player. I am very confused about whether the 3D skeleton points (X, Y, Z) and the point-cloud points (X, Y, Z) are the same.

    After I enable skeleton tracking, Kinect Fusion is tracking the player.

    But my biggest doubt is this: when I loop over the point-cloud data and check whether the skeleton 3D (X, Y, Z) is within the point-cloud data, it does not match.

    My other concern is to get the exact point of the point cloud where the skeleton head 3D (X, Y, Z) is located, using the code snippet from my previous reply.

    Thanks again

    Steve

    Wednesday, May 15, 2013 4:49 PM
  • What about enabling the player index on the depthmap, then filtering the depthmap based on the player index, and finally feeding that prefiltered depthmap into the Fusion codepath?

    --Dale

    Thursday, May 16, 2013 11:19 AM
  • I already do that: I get only the point cloud of the head, with help from OpenCV face detection, but it isn't really accurate. Is there another way to filter the point cloud to find an object? What I do, before NuiFusionDepthToDepthFloatImage, is manipulate the NUI_DEPTH_IMAGE_PIXEL array and set the depth of the pixels I don't want to 0.1. With OpenCV I can find the pixels of the face, and the depths of all other pixels are set to 0.1. It works, but each tiny move rotates the whole point cloud. Does anyone have any idea?

    Thursday, May 16, 2013 1:12 PM
  • Thanks, I am currently working on exactly that method to extract the human point cloud.

    Since I am doing all my work in the Kinect SDK, please give suggestions for the Kinect SDK, not OpenCV.

    Assume that I have extracted the human cloud points from the depth map. My ultimate goal is to take, from the point cloud, the data matching the hand 3D (X, Y, Z) point.

    How do I achieve the above scenario?

    Thursday, May 16, 2013 1:27 PM
  • If you have the XYZ coordinate of your hand, you can get the position of the hand in the depth map with NuiTransformSkeletonToDepthImage. Once you have this position in the depth map, you can set the other depth pixels to 0.1 and leave only the area of the hand unchanged. Kinect Fusion will then process the depth pixels of the hand; the other ones will be outside the threshold.
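    A hedged C# sketch of that approach; the managed counterpart of the native NuiTransformSkeletonToDepthImage is CoordinateMapper.MapSkeletonPointToDepthPoint, and the window size, resolution, and the sensor/depthPixels/first names are assumptions:

    // Sketch only: keep a window of depth pixels around the hand joint
    // and push everything else out of Fusion's depth range.
    SkeletonPoint hand = first.Joints[JointType.HandLeft].Position;
    DepthImagePoint handInDepth =
        sensor.CoordinateMapper.MapSkeletonPointToDepthPoint(
            hand, DepthImageFormat.Resolution640x480Fps30);

    const int window = 40;      // half-size of the kept region, in pixels
    const short outOfRange = 0;

    for (int y = 0; y < 480; y++)
    {
        for (int x = 0; x < 640; x++)
        {
            bool nearHand = Math.Abs(x - handInDepth.X) <= window &&
                            Math.Abs(y - handInDepth.Y) <= window;
            if (!nearHand)
            {
                depthPixels[y * 640 + x].Depth = outOfRange;
            }
        }
    }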
    Thursday, May 16, 2013 5:22 PM
  • Thanks a lot for all of your support :)

     Hooray, at last I have isolated the human from the depth image and displayed it in the Kinect Fusion point cloud.

     Code Snippet:

    depthData.DepthImagePixels = new DepthImagePixel[depthStream.FramePixelDataLength];
    depthFrame.CopyPixelDataTo(pixelData1);

    // Take the raw depth at the centre of the frame and shift away the
    // 3 player-index bits to get a depth value in millimetres.
    int x = depthFrame.Width / 2;
    int y = depthFrame.Height / 2;
    int d1 = (ushort)pixelData1[x + y * depthFrame.Width];
    d1 = d1 >> 3;

    segval = CreatePlayerDepthImage1(depthFrame, depthData.DepthImagePixels, pixelData1, d1);

    private DepthImagePixel[] CreatePlayerDepthImage1(DepthImageFrame depthFrame,
        DepthImagePixel[] pixelData1, short[] pixelData, int d1)
    {
        DepthImagePixel[] pixelData2 = new DepthImagePixel[pixelData1.Length];
        for (int i = 0; i < pixelData.Length; i++)
        {
            playerIndex = pixelData[i] & DepthImageFrame.PlayerIndexBitmask;
            // Keep UI updates (e.g. textBox2.Text = playerIndex.ToString();)
            // out of this per-pixel loop: running one for each of the
            // ~300,000 pixels per frame will stall rendering badly.
            if (playerIndex != 0)
            {
                pixelData2[i].Depth = (short)d1;
                pixelData2[i].PlayerIndex = (short)playerIndex;
            }
        }
        return pixelData2;
    }

    But this code runs very slowly, because it iterates over the whole depth frame, and as a result the human's movement on screen is very sluggish.

    Is there any other way to avoid this loop and isolate the human, or otherwise to speed up the rendering of the human in the output?

    Note: the graphics card I am using is an NVIDIA GeForce GTX 210 with 1 GB.


    Friday, May 17, 2013 4:50 PM
  • http://msdn.microsoft.com/en-us/library/jj663803

    For Kinect Fusion:

     " Recommended Hardware

    Desktop PC with 3GHz (or better) multi-core processor and a graphics card with 2GB or more of dedicated on-board memory. Kinect Fusion has been tested for high-end scenarios on a NVidia GeForce GTX680 and AMD Radeon HD 7850.

    Note: It is possible to use Kinect Fusion on laptop class GPU hardware, but this typically runs significantly slower than desktop-class hardware. In general, aim to process at the same frame rate as the Kinect sensor (30fps) to enable the most robust camera pose tracking."

    Friday, May 17, 2013 5:09 PM
  • Thanks.

    I am going to use a GTX 680. For now, when I try the above logic without that graphics card, it runs at 6 to 10 frames per second, and after that the application hangs.

    Tuesday, May 21, 2013 10:31 AM
  • That then...is probably a bug in the code. Time to trace your code in the debugger. ;-)

    I use depth maps and color streams, plus generating a point cloud from the depth map, all from the Kinect at 30 frames/sec on my laptop's GTX 680M, running for hours. Personally, I don't use Fusion and instead create my point-cloud data using the NuiTransformDepthImageToSkeleton() or MapDepthPointToSkeletonPoint() API, roughly as sketched below.

    It will work. Be patient as you learn and spend time with your code and the debugger. And I hope you have fun! :-)
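    A minimal sketch of that no-Fusion approach, assuming SDK 1.x C#, a 640x480 depth stream, and sensor/depthPixels variables like those in the earlier snippets (all names illustrative):

    // Sketch only: build a point cloud straight from the depth map using
    // the managed coordinate mapper, with no Fusion involved.
    var cloud = new List<SkeletonPoint>();
    const int step = 2;   // subsample for speed; 1 = every pixel

    for (int y = 0; y < 480; y += step)
    {
        for (int x = 0; x < 640; x += step)
        {
            DepthImagePixel px = depthPixels[y * 640 + x];
            if (!px.IsKnownDepth)
            {
                continue;   // skip invalid / out-of-range pixels
            }
            var dp = new DepthImagePoint { X = x, Y = y, Depth = px.Depth };
            SkeletonPoint sp = sensor.CoordinateMapper.MapDepthPointToSkeletonPoint(
                DepthImageFormat.Resolution640x480Fps30, dp);
            cloud.Add(sp);  // camera-space coordinates, in metres
        }
    }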


    --Dale

    Tuesday, May 21, 2013 11:53 AM
  • Since I need the DepthImagePixels of the player, I need to loop through the entire depth frame and, wherever the player is found, update his pixels.

    Actually, I am writing this code in C#.

    Can you provide me with any other logic for getting the player's DepthImagePixels?

    I also welcome any suggestion for avoiding the loop over the whole depth frame when getting the player pixels.

    Thanks

    Sam

    Tuesday, May 21, 2013 4:41 PM
  • I believe you will have to do the loop: read the depth map the Kinect API gives you and generate a new map with only the pixels you want, based on the player bits. I do not think you can avoid the loop.

    In code that I have written, I do a similar thing to what you do: I get the depth map using the API, loop through it to filter only the pixels with player bits, and then create a new depth map from the result, roughly as in the sketch below. I do all this at 30 fps and still have at least 80% of my CPU available on my laptop.
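    A minimal sketch of such a per-frame filter loop, assuming the buffers are allocated once and reused across frames (which, together with keeping UI updates out of the loop, is what makes 30 fps feasible); buffer names and resolution are illustrative:

    // Sketch only. Allocate once, outside the per-frame handler:
    short[] rawDepth = new short[640 * 480];
    DepthImagePixel[] playerOnly = new DepthImagePixel[640 * 480];

    // Per frame: filter by player bits into the reused buffer.
    depthFrame.CopyPixelDataTo(rawDepth);
    for (int i = 0; i < rawDepth.Length; i++)
    {
        int player = rawDepth[i] & DepthImageFrame.PlayerIndexBitmask;
        if (player != 0)
        {
            playerOnly[i].Depth =
                (short)(rawDepth[i] >> DepthImageFrame.PlayerIndexBitmaskWidth);
            playerOnly[i].PlayerIndex = (short)player;
        }
        else
        {
            playerOnly[i].Depth = 0;   // out of range for Fusion
            playerOnly[i].PlayerIndex = 0;
        }
    }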


    --Dale

    Tuesday, May 21, 2013 8:46 PM
  • Yes, I understand it is not possible to get the player pixels without looping. So for now I have tested the same code with 12 GB of RAM, and it runs fast without any stuck frames.

    Now I am stuck on getting the DepthImagePixels of the head alone using the above logic.

    I matched every pixel of the depth frame against the skeleton's depth-image point for the head, but I am not able to isolate the head alone and display it in a 2D image.

    Is there any other way of isolating the head/hand/left shoulder/right shoulder DepthImagePixels from the depth frame?

    Wednesday, May 22, 2013 2:35 PM
  • You have to write your own custom code. Even if you were to use the FaceTracking portion of the SDK, you are still going to write custom code.

    --Dale


    Wednesday, May 22, 2013 7:22 PM
  • Thanks Dale,

    Yes, I wrote some custom code to retrieve the chest alone.

    Using the function below, I get the cloud points for given DepthImagePixels.

    For example, if I pass the head DepthImagePixel, it should give the head point cloud alone. But a single cloud point is not returned from the function below, so what I am doing is just taking some extra points around the head.

    My actual question is how to get the cloud point of the head DepthImagePixel from the bunch of cloud points. How do I obtain the hand/spine/left-shoulder cloud point from the function below?

    _points = GetPointCloud(true);

    public Point3DCollection GetPointCloud(bool lowRes = false)
    {
        // Extract an (optionally low-resolution) mesh from the
        // reconstruction volume and convert its vertices into a
        // Point3DCollection.
        using (var m = _volume.CalculateMesh(lowRes ? _lowResStep : 1))
        {
            return Utils.Point3dFromVertCollection(m.GetVertices());
        }
    }
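    One hedged way to combine the pieces discussed above: rather than expecting an exact coordinate match, take the cloud point nearest to the joint position (this again assumes _points and the skeleton joints have been brought into one coordinate frame; all names are illustrative):

    // Sketch only: nearest cloud point to the head joint, instead of an
    // exact equality test that floating-point data will never satisfy.
    SkeletonPoint head = first.Joints[JointType.Head].Position;

    Point3D nearest = default(Point3D);
    double best = double.MaxValue;
    foreach (Point3D p in _points)
    {
        double dx = p.X - head.X;
        double dy = p.Y - head.Y;
        double dz = p.Z - head.Z;
        double d2 = dx * dx + dy * dy + dz * dz;
        if (d2 < best)
        {
            best = d2;
            nearest = p;
        }
    }
    // nearest now holds the cloud point closest to the head joint.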

    Friday, May 24, 2013 4:05 PM
  • I am not able to get a specific 3D point, such as the head 3D point or the left-shoulder 3D point or any other skeleton point, even though I get the whole mesh of point-cloud points from Kinect Fusion.

    I have been pulling my hair out for a couple of weeks...

    Please give me some ideas.
    Monday, May 27, 2013 12:31 PM
  • Sorry, no. You have to write your own custom code.

    --Dale

    Monday, May 27, 2013 1:54 PM
  • Thanks Dale.

    I think this problem is going to give me a migraine.

    Tuesday, May 28, 2013 6:02 AM
  • Take your time and play with the code.

    Tuesday, May 28, 2013 11:36 AM