Unity Plugin: fixing your "//TODO: fix perf on this call." for CoordinateMapperView depthBuffer.SetData(depthPoints); by using GetDepthFrameToCameraSpaceTable

  • General discussion

  • Hey guys! Brian Chasalow here, checking in from a brief hiatus. I've downloaded the latest SDK version and I'm happy to discover its performance is much improved over previous iterations. However, in looking at the GreenScreen sample, I did discover the following gem that I want to work with you guys to fix:

    CoordinateMapperView:

    //TODO: fix perf on this call.
     depthBuffer.SetData(depthPoints);

    This is slow because you're using MapColorFrameToDepthSpaceUsingIntPtr and passing that giant 1920x1080x2 buffer to a DX11 shader. IIRC, you should instead be able to get the color texture, the depth texture, and the GetDepthFrameToCameraSpaceTable buffer, and write a shader that does the color to depth mapping yourself. I'm not trying to project a 3d point cloud in this case- just to improve the basic 2d depth-based-greenscreening effect. You'll only need to pass in the buffer once, because it doesn't change. Here's my trouble- I need the code to do the alignment in the shader. Here's my pseudocode, please tell me if what I'm doing is correct. I think the offsetCoords line is a little bit incorrect. Halp!

    //edit: this code is very wrong, see further down the thread for more sane code

    float4 color = float4(0, 0, 0, 0);
    float depthAtThisPixelPosition = tex2D(_DepthTex, i.tex.xy).r;
    float2 offsetCoords = depthAtThisPixelPosition * DepthFrameToCameraSpaceTableComputeBuffer[i.tex.xy].xy;
    if (offsetCoords.x > 0.0 && offsetCoords.x < 1.0 && offsetCoords.y > 0.0 && offsetCoords.y < 1.0)
    {
        color = tex2D(_MainTex, offsetCoords);
    }
    return color;
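
    To make the "only pass the buffer in once" part concrete, roughly what I have in mind on the C# side is this (a sketch only; the class and material names are placeholders, and it assumes the sensor is already open and the table comes back valid):

        using UnityEngine;
        using Windows.Kinect;

        public class DepthTableUploader : MonoBehaviour
        {
            public Material greenScreenMaterial;   // material running the green-screen shader
            private ComputeBuffer tableBuffer;

            void Start()
            {
                KinectSensor sensor = KinectSensor.GetDefault();

                // 512 x 424 entries, one (x, y) pair per depth pixel; the table never changes.
                PointF[] table = sensor.CoordinateMapper.GetDepthFrameToCameraSpaceTable();

                // Flatten to floats so the ComputeBuffer stride is unambiguous (2 floats = 8 bytes).
                float[] flat = new float[table.Length * 2];
                for (int i = 0; i < table.Length; i++)
                {
                    flat[2 * i]     = table[i].X;
                    flat[2 * i + 1] = table[i].Y;
                }

                tableBuffer = new ComputeBuffer(table.Length, sizeof(float) * 2);
                tableBuffer.SetData(flat);   // upload once; never again
                greenScreenMaterial.SetBuffer("DepthFrameToCameraSpaceTableComputeBuffer", tableBuffer);
            }

            void OnDestroy()
            {
                if (tableBuffer != null) tableBuffer.Release();
            }
        }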

    Much thanks. 






    Wednesday, December 17, 2014 3:51 AM

All replies

  • You might want to have a look at the depth Unity package in my GitHub repo, where we send the data directly to the shader and use it to modify the camera texture.

    https://github.com/carmines/workshop/tree/master/Unity
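
    In general, the pattern on the C# side is something like the following sketch (not the exact code in that repo; DepthToShader and _DepthBuffer are placeholder names): copy each depth frame into a ComputeBuffer and let the shader index into it per pixel.

        using UnityEngine;
        using Windows.Kinect;

        public class DepthToShader : MonoBehaviour
        {
            public Material targetMaterial;        // material whose shader reads the depth data
            private KinectSensor _sensor;
            private DepthFrameReader _reader;
            private ushort[] _depthData;
            private float[] _depthFloats;
            private ComputeBuffer _depthBuffer;

            void Start()
            {
                _sensor = KinectSensor.GetDefault();
                _reader = _sensor.DepthFrameSource.OpenReader();
                var desc = _sensor.DepthFrameSource.FrameDescription;      // 512 x 424
                _depthData = new ushort[desc.LengthInPixels];
                _depthFloats = new float[desc.LengthInPixels];
                _depthBuffer = new ComputeBuffer((int)desc.LengthInPixels, sizeof(float));
                targetMaterial.SetBuffer("_DepthBuffer", _depthBuffer);
                if (!_sensor.IsOpen) _sensor.Open();
            }

            void Update()
            {
                using (DepthFrame frame = _reader.AcquireLatestFrame())
                {
                    if (frame == null) return;
                    frame.CopyFrameDataToArray(_depthData);
                    // Widen to float so the shader can read it as a StructuredBuffer<float>.
                    for (int i = 0; i < _depthData.Length; i++)
                        _depthFloats[i] = _depthData[i];                   // depth in millimeters
                    _depthBuffer.SetData(_depthFloats);
                }
            }

            void OnDestroy()
            {
                if (_depthBuffer != null) _depthBuffer.Release();
                if (_reader != null) _reader.Dispose();
            }
        }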


    Carmine Sirignano - MSFT

    Wednesday, December 17, 2014 6:11 PM
  • While that has a lot of useful info, it doesn't use the DepthFrameToCameraSpaceTable at all.

    Let's start with some known-to-be-valid shader code for projecting vertices in 3D at their proper positions:

    vDepth = texture( uDepth, ciTexCoord0 ).r;
    pos.xy = vDepth * texture( uDepthToCameraSpace, ciTexCoord0 ).rg; //or texelFetch
    pos.z = vDepth;

    This says that:

    line 1) in a vertex shader, you fetch the depth value,

    line 2) adjust the x/y vertex positions to be the depth * the DepthToCameraSpaceTable value (or the same table sampled from a texture, as shown above),

    line 3) set the z of the vertex to the depth as usual.

    Now, I want to do something similar in a pixel shader in 2d for the greenscreen effect. Logic would dictate that I would need to adjust the X/Y UV coords instead of the vertex positions. How to get there from the DepthToCameraSpaceTable is what I'm having trouble nailing down exactly.

    So, more succinctly: are the values returned from vDepth * DepthToCameraSpaceTable[i].xy == the depth-space coordinates that _pCoordinateMapper.MapColorFrameToDepthSpaceUsingIntPtr writes into pDepthCoordinatesData?

    I think the answer is no, because the DepthToCameraSpaceTable IIRC is 512x424x2, while pDepthCoordinatesData is 1920x1080x2. Perhaps I need to do something along the lines of:

        int colorWidth = (int)(i.tex.x * 512.0); //this was 1920 in the original code
        int colorHeight = (int)(i.tex.y * 424.0); //this was 1080 in the original code
        int colorIndex = colorWidth + colorHeight * 512; //this was 1920 in the original code

        float vDepth = _DepthTexture.Sample(SampleType, i.tex).r; //new code
        float2 depthCoordinates = vDepth * DepthToCameraTableBuffer[colorIndex].xy; //new code
        o = float4(0, 1, 0, 1); //default to green

        if ((!isinf(depthCoordinates.x) && !isnan(depthCoordinates.x) && depthCoordinates.x != 0) ||
            (!isinf(depthCoordinates.y) && !isnan(depthCoordinates.y) && depthCoordinates.y != 0))
        {
            // We have valid depth data coordinates from our coordinate mapper. Find player mask from corresponding depth points.
            float player = bodyIndexBuffer[(int)depthCoordinates.x + (int)(depthCoordinates.y * 512)]; //this was unchanged from the original code
            if (player != 255)
            {
                o = _MainTex.Sample(SampleType, i.tex);
            }
        }

    Hope this was clear.

    Brian











    Thursday, December 18, 2014 6:01 AM
  • Carmine: submitting a bug report here- it would appear that GetDepthFrameToCameraSpaceTable returns bad data in Unity in the public SDK. I see a lot of zeroed values when I pull the table from the CoordinateMapper, and if I write it to a texture it doesn't look quite right compared to the data I get from something like Cinder. This is why none of my GPU-based depth/color mapping code works.

    Can you also confirm this? If you need any more info to debug, glad to help.
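
    For reference, the check I'm doing boils down to something like this (a minimal sketch, not my exact code; it assumes the sensor has already been opened elsewhere):

        using UnityEngine;
        using Windows.Kinect;

        public class TableSanityCheck : MonoBehaviour
        {
            void Start()
            {
                CoordinateMapper mapper = KinectSensor.GetDefault().CoordinateMapper;
                PointF[] table = mapper.GetDepthFrameToCameraSpaceTable();   // should be 512 * 424 entries
                int zeroed = 0;
                for (int i = 0; i < table.Length; i++)
                {
                    if (table[i].X == 0f && table[i].Y == 0f) zeroed++;
                }
                Debug.Log("DepthFrameToCameraSpaceTable: " + table.Length + " entries, " + zeroed + " zeroed");
            }
        }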

    Brian




    Friday, December 19, 2014 5:58 PM
  • I wrote a plugin the other day to grab the DepthFrameToCameraSpaceTable in Unity, and I was able to confirm that my plugin gets good data, as opposed to what the Unity plugin is providing, which is super broken. Please fix that. However, I'd also be happy to continue the discussion about how one might use the table in Unity in a shader if it were theoretically good data ;-)

    Brian Chasalow

    Sunday, December 28, 2014 7:21 AM
  • it would appear that GetDepthFrameToCameraSpaceTable returns bad data in Unity in the public SDK

    I can confirm I'm still seeing this issue 8 months later. Specifically, the first 53 rows of the table contain good-looking data, but everything past that point is zeroes.

    This is different from the behaviour observed outside of Unity, where if the table is accessed before the Kinect is ready, it will return a table full of infinities.

    As a temporary workaround, I've made a pure C# project in Visual Studio which fetches the table and stores it to a file I can read in Unity, but this is far from ideal.
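
    In case it helps anyone else, the workaround is roughly shaped like this (a sketch, not my exact code; the output path and the wait are simplified):

        // Standalone C# console app built against Microsoft.Kinect (not the Unity plugin):
        using System.IO;
        using System.Threading;
        using Microsoft.Kinect;

        class DumpDepthTable
        {
            static void Main()
            {
                KinectSensor sensor = KinectSensor.GetDefault();
                sensor.Open();
                // Crude: give the sensor time to become ready, otherwise the table
                // comes back full of infinities (ideally poll sensor.IsAvailable).
                Thread.Sleep(2000);

                PointF[] table = sensor.CoordinateMapper.GetDepthFrameToCameraSpaceTable();
                using (BinaryWriter writer = new BinaryWriter(File.Create("depthToCameraTable.bin")))
                {
                    foreach (PointF p in table)
                    {
                        writer.Write(p.X);   // 512 * 424 (x, y) float pairs
                        writer.Write(p.Y);
                    }
                }
                sensor.Close();
            }
        }

        // On the Unity side, read the file back into a float[] and upload it, e.g.:
        //   byte[] bytes = File.ReadAllBytes(pathToTableFile);
        //   float[] table = new float[bytes.Length / sizeof(float)];
        //   System.Buffer.BlockCopy(bytes, 0, table, 0, bytes.Length);
        //   tableBuffer.SetData(table);   // ComputeBuffer with count = table.Length / 2, stride = sizeof(float) * 2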

    Brian Chasalow, would you be willing to share the fixed plugin that you wrote? (Or have you already done so?)

    Tuesday, September 1, 2015 12:29 AM
  • Hi there,

    I can confirm that the issue is still not resolved in version KinectForWindows_UnityPro_2.0.1410.



    Sunday, September 10, 2017 5:03 AM