Access violation by NuiFusionDepthToDepthFloatFrame function

  • Question

  • Hi,

    I'm trying to write a simple program that acquires a single depth frame from the Kinect sensor, integrates it into a volume reconstruction, and writes the resulting mesh to an .STL file.

    When I call NuiFusionDepthToDepthFloatFrame, a runtime exception pops up: an access violation reading a memory location.

    I tried removing the immediately preceding line (the call to NuiFusionCreateImageFrame). The exception no longer occurs, but NuiFusionDepthToDepthFloatFrame returns E_INVALIDARG. I checked this error code, but none of the documented causes seems to apply.

    The code is shown below:

    Could anyone please help me out?

        HRESULT hr = S_OK;

        const NUI_IMAGE_FRAME *pFrame;
        HANDLE hs;

        // Initialize the sensor for depth and open a 640x480 depth stream
        hr = NuiInitialize(NUI_INITIALIZE_FLAG_USES_DEPTH);

        hr = NuiImageStreamOpen(NUI_IMAGE_TYPE_DEPTH,
                                NUI_IMAGE_RESOLUTION_640x480,
                                NUI_IMAGE_STREAM_FLAG_SUPPRESS_NO_FRAME_DATA,
                                2,
                                NULL,
                                &hs);

        // Grab a single depth frame and lock its texture
        hr = NuiImageStreamGetNextFrame(hs, 1000, &pFrame);

        INuiFrameTexture *pTexture = pFrame->pFrameTexture;

        NUI_LOCKED_RECT lockedRect;
        hr = pTexture->LockRect(NULL, &lockedRect, NULL, NULL);

        const NUI_DEPTH_IMAGE_PIXEL *pImgPixel =
            reinterpret_cast<const NUI_DEPTH_IMAGE_PIXEL *>(lockedRect.pBits);

        NUI_FUSION_IMAGE_FRAME *pFloatFrame = new NUI_FUSION_IMAGE_FRAME;

        // DepthFloatImage
        hr = NuiFusionCreateImageFrame(NUI_FUSION_IMAGE_TYPE_FLOAT, 640, 480, nullptr, &pFloatFrame);

        // Convert the packed depth pixels to a float depth frame
        hr = NuiFusionDepthToDepthFloatFrame(pImgPixel,
                                             640,
                                             480,
                                             pFloatFrame,
                                             NUI_FUSION_DEFAULT_MINIMUM_DEPTH,
                                             NUI_FUSION_DEFAULT_MAXIMUM_DEPTH,
                                             TRUE);

        NUI_FUSION_IMAGE_FRAME *pPointCloudFrame = new NUI_FUSION_IMAGE_FRAME;

        hr = NuiFusionDepthFloatFrameToPointCloud(pFloatFrame, pPointCloudFrame);

        INuiFusionMesh *pMesh = nullptr;
        INuiFusionReconstruction *pFusion;

        NUI_FUSION_RECONSTRUCTION_PARAMETERS reconstructionParams;
        reconstructionParams.voxelsPerMeter = 256; // 1000mm / 256vpm = ~3.9mm/voxel
        reconstructionParams.voxelCountX = 512;    // 512 / 256vpm = 2m wide reconstruction
        reconstructionParams.voxelCountY = 384;    // Memory = 512*384*512 * 4 bytes per voxel
        reconstructionParams.voxelCountZ = 512;    // Requires 512MB GPU memory

        hr = NuiFusionCreateReconstruction(&reconstructionParams,
                                           NUI_FUSION_RECONSTRUCTION_PROCESSOR_TYPE_CPU,
                                           -1, NULL, &pFusion);

        // Integrate the single float depth frame and extract a mesh
        Matrix4 worldToCameraTransform;
        pFusion->GetCurrentWorldToCameraTransform(&worldToCameraTransform);

        hr = pFusion->IntegrateFrame(pFloatFrame, 1, &worldToCameraTransform);

        hr = pFusion->CalculateMesh(1, &pMesh);

        const Vector3 *pVert;
        hr = pMesh->GetVertices(&pVert);

        // Save the .STL file
        hr = WriteBinarySTLMeshFile(pMesh, L"C:\\Users\\Jose\\Desktop\\mesh_test.stl", true);

        hr = pFrame->pFrameTexture->UnlockRect(0);

        NuiImageStreamReleaseFrame(hs, pFrame);

        NuiShutdown();

    Wednesday, June 19, 2013 12:49 PM

All replies

  • Are you trying to process only one frame before using the CalculateMesh function? Kinect Fusion requires that a number of depth frames be processed before a volume can be generated. The number of frames required depends on the depth area you are scanning. Typically, the more "edges" that can be detected, the better the results.

    Take note of the Processing Pipeline
    http://msdn.microsoft.com/en-us/library/dn188670.aspx

    There is a CKinectFusionExplorer class in the Kinect Fusion Explorer sample that you can leverage to get started; a rough sketch of the per-frame loop follows below.
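
    As a sketch only (not from the sample; pVolume, pDepthFloatFrame, and pDepthPixels are placeholder names for a volume created with NuiFusionCreateReconstruction, a frame from NuiFusionCreateImageFrame, and a 640x480 NUI_DEPTH_IMAGE_PIXEL buffer), the per-frame loop would look something like this:

        // Sketch only: integrate a number of depth frames before meshing
        HRESULT hr = S_OK;
        Matrix4 worldToCameraTransform;
        pVolume->GetCurrentWorldToCameraTransform(&worldToCameraTransform);

        for (int i = 0; i < 100; ++i) // placeholder count; depends on the scene
        {
            // ... acquire the next depth frame into pDepthPixels ...

            hr = NuiFusionDepthToDepthFloatFrame(pDepthPixels, 640, 480,
                                                 pDepthFloatFrame,
                                                 NUI_FUSION_DEFAULT_MINIMUM_DEPTH,
                                                 NUI_FUSION_DEFAULT_MAXIMUM_DEPTH,
                                                 TRUE);

            // ProcessFrame performs camera tracking and integration in one call
            hr = pVolume->ProcessFrame(pDepthFloatFrame,
                                       7,   // max align iteration count
                                       200, // max integration weight
                                       &worldToCameraTransform);

            if (SUCCEEDED(hr))
            {
                pVolume->GetCurrentWorldToCameraTransform(&worldToCameraTransform);
            }
        }

        INuiFusionMesh *pMesh = nullptr;
        hr = pVolume->CalculateMesh(1, &pMesh); // voxel step 1 = full resolution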

    Wednesday, June 19, 2013 4:49 PM
  • Hi Carmine Si,

    Thank you very much for the reply. I read the link about the processing pipeline, but:

    Even if I am not able to generate a reconstruction volume, I should at least be able to convert a single frame from packed USHORT depth to float. The error with the NuiFusionDepthToDepthFloatFrame function still persists. Based on my code, what could be causing it?


    Wednesday, June 19, 2013 10:51 PM
  • Have a look at how you are creating the NUI_FUSION_IMAGE_FRAME. You shouldn't "new" one up and then call NuiFusionCreateImageFrame (see the allocation sketch after the sample code below). Have a look at the KinectFusionProcessor or the Kinect Fusion Basics-D2D sample for the minimum of what you need to do to pass data to Kinect Fusion.

    It also looks like you forgot a step in getting the INuiFrameTexture; from the KinectFusionBasics-D2D sample:

        HRESULT CKinectFusionBasics::CopyExtendedDepth(NUI_IMAGE_FRAME &imageFrame)
        {
            HRESULT hr = S_OK;

            INuiFrameTexture *extendedDepthTex = nullptr;

            // Extract the extended depth in NUI_DEPTH_IMAGE_PIXEL format from the frame
            BOOL nearModeOperational = FALSE;
            hr = m_pNuiSensor->NuiImageFrameGetDepthImagePixelFrameTexture(m_pDepthStreamHandle, &imageFrame, &nearModeOperational, &extendedDepthTex);
            if (FAILED(hr))
            {
                SetStatusMessage(L"Error getting extended depth texture.");
                return hr;
            }

            NUI_LOCKED_RECT extendedDepthLockedRect;

            // Lock the frame data to access the un-clamped NUI_DEPTH_IMAGE_PIXELs
            hr = extendedDepthTex->LockRect(0, &extendedDepthLockedRect, nullptr, 0);
            if (FAILED(hr) || extendedDepthLockedRect.Pitch == 0)
            {
                SetStatusMessage(L"Error getting extended depth texture pixels.");
                return hr;
            }

            // Copy the depth pixels so we can return the image frame
            errno_t err = memcpy_s(m_pDepthImagePixelBuffer, m_cImageSize * sizeof(NUI_DEPTH_IMAGE_PIXEL), extendedDepthLockedRect.pBits, extendedDepthTex->BufferLen());

            extendedDepthTex->UnlockRect(0);

            return (0 == err) ? hr : E_FAIL;
        }
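
    On the first point, a minimal sketch of the intended allocation pattern: NuiFusionCreateImageFrame allocates the NUI_FUSION_IMAGE_FRAME for you, so you only pass the address of a null pointer and later release the frame through the SDK rather than deleting it:

        // Sketch: let the SDK allocate the image frame instead of new-ing it
        NUI_FUSION_IMAGE_FRAME *pDepthFloatFrame = nullptr;
        HRESULT hr = NuiFusionCreateImageFrame(NUI_FUSION_IMAGE_TYPE_FLOAT,
                                               640, 480, nullptr,
                                               &pDepthFloatFrame);
        if (FAILED(hr))
        {
            return hr;
        }

        // ... pass pDepthFloatFrame to NuiFusionDepthToDepthFloatFrame, etc. ...

        // Release through the SDK, not delete
        NuiFusionReleaseImageFrame(pDepthFloatFrame);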
    Thursday, June 20, 2013 1:02 AM
  • Thank you again, Carmine Si!

    I followed the steps of the Kinect Fusion Basics-D2D sample. Now I'm able to get a frame converted to a point cloud. As you said, meshing a volume requires a minimum number of frames... and it really does not work for my single-frame test! I read about the processing pipeline, but I couldn't find more details about these minimum requirements. You said it depends on the area I'm scanning. As I'm not interested in the coherence and quality of the meshing (but much more in the resulting .stl output itself as a benchmark), isn't there a way of tricking the volume reconstruction into meshing merely a couple of points? Where can I get more detailed information about this?

    Saturday, June 22, 2013 3:47 PM
  • Kinect Fusion's ability to generate mesh data (vertices/indices) comes from the pipeline. I posted the minimum steps required for sending data to Kinect Fusion (http://social.msdn.microsoft.com/Forums/en-US/3abb2f48-46ef-45f4-af67-95612adc67f2/getvertices-kinect-fusion). We do not comment on internal implementation details of the feature, but if you want to understand the research behind it, it is published by Microsoft Research:

    http://research.microsoft.com/en-us/projects/surfacerecon/
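
    Once CalculateMesh succeeds, reading the generated data back from the INuiFusionMesh looks roughly like this (a sketch; pVolume is assumed to be the reconstruction volume):

        // Sketch: retrieve vertices and triangle indices from the mesh
        INuiFusionMesh *pMesh = nullptr;
        HRESULT hr = pVolume->CalculateMesh(1, &pMesh);
        if (SUCCEEDED(hr))
        {
            const Vector3 *pVertices = nullptr;
            const int *pIndices = nullptr;

            hr = pMesh->GetVertices(&pVertices);
            hr = pMesh->GetTriangleIndices(&pIndices);

            UINT vertexCount = pMesh->VertexCount();
            UINT indexCount  = pMesh->TriangleVertexIndexCount();

            // ... write vertexCount vertices / indexCount indices to a file ...

            pMesh->Release();
        }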

    Monday, June 24, 2013 5:34 PM