Error when doing in-place processing with a Media Foundation Transform in WinRT


  • From MSDN, I found that I can process frames in place by making a few modifications:

    1. Setting the "MFT_INPUT_STREAM_PROCESS_IN_PLACE" flag in the "dwFlags" member of the "MFT_INPUT_STREAM_INFO" structure returned by "GetInputStreamInfo"

    2. Setting the "MFT_OUTPUT_STREAM_PROVIDES_SAMPLES" flag in the "dwFlags" member of the "MFT_OUTPUT_STREAM_INFO" structure returned by "GetOutputStreamInfo"

    3. Assigning the output sample pointer from the input sample.
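
    For reference, the three steps above might look roughly like this in a Grayscale-style MFT (a sketch only; the class name CMyTransform and the member m_pSample are illustrative, and locking and error handling are omitted):

        // Step 1: advertise in-place processing on the input stream.
        HRESULT CMyTransform::GetInputStreamInfo(
            DWORD dwInputStreamID, MFT_INPUT_STREAM_INFO *pStreamInfo)
        {
            pStreamInfo->dwFlags =
                MFT_INPUT_STREAM_WHOLE_SAMPLES |
                MFT_INPUT_STREAM_SINGLE_SAMPLE_PER_BUFFER |
                MFT_INPUT_STREAM_PROCESS_IN_PLACE;   // step 1
            pStreamInfo->hnsMaxLatency = 0;
            pStreamInfo->cbSize = 0;
            pStreamInfo->cbMaxLookahead = 0;
            pStreamInfo->cbAlignment = 0;
            return S_OK;
        }

        // Step 2: tell the client the MFT supplies its own output samples.
        HRESULT CMyTransform::GetOutputStreamInfo(
            DWORD dwOutputStreamID, MFT_OUTPUT_STREAM_INFO *pStreamInfo)
        {
            pStreamInfo->dwFlags =
                MFT_OUTPUT_STREAM_WHOLE_SAMPLES |
                MFT_OUTPUT_STREAM_PROVIDES_SAMPLES;  // step 2
            pStreamInfo->cbSize = 0;
            pStreamInfo->cbAlignment = 0;
            return S_OK;
        }

        // Step 3: in ProcessOutput, hand the stored input sample back out.
        HRESULT CMyTransform::ProcessOutput(
            DWORD dwFlags, DWORD cOutputBufferCount,
            MFT_OUTPUT_DATA_BUFFER *pOutputSamples, DWORD *pdwStatus)
        {
            if (m_pSample == NULL)
                return MF_E_TRANSFORM_NEED_MORE_INPUT;

            // The buffer was modified in place; return the same sample.
            pOutputSamples[0].pSample = m_pSample;  // transfers our reference
            m_pSample = NULL;
            *pdwStatus = 0;
            return S_OK;
        }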

    But I encounter a problem. This method works fine if I insert my MFT component into the capture stream through my application's addEffectAsync function.

    But for video playback, if I insert my MFT component through msInsertVideoEffect, it causes an error during playback:

    0x40080202: WinRT transform error (parameters: 0xC00D3E85, 0x00000000, 0x00000009, 0x037EF294)

    I also found that, among the combinations of the video color spaces "RGB24", "NV12", "YUY2" and the video formats ".mp4", ".wmv", only one combination works: "RGB24" with ".wmv".

    I can't figure out why the capture stream works but video playback doesn't. Furthermore, why does only the combination of "RGB24" and ".wmv" work?

    Also, is there any documentation for interpreting the error codes of WinRT components?

    Tuesday, October 23, 2012 9:56 AM

All replies

  • Hello,

    You can look up the error in "Mferror.h". I don't know why RGB24 and WMV work for you; I actually would not expect them to work. I'm guessing that you are writing a decoder with an input type of mp4 and an output type of RGB24. The problem you are having is that different allocators are needed for compressed and uncompressed data. You can't do an in-place transform on compressed data and then just pass it downstream. You must allow a proper allocator to be negotiated between your output and the downstream topology node.
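
    One way to let the allocator be negotiated, as suggested above, is to stop claiming to provide output samples and let the caller supply them (a sketch only; CMyTransform and m_cbImageSize are illustrative names):

        // Report a conventional (non-in-place) output stream, so the client
        // or downstream node allocates and passes in the output sample via
        // MFT_OUTPUT_DATA_BUFFER::pSample.
        HRESULT CMyTransform::GetOutputStreamInfo(
            DWORD dwOutputStreamID, MFT_OUTPUT_STREAM_INFO *pStreamInfo)
        {
            if (dwOutputStreamID != 0)
                return MF_E_INVALIDSTREAMNUMBER;

            pStreamInfo->dwFlags =
                MFT_OUTPUT_STREAM_WHOLE_SAMPLES |
                MFT_OUTPUT_STREAM_SINGLE_SAMPLE_PER_BUFFER |
                MFT_OUTPUT_STREAM_FIXED_SAMPLE_SIZE;
            // No MFT_OUTPUT_STREAM_PROVIDES_SAMPLES: the caller owns allocation.
            pStreamInfo->cbSize = m_cbImageSize;  // uncompressed frame size
            pStreamInfo->cbAlignment = 0;
            return S_OK;
        }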

    I hope this helps,


    Windows Media SDK Technologies - Microsoft Developer Services - http://blogs.msdn.com/mediasdkstuff/

    Thursday, November 01, 2012 11:30 PM
  • Dear James:

    Thanks for your reply. Regarding my MFT component: it is not a decoder. It is a processing MFT that processes the input frame and passes the frame with the effect applied downstream, like the Grayscale sample in the Media Extensions samples, but with different processing.

    For the problem, I still have several questions.

    1. If this problem is due to "different allocators being needed for compressed and uncompressed data", why does capture work but playback doesn't?

    2. In the playback stream, even if I pass the sample downstream without doing any processing, it still returns the same error. Is the reason that I declared in GetInputStreamInfo that the MFT performs an in-place transform, so I need to pass uncompressed data downstream?

    3. Following question 2, if that statement is right, how can I do an in-place transform without creating a new sample? The original sample buffer is compressed, but downstream requires uncompressed data.

    4. If I don't set the "MFT_INPUT_STREAM_PROCESS_IN_PLACE" flag in GetInputStreamInfo and the "MFT_OUTPUT_STREAM_PROVIDES_SAMPLES" flag in GetOutputStreamInfo, does that mean the sample buffer from the client (media engine) is uncompressed data? Or is the input compressed and the output uncompressed?

    5. Currently I create the sample with these functions:

            CHECK_HR(hr = MFCreateSample(&p_output_sample));
            CHECK_HR(hr = MFCreateAlignedMemoryBuffer(m_cbImageSize, MF_16_BYTE_ALIGNMENT, &pOutput));
            CHECK_HR(hr = p_output_sample->AddBuffer(pOutput));

    Does this allocation method have any problems?
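
    A common addition to that allocation pattern is to set the buffer's current length and copy timing and attributes from the input sample, so the downstream node sees a fully described sample (a sketch; p_input_sample is an assumed variable, and CHECK_HR is the poster's macro):

        CHECK_HR(hr = MFCreateSample(&p_output_sample));
        CHECK_HR(hr = MFCreateAlignedMemoryBuffer(m_cbImageSize, MF_16_BYTE_ALIGNMENT, &pOutput));
        CHECK_HR(hr = pOutput->SetCurrentLength(m_cbImageSize));  // mark buffer as filled
        CHECK_HR(hr = p_output_sample->AddBuffer(pOutput));

        // Propagate metadata from the input sample.
        LONGLONG hnsTime = 0, hnsDuration = 0;
        if (SUCCEEDED(p_input_sample->GetSampleTime(&hnsTime)))
            CHECK_HR(hr = p_output_sample->SetSampleTime(hnsTime));
        if (SUCCEEDED(p_input_sample->GetSampleDuration(&hnsDuration)))
            CHECK_HR(hr = p_output_sample->SetSampleDuration(hnsDuration));
        // IMFSample inherits IMFAttributes, so attributes can be copied wholesale.
        CHECK_HR(hr = p_input_sample->CopyAllItems(p_output_sample));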

    6. For the MFT component, is it legal to switch, after connection, between passing the input sample downstream and creating a new sample for downstream? My implementation needs to pass the sample through when no processing is required and to create a new sample when processing is required, and I may switch between these modes dynamically while streaming.
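
    What I have in mind for question 6 is roughly the following (a hypothetical sketch; m_fBypass and the helper CreateAndProcessOutputSample are illustrative names, not from my real code). Since the MFT keeps MFT_OUTPUT_STREAM_PROVIDES_SAMPLES set, it supplies the output sample in both branches:

        HRESULT CMyTransform::ProcessOutput(
            DWORD dwFlags, DWORD cOutputBufferCount,
            MFT_OUTPUT_DATA_BUFFER *pOutputSamples, DWORD *pdwStatus)
        {
            HRESULT hr = S_OK;
            *pdwStatus = 0;

            if (m_pSample == NULL)
                return MF_E_TRANSFORM_NEED_MORE_INPUT;

            if (m_fBypass)
            {
                // Pass the stored input sample straight through.
                pOutputSamples[0].pSample = m_pSample;  // transfers the reference
            }
            else
            {
                // Allocate a new sample, process into it, release the input.
                IMFSample *pOut = NULL;
                hr = CreateAndProcessOutputSample(m_pSample, &pOut);
                pOutputSamples[0].pSample = pOut;
                m_pSample->Release();
            }
            m_pSample = NULL;
            return hr;
        }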


    Monday, November 05, 2012 2:52 AM