How to get YUV frame out of a Custom MFT

    Question

  • I have looked at the Grayscale MFT.  I can figure out how to get the NV12 frame by looking at the grayscale code for the NV12 format.  What I need is a way to pass this NV12 frame (or blob) up to my Metro app.  Is there any way to expose an interface in the MFT so the Metro app can access the memory where the NV12 frame resides?
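    For context, the layout of the NV12 frame being discussed can be sketched as follows. This is not from the Grayscale sample; it is a minimal, portable illustration assuming the default packed stride of one byte per pixel per row (a real MFT must use the stride reported by the media type or IMF2DBuffer):

    ```cpp
    #include <cstddef>
    #include <cstdint>
    #include <iostream>

    // NV12: a full-resolution Y plane, immediately followed by one
    // interleaved UV plane at half vertical and half horizontal resolution.
    struct Nv12Layout {
        size_t ySize;     // bytes in the Y plane
        size_t uvOffset;  // byte offset of the interleaved UV plane
        size_t totalSize; // total frame size in bytes
    };

    // Assumes a packed stride of `width` bytes per row; real buffers
    // may be padded, so query the actual stride in an MFT.
    Nv12Layout ComputeNv12Layout(uint32_t width, uint32_t height) {
        Nv12Layout l;
        l.ySize = static_cast<size_t>(width) * height;
        l.uvOffset = l.ySize;                 // UV plane starts right after Y
        l.totalSize = l.ySize + l.ySize / 2;  // UV plane is half the Y plane
        return l;
    }

    int main() {
        Nv12Layout l = ComputeNv12Layout(640, 480);
        std::cout << l.ySize << " " << l.uvOffset << " " << l.totalSize << "\n";
        return 0;
    }
    ```

    So a 640x480 NV12 frame occupies width * height * 3 / 2 = 460800 bytes, which is the size of the blob the app would need to receive.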
    • Edited by guntek1971 Thursday, July 19, 2012 8:52 PM
    Thursday, July 19, 2012 7:50 PM

Answers

  • Hello Sanj,

    I will warn you that what you are doing goes against Microsoft best practice and in my opinion should be avoided. You should not use an MFT as a server. I would recommend that you use the Media Engine and supply a bitmap that you can copy to your network stream. Subsequently you could write a custom Sink Writer for the Media Engine and write the data out to the network.

    If you do pursue using an MFT and expect to be able to monitor on the server, you need to make sure not to block the processing thread. This means copying the data out of the media sample and passing it to a different thread for processing. You also need to make sure that all of your processing of media samples occurs within the context of C++ or C++/CX. Sending media samples to C# or VB, i.e. managed code, will lead to extremely bad performance and poor battery life for the hosting device. This is due to a number of different reasons, but mainly because it is extremely expensive to marshal between C++ and the managed environment.
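    The hand-off pattern described above (copy the bytes out of the sample, then process them on another thread) can be sketched with a simple producer/consumer queue. The Media Foundation calls themselves (locking the sample's IMFMediaBuffer, copying, unlocking before returning from ProcessOutput) are Windows-specific; this portable sketch only shows the threading pattern, with the producer standing in for ProcessOutput:

    ```cpp
    #include <condition_variable>
    #include <cstdint>
    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    // A copied-out frame. In a real MFT you would Lock() the sample's
    // buffer, memcpy the bytes into a vector like this, and Unlock()
    // before returning, so the pipeline thread is never blocked.
    using FrameCopy = std::vector<uint8_t>;

    class FrameQueue {
    public:
        void Push(FrameCopy frame) {
            {
                std::lock_guard<std::mutex> lock(m_);
                q_.push(std::move(frame));
            }
            cv_.notify_one();
        }
        FrameCopy Pop() {
            std::unique_lock<std::mutex> lock(m_);
            cv_.wait(lock, [this] { return !q_.empty(); });
            FrameCopy f = std::move(q_.front());
            q_.pop();
            return f;
        }
    private:
        std::mutex m_;
        std::condition_variable cv_;
        std::queue<FrameCopy> q_;
    };

    int main() {
        FrameQueue queue;
        // Consumer stands in for the network/monitoring thread.
        std::thread consumer([&] {
            FrameCopy f = queue.Pop();
            std::cout << "got frame of " << f.size() << " bytes\n";
        });
        // Producer stands in for ProcessOutput: copy, hand off, return.
        queue.Push(FrameCopy(640 * 480 * 3 / 2)); // one 640x480 NV12 frame
        consumer.join();
        return 0;
    }
    ```

    The important property is that the producer only takes the lock long enough to push the copy; all slow work (sockets, encoding) happens on the consumer thread.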

    I hope this helps,

    James


    Windows Media SDK Technologies - Microsoft Developer Services - http://blogs.msdn.com/mediasdkstuff/

    Saturday, July 28, 2012 12:23 AM
    Moderator

All replies

  • Hello guntek1971,

    You should never pass video frames from an MFT to the controlling application. You should implement all the necessary functionality in the MFT and let the rich compositor handle rendering of any frames.

    If you can explain exactly what you are trying to do I may be able to offer a suggestion as to how to approach your architecture in a more "media friendly" way.

    Thanks,

    James


    Windows Media SDK Technologies - Microsoft Developer Services - http://blogs.msdn.com/mediasdkstuff/

    Wednesday, July 25, 2012 12:58 AM
    Moderator
  • Hi James,

    I am trying to implement a server listening on a specific socket.  When the server gets a request for a frame, the NV12 frame needs to be copied out of the grayscale MFT and sent back to the client by the server.  I have looked at the real-time communication sample.  It seems very complicated, so it looks like I will have to implement the socket code in the MFT.  Do you have any other suggestions on how this can be done?

    Thanks

    Sanj

    Thursday, July 26, 2012 6:36 PM