Rendering to an NV12 texture

  • Question

  • This turned out to be a fairly long and in-depth question, but please bear with me. Also, excuse the Wikipedia-style links; apparently I'm not allowed to post hyperlinks since my account isn't verified.

    I stumbled upon this answer[1] on Stack Overflow a while ago, and I've been trying to recreate the process which he describes in order to do screen capture straight to an H.264 compressed video file.

    I have managed to get the whole Duplication-to-D3D11-to-MMF pipeline working with WMV3 without any problems, but I'm struggling to get H.264 working.

    According to the Media Foundation documentation[2], using H.264 as the output format requires some kind of YUV input format. I'm using NV12, since it seemed to be the most common one. According to the DXGI documentation[3], NV12 needs to be aliased with R8 for Y and R8G8 for UV, so I've created two render target views with those two formats, both viewing the NV12 target texture I've created. I then set the R8 view as a render target on my context and draw a fullscreen quad using a pixel shader that converts my input texture (the one copied from the Duplication API) from RGB to YUV[4] and discards the UV channels. I do the same for the R8G8 view, but with a pixel shader that discards the Y channel instead.
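    For reference, the math those two pixel shaders would implement can be sketched in plain C++. This is a hedged sketch, not the poster's actual shader code: it assumes BT.601 limited-range coefficients (a common default for NV12, but the exact matrix and range must match what the H.264 encoder expects).

```cpp
#include <cassert>
#include <cstdint>

// BT.601 limited-range RGB -> YUV conversion (an assumption; BT.709 or
// full-range variants use different coefficients). The Y pass would write
// only y to the R8 view; the UV pass would write (u, v) to the R8G8 view
// at half resolution.
struct Yuv { uint8_t y, u, v; };

static uint8_t Clamp8(float v) {
    // Round to nearest and clamp to the 8-bit range.
    return static_cast<uint8_t>(v < 0.f ? 0.f : (v > 255.f ? 255.f : v + 0.5f));
}

Yuv RgbToYuv601(uint8_t r, uint8_t g, uint8_t b) {
    float y =  0.257f * r + 0.504f * g + 0.098f * b + 16.f;
    float u = -0.148f * r - 0.291f * g + 0.439f * b + 128.f;
    float v =  0.439f * r - 0.368f * g - 0.071f * b + 128.f;
    return { Clamp8(y), Clamp8(u), Clamp8(v) };
}
```

    In limited range, black maps to Y = 16 and white to Y = 235, with U and V centered at 128 for any grey.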

    This all seemed like it made sense, and I expected to get some nice H.264 compressed output in the end. What happens instead is that my video driver (latest Nvidia) crashes and my D3D device throws DXGI_ERROR_DEVICE_HUNG. I figured I had run into a driver bug, so I tried using WARP instead. Since the Duplication API doesn't work with WARP, I went with just a static BGRA texture. I didn't get any crashes with WARP, but the output is a flickering mess, at which point I gave up.

    So, my actual question is a multi-layered one. Can I even render to an NV12 texture? The DXGI documentation does say that RTV is a supported view type. If I can, is it correct to do so in two passes as described above? If I can't, is there any other way to get a BGRA texture into an IMFSinkWriter that outputs H.264?
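    As background for why the R8/R8G8 aliasing works at all: NV12 stores a full-resolution plane of 8-bit Y samples followed by a half-resolution plane of interleaved U/V pairs. A minimal sketch of that layout arithmetic (my own illustration, not code from the thread):

```cpp
#include <cassert>
#include <cstddef>

// NV12 layout: a width x height plane of 8-bit Y samples, followed by a
// (width/2) x (height/2) plane of interleaved 8-bit U+V pairs. This is
// why DXGI can alias the texture as R8 (Y plane) and R8G8 (UV plane).
// Width and height are assumed even, as NV12 requires.
struct Nv12Layout {
    size_t ySize;   // bytes in the Y plane
    size_t uvSize;  // bytes in the interleaved UV plane
    size_t total;   // total buffer size: 1.5 bytes per pixel
};

Nv12Layout Nv12SizeOf(size_t width, size_t height) {
    size_t ySize  = width * height;                  // one Y byte per pixel
    size_t uvSize = (width / 2) * (height / 2) * 2;  // one U+V pair per 2x2 block
    return { ySize, uvSize, ySize + uvSize };
}
```

    For a 1920x1080 frame this gives 2,073,600 bytes of Y plus 1,036,800 bytes of UV, i.e. the familiar 1.5 bytes per pixel.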

    I'll happily provide a code sample if it would make it easier to understand what I'm going on about.

    [1]: http://stackoverflow.com/a/22219146
    [2]: http://msdn.microsoft.com/en-us/library/windows/desktop/dd797816(v=vs.85).aspx
    [3]: http://msdn.microsoft.com/en-us/library/windows/desktop/bb173059(v=vs.85).aspx#DXGI_FORMAT_NV12
    [4]: http://www.pcmag.com/encyclopedia/term/55166/yuv-rgb-conversion-formulas

    Wednesday, October 22, 2014 7:45 PM

All replies

  • Hi Ashen,

    I am trying to involve someone familiar with this issue in this thread. Thank you for your understanding.



    Thursday, October 23, 2014 7:27 AM
  • Shameless bump. I would greatly appreciate any help I could get with this. :)
    Monday, November 3, 2014 6:17 PM
  • Hi,

    I've been stuck on the same problem for a while. Did you happen to find an answer that resolves it? Thanks.

    The first link in your reference list is a post from someone at MSFT claiming it works, but I don't have enough reputation to post a comment there and ask for sample code. Do you have the source code from "zhuman"?

    Tuesday, January 13, 2015 11:53 AM