Concatenating and transcoding files using MFCreateAggregateSource?

  • Question


    Hi-

     

    I'm modifying an application that transcodes media files from one format to another (it was inspired by the Transcode sample in the SDK) so that it also concatenates media files together.  At first I thought I'd have to write my own media source or set up a SequencerSource, so I was pleased when I found the MFCreateAggregateSource function; it looks like it eliminates a bunch of the work either of those options would require.

     

    Anyway, I adapted my app to use it, but when I run it I get errors and it doesn't work: it writes the first segment (the first embedded source) to the output file, but then it errors out.

    Any ideas?  Am I missing a step?

    thanks 

    Mike Kohout

     

    Here's how I set up the source:

     

     

     

     

    IMFMediaSource* sequencerSource = NULL;
    IMFCollection* sourceCollection = NULL;
    MFCreateCollection(&sourceCollection);
    
    // add one constituent source per input file (error checking omitted)
    for (int i = 0; i < 1; i++)
    {
        IMFMediaSource* constituentSrc = NULL;
        CreateMediaSource(sInputFile, &constituentSrc);
        sourceCollection->AddElement(constituentSrc);
    }
    
    MFCreateAggregateSource(sourceCollection, &sequencerSource);
    CTranscoder transcoder(audioType, videoType, containerType, sequencerSource);  // my transcoder, based on the SDK sample
    
    ....//set some video/audio attributes that are attached to the TranscodeProfile 
    
    transcoder.EncodeToFile(sOutputFile);

    Inside EncodeToFile, it creates the topology without error:

     

     

     hr=MFCreateTranscodeTopology( m_pSource, sURL, m_pProfile, &m_pTopology );
     hr = m_pSession->SetTopology(0, m_pTopology);

    and then calls Transcode

    HRESULT CTranscoder::Transcode()
    {
        assert (m_pSession);
        IMFMediaEvent* pEvent = NULL;
        MediaEventType meType = MEUnknown;  // Event type
        HRESULT hr = S_OK;
        HRESULT hrStatus = S_OK;            // Event status
        //Get media session events synchronously
        while (meType != MESessionClosed)
        {
            hr = m_pSession->GetEvent(0, &pEvent);
            if (FAILED(hr)) { break; }
            
          // Get the event type.
            hr = pEvent->GetType(&meType);
            if (FAILED(hr)) { break; }
            hr = pEvent->GetStatus(&hrStatus);
            if (FAILED(hr)) { break; }
            if (FAILED(hrStatus))
            {
                wprintf_s(L"Failed. 0x%X error condition triggered this event.\n", hrStatus);
                hr = hrStatus;
                break;
            }
    
            switch (meType)
            {
            case MESessionTopologySet:
                hr = Start();
                if (SUCCEEDED(hr))
                {
                    wprintf_s(L"Ready to start.\n");
                }
                break;
            case MESessionStarted:
                wprintf_s(L"Started encoding...\n");
                break;
            case MESessionEnded:
                hr = m_pSession->Close();
                if (SUCCEEDED(hr))
                {
                    wprintf_s(L"Finished encoding.\n");
                }
                break;
            case MESessionClosed:
                wprintf_s(L"Output file created.\n");
                break;
            }
            if (FAILED(hr))
            {
                break;
            }
            SafeRelease(&pEvent);
        }
        SafeRelease(&pEvent);
        return hr;
    }

    When I alter Transcode to print out the events it sees, the failed transcode (with a media source generated by MFCreateAggregateSource from more than one constituent source) produces the following events:

     

    MESessionNotifyPresentation
    MESessionCapabilitiesChanged
    MESessionTopologyStatus
    MESessionStarted

     

    and (now, surprisingly) an error: "Failed. 0x80070057 error condition triggered this event."  (0x80070057 is E_INVALIDARG.)

    With a media source that works (one generated by MFCreateAggregateSource with only one constituent source) I get the following events:

     

    MESessionNotifyPresentation
    MESessionCapabilitiesChanged
    MESessionTopologyStatus
    MESessionTopologyStatus
    MESessionCapabilitiesChanged
    MESessionStarted
    MEEndOfPresentation
    MESessionTopologyStatus
    MESessionCapabilitiesChanged
    MESessionEnded
    MESessionTopologyStatus
    MESessionCapabilitiesChanged
    MESessionEnded
    MESessionCapabilitiesChanged
    MESessionClosed

     

    The behavior of the application changes depending on the media files used as sources: if two MOVs are provided as input, it errors out as described above.  If it's two WMVs, it just hangs.

     

    Running mftrace on the resulting exe with the two WMVs writes out a bunch of stuff, but among it is this, which seems to repeat until the process is killed:

     

     

    3960,C54 20:31:20.67373 CMFTransformDetours::ProcessOutput @000000000026FBC0 failed hr=0xC00D6D72 MF_E_TRANSFORM_NEED_MORE_INPUT
    
    
    Perhaps the source built by MFCreateAggregateSource isn't advancing onto its next internal source or something?

     

    • Edited by Michael Kohout Wednesday, November 9, 2011 8:43 PM more info, output from mftrace
    Wednesday, November 9, 2011 4:45 PM

All replies

  • Hi All-

     

    Since I haven't had any luck with this task, I've tried to simplify my app until it works, but it's still failing.  I've rewritten pretty much everything into main and a couple of helper functions.  I'm feeding an AVI file into the application and setting it to output a WMV, and the app is still failing while resolving a partial topology.

     

    The error I'm getting is "0xc00d36b4", which through some minor bing-fu I've discovered is MF_E_INVALIDMEDIATYPE: "The data specified for the media type is invalid, inconsistent, or not supported by this object".
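For reference, the raw HRESULTs that keep coming up in this thread map to named codes in winerror.h and mferror.h. A small portable lookup table (just a decoding aid, not part of the app; values as documented in the headers):

```cpp
#include <cstdint>
#include <string>

// Map the HRESULTs seen in this thread to their symbolic names
// (values as documented in winerror.h and mferror.h).
std::string HresultName(uint32_t hr)
{
    switch (hr)
    {
    case 0x80070057: return "E_INVALIDARG";
    case 0xC00D36B4: return "MF_E_INVALIDMEDIATYPE";
    case 0xC00D36FA: return "MF_E_CANNOT_CREATE_SINK";
    case 0xC00D6D72: return "MF_E_TRANSFORM_NEED_MORE_INPUT";
    default:         return "unknown";
    }
}
```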

     

    The application is a bit crude (it doesn't close things nicely), but it seems to me that it should work, and I've been racking my brain for a couple of days now about what my error could be.  Below is the complete source code; I'm running this on Win7 Ultimate SP1 (with the 7.1 SDK installed) with VS2010.

     

    Does anyone out there have a tip?

     

     

    #include "Common.h"
    #include <mfapi.h>
    #include <mfidl.h>
    #include <mferror.h>
    
    #pragma warning( disable:4100)
    #pragma warning( disable:4189)
    
    
    HRESULT CreateMediaSource(PCWSTR sURL, IMFMediaSource **pSource)
    {
        HRESULT hr = S_OK;
        MF_OBJECT_TYPE objectType = MF_OBJECT_INVALID;
        CComPtr<IMFSourceResolver> pSourceResolver = NULL;
       
        do
        {
            // Create the source resolver.
            hr = MFCreateSourceResolver(&pSourceResolver);
            BREAK_ON_FAIL(hr);
    
            // Use the synchronous source resolver to create the media source.
            hr = pSourceResolver->CreateObjectFromURL(
                sURL,                       // URL of the source.
                MF_RESOLUTION_MEDIASOURCE | 
                    MF_RESOLUTION_CONTENT_DOES_NOT_HAVE_TO_MATCH_EXTENSION_OR_MIME_TYPE,  
                                            // indicate that we want a source object, and 
                                            // pass in optional source search parameters
                NULL,                       // Optional property store for extra parameters
                &objectType,                // Receives the created object type.
                (IUnknown**)pSource         // Receives a pointer to the media source.
                );
            BREAK_ON_FAIL(hr);
    
            // Make sure we actually received a media source object.
    
            BREAK_ON_NULL(*pSource, E_NOINTERFACE);
        }
        while(false);
    
        return hr;
    }
    
    
    HRESULT CreateTranscodeTopology(LPCWSTR outFile, IMFTranscodeProfile* profile,	IMFMediaSource* src, IMFTopology ** top)
    {
        HRESULT hr = S_OK;
        
        do
        {
            // standard media source creation
            BREAK_ON_FAIL(hr);
    
           
            // create the actual transcode topology based on the transcode profile
            hr = MFCreateTranscodeTopology(
                src,                      // the source of the content to transcode
                outFile,                      // output filename
                profile,            // transcode profile to use
               top);                  // resulting topology
            BREAK_ON_FAIL(hr);
        }
        while(false);
    
        return hr;
    }
    
    int wmain(int argc, wchar_t* argv[])
    {
    	HeapSetInformation(NULL, HeapEnableTerminationOnCorruption, NULL, 0);
    
    	HRESULT hr = S_OK;
    	
    	hr = CoInitializeEx(NULL, COINIT_APARTMENTTHREADED | COINIT_DISABLE_OLE1DDE);
    
    	if (SUCCEEDED(hr))
    	{
    		hr = MFStartup(MF_VERSION);
    	}
    
    
    	IMFMediaSession* session = NULL;
    	MFCreateMediaSession(NULL, &session);
    	//CTranscodeApiTopoBuilder builder = CTranscodeApiTopoBuilder();
    	IMFMediaSource* source = NULL;
    	hr = CreateMediaSource(L"C:\\Users\\Administrator\\Desktop\\puffs.avi", &source);
    
    	IMFPresentationDescriptor* sourcePresentation = NULL;
    	hr= source->CreatePresentationDescriptor(&sourcePresentation);
    	//for each stream in source presentation descriptor, generate a source node for it and add them to partial topology
    	CComQIPtr<IMFPresentationDescriptor> sourcePresentationDesc;
    	hr= source->CreatePresentationDescriptor(&sourcePresentationDesc);
    
    	IMFTranscodeProfile* profile = NULL;
    	MFCreateTranscodeProfile(&profile);
    
    	//get a video transcoding attribute
    	
    	//determine the input type...
    	DWORD streamCount = 0;
    	sourcePresentation->GetStreamDescriptorCount(&streamCount);
    	
    	MFT_REGISTER_TYPE_INFO inputVideoMediaType;
    	bool typefound = false;
    	for(DWORD i = 0; i< streamCount; i++){
    		CComQIPtr<IMFStreamDescriptor> stream;
    		BOOL selectedStream = false;
    		CComQIPtr<IMFMediaTypeHandler> streamTypeHandler;
    		CComQIPtr<IMFMediaType> streamType;
    		GUID videoInMajorType=GUID_NULL;
    		hr=sourcePresentation->GetStreamDescriptorByIndex(i,&selectedStream, &stream);
    		hr=stream->GetMediaTypeHandler(&streamTypeHandler);
    		hr=streamTypeHandler->GetCurrentMediaType(&streamType);
    		hr=streamType->GetMajorType(&videoInMajorType);
    		if(selectedStream && (videoInMajorType==MFMediaType_Video)&& !typefound){
    			GUID subType = GUID_NULL;
    			hr=streamType->GetGUID(MF_MT_SUBTYPE, &subType);
    			printf("I'm a selected video.\n");
    			inputVideoMediaType.guidMajorType=videoInMajorType;
    			inputVideoMediaType.guidSubtype = subType;
    			typefound = true;
    		}
    	}
    
    	IMFAttributes* videoAttrs = NULL;
    	hr=MFCreateAttributes(&videoAttrs,10);
    	DWORD flags = (MFT_ENUM_FLAG_ALL & (~MFT_ENUM_FLAG_FIELDOFUSE))   
                    | MFT_ENUM_FLAG_SORTANDFILTER; 
    	flags = MFT_ENUM_FLAG_ALL;
    	MFT_REGISTER_TYPE_INFO outputType;  
    	outputType.guidMajorType = MFMediaType_Video;
        outputType.guidSubtype = MFVideoFormat_WMV3;
    
    	IMFActivate** videoActivationsArr = NULL;
    	UINT32 activationsFound;
    	hr = MFTEnumEx(
                MFT_CATEGORY_VIDEO_ENCODER,         // type of object to find - video encoders
                flags,                              // search flags
                NULL,//&inputVideoMediaType,               // match this input type for an encoder
                &outputType,                        // get encoders with specified output type
                &videoActivationsArr,
                &activationsFound);
    
    	printf("found %u transformers that will handle our output type\n", activationsFound);
    	CComPtr<IMFTransform> pEncoder;
    	
    	hr = videoActivationsArr[0]->ActivateObject(IID_IMFTransform, 
                    (void**)&pEncoder);
    	 CComPtr<IMFMediaType> pType;
    	hr = pEncoder->GetOutputAvailableType(0, 0, &pType);
    	 pType->CopyAllItems(videoAttrs);
    	 hr=profile->SetVideoAttributes(videoAttrs);
    
    	//get audio attributes
    	IMFAttributes* audioAttrs = NULL;
    	hr=MFCreateAttributes(&audioAttrs,10);
    	hr=audioAttrs->SetGUID(MF_MT_SUBTYPE, MFAudioFormat_WMAudioV8);
    	hr=audioAttrs->SetGUID(MF_MT_MAJOR_TYPE,MFMediaType_Audio); 
    
    	CComPtr<IMFCollection> pTypeCollection = NULL;
    	CComPtr<IMFAttributes> resolvedAudioType = NULL;
    	 DWORD dwFlags = 
                (MFT_ENUM_FLAG_ALL & (~MFT_ENUM_FLAG_FIELDOFUSE))   
                | MFT_ENUM_FLAG_SORTANDFILTER;       
    
    	 hr = MFTranscodeGetAudioOutputAvailableTypes(
                MFAudioFormat_WMAudioV8,                      // specify the requested audio format
                dwFlags,                          // get all MFTs except for the FOU, and sort
                NULL,                             // no custom attributes
                &pTypeCollection);           // store result in specified collection
    	hr=pTypeCollection->GetElement(0, (IUnknown**)&resolvedAudioType);
    	hr=resolvedAudioType->CopyAllItems(audioAttrs);
    
    	   UINT64 nFrameRate;
        UINT64 nFrameSize;
        UINT64 nPixelAspectRatio;
        UINT32 nBitRate;
        IMFAttributes *pAttrs = NULL;
    
    
        //CHECK_HR( hr = pAttrs->SetGUID( MF_MT_MAJOR_TYPE, MFMediaType_Video ) );
        //CHECK_HR( hr = pAttrs->SetGUID( MF_MT_SUBTYPE, MFVideoFormat_WMV3 ) );
        //CHECK_HR( hr = pAttrs->SetString( MF_TRANSCODE_ENCODINGPROFILE, L"MP" ) );
        
         hr = pType->GetUINT64( MF_MT_FRAME_RATE, &nFrameRate );
        //CHECK_HR( hr = pAttrs->SetUINT64( MF_MT_FRAME_RATE, nFrameRate ) );
    
       hr = pType->GetUINT64( MF_MT_FRAME_SIZE, &nFrameSize ) ;
        //CHECK_HR( hr = pAttrs->SetUINT64( MF_MT_FRAME_SIZE, nFrameSize ) );
     
        hr = pType->GetUINT32( MF_MT_AVG_BITRATE, &nBitRate );
        //CHECK_HR( hr = pAttrs->SetUINT32( MF_MT_AVG_BITRATE, nBitRate ) );
    
         hr = pType->GetUINT64( MF_MT_PIXEL_ASPECT_RATIO, &nPixelAspectRatio ) ;
        //CHECK_HR( hr = pAttrs->SetUINT64( MF_MT_PIXEL_ASPECT_RATIO, nPixelAspectRatio ) ); 
    
    
    	hr=profile->SetAudioAttributes(audioAttrs);
    
    
    	IMFAttributes* containerAttrs = NULL;
    	hr=MFCreateAttributes(&containerAttrs,1);
    	hr=containerAttrs->SetGUID(MF_TRANSCODE_CONTAINERTYPE, MFTranscodeContainerType_ASF);
    	hr=profile->SetContainerAttributes(containerAttrs);
    
    	IMFTopology* topo = NULL;
    	CreateTranscodeTopology(L"C:\\Users\\Administrator\\Desktop\\topobuilder_ipod_sample.wmv", profile, source, &topo);
    
    	WORD nodeCount=0;
    	hr=topo->GetNodeCount(&nodeCount);
    	for(WORD i = 0; i< nodeCount; i++){
    		
    		IMFTopologyNode* node = NULL;
    		MF_TOPOLOGY_TYPE nodeType = MF_TOPOLOGY_TRANSFORM_NODE;
    		hr=topo->GetNode(i, &node);
    		hr=node->GetNodeType(&nodeType);
    
    		if(nodeType == MF_TOPOLOGY_TRANSFORM_NODE)
    		{
    			printf("transform node\n");
    			//don't think we have to do anything here...
    		}else if(nodeType== MF_TOPOLOGY_SOURCESTREAM_NODE){
    			printf("sourcestream node\n");
    			//set the start and stop times for the stream(if you want to create a clip)
    		}else if(nodeType == MF_TOPOLOGY_OUTPUT_NODE){
    			printf("output node\n");
    			//set this node to always record...(see Cut.cpp, I think)
    		}else if(nodeType == MF_TOPOLOGY_TEE_NODE){
    			printf("topology tee node\n");
    		}else if(nodeType == MF_TOPOLOGY_MAX){
    			printf("topology max node\n");
    		}else{
    			printf("don't know what type of node this is...\n");
    		}
    
    	}
    
    	hr= session->SetTopology(MFSESSION_SETTOPOLOGY_IMMEDIATE,topo);
    	
    
    	PROPVARIANT varStart;
        PropVariantInit(&varStart);
    
        hr = session->Start(&GUID_NULL, &varStart);
    	int videoFileCount = 1;
    	while(true){
    			//list o events we may care about:  http://msdn.microsoft.com/en-us/library/windows/desktop/ms703967(v=vs.85).aspx
    			IMFMediaEvent* e = NULL;
    			GUID extendedEventType = GUID_NULL;
    			MediaEventType eventType;
    			//session->BeginGetEvent((IMFAsyncCallback*)&eventListener, (IUnknown*)NULL);
    			hr=session->GetEvent(0,&e);
    			hr = e->GetType(&eventType);
    			hr = e->GetExtendedType(&extendedEventType);
    			if(eventType == MESessionEnded  && videoFileCount == 0){
    				hr=session->Close();
    				printf("session ended & no more files to encode\n");
    			}else if(eventType == MESessionEnded && videoFileCount > 0){
    				hr=session->Close();
    				printf("session ended & still more files to encode.\n");
    				printf("send MEEndOfStream event for each stream in now-empty presentation\n");
    				printf("send MEEndofPresentation event\n");
    			    printf("Build a new presentation Descriptor(P167) and send the MENewPresentation event\n");
    				videoFileCount = videoFileCount -1;
    				session->QueueEvent(MESourceStarted, GUID_NULL, S_OK, NULL);  
    			}else if(eventType == MESessionClosed){
    				printf("closed session\n");
    				break;
    			}else if(eventType == MESessionTopologySet)
    			{
    				HRESULT eventHR = S_OK;
    				// get the result of the topology set operation
    				hr = e->GetStatus(&eventHR);
    				BREAK_ON_FAIL(hr);
    
    				// if topology resolution failed, then the returned HR will indicate that
    				if(! (S_OK ==eventHR))
    				{
    					// store the failure for future reference
    					//m_sessionResult = topologySetStatusHr;
    
    					
    					if(eventHR == 0xc00d36fa)
    						//http://social.msdn.microsoft.com/Forums/en/mediafoundationdevelopment/thread/7e45d261-8f1e-4dac-b5c1-a5dd69e08e01
    						printf("ugh oh.  failed to create mediasink\n");
    					else if(eventHR == 0xc00d36b4)
    						//http://blog.firefly-vj.net/2009/09/media-foundationhresult.html
    						printf("The data specified for the media type is invalid, inconsistent, or not supported by this object.\n");
    					
    					session->Close();
    					break;
    				}
    			}else if(eventType == MEError){
    				HRESULT eventHR = S_OK;
    				e->GetStatus(&eventHR);
    				printf("some error: 0x%X\n", eventHR);
    			}else if(eventType == MESessionStarted) {
    				printf("session started\n");
    			}else if(eventType == MESessionTopologiesCleared){
    				printf("session topology cleared\n");
    			}else{
    				printf("some other event\n");
    				session->QueueEvent(MEError,GUID_NULL, S_OK,NULL);
    			}
    
    	}
    
    	MFShutdown();
        CoUninitialize();
    }
    

     


    thanks again,

    Mike Kohout

     

    EDIT:

    I've run this application under mftrace and this is some of the output.

     

    When I create the TranscodeTopology:

    3744,EE0 15:23:54.54467 CMFExportDetours::MFCreateTranscodeTopology @ URL: 'C:\Users\Administrator\Desktop\topobuilder_ipod_sample.wmv'

    3744,EE0 15:23:54.54468 CMFExportDetours::MFCreateTranscodeTopology @ Profile audio: <NULL>

    3744,EE0 15:23:54.54469 CMFExportDetours::MFCreateTranscodeTopology @ Profile video: MF_MT_MAJOR_TYPE=MEDIATYPE_Video;MF_MT_SUBTYPE=MFVideoFormat_WMV3;MF_MT_PIXEL_ASPECT_RATIO=4294967297 (1,1);MF_MT_TRANSFER_FUNCTION=0;MF_MT_VIDEO_PRIMARIES=0;MF_MT_YUV_MATRIX=0;MF_TRANSCODE_ENCODINGPROFILE=MP

    3744,EE0 15:23:54.54470 CMFExportDetours::MFCreateTranscodeTopology @ Profile container: MF_TRANSCODE_CONTAINERTYPE=MFTranscodeContainerType_ASF

     

     

    This is a sampling of similar failures.  I'm assuming these are generated while the system is trying to build a path from the source type to the destination type:

    3744,DF4 15:23:54.62108 CMFTransformDetours::SetOutputType @00000000001E91B0 Failed MT: MF_MT_MAJOR_TYPE=MEDIATYPE_Video;MF_MT_SUBTYPE=MFVideoFormat_IYUV;MF_MT_ALL_SAMPLES_INDEPENDENT=1;MF_MT_FIXED_SIZE_SAMPLES=1;MF_MT_FRAME_RATE=85899345921000 (20000,1000);MF_MT_PIXEL_ASPECT_RATIO=4294967297 (1,1);MF_MT_INTERLACE_MODE=2;MF_MT_TRANSFER_FUNCTION=0;MF_MT_VIDEO_PRIMARIES=0;MF_MT_YUV_MATRIX=0

    3744,DF4 15:23:54.62108 CMFTransformDetours::SetInputType @0000000000298A30 Succeeded MT: <NULL>

    3744,DF4 15:23:54.62114 CMFTransformDetours::SetInputType @0000000000298A30 Succeeded MT: MF_MT_MAJOR_TYPE=MEDIATYPE_Video;MF_MT_SUBTYPE=MFVideoFormat_YUY2;MF_MT_ALL_SAMPLES_INDEPENDENT=1;MF_MT_FIXED_SIZE_SAMPLES=1;MF_MT_FRAME_RATE=85899345921000 (20000,1000);MF_MT_PIXEL_ASPECT_RATIO=4294967297 (1,1);MF_MT_INTERLACE_MODE=2;MF_MT_TRANSFER_FUNCTION=0;MF_MT_VIDEO_PRIMARIES=0;MF_MT_YUV_MATRIX=0

    3744,DF4 15:23:54.62115 CMFTransformDetours::SetOutputType @00000000001E91B0 Failed MT: MF_MT_MAJOR_TYPE=MEDIATYPE_Video;MF_MT_SUBTYPE=MFVideoFormat_YUY2;MF_MT_ALL_SAMPLES_INDEPENDENT=1;MF_MT_FIXED_SIZE_SAMPLES=1;MF_MT_FRAME_RATE=85899345921000 (20000,1000);MF_MT_PIXEL_ASPECT_RATIO=4294967297 (1,1);MF_MT_INTERLACE_MODE=2;MF_MT_TRANSFER_FUNCTION=0;MF_MT_VIDEO_PRIMARIES=0;MF_MT_YUV_MATRIX=0

    3744,DF4 15:23:54.62116 CMFTransformDetours::SetInputType @0000000000298A30 Succeeded MT: <NULL>

     

    The output near the end of the logfile telling me that the topology didn't resolve:

    3744,EE0 15:23:54.62163 CTopologyHelpers::Trace @00000000002901B0 MF_TOPOLOGY_RESOLUTION_STATUS = NOT FOUND!!

    Thursday, November 17, 2011 9:15 PM
  • MFCreateAggregateSource does not create a sequence of sources where one source's data is appended onto another's.  Instead, it combines multiple sources into one.  For example, if you had two WMV sources, each with one audio output and one video output, MFCreateAggregateSource would generate one source with two audio outputs and two video outputs.  It is a helper for things like webcams, which generate separate sources for audio and video.

    For what you are trying to do, you need to use the full sequencer.  You would build a partial topology for each content file in your sequence and append the partial topology onto the sequencer.  Then you would need to handle the sequencer events (MENewPresentation, MEEndOfPresentationSegment) to transition from one element in the sequence to the next.
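Sketched against the documented API, appending one partial topology per file looks roughly like this (a sketch only; `topologies` stands in for whatever builds each file's partial topology, and error handling is minimal):

```cpp
#include <mfapi.h>
#include <mfidl.h>

// Sketch: append one partial topology per input file to a sequencer source.
// The caller is assumed to have built `count` partial topologies already.
HRESULT BuildSequence(IMFTopology* topologies[], DWORD count,
                      IMFSequencerSource** ppSequencer)
{
    HRESULT hr = MFCreateSequencerSource(NULL, ppSequencer);
    if (FAILED(hr)) return hr;

    for (DWORD i = 0; i < count; i++)
    {
        // Mark the last element so the session knows the sequence ends there.
        DWORD flags = (i == count - 1) ? SequencerTopologyFlags_Last : 0;
        MFSequencerElementId id = 0;
        hr = (*ppSequencer)->AppendTopology(topologies[i], flags, &id);
        if (FAILED(hr)) return hr;
    }
    return S_OK;
}
```

The sequencer then raises MENewPresentation through the session whenever it is ready for the next element in the sequence.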

    Saturday, December 17, 2011 12:35 AM
  • Hi Matt-

     

    Thanks for your reply.  It's been a while since I looked at this exact code, but I finally came back to this task this week.  For this spike I've been trying to use the IMFSourceReader + IMFSinkWriter objects to accomplish this same goal (via a modified MFCopy).

    What I'm doing is queueing up sources in an STL vector; when I encounter an MF_SOURCE_READERF_ENDOFSTREAM after reading from a source, I go through and set up my next source and adjust the stream type on the writer.

    Is this also a valid way to accomplish my goal?  I ask because I spied this in the Remarks section of the MSDN docs for IMFSinkWriter::SetInputMediaType():

    After streaming begins—that is, after the first call to IMFSinkWriter::WriteSample—you can call this method at any time to change the input format. However, the underlying encoder and media sink must support dynamic format changes.

    And I'm getting an error after changing the input type for that stream.

    If this isn't universally supported by the writers, is there any way I can programmatically determine if my sink supports it, at the very least as a way to determine if my error is elsewhere?

     

    thanks 

    Mike Kohout

    Wednesday, December 28, 2011 11:04 PM
  • So, what do you mean by the "full sequencer"?  I'm starting a session, then polling the event queue looking for an MEEndOfPresentation event.  When I find one, I prepare the second source, get its presentation descriptor, queue an MENewPresentation event (with the presentation descriptor as an argument), and queue an MENewStream on the session's event queue:

     

        HRESULT hr = S_OK;
    	MFCreateMediaSession(NULL, &m_session);
    	CHECK_HR(hr= m_session->SetTopology(MFSESSION_SETTOPOLOGY_IMMEDIATE,m_topology));
    
    	PROPVARIANT varStart;
        PropVariantInit(&varStart);
    	//CHECK_HR(hr = m_session->BeginGetEvent(this, NULL));
        hr = m_session->Start(&GUID_NULL, &varStart);
    	
    	while(true){
    		IMFMediaEvent* e = NULL;
    		GUID extendedEventType = GUID_NULL;
    		MediaEventType eventType;
    		//session->BeginGetEvent((IMFAsyncCallback*)&eventListener, (IUnknown*)NULL);
    		CHECK_HR(hr=m_session->GetEvent(0,&e));
    		CHECK_HR(hr = e->GetType(&eventType));
    		CHECK_HR(hr = e->GetExtendedType(&extendedEventType));
    	
    		if(eventType == MEEndOfPresentation  && this->sourceCollection->size() == 0){
    				hr=m_session->Close();
    				printf("session ended & no more files to encode\n");
    				break;
    		}
    		else if(eventType == MEEndOfPresentation && this->sourceCollection->size() > 0){
    				//hr=m_session->Close();
    				printf("presentation ended & still more files to encode.\n");
    				printf("send MEEndOfStream event for each stream in now-empty presentation\n");
    				printf("send MEEndofPresentation event\n");
    			    printf("Build a new presentation Descriptor(P167) and send the MENewPresentation event\n");
    				IMFMediaSource  *newSource = NULL;
    				hr=_CreateSource(&newSource);
    				
    				//deselect old streams.
    				{
    					IMFPresentationDescriptor *presentationDescriptor = NULL;
    					m_currentSource->CreatePresentationDescriptor(&presentationDescriptor);
    					DWORD oldSourceStreamCount = 0;
    					hr=presentationDescriptor->GetStreamDescriptorCount(&oldSourceStreamCount);
    					for(DWORD i = 0; i < oldSourceStreamCount; i++)
    					{	hr=presentationDescriptor->DeselectStream(i);
    					}
    				}
    				//set up new streams..
    				{
    					IMFPresentationDescriptor *presentationDescriptor = NULL;
    					newSource->CreatePresentationDescriptor(&presentationDescriptor);
    					newSource->QueueEvent(MENewPresentation, GUID_NULL, S_OK, (const PROPVARIANT* )presentationDescriptor);
    					DWORD streamCount =0;
    					presentationDescriptor->GetStreamDescriptorCount(&streamCount);
    					for(DWORD i = 0; i< streamCount; i++){
    						IMFStreamDescriptor* streamDesc = NULL;
    						BOOL isSelected = false;
    						presentationDescriptor->GetStreamDescriptorByIndex(i, &isSelected, &streamDesc);
    						if(isSelected){
    							m_session->QueueEvent(MENewStream, GUID_NULL, S_OK, (const PROPVARIANT* )streamDesc);
    							newSource->QueueEvent(MENewStream, GUID_NULL, S_OK, (const PROPVARIANT* )streamDesc);
    						}
    					}
    				}
    				//m_session->QueueEvent(MESourceStarted, GUID_NULL, S_OK, NULL);  
    		}
    

     

    However, I'm getting output like this:

    set topology
    MESessionNotifyPresentationTime
    MESessionCapabilitiesChanged
    MESessionTopologyStatus
    MESessionTopologyStatus
    MESessionCapabilitiesChanged
    MESessionStarted
    MESessionStreamSinkFormatChanged
    presentation ended & still more files to encode.
    send MEEndOfStream event for each stream in now-empty presentation
    send MEEndofPresentation event
    Build a new presentation Descriptor(P167) and send the MENewPresentation event
    MESessionTopologyStatus
    MESessionCapabilitiesChanged
    MESessionEnded
    MENewStream
    MENewStream
    ^C


    and, as you can see, I have to terminate my application to end it (otherwise it hangs forever).

     

    Sorry to be such a doofus about this.

    Mike Kohout

    Tuesday, January 10, 2012 9:50 PM
  • Using the source reader and sink writer to manually append presentations is certainly an option.  Depending on what you require (the use of converters, for example), it may be more work.  Media sinks generally do not support dynamic format changes.  Encoders can, but often do not.  The best option is to use converter MFTs (resizer, color converter, frame rate converter) to ensure a consistent format coming into the encoder.
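With the reader/writer approach, one way to keep the format consistent is to configure the source reader to decode every file's video stream to the same uncompressed type, so the sink writer's input type never changes between sources. A sketch (RGB32 is just an example target; format conversions beyond plain decoding require creating the reader with MF_SOURCE_READER_ENABLE_VIDEO_PROCESSING set):

```cpp
#include <mfapi.h>
#include <mfreadwrite.h>

// Sketch: ask the source reader to deliver the first video stream in one
// fixed uncompressed format, regardless of the source file's native format.
HRESULT NormalizeVideoStream(IMFSourceReader* pReader)
{
    IMFMediaType* pType = NULL;
    HRESULT hr = MFCreateMediaType(&pType);
    if (FAILED(hr)) return hr;

    hr = pType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
    if (SUCCEEDED(hr))
        hr = pType->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_RGB32);

    // The reader inserts a decoder (and converters, where enabled) to satisfy this.
    if (SUCCEEDED(hr))
        hr = pReader->SetCurrentMediaType(
            (DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM, NULL, pType);

    pType->Release();
    return hr;
}
```

Frame size and frame rate still have to match across sources (or be converted explicitly), since the reader does not resize or retime frames for you.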

    By 'full sequencer', I mean MFCreateSequencerSource and its associated APIs.  To add a new presentation to the sequence, you should be calling IMFSequencerSource::AppendTopology.  The session will then send an MENewPresentation event when it is ready for the next presentation.  You should never queue MENewStream or MESourceStarted events yourself except when implementing a media source.  The source is responsible for sending these events when it has completed certain operations; if they originate from anywhere besides the source, they will only mess up the session state.
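The documented flow for that event looks roughly like this sketch (error paths trimmed; the helper name OnNewPresentation is illustrative):

```cpp
#include <mfapi.h>
#include <mfidl.h>

// Sketch: on MENewPresentation, the event value carries the next presentation
// descriptor; ask the sequencer (via IMFMediaSourceTopologyProvider) for the
// matching topology and queue it on the session.
HRESULT OnNewPresentation(IMFMediaEvent* pEvent,
                          IMFSequencerSource* pSequencer,
                          IMFMediaSession* pSession)
{
    PROPVARIANT var;
    PropVariantInit(&var);
    HRESULT hr = pEvent->GetValue(&var);
    if (FAILED(hr)) return hr;

    IMFPresentationDescriptor* pPD = NULL;
    IMFMediaSourceTopologyProvider* pProvider = NULL;
    IMFTopology* pTopology = NULL;

    if (var.vt == VT_UNKNOWN)
        hr = var.punkVal->QueryInterface(IID_PPV_ARGS(&pPD));
    else
        hr = E_UNEXPECTED;

    if (SUCCEEDED(hr))
        hr = pSequencer->QueryInterface(IID_PPV_ARGS(&pProvider));
    if (SUCCEEDED(hr))
        hr = pProvider->GetMediaSourceTopology(pPD, &pTopology);
    if (SUCCEEDED(hr))
        hr = pSession->SetTopology(0, pTopology);  // queue the next segment

    if (pTopology) pTopology->Release();
    if (pProvider) pProvider->Release();
    if (pPD) pPD->Release();
    PropVariantClear(&var);
    return hr;
}
```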

    Thursday, February 9, 2012 6:46 PM