Data Services Streaming provider

  • Question

  • I have implemented a streaming provider to allow streaming of large files into a FILESTREAM column in SQL Server 2008.

I have modified the maxReceivedMessageSize in the web.config to be larger than the maximum size of any binary we are trying to stream.

On the WPF client application, I call SetSaveStream to associate a FileStream from the client with the entity, and then call SaveChanges.
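    For reference, the client-side pattern described above looks roughly like this. This is a sketch only: `PhotoContext`, `MediaEntity`, the entity set name, and the file paths are hypothetical placeholders, not names from the original post.

    ```csharp
    using System;
    using System.IO;
    // DataServiceContext and SetSaveStream live in System.Data.Services.Client.

    // Hypothetical generated context and media-link entity (placeholder names).
    var context = new PhotoContext(new Uri("http://myserver/MediaService.svc"));

    var entity = new MediaEntity { Name = "video.wmv" };
    context.AddObject("MediaEntities", entity);

    using (FileStream source = File.OpenRead(@"C:\videos\video.wmv"))
    {
        // Associate the file stream with the entity's media resource (MR).
        context.SetSaveStream(entity, source, true /* closeStream */,
                              "video/x-ms-wmv" /* Content-Type */,
                              "video.wmv" /* Slug header */);

        // SaveChanges issues the POST for the media resource; with large
        // files, this is where the buffering and out-of-memory behavior
        // described in this thread shows up.
        context.SaveChanges();
    }
    ```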

For small files this all works great.  However, for large files (like a 300MB video), I receive an "Out of Memory" exception on the client after a while when the SaveChanges() method is called.  The memory usage of the server's IIS process also goes way up.  I was under the impression that Data Services would use chunked transfer encoding for MR (media resource) requests.

If both the client and the server buffer the entire request into memory, doesn't this defeat the purpose of trying to implement streaming?

I can upload a smaller file (90MB) successfully, but after inspecting the request in Fiddler I see that the request size is 90MB and that the request is not using chunked transfer encoding.

    Any help would be great.




    Saturday, September 25, 2010 6:00 AM

All replies

  • Thanks for the links, but I have already reviewed them to create my streaming provider as well as my client application.

The problem I am having is that, when using the Data Services client libraries, the entire stream seems to be buffered somewhere in memory when the SaveChanges method is called.  I have verified this using the Visual Studio memory profiler.

I have tried setting the AllowWriteStreamBuffering property to false, which I thought would keep the request from being buffered in memory (for redirects, etc.).
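    The attempt described above can be sketched as follows. This is a hypothetical snippet (the `context` variable is assumed to be an existing DataServiceContext); it hooks the SendingRequest event to reach the underlying HttpWebRequest:

    ```csharp
    using System.Net;
    using System.Data.Services.Client;

    // Hook the context's SendingRequest event to reach the underlying
    // HttpWebRequest for each request the client sends.
    context.SendingRequest += (sender, e) =>
    {
        var httpRequest = e.Request as HttpWebRequest;
        if (httpRequest != null)
        {
            // Intended to prevent the request body from being buffered in
            // memory before sending. As discussed later in this thread,
            // the Data Services client does not honor this setting.
            httpRequest.AllowWriteStreamBuffering = false;
        }
    };
    ```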


    Wednesday, September 29, 2010 4:13 AM
By default, the WCF Data Services client sends the POST request chunked (in chunks of about 64KB, 0x10000 bytes) when creating a new stream, as you can see in the following headers:

    POST /PhotoService/PhotoData.svc/PhotoInfo HTTP/1.1
    User-Agent: Microsoft ADO.NET Data Services
    DataServiceVersion: 1.0;NetFx
    MaxDataServiceVersion: 2.0;NetFx
    Accept: application/atom+xml,application/xml
    Accept-Charset: UTF-8
    Content-Type: image/bmp
    Slug: SS_Scots.bmp
    Host: XXXXXXX
    Transfer-Encoding: chunked
    Expect: 100-continue


(For some reason, Fiddler doesn't seem to show the individual chunks in the exchange.)

    I am seeing a similar spike in memory usage during SaveChanges, which seems to be about the size of the streamed BLOB, in my case a 200MB bmp file.  The client and the server should not be buffering the entire stream during SaveChanges, so there must be something else going on. I will investigate this further.

    I also noticed that, by default, Fiddler seems to cache all request and response data, including BLOBs, which puts extra memory pressure on my machine while tracing these scenarios.



    This posting is provided "AS IS" with no warranties, and confers no rights.
    Sunday, October 3, 2010 6:06 AM
  • Hi Paul,

    On the server side did you set the transfer mode to "Streamed"?  Here's an example of how to enable streaming in web.config:


        <system.serviceModel>
          <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
          <services>
            <service name="NorthwindService">
              <endpoint address="http://MyWebServer/Northwind/Northwind.svc"
                        binding="webHttpBinding" bindingConfiguration="streamedBinding"
                        contract="System.Data.Services.IRequestHandler" />
            </service>
          </services>
          <bindings>
            <webHttpBinding>
              <binding name="streamedBinding" maxReceivedMessageSize="67108864" transferMode="Streamed">
                <!-- This element specifies Windows Authentication for this endpoint -->
                <security mode="TransportCredentialOnly">
                  <transport clientCredentialType="Windows" proxyCredentialType="Windows" />
                </security>
              </binding>
            </webHttpBinding>
          </bindings>
        </system.serviceModel>


    Tuesday, October 5, 2010 1:52 AM
It turns out that the WCF Data Services client doesn't currently support setting AllowWriteStreamBuffering to false, which means that the client will always buffer the entire BLOB in memory during SaveChanges. I have opened a bug to track this behavior, which I don't think allows client applications to take full advantage of the streaming provider functionality.

    Jimmy's recommendation should stop the memory spikes in the w3wp.exe process.

    Thanks Glenn.

    This posting is provided "AS IS" with no warranties, and confers no rights.
    Tuesday, October 5, 2010 4:01 AM
Thanks for looking at this!  I agree that client applications must be able to do this to take full advantage of the streaming capabilities.  This would be crucial for our applications.  It is disappointing that we might not be able to use the streaming portion of Data Services for now.



    Tuesday, October 5, 2010 2:38 PM
  • Hi,

You might be able to work around this issue by using an HttpWebRequest directly, without our client (after all, the client is just a wrapper around that anyway).

For inserting entities you would issue a POST with the binary data (no real headers are required for this request, other than those required by your streaming provider). In the response, just pay attention to the Location header, which returns the URL of the created entity; you can then query for that entity using our client (the Execute method) and make updates to it through the client.

For updating streams on existing entities it's even easier. Ask for the EditStreamUri property on the entity descriptor and then send a PUT request to that URL using HttpWebRequest. Only if you are using ETags on the stream itself do you need to include an If-Match header with the value of the StreamETag property from the entity descriptor.
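    The stream-update path described above might look roughly like this. A sketch under assumptions: `context` and `entity` are an existing DataServiceContext and tracked entity, and the content type, file path, and buffer size are placeholders:

    ```csharp
    using System.IO;
    using System.Net;
    using System.Data.Services.Client;

    // Look up the descriptor for the tracked entity to get its stream URIs.
    EntityDescriptor descriptor = context.GetEntityDescriptor(entity);

    var request = (HttpWebRequest)WebRequest.Create(descriptor.EditStreamUri);
    request.Method = "PUT";
    request.ContentType = "video/x-ms-wmv";    // content type of the stream
    request.AllowWriteStreamBuffering = false; // do not buffer the body in memory
    request.SendChunked = true;                // use chunked transfer encoding

    // Only needed if the stream uses ETags for concurrency checks.
    if (!string.IsNullOrEmpty(descriptor.StreamETag))
        request.Headers["If-Match"] = descriptor.StreamETag;

    // Copy the file to the request stream in small chunks so only one
    // buffer's worth is in memory at a time.
    using (Stream requestStream = request.GetRequestStream())
    using (FileStream source = File.OpenRead(@"C:\videos\video.wmv"))
    {
        byte[] buffer = new byte[64 * 1024];
        int read;
        while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
            requestStream.Write(buffer, 0, read);
    }

    using (var response = (HttpWebResponse)request.GetResponse())
    {
        // A successful stream update typically returns 204 No Content.
    }
    ```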


    Vitek Karas [MSFT]
    Tuesday, October 5, 2010 4:10 PM