Send port with file adapter vs remote datacenter performance issue

  • Question

  • Hi all,

    We have a strange issue, when sending simple csv files to a remote datacenter.

    The files are between 3 and 10 MB in size.

    Our BizTalk servers are located in our European datacenter, and we have a remote datacenter located in Asia (line: 100 Mbit download and 50 Mbit upload, with a ping of 250 ms).

    We are running BizTalk 2013 R2 CU5 Ent, with multiple servers in the BizTalk group and a clustered SQL backend.

    When sending to the local datacenter, the file transfer is under 1 second.

    When sending to the remote datacenter, the file transfer takes between 3 and 5 minutes.

    The files are instantly created on the remote site with 0 KB in size, but the transfer stays running in BizTalk for 3 to 5 minutes.

    I would have expected an increase in the transfer time, but nothing like this.

    The strange thing is that when I manually copy the files using Windows Explorer, running under the BizTalk service account (from Europe to Asia), the file transfer is fast.

    In addition, when I use PowerShell or the Command Prompt, the file transfer is fast (also running under the BizTalk service account).

    I can transfer 100 MB files in under 30 seconds without issues.

    I have tried the following scenarios to isolate the problem.

    • Disable antivirus on the remote datacenter. No performance increase
    • On the file adapter - Set the allow cache to true. No performance increase
    • On the file adapter - Set the use temp files to true. No performance increase
    • On the file adapter - Set the allow cache and use temp files to true. No performance increase

    Anyone got any input or suggestions? 


    Friday, April 28, 2017 10:49 AM

Answers

  • This is just some educated speculation, so take it as such: the File Adapter is probably much more chatty when writing the files, due to the streaming nature of the Pipeline/Adapter architecture.

    Meaning, Windows file copy (Explorer/Powershell) can do one single continuous write operation while the File Adapter writes in many chunks, as the data is read through the Pipeline.

    So, your issue is latency, not throughput.

    The way to test this would be to write a program that copies a file in 1024 KB chunks.
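A minimal sketch of such a test, assuming Python is available on the sending server (the file paths below are placeholders): copy a file in fixed-size chunks and time it, then compare a small chunk size against a large one over the WAN share.

```python
import time

def chunked_copy(src, dst, chunk_size=1024 * 1024):
    """Copy src to dst in fixed-size chunks; return elapsed seconds."""
    start = time.monotonic()
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            block = fin.read(chunk_size)
            if not block:
                break
            fout.write(block)
    return time.monotonic() - start

# Compare a 4096-byte chunk size against 1024 KB chunks, e.g.:
# chunked_copy(r"C:\temp\test.csv", r"\\remote-dc\share\test.csv", 4096)
# chunked_copy(r"C:\temp\test.csv", r"\\remote-dc\share\test.csv", 1024 * 1024)
```

If the small-chunk run is dramatically slower over the WAN but comparable locally, that points at per-write latency rather than bandwidth.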

    • Marked as answer by Rasmus Jaeger Tuesday, May 2, 2017 10:41 AM
    Sunday, April 30, 2017 12:54 PM
    Moderator

All replies

  • Hi,

    Couple of things you can try to help isolate the issue.

    1. You have multiple servers in the same group. Try to move the send host to a different server, and check again
    2. Do you have more than one BizTalk environment (Prod/QA/Test)? Try a different one with same setup

    Is this a simple messaging app, simply moving one file from one place to another? If so, I assume you are using PassThru pipeline? And of course, what is the thread count on your send host? Try restarting it, and see if it helps.


    Best regards, Kjetil :) Please remember to click "Mark as Answer" on the post that helps you. This can be beneficial to other community members reading the thread.

    My blog

    Friday, April 28, 2017 11:47 AM
  • Meaning, Windows file copy (Explorer/Powershell) can do one single continuous write operation while the File Adapter writes in many chunks, as the data is read through the Pipeline.

    If that were the case, wouldn't the same problem occur with the File Adapter on data transfers between local datacenters, e.g. across Denmark?

    I haven't seen it with 1.5 GB data files.

    rgds /Peter

    Sunday, April 30, 2017 2:50 PM
  • The thing is, distance is not the only determining factor in latency.  You can have high latency across the street if the data has to take an overly circuitous route to get there.
    Monday, May 1, 2017 3:46 PM
    Moderator
  • Hi,

    The issue was latency-related. The File Adapter has a fixed buffer size of 4096 bytes.
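For what it's worth, a back-of-envelope check (my own arithmetic, under the pessimistic assumption that every 4096-byte write pays one full round trip with no SMB write pipelining) lands in the same range as the transfer times reported above:

```python
BUFFER = 4096   # bytes: the File Adapter's fixed buffer size
RTT = 0.250     # seconds: the ~250 ms ping from the original question

def worst_case_seconds(file_bytes, buffer=BUFFER, rtt=RTT):
    """Worst-case transfer time if each buffered write costs one round trip."""
    writes = -(-file_bytes // buffer)  # ceiling division: number of writes
    return writes * rtt

print(worst_case_seconds(3 * 1024 * 1024))  # 3 MB file -> 192.0 s, about 3 minutes
```

Real SMB clients do pipeline writes, so actual times should be shorter than this worst case, but the order of magnitude matches the observed 3 to 5 minutes.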

    Will develop a custom file adapter to overcome our issue.

    Thanks for your help.


    Tuesday, May 2, 2017 10:41 AM
  • The SDK has a File Adapter you can use already.

    But keep in mind, this may not be an actual problem. What you are observing is the combined time of the entire message process.

    Tuesday, May 2, 2017 11:25 AM
    Moderator