C# Desktop App connecting to remote SQL server (Very Slow)

  • Question

  •  

    Hi,

     

    I have written an application that gets data from a SQL Server database hosted with our web host. The only problem is that it's very slow when dealing with large amounts of data.

    What's the best approach in this kind of situation, where the database is on the other side of the world?

     

    Right now it's a direct connection and stored procedures. Is there a better way to do it?

     

     

    Craig

    Saturday, April 26, 2008 9:25 AM

All replies

  • Normally, the problem with remote systems isn't data volume - it is round trips. Meaning: if it needs to have a "chatty" conversation with 270 different requests/responses, then you are heavily constrained (at the limit) by the speed of light (genuinely). With distributed systems, the best approach is usually to design a specific API, typically using technologies like WCF or SOAP, with methods specific to what you need - e.g. "ListAllOrdersForCustomer" or "UpdateOrderAndLines" - the idea being to get all the required data from A to B in a single request, rather than a call to "SaveOrderHeader" followed by 20 calls to "SaveOrderLine" (if you see what I mean).
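    Just to illustrate, a coarse-grained contract might look something like the sketch below (a minimal WCF sketch only - the DTO and method names are placeholders, not something from your application):

        using System.Collections.Generic;
        using System.Runtime.Serialization;
        using System.ServiceModel;

        // Illustrative DTOs - the names are placeholders, not from the original app.
        [DataContract]
        public class OrderLineDto
        {
            [DataMember] public int ProductId { get; set; }
            [DataMember] public int Quantity { get; set; }
        }

        [DataContract]
        public class OrderDto
        {
            [DataMember] public int OrderId { get; set; }
            [DataMember] public List<OrderLineDto> Lines { get; set; }
        }

        [ServiceContract]
        public interface IOrderService
        {
            // One round trip: the order header and all of its lines travel together.
            [OperationContract]
            void UpdateOrderAndLines(OrderDto order);

            // One round trip: everything the screen needs comes back in one response.
            [OperationContract]
            List<OrderDto> ListAllOrdersForCustomer(int customerId);
        }

    The client then makes one call per user action, rather than one call per row.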

     

    When data *volume* is the issue (bulk data transfers), then compression is your friend, as is choosing an appropriate data format... for instance, CSV/TSV is less bloated than simple xml (POX), which is less bloated than a serialized recordset [noting however that all 3 should compress fairly well - but obviously starting with less data helps ;-p]. Equally, the transport can be a factor; for example, compressed data travels as binary, which by default (over SOAP) requires base-64 encoding, which increases the volume by a third - however, this can be resolved by using the MTOM protocol, which is supported on WSE3 and WCF [basicHttpBinding].
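    To give a feel for the compression side, here is a minimal sketch using GZipStream from System.IO.Compression (the payload string is just an assumption for illustration); keep in mind that if the resulting bytes travel over plain SOAP they get base-64 encoded unless you enable MTOM:

        using System.IO;
        using System.IO.Compression;
        using System.Text;

        static class PayloadHelper
        {
            // Compress a serialized payload (CSV, XML, etc.) before sending it.
            public static byte[] Compress(string payload)
            {
                byte[] raw = Encoding.UTF8.GetBytes(payload);
                using (MemoryStream output = new MemoryStream())
                {
                    using (GZipStream gzip = new GZipStream(output, CompressionMode.Compress))
                    {
                        gzip.Write(raw, 0, raw.Length);
                    }
                    // The gzip stream must be closed (flushed) before reading the buffer back.
                    return output.ToArray();
                }
            }
        }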

     

    So; it isn't a simple question ;-p

     

    The first thing I would do is set up a network trace, and figure out whether the problem is:

    a: lots of requests/responses [too chatty - redesign API]

    b: few requests/responses, but very large payloads [apply compression; consider redesign of API]
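     

    If you don't have a network tracing tool to hand, even a crude client-side timing will usually tell you which case you are in - a rough sketch only, where the loadOrders delegate stands in for whatever data-access call you already have:

        using System;
        using System.Data;
        using System.Diagnostics;

        static class Diagnostics
        {
            // Many short calls that each cost ~latency => too chatty (case a).
            // One long call returning a huge result => volume/compression (case b).
            public static void TimeTheCall(Func<DataTable> loadOrders)
            {
                Stopwatch watch = Stopwatch.StartNew();
                DataTable orders = loadOrders();   // placeholder for the existing call
                watch.Stop();
                Console.WriteLine("{0} rows in {1} ms", orders.Rows.Count, watch.ElapsedMilliseconds);
            }
        }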

     

    Saturday, April 26, 2008 9:55 AM
  • Oh - and for the record, I wouldn't normally allow a desktop client direct access to a SQL Server - clients can easily be abused (for example, reverse engineered via Reflector) and exploited. From a security perspective, it is a better idea to have the client call an app-server (WCF or SOAP) with *known* methods (like those already mentioned), and have the app-server talk to the SQL Server.

     

    This also makes it possible to use a "trusted subsystem" model between the app-server and SQL Server - meaning that the app-server talks to the SQL Server under a single known identity, using the same connection string each time. This allows optimal use of the connection pool, reducing the number of open connections to the SQL Server and dramatically improving scalability.
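     

    In code terms the trusted-subsystem side of that is nothing exotic; it just means the app-server always opens its connections with one fixed connection string, so ADO.NET pooling can do its job. A small sketch (connection string, table and column names are placeholders):

        using System.Data.SqlClient;

        static class OrderRepository
        {
            // One fixed identity/connection string for the whole app-server (placeholder values).
            private const string ConnStr =
                "Data Source=dbserver;Initial Catalog=Shop;User ID=appServer;Password=***";

            public static int CountOrdersForCustomer(int customerId)
            {
                using (SqlConnection conn = new SqlConnection(ConnStr))
                using (SqlCommand cmd = new SqlCommand(
                    "SELECT COUNT(*) FROM Orders WHERE CustomerId = @customerId", conn))
                {
                    cmd.Parameters.AddWithValue("@customerId", customerId);
                    conn.Open();
                    // Dispose returns the connection to the pool rather than closing the socket.
                    return (int)cmd.ExecuteScalar();
                }
            }
        }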

    Saturday, April 26, 2008 10:01 AM
  •  

    Make sure your SQL queries are efficient, and keep using stored procedures as well.

    It could also be down to something like antivirus software; some AV products are fierce and slow down outgoing and incoming connections. It may also be due to the host's performance, but it can be a number of things.

     

    As stated earlier, I would use a web service to connect to the database and return the data you need. But check the performance of the network and of your queries; make sure they are efficient and don't do unnecessary work.

     

    In addition to that, make sure you only retrieve the records you need, not all of them; if it's a large database this matters for obvious performance reasons. Select only the data you need.
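     

    For example (just a sketch with made-up table and column names), ask for exactly the columns and rows the screen needs rather than SELECT *:

        using System;
        using System.Data.SqlClient;

        static class CustomerQueries
        {
            // Only the columns needed, only the rows matching the filter.
            private const string Sql =
                "SELECT CustomerId, Name, Email FROM Customers WHERE Region = @region";

            public static void PrintCustomers(string connStr, string region)
            {
                using (SqlConnection conn = new SqlConnection(connStr))
                using (SqlCommand cmd = new SqlCommand(Sql, conn))
                {
                    cmd.Parameters.AddWithValue("@region", region);
                    conn.Open();
                    using (SqlDataReader reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            Console.WriteLine("{0}: {1}", reader["CustomerId"], reader["Name"]);
                        }
                    }
                }
            }
        }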

    Saturday, April 26, 2008 10:19 AM
  • You need to describe the scenario a little further. Just saying that you transfer large amounts of data is not enough; you must give us a little more to think about. Transferring data between two locations can be done in many ways, but it all depends on the application scenario. If you want to have only one SQL Server then you need to limit the data transferred, which means as few rows as possible and as few columns as possible. Data that doesn't change too often can be cached and transferred once a day, for example. You won't show thousands of rows at once; instead, use paging and show only 20-30 records per page. Paging should be implemented on the SQL Server side, not on the client side.
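     

    A server-side paging query (SQL Server 2005 and later) can be as simple as the sketch below, so that only one page of rows ever crosses the wire; table and column names are made up for the example:

        using System.Data;
        using System.Data.SqlClient;

        static class OrderPaging
        {
            // ROW_NUMBER()-based paging: the server returns just the requested page.
            private const string PagedSql = @"
                SELECT OrderId, OrderDate, Total
                FROM (SELECT OrderId, OrderDate, Total,
                             ROW_NUMBER() OVER (ORDER BY OrderDate DESC) AS RowNum
                      FROM Orders) AS Numbered
                WHERE RowNum BETWEEN @first AND @last;";

            public static DataTable GetPage(string connStr, int pageIndex, int pageSize)
            {
                using (SqlConnection conn = new SqlConnection(connStr))
                using (SqlCommand cmd = new SqlCommand(PagedSql, conn))
                {
                    cmd.Parameters.AddWithValue("@first", pageIndex * pageSize + 1);
                    cmd.Parameters.AddWithValue("@last", (pageIndex + 1) * pageSize);
                    DataTable page = new DataTable();
                    using (SqlDataAdapter adapter = new SqlDataAdapter(cmd))
                    {
                        adapter.Fill(page);   // Fill opens and closes the connection itself
                    }
                    return page;
                }
            }
        }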

    You can also consider getting help from SQL Server replication; in your case you could use point-to-point replication to connect to the SQL Server over the internet connection.

     

    Saturday, April 26, 2008 11:16 AM