ADO.NET Concurrency

  • Question

  • I'm binding a DataTable to a DataGridView through a TableAdapter. Right now some 800 columns, averaging 100 rows, are being retrieved from the database, and hence the application is slow. To optimize this, my idea is to first fetch only the important columns (roughly 50 of them) and display those in the grid. As soon as the grid is displayed, a background delegate would load the remaining columns and populate them into the same DataTable.

    The database will keep changing every 2 minutes, so I would like to use some kind of concurrency, lock, or transaction approach so that the background load retrieves only the records related to those already displayed in the grid.

     

    My questions are

     

    1. Is it the right approach to retrieve from the database asynchronously like this, given that I cannot move away from TableAdapters?

    2. What should I use to establish the concurrency in ADO.NET: "TransactionScope", "lock (tableAdapter)", or something else?

    3. How do I merge the second result set into the same table? (See the sketch below.)
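
    A minimal sketch of the two-stage load asked about in points 1 and 3, assuming a SQL Server backend; the table name WideReport, the key column Id, and the column lists are hypothetical placeholders, and a plain SqlDataAdapter stands in for the designer-generated TableAdapter purely for illustration:

        using System.ComponentModel;
        using System.Data;
        using System.Data.SqlClient;

        static class TwoStageLoader
        {
            // Stage 1: fill only the ~50 important columns so the grid
            // appears quickly. The primary key is needed for the merge later.
            public static DataTable LoadImportantColumns(string connStr)
            {
                var table = new DataTable("WideReport");
                using (var adapter = new SqlDataAdapter(
                    "SELECT Id, Col1, Col2 /* ...important columns... */ FROM WideReport",
                    connStr))
                {
                    adapter.Fill(table);
                }
                table.PrimaryKey = new[] { table.Columns["Id"] };
                return table;
            }

            // Stage 2: a BackgroundWorker (the "background delegate") fetches
            // the remaining columns and merges them into the bound table.
            // DataTable.Merge matches rows on the primary key and, with
            // MissingSchemaAction.Add, appends the new columns.
            public static void LoadRemainingColumnsAsync(DataTable target, string connStr)
            {
                var worker = new BackgroundWorker();
                worker.DoWork += (s, e) =>
                {
                    var rest = new DataTable("WideReport");
                    using (var adapter = new SqlDataAdapter(
                        "SELECT Id, Col51, Col52 /* ...remaining columns... */ FROM WideReport",
                        connStr))
                    {
                        adapter.Fill(rest);
                    }
                    rest.PrimaryKey = new[] { rest.Columns["Id"] };
                    e.Result = rest;
                };
                // When started from the UI thread, RunWorkerCompleted is
                // raised back on the UI thread, so merging into the
                // grid-bound table here is safe.
                worker.RunWorkerCompleted += (s, e) =>
                    target.Merge((DataTable)e.Result, false, MissingSchemaAction.Add);
                worker.RunWorkerAsync();
            }
        }

    On question 2: wrapping both fills in a single TransactionScope would mean holding a transaction open across the whole background load while the grid is already on screen; matching rows on the primary key and merging, as above, sidesteps that.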

     

    Please advise me on this.

     

    Thanks in advance.

     

     

    Tuesday, March 25, 2008 5:49 PM

Answers

  • You don't have a concurrency problem; you have an object and relational modeling problem. In OLTP (online transaction processing) your table cannot have 800 columns, because your objects must be decomposed into the smaller relational model. I am assuming Excel files are being loaded into a database; Excel is a flat file without rules, while a database comes with more than 3000 pages of ANSI SQL rules. You create tables from the files and their associations, while each relationship is determined by upper and lower bound cardinality. A valid solution is to move the columns you need into a view; if you don't, your application will not run.
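
    A sketch of that view suggestion, assuming SQL Server; every object name below (ImportantColumnsView, Table1, Table2, and the columns) is a hypothetical placeholder, since the real schema is not named in the thread:

        using System.Data.SqlClient;

        static class ViewSetup
        {
            // One-time setup: define a narrow view over the wide join so
            // clients only ever select the ~50 columns they actually need.
            public static void CreateImportantColumnsView(string connStr)
            {
                const string ddl =
                    "CREATE VIEW dbo.ImportantColumnsView AS " +
                    "SELECT t1.Id, t1.Col1, t2.Col2 /* ...needed columns... */ " +
                    "FROM dbo.Table1 t1 " +
                    "JOIN dbo.Table2 t2 ON t2.Table1Id = t1.Id";

                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand(ddl, conn))
                {
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }

    The client would then fill its DataTable from "SELECT * FROM dbo.ImportantColumnsView" instead of re-running the wide join.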

     

     

    Thursday, April 3, 2008 6:28 PM

All replies

  • It sounds like there are a lot of different issues at play here. I don't know how much control you have over the situation, but I'll list a few items:

     

    1.  I can't imagine a situation where it would be reasonable to have 800 columns in a table. The database schema should be updated to reduce that column count. Don't forget that, in general, SQL performs better with more rows and fewer columns rather than the opposite.

     

    2.  There is no reasonable way to show 50 or 800 columns in a single DataGridView. Even with scrolling, it's a user interface nightmare. In general, you can only fit about 20 columns in a single screen width at a reasonable font size/resolution. Also, the DataGridView itself will perform badly with that many columns to deal with.

     

    3.  Are there really 50 important things that need to be displayed at once? A typical person can only comprehend 5-7 unique items of data at a time. That means they will only really be able to think about 1/10th of the important data at once, and that will make the UI confusing and stressful, since they will constantly have to choose which set of things to pay attention to. This is a psychology and usability thing, and the rules aren't hard and fast, but I would reconsider and try to make that list of important columns smaller. Possibly business rules need to be considered, about how the data is being used; perhaps some aggregates, calculated results, etc., could reduce the list to, say, 10 items.

     

    That said, in order to improve the performance and usability of the program, I suggest making at least one user interface modification: only pull back the "important" columns to display; then, if the user wants the other 750 data items, they can select a row and click a "Details" button/menu item/etc. This will bring up a display that has each column as a row in a big "property list" sort of view showing everything about that line item. This could be in a modal form, another region of the main form, or what have you. (A sketch follows below.)
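
    A minimal sketch of that details-on-demand idea; as before, WideReport and Id are hypothetical placeholders, and the property-list display itself is omitted:

        using System.Data;
        using System.Data.SqlClient;

        static class DetailsLoader
        {
            // When the user clicks "Details" on a selected row, pull all
            // 800 columns for just that one record, instead of for the
            // whole grid.
            public static DataRow LoadDetails(string connStr, int id)
            {
                var table = new DataTable();
                using (var adapter = new SqlDataAdapter(
                    "SELECT * FROM WideReport WHERE Id = @id", connStr))
                {
                    adapter.SelectCommand.Parameters.AddWithValue("@id", id);
                    adapter.Fill(table);
                }
                return table.Rows.Count > 0 ? table.Rows[0] : null;
            }
        }

    The returned row can then be transposed into a two-column name/value list for the details display.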

     

    Since you've already distinguished between what's important and not important, there's no reason to burden your UI or the SQL server with retrieving and displaying all that non-important data all the time.

     

    Also, I want to strongly discourage you from the design you were suggesting. Attempting to introduce any sort of concurrency into this already bad situation would create a lot of potential for instability (and might simply be impossible with the model you're currently using).

     

    Also, if you can post more detail, I'd be glad to help further.

     

    Specifically, since you're discussing transactions and locking, that seems to imply that you intend to use the DataGridView for updating/adding data as well as viewing it. How does this fit in with your "database which is changing every 2 minutes"?

     

    Thanks,

    Troy

    Tuesday, March 25, 2008 8:38 PM
  • Hi, thanks a lot for your reply.

     

    1. I'm using many tables with joins to get the 800 columns.

    2. In the DataGridView I have separate check boxes from which the user can check/uncheck the column names he wants.

    3. You are right; but going by the business rules, there is a need for all 800 columns.

     

    The values in the database are updated both by an external service and by the user who is using the grid, so concurrency comes into the picture.

     

    Please post more questions if you need more detail.

     

    Thanks a lot!

    Tuesday, March 25, 2008 9:05 PM
  • Well, a few more questions:

     

    1. Is the service a "black box", or can you modify its code?

     

    2. How are the joins happening? Is the client application sending a dynamic SQL statement with joins, or is it accessing a view on the server, or a stored procedure?

     

    You might be able to gain a lot of performance back by having the service truncate and repopulate a concrete table with the results of those joins each time it updates, then notify the client of the "new version" of the data available. This way, the load happens behind the scenes, and only once, when the service updates the results of the big join query. (A sketch follows below.)
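
    If the service (or a job beside it) could be changed, that rebuild step might look like this minimal sketch; ReportSnapshot, Table1, Table2, and the join are hypothetical placeholders:

        using System.Data.SqlClient;

        static class SnapshotRebuilder
        {
            // Rebuild a flattened snapshot table from the joined source
            // tables, so clients read one pre-joined table instead of
            // re-running the wide join on every request.
            public static void RebuildSnapshot(string connStr)
            {
                const string sql =
                    "BEGIN TRANSACTION; " +
                    "TRUNCATE TABLE dbo.ReportSnapshot; " +
                    "INSERT INTO dbo.ReportSnapshot " +
                    "SELECT t1.*, t2.* /* ...the full joined column list... */ " +
                    "FROM dbo.Table1 t1 " +
                    "JOIN dbo.Table2 t2 ON t2.Table1Id = t1.Id; " +
                    "COMMIT;";

                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }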

     

    This gets back to my first question. If the service is something you can change, consider this model: the service acts as the agent for all updates to the database.

     

    So, the service has its own automated process of updating the data every two minutes. The client, when it wants to update the data, talks to the service, not the SQL server. The service merges the client's changes with its automatic changes and commits all of the changes at once in a single transaction, every two minutes. Immediately afterwards, it rebuilds the big 800-column results table that the client reads from. Then it indicates to the client that there is a new version of the data available. (Optionally, the service could simply timestamp the version of the data in the results table, and the client can periodically check whether a newer version is available than the one it's working with; see the sketch below.)
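
    That optional version check might be as simple as this sketch, assuming the service stamps a LastRebuilt value in a hypothetical one-row SnapshotVersion table each time it commits:

        using System;
        using System.Data.SqlClient;

        static class VersionPoller
        {
            // Poll the version stamp; reload the grid only when the service
            // has published a newer snapshot than the one displayed.
            // Assumes SnapshotVersion always contains exactly one row.
            public static bool IsNewerVersionAvailable(string connStr, DateTime currentVersion)
            {
                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand(
                    "SELECT LastRebuilt FROM dbo.SnapshotVersion", conn))
                {
                    conn.Open();
                    var latest = (DateTime)cmd.ExecuteScalar();
                    return latest > currentVersion;
                }
            }
        }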

     

    This will reduce the joins to a single execution, and control all updating through a common service process.

     

    If that works for you, it would improve both the stability and the performance of the entire solution.

     

    Hope that helps,

    Troy

     

     

     

     

    Wednesday, March 26, 2008 7:37 PM
  • thoward37 wrote: [full reply quoted above; snipped]

    Hi,

    That service is actually a black-box service: it populates a number of tables, and joins across those tables retrieve the nearly 800 columns. So I cannot do anything about the service; I can only optimize the retrieval, for example by issuing two queries instead of a single one.

    Wednesday, March 26, 2008 8:03 PM