Front end file increases dramatically

  • Question

  • I am looking at a split Access 2010 database. Part of the business process is to assign accumulated orders as a batch process, and that assignment logic is very complex: a module linking to other modules, all with lots of code. The essence of the business logic is that assignment depends on the worker's current availability as well as the worker's current backlog. There is a lot of looping, and contingencies based on order quantity versus availability.

    The original author, I am sure, crafted this code over months of tweaking, and is no longer around.

    In the end, there is not much new data when this batch process is complete. The back end does not really grow much. But the front end blows up by 400 MB each time it runs this batch. The batch takes 10 min to process on a Win 10 PC with 8 GB of RAM. I have noted that the file size increase is directly related to the order load: more orders involving more workers results in more file size increase. The 10 min / 400 MB figure is just typical; one can make it faster and smaller with a smaller run.

    I'm out of date on this side of coding and still think in terms of memory leaks, but I don't think that is the right term. We need to deal with this 400 MB file size increase, preferably not with a manual compact as is being done now - that is, if it is possible to stop the file size increase via more correct coding methods. So I am wondering, generically, what objects typically demand so much file space and are left behind after each run in a scenario like this.

    Would welcome input from those who have wrestled with this type of situation previously.

    Friday, June 3, 2016 9:56 PM

Answers

  • Would welcome input from those who have wrestled with this type of situation previously.

    Hi msdn,

    Is this because of heavy loading of a table that is then mostly deleted again? In that case you could use a temp database to process all the input. Afterwards you can delete the temp database.

    Imb.

    • Marked as answer by David_JunFeng Tuesday, June 14, 2016 2:12 PM
    Saturday, June 4, 2016 12:52 PM
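
    The temp-database idea can be sketched in DAO along these lines. This is a minimal sketch, not the poster's actual code: the scratch file name, table names, and SQL filter are made-up placeholders.

    ```vba
    ' Create a scratch .accdb next to the front end, do the heavy
    ' processing in it, then delete the whole file afterwards, so
    ' none of the bloat accumulates in the front end itself.
    ' Names and SQL below are hypothetical placeholders.
    Public Sub RunBatchInTempDb()
        Dim strTempPath As String
        Dim dbTemp As DAO.Database

        strTempPath = CurrentProject.Path & "\BatchScratch.accdb"

        ' Remove any leftover scratch file from a previous run
        If Dir(strTempPath) <> "" Then Kill strTempPath

        ' Create the temp database and build the working table in it,
        ' pulling source rows from the current database via an IN clause
        Set dbTemp = DBEngine.CreateDatabase(strTempPath, dbLangGeneral)
        dbTemp.Execute _
            "SELECT * INTO WorkOrders " & _
            "FROM [Orders] IN '" & CurrentDb.Name & "' " & _
            "WHERE OrderDate >= Date() - 30", dbFailOnError

        ' ... run the assignment logic against dbTemp here ...

        dbTemp.Close
        Set dbTemp = Nothing

        ' All the bloat lives in the scratch file - just delete it
        Kill strTempPath
    End Sub
    ```

    The key point is that all the temporary pages land in BatchScratch.accdb, which is thrown away at the end, so the front end never needs compacting for this process.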
  • I agree with Imb. I don't think there is a way to prevent the file size increase if you're using the front end as a scratch pad for complex processes, other than manually compacting it or setting the Compact on Close option. You can either throw away the front end and use a fresh copy each time you need to run the process, or use a temporary database as Imb suggested. You can take a look at this demo to see how to create a temporary database. Just my 2 cents...
    • Marked as answer by David_JunFeng Tuesday, June 14, 2016 2:12 PM
    Saturday, June 4, 2016 3:26 PM

All replies

  • I generally set things up so the client executes a shortcut that first copies the front-end db from a server to the local workstation and then runs that local copy. This way the bloat is eliminated each time the user starts the application.

    Paul

    Sunday, June 5, 2016 12:03 PM
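    That copy-then-launch step can be scripted in a small VBA launcher (or adapted to the shortcut's batch/VBScript target). This is a sketch only; the server and local paths are made-up examples.

    ```vba
    ' Pull a fresh copy of the front end from the server, then open it,
    ' so each session starts from the compact master copy.
    ' Both paths are hypothetical examples.
    Public Sub LaunchFreshFrontEnd()
        Const SERVER_FE As String = "\\Server\Share\OrdersFE.accdb"
        Const LOCAL_FE As String = "C:\Apps\OrdersFE.accdb"

        ' Overwrite yesterday's (bloated) local copy with the master.
        ' FileCopy fails if the local copy is still open in Access.
        FileCopy SERVER_FE, LOCAL_FE

        ' Open the fresh copy in a new Access instance
        Dim appAccess As Object
        Set appAccess = CreateObject("Access.Application")
        appAccess.OpenCurrentDatabase LOCAL_FE
        appAccess.Visible = True
    End Sub
    ```

    Because the bloated copy is simply overwritten on the next launch, no compact step is ever needed on the workstation.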
  • In the big set of code I have uncovered 3 separate make-table queries, with no code that ever deletes those table objects. For these I am wondering whether changing to 3 fixed table objects, with a delete/clear-all query preceding an append query (in place of each make-table query), will reduce the continuous bloat.

    Have also uncovered loop code that includes a call to another module which is itself a loop. So there is a loop inside a loop, and that undoubtedly is a prime cause of the long processing time... but I am not yet sure whether the tasks being performed can avoid this.

    The 3rd thing I have found so far is that the process appears to run across the entire table(s). The tables contain old data going back years that doesn't need to be involved in the current process, so re-orienting the starting point to a select query over a significantly smaller record set could help.

    Monday, June 6, 2016 12:48 PM
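  • The delete-then-append pattern described above can be sketched like this. Table, field, and filter names are hypothetical stand-ins for the poster's actual objects.

    ```vba
    ' Replace a make-table query (SELECT ... INTO) with a fixed table
    ' that is cleared and refilled on each run, combined with a date
    ' filter so old history is never pulled in. Object names are
    ' placeholders, not the actual ones from the poster's database.
    Public Sub RefreshWorkTable()
        Dim db As DAO.Database
        Set db = CurrentDb

        ' Clear the fixed scratch table instead of recreating it
        db.Execute "DELETE FROM tblWorkOrders", dbFailOnError

        ' Append only the records the current run actually needs
        db.Execute _
            "INSERT INTO tblWorkOrders (OrderID, WorkerID, Qty) " & _
            "SELECT OrderID, WorkerID, Qty FROM tblOrders " & _
            "WHERE OrderDate >= Date() - 30", dbFailOnError

        Set db = Nothing
    End Sub
    ```

    Note that deleted rows still leave free pages behind until a compact, but avoiding the repeated table creation and deletion of make-table queries typically reduces both the bloat and the churn in the system tables.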
  • In the big set of code I have uncovered 3 separate make-table queries, with no code that ever deletes those table objects. For these I am wondering whether changing to 3 fixed table objects, with a delete/clear-all query preceding an append query (in place of each make-table query), will reduce the continuous bloat.

    This blog article might help with your decision. Cheers!
    Monday, June 6, 2016 2:56 PM