What is the fastest way to insert data into a Jet (MS Access) database?

  • Question

  • Similar questions have been asked, but I really haven't found a simple solution for this simple question:

    I am downloading data from various data sources using the respective .NET 4.0 DBConnections / DbCommands / DbDataReaders, etc. That's all fine and dandy. What I want to do is dump all the results into MS Access. What is the fastest way to do this??

    Here's what I've already done:

    I examine the incoming data (accessed via a DataReader obtained from a DbCommand.ExecuteReader) using the schema table (Reader.GetSchemaTable), and construct an ADO-native CREATE TABLE command. I submit that command to the Jet database via an OleDbCommand, using the Microsoft.ACE.OLEDB.12.0 provider. This works great. In other words, I don't have any questions or issues with preparing any table in which to insert the data!
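    Purely for illustration, a stripped-down sketch of that table-creation step might look like the following; the type mapping is deliberately simplified, and the table name and jetConnection object are placeholders:

      'Assumes Imports System.Data, System.Data.OleDb, System.Collections.Generic
      Dim schema As DataTable = Reader.GetSchemaTable()
      Dim cols As New List(Of String)
      For Each row As DataRow In schema.Rows
          Dim colName As String = CStr(row("ColumnName"))
          Dim netType As Type = CType(row("DataType"), Type)
          'Map a handful of .NET types to Jet DDL types; a real mapping would cover more cases
          Dim jetType As String = "DOUBLE"
          If netType Is GetType(String) Then jetType = "TEXT(255)"
          If netType Is GetType(DateTime) Then jetType = "DATETIME"
          If netType Is GetType(Integer) Then jetType = "LONG"
          cols.Add("[" & colName & "] " & jetType)
      Next
      Dim ddl As String = "CREATE TABLE [MyTable] (" & String.Join(", ", cols) & ")"
      Using cmd As New OleDbCommand(ddl, jetConnection)
          cmd.ExecuteNonQuery()
      End Using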

    The bottleneck comes in the actual insertion process. I have tried all of the following methods:

    DataAdapter

    • Access the output table via a DataAdapter
    • Fill a working DataTable from the DataAdapter
    • Construct a parameterized InsertCommand (so I can keep strong data typing)
    • Use a While (Reader.Read()) loop to cycle through result set
    • For each row, use the DataTable.Rows.Add method to create a new row, get the values from the Reader row, and add it to the DataTable
    • Call the DataAdapter.Update( DataTable ) method
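    In code, the core of that loop looks roughly like this (a sketch only; adapter, Reader, and the parameterized InsertCommand are assumed to be set up already):

      Dim table As New DataTable()
      adapter.FillSchema(table, SchemaType.Source)   'or an initial Fill() with a query that returns no rows
      While Reader.Read()
          Dim newRow As DataRow = table.NewRow()
          For i As Integer = 0 To Reader.FieldCount - 1
              newRow(i) = Reader.GetValue(i)
          Next
          table.Rows.Add(newRow)
      End While
      adapter.Update(table)                          'one Update call at the end (or every N rows)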

    I've tried all of the following variations:

    • Use a DataAdapter query that returns no rows (WHERE FALSE), or one that returns all rows (no Where clause)
    • Set DataAdapter.SelectCommand to Nothing after the initial fill (to make sure the DataAdapter doesn't keep SELECTing every time it's updated)
    • Call DataAdapter.Update for every record, once every interval (say, 1,000 records), or only once (at the end)

    Results:

    • The SelectCommand doesn't make any noticeable difference. Whether it's SELECT * WHERE FALSE, SELECT *, or Nothing, the performance is the same. Probably because on the first and only Fill call the table is always empty anyway
    • It may help to reduce the number of Update calls. I'm aware that Jet does not have batch update support, but at least letting the CLR bunch a thousand (or ten thousand) prepared Update calls in a row seems to help. A bit.
    • The bottleneck is definitely in the Update process. If I never call it (i.e. just load an entire result to memory, then discard it, OR never actually Add the new row at all) the reading process is very fast. Both the time and the CPU usage are concentrated on the Update call -- and I am updating to a database on my own hard drive.

    Example

    For testing purposes, I connect to a TeraData database via an OdbcReader, and request the entire table data dictionary view: dbc.Tables. For the database I'm working with that comes out to 30,153 rows x 26 fields, taking a total of 79 MB when downloaded to MS Access. When I only call Update every 10,000 records, I can clearly see that the Reader fetches 10,000 records, reads the fields, and saves the data to the DataTable in memory in 6-7 seconds. At about 20 MB, that's probably limited only by the network and database provider speed. (I'd say 3.0 MB / sec is pretty good!). But it then takes 12-13 seconds to insert the records in the MS Access database. Here are the actual stats. The final update after the End read is the last 153 records:

    Begin reading at 3/1/2011 8:44:00 AM
     Updating through record 10000 at 3/1/2011 8:44:06 AM
     Finished update at 3/1/2011 8:44:18 AM
     Updating through record 20000 at 3/1/2011 8:44:25 AM
     Finished update at 3/1/2011 8:44:38 AM
     Updating through record 30000 at 3/1/2011 8:44:55 AM
     Finished update at 3/1/2011 8:45:08 AM
    End read at 3/1/2011 8:45:08 AM
    Begin final update at 3/1/2011 8:45:08 AM
    End final update at 3/1/2011 8:45:08 AM
    

    So my program, network, and Teradata DB can chug through 40,000 element reads totalling 3.0 MB every second, but it takes twice as long to insert the same data to a database file on my own hard drive.


    jmh
    Tuesday, March 1, 2011 1:58 PM

Answers

  • Hello JMH,

    There are some general items you can try to improve the performance of executing many update statements, but the engine is not designed for a high-stress environment and many scenarios using the Access Connectivity Engine are actually unsupported.

    Have you reviewed the conditions on using the engine from the download page?

    Microsoft Access Database Engine 2010 Redistributable

    http://www.microsoft.com/downloads/en/details.aspx?FamilyID=c06b8369-60dd-4b64-a44b-84b371ede16d&displaylang=en

    Unless you are specifically using the Access database file from the Access client somewhere in your solution, it's not recommended to write your data to the database file. Instead you should be using SQL Server such as SQL Server 2008 Express. This is discussed in the following article.

    Data Access Technologies Road Map

    http://msdn.microsoft.com/en-us/library/ms810810.aspx

    If you find that your solution follows the recommended road map and falls within the supported uses, then you can review the following article which shows some statistics on the varying methods of working with the data and the corresponding times for these methods.

    Data Programming with Access 2010

    http://msdn.microsoft.com/en-us/library/ff965871.aspx

    As with any design, you will want to follow the basic design principles:

    - Setup a primary key and indexes on your tables

    - Use transactions for looping updates

    KB 304266 How to add an index to an Access database in Access 2002

    http://support.microsoft.com/default.aspx?scid=kb;EN-US;304266

    KB 146908 How To Speed Up Data Access by Using BeginTrans & CommitTrans

    http://support.microsoft.com/default.aspx?scid=kb;EN-US;146908

    It sounds like your approach will issue many locks. You could explore the following engine settings to see if tweaking these values improves your performance.

    maxLocksPerFile - The following article will give you some general information on this setting, but it doesn't include the correct path to the key for the Access 2010 Engine. The default value is 9500. You can try modifying the value to something like 500000.

    Here is the path to search for in your registry:

    x86: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Office\14.0\Access Connectivity Engine\Engines\ACE

    x64: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Office\14.0\Access Connectivity Engine\Engines\ACE

    KB 286153 You may receive a "There isn't enough disk space or memory" error message when you perform an operation on an Access table

    http://support.microsoft.com/default.aspx?scid=kb;EN-US;286153

    maxBufferSize - This key is located in the same directory as the maxLocksPerFile. The default value is 0. You can try modifying the value to something like 51200.

    KB 187872 How To Determine Jet Memory Usage with DAO MaxBufferSize

    http://support.microsoft.com/default.aspx?scid=kb;EN-US;187872
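    As a concrete example, both values can be set from .NET with something like the following sketch (the x86 path from above is shown; use the Wow6432Node path on x64, and note that writing to HKEY_LOCAL_MACHINE requires administrative rights):

      Imports Microsoft.Win32

      Module TweakAceSettings
          Sub Main()
              Dim keyPath As String = "SOFTWARE\Microsoft\Office\14.0\Access Connectivity Engine\Engines\ACE"
              Using key As RegistryKey = Registry.LocalMachine.OpenSubKey(keyPath, writable:=True)
                  If key IsNot Nothing Then
                      'The values named earlier in this post
                      key.SetValue("MaxLocksPerFile", 500000, RegistryValueKind.DWord)
                      key.SetValue("MaxBufferSize", 51200, RegistryValueKind.DWord)
                  End If
              End Using
          End Sub
      End Module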

    Regards,

    Dennis

    • Marked as answer by Joshua Honig Wednesday, March 9, 2011 12:30 PM
    Monday, March 7, 2011 10:00 PM
  • I'm with Dennis on this one. It's like a guy with a vintage VW Beetle coming back into the dealership and telling the salesman that he can't haul nearly as much coal as he had expected. The JET/Access database engine is really designed for small, single-user database applications with occasional sharing. Getting it to do more really requires considerable engineering and finger-crossing.

    Consider that SQL Server Express (and every other edition of the SQL Server (service) engine) has a "bulk copy" interface designed specifically for data import. SSE is far more robust, scalable, and secure than JET/Access.

    I know, you're stuck with Access/JET but you may be beating a dead horse (or goat in this case).


    __________________________________________________________________
    William Vaughn
    Mentor, Consultant, Trainer, MVP
    http://betav.com
    http://betav.com/blog/billva
    http://www.hitchhikerguides.net

    “Hitchhiker’s Guide to Visual Studio and SQL Server (7th Edition)”

    Please click the Mark as Answer button if a post solves your problem!

    • Marked as answer by Joshua Honig Wednesday, March 9, 2011 12:30 PM
    Monday, March 7, 2011 11:44 PM
    Moderator

All replies

  • The Benchmark

    If 20,000 element reads per second totalling 1.5 MB for updating to MS Access seems good, here's why I complain: it takes half the time to download the same data with a VBA program I wrote in MS Access, which uses ADO 2.8 to fetch the data, DAO to create the table and fill it, and of course the VBA runtime for the containing program.

    Further, my VBA program takes a fraction of the CPU that the shiny new ADO.NET / OleDb / Microsoft.ACE.OLEDB.12.0 program does. It's wonderful programming in actual VB, with all the bells and whistles of Visual Studio 2010, but what's up with the crummy performance? I thought ADO.NET was supposed to be so much better. The fact that it gets trounced by a VBA program using old COM objects has me scratching my head.


    jmh
    Tuesday, March 1, 2011 2:09 PM
  • Method 2: Command with no DataAdapter

    The second method I tried is to skip the DataAdapter and instead create a stand-alone parameterized query. This allows the program to skip the step of saving data to an in-memory data table. Instead, each row is inserted by setting the value of each parameter directly. The trade-off is that each row must be inserted with an explicit ExecuteNonQuery call.
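    For reference, the core of this approach is roughly the following (a sketch only; the connection, table, and field names are placeholders, and a real version would add one parameter per source column):

      Dim sql As String = "INSERT INTO [MyTable] ([Field1], [Field2]) VALUES (?, ?)"
      Using cmd As New OleDbCommand(sql, jetConnection)
          cmd.Parameters.Add("@p1", OleDbType.VarWChar, 255)
          cmd.Parameters.Add("@p2", OleDbType.Integer)
          cmd.Prepare()
          While Reader.Read()
              cmd.Parameters(0).Value = Reader.GetValue(0)   'GetValue returns DBNull.Value for nulls
              cmd.Parameters(1).Value = Reader.GetValue(1)
              cmd.ExecuteNonQuery()                          'one round trip to Jet per row
          End While
      End Using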

    Result:

    It takes slightly less time than the DataAdapter method: my example query took 59 seconds vs. the 68 seconds with the DataAdapter. Still about twice the time my VBA program takes (31 seconds).

    Clarification on elapsed time figures: All the elapsed time figures measure from immediately before the read process to immediately after the read process. The time spent waiting for the DB provider to respond is not included. The timer only starts once the RecordSet (COM ADO) / DataReader (VB.NET) is returned and ready for reading. In other words, these stats are not affected by variations in DB provider query response time.


    jmh
    Tuesday, March 1, 2011 2:41 PM
  • Method 3: Text literal INSERT INTO statement

    The third method I tried is to create my own literal INSERT INTO SQL statement. I don't like this idea in general because of the risk to data typing, but I gave it a try anyway. In order to preserve data types as best I can, I convert values to strings in one of four ways:

    1. For text data (CHAR, TEXT, or MEMO fields in destination table): First replace single apostrophe ['] with two apostrophes ['']. This is necessary for escaping the apostrophe character. Second, enclose in apostrophes: '[value]'
    2. For numeric data, use VB.NET .ToString method. If null, set value = literal "Null"
    3. For datetime data, enclose in pound signs [#]: #[date time]#. If null, set value = literal "Null"
    4. For binary data, convert the data to a hex string, using Reader.GetBytes to retrieve a byte array, and BitConverter.ToString( bytearray ) to convert the byte array to a hexadecimal string. Finally, remove the "-" characters in the resulting string and add "0x" to the beginning, which is how binary literal values are tagged in Jet SQL.

    To minimize the individual reads of schema information, I prepare separate arrays of field name, data type (text, num, datetime, or binary), and field length (necessary for dimensioning the byte array). The common "INSERT INTO [Table] ( [Field1] ... [Fieldx] )" prefix is also created only once and prepended to each individual query statement.

    In short, I create a literal INSERT INTO statement in the most efficient manner possible while preserving general type integrity.
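    As a rough illustration of the four conversion rules above, the per-value conversion looks something like this (a sketch only; the function and variable names are made up, and the date format shown is just one form Jet accepts):

      Function ToJetLiteral(ByVal value As Object, ByVal kind As String) As String
          'Nulls become the literal keyword Null regardless of type
          If value Is Nothing OrElse Convert.IsDBNull(value) Then Return "Null"
          Select Case kind
              Case "text"
                  Return "'" & CStr(value).Replace("'", "''") & "'"      'double up apostrophes, then quote
              Case "datetime"
                  Return "#" & CDate(value).ToString("MM/dd/yyyy HH:mm:ss") & "#"
              Case "binary"
                  Return "0x" & BitConverter.ToString(CType(value, Byte())).Replace("-", "")
              Case Else                                                   'numeric
                  Return Convert.ToString(value, Globalization.CultureInfo.InvariantCulture)
          End Select
      End Function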

    Testing methodology:

    1. Test raw download speed: Call Reader.Read but do nothing.
    2. Test raw data access speed: Read each field in each record and assign value to a throwaway Object variable to force reading of actual element value.
    3. Test incremental time to prepare literal SQL statement: Only read data and prepare query text, without assigning to a command or executing a command.
    4. Test incremental time to set CommandText property of command object: Same as above but assign query text to the Command.CommandText property, rather than a throwaway string variable
    5. Test execution time: Actually execute the query (Command.ExecuteNonQuery())

    Result:

    1. Raw download: 8 - 15 seconds, 2-5% CPU
    2. Data access: 15 - 20 seconds, 10 - 18% CPU
    3. Prepare ~type-safe literal statement: 18 - 26 seconds, 20% CPU
    4. Assign statement to command object: 16 - 30 seconds, 20% CPU
    5. Execute: 50 - 65 seconds, 40% CPU

    jmh
    Tuesday, March 1, 2011 5:11 PM
  • Method 1 (DataAdapter) Revisited

    Ok, this is worth noting. If I put the entire result set in memory and then update to the Access table all at once, this is what happens:

    Trial 1

    Begin reading at 3/1/2011 12:18:07 PM
    End read at 3/1/2011 12:18:20 PM
    Begin update at 3/1/2011 12:18:20 PM
    End update at 3/1/2011 12:18:58 PM

    Trial 2

    Begin reading at 3/1/2011 12:23:20 PM
    End read at 3/1/2011 12:23:33 PM
    Begin update at 3/1/2011 12:23:33 PM
    End update at 3/1/2011 12:24:10 PM

    In other words, it took 13 seconds to read the entire result set, assign the values to the individual new row objects, and append the individual rows to the DataTable in memory. I.e. I can see that the ADO.NET DataReader / DataTable interfaces really are very fast.

    However, it then took 37 - 38 seconds to dump the table from memory into the MS Access table. The whole batch of physical inserts is handled behind the scenes by the ADO.NET components, using the parameterized insert command prepared earlier and the DataTable as the update source. That is, the only explicit command I call to push the in-memory DataTable to the MS Access database is Adapter.Update( DataTable ). Thus I believe I am seeing the maximum possible insert speed provided by using an ADO.NET OleDb connection to a Jet database.

    Use Transaction Statement

    Sometimes people recommend using transaction processing to increase insert performance. I did this by submitting the statement "BEGIN TRANSACTION" via OleDbCommand.ExecuteNonQuery immediately before calling Adapter.Update( DataTable ), then submitting "COMMIT" immediately afterwards. This did not have any impact on speed: it still took 35 - 36 seconds to perform the update. Note that I know the TRANSACTION statement worked, because if I submit "ROLLBACK" instead of "COMMIT", the insert really isn't committed; the resulting database has the table but no records in it.
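    (For completeness, the same experiment can also be wired up with ADO.NET's own transaction object instead of literal SQL statements; a sketch, with the adapter and connection names as placeholders:)

      Dim tx As OleDbTransaction = jetConnection.BeginTransaction()
      adapter.InsertCommand.Transaction = tx      'the adapter's insert command must use this transaction
      adapter.Update(table)
      tx.Commit()                                 'or tx.Rollback() to discard the inserts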


    jmh
    Tuesday, March 1, 2011 5:54 PM
  • I've been struggling with a similar problem for the last few days. I need to append data converted from a binary file into an Access database. I put all my hopes in begin/commit transactions. I tried DAO, ADODB, and OLE DB, but my results are very disappointing.

    My best time was achieved using ADODB (Microsoft.ACE.OLEDB.12.0) and adExecuteNoRecords option while executing. I wrote 6000 rows x 350 fields (mostly single values) divided in multiple database tables. Execution time was about 17 seconds.

    Tuesday, March 1, 2011 11:32 PM
  • My best time was achieved using ADODB (Microsoft.ACE.OLEDB.12.0) and adExecuteNoRecords option while executing. I wrote 6000 rows x 350 fields (mostly single values) divided in multiple database tables. Execution time was about 17 seconds.
    Glad to hear I'm not alone in my struggles. Can you give a little more info on the actual objects and methods you are referring to?
    jmh
    Wednesday, March 2, 2011 3:54 PM
  • Dennis, thanks for the very thorough reply! I'm reading through the references you provided as well. William, thanks for your two cents, too. And well said :).

    I hope that getting such clear answers from eminently qualified MS experts like the two of you can help me build my case to my management to go ahead with SSE for my team. It's just 5 of us doing tech-heavy complicated fraud analysis. In fact we already have full versions of Sql Server Management Studio for interacting with enterprise level databases.


    jmh
    Wednesday, March 9, 2011 12:30 PM
  • A follow up question:

    It sounds like your approach will issue many locks. -- Dennis

    How do I perform lots of updates without requiring so many locks? Is this an issue specific to Jet, or will I run into the same situation when inserting into a SQL Server database?


    jmh
    Wednesday, March 9, 2011 12:34 PM
  • Unlike JET, SQL Server supports a bulk-copy interface that permits mass updates. The trick there is to import into a "work" table and use a stored procedure (code that runs on the server) to validate and fold the data into the base tables. Any UPDATE operation will lock rows, pages and even entire tables for a period of time. However, consider that SQL Server is designed to support hundreds of users and has the performance and architecture to perform changes efficiently. Jet, not so much.
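    For example, the bulk-copy interface is exposed in ADO.NET as SqlBulkCopy, which can consume a DataReader directly; a sketch (the connection string and table names are placeholders):

      'Assumes Imports System.Data.SqlClient
      Using bulk As New SqlBulkCopy("Data Source=.\SQLEXPRESS;Initial Catalog=Staging;Integrated Security=True")
          bulk.DestinationTableName = "dbo.WorkTable"
          bulk.BatchSize = 10000
          bulk.WriteToServer(reader)     'reader is the DataReader from the source database
      End Using
      'Then a stored procedure validates and folds the work table into the base tables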

    __________________________________________________________________
    William Vaughn
    Mentor, Consultant, Trainer, MVP
    http://betav.com
    http://betav.com/blog/billva
    http://www.hitchhikerguides.net

    “Hitchhiker’s Guide to Visual Studio and SQL Server (7th Edition)”

    Please click the Mark as Answer button if a post solves your problem!

    Wednesday, March 9, 2011 4:38 PM
    Moderator
  • Relative connection times

    Now, I know the time it takes to establish a connection is not necessarily indicative of the time it takes to insert data, but I did find this interesting:

    I made a Windows Forms app with VB.NET. It opens and closes a connection to a data source as many times in a row as I tell it to, and measures the elapsed time with a Stopwatch. The three types of connections I tested are the ones listed in the results table below.

    I did some initial testing to ensure that the connections were truly opened and closed by each .Open and .Close method. (Or New( FilePath ) and .Close methods for the StreamReader.) I ran multiple tests using a cycle of 10,000 connections for the OdbcConnections and StreamReaders, or 500 - 1000 connections for the OleDb --> Jet connection. I also separate the first connection from the rest, since there is often an inordinate latency for the first connection to a remote database. (By inordinate I mean several hundred milliseconds vs. less than 1 millisecond).
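    The measurement loop itself is nothing fancy; roughly this, for the OleDb / Jet case (the connection string and cycle count are illustrative):

      Dim cn As New OleDbConnection("Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\Temp\Test.accdb")
      cn.Open() : cn.Close()                        'throw away the (much slower) first connection
      Dim cycles As Integer = 1000
      Dim sw As Stopwatch = Stopwatch.StartNew()
      For i As Integer = 1 To cycles
          cn.Open()
          cn.Close()
      Next
      sw.Stop()
      Console.WriteLine((sw.Elapsed.TotalMilliseconds / cycles).ToString("F2") & " ms per open/close cycle")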

    So, the results: These are the very consistent average times for opening and then closing a connection to the following data sources:

    Interface     Provider                        Avg ms per open/close
    Odbc          Teradata                        0.06
    Odbc          DB2 on z/OS                     0.08
    Odbc          SQL Server                      0.04
    StreamReader  text file                       0.03
    OleDb         Jet (Microsoft.ACE.OLEDB.12.0)  46.8

    So you can see, the connection cycle time for a Jet db (Access 2007 format) on my local hard drive is about 500 - 1,000 times slower than an ODBC connection over an enterprise network, or about 1,500 times slower than a text stream handle to a file on my local hard drive.

     


    jmh
    Thursday, March 10, 2011 5:02 PM
  • LOL. These are the same results I showed Microsoft when I was on the VB team almost 15 years ago. OLE DB's (and JET's) connection performance sucks. The problem is, they seem to think it's necessary to pre-populate the entire database schema when the connection is first established. This is a one-time cost but yes, it's really terrible--especially for databases with lots of tables and other objects.

    So, no, this does not make JET any faster or slower when loading data--unless you close and reopen the connection on each batch. Consider as well that JET might still be caching the rows locally and sending them on to the datafile in batches to improve performance. This can be disabled by using transactions (ironically).

    I would try the heart-paddles on that dead horse... ;)


    __________________________________________________________________
    William Vaughn
    Mentor, Consultant, Trainer, MVP
    http://betav.com
    http://betav.com/blog/billva
    http://www.hitchhikerguides.net

    “Hitchhiker’s Guide to Visual Studio and SQL Server (7th Edition)”

    Please click the Mark as Answer button if a post solves your problem!

    Thursday, March 10, 2011 10:37 PM
    Moderator
  • "I'm not dead yet!"

    Difficult but effective: XML import / export using the ImportExportSpecification Object

    After much research, coding, and testing, I figured out a method that works well but requires advanced programming. In short, use the ImportExportSpecification object, which was introduced in MS Access 2007, to generate an XML Schema Definition (XSD) file for your destination table, then import programmatically created XML data files into the table.

    The "basic" steps:

    1. Use the DoCmd.TransferDatabase method, using the StructureOnly flag, to copy just the structure of the destination table to a temporary table
    2. Programmatically create an ImportExportSpecification that will export the temporary table while generating a separate XSD file, which defines the structure separate from the content. ImportExportSpecifications are a new thing in Office 2007+; they are NOT the same thing as the old specifications which can still be created using the "Advanced" dialog during the import / export process.

      Note: The difficult part here is generating the XML definition of the IMEX specification itself. The syntax of the XML is not officially documented anywhere, but can be inferred by:
      1. Manually create an export spec by using the GUI, saving it at the last step.
      2. Dump the XML definition by using something like:

        Debug.Print CurrentProject.ImportExportSpecifications([specname]).XML
      3. Use this as a template in your own code, and use XML.XMLDocument, XML.XMLElement, etc. to insert your own values for the [docroot]:Path, ExportXML:AccessObject, and ExportXML:SchemaTarget attributes.

    3. Call the Execute method of the ImportExportSpecification object to generate the schema file and empty data file.
    4. Delete the temporary table in the database (DoCmd.DeleteObject).
    5. Programmatically create an ImportExportSpecification that will import the XML data file and append it to the destination table in the database. Again, see the Note under item 2.
    6. Delete the empty data file from disk (or save it to a different name for reference)
    7. For each chunk of data you want to insert:
      1. Create an XMLDocument with the requisite dataroot root element. (Once again, see the Note under item 2 for tips.)
      2. For each row of data:
        • Create an element with the same name as the destination table
        • Within this element, insert elements named after the destination columns, with the values as the inner text of each element
      3. Save the XMLDocument to the XML file name specified in your import spec
      4. Call the Execute method of the import spec

      NOTES:
      • If you're working with date data, make sure you format it correctly.
      • If you're working with binary data, you'll have to go a step further and generate a Base64Binary string representation to insert in the respective field element. (See XML Schema:Base64Binary, SoapBase64Binary Class)
      • COM Interop can be a bit clunky. To make sure each [spec].Execute() call works, reset the reference to the spec object before each call. For example:

        Imports Acc = Microsoft.Office.Interop.Access
        
        '[Pseudo code] -- the bracketed pieces are placeholders
        Class XMLIOClass
        
         Private pAccApp As Acc.Application
         Private pAccDb As DAO.Database
         Private cp As Acc._CurrentProject = Nothing
         Private imexspecs As Acc.ImportExportSpecifications = Nothing
         Private imex As Acc.ImportExportSpecification = Nothing
        
         Sub InitialStuff()
          pAccApp = [get your app]                   'e.g. a new Acc.Application with the database opened
          pAccDb = pAccApp.CurrentDb
          cp = pAccApp.CurrentProject                'CurrentProject hangs off the Application object
          imexspecs = cp.ImportExportSpecifications
         End Sub
        
         Sub RunTheSpec()
          'Reset the reference to the spec object before each Execute call
          imex = Nothing
          imex = imexspecs("Spec Name")
          imex.Execute()
         End Sub
        
         Sub EndStuff()
          '[A bunch of Marshal.FinalReleaseComObject(object) calls]
         End Sub
        
        End Class

         
    8. Delete the import spec
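    As a rough sketch of step 7, building and importing one chunk of data might look like this (names are placeholders; date and binary values would additionally need the formatting described in the notes above):

      Dim doc As New Xml.XmlDocument()
      doc.AppendChild(doc.CreateXmlDeclaration("1.0", "UTF-8", Nothing))
      Dim root As Xml.XmlElement = doc.CreateElement("dataroot")        'the requisite root element
      doc.AppendChild(root)
      While Reader.Read()
          Dim rowEl As Xml.XmlElement = doc.CreateElement("MyTable")    'same name as the destination table
          For i As Integer = 0 To Reader.FieldCount - 1
              Dim fieldEl As Xml.XmlElement = doc.CreateElement(Reader.GetName(i))   'destination column name
              fieldEl.InnerText = Reader.GetValue(i).ToString()
              rowEl.AppendChild(fieldEl)
          Next
          root.AppendChild(rowEl)
      End While
      doc.Save("C:\Temp\MyTableData.xml")            'the data file path named in the import spec
      imex = imexspecs("Import Spec Name")           'reset and re-fetch the spec, then run it
      imex.Execute()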

    Other resources:


    jmh


    Tuesday, March 29, 2011 4:46 PM