Bulk insert nullable ints

  • Question

  • Hi,

     

I'm trying to import CSV data into SQL Server 2000 using the System.Data.SqlClient.SqlBulkCopy class. When a table has a nullable column of type int, I run into a problem; it's as if SqlBulkCopy forgets to check whether the column is nullable and simply expects an int. I've tried both a zero-size field (i.e. nothing between the commas) in the CSV and the string "NULL". In both cases the WriteToServer method crashes because it can't parse the input string as an int.

     

    The relevant exception stack trace looks like this:

     

    System.InvalidOperationException: The given value of type String from the data source cannot be converted to type int of the specified target column. ---> System.FormatException: Failed to convert parameter value from a String to a Int32. ---> System.FormatException: Input string was not in a correct format.
       at System.Number.StringToNumber(String str, NumberStyles options, NumberBuffer& number, NumberFormatInfo info, Boolean parseDecimal)
       at System.Number.ParseInt32(String s, NumberStyles style, NumberFormatInfo info)
       at System.String.System.IConvertible.ToInt32(IFormatProvider provider)
       at System.Convert.ChangeType(Object value, Type conversionType, IFormatProvider provider)
       at System.Data.SqlClient.SqlParameter.CoerceValue(Object value, MetaType destinationType)
       --- End of inner exception stack trace ---
       at System.Data.SqlClient.SqlParameter.CoerceValue(Object value, MetaType destinationType)
       at System.Data.SqlClient.SqlBulkCopy.ConvertValue(Object value, _SqlMetaData metadata)
       --- End of inner exception stack trace ---
       at System.Data.SqlClient.SqlBulkCopy.ConvertValue(Object value, _SqlMetaData metadata)
       at System.Data.SqlClient.SqlBulkCopy.WriteToServerInternal()
       at System.Data.SqlClient.SqlBulkCopy.WriteRowSourceToServer(Int32 columnCount)
       at System.Data.SqlClient.SqlBulkCopy.WriteToServer(IDataReader reader)

     

Now, I am not really sure why one would want nullable ints; it seems to me that for most "natural attributes" of integral types, zero would logically mean the same as null. (For example, when counting, zero and null seem to me to be the same thing.) However, I do not necessarily control the database schema, and I really would like to be able to export simply by writing the CSV data resulting from SELECT * FROM [TABLE], and to import using a CsvReader implementing IDataReader together with SqlBulkCopy.

     

I'm fairly sure that querying the database schema and handling the rows containing nulls separately from the rest (by generating INSERT statements, for example) would drastically worsen performance; even when no rows contain nulls, the extra parsing would add overhead.

     

    Is this a bug in SqlBulkCopy? How can I solve or work around the problem?

    Tuesday, April 10, 2007 10:28 AM

Answers

  • Hi,

     

I found the answer, so I figured I should share it with the world (or the five people who read this, anyway!)... The object I'm using to read the CSV data is called CsvReader, and I found it on CodeProject: http://www.codeproject.com/cs/database/CsvReader.asp

     

This is a fast, forward-only, non-cached implementation, and it works brilliantly with streams, giving a small memory footprint that doesn't depend on the size of the data file itself. "No caching" may sound bad, but it's great in a streaming architecture, since other streams such as FileStream are already buffered; buffering again in the same stream pipeline would add nothing but bloat.

     

I don't know enough about how the IDataReader and IDataRecord interfaces "should" be used to tell whether anyone did anything wrong, but by setting breakpoints I was able to determine that SqlBulkCopy never actually calls IDataRecord.IsDBNull(int). I quickly modified CsvReader.GetValue(int) to return DBNull.Value whenever the string value of the field is empty and the index is 9 (of course, only useful for my particular test file), and the bulk insert now works brilliantly.

     

In case I'm giving other people ideas here, let me rush to mention that there is still a potential problem with using CSV as a data format: there is no way to distinguish between an empty nvarchar field and a null value, since both are represented as a zero-length field in CSV. Luckily, I don't need to make that distinction for text fields, so I will be all right.

     

So my solution is to add a BitArray to the CsvReader in which I can set flags corresponding to the indexes of nullable columns, and use this to return DBNull.Value from CsvReader.GetValue(int) when the field is empty and the column is nullable. I use the information schema to discover the nullable columns dynamically, immediately prior to the bulk insert. The code goes a bit like this:

     

int[] getNullableColumns(string table)
{
    const string q
        = "SELECT [ordinal_position]-1 FROM [information_schema].[columns] "
        + "WHERE [table_name] = '{0}' AND [is_nullable] = 'YES' "
        + "ORDER BY [ordinal_position]";
    return (int[])DB.GetColumn(typeof(int), q, table);
}

     

The GetColumn method is just a helper that creates the SqlCommand, formats the SQL string (NB! Don't do this with user input - it opens you up to SQL injection attacks; you can do essentially the same thing almost as easily with SqlParameter objects instead), and creates an array from the values of the first column of the first table of the result set. The point here is the query. Note that the ordinal position is a one-based index, whereas the index passed to IDataReader.GetValue(int) is zero-based (hence I subtract one from the information schema index).
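For reference, a parameterized version might look something like this (just a sketch; it bypasses my GetColumn helper and assumes an open SqlConnection named cn):

int[] getNullableColumns(SqlConnection cn, string table)
{
    const string q =
        "SELECT [ordinal_position]-1 FROM [information_schema].[columns] " +
        "WHERE [table_name] = @table AND [is_nullable] = 'YES' " +
        "ORDER BY [ordinal_position]";
    using (SqlCommand cmd = new SqlCommand(q, cn))
    {
        cmd.Parameters.AddWithValue("@table", table);  // no string formatting, no injection
        List<int> cols = new List<int>();
        using (SqlDataReader r = cmd.ExecuteReader())
            while (r.Read())
                cols.Add(r.GetInt32(0));
        return cols.ToArray();
    }
}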

     

    The code using this then becomes

     

void bulkInsert(Stream csvDataStream, string destinationTable)
{
    if (csvDataStream == null) throw new ArgumentNullException("csvDataStream");
    StreamReader sr = new StreamReader(csvDataStream);
    CsvReader reader = new CsvReader(sr, false);
    reader.SetNullable(getNullableColumns(destinationTable));
    bulkInsert(reader, destinationTable);
}
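
For completeness, the CsvReader side of the change might look roughly like this (a sketch; SetNullable and the _nullable field are my additions to the CodeProject class, and FieldCount comes from its IDataReader implementation):

private BitArray _nullable;  // one flag per column: true if the target column is nullable

public void SetNullable(int[] nullableColumnIndexes)
{
    _nullable = new BitArray(FieldCount);
    foreach (int i in nullableColumnIndexes)
        _nullable[i] = true;
}

object IDataRecord.GetValue(int i)
{
    string s = this[i];
    // An empty field destined for a nullable column becomes a database null.
    if (_nullable != null && s.Length == 0 && _nullable[i])
        return DBNull.Value;
    return s;
}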

     

     

    This works like a charm.

    Tuesday, April 10, 2007 12:45 PM

All replies

  • I just realised that nullable ints have a very natural place in foreign key fields. All the more reason then to figure out how to handle them when using SqlBulkCopy!

     

I really hope someone can help, as this is blocking and occurring in the middle of implementation rather than during the design phase.

    Tuesday, April 10, 2007 10:38 AM
  • Sorry for providing information in pieces here, but I am discovering this as I go along..!

     

It seems that this is not specific to the int type; any nullable column of a non-string type, such as smalldatetime, appears to produce the same problem. If the CSV field is an empty string, conversion to the expected type fails and WriteToServer crashes as a result.

     

So the question becomes: how can I bulk insert CSV data when the target table has nullable columns?

    Tuesday, April 10, 2007 10:51 AM
  • Although the original question has been answered, there is an error in my SQL above: the ordinal_position column is NOT necessarily the same as the index of the column.

     

I now use the following SQL instead, which I *think* does the trick - I am not certain, because it relies on the assumption that if the columns are ordered by their ordinal_position, the resulting order is the same as the logical order (by index) of the columns. This appears to be the case in my database.

     

    SELECT IDENTITY(int,0,1) [index], [ordinal_position], [is_nullable], [data_type]
      INTO #tempcols
      FROM [information_schema].[columns]
      WHERE [table_name] = @table
      ORDER BY [ordinal_position]

     

    SELECT [index] FROM #tempcols WHERE [is_nullable] = 'YES'

     

     

If anyone knows a reliable way to query the database for the column index, preferably relying on the stored procedures and schemas provided for obtaining metadata (i.e. not querying system tables directly, as these are version dependent and no backward compatibility is ensured in future SQL Server versions), then please post the query here!
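
One possibility might be to ask the provider itself rather than information_schema - a sketch (untested here) using SqlDataReader.GetSchemaTable, whose schema rows expose zero-based ColumnOrdinal and AllowDBNull values; the table name is concatenated only for brevity, with the usual injection caveat:

int[] getNullableColumns(SqlConnection cn, string table)
{
    // SchemaOnly fetches no rows; KeyInfo enriches the returned schema table.
    using (SqlCommand cmd = new SqlCommand("SELECT * FROM [" + table + "]", cn))
    using (SqlDataReader r = cmd.ExecuteReader(CommandBehavior.SchemaOnly | CommandBehavior.KeyInfo))
    {
        DataTable schema = r.GetSchemaTable();
        List<int> cols = new List<int>();
        foreach (DataRow row in schema.Rows)
        {
            if ((bool)row["AllowDBNull"])
                cols.Add((int)row["ColumnOrdinal"]);  // zero-based, matches GetValue(int)
        }
        return cols.ToArray();
    }
}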

     

    Tuesday, April 10, 2007 1:56 PM
  • Hi dagjo,

Very well described problem/solution; I only wish I understood C# better. I have been trying without success to use the fast CSV reader to import CSV into SQL 2000 via ASPX/VB. I am on a short-timeline project in which I have to read large CSV files into SQL 2000. Sébastien suggested I use System.Data.SqlClient.SqlBulkCopy, but I do not know how to get started. Any help or examples you may know of would be greatly appreciated.

    Thnx

    Wednesday, April 25, 2007 8:39 PM
  • What is the problem you're having, exactly? My problem was due to null values in the CSV data, which were represented as zero-size fields. If the column where the null should be inserted was of type int, CsvReader would attempt to parse the empty string as an int, which resulted in an exception. Since it was SqlBulkCopy and not my own code that was the caller, there was no way to catch the exception (and that would have had a huge performance cost anyway!), so I had to modify the CsvReader slightly so that it returns DBNull.Value when asked for the value of a nullable column with an empty data value.

     

If you don't have any nulls in the CSV data to insert, you should be all set, and you'll get great results in no time at all! My workstation is a fairly low-end, two-year-old box, and I could process around 3000 rows per second on most tables (obviously the record size matters, as do indexes).

     

If you're lost, post the CREATE TABLE statement and a few lines of CSV data (with nulls, if there are any!) and I'll write a class that imports the file. I'll write it in C# but post a VB translation as well (I'll just use Reflector to disassemble to VB.NET)...

    Thursday, April 26, 2007 7:20 PM
  • Hi dagjo,

Thanks for your reply and help. I have pared down the lines of CSV, and I hope I have the CREATE TABLE correct; this is my first try.

I have included the data that I think is the problem area. There are nulls in any one of the fields, and I have included one for each type.

    Thanks

    -Cebo

     

    CREATE TABLE [dbo].[test] (
    [GCN] [varchar] (7) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [NDC_Number] [varchar] (12) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [NDDF_NDC_Number] [varchar] (11) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [Item_Nbr] [varchar] (11) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [Fine_Line_Desc] [varchar] (30) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [Invoice_Picking_Nbr] [Long] NULL,
    [Extended_Cost] [Currency] NULL,
    [Purchase_Date] [Date] NULL
     ) ON [PRIMARY]

     

    GCN,NDC Number,NDDF NDC Number,Item Nbr,Fine Line Desc,Invoice Picking Nbr,Extended Cost,Purchase Date
    _034062,_30904571135,_,_573618,"COLD,ALLERGY,SINUS DROPS &SPRY",846210,,3/23/2007
    _034062,_30904571135,_,_573618,"COLD,ALLERGY,SINUS DROPS &SPRY",93740,$3.96,3/23/2007
    _034062,_30904571135,_,_573618,"COLD,ALLERGY,SINUS DROPS &SPRY",,$7.92,3/23/2007
    _034062,_30904571135,_,,"COLD,ALLERGY,SINUS DROPS &SPRY",95382,$2.64,3/23/2007
    _034063,_30904551135,_,_573618,"",,$3.96,3/23/2007
    _034064,_30904551135,_,_573618,"COLD,ALLERGY,SINUS DROPS &SPRY",93740,$3.96,3/23/2007
    _034063,_30904551135,_,_573618,"COLD,ALLERGY,SINUS DROPS &SPRY",95521,$7.92,
    _034061,_30904561135,_,_573618,"COLD,ALLERGY,SINUS DROPS &SPRY",95382,$2.64,3/23/2007
    Friday, April 27, 2007 9:11 PM
  • Hi Dagjo,

Sorry, I made mistakes on the table; thanks for the SQL tip. It seems C# is what's happening now; I'm trying to save up for a class, but life always happens.

I want to thank you again for your help!

    -Cebo

    CREATE TABLE [dbo].[test] (
    [GCN] [varchar] (7) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [NDC_Number] [varchar] (12) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [NDDF_NDC_Number] [varchar] (11) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [Item_Nbr] [varchar] (11) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [Fine_Line_Desc] [varchar] (30) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [Invoice_Picking_Nbr] [bigint] NULL,
    [Extended_Cost] [money] NULL,
    [Purchase_Date] [datetime] NULL
     ) ON [PRIMARY]

    Go

    Friday, April 27, 2007 9:13 PM
  • I'm busy right now, but I'll try to see if I can get it done a little later. The trick is to get CsvReader.GetValue(int) to return DBNull.Value when appropriate, rather than an empty string. For the string types (varchar, nvarchar and so on) the question arises of how to distinguish between an empty string and a null; obviously, unless the export to CSV makes this distinction, you can't either - you'll have to either choose some way to interpret it or require the user's manual involvement...

     

    Hopefully I'll be able to take a look sometime during the weekend. Bon week-end!

     

    Dag

    Saturday, April 28, 2007 10:08 AM
  •  

And by the way, a hint for the future: you can script objects easily in SQL Query Analyzer's Object Browser by right-clicking the object and choosing from the context menu. In a case like this, right-click the table, choose something like "Script object to clipboard as create", and then you can paste it anywhere.
    Saturday, April 28, 2007 10:11 AM
  • Also, I forgot: some of the CSV varchar fields have underscores.


    Monday, April 30, 2007 6:45 PM
  • Hi,

    I completely forgot about this until now - sorry!

But the good news is that it was easy to get this working. In the hope that it helps you approach problems like this in the future, I'll explain what I did rather than just give the solution.

    First I ran the CREATE TABLE statement you posted and saved the lines of CSV data to a file I called TestData.csv. Next, I wrote a small console application that uses the SqlBulkCopy to bulk insert the data.

This blew up, seemingly because the CsvReader would return an empty string that SqlBulkCopy couldn't convert to the expected field type, such as money. The call to the CsvReader to get the data was clearly successful, because there was no CsvReader code in the exception stack trace.

To find out which method in CsvReader needed modification, I set a breakpoint in EnsureInitialize() - this method is clearly intended to initialize the reader, so we can be pretty sure it will be called before any reading call completes. This lets you use the "step over" debug command (F10): all the framework code (release-build code) executes, but as soon as code for which there are debug symbols and source is reached, the debugger breaks again.

This quickly revealed that the reader initialized properly and read the header line properly, and that SqlBulkCopy then calls IDataRecord.GetValue(int) to get each field value. On field seven (index 6) of the first row, the reader returned an empty string, and SqlBulkCopy threw an exception because it couldn't convert it to money.

The IDataRecord interface declares another method, IsDBNull(int), for checking whether a given field is DBNull, so I simply called it in the watch window and, indeed, it returned true.

    This leads to the following simple modification of CsvReader.cs (fortunately it is released under the MIT license, which means we can modify it):

object IDataRecord.GetValue(int i)
{
    ValidateDataReader(DataReaderValidations.IsInitialized | DataReaderValidations.IsNotClosed);

    // Report empty fields as database nulls instead of empty strings.
    if (((IDataRecord)this).IsDBNull(i))
        return DBNull.Value;
    else
        return this[i];
}


    That's all there is to it! With this simple modification, the bulk insert is successful.


    The console application code (Program.cs) is as follows:

    using System;
    using System.Collections.Generic;
    using System.Data;
    using System.Data.SqlClient;
    using System.IO;
    using System.Text;

    using LumenWorks.Framework.IO.Csv;


    namespace BulkInsert
    {
        class Program
        {
            static void Main(string[] args)
            {
                string path = "TestData.csv";
                try
                {
                    using (StreamReader sr = new StreamReader(File.OpenRead(path)))
                    {
                        CsvReader r = new CsvReader(sr, true);
                        SqlBulkCopy bcp = new SqlBulkCopy(cnxStr, SqlBulkCopyOptions.Default);
                        bcp.DestinationTableName = tableName;
                        bcp.WriteToServer(r);
                    }
                }
                catch (Exception ex)
                {
                    Console.WriteLine(ex.ToString());
                    Console.Read();
                }
            }


            static string cnxStr
            {
                get { return @"Data Source=machineName\sqlexpress; Initial Catalog=CsvTest; Integrated Security=True"; }
            }


            static string tableName
            {
                get { return "test"; }
            }
        }
    }



    I used Reflector to disassemble the DLL to VB.NET, and that reads as follows:


    Friend Class Program
    ' Methods
    Private Shared Sub Main(ByVal args As String())
    Dim path As String = "TestData.csv"
    Try
    Using sr As StreamReader = New StreamReader(File.OpenRead(path))
    Dim r As New CsvReader(sr, True)
    Dim bcp As New SqlBulkCopy(Program.cnxStr, SqlBulkCopyOptions.Default)
    bcp.DestinationTableName = Program.tableName
    bcp.WriteToServer(r)
    End Using
    Catch ex As Exception
    Console.WriteLine(ex.ToString)

    Console.Read
    End Try
    End Sub


    ' Properties
    Private Shared ReadOnly Property cnxStr As String
    Get
    Return "Data Source=machineName\sqlexpress; Initial Catalog=CsvTest; Integrated Security=True"
    End Get
    End Property

    Private Shared ReadOnly Property tableName As String
    Get
    Return "test"
    End Get
    End Property

    End Class

Obviously you will have to modify the database connection string, and you may want to set the batch size, use a transaction for the entire bulk insert, and so on. But this is sufficient to overcome the problem with null values.
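
For example, the batching and transaction part might look something like this (just a sketch; cnxStr, tableName and the CsvReader r are taken from the program above):

using (SqlConnection cn = new SqlConnection(cnxStr))
{
    cn.Open();
    using (SqlTransaction tx = cn.BeginTransaction())
    {
        using (SqlBulkCopy bcp = new SqlBulkCopy(cn, SqlBulkCopyOptions.Default, tx))
        {
            bcp.DestinationTableName = tableName;
            bcp.BatchSize = 1000;       // rows sent per round trip to the server
            bcp.BulkCopyTimeout = 0;    // disable the 30-second default timeout for large files
            bcp.WriteToServer(r);
        }
        tx.Commit();                    // everything or nothing
    }
}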

    Happy coding,

    Dag
    Tuesday, May 1, 2007 10:49 AM
  • Dag you are the Wizard!

I have to work on something else now, but some day I hope I can return the favor. You are too advanced for me to help, but I will continue helping others when I can.

I cannot wait to try your solution.

I cannot thank you enough for your work on this!

    -Cebo

     

     

     

    Wednesday, May 2, 2007 6:31 PM
  • Hi Dag,

I got some time and tried your solution. I changed CsvReader.cs as you showed me, but I'm getting an error in VB:

Can you see what I'm doing wrong?

    Thanks for your help!

    -Cebo

     

System.InvalidOperationException: The given value of type CsvReader from the data source cannot be converted to type varchar of the specified target column. ---> System.InvalidCastException: Failed to convert parameter value from a CsvReader to a String. ---> System.InvalidCastException: Object must implement IConvertible.
   at System.Convert.ChangeType(Object value, Type conversionType, IFormatProvider provider)
   at System.Data.SqlClient.SqlParameter.CoerceValue(Object value, MetaType destinationType)
   --- End of inner exception stack trace ---
   at System.Data.SqlClient.SqlParameter.CoerceValue(Object value, MetaType destinationType)
   at System.Data.SqlClient.SqlBulkCopy.ConvertValue(Object value, _SqlMetaData metadata)
   --- End of inner exception stack trace ---
   at System.Data.SqlClient.SqlBulkCopy.ConvertValue(Object value, _SqlMetaData metadata)
   at System.Data.SqlClient.SqlBulkCopy.WriteToServerInternal()
   at System.Data.SqlClient.SqlBulkCopy.WriteRowSourceToServer(Int32 columnCount)
   at System.Data.SqlClient.SqlBulkCopy.WriteToServer(IDataReader reader)
   at CSVReaderConsoleApp.BulkInsert.Program.Main(String[] args) in C:\Documents and Settings\cebo\My Documents\Visual Studio 2005\Projects\CSVReaderConsoleApp\CSVReaderConsoleApp\program.vb:line 21


    Imports System
    Imports System.Collections.Generic
    Imports System.Data
    Imports System.Data.SqlClient
    Imports System.IO
    Imports System.Text
    Imports LumenWorks.Framework.IO.Csv
    Namespace BulkInsert
        Public Class Program
            Shared Sub Main(ByVal args As String())
                Dim path As String = "test2.csv"
                Try
                    Using sr As StreamReader = New StreamReader(File.OpenRead(path))
                        Dim r As New CsvReader(sr, True)
                        Dim bcp As New SqlBulkCopy(Program.cnxStr, SqlBulkCopyOptions.Default)
                        bcp.DestinationTableName = Program.tableName
                        bcp.WriteToServer(r)
                    End Using
                Catch ex As Exception
                    Console.WriteLine(ex.ToString)
                    Console.Read()
                End Try
            End Sub

            Shared ReadOnly Property cnxStr() As String
                Get
                    Return "Data Source=localhost; Initial Catalog=test; uid=sa; pwd=; Connect Timeout=30"
                End Get
            End Property
            Shared ReadOnly Property tableName() As String
                Get
                    Return "test2"
                End Get
            End Property
        End Class
    End Namespace

     

    Wednesday, May 2, 2007 8:47 PM
  • What exactly does your CsvReader modification look like? From this message it seems you are returning a reference to the CsvReader instance itself...!! I notice that the code I pasted from VS2005 into this forum got messed up; the method should not return "this" but the indexer this[i] - that is, "this", a left square bracket "[", the index "i", and a right square bracket "]". When I write it straight, the forum here outsmarts me and inserts a smiley thingy..!

     

    If you can't figure this out, post the test2.CSV data and the exact change you made to the C# code (CsvReader) and I'll have a look. The data you've posted so far certainly presented no problems.

     

    If you post your email (which would expose you to spam, I guess) I could send you an attachment with the VS solution I used. Maybe if you make a throwaway hotmail account for this purpose...?

     

    But now it's time for me to hit the sheets! I'm in Paris, France, and it's time to sleep....

    Wednesday, May 2, 2007 9:28 PM
  • Well, actually it isn't really a "smiley" but a light bulb, presumably symbolizing "idea" - in any case, the code should be "this[i]", which we'd normally write without the spaces, although they don't matter from the compiler's point of view!

     

    Let me know if you got it working..!

    Wednesday, May 2, 2007 9:31 PM
  • Hi Dag,

You were absolutely correct with your excellent programming example; it works flawlessly for the data that I posted.

The full data has more columns. I thought that I could extrapolate from your example, but now I'm getting this error.

    See below.

    Thanks for everything!

    -Cebo

     

For reference purposes: it was determined that the console program that Dagjo wrote works perfectly, as described in previous posts. The problem was that the data was not of the correct length.

     

     

     

    Thursday, May 3, 2007 1:02 AM
  • Hi,

     

Strictly speaking, this is not the same problem, but anyway... you have to read the exception message!

     

    "Received an invalid column length from the BCP client for colid 1" is what I get. (It's weird that we don't get exactly the same, but even so...) So clearly, the data in the CSV file doesn't fit in the table.

     

    A quick look at the table definition:

     

      Therapeutic Class (varchar(6), null)

     

    and the CSV data:

     

    _00:00.00,AHFS CATEGORY UNKNOWN,_,_,_,_886825,_0000000, (...)

     

     

As you can see, "_00:00.00" is nine characters long, which does not fit in a varchar(6).
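
If you want to catch this kind of mismatch before bulk copying, you could compare field lengths against the declared column sizes; a rough sketch (assuming an open SqlConnection named cn, and using the same information_schema view as before):

// Map zero-based column index -> declared max length, for character columns only
// (character_maximum_length is NULL for non-character types).
static Dictionary<int, int> getMaxLengths(SqlConnection cn, string table)
{
    const string q =
        "SELECT [ordinal_position]-1, [character_maximum_length] " +
        "FROM [information_schema].[columns] " +
        "WHERE [table_name] = @table AND [character_maximum_length] IS NOT NULL " +
        "ORDER BY [ordinal_position]";
    using (SqlCommand cmd = new SqlCommand(q, cn))
    {
        cmd.Parameters.AddWithValue("@table", table);
        Dictionary<int, int> lengths = new Dictionary<int, int>();
        using (SqlDataReader r = cmd.ExecuteReader())
            while (r.Read())
                lengths.Add(r.GetInt32(0), r.GetInt32(1));
        return lengths;
    }
}

While reading each CSV record, you could then check each field's Length against the mapped size and report the offending row and column, instead of letting WriteToServer fail mid-copy.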
    Thursday, May 3, 2007 7:00 AM
  • Hi Dag,

You are absolutely correct. I was lying in bed thinking the same thing; I am finding size issues throughout the database table. The layout document I was given is completely off. I have to go through and resize most of the columns.

    Thanks for your help!

    -Cebo

     

    Thursday, May 3, 2007 3:43 PM
  •  

    thanks dagjo!

     

I ran into the exact same issue earlier today (only with nullable datetimes rather than ints) and was racking my brain trying to figure out how to work around it. I made the quick modification to CsvReader that you suggested, and everything's peachy now. :)

    Wednesday, March 26, 2008 5:39 PM