Will changing the TYPE of SQL table columns cause data loss?

  • Question

  • User2142845853 posted

    On reviewing the SQL table to 'tune' it, I found some wrong types in use.

    Column Name: usrComments

    Data Type: nvarchar(MAX); this will get changed to nvarchar(1024)

    If there are 100 columns and 20 of them are set to nvarchar(MAX), and each of those is re-designated as nvarchar(512), will there be data loss? Assuming none of them holds anywhere near 512 characters of data, can the columns be re-typed without data loss? And is it OK to do this from SQL Server Management Studio?

    Thursday, August 11, 2016 3:18 PM

Answers

  • User-359936451 posted

    Yes to all of your questions! But is data loss possible? Yes.

    You can easily change types without data loss; it's just that there is always a possibility.

    So what to do: make a copy of the table, or write a stored procedure to move the data from the old table to a new table, then rename the old table to oldTbl_Backup and rename the new table to the original name (a sketch of the swap follows the query below).

    In SQL Server Management Studio you could run a query like this:

    SELECT * INTO tblNewTable  -- copies the data; indexes and constraints are not copied
    FROM origTable;

    This will make a duplicate of the original table; there you can change column types, etc., and see what the results will be.

    If you are using SQL Server, you can make all of these changes in SSMS.
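
    Continuing from that copy, a rough sketch of the type change and name swap; the column name and new size here are placeholders, not your real schema:

    -- Tighten the column type on the copy
    ALTER TABLE tblNewTable ALTER COLUMN usrComments NVARCHAR(512);

    -- Swap the names so the re-typed copy takes over
    EXEC sp_rename 'origTable', 'oldTbl_Backup';
    EXEC sp_rename 'tblNewTable', 'origTable';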

    • Marked as answer by Anonymous Thursday, October 7, 2021 12:00 AM
    Thursday, August 11, 2016 4:35 PM
  • User475983607 posted

    rogersbr

    Assuming none of them holds anywhere near 512 characters of data, can the columns be re-typed without data loss?

    You should be fine as long as you don't have more than 512 Unicode characters stored in any record.
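
    A quick way to check before altering anything; dbo.YourTable is a placeholder name, and usrComments is the column from your post:

    -- Longest value currently stored, in characters (LEN ignores trailing spaces)
    SELECT MAX(LEN(usrComments)) AS MaxChars
    FROM dbo.YourTable;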

    rogersbr

    And is it OK to do this from SQL Server Management Studio?

    That's where I would run the DDL.

    You'll see an error message if you try to shrink a column that holds more than 512 Unicode characters:

    Msg 8152, Level 16, State 10, Line 17
    String or binary data would be truncated.

    I generally build a test harness to make sure I don't make a mistake and that the process goes as I expect.

    The T-SQL script below errors because one of the values is longer than 3 characters. However, the CATCH block gracefully rolls back the transaction. Add SET ANSI_WARNINGS OFF; to force the truncation instead.

    -- Drop the scratch table if a previous run left it behind
    IF OBJECT_ID('tempdb..#Temp') IS NOT NULL
    	DROP TABLE #Temp;
    
    CREATE TABLE #Temp (
    	Content NVARCHAR(MAX)
    );
    
    -- Toggle these to compare behavior: OFF truncates silently, ON raises error 8152
    --SET ANSI_WARNINGS OFF;
    --SET ANSI_WARNINGS ON;
    
    INSERT INTO #Temp (Content)
    VALUES	('Hello World'),	-- 11 characters: will not fit in NVARCHAR(3)
    		('Foo'),
    		('Bar');
    
    SELECT Content FROM #Temp;
    
    -- Attempt the shrink inside a transaction so a failure leaves the data intact
    BEGIN TRANSACTION 
    BEGIN TRY
    	ALTER TABLE #Temp ALTER COLUMN Content NVARCHAR(3);
    	COMMIT TRANSACTION
    END TRY
    BEGIN CATCH
    	IF @@TRANCOUNT > 0
    		BEGIN
    			ROLLBACK TRANSACTION
    			PRINT 'Error: ROLLBACK TRANSACTION'
    		END
    END CATCH
    
    -- Verify the data is unchanged after the rollback
    SELECT Content FROM #Temp;
    
    

    • Marked as answer by Anonymous Thursday, October 7, 2021 12:00 AM
    Thursday, August 11, 2016 4:47 PM

All replies

  • User2142845853 posted

    OK, now I've run into a NEW problem; do I have to start a new question?

    What happened was that during this optimization I chose to drop columns that were no longer used, then had to add a few new ones and set the lengths to fit the columns. But the problem is that the backup copy of the table and the working copy no longer match.

    I have to plead SQL ignorance; I think there is a way to merge this:

    active server - old table with extra/unused columns

    working test version - new table with optimized columns, far fewer (deleted unused ones)

    Goal - the old, obsolete table on the active server has new user data; how to "merge" this into the working test version such that, if the active server has newer data, we update with that data?

    Or do some difference and show where the tables are different?

    There will not be much difference; if there is some way to output the old and new tables in some text form, I could compare manually.

    Thursday, August 11, 2016 7:50 PM
  • User475983607 posted

    rogersbr

    Goal - the old, obsolete table on the active server has new user data; how to "merge" this into the working test version such that, if the active server has newer data, we update with that data?

    SQL is a query language; just SELECT the new records from the old table and INSERT them into the new table.

    BEGIN TRANSACTION 
    
    --Insert
    INSERT INTO NewTable(cols, I, want)
    SELECT cols, I, want
    FROM OldTable 
    WHERE newstuff = 'the new stuff';
    
    --verify
    SELECT * 
    FROM dbo.NewTable 
    WHERE newstuff = 'the new stuff';
     
    ROLLBACK 
    --COMMIT

    The ROLLBACK will revert the INSERT but still let you view its results, so you can test the query before committing it. Comment out the ROLLBACK and uncomment COMMIT when you're satisfied with the script.

    There is also a MERGE command that you can read about.
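
    If you go that route, here is a minimal MERGE sketch; every name in it is a placeholder, not your schema:

    MERGE dbo.NewTable AS target
    USING dbo.OldTable AS source
    	ON target.Id = source.Id	-- Id stands in for whatever key the tables share
    WHEN MATCHED THEN
    	UPDATE SET target.SomeCol = source.SomeCol	-- take the newer value from the old table
    WHEN NOT MATCHED BY TARGET THEN
    	INSERT (Id, SomeCol) VALUES (source.Id, source.SomeCol);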

    That's about the best advice I can provide without seeing the schema and data.

    Thursday, August 11, 2016 8:35 PM
  • User2142845853 posted

    I like that you use the ROLLBACK there.

    But I do not understand what you mean by 'new stuff'.

    I will just make a backup copy and then try this, thanks:

    SELECT * 
    FROM dbo.NewTable 
    WHERE newstuff = 'the new stuff';

    Is this a table, a column name, or a row name?

    Friday, August 12, 2016 3:33 PM
  • User475983607 posted

    Is this a table, a column name, or a row name?

    It's pseudocode.

    How did you identify that there are new records in the old table which are missing from the new table?

    The answer to that question becomes the filter criteria used to retrieve the new records from the old table.
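
    If nothing reliably identifies the new rows, EXCEPT can at least show the differences; the column lists on both sides must match, and all names below are placeholders:

    -- Rows in the old table with no identical row in the new table
    SELECT ColA, ColB FROM dbo.OldTable
    EXCEPT
    SELECT ColA, ColB FROM dbo.NewTable;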

    Friday, August 12, 2016 3:46 PM
  • User2142845853 posted

    Trying many iterations. The table has about 450 columns, so I use right-click > Script As > INSERT INTO, then tweak that.

    The old table is the one to use: whatever data is there, force that data into the records in the new table, as long as the column names match.

    It doesn't work. Now that many columns were removed and the types changed, I just cannot get this to INSERT/SELECT. I will either write some code in C# to compare byte for byte, or compare manually.
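
    What I think I need is to name only the columns that exist in both tables and CAST where a type changed; a sketch with placeholder names, not my real schema:

    INSERT INTO dbo.NewTable (ColA, ColB)	-- only columns present in both tables
    SELECT ColA,
    	CAST(ColB AS NVARCHAR(512))	-- cast where the type was tightened
    FROM dbo.OldTable;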

    Friday, August 12, 2016 6:58 PM