How do I restore 1 table from a backup? | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive

How do I restore 1 table from a backup?

I have read that it is easy if the table is on its own filegroup and that filegroup is backed up. We do not have any filegroups set up (we are using the default PRIMARY filegroup). I tried to export the table, but after 3 hours it came back with an error message (with no reason why!). The table I am trying to restore has 118 million rows. Any suggestions? Sanette
SQL Developer
JustEnough Software Corporation
Walking on water and coding of a spec is easy, as long as both are frozen – A friend

From SQL 7 onwards it's not possible to restore a table alone; to accomplish that you would have to keep the table on a separate filegroup, which is not possible in the current situation. So the other option is to use DTS to export the data to another table (see the sketch below).
Make sure the network connection between the servers is fine.
Refer to the DTS step log for more information about the job results. _________
Satya SKJ
Moderator
SQL-Server-Performance.Com
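A minimal T-SQL sketch of the restore-then-export route Satya describes; the database name, logical file names, and paths here are all hypothetical:

-- Restore the full backup as a scratch copy on another server
RESTORE DATABASE BigDB_Copy
FROM DISK = 'D:\Backups\BigDB.bak'
WITH MOVE 'BigDB_Data' TO 'D:\Data\BigDB_Copy.mdf',
     MOVE 'BigDB_Log' TO 'D:\Data\BigDB_Copy_log.ldf'
-- Then copy the one table back out with DTS or BCP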

> Walking on water and coding of a spec is easy, as long as both are frozen – A friend

Nice quote :)
In the current scenario it won't be possible. So the only option is, as Satya suggested, to restore the entire database on a different machine and then export the table using DTS. For the future, you can plan on keeping the critical table on a different filegroup and then design a backup strategy which takes backups of individual filegroups (sketched below). Gaurav
Moderator
Man thrives, oddly enough, only in the presence of a challenging environment- L. Ron Hubbard
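For the future setup Gaurav suggests, a hypothetical sketch (all names and paths are made up) of putting a large table on its own filegroup so it can be backed up, and restored, on its own:

-- Add a dedicated filegroup and a file to hold it
ALTER DATABASE BigDB ADD FILEGROUP BigTableFG
ALTER DATABASE BigDB ADD FILE
    (NAME = 'BigTableData', FILENAME = 'D:\Data\BigTableData.ndf')
TO FILEGROUP BigTableFG
-- Create (or rebuild) the critical table ON BigTableFG, then:
BACKUP DATABASE BigDB FILEGROUP = 'BigTableFG'
TO DISK = 'D:\Backups\BigTableFG.bak'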

BCP is also the fastest solution for getting the records out of the table, and likewise for importing them (see the sketch below).
_________
Satya SKJ
Moderator
SQL-Server-Performance.Com
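As a rough illustration of the BCP route (server, database, table, and file names are hypothetical), run from a command prompt:

bcp BigDB_Copy.dbo.BigTable out D:\Export\BigTable.dat -n -T -SDEVSERVER
bcp BigDB.dbo.BigTable in D:\Export\BigTable.dat -n -T -SPRODSERVER -b100000

Here -n uses native format, -T a trusted connection, and -b commits the import in batches of 100,000 rows so the log can be cleared between batches.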

Satya, I had read somewhere, and observed in some small tests, that BULK IMPORT is the fastest way to import data. Gaurav
Moderator
Man thrives, oddly enough, only in the presence of a challenging environment- L. Ron Hubbard

@Sanette, WOW – don't you just have the most fun environment! You're going to have to restore to another server and look at DTS/BCP to move the data. From what I've read, they "say" BCP is supposed to be faster, although I read recently that this may no longer be true. There is probably a good article here somewhere on this – I'll see if I can find it. Again, you're into the logged-operation scenario. I'm guessing you have to put this data back into your 23.5×7 environment in real time? You can try to do it as one mass job, but I suspect you may run into the same issues as your mass-delete problem, and that you may have to go with the "million at a time" concept we discussed in your other thread…
Gaurav, do you mean BULK INSERT? In that case, yes, I too agree it's the fastest; I was comparing BCP against DTS (see the sketch below). _________
Satya SKJ
Moderator
SQL-Server-Performance.Com
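A minimal sketch of the BULK INSERT route (names are hypothetical), loading a file exported by bcp in native format:

BULK INSERT BigDB.dbo.BigTable
FROM 'D:\Export\BigTable.dat'
WITH (DATAFILETYPE = 'native', BATCHSIZE = 100000, TABLOCK)
-- TABLOCK, together with the bulk-logged recovery model, allows minimal logging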

BCP is working fine. I had a problem with the DTS option, as my database kept running out of log space. @SQL Guess: luckily this is only for development purposes! I am not so pushed for time, but it would be great not to wait for something and have it bomb out on you after 3 hours! Have you found the article yet? Sanette
@Sanette, from the DTS "faq" on the home page here: http://www.sql-server-performance.com/dts.asp
quote:
Assuming you don't need to transform data during a DTS import in SQL Server 2000, the Bulk Insert task provides the fastest data loads into SQL Server. Not only does the Bulk Insert task not allow any transformation, it also does not permit changes to column mappings, importing data into any database other than SQL Server, or creating an exception log file for bad records. [2000] Added 6-6-2001

That tends to match what Satya is saying. Also:
quote:When creating new transformations using the SQL Server 2000 DTS Transform Data task, you are given nine different ways to transform data. For best performance, assuming it will do the job you need, select the "Copy Column" option. If what you need to do cannot be done using "Copy Column", the next fastest option is "ActiveX script". [2000] Added 6-6-2001

When working on vast amounts of data, I tend to want to be able to commit after a significant amount of change – hence the "million at a time" type of processing (see the sketch after this post). Have you thought about dropping the indexes on the table, then re-creating them afterwards? Or turning constraint checking off on the table? That will improve performance, although it may defeat the purpose of having constraints if they are there to ensure data integrity. Removing and re-creating the indexes should help, unless there is only a clustered index…
I've searched and can't find any articles on DTS performance, only the FAQ-type page I referred to above.
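A hypothetical sketch of the "million at a time" idea (table and column names are made up); each batch commits on its own, which keeps the active portion of the log small:

DECLARE @lo INT, @hi INT, @batch INT
SET @batch = 1000000
SELECT @lo = MIN(id), @hi = MAX(id) FROM dbo.BigTable_Old
WHILE @lo <= @hi
BEGIN
    -- Copy one key range per iteration; each INSERT commits separately
    INSERT INTO dbo.BigTable_New (id, col1, col2)
    SELECT id, col1, col2
    FROM dbo.BigTable_Old
    WHERE id >= @lo AND id < @lo + @batch
    SET @lo = @lo + @batch
END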
You can set the BULK_LOGGED recovery model while importing data into this database, which takes care of the transaction log (see the sketch below). Refer to Books Online for an explanation of this recovery model. _________
Satya SKJ
Moderator
SQL-Server-Performance.Com
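In T-SQL (hypothetical database name), Satya's suggestion looks like this:

ALTER DATABASE BigDB SET RECOVERY BULK_LOGGED
-- ... run the bulk import here ...
ALTER DATABASE BigDB SET RECOVERY FULL
-- Take a log backup afterwards; point-in-time restore is not possible
-- for a log interval that contains bulk-logged operations
BACKUP LOG BigDB TO DISK = 'D:\Backups\BigDB_log.bak'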

Also, setting the "Use fast load" option makes the process a minimally logged operation, and I have seen a drastic reduction in log file size (from 4 GB to 1 GB) for large operations. Another option is to do the operation in batches: between the batches, when each transaction is committed, if the "truncate log on checkpoint" option is set (see below), the log will start overwriting the previous inactive entries. Carrying out the operation as a single process will mark the entire portion of the log as active, so you'll end up with large log files. Gaurav
Moderator
Man thrives, oddly enough, only in the presence of a challenging environment- L. Ron Hubbard
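The "truncate log on checkpoint" option Gaurav refers to is a database option; in SQL 7/2000 syntax it can be set like this (database name hypothetical):

EXEC sp_dboption 'BigDB', 'trunc. log on chkpt.', 'true'
-- In SQL 2000 this is equivalent to the SIMPLE recovery model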

One of the best articles I've seen about DTS best practices is http://vyaskn.tripod.com/sql_server_dts_best_practices.htm, and I'm sure this will help you. _________
Satya SKJ
Moderator
SQL-Server-Performance.Com
