Tuesday, March 27, 2012

Automatically script out a table & indexes - script needed (2005)

I'm trying to automate a daily export of a table using BCP. I'm using
native format for the BCP because the text in one of the fields can
encompass pretty much any ASCII character, and using the other options
(including the null terminator, delimiters like |||, etc.) hasn't
worked particularly well.
So, I'm archiving out a table on a daily basis. I want to script out
the table at the same time; that way, if there are any table changes,
an import will still work.
How can I do this? I've been digging through google for scripts with
no luck. Ideally I'd like a table-creation script, along with CREATE
INDEX statements.
Anybody have a script handy for this? I know it can be done by using
the system tables, but I'm hoping to avoid reinventing the wheel.
Thanks in advance.
Michael
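
For concreteness, a nightly native-format export of the kind described
above might look like this when run from a job step via xp_cmdshell
(assuming xp_cmdshell is enabled; the database, table, path and server
names are all placeholders):

DECLARE @cmd varchar(500)
SET @cmd = 'bcp MyDb.dbo.BigTable out D:\archive\BigTable_'
         + CONVERT(char(8), GETDATE(), 112)   -- YYYYMMDD suffix
         + '.dat -n -T -SMYSERVER'            -- -n = native, -T = trusted
EXEC master..xp_cmdshell @cmd
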
M Bourgon (bourgon@.gmail.com) writes:
> I'm trying to automate a daily export of a table using BCP. I'm using
> native format for the BCP because the text in one of the fields can
> encompass pretty much any ASCII character, and using the other options
> (including the null terminator, delimiters like |||, etc.) hasn't
> worked particularly well.
> So, I'm archiving out a table on a daily basis. I want to script out
> the table at the same time; that way, if there are any table changes,
> an import will still work.
> How can I do this? I've been digging through google for scripts with
> no luck. Ideally I'd like a table-creation script, along with CREATE
> INDEX statements.
> Anybody have a script handy for this? I know it can be done by using
> the system tables, but I'm hoping to avoid reinventing the wheel.
> Thanks in advance.
If you are on SQL 2005, you would use SMO for the scripting, and on SQL
2000 it would be DMO. No, I don't have any examples; I have stayed away
from both.
Personally, I would prefer the definition of the table to be under version
control and be content with that.
But why use BCP as a backup tool? Why not simply BACKUP? Or are you
trying to tell us that this is the only table in a big database that
you want to back up?
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx

Kalen has written this script:

SELECT type_desc,
       object_name(ic.object_id) AS object_name,
       index_name = i.name,
       'column' = c.name,
       'column usage' = CASE ic.is_included_column
                             WHEN 0 THEN 'KEY'
                             ELSE 'INCLUDED'
                        END
FROM sys.index_columns ic
JOIN sys.columns c
  ON ic.object_id = c.object_id
 AND ic.column_id = c.column_id
JOIN sys.indexes i
  ON i.object_id = ic.object_id
 AND i.index_id = ic.index_id
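
For the CREATE INDEX statements the original poster asked for, a rough
sketch along the same lines is possible with the 2005 catalog views. It
ignores included columns, descending keys and filegroups, and the table
name dbo.BigTable is a placeholder:

SELECT 'CREATE '
     + CASE i.is_unique WHEN 1 THEN 'UNIQUE ' ELSE '' END
     + i.type_desc + ' INDEX ' + i.name
     + ' ON dbo.BigTable ('
     + STUFF((SELECT ', ' + c.name
              FROM sys.index_columns ic
              JOIN sys.columns c
                ON c.object_id = ic.object_id
               AND c.column_id = ic.column_id
              WHERE ic.object_id = i.object_id
                AND ic.index_id = i.index_id
                AND ic.is_included_column = 0
              ORDER BY ic.key_ordinal
              FOR XML PATH('')), 1, 2, '')
     + ')'
FROM sys.indexes i
WHERE i.object_id = OBJECT_ID('dbo.BigTable')
  AND i.type IN (1, 2)            -- clustered and nonclustered only
  AND i.is_primary_key = 0        -- PKs and unique constraints are
  AND i.is_unique_constraint = 0  -- better scripted as constraints
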
"M Bourgon" <bourgon@.gmail.com> wrote in message
news:1184085398.736952.162110@.o61g2000hsh.googlegr oups.com...
> I'm trying to automate a daily export of a table using BCP. I'm using
> native format for the BCP because the text in one of the fields can
> encompass pretty much any ASCII character, and using the other options
> (including the null terminator, delimiters like |||, etc.) hasn't
> worked particularly well.
> So, I'm archiving out a table on a daily basis. I want to script out
> the table at the same time; that way, if there are any table changes,
> an import will still work.
> How can I do this? I've been digging through google for scripts with
> no luck. Ideally I'd like a table-creation script, along with CREATE
> INDEX statements.
> Anybody have a script handy for this? I know it can be done by using
> the system tables, but I'm hoping to avoid reinventing the wheel.
> Thanks in advance.
> Michael
>

On Jul 10, 4:48 pm, Erland Sommarskog <esq...@.sommarskog.se> wrote:
> Personally, I would prefer the definition of the table to be under version
> control and be content with that.
We're working on implementing several changes; that's one of them.

> But why use BCP as a backup tool? Why not simply BACKUP? Or are you
> trying to tell us that this is the only table in a big database that
> you want to back up?
Nope. We're working on moving to Partitioned Tables, but right
now we have home-grown partitioning, and we need to deal with old
"partitions".

M Bourgon (bourgon@.gmail.com) writes:
> On Jul 10, 4:48 pm, Erland Sommarskog <esq...@.sommarskog.se> wrote:
> Nope. We're working on moving to Partitioned Tables, but right
> now we have home-grown partitioning, and we need to deal with old
> "partitions".
And the scripting is part of that? Maybe you could give more details?
If the main purpose is that the import of the BCP file in native format
will work, maybe it is sufficient to save the format file with the table?
You can create a format file with BCP's format option. (You use "format"
in place of "in" or "out".)
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx
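
The format-file trick might look like this in practice; "format nul"
writes the format file without moving any data (again, all names are
placeholders and xp_cmdshell is assumed to be enabled):

DECLARE @cmd varchar(500)
SET @cmd = 'bcp MyDb.dbo.BigTable format nul -n '
         + '-f D:\archive\BigTable.fmt -T -SMYSERVER'
EXEC master..xp_cmdshell @cmd
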

On Jul 11, 4:03 pm, Erland Sommarskog <esq...@.sommarskog.se> wrote:
> M Bourgon (bour...@.gmail.com) writes:
>
> And the scripting is part of that? Maybe you could give more details?
Sure. We have a large table that constantly has new records added. We
want to be able to go back historically and pull from this data set.
The way we currently do it is to keep several days (the "hot" data)
available in one table (while auto-archiving one day's data out to a
separate table each day), several weeks available through a partitioned
view, and archive the older tables. This way, when we get requests for
older data, we can easily look in a set of tables for the results. We
split it up by day due to volume, and also because most of our
requests are "this subset of data, from date A to date B, further
filtered". We've discussed different ways of keeping the data
available, as what takes up the least amount of space is not
necessarily the easiest to query. I like the idea of keeping the data
in a database because then we can easily query it, and we don't have
to worry about any issues (i.e. if we save it out and the table format
changes, we don't run into issues months from now when we try to load
the data).
One question you'll probably ask - how do you create the
one-day's-worth-of-data table? Unfortunately, that's just a dumb script
also, destined to break if we wind up changing the format of the table.
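
The partitioned-view part of a setup like this might look roughly as
follows (view, table and column names are hypothetical); each daily
table would carry a CHECK constraint on the date column so the
optimizer can skip the days outside a query's range:

CREATE VIEW dbo.BigTableRecent
AS
SELECT * FROM dbo.BigTable_20070709
UNION ALL
SELECT * FROM dbo.BigTable_20070710
UNION ALL
SELECT * FROM dbo.BigTable_20070711
-- ... one branch per day kept online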

> If the main purpose is that the import of the BCP file in native format
> will work, maybe it is sufficient to save the format file with the
> table? You can create a format file with BCP's format option. (You use
> "format" in place of "in" or "out".)
Can the format file be used instead of DDL? I'll have to try it.
Thanks.

M Bourgon (bourgon@.gmail.com) writes:
> Sure. We have a large table that constantly has new records added. We
> want to be able to go back historically and pull from this data set.
> The way we currently do it is to keep several days (the "hot" data)
> available in one table (while auto-archiving one day's data out to a
> separate table each day), several weeks available through a partitioned
> view, and archive the older tables. This way, when we get requests for
> older data, we can easily look in a set of tables for the results. We
> split it up by day due to volume, and also because most of our
> requests are "this subset of data, from date A to date B, further
> filtered".
Maybe a very naïve and silly question, but what about a clustered index
on the big table? If you have a date range and clustered index to match
that range, it's fairly irrelevant if the table has 500 million rows.
The major reasons to partition a table I know of are:
1) Being able to quickly drop old data or add new data, by shifting
a table out or in.
2) Spread the load over different file groups.
But it sounds like in your case you keep the data, so that reason to
partition is out.
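
Erland's suggestion in its simplest form, with hypothetical names:

CREATE CLUSTERED INDEX ix_BigTable_EventDate
    ON dbo.BigTable (EventDate)

-- A date-range query then reads only the matching slice of the table:
SELECT COUNT(*)
FROM dbo.BigTable
WHERE EventDate >= '20070701' AND EventDate < '20070708'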

> We've discussed different ways of keeping the data available, as what
> takes up the least amount of space is not necessarily the easiest to
> query. I like the idea of keeping the data in a database because then
> we can easily query it, and we don't have to worry about any issues
> (i.e. if we save it out and the table format changes, we don't run
> into issues months from now when we try to load the data).
But if you keep the data in the database, why then BCP?

> One question you'll probably ask - how do you create the one-days-
> worth-of-data table? Unfortunately, that's just a dumb script also,
> destined to break if we wind up changing the format of the table.
I don't see that much of a problem. I would not expect frequent schema
changes to a table of this size. Having to update one extra script when
you actually do is not that big a deal. Although for a plain copy, you
could use SELECT INTO. That would not give you the constraints, triggers
or indexes, though.
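
A minimal version of that SELECT INTO copy, with hypothetical names (as
noted, it copies data and column definitions but no constraints,
triggers or indexes):

SELECT *
INTO dbo.BigTable_20070717
FROM dbo.BigTable
WHERE EventDate >= '20070717' AND EventDate < '20070718'
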
What I am a little more curious about is what happens to all those daily
tables that all of a sudden have an obsolete definition.

> Can the format file be used instead of DDL? I'll have to try it.
No, the format file has nothing to do with DDL. But my thinking was that
if you saved the BCP file and then want to import it three months later,
the format file records the layout of the file. Assuming that you only
add new columns at the end and never drop any, it would import out of
the box. Otherwise you would have to edit the column mapping in the
format file.
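
Three months later, the saved format file drives the import along the
same hypothetical lines:

DECLARE @cmd varchar(500)
SET @cmd = 'bcp MyDb.dbo.BigTable in D:\archive\BigTable_20070401.dat '
         + '-f D:\archive\BigTable.fmt -T -SMYSERVER'
EXEC master..xp_cmdshell @cmd
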
If you want to script the table, then you are probably best off with DMO
on SQL 2000 and SMO on SQL 2005. I have not worked with either, so I
can't help.
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx

On Jul 17, 4:37 pm, Erland Sommarskog <esq...@.sommarskog.se> wrote:
(sorry for the delayed response)

> Maybe a very naïve and silly question, but what about a clustered index
> on the big table? If you have a date range and clustered index to match
> that range, it's fairly irrelevant if the table has 500 million rows.
Right. The problem isn't accessing the data in a reasonable amount of
time, it's space issues. We need to have X days available, but we
also need to be able to move it offline (and back online if we need to
query it) in order to free up space. Disk space is cheap, but it's
not that cheap.

> The major reasons to partition a table I know of are:
> 1) Being able to quickly drop old data or add new data, by shifting
> a table out or in.
> 2) Spread the load over different file groups.
3) Being able to free up disk space by moving old data offline.

> But if you keep the data in the database, why then BCP?
As the data ages, it needs to be pulled out.

> I don't see that much of a problem. I would not expect frequent schema
> changes to a table of this size. Having to update one extra script when
> you actually do is not that big a deal. Although for a plain copy, you
> could use SELECT INTO. That would not give you the constraints, triggers
> or indexes, though.
Okay. I was hoping there was a way to do it.

> What I am a little more curious about is what happens to all those daily
> tables that all of a sudden have an obsolete definition.
The change in definition will be relatively minor. Instead of an INT
field (for ID, for instance), we need to move to BIGINT.
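
That particular change is a one-liner, with hypothetical names (it
assumes the column is not part of a primary key or index, which would
have to be dropped and recreated first):

ALTER TABLE dbo.BigTable ALTER COLUMN ID bigint NOT NULL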

> If you want to script the table, then you are probably best off with DMO
> on SQL 2000 and SMO on SQL 2005. I have not worked with either, so I
> can't help.
Fair enough. I appreciate all the help, Erland. Thank you.
