Showing posts with label table. Show all posts

Thursday, March 29, 2012

Automation of Excel to SQL Server 2005

Hi all,

I have a task that needs to be automated:

1) Load an Excel spreadsheet into a SQL Server database as a table.

2) Run some queries joining this table (imported from Excel) with other tables.

3) Write the output back to an Excel spreadsheet.

All three steps need to be automated.

Could anybody give me the steps for doing this (if not, any links that would guide me), and any setup required beforehand?

Thanks for all your help.

You can do this using distributed queries...

Here is a sample:

Code Snippet

-- To fetch the data from the Excel file
SELECT *
FROM OpenDataSource('Microsoft.Jet.OLEDB.4.0',
    'Data Source="c:\book1.xls";User ID=Admin;Password=;Extended properties=Excel 5.0')...[Sheet1$]

-- To store the data back to the Excel file
INSERT INTO OpenDataSource('Microsoft.Jet.OLEDB.4.0',
    'Data Source="c:\book1.xls";User ID=Admin;Password=;Extended properties=Excel 5.0')...[Sheet1$]
VALUES ('11', 'Name11')
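
One piece of setup worth noting: on SQL Server 2005, ad hoc OpenDataSource/OpenRowset calls like the ones above are disabled by default. A minimal sketch of enabling them, assuming you have sysadmin rights (once enabled, the whole load/query/export sequence can be scheduled as a SQL Server Agent job):

-- Allow ad hoc distributed queries (off by default on SQL Server 2005)
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'Ad Hoc Distributed Queries', 1
RECONFIGURE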

Automating the importation of an Oracle table into SQL Server 2000

Hi guys,

I figure this should not be a complex one. I know how to manually pull data from Oracle 9i into SQL Server 2000 using DTS. However, this is my issue...

I simply want to automate pulling data from one table in my Oracle 9i database into another table in my SQL Server 2000. I was hoping I could simply write a stored procedure that would utilize something like an Oracle dblink, and then schedule that procedure. Is this feasible in SQL Server, and how would one go about setting up this automated import?

Thanks in advance, all.

'Wale
|||Did 9i ever come out with an export utility?

Your best bet is to schedule the Oracle export, TRUNCATE the SQL Server table, then do a bcp load or BULK INSERT...

or you can set up a linked server in SQL Server and do a delete and an insert from the Oracle table...

The latter will incur more overhead... but will appear to be easier.
|||Problem is that the ORACLE database is not in my control, so I can't just schedule an export.
|||I wonder if you can bcp out data from a linked server...

look into sp_addlinkedserver

You must have authority on that box, right?
|||Yeah, I do have authority on the SQL Server box. I'll look into it.
|||You'll have to look into sp_addlinkedsrvlogin as well.
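
To make the linked-server route concrete, here is a hedged sketch; the linked server name, TNS alias, table names, and credentials are all placeholders, and MSDAORA (the Microsoft OLE DB Provider for Oracle) is the usual SQL Server 2000-era choice:

-- Create a linked server pointing at the Oracle 9i instance
EXEC sp_addlinkedserver
    @server = 'ORA9',
    @srvproduct = 'Oracle',
    @provider = 'MSDAORA',
    @datasrc = 'MyTnsAlias'   -- TNS alias from tnsnames.ora

-- Map local logins to an Oracle account
EXEC sp_addlinkedsrvlogin
    @rmtsrvname = 'ORA9',
    @useself = 'false',
    @locallogin = NULL,       -- applies to all local logins
    @rmtuser = 'scott',
    @rmtpassword = 'tiger'

-- Refresh the local copy; this batch can be scheduled as a SQL Server Agent job
TRUNCATE TABLE dbo.LocalCopy
INSERT INTO dbo.LocalCopy
SELECT * FROM ORA9..SCOTT.MYTABLE   -- four-part name: server..schema.table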

automating table creation

hi there.
We're moving from IBM DB2 to MS SQL Server. I don't need to take the actual
data from the old db to the new one, but I would like to take the table
structure. In DB2, I could run through a DDL file that contained CREATE
DATABASE, CREATE TABLE, etc. kinds of commands to automatically create
tables, databases, and the like. How exactly do I do this using MS SQL
Server? I'm thinking along the lines of writing my own file, and then
using something in SQL Server to basically read the file and execute
the SQL commands. Any ideas? Thanks.
qin_23
Posted via http://www.webservertalk.com
View this thread: http://www.webservertalk.com/message1835350.html
quin_23,
The easiest way is to generate the script, copy and paste it into Query
Analyzer, then run it. You may have some tweaking to do before it will
execute without error.
Alternatively, use osql to execute the DDL file.
-- Bill
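
A hedged illustration of the osql route; the server name and file names are placeholders (-E uses a trusted connection, -i names the input script, -o captures the output):

osql -S MYSERVER -E -i create_tables.sql -o create_tables.log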
"qin_23" <qin_23.2n3acb@.mail.webservertalk.com> wrote in message
news:qin_23.2n3acb@.mail.webservertalk.com...
> hi there.
> moving from IBM DB2 to MS SQL server. i dont need to take the actual
> data from the old db to the new one, but i would like to take the table
> structure. in DB2, i could run through a ddl file that contained CREATE
> DATABASE, CREATE TABLE, etc. kinds of commands to automatically create
> tables, databases, and the like. how exactly do i do this using MS SQL
> server? im thinking along the lines of writing my own file, and then
> using something in SQL server to basically read the file and execute
> the SQL commands. any ideas? thanks.
> qin_23
>
> --
> qin_23
> Posted via http://www.webservertalk.com
> View this thread: http://www.webservertalk.com/message1835350.html
>


|||Alright, thanks. It worked.
qin_23

automating profiler

I have created a stored proc that will automate the capture of traces. The trace is captured to a file instead of a table, because tracing to a table on the same server has caused performance issues.
The problem is that I really want the trace in a table so that we can produce reports with Reporting Services. I know that I can manually save the trace to a trace table. But is there a way to automate this process? I can't find any command-line parameters for Profiler.
Sure, take a look at fn_trace_gettable in BOL. By the way, you almost never
want to trace directly to a table if you care about performance.
Andrew J. Kelly SQL MVP
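
As a hedged illustration of the fn_trace_gettable approach (the file path and table name are placeholders; the :: prefix is required on SQL Server 2000):

-- Load a Profiler trace file into a table for reporting
SELECT * INTO dbo.trace_archive
FROM ::fn_trace_gettable('C:\traces\mytrace.trc', DEFAULT)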
"Jo" <Jo@.discussions.microsoft.com> wrote in message
news:75FFDF10-B0AA-4AD4-8573-0A241F630717@.microsoft.com...
> I have created a stored proc that will automate the capture of traces. The
trace is captured to a file instead of a table because tracing to a table on
the same server has caused performance issues.
> The problem is that I really want the trace in a table so that we can
produce reports with Reporting Services. I know that I can manually save the
trace to a trace table. But is there a way to automate this process? I can't
find any command line parameters for Profiler.
>
>
|||Is there a SQL 7.0 solution?
"Andrew J. Kelly" wrote:

> Sure, take a look at fn_trace_gettable in BOL. By the way you almost never
> want to trace directly to a table if you care about performance.
> --
> Andrew J. Kelly SQL MVP
>
> "Jo" <Jo@.discussions.microsoft.com> wrote in message
> news:75FFDF10-B0AA-4AD4-8573-0A241F630717@.microsoft.com...
> trace is captured to a file instead of a table because tracing to a table on
> the same server has caused performance issues.
> produce reports with Reporting Services. I know that I can manually save the
> trace to a trace table. But is there a way to automate this process? I can't
> find any command line parameters for Profiler.
>
>
|||Not that I am aware of.
Andrew J. Kelly SQL MVP
"Jo" <Jo@.discussions.microsoft.com> wrote in message
news:DC0B3779-1875-40D4-848B-31473782A038@.microsoft.com...[vbcol=seagreen]
> Is there a SQL 7.0 soluthion?
> "Andrew J. Kelly" wrote:
never[vbcol=seagreen]
The[vbcol=seagreen]
table on[vbcol=seagreen]
the[vbcol=seagreen]
can't[vbcol=seagreen]
|||Do you know anything about the xp_trace_opentracefile stored proc in 7.0? Is it possible that this will do the same thing as fn_trace_gettable in 2000?
"Andrew J. Kelly" wrote:

> Not that I am aware of.
> --
> Andrew J. Kelly SQL MVP
>
> "Jo" <Jo@.discussions.microsoft.com> wrote in message
> news:DC0B3779-1875-40D4-848B-31473782A038@.microsoft.com...
> never
> The
> table on
> the
> can't
>
>
|||Do you know anything about the extended stored proc called xp_trace_opentracefile?
Will this do the same thing as fn_trace_gettable?
"Andrew J. Kelly" wrote:

> Not that I am aware of.
> --
> Andrew J. Kelly SQL MVP
>
> "Jo" <Jo@.discussions.microsoft.com> wrote in message
> news:DC0B3779-1875-40D4-848B-31473782A038@.microsoft.com...
> never
> The
> table on
> the
> can't
>
>
|||I am not familiar with the xp and don't have 7.0 anymore.
Andrew J. Kelly SQL MVP
"Jo" <Jo@.discussions.microsoft.com> wrote in message
news:5302ACCD-993C-4CD6-914A-0B2FEC8B8203@.microsoft.com...
> Do you know anything about the extended stored proc called
xp_trace_opentracefile?[vbcol=seagreen]
> Will this do the same thing as fn_trace_gettable?
> "Andrew J. Kelly" wrote:
almost[vbcol=seagreen]
traces.[vbcol=seagreen]
can[vbcol=seagreen]
save[vbcol=seagreen]
I[vbcol=seagreen]
|||The SQL Server 7.0 resource kit has a COM+ object that will allow you to
read a SQL Server 7.0 formatted trace. I don't have sample code handy but it
wasn't hard to figure out how to import the trace using some simple VB
code...
Brian Moran
Principal Mentor
Solid Quality Learning
SQL Server MVP
http://www.solidqualitylearning.com
"Jo" <Jo@.discussions.microsoft.com> wrote in message
news:DC0B3779-1875-40D4-848B-31473782A038@.microsoft.com...[vbcol=seagreen]
> Is there a SQL 7.0 soluthion?
> "Andrew J. Kelly" wrote:
never[vbcol=seagreen]
The[vbcol=seagreen]
table on[vbcol=seagreen]
the[vbcol=seagreen]
can't[vbcol=seagreen]
|||Thank you, I'll give it a try.
"Brian Moran" wrote:

> The SQL Server 7.0 resource kit has a COM+ object that will allow you to
> read a SQL Server 7.0 formatted trace. I don't have sample code handy but it
> wasn't hard to figure out how to import the trace using some simple VB
> code...
> --
> Brian Moran
> Principal Mentor
> Solid Quality Learning
> SQL Server MVP
> http://www.solidqualitylearning.com
>
> "Jo" <Jo@.discussions.microsoft.com> wrote in message
> news:DC0B3779-1875-40D4-848B-31473782A038@.microsoft.com...
> never
> The
> table on
> the
> can't
>
>
|||I can't find the resource kit on the original 7.0 disks. The only SQL Server resource kit I can find under the MSDN subscriber downloads is for SQL Server 2000. Is there another place I need to look?
"Brian Moran" wrote:

> The SQL Server 7.0 resource kit has a COM+ object that will allow you to
> read a SQL Server 7.0 formatted trace. I don't have sample code handy but it
> wasn't hard to figure out how to import the trace using some simple VB
> code...
> --
> Brian Moran
> Principal Mentor
> Solid Quality Learning
> SQL Server MVP
> http://www.solidqualitylearning.com
>
> "Jo" <Jo@.discussions.microsoft.com> wrote in message
> news:DC0B3779-1875-40D4-848B-31473782A038@.microsoft.com...
> never
> The
> table on
> the
> can't
>
>

Tuesday, March 27, 2012

Automatically update datetime field in a database table.

hi e'body:

I have some database tables, and each one of them has a creation_date and a modified_date column. I was trying to figure out a way, when a row in one of these tables is changed using Enterprise Manager (Database -> Tables -> select table -> right click -> select all rows -> change a field in a row inside a table), apart from triggers, to have the modified_date column for that row set to getdate() (that is, pick up the current datetime).

thanks in advance.
|||Here is the generic trigger I use to record who modified a record and when:

CREATE TRIGGER TR_[TABLENAME]_U ON dbo.[TABLENAME]
FOR UPDATE
AS
set nocount on
update [TABLENAME]
set Modified = getdate(),
    Modifier = isnull(inserted.Modifier, convert(nvarchar(50), suser_sname()))
from [TABLENAME]
    inner join Inserted on [TABLENAME].PKey = Inserted.PKey
set nocount off

Note that this trigger is for update only. For inserts, you should have default values defined on the table.
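
To cover the insert case, a hedged sketch of such defaults; the table, constraint, and column names are placeholders:

ALTER TABLE dbo.MyTable
    ADD CONSTRAINT DF_MyTable_Created DEFAULT (getdate()) FOR creation_date
ALTER TABLE dbo.MyTable
    ADD CONSTRAINT DF_MyTable_Modified DEFAULT (getdate()) FOR modified_date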

Automatically Sort Table when Table is Opened in Project

Hi all,

We recently upsized two Microsoft Access databases to SQL Server. We're
using an ADP (2002) as the front end.

All the conversion issues have been resolved, except for one:
Whenever we insert a record into a table, the table isn't sorted by
primary key like I would expect. Instead, the record can be found at
the end of the table. This makes finding a particular record
(especially as time goes on) very difficult.

I've tried eliminating all indexes except for the primary key, and
also writing AFTER INSERT triggers, but the table still does not sort
correctly.

Any suggestions would be greatly appreciated!

Matt
|||This is a client-side problem, since a table in SQL Server has no inherent logical order. I don't think Access will re-sort the table unless you requery it. To get around this you will probably need to create your own form for data entry and display - a good idea anyway if you are doing a lot of data entry.

--
David Portas
SQL Server MVP
--
|||Yes, adding to David's post: you can define the sort order for the ADP form (you can make the form look virtually identical to a datasheet/table if you'd like). It will be a string like an ORDER BY clause. I believe you need the sort string and you need to set the OrderByOn property to True. You can add code to react to a new record or whatever. Good luck.
~ck

"Matt" <mhen001@.yahoo.com> wrote in message
news:ea450280.0406211322.7b5e60e2@.posting.google.c om...
> Hi all,
> We recently upsized two Microsoft Access Databases to SQL. We're
> using an ADP (2002) as the front end.
> All the conversion issues have been resolved, except for one:
> Whenever we insert a record into a table, the table isn't sorted by
> primary key like I would expect. Instead, the record can be found at
> the end of the table. This makes finding a particular record
> (especially as time goes on) very difficult.
> I've tried eliminating all indexes except for the primary key, and
> also writing AFTER INSERT triggers, but the table still does not sort
> correctly.
> Any suggestions would be greatly appreciated!
|||Oops, in a project you use these properties:
ServerFilter Property

You can use the ServerFilter property to specify a subset of records to be
displayed when a server filter is applied to a form or report within a
Microsoft Access project (.adp) or a data access page in a Microsoft Access
project (.adp) or database (.mdb).

Setting

The ServerFilter property is a string expression consisting of a WHERE
clause without the WHERE keyword. For example, the following Visual Basic
code defines and applies a filter to show only customers from the USA:

Me.ServerFilter = "Country = 'USA'"
Me.ServerFilterByForm = True

The easiest way to set this property is by using a form or report's property sheet. You can also set this property on a form or report by using Visual Basic.

CK

"Matt" <mhen001@.yahoo.com> wrote in message
news:ea450280.0406211322.7b5e60e2@.posting.google.c om...
> Hi all,
> We recently upsized two Microsoft Access Databases to SQL. We're
> using an ADP (2002) as the front end.
> All the conversion issues have been resolved, except for one:
> Whenever we insert a record into a table, the table isn't sorted by
> primary key like I would expect. Instead, the record can be found at
> the end of the table. This makes finding a particular record
> (especially as time goes on) very difficult.
> I've tried eliminating all indexes except for the primary key, and
> also writing AFTER INSERT triggers, but the table still does not sort
> correctly.
> Any suggestions would be greatly appreciated!
|||Thanks for the replies. It looks like I'll have to go the form route.

The strange thing is: if you open up the properties of the table and
click the "Data" tab, you can specify the "ORDER BY" property. Help
defines this property as:

"Shows client-side sorting criteria sorting criteria applied after
the result set is returned from the database. This property is a
string expression that is the name of the field or fields on which you
want to sort records. When you use more than one field name, separate
the names with a comma (,). If you want to sort records in descending
order, type DESC at the end of the string expression."

Why would Microsoft create this property yet in practice not allow you
to sort tables on the client-side?

Matt

"ck" <c_kettenbach@.hotmail.com> wrote in message news:<5CMBc.3267$6F6.1885@.newssvr25.news.prodigy.com>...
> oops in a project you use these properties.
> ServerFilter Property
>
> You can use the ServerFilter property to specify a subset of records to be
> displayed when a server filter is applied to a form or report within a
> Microsoft Access project (.adp) or a data access page in a Microsoft Access
> project (.adp) or database (.mdb).
> Setting
> The ServerFilter property is a string expression consisting of a WHERE
> clause without the WHERE keyword. For example, the following Visual Basic
> code defines and applies a filter to show only customers from the USA:
> Me.ServerFilter = "Country = 'USA'"
> Me.ServerFilterByForm = TrueThe easiest way to set this property is by using
> a form or report's property sheet. You can also set this property on a form
> or report by using Visual Basic.
> CK
> "Matt" <mhen001@.yahoo.com> wrote in message
> news:ea450280.0406211322.7b5e60e2@.posting.google.c om...
> > Hi all,
> > We recently upsized two Microsoft Access Databases to SQL. We're
> > using an ADP (2002) as the front end.
> > All the conversion issues have been resolved, except for one:
> > Whenever we insert a record into a table, the table isn't sorted by
> > primary key like I would expect. Instead, the record can be found at
> > the end of the table. This makes finding a particular record
> > (especially as time goes on) very difficult.
> > I've tried eliminating all indexes except for the primary key, and
> > also writing AFTER INSERT triggers, but the table still does not sort
> > correctly.
> > Any suggestions would be greatly appreciated!
> > Matt

Automatically setup table from dataset

I need to add tables to an existing "report" from datasets that have 50+ columns. Each table is a different dataset. Is there any way to replicate the behaviour of the wizard on an existing report? Adding columns one at a time is a pain; it'd be easier for me to add them all and then remove the few I don't need.
And yes, I know that db tables that wide and reports with this much info are somewhat useless, but I'm forced to swim in the pond I'm in.
|||Can't you just copy and paste a previously created table? It will maintain all the properties of the table.

Automatically script out a table & indexes - script needed (2005)

I'm trying to automate an auto-export of a table on a daily basis,
using BCP. I'm using native format for the BCP because the text in
one of the fields can encompass pretty much any ASCII characters, and
using the other options (including the null terminator, stuff like
|||, etc) hasn't worked particularly well.
So, I'm archiving out a table on a daily basis. I want to script out
the table at the same time; that way, if there are any table changes,
an import will still work.
How can I do this? I've been digging through Google for scripts with
no luck. Ideally I'd like a table-creation script, along with CREATE
INDEX statements.
Anybody have a script handy for this? I know it can be done by using
the system tables, but I'm hoping to avoid reinventing the wheel.
Thanks in advance.
Michael
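
For reference, a hedged sketch of the kind of daily native-format export being described; the database, table, file, and server names are placeholders (-n requests native format, -T a trusted connection):

bcp MyDb.dbo.ArchiveTable out D:\archive\ArchiveTable.dat -n -S MYSERVER -T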
If you are on SQL 2005, you would use SMO for the scripting and on SQL 2000
it would be DMO. No, I don't have any examples, I have stayed away from
both.
Personally, I would prefer the definition of the table to be under version
control and be content with that.
But why use BCP as a backup tool? Why not simply BACKUP? Or are you
trying to tell us that this is the only table in a big database that
you want to back up?
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx
|||Kalen has written this script:

SELECT type_desc,
    object_name(ic.object_id) AS object_name,
    index_name = i.name,
    'column' = c.name,
    'column usage' = CASE ic.is_included_column
        WHEN 0 THEN 'KEY'
        ELSE 'INCLUDED'
    END
FROM sys.index_columns ic
JOIN sys.columns c
    ON ic.object_id = c.object_id
    AND ic.column_id = c.column_id
JOIN sys.indexes i
    ON i.object_id = ic.object_id
    AND i.index_id = ic.index_id
"M Bourgon" <bourgon@.gmail.com> wrote in message
news:1184085398.736952.162110@.o61g2000hsh.googlegr oups.com...
> I'm trying to automate an auto-export of a table on a daily basis,
> using BCP. I'm using native format for the BCP because the text in
> one of the fields can encompass pretty much any ASCII characters, and
> using the other options (including the null terminator, stuff like
> |||, etc) hasn't worked particularly well.
> So, I'm archiving out a table on a daily basis. I want to script out
> the table at the same time; that way, if there are any table changes,
> an import will still work.
> How can I do this? I've been digging through google for scripts with
> no luck. Ideally I'd like a table-creation script, along with CREATE
> INDEX statements.
> Anybody have a script handy for this? I know it can be done by using
> the system tables, but I'm hoping to avoid reinventing the wheel.
> Thanks in advance.
> Michael
>
|||On Jul 10, 4:48 pm, Erland Sommarskog <esq...@.sommarskog.se> wrote:
> Personally, I would prefer the definition of the table to be under version
> control and be content with that.
We're working on implementing several changes, that's one of them.

> But why use BCP as a backup tool? Why not simply BACKUP? Or are you
> trying to tell us that this is the only table in a big database that
> you want to back up?
Nope. We're working on moving to Partitioned Tables, but right
now we have home-grown partitioning, and we need to deal with old
"partitions".
|||M Bourgon (bourgon@.gmail.com) writes:
> On Jul 10, 4:48 pm, Erland Sommarskog <esq...@.sommarskog.se> wrote:
> Nope. We're working on moving to Partitioned Tables, but right
> now we have home-grown partitioning, and we need to deal with old
> "partitions".
And the scripting is part of that? Maybe you could give more details?
If the main purpose is that the import of the BCP file in native format will
work, maybe it is sufficient to save the format file with the table? You
can create a format file from BCP with the format option. (You use "format"
in place of "in" or "out".)
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx
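
A hedged sketch of generating such a format file; the names are placeholders ("format nul" writes no data rows, -n matches the native export, -f names the format file to create):

bcp MyDb.dbo.ArchiveTable format nul -n -f ArchiveTable.fmt -S MYSERVER -T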
|||On Jul 11, 4:03 pm, Erland Sommarskog <esq...@.sommarskog.se> wrote:
> M Bourgon (bour...@.gmail.com) writes:
>
> And the scripting is part of that? Maybe you could give more details?
Sure. We have a large table that constantly has new records added. We
want to be able to go back historically and pull from this data set.
The way we currently do it is to keep several days (the "hot" data)
available in one table (whilst auto-archiving 1 days' data out to a
table on a daily basis) several weeks available by a partitioned view,
and archive the older tables. This way, when we get requests for
older data, we can easily look in a set of tables for the results. We
split it up by day due to volume, and also because most of our
requests are "this subset of data, from date A to date B, further
filtered". We've discussed different ways of keeping the data
available, as what takes up the least amount of space is not
necessarily the easiest to query. I like the idea of keeping the data
in a database because then we can easily query it, and we don't have
to worry about any issues (i.e. if we save it out, and the table
format changes, we don't run into issues months from now when we
try to load the data).
One question you'll probably ask - how do you create the one-days-
worth-of-data table? Unfortunately, that's just a dumb script also,
destined to break if we wind up changing the format of the table.

> If the main purpose is that the import of the BCP in native format will
> work, maybe it sufficient to save the format file with the table? You
> can create a format file from BCP with the format option. (You use "format"
> in place of "in" or "out".)
Can the format file be used instead of DDL? I'll have to try it.
Thanks.
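
As an aside, a home-grown scheme like the one described usually pairs daily tables with a UNION ALL partitioned view; here is a minimal sketch with hypothetical names, where the CHECK constraints let the optimizer skip member tables when a query filters on log_date:

CREATE TABLE dbo.Archive_20070709 (
    id int NOT NULL,
    log_date datetime NOT NULL,
    payload varchar(100) NULL,
    CHECK (log_date >= '20070709' AND log_date < '20070710')
)

CREATE TABLE dbo.Archive_20070710 (
    id int NOT NULL,
    log_date datetime NOT NULL,
    payload varchar(100) NULL,
    CHECK (log_date >= '20070710' AND log_date < '20070711')
)

CREATE VIEW dbo.ArchiveRecent AS
SELECT id, log_date, payload FROM dbo.Archive_20070709
UNION ALL
SELECT id, log_date, payload FROM dbo.Archive_20070710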
|||M Bourgon (bourgon@.gmail.com) writes:
> Sure. We have a large table that constantly has new records added. We
> want to be able to go back historically and pull from this data set.
> The way we currently do it is to keep several days (the "hot" data)
> available in one table (whilst auto-archiving 1 days' data out to a
> table on a daily basis) several weeks available by a partitioned view,
> and archive the older tables. This way, when we get requests for
> older data, we can easily look in a set of tables for the results. We
> split it up by day due to volume, and also because most of our
> requests are "this subset of data, from date A to date B, further
> filtered".
Maybe a very naïve and silly question, but what about a clustered index
on the big table? If you have a date range and clustered index to match
that range, it's fairly irrelevant if the table has 500 million rows.
The major reasons to partition a table I know of are:
1) Being able to quickly drop old data or add new data, by shifting
a table out or in.
2) Spread the load over different file groups.
But it sounds like in your case you keep the data, so that reason to
partition is out.

> We've discussed different ways of keeping the data available, as what
> takes up the least amount of space is not necessarily the easiest to
> query. I like the idea of keeping the data in a database because then
> we can easily query it, and we don't have to worry about any issues
> (i.e. if we save it out, and the table format changes, we're don't run
> into issues months from now when we try to load the data).
But if you keep the data in the database, why then BCP?

> One question you'll probably ask - how do you create the one-days-
> worth-of-data table? Unfortunately, that's just a dumb script also,
> destined to break if we wind up changing the format of the table.
I don't see that much of a problem. I would not expect frequent schema
changes to a table of this size. Having to update one script extra
when you actually do is not that big deal. Although for a plain copy,
you could use SELECT INTO. That would not give the constraints,
triggers and indexes though.
What I am a little more curious about is what happens to all those daily
tables that all of a sudden have an obsolete definition.

> Can the format file be used instead of DDL? I'll have to try it.
No, the format file has nothing to do with DDL. But my thinking was that
if you saved the BCP file and then want to import it three months later,
the format file describes the format of the file. Assuming that you only
add new columns at the end and don't drop any, it would import out of the box.
Otherwise you would have to edit the column mapping in the format file.
If you want to script the table, then you are probably best off with DMO
on SQL 2000 and SMO on SQL 2005. I have not worked with either, so I can't
help.
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx
|||On Jul 17, 4:37 pm, Erland Sommarskog <esq...@.sommarskog.se> wrote:
(sorry for delay on response)

> Maybe a very nave and silly question, but what about a clustered index
> on the big table? If you have a date range and clustered index to match
> that range, it's fairly irrelevant if the table has 500 million rows.
Right. The problem isn't accessing the data in a reasonable amount of
time, it's space issues. We need to have X days available, but we
also need to be able to move it offline (and back online if we need to
query it) in order to free up space. Disk space is cheap, but it's
not that cheap.

> The major reasons to partition a table I know of are:
> 1) Being able to quickly drop old data or add new data, by shifting
> a table out or in.
> 2) Spread the load over different file groups.
3) Freeing up disk space.

> But if you keep the data in the database, why then BCP?
As the data ages, it needs to be pulled out.

> I don't see that much of a problem. I would not expect frequent schema
> changes to a table of this size. Having to update one script extra
> when you actually do is not that big deal. Although for a plain copy,
> you could use SELECT INTO. That would not give the constraints,
> triggers and indexes though.
Okay. I was hoping there was a way to do it.

> What I am a little more curious is what happens to all those daily
> tables that all of a sudden has an obsolete definition.
The change in definition will be relatively minor. Instead of an INT
field (for ID, for instance), we need to move to BIGINT.

> If you want to script the table, the you are probably best off with DMO
> on SQL 2000 and SMO on SQL 2005. I have not worked with either, so I can't
> help.
Fair enough. I appreciate all the help, Erland. Thank you.

Sunday, March 25, 2012

Automatically script out a table & indexes - script needed (2005)

I'm trying to automate an auto-export of a table on a daily basis,
using BCP. I'm using native format for the BCP because the text in
one of the fields can encompass pretty much any ASCII characters, and
using the other options (including the null terminator, stuff like
|||, etc) hasn't worked particularly well.

So, I'm archiving out a table on a daily basis. I want to script out
the table at the same time; that way, if there are any table changes,
an import will still work.

How can I do this? I've been digging through google for scripts with
no luck. Ideally I'd like a table-creation script, along with CREATE
INDEX statements.

Anybody have a script handy for this? I know it can be done by using
the system tables, but I'm hoping to avoid reinventing the wheel.
Thanks in advance.

MichaelM Bourgon (bourgon@.gmail.com) writes:

Quote:

Originally Posted by

I'm trying to automate an auto-export of a table on a daily basis,
using BCP. I'm using native format for the BCP because the text in
one of the fields can encompass pretty much any ASCII characters, and
using the other options (including the null terminator, stuff like
>|||, etc) hasn't worked particularly well.
>
So, I'm archiving out a table on a daily basis. I want to script out
the table at the same time; that way, if there are any table changes,
an import will still work.
>
How can I do this? I've been digging through google for scripts with
no luck. Ideally I'd like a table-creation script, along with CREATE
INDEX statements.
>
Anybody have a script handy for this? I know it can be done by using
the system tables, but I'm hoping to avoid reinventing the wheel.
Thanks in advance.


If you are on SQL 2005, you would use SMO for the scripting and on SQL 2000
it would be DMO. No, I don't have any examples, I have stayed away from
both.

Personally, I would prefer the definition of the table to be under version
control and be content with that.

But why use BCP as a backup tool? Why not simply BACKUP? Or are you
trying to tell us that this is the only table in a big database that
you want to back up?

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx|||Kalen has written this script
SELECT type_desc,object_name(ic.object_id) as object_name , index_name =
i.name,

'column' = c.name,

'column usage' = CASE ic.is_included_column

WHEN 0 then 'KEY'

ELSE 'INCLUDED'

END

FROM sys.index_columns ic JOIN sys.columns c

ON ic.object_id = c.object_id

AND ic.column_id = c.column_id

JOIN sys.indexes i

ON i.object_id = ic.object_id

AND i.index_id = ic.index_id

"M Bourgon" <bourgon@.gmail.comwrote in message
news:1184085398.736952.162110@.o61g2000hsh.googlegr oups.com...

Quote:

Originally Posted by

I'm trying to automate an auto-export of a table on a daily basis,
using BCP. I'm using native format for the BCP because the text in
one of the fields can encompass pretty much any ASCII characters, and
using the other options (including the null terminator, stuff like
|||, etc) hasn't worked particularly well.
>
So, I'm archiving out a table on a daily basis. I want to script out
the table at the same time; that way, if there are any table changes,
an import will still work.
>
How can I do this? I've been digging through google for scripts with
no luck. Ideally I'd like a table-creation script, along with CREATE
INDEX statements.
>
Anybody have a script handy for this? I know it can be done by using
the system tables, but I'm hoping to avoid reinventing the wheel.
Thanks in advance.
>
Michael
>

Automatically script out a table & indexes - script needed (2005)

I'm trying to automate an auto-export of a table on a daily basis,
using BCP. I'm using native format for the BCP because the text in
one of the fields can encompass pretty much any ASCII characters, and
using the other options (including the null terminator, stuff like
|||, etc) hasn't worked particularly well.
So, I'm archiving out a table on a daily basis. I want to script out
the table at the same time; that way, if there are any table changes,
an import will still work.
How can I do this? I've been digging through google for scripts with
no luck. Ideally I'd like a table-creation script, along with CREATE
INDEX statements.
Anybody have a script handy for this? I know it can be done by using
the system tables, but I'm hoping to avoid reinventing the wheel.
Thanks in advance.
MichaelM Bourgon (bourgon@.gmail.com) writes:
> I'm trying to automate an auto-export of a table on a daily basis,
> using BCP. I'm using native format for the BCP because the text in
> one of the fields can encompass pretty much any ASCII characters, and
> using the other options (including the null terminator, stuff like
>|||, etc) hasn't worked particularly well.
> So, I'm archiving out a table on a daily basis. I want to script out
> the table at the same time; that way, if there are any table changes,
> an import will still work.
> How can I do this? I've been digging through google for scripts with
> no luck. Ideally I'd like a table-creation script, along with CREATE
> INDEX statements.
> Anybody have a script handy for this? I know it can be done by using
> the system tables, but I'm hoping to avoid reinventing the wheel.
> Thanks in advance.
If you are on SQL 2005, you would use SMO for the scripting and on SQL 2000
it would be DMO. No, I don't have any examples, I have stayed away from
both.
Personally, I would prefer the definition of the table to be under version
control and be content with that.
But why use BCP as a backup tool? Why not simply BACKUP? Or are you
trying to tell us that this is the only table in a big database that
you want to back up?
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx|||Kalen has written this script
SELECT type_desc,object_name(ic.object_id) as object_name , index_name =i.name,
'column' = c.name,
'column usage' = CASE ic.is_included_column
WHEN 0 then 'KEY'
ELSE 'INCLUDED'
END
FROM sys.index_columns ic JOIN sys.columns c
ON ic.object_id = c.object_id
AND ic.column_id = c.column_id
JOIN sys.indexes i
ON i.object_id = ic.object_id
AND i.index_id = ic.index_id
"M Bourgon" <bourgon@.gmail.com> wrote in message
news:1184085398.736952.162110@.o61g2000hsh.googlegroups.com...
> I'm trying to automate an auto-export of a table on a daily basis,
> using BCP. I'm using native format for the BCP because the text in
> one of the fields can encompass pretty much any ASCII characters, and
> using the other options (including the null terminator, stuff like
> |||, etc) hasn't worked particularly well.
> So, I'm archiving out a table on a daily basis. I want to script out
> the table at the same time; that way, if there are any table changes,
> an import will still work.
> How can I do this? I've been digging through google for scripts with
> no luck. Ideally I'd like a table-creation script, along with CREATE
> INDEX statements.
> Anybody have a script handy for this? I know it can be done by using
> the system tables, but I'm hoping to avoid reinventing the wheel.
> Thanks in advance.
> Michael
>|||On Jul 10, 4:48 pm, Erland Sommarskog <esq...@.sommarskog.se> wrote:
> Personally, I would prefer the definition of the table to be under version
> control and be content with that.
We're working on implementing several changes, that's one of them.
> But why use BCP as a backup tool? Why not simply BACKUP? Or are you
> trying to tell us that this is the only table in a big database that
> you want to back up?
Nope. :) We're working on moving to Partitioned Tables, but right
now we have home-grown partitioning, and we need to deal with old
"partitions".|||M Bourgon (bourgon@.gmail.com) writes:
> On Jul 10, 4:48 pm, Erland Sommarskog <esq...@.sommarskog.se> wrote:
>> But why use BCP as a backup tool? Why not simply BACKUP? Or are you
>> trying to tell us that this is the only table in a big database that
>> you want to back up?
> Nope. :) We're working on moving to Partitioned Tables, but right
> now we have home-grown partitioning, and we need to deal with old
> "partitions".
And the scripting is part of that? Maybe you could give more details?
If the main purpose is that the import of the BCP in native format will
work, maybe it sufficient to save the format file with the table? You
can create a format file from BCP with the format option. (You use "format"
in place of "in" or "out".)
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx|||On Jul 11, 4:03 pm, Erland Sommarskog <esq...@.sommarskog.se> wrote:
> M Bourgon (bour...@.gmail.com) writes:
> > On Jul 10, 4:48 pm, Erland Sommarskog <esq...@.sommarskog.se> wrote:
> >> But why use BCP as a backup tool? Why not simply BACKUP? Or are you
> >> trying to tell us that this is the only table in a big database that
> >> you want to back up?
> > Nope. :) We're working on moving to Partitioned Tables, but right
> > now we have home-grown partitioning, and we need to deal with old
> > "partitions".
> And the scripting is part of that? Maybe you could give more details?
Sure. We have a large table that constantly has new records added. We
want to be able to go back historically and pull from this data set.
The way we currently do it is to keep several days (the "hot" data)
available in one table (whilst auto-archiving 1 days' data out to a
table on a daily basis) several weeks available by a partitioned view,
and archive the older tables. This way, when we get requests for
older data, we can easily look in a set of tables for the results. We
split it up by day due to volume, and also because most of our
requests are "this subset of data, from date A to date B, further
filtered". We've discussed different ways of keeping the data
available, as what takes up the least amount of space is not
necessarily the easiest to query. I like the idea of keeping the data
in a database because then we can easily query it, and we don't have
to worry about any issues (i.e. if we save it out, and the table
format changes, we're don't run into issues months from now when we
try to load the data).
One question you'll probably ask - how do you create the one-days-
worth-of-data table? Unfortunately, that's just a dumb script also,
destined to break if we wind up changing the format of the table.
> If the main purpose is that the import of the BCP in native format will
> work, maybe it sufficient to save the format file with the table? You
> can create a format file from BCP with the format option. (You use "format"
> in place of "in" or "out".)
Can the format file be used instead of DDL? I'll have to try it.
Thanks.|||M Bourgon (bourgon@.gmail.com) writes:
> Sure. We have a large table that constantly has new records added. We
> want to be able to go back historically and pull from this data set.
> The way we currently do it is to keep several days (the "hot" data)
> available in one table (whilst auto-archiving 1 days' data out to a
> table on a daily basis) several weeks available by a partitioned view,
> and archive the older tables. This way, when we get requests for
> older data, we can easily look in a set of tables for the results. We
> split it up by day due to volume, and also because most of our
> requests are "this subset of data, from date A to date B, further
> filtered".
Maybe a very naïve and silly question, but what about a clustered index
on the big table? If you have a date range and clustered index to match
that range, it's fairly irrelevant if the table has 500 million rows.
The major reasons to partition a table I know of are:
1) Being able to quickly drop old data or add new data, by shifting
a table out or in.
2) Spread the load over different file groups.
But it sounds that you in your case keep the data, so that reason to
partition is out.
> We've discussed different ways of keeping the data available, as what
> takes up the least amount of space is not necessarily the easiest to
> query. I like the idea of keeping the data in a database because then
> we can easily query it, and we don't have to worry about any issues
> (i.e. if we save it out, and the table format changes, we're don't run
> into issues months from now when we try to load the data).
But if you keep the data in the database, why then BCP?
> One question you'll probably ask - how do you create the one-days-
> worth-of-data table? Unfortunately, that's just a dumb script also,
> destined to break if we wind up changing the format of the table.
I don't see that much of a problem. I would not expect frequent schema
changes to a table of this size. Having to update one script extra
when you actually do is not that big deal. Although for a plain copy,
you could use SELECT INTO. That would not give the constraints,
triggers and indexes though.
What I am a little more curious is what happens to all those daily
tables that all of a sudden has an obsolete definition.
>> If the main purpose is that the import of the BCP in native format will
>> work, maybe it sufficient to save the format file with the table? You
>> can create a format file from BCP with the format option. (You use
>> "format" in place of "in" or "out".)
> Can the format file be used instead of DDL? I'll have to try it.
No, format file has nothing to do with DLL. But my thinking was that
if you saved the BCP file, and then want to import three months later,
the format files relates the format of the file. Assuming that you only
add new columns at the end not drop any, it would import out of the box.
Else you would have to edit column-mapping in the format file.
If you want to script the table, the you are probably best off with DMO
on SQL 2000 and SMO on SQL 2005. I have not worked with either, so I can't
help.
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx|||On Jul 17, 4:37 pm, Erland Sommarskog <esq...@.sommarskog.se> wrote:
(sorry for delay on response)
> Maybe a very na=EFve and silly question, but what about a clustered index
> on the big table? If you have a date range and clustered index to match
> that range, it's fairly irrelevant if the table has 500 million rows.
Right. The problem isn't accessing the data in a reasonable amount of
time, it's space issues. We need to have X days available, but we
also need to be able to move it offline (and back online if we need to
query it) in order to free up space. Disk space is cheap, but it's
not that cheap.
> The major reasons to partition a table I know of are:
> 1) Being able to quickly drop old data or add new data, by shifting
> a table out or in.
> 2) Spread the load over different file groups.
3) Make sure of disk space. :)
> But if you keep the data in the database, why then BCP?
As the data ages, it needs to be pulled out.
> I don't see that much of a problem. I would not expect frequent schema
> changes to a table of this size. Having to update one script extra
> when you actually do is not that big deal. Although for a plain copy,
> you could use SELECT INTO. That would not give the constraints,
> triggers and indexes though.
Okay. I was hoping there was a way to do it.
> What I am a little more curious is what happens to all those daily
> tables that all of a sudden has an obsolete definition.
The change in definition will be relatively minor. Instead of an INT
field (for ID, for instance), we need to move to BIGINT.
> If you want to script the table, the you are probably best off with DMO
> on SQL 2000 and SMO on SQL 2005. I have not worked with either, so I can't
> help.
Fair enough. I appreciate all the help, Erland. Thank you.

Automatically script out a table & indexes - script needed (2005)

I'm trying to automate an auto-export of a table on a daily basis,
using BCP. I'm using native format for the BCP because the text in
one of the fields can encompass pretty much any ASCII characters, and
using the other options (including the null terminator, stuff like
|||, etc) hasn't worked particularly well.
So, I'm archiving out a table on a daily basis. I want to script out
the table at the same time; that way, if there are any table changes,
an import will still work.
How can I do this? I've been digging through google for scripts with
no luck. Ideally I'd like a table-creation script, along with CREATE
INDEX statements.
Anybody have a script handy for this? I know it can be done by using
the system tables, but I'm hoping to avoid reinventing the wheel.
Thanks in advance.
MichaelM Bourgon (bourgon@.gmail.com) writes:
> I'm trying to automate an auto-export of a table on a daily basis,
> using BCP. I'm using native format for the BCP because the text in
> one of the fields can encompass pretty much any ASCII characters, and
> using the other options (including the null terminator, stuff like
>|||, etc) hasn't worked particularly well.
> So, I'm archiving out a table on a daily basis. I want to script out
> the table at the same time; that way, if there are any table changes,
> an import will still work.
> How can I do this? I've been digging through google for scripts with
> no luck. Ideally I'd like a table-creation script, along with CREATE
> INDEX statements.
> Anybody have a script handy for this? I know it can be done by using
> the system tables, but I'm hoping to avoid reinventing the wheel.
> Thanks in advance.
If you are on SQL 2005, you would use SMO for the scripting and on SQL 2000
it would be DMO. No, I don't have any examples, I have stayed away from
both.
Personally, I would prefer the definition of the table to be under version
control and be content with that.
But why use BCP as a backup tool? Why not simply BACKUP? Or are you
trying to tell us that this is the only table in a big database that
you want to back up?
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx|||Kalen has written this script
SELECT type_desc,object_name(ic.object_id) as object_name , index_name =
i.name,
'column' = c.name,
'column usage' = CASE ic.is_included_column
WHEN 0 then 'KEY'
ELSE 'INCLUDED'
END
FROM sys.index_columns ic JOIN sys.columns c
ON ic.object_id = c.object_id
AND ic.column_id = c.column_id
JOIN sys.indexes i
ON i.object_id = ic.object_id
AND i.index_id = ic.index_id
"M Bourgon" <bourgon@.gmail.com> wrote in message
news:1184085398.736952.162110@.o61g2000hsh.googlegroups.com...
> I'm trying to automate an auto-export of a table on a daily basis,
> using BCP. I'm using native format for the BCP because the text in
> one of the fields can encompass pretty much any ASCII characters, and
> using the other options (including the null terminator, stuff like
> |||, etc) hasn't worked particularly well.
> So, I'm archiving out a table on a daily basis. I want to script out
> the table at the same time; that way, if there are any table changes,
> an import will still work.
> How can I do this? I've been digging through google for scripts with
> no luck. Ideally I'd like a table-creation script, along with CREATE
> INDEX statements.
> Anybody have a script handy for this? I know it can be done by using
> the system tables, but I'm hoping to avoid reinventing the wheel.
> Thanks in advance.
> Michael
>|||On Jul 10, 4:48 pm, Erland Sommarskog <esq...@.sommarskog.se> wrote:
> Personally, I would prefer the definition of the table to be under version
> control and be content with that.
We're working on implementing several changes, that's one of them.

> But why use BCP as a backup tool? Why not simply BACKUP? Or are you
> trying to tell us that this is the only table in a big database that
> you want to back up?
Nope. We're working on moving to Partitioned Tables, but right
now we have home-grown partitioning, and we need to deal with old
"partitions".|||M Bourgon (bourgon@.gmail.com) writes:
> On Jul 10, 4:48 pm, Erland Sommarskog <esq...@.sommarskog.se> wrote:
> Nope. We're working on moving to Partitioned Tables, but right
> now we have home-grown partitioning, and we need to deal with old
> "partitions".
And the scripting is part of that? Maybe you could give more details?
If the main purpose is that the import of the BCP in native format will
work, maybe it sufficient to save the format file with the table? You
can create a format file from BCP with the format option. (You use "format"
in place of "in" or "out".)
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx|||On Jul 11, 4:03 pm, Erland Sommarskog <esq...@.sommarskog.se> wrote:
> M Bourgon (bour...@.gmail.com) writes:
>
> And the scripting is part of that? Maybe you could give more details?
Sure. We have a large table that constantly has new records added. We
want to be able to go back historically and pull from this data set.
The way we currently do it is to keep several days (the "hot" data)
available in one table (whilst auto-archiving 1 days' data out to a
table on a daily basis) several weeks available by a partitioned view,
and archive the older tables. This way, when we get requests for
older data, we can easily look in a set of tables for the results. We
split it up by day due to volume, and also because most of our
requests are "this subset of data, from date A to date B, further
filtered". We've discussed different ways of keeping the data
available, as what takes up the least amount of space is not
necessarily the easiest to query. I like the idea of keeping the data
in a database because then we can easily query it, and we don't have
to worry about any issues (i.e. if we save it out, and the table
format changes, we're don't run into issues months from now when we
try to load the data).
One question you'll probably ask - how do you create the one-days-
worth-of-data table? Unfortunately, that's just a dumb script also,
destined to break if we wind up changing the format of the table.

> If the main purpose is that the import of the BCP in native format will
> work, maybe it sufficient to save the format file with the table? You
> can create a format file from BCP with the format option. (You use "format
"
> in place of "in" or "out".)
Can the format file be used instead of DDL? I'll have to try it.
Thanks.|||M Bourgon (bourgon@.gmail.com) writes:
> Sure. We have a large table that constantly has new records added. We
> want to be able to go back historically and pull from this data set.
> The way we currently do it is to keep several days (the "hot" data)
> available in one table (whilst auto-archiving 1 days' data out to a
> table on a daily basis) several weeks available by a partitioned view,
> and archive the older tables. This way, when we get requests for
> older data, we can easily look in a set of tables for the results. We
> split it up by day due to volume, and also because most of our
> requests are "this subset of data, from date A to date B, further
> filtered".
Maybe a very naïve and silly question, but what about a clustered index
on the big table? If you have a date range and clustered index to match
that range, it's fairly irrelevant if the table has 500 million rows.
The major reasons to partition a table I know of are:
1) Being able to quickly drop old data or add new data, by shifting
a table out or in.
2) Spread the load over different file groups.
But it sounds that you in your case keep the data, so that reason to
partition is out.

> We've discussed different ways of keeping the data available, as what
> takes up the least amount of space is not necessarily the easiest to
> query. I like the idea of keeping the data in a database because then
> we can easily query it, and we don't have to worry about any issues
> (i.e. if we save it out and the table format changes, we don't run
> into issues months from now when we try to load the data).
But if you keep the data in the database, why then BCP?

> One question you'll probably ask - how do you create the one-days-
> worth-of-data table? Unfortunately, that's just a dumb script also,
> destined to break if we wind up changing the format of the table.
I don't see that much of a problem. I would not expect frequent schema
changes to a table of this size. Having to update one extra script
when you actually do is not that big a deal. Although for a plain copy,
you could use SELECT INTO. That would not give the constraints,
triggers and indexes though.
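A sketch of that SELECT INTO approach, reusing the invented names from the earlier partitioned-view example (the source table dbo.Events is assumed here):
-- Copies rows and column definitions into a new daily table, but brings
-- no constraints, triggers or indexes; those must be added afterwards.
SELECT EventDate, EventData
INTO dbo.Events_20070711
FROM dbo.Events
WHERE EventDate >= '20070711' AND EventDate < '20070712'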
What I am a little more curious about is what happens to all those daily
tables that all of a sudden have an obsolete definition.

> Can the format file be used instead of DDL? I'll have to try it.
No, a format file has nothing to do with DDL. But my thinking was that
if you save the BCP file and then want to import it three months later,
the format file describes the format of the file. Assuming that you only
add new columns at the end and don't drop any, it would import out of the box.
Else you would have to edit the column mapping in the format file.
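To make that concrete, a hedged sketch of the later import, reusing the invented paths from the export example above:
-- Reload the saved native file months later, letting the saved format
-- file map the old columns onto the (possibly extended) current table.
EXEC master..xp_cmdshell
'bcp MyDb.dbo.Archive_20070710 in c:\archive\Archive_20070710.bcp -f c:\archive\Archive_20070710.fmt -T -S MYSERVER'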
If you want to script the table, then you are probably best off with DMO
on SQL 2000 and SMO on SQL 2005. I have not worked with either, so I can't
help.
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx|||On Jul 17, 4:37 pm, Erland Sommarskog <esq...@.sommarskog.se> wrote:
(sorry for the delay in responding)

> Maybe a very naïve and silly question, but what about a clustered index
> on the big table? If you have a date range and clustered index to match
> that range, it's fairly irrelevant if the table has 500 million rows.
Right. The problem isn't accessing the data in a reasonable amount of
time, it's space issues. We need to have X days available, but we
also need to be able to move it offline (and back online if we need to
query it) in order to free up space. Disk space is cheap, but it's
not that cheap.

> The major reasons to partition a table I know of are:
> 1) Being able to quickly drop old data or add new data, by shifting
> a table out or in.
> 2) Spread the load over different file groups.
3) Freeing up disk space.

> But if you keep the data in the database, why then BCP?
As the data ages, it needs to be pulled out.

> I don't see that much of a problem. I would not expect frequent schema
> changes to a table of this size. Having to update one extra script
> when you actually do is not that big a deal. Although for a plain copy,
> you could use SELECT INTO. That would not give the constraints,
> triggers and indexes though.
Okay. I was hoping there was a way to do it.

> What I am a little more curious is what happens to all those daily
> tables that all of a sudden has an obsolete definition.
The change in definition will be relatively minor. Instead of an INT
field (for ID, for instance), we need to move to BIGINT.

> If you want to script the table, then you are probably best off with DMO
> on SQL 2000 and SMO on SQL 2005. I have not worked with either, so I can't
> help.
Fair enough. I appreciate all the help, Erland. Thank you.

Automatically re-indexing

Is there a way to tell the system to re-index a table after a change - any
change is made to the table?
Is there a way to tell the system to re-index using a stored procedure?
Thanks,
Tom.
Tom,
Yes. Assuming you are using SQL Server 2000, you can use "Change
Tracking" and "Update Index in Background" to get near real-time updates of
the FT Catalogs when a table's FT-enabled column changes. SQL Server 2000
BOL titles "Full-Text Search Recommendations" and "Maintaining Full-Text
Indexes" have more info on these options. If you're using SQL Server 7.0,
then you must schedule and run an Incremental Population (substitute
start_incremental for start_full in the below sql code).
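For reference, assuming the table is already full-text indexed, the two options John mentions can be enabled per table with sp_fulltext_table; a sketch against the pubs sample table pub_info:
-- (run in the pubs database)
-- Track row changes on the FT-enabled table...
EXEC sp_fulltext_table 'pub_info', 'start_change_tracking'
GO
-- ...and propagate the tracked changes to the FT index in the background
EXEC sp_fulltext_table 'pub_info', 'start_background_updateindex'
GO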
As for a stored proc that can do this, try:
use pubs
go
if object_id('sp_WrapFT_SProcs','P') IS NOT NULL
drop procedure sp_WrapFT_SProcs
GO
CREATE PROCEDURE sp_WrapFT_SProcs @tablename varchar(100), @activity
varchar(100)
AS
DECLARE @SQLCMD varchar(255)
-- Build the call as a string, then execute it dynamically
SELECT @SQLCMD = 'EXEC sp_fulltext_catalog ''' + @tablename + ''', ''' + @activity + ''''
EXEC (@SQLCMD)
GO
-- Execute above:
EXEC sp_WrapFT_SProcs 'PubInfo', 'start_full'
GO
Regards,
John
"Thomas Scheiderich" <tfs@.deltanet.com> wrote in message
news:10h0diu7prv0h39@.corp.supernews.com...
> Is there a way to tell the system to re-index a table after a change - any
> change is made to the table?
> Is there a way to tell the system to re-index using a stored procedure?
> Thanks,
> Tom.
>
|||"John Kane" <jt-kane@.comcast.net> wrote in message
news:#F671tceEHA.724@.TK2MSFTNGP10.phx.gbl...
> Tom,
> Yes. Assuming you are using SQL Server 2000, you can use "Change
> Tracking" and "Update Index in Background" to get near real-time updates of
> the FT Catalogs when a table's FT-enabled column changes. SQL Server 2000
> BOL titles "Full-Text Search Recommendations" and "Maintaining Full-Text
> Indexes" have more info on these options. If you're using SQL Server 7.0,
> then you must schedule and run an Incremental Population (substitute
> start_incremental for start_full in the below sql code).
Yes, I am using Sql Server 2000.
What I am doing is setting up a table with all my QA Docs (about 150 of
them).
I have an ASP.NET page that just goes through my Document folder and creates
the records as well as copies the documents into my image field.
What I am doing is trying to set up an easy way to update the records when
some of the documents have changed. Instead of trying to determine which
have been changed, I plan to just delete all the records and create all the
records again (only takes about 3 minutes). I just want to make sure that
index is also updated.
Thanks,
Tom.
|||Tom,
There should be an easy way to determine when & what document has changed:
at a minimum, save the doc length &/or mod date/time in your SQL table and then
only upload the changed documents. With CT & UIiB enabled, each time you
upload (or really update/insert) the document into your FT-enabled image
column, CT & UIiB will automatically update the FT Catalog with the
new/modified documents.
Even if it takes only 3 minutes, deleting all the records and then
re-creating all the records again seems a bit time-consuming (not very
scalable as you get more & more documents) and wasteful as all documents
would then need to be re-FT Indexed and that can take more time as you get
more & more documents, IMHO.
Regards,
John
"Thomas Scheiderich" <tfs@.deltanet.com> wrote in message
news:10h0rrtl23mhi76@.corp.supernews.com...
> "John Kane" <jt-kane@.comcast.net> wrote in message
> news:#F671tceEHA.724@.TK2MSFTNGP10.phx.gbl...
> Yes, I am using Sql Server 2000.
> What I am doing is setting up a table with all my QA Docs (about 150 of
> them).
> I have an ASP.NET page that just goes through my Document folder and
> creates the records as well as copies the documents into my image field.
> What I am doing is trying to set up an easy way to update the records when
> some of the documents have changed. Instead of trying to determine which
> have been changed, I plan to just delete all the records and create all
> the records again (only takes about 3 minutes). I just want to make sure
> that index is also updated.
> Thanks,
> Tom.
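A rough sketch of John's suggestion (the Docs and DocStaging tables and their columns are invented for illustration): scan the folder into a staging table, then touch only the rows whose size or timestamp actually differs, so CT & UIiB re-index only those documents:
UPDATE d
SET d.DocLength = s.DocLength,
d.DocModified = s.DocModified,
d.DocImage = s.DocImage
FROM dbo.Docs AS d
JOIN dbo.DocStaging AS s
ON s.DocName = d.DocName
WHERE s.DocLength <> d.DocLength
OR s.DocModified <> d.DocModified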
|||"John Kane" <jt-kane@.comcast.net> wrote in message
news:OnizJ8eeEHA.2544@.TK2MSFTNGP10.phx.gbl...
> Tom,
> There should be an easy way to determine when & what document has changed,
> at a mim. save the doc length &/or mod date/time in your SQL table and
then
> only upload the changed documents. With CT & UIiB enabled, each time you
> upload (or really update/insert) the document into your FT-enable image
> column, CT & UIiB will automatically update the FT Catalog with the
> new/modified documents.
> Even if it takes only 3 minutes, deleting all the records and then
> re-creating all the records again, seems a bit time-consuming (not very
> scalable as you get more & more documents) and wasteful as all documents
> would then need to be re-FT Indexed and that can take more time as you get
> more & more documents, IMHO.
You're right.
And if I set CT & UIiB, I shouldn't have to check the length or time,
anyway.
Thanks,
Tom
> Regards,
> John
>
> "Thomas Scheiderich" <tfs@.deltanet.com> wrote in message
> news:10h0rrtl23mhi76@.corp.supernews.com...
"Change[vbcol=seagreen]
> updates
> 2000
Full-Text[vbcol=seagreen]
> 7.0,
> creates
when[vbcol=seagreen]
which[vbcol=seagreen]
> the
that[vbcol=seagreen]
change -
> procedure?
>

Automatically increasing field definition by SQL

How can I create/define a field so it'll be of the automatically increasing type with a SQL statement? If it must be done during table creation, that's cool too.
Thanks|||Create table a
(
name varchar2(100)
);

Alter table a
modify name varchar2(200);|||Are you asking how to create a column that will increase in value, or increase in size? If you are looking to create something analogous to Oracle's rowid, the syntax is different for each database engine, so you'll have to tell us which engine you are using for us to give you one answer.

-PatP|||It's on ACCESS.|||Originally posted by anat_sher
It's on ACCESS. That's helpful, but are you looking for an MS-Access AUTONUMBER (http://msdn.microsoft.com/library/default.asp?url=/library/en-us/off2000/html/acconWhichTypeAutoNumberFieldCreate.asp) column, or a TEXT column that will increase in length each time you do something?

-PatP|||AUTONUMBER please..

It's not on ACCESS really, it's on a SQL server. But I figured it's about the same. No?|||create table tableA
(
id INTEGER IDENTITY(1, 1)
...
)
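To round that answer out, a minimal runnable version of the same idea (the name column is added here only so the table has something besides the identity):
CREATE TABLE tableA
(
id int IDENTITY(1, 1) NOT NULL,
name varchar(100) NOT NULL
)
GO
-- The id column is assigned automatically: 1, then 2, ...
INSERT INTO tableA (name) VALUES ('first')
INSERT INTO tableA (name) VALUES ('second')
SELECT id, name FROM tableA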

Automatically export foxpro data to an xml file

I'm not sure if this is the correct place to post this, so I apologize
if it isn't.
I have a foxpro table that needs to be accessed by the web, but I don't
really want to access it with odbc unless I have to. I'd rather just
export the data once a night and read it from that exported file.
My questions is this: Is there a program out there that will export a
foxpro table to an xml file like once a night (or at any specified
interval)?
Any other solutions somebody can think of would be helpful. Thanks!|||Luke.Visinoni@.gmail.com wrote:
> I'm not sure if this is the correct place to post this, so I apologize
> if it isn't.
> I have a foxpro table that needs to be accessed by the web, but I don't
> really want to access it with odbc unless I have to. I'd rather just
> export the data once a night and read it from that exported file.
> My questions is this: Is there a program out there that will export a
> foxpro table to an xml file like once a night (or at any specified
> interval)?
> Any other solutions somebody can think of would be helpful. Thanks!
>
Since this is a SQL Server newsgroup, I'll suggest a SQL Server
solution... :-)
You could setup a linked server between SQL and your Foxpro database,
and then use a scheduled SQL job to export the data from Foxpro to an
XML file. Or just let your web site connect to SQL like the rest of the
world does.|||How would I set up a linked server?
Tracy McKibben wrote:
> Since this is a SQL Server newsgroup, I'll suggest a SQL Server
> solution... :-)
> You could setup a linked server between SQL and your Foxpro database,
> and then use a scheduled SQL job to export the data from Foxpro to an
> XML file. Or just let your web site connect to SQL like the rest of the
> world does.|||Luke.Visinoni@.gmail.com wrote:
> How would I set up a linked server?
> Tracy McKibben wrote:
>> Since this is a SQL Server newsgroup, I'll suggest a SQL Server
>> solution... :-)
>> You could setup a linked server between SQL and your Foxpro database,
>> and then use a scheduled SQL job to export the data from Foxpro to an
>> XML file. Or just let your web site connect to SQL like the rest of the
>> world does.
>
Try to look up sp_addlinkedserver in Books Online - that will get you
started.
Regards
Steen Schlüter Persson
Databaseadministrator / Systemadministrator|||Hi Luke,
First, make sure you have the latest FoxPro and Visual FoxPro OLE DB data
provider, downloadable from msdn.microsoft.com/vfoxpro/downloads/updates.
To set up a linked server in SQL Server Management Studio's Object Explorer:
ServerName > Server Objects > Linked Servers > Right Click - New Linked
Server. In the dialog here's what I have:
Linked Server: LinkedServerNameHere
Provider: Microsoft OLE DB Provider for Visual FoxPro
Product Name: Visual FoxPro 9
Data Source: "C:\Program Files\Microsoft Visual FoxPro
9\Samples\Northwind\Northwind.dbc"
Provider String: VFPOLEDB.1
For the data source, if there is a DBC file present point directly to the
DBC file. If there is no DBC present just point to the directory where the
DBFs are located.
To access the data use code like:
Select Customers.* From LinkedServerNameHere...Customers
Cindy Winegarden MCSD, Microsoft Most Valuable Professional
cindy@.cindywinegarden.com
<Luke.Visinoni@.gmail.com> wrote in message
news:1150833731.102678.204090@.c74g2000cwc.googlegroups.com...
> How would I set up a linked server?
> Tracy McKibben wrote:
>> Since this is a SQL Server newsgroup, I'll suggest a SQL Server
>> solution... :-)
>> You could setup a linked server between SQL and your Foxpro database,
>> and then use a scheduled SQL job to export the data from Foxpro to an
>> XML file. Or just let your web site connect to SQL like the rest of the
>> world does.
>
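For anyone who prefers T-SQL to the Management Studio dialog, roughly the same linked server as Cindy describes can be created with sp_addlinkedserver; a sketch, where the server name is arbitrary and the path assumes the VFP 9 sample database:
EXEC sp_addlinkedserver
@server = 'LinkedServerNameHere',
@srvproduct = 'Visual FoxPro 9',
@provider = 'VFPOLEDB',
@datasrc = 'C:\Program Files\Microsoft Visual FoxPro 9\Samples\Northwind\Northwind.dbc'
GO
-- Query it with a four-part name, as in Cindy's example:
SELECT Customers.* FROM LinkedServerNameHere...Customers
-- For the original nightly-XML request, a scheduled job step could wrap
-- a query like this with FOR XML AUTO and bcp the result out to a file.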

automatically expand identity specification node?

subject says it all -- is it possible to automatically expand the identity specification node in table properties for the Management Studio or VS2005 diagram mode? when creating a DB, it seems ridiculous that for EVERY table I have to add an extra click to get to just one more clickable item that ought to be exposed by default.|||Yeah, that's kind of a pain. If you have a lot of design to do at once, it's easier to use T-SQL than the GUI tools. I don't know of a method to auto-expand that.|||

thx for the reply ... even if it's a year later

sure, T-SQL would be easier for the identity aspect ... but when I'm modelling a DB, it's too organic of a process to do in script. One of SQL Server's strengths has always been the DB modeller. I don't need to create an ERD and work off of that because the SQL Server diagram IS my initial ERD. Implementing ideas visually at the conceptual stage of DB design is critical IMO. Making the identity attribute more accessible, whether by default, by choice, or by rearranging that portion of the GUI, would be a big value add, again, IMO ...

|||Have you gone to the Microsoft site to make that suggestion? They have a place where they rank them.|||

Buck Woody - MSFT wrote:

Have you gone to the Microsoft site to make that suggestion? They have a place where they rank them.

Where at? Connections?

|||That's right - here is the link:

http://connect.microsoft.com/SQLServer

|||thanks for the link, hadn't used Connect in a while.
