
Tuesday, March 27, 2012

Automating "picture" of server performance

Hi all

I need to do the following using the Performance Monitor tool.

I'm monitoring the % CPU used, transactions/sec, and pages/sec of a server using Performance Monitor.

What I need to do is capture all of this information in a log that resets itself once it has reached its maximum size limit. I also need to be able to take a "screenshot" of what was in the log when it became full, but I cannot simply save the file because it would use too much space in the long run.

Is there a way for me to automate this and make it so that each time the log becomes full I can take a screenshot (by that I mean getting an image of the information in the form of a graph) of what was inside the log before it restarts?

Ask if you need more details

Dale

If you can't save the file, you probably can't save a graphic that would include everything and all the details in the file.

You can automate Perfmon and dump it to a CSV. But if you tried to create a screenshot that shows all the details of all the info in the CSV and save all of those, it's likely not going to save space. Maybe you should look at the sampling interval you are using in Perfmon. Or save the file somewhere off the server if it's just space on the server itself. Or load the CSV files into a table in a database. I'm just not following the whole screenshot concept. It's not going to be that useful in analysis to have a ton of screenshots to look through.
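If you do go the route of loading the CSV files into a table, here is a minimal sketch; the staging table name, file path, and column layout are assumptions based on the three counters mentioned above:

-- Perfmon quotes every value in its CSV logs, so stage the columns as
-- varchar and convert them after the load.
create table PerfLogStage (
    SampleTime  varchar(40),
    CpuPct      varchar(40),
    TransPerSec varchar(40),
    PagesPerSec varchar(40)
)

bulk insert PerfLogStage
from 'C:\PerfLogs\server_perf.csv'
with (
    fieldterminator = ',',
    rowterminator   = '\n',
    firstrow        = 2   -- skip the counter-name header row
)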

-Sue

|||

I need those numbers to help determine the best time of the week to monitor the system through a server-side trace.

I think the best approach will be to just keep the whole log and extract the information I want from it at the end of the week.
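For example, once the samples are in a table (continuing the hypothetical staging table sketched above), something like this could surface the busiest hours of the week:

-- Strip the quotes Perfmon writes, convert the values, then average
-- CPU by weekday and hour to find the heaviest periods.
select datepart(weekday, convert(datetime, replace(SampleTime, '"', ''))) as wkday,
       datepart(hour, convert(datetime, replace(SampleTime, '"', ''))) as hr,
       avg(convert(float, replace(CpuPct, '"', ''))) as avg_cpu
from PerfLogStage
group by datepart(weekday, convert(datetime, replace(SampleTime, '"', ''))),
         datepart(hour, convert(datetime, replace(SampleTime, '"', '')))
order by avg_cpu desc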

Sunday, March 11, 2012

Automatic deletion to allow import

Greetings All
I'm new to SQL. I'm using Standard edition (SP3).
I have recently had to sort out the import of a Unisys dbs into our SQL
server. This dbs needs to be imported a minimum of 3 times per week. It won't
overwrite an existing dbs, so the only way I can think to do this is by
deleting the existing dbs before the scheduled import. I know Unisys can
provide a component that allows synchronisation, but as with everything, that
costs mucho money.
Is there a way to delete the existing dbs automatically so that the dbs will
be imported through Data Transformation Services, local packages?
Or am I, due to my lack of knowledge, going about this the wrong way?
Andy
--
Ah! That's what pushing the button does!!!

|||

I don't know what a dbs is. Is it a table? An entire database of
many tables?
It is possible for DTS to delete the rows in a table before importing
new data. In the DTS wizard, click on the Transform column to find the
option, to the right of the target table you select. Save the package.
Roy

|||

Thanks Roy
dbs = database (Sorry for confusing you)
The problem is we have at least 150 tables.
A colleague found the "DROP" command... do you know if this could be used?
Andy
--
Ah! That's what pushing the button does!!!
"Roy Harvey" wrote:

> I don't know what a dbs is. Is it a table? And entire database of
> many tables?
> It is possible for DTS to delete the rows in a table before importing
> new data. In the DTS wizard click on the Transform column to find the
> option, to the right of the targe table you select. Save the package.
> Roy
>
> On Tue, 25 Jul 2006 08:18:01 -0700, sage
> <sage@.discussions.microsoft.com> wrote:
>
>|||Are you figuring on...
Dropping and re-creating the database?
Dropping and recreating the tables?
Truncating the tables and reloading the data?
What tool are you using for the load? Something from Unisys? Have
you already gone through the process the first time, and are you just trying
to make it run three times a week? Or is that first load still ahead
of you?
In general, DROP will let you drop a database or a table. TRUNCATE or
DELETE will let you remove the data from a table.
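In sketch form, with hypothetical names (these are alternatives; pick whichever level fits):

drop database UnisysCopy            -- removes the entire database
drop table dbo.ImportedOrders       -- removes one table and its definition
truncate table dbo.ImportedOrders   -- removes all rows, fast and minimally logged
delete from dbo.ImportedOrders      -- removes rows, fully logged; accepts a WHERE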
I have always found that the table definitions generated by automatic
load tools leave something - usually a great deal - to be desired.
Having all the character columns end up as NVARCHAR(4000), and all the
numbers end up as FLOAT, is not my idea of good data type choices. So
what I generally do is load the first time using whatever tool builds
the tables, then script the tables and fix the definitions. Those new
table definitions replace the first versions of tables, and I then
rework the load process to load into existing tables rather than
creating them.
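A couple of hypothetical examples of that clean-up, tightening what the load tool generated (the existing data must already fit the new types):

alter table dbo.ImportedOrders alter column CustomerCode varchar(12) not null
alter table dbo.ImportedOrders alter column OrderAmount decimal(10,2) null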
Roy Harvey
Beacon Falls, CT

|||

>Andy|||Well we have successfully done a copy already and found that if we wanted to
keep the database uptodate we needed to copy 3 times per week (for our
purposes) If the database exists it fails to copy. So we had looked at
deleting manually but then someone suggested that if we could incorporate
this in the import then that would solve it all in one go.
Basically I'm just trying to achive a copy of the Unisys database to the SQL
server using DTS on a schedual.
As for the table definitions I have a colleague who may have the skills
required. If we achive this can the import into existing tables be done via
DTS?
Andy
--
Ah! That's what pushing the button does!!!
"Roy Harvey" wrote:

> Are you figuring on...
> Ddropping and re-creating the database?
> Dropping and recreating the tables?
> Truncating the tables and reloading the data?
> What tool are you using for the load? Something from Unisys? Have
> you already gone through the process the first time, and just trying
> to make it run three times a week? Or is that first load still ahead
> of you?
> In general, DROP will let you drop a database or a table. TRUNCATE or
> DELETE will let you remove the data from a table.
> I have always found that the table definitions generated by automatic
> load tools leave something - usually a great deal - to be desired.
> Having all the character columns end up as NVARCHAR(4000), and all the
> numbers end up as FLOAT, is not my idea of good data type choices. So
> what I generally do is load the first time using whatever tool builds
> the tables, then script the tables and fix the definiions. Those new
> table definiions replace the first versions of tables, and I then
> rework the load process to load into existing tables rather than
> creating them.
> Roy Harvey
> Beacon Falls, CT
> On Wed, 26 Jul 2006 06:28:02 -0700, sage
> <sage@.discussions.microsoft.com> wrote:
>
>|||If you have already loaded the data onece you are well along.
Yes, import to existing tables can be done using DTS. If you start
with the import wizard, there is a spot where you can choose an
existing table (the Destination column), and next to that (the
Transform column) specify that the existing data in the table is to be
deleted before the fresh data is loaded. I generally use the wizard
to get me started, telling it to save the package, then edit the
package as required.
One choice you have to make is whether to make one huge DTS package,
an individual package for each table, or something in between.
Combining all the tables into one package is a bit less work to set
up, perhaps, and simpler to schedule, but has major shortcomings when
you need to run just one bit. Also, the tools for editing packages,
with the graphical representation of the objects, aren't the greatest
for working with large numbers of objects. So I would be inclined
toward an individual package for each table. A compromise would be to
use individual packages for large or problematic tables, and then
group the rest of the tables into logical sets, a package for each.
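As a sketch, each package can then be run from its own SQL Server Agent job step via dtsrun (the server and package names here are hypothetical):

-- /S = server, /E = Windows authentication, /N = package name
exec master..xp_cmdshell 'dtsrun /S MYSERVER /E /N "Load_Orders"'
exec master..xp_cmdshell 'dtsrun /S MYSERVER /E /N "Load_Customers"'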
Roy Harvey
Beacon Falls, CT

|||

>Andy|||Thanks very much for this Roy. I apriciate you taking the time to expliain
it. I think I'll follow your guidence here and although more work initialy,
do seperate packages. I'll see how it goes
Thank you once again
Andy
"Roy Harvey" wrote:

> If you have already loaded the data onece you are well along.
> Yes, import to existing tables can be done using DTS. If you start
> with the import wizard, there is a spot where you can choose an
> existing table (the Destination column), and next to that (the
> Transform column) specify that the existing data in the table is to be
> deleted before the fresh data is loaded. I generally use the wizard
> to get me started, telling it to save the package, then edit the
> package as required.
> Once choice you have to make is whether to make one huge DTS package,
> an individual package for each table, or something in between.
> Combining all the tables into one package is a bit less work to set
> up, perhaps, and simpler to schedule, but has major shortocomings when
> you need to run just one bit. Also, the tools for editing packages,
> with the graphical representation of the objects, isn't the greatest
> for working with large numbers of objects. So I would be inclined
> toward an individual package for each table. A compromise would be to
> use individual packages for large or problematic tables, and then
> group the rest of the tables into logical sets, a package for each.
> Roy Harvey
> Beacon Falls, CT
> On Wed, 26 Jul 2006 07:28:02 -0700, sage
> <sage@.discussions.microsoft.com> wrote:
>
>

Saturday, February 25, 2012

Auto-Increment of varchar primary key

Hi All
I am looking for a bit of advice.
I am in the process of creating a database in which it has been decided that
all primary keys are going to be varchar(40). Not my decision, but anyway.
When inserting into each table it will be possible to specify a value for
the primary key, but if not specified a value should be auto-generated. That
means that the values in the primary key field can be a mixture of both
numbers and letters, but if auto-generated it should just be a number.
What would be the best way to generate these values if no value is
specified in the insert?
TIA
Klaus

|||

Depends really - if auto-generated, does it just need to be a number? Any old
number, or a specific format and range?
I always recommend putting a surrogate key on the tables and using that as the
foreign key and inside the application (not for display purposes, but for
use as the value in a listbox, for instance); that can be an int column with
the IDENTITY property, not null, with a unique constraint on it.
You could set the value of the primary key to that if not specified; that
would save calculating a new unique number.
Otherwise, you could use an 'instead of' trigger, for example...
Instead of using MAX, you could take the value from a table that holds the
last number used.
create table testtrg (
    mycol int not null unique
)
go

insert testtrg ( mycol ) values ( 1 )
go

-- INSTEAD OF trigger: ignores the supplied value and generates the next
-- id itself. (Demo only: it generates one row per statement.)
create trigger trgTestTrg on testtrg instead of insert
as
begin
    if @@rowcount = 0
        return

    declare @nextid int

    begin tran

    -- TABLOCKX serialises concurrent inserts so two sessions cannot
    -- compute the same MAX value.
    set @nextid = ( select max( mycol )
                    from testtrg with (tablockx) )
    set @nextid = isnull( @nextid, 0 ) + 1

    insert testtrg values( @nextid )

    commit tran
end
go

-- Note: inserting 1, which already exists, would normally give a key
-- violation, but the INSTEAD OF trigger kicks in and supplies the next id.
select * from testtrg
insert testtrg ( mycol ) values( 1 )
select * from testtrg
insert testtrg ( mycol ) values( 1 )
select * from testtrg
insert testtrg ( mycol ) values( 1 )
select * from testtrg
go
Tony Rogerson
SQL Server MVP
http://sqlserverfaq.com - free video tutorials
"Klaus" <Klaus@.discussions.microsoft.com> wrote in message
news:FE28E558-F88F-4A9F-9AAA-40837A9966E9@.microsoft.com...
> Hi All
> I am looking for a bit of advice.
> I am in the process of creating a database in which it has been decided
> that
> all primary keys are going to varchar(40). Not my decision, but anyway.
> When inserting into each table it will be possible to specify a value for
> the primary, but if not specified a value should be auto-generated. That
> means that the values in the primary key field can be a mixture of both
> numbers and letters, but if auto-generated it should just be a number.
> What be the best way to make this autogenerated values if no value is
> being
> specified in the insert?
> TIA
> Klaus
>|||Thanks a lot, Tony. That was very helpfull.
I will create a unique field on each of my tables. The value for this will
be auto-generated using IDENTITY. A trigger will then keep an eye on the
inserts. If no value is specified for the primary key, the identity
value will be copied into the varchar(40) primary key field.
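A minimal single-row sketch of that plan (table and column names are hypothetical; the caller passes NULL for the key to request auto-generation):

create table Clients (
    RowId int identity(1,1) not null unique,    -- auto-generated surrogate
    ClientKey varchar(40) not null primary key, -- supplied, or copied from RowId
    ClientName varchar(100) not null
)
go

create trigger trgClientsIns on Clients instead of insert
as
begin
    declare @key varchar(40), @name varchar(100)

    select @key = ClientKey, @name = ClientName from inserted

    if @key is not null
        insert Clients ( ClientKey, ClientName ) values ( @key, @name )
    else
    begin
        -- No key supplied: insert under a unique placeholder, then
        -- copy the generated identity value into the primary key.
        insert Clients ( ClientKey, ClientName )
        values ( cast( newid() as varchar(40) ), @name )

        update Clients
        set ClientKey = cast( RowId as varchar(40) )
        where RowId = scope_identity()
    end
end
go

insert Clients ( ClientKey, ClientName ) values ( 'ABC-01', 'Alpha Ltd' )
insert Clients ( ClientKey, ClientName ) values ( NULL, 'Beta Ltd' ) -- key becomes RowId
select * from Clients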
-- Klaus
"Tony Rogerson" wrote:

> Depends really, if auto-generated does it just need to be a number? Any ol
d
> number, or a specific format and range?
> I always recommend putting a surrogate key on the tables and use that as t
he
> foriegn key and inside the application (not for display purposes, but for
> use as the value in a listbox for instance), that can be a int column with
> the IDENTITY property, not null and have a unique constraint on it.
> You could set the value of the primary key to that if not specified, that
> would save calculating a new unique number.
> Otherwise, you could use an 'instead of' trigger, for example...
> Instead of using MAX, you could take the value from a table that holds the
> last number used.
> create table testtrg (
> mycol int not null unique
> )
> go
>
> insert testtrg ( mycol ) values ( 1 )
> go
>
> create trigger trgTestTrg on testtrg instead of insert
> as
> begin
> if @.@.rowcount = 0
> return
>
> declare @.nextid int
>
> begin tran
>
> set @.nextid = ( select max( mycol )
> from testtrg with (tablockx) )
>
> set @.nextid = isnull( @.nextid, 0 ) + 1
>
> insert testtrg values( @.nextid )
>
> commit tran
>
> end
> go
>
> -- Note, inserting 1 but it already exists so should give a key violation
,
> -- but the instead of trigger code kicks in and gives the next id.
> select * from testtrg
> insert testtrg ( mycol ) values( 1 )
> select * from testtrg
> insert testtrg ( mycol ) values( 1 )
> select * from testtrg
> insert testtrg ( mycol ) values( 1 )
> select * from testtrg
> go
>
>
> --
> Tony Rogerson
> SQL Server MVP
> http://sqlserverfaq.com - free video tutorials
>
> "Klaus" <Klaus@.discussions.microsoft.com> wrote in message
> news:FE28E558-F88F-4A9F-9AAA-40837A9966E9@.microsoft.com...
>
>|||The second part of Tony's point should not be lost. You should use that int
key as the FK for relationships with other tables. If you need to show your
client the benefit of using an int instead of a varchar(40). Load up a
couple of tables with some test data. Perform join's using varchar(40) as
the keys and then the same using int as the key. The performance difference
is noticable.
So use an int (or even bigint) PK, put a unique constraint on the
varchar(40) column and for all business logic purposes, the varchar(40) fiel
d
is the "key". But behind the scenes in the database the far more efficient
int is the key.
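In sketch form (all names hypothetical):

create table Customers (
    CustomerId int not null identity(1,1) primary key, -- joins use this
    CustomerKey varchar(40) not null unique            -- the business "key"
)

create table Orders (
    OrderId int not null identity(1,1) primary key,
    OrderKey varchar(40) not null unique,
    -- the FK references the narrow int surrogate, not the varchar(40) key
    CustomerId int not null references Customers(CustomerId)
)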
John Scragg
"Klaus" wrote:
> Thanks a lot, Tony. That was very helpfull.
> I will create a unique field on each of my tables. The value for this will
> be auto-generated using identity. A trigger will then keep an eye on the
> inserts. If no value is being specified for the Primary key, the Identity
> value will be copied into the varchar(40) primary key field.
> -- Klaus
> "Tony Rogerson" wrote:
>|||Also, if you need the data in a varchar(40) field you can use a calculated
column (if they dont need to enter it).
I concurr with Tony & John, if you're doing joins, definately use the INT
field as the joining field, joining on varchar fields gets very slow at
medium to high data volumes.
create table ( id int identity(1,1) primary key , myPK AS cast( ID as
varchar(40)) )
"John Scragg" <JohnScragg@.discussions.microsoft.com> wrote in message
news:EDB86798-0F96-415A-9D8D-733ED2E0CA02@.microsoft.com...
> The second part of Tony's point should not be lost. You should use that
int
> key as the FK for relationships with other tables. If you need to show
your
> client the benefit of using an int instead of a varchar(40). Load up a
> couple of tables with some test data. Perform join's using varchar(40) as
> the keys and then the same using int as the key. The performance
difference
> is noticable.
> So use an int (or even bigint) PK, put a unique constraint on the
> varchar(40) column and for all business logic purposes, the varchar(40)
field
> is the "key". But behind the scenes in the database the far more
efficient
> int is the key.
> John Scragg
> "Klaus" wrote:
>
will
Identity
Any old
as the
for
with
that
the
violation,
decided
anyway.
value for
That
both
number.
is