
Sunday, March 25, 2012

Automatically generating PDF with ReportViewer

Hi, I've been using the Report viewer control for a bit, and was wondering how I can automatically run the report as a PDF. I can run it and prompt the user to open or save the PDF, but I'm not sure how to auto generate it. This is the code which prompts the user:

Protected Sub Page_SaveStateComplete(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.SaveStateComplete

    Dim pdfContent As Byte() = Me.ReportViewer1.LocalReport.Render("pdf", Nothing, Nothing, Nothing, Nothing, Nothing, Nothing)

    Me.Response.Clear()

    Me.Response.ContentType = "application/pdf"

    Dim modCode As String = Request.QueryString("modCode")

    Me.Response.AddHeader("Content-disposition", "attachment; filename=ModuleCode-" & modCode & ".pdf")

    Me.Response.BinaryWrite(pdfContent)

    Me.Response.End()

End Sub

Has anybody done this?

Thanks

Hi,

You want the PDF file that Reporting Services generates to be saved automatically, right?

Based on my understanding, once you have generated the Byte() content for your PDF file, you should use the FileStream and FileInfo classes to create the corresponding file, write the content into it, and save it onto your hard disk.

If you want the method to be called automatically, invoke it when the page loads.
For more reference on FileStream and FileInfo, see:

http://msdn2.microsoft.com/en-us/library/system.io.fileinfo.aspx
http://msdn2.microsoft.com/en-us/library/system.io.filestream.aspx
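To make the idea concrete, here is a minimal sketch that reuses the Render call from the question and writes the bytes straight to disk instead of streaming them to the browser. The output folder and file name are illustrative, not from the original post.

```vb
' Sketch: render the report to PDF and save it directly to disk.
' The D:\Reports path is an illustrative assumption.
Dim pdfContent As Byte() = Me.ReportViewer1.LocalReport.Render("pdf", Nothing, Nothing, Nothing, Nothing, Nothing, Nothing)
Dim outputPath As String = "D:\Reports\ModuleCode-" & Request.QueryString("modCode") & ".pdf"
' File.WriteAllBytes (System.IO, .NET 2.0+) creates or overwrites the file in one call.
System.IO.File.WriteAllBytes(outputPath, pdfContent)
```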

Thanks.

|||

Thanks for that, I'll have a look. Also, when I ask the user to save as .xls or .pdf for example, how can I specify that the file will be saved on the D drive, rather than the C drive as the C drive is full?

Thanks

|||

Hi,

When you are creating and storing the file, you can specify which disk drive it is stored on. You can also import the "System.Runtime.InteropServices" namespace to check the free disk space, so you can decide which drive to store the file on.
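As a sketch of the free-space check: on .NET 2.0 and later the DriveInfo class avoids the interop call entirely (the System.Runtime.InteropServices route means P/Invoking GetDiskFreeSpaceEx on .NET 1.x). The 100 MB threshold below is an illustrative assumption.

```vb
' Sketch: fall back to the D drive when C is low on space.
' DriveInfo is available from .NET 2.0 onwards.
Dim target As String = "C:\"
Dim c As New System.IO.DriveInfo("C")
If c.AvailableFreeSpace < 100 * 1024 * 1024 Then ' illustrative threshold (~100 MB)
    target = "D:\"
End If
```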

Thanks.

Tuesday, March 20, 2012

Automatic Update From Access?

Hey all,
This is one of my first forays into SQL server, so I don't know how dumb this question is. I've got a SQL server db that's going to be the underlying source for an ASP app. Two of the tables will need to be updated somewhat regularly, perhaps 5 or 6 times
a month. We actually get the info in the form of an Access database.
All I need is to take My_Table and My_Other_Table from the Access DB and use them to replace My_Table and My_Other_Table in the SQL Server DB - I'll keep the same table names, column names, and I don't need to play around w/ data types.
What's the best way to do this? BCP, DTS, etc? I'd appreciate some guidance. Thanks.
The simplest way (IMO) is to use a transform data task in DTS.
First you could create the tables on sql server with the datatypes you
require - before creating the package.
Next you create a package with a transform data task for each table's
import. Before the transform data task runs, you have an Execute SQL task
which truncates the tables.
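The Execute SQL task described above would contain nothing more than the truncations, for example (table names taken from the question; the dbo schema is an assumption):

```sql
-- Run by the Execute SQL task before the transform data tasks fire,
-- so each import replaces the old data rather than appending to it.
TRUNCATE TABLE dbo.My_Table;
TRUNCATE TABLE dbo.My_Other_Table;
```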
After that you can schedule the task. BTW, I prefer to use DTSRUNUI to
generate the readable command-line string and use that, just so I can see
which version and package is really running in a job.
HTH,
Paul Ibison

Monday, March 19, 2012

Automatic Identity Range Management

Hello,
I've got a question dealing with automatic identity range management.
Currently we have a database setup for publication in which the subscribers
are running SQL Server CE on mobile devices. We've got automatic identity
range management turned on for a few of our tables and we're having some
issues with the publisher identity ranges not getting automatically
reassigned after they are exhausted. I found this statement in the SQL Server
manuals:
"If the Publisher exhausts its identity range after an insert, it can
automatically assign a new range if the insert was performed by a member of
the db_owner fixed database role."
The user that is inserting the rows into the publisher database is a member
of the db_owner fixed database role, but when a row is inserted which
exhausts the current identity range, it is not automatically reassigned. I
can manually adjust the identity ranges using sp_adjustpublisheridentityrange
logged in as the same user, but they are not automatically adjusted after
the insert as the documentation suggests.
What else should we check for?
Thanks in advance,
mike...
You have to size the ranges for the max amount of inserts that would occur
between syncs. Otherwise the range will not be adjusted.
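For reference, the range sizes Hilary mentions are set when the article is added to the publication. A sketch using sp_addmergearticle's identity-range parameters (the publication, article, and range values here are illustrative, not recommendations):

```sql
-- Sketch: size the publisher and subscriber identity ranges so they are
-- not exhausted between syncs. Parameter names are from sp_addmergearticle;
-- the values shown are illustrative.
EXEC sp_addmergearticle
    @publication = 'MyPublication',
    @article = 'MyTable',
    @source_object = 'MyTable',
    @identityrangemanagementoption = 'auto',
    @pub_identity_range = 100000,  -- range assigned to the publisher
    @identity_range = 10000,       -- range assigned to each subscriber
    @threshold = 80                -- percent used before a new range is requested
```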
Hilary Cotter
Director of Text Mining and Database Strategy
RelevantNOISE.Com - Dedicated to mining blogs for business intelligence.
This posting is my own and doesn't necessarily represent RelevantNoise's
positions, strategies or opinions.
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"Mike..." <Mike@.discussions.microsoft.com> wrote in message
news:5D34048F-05C2-4502-B295-C10AB89AF4BE@.microsoft.com...
|||
Hilary, thanks for the reply. The problem we've run into is that the user
accounts (Windows Auth) syncing the devices are not members of the db_owner
role, so the merge agents do not refresh the identity ranges (which is another
issue we've been attempting to work around). From our experience (and from
reading the SQL Server manuals) the publisher identity ranges should refresh
themselves if the inserts are done at the publisher by users in the db_owner
fixed database role. This isn't occurring at one of our sites, and I'm
wondering what other security arrangements need to be made for this to occur.
Any ideas?
Thanks again,
mike...
"Hilary Cotter" wrote:

> You have to size the ranges for the max amount of inserts that would occur
> between syncs. Otherwise the range will not be adjusted.

Automatic Identity Range Handling warning message.

I've implemented merge replication with SQL Server and an Access database.
SQL Server is the publisher and the Access database is the subscriber.
I have ONE table in the schema that can be updated on both the subscriber
side and the publisher side. I have an IDENTITY column in this table
and I have set the "Automatic Identity Range Handling" property ON for this
table.
I've also created a script to create this publisher. Now, here's my problem.
When I create the publisher, I get the warning message stated below:
"Warning: only Subscribers running SQL Server 2000 can synchronize with
publication '<database>' because automatic identity ranges are being used."
OK, I've done some reasonable testing of this statement and it SEEMS that
synchronization with Automatic Identity Range Handling enabled DOES work for
Access databases as well, even though the warning states otherwise! I'm using
Jet to connect to Access, not the desktop engine. Once the threshold is
reached, and after the merge agent kicks in, the identity seed is reset to
the appropriate values on both the Access and the SQL Server side.
Appropriate check constraints are also placed.
I just want to make sure this is correct - and that this warning is just
bogus.
Thanks,
Girish
Hi Girish,
From your description, I understand that you would like to know whether the
warning about "automatic identity ranges are being used" matters. Have I
understood you correctly? If I have misunderstood anything, please feel free
to let me know.
As far as I know, there are some known issues with automatic identity
ranges. You can have a look at them:
BUG: Unable to Change Identity Range of Publisher If You Use Auto Identity
Range
http://support.microsoft.com/default...b;en-us;310540
BUG: Identity Range Not Adjusted on Publisher When Merge Agent Runs
Continuously
http://support.microsoft.com/default...b;en-us;304706
Additionally, there is a by-design issue for automatic identity ranges,
based on the descriptions in the following document:
PRB: Automatic Identity Range Handling Is Not Correct If the Merge Agent
Runs an Insert Trigger
http://support.microsoft.com/default...b;en-us;324361
I am afraid these are the reasons this warning is shown.
Thank you for your patience and cooperation. If you have any questions or
concerns, don't hesitate to let me know. We are here to be of assistance!
Sincerely yours,
Mingqing Cheng
Microsoft Developer Community Support
Introduction to Yukon! - http://www.microsoft.com/sql/yukon
This posting is provided "as is" with no warranties and confers no rights.
Please reply to newsgroups only, many thanks!

Thursday, March 8, 2012

Automated Backup to Remote Server

Hi all ~
I've been able to use the command line listed below to automate a
backup. One snag - I'm trying to back up to an alternate server. In
my case, my servers are grab-02 (the database node) and grab-01 (the
location of the desired backup).
I do have shared folders on both servers.
How can I use the following command to back up from grab-02 to grab-01?
What would be the fully qualified path for the shared docs folder on
grab-01?
C:\Documents and Settings\GB>osql -U sa -P password -S grab-02 -Q "BACKUP DATABASE gb_production TO DISK = 'c:\database_backup'"
Hi,
Provide a UNC path with the BACKUP DATABASE command.
Eg:
Backup database gb_production TO DISK = '\\grab-01\share_name\dbname.bak'
Please go through the script below: it takes the UNC path as a parameter
and backs up master, msdb and all the user databases to the remote machine.
The script creates unique backup file names, which ensures that old backup
sets are not overwritten.
Prerequisites
1. SQL Server and SQL Server Agent should be configured to start under a
domain account.
2. This domain account should have change privileges to add files to the
remote machine.
Script
CREATE PROCEDURE BACKUP_SP @UNCPATH VARCHAR(200) AS
BEGIN
SET NOCOUNT ON
DECLARE @NAME VARCHAR(100),
DECLARE @DBNAME VARCHAR(100)
DECLARE BACKUP_CUR CURSOR FOR
SELECT name FROM master..Sysdatabases where name not in
('model','pubs','tempdb','northwind')
OPEN BACKUP_CUR
FETCH NEXT FROM BACKUP_CUR INTO @DBNAME
WHILE @@FETCH_STATUS=0
BEGIN
SELECT
NAME=@UNCPATH+@DBNAME+'_'+ltrim (rtrim (convert (char,
getdate(),105)))+'Dump.bak'
BACKUP DATABASE @DBNAME TO DISK = @NAME WITH INIT , NOUNLOAD ,
NAME = @DBNAME, NOSKIP, STATS = 10, NOFORMAT
FETCH NEXT FROM BACKUP_CUR INTO @DBNAME
END
CLOSE BACKUP_CUR
DEALLOCATE BACKUP_CUR
END
How to Execute:
This procedure takes @UNCPATH as the input parameter. Say you have to back
up the databases to machine BACKUPSERVER into share SQLBACKUP; the execution
will then be
EXEC BACKUP_SP '\\BACKUPSERVER\SQLBACKUP\'
This will back up all the databases to the SQLBACKUP share on BACKUPSERVER.
Thanks
Hari
MCDBA
"Jerry Penna" <jerrypenna@.msn.com> wrote in message
news:3ac97809.0408271340.382f4981@.posting.google.c om...
|||
Thanks Hari ~
A couple of questions: can you explain your prerequisites 1 and 2? I know
little about SQL Server and XP - I'm an Oracle DBA familiar with UNIX.
Also, when I ran the script you provided, I received the following error:
Server: Msg 156, Level 15, State 1, Procedure BACKUP_SP, Line 5
Incorrect syntax near the keyword 'DECLARE'.
Thanks again.
"Hari Prasad" <hari_prasad_k@.hotmail.com> wrote in message news:<uSLxD5ajEHA.1040@.TK2MSFTNGP09.phx.gbl>...
|||
The comma at the end of line 4 shouldn't be there...
On 30 Aug 2004 13:51:50 -0700, Jerry Penna <jerrypenna@.msn.com> wrote:
Glenn Adams
Tiber Creek Consulting
http://www.tibercreek.com
glenn@.tibercreek.com
Please DO NOT respond to me directly but post all responses here in the
newsgroup so that all can share the information
|||
I've corrected the comma, thanks, but now there is a syntax error at line 23:
Server: Msg 170, Level 15, State 1, Procedure BACKUP_SP, Line 21
Line 21: Incorrect syntax near '@.DBNAME'.
"Glenn Adams" <glenn@.tibercreek.com.nospam> wrote in message news:<opsdlnonryfcuvf7@.saruman>...
> The comma at the end of line 4 shouldn't be there...
|||
"NAME=@UNCPATH+@DBNAME"
should be
"@NAME=@UNCPATH+@DBNAME"
On 31 Aug 2004 13:53:44 -0700, Jerry Penna <jerrypenna@.msn.com> wrote:
Glenn Adams
Tiber Creek Consulting
http://www.tibercreek.com
glenn@.tibercreek.com
Please DO NOT respond to me directly but post all responses here in the
newsgroup so that all can share the information
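Putting Glenn's two corrections together (drop the stray comma after the first DECLARE, and assign to @NAME rather than a NAME column), Hari's procedure reads roughly as follows; @NAME is also widened a little so long UNC paths fit. This is a sketch of the thread's combined result, not independently tested:

```sql
CREATE PROCEDURE BACKUP_SP @UNCPATH VARCHAR(200) AS
BEGIN
    SET NOCOUNT ON
    DECLARE @NAME VARCHAR(300)    -- full backup file path (widened from 100)
    DECLARE @DBNAME VARCHAR(100)  -- current database name
    DECLARE BACKUP_CUR CURSOR FOR
        SELECT name FROM master..sysdatabases
        WHERE name NOT IN ('model', 'pubs', 'tempdb', 'northwind')
    OPEN BACKUP_CUR
    FETCH NEXT FROM BACKUP_CUR INTO @DBNAME
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- Build a unique file name: <UNC path><db>_<dd-mm-yyyy>Dump.bak
        SELECT @NAME = @UNCPATH + @DBNAME + '_' +
               LTRIM(RTRIM(CONVERT(CHAR, GETDATE(), 105))) + 'Dump.bak'
        BACKUP DATABASE @DBNAME TO DISK = @NAME
            WITH INIT, NOUNLOAD, NAME = @DBNAME, NOSKIP, STATS = 10, NOFORMAT
        FETCH NEXT FROM BACKUP_CUR INTO @DBNAME
    END
    CLOSE BACKUP_CUR
    DEALLOCATE BACKUP_CUR
END
```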

Saturday, February 25, 2012

Autogrowth automatically set to 640 percent

Hello,
I've configured SQL Server 2005 Express to use autogrowth by 5 MB.
After a few weeks, the autogrowth option automatically switched to 640
percent, which will lead to performance problems.
After I set the autogrowth back to 5 MB, the same problem occurred a few
weeks later.
Does anyone know what can cause this problem?
Thanks in advance,
Pim
Which service pack is installed?
You can disable autogrow and set the database size manually as a temporary
workaround.
greetz
"Pim75" wrote:

> Hello,
> I've configured SQL Server 2005 express to use autogrowth by 5mb.
> After a few weeks the autogrowth option automatically switched to 640
> percent what will lead to performance problems.
> After is set back the autogrowth to 5mb, a few weeks later the same
> problem occurs.
> Does anyone know what can cause this problem?
> Thanks in advance,
> Pim
>
|||
We use Microsoft SQL Server Express Edition, version 9.00.1399.06, on
Windows 2003 Web Server Edition, Service Pack 1.
We have some more servers, but this is the only one where the problem
occurs.
On 13 jun, 13:37, Hate_orphaned_users
<Hateorphanedus...@.discussions.microsoft.com> wrote:
> Which servicepack is installed?
> You can disable autogrow and set the database size manually as temporarly
> workaround.
> greetz
|||
Pim75 wrote:
Hi
Not that it helps you a lot, but we see the same thing happening on one
specific database. Occasionally the mdf file for this database is set to a
growth rate of 12800%, and the file is then around 258 GB.
This started after we moved the database from a SQL 2000 server to a
SQL 2005 server. The vendor knows about the issue and has the same problem
with a few other customers. Since it happens quite seldom, they haven't
been able to figure out what causes it. As a temporary workaround, I've
written a little SQL script that runs every 15 minutes to check the file
size and file growth. If either has changed since the last run, it sends me
an email, and I can then go in and correct the values.
Regards
Steen Schlüter Persson
Database Administrator / System Administrator
|||
Personally, I would just have the job issue the necessary ALTER DATABASE
statement to fix the growth increment when it is identified (as well as
send an email that it had been done).
Also, I didn't investigate, but it may be possible to use a DDL trigger to
trap ALTER DATABASE calls on this database to 1) prevent the growth factor
change and 2) identify the offending code and get it fixed.
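As an untested sketch of both suggestions (the trigger idea is explicitly unverified in the post itself; the database name, logical file name, and trigger name are all illustrative):

```sql
-- Fix the growth increment back to 5 MB when the monitor detects a change.
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = 'MyDatabase_Data', FILEGROWTH = 5MB);

-- Server-scoped DDL trigger to log and roll back ALTER DATABASE calls.
-- Untested idea, as the post says; it would also block legitimate changes.
CREATE TRIGGER trg_CatchAlterDatabase
ON ALL SERVER
FOR ALTER_DATABASE
AS
BEGIN
    DECLARE @cmd NVARCHAR(MAX)
    SET @cmd = EVENTDATA().value(
        '(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)')
    PRINT @cmd   -- capture the offending statement
    ROLLBACK     -- prevent the change
END
```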
TheSQLGuru
President
Indicium Resources, Inc.
""Steen Schlter Persson (DK)"" <steen@.REMOVE_THIS_asavaenget.dk> wrote in
message news:%23lpkSXbrHHA.1296@.TK2MSFTNGP06.phx.gbl...

Thursday, February 16, 2012

Auto update statistics

I've seen conflicting information about having auto-update statistics on.
One view was that having this on would cause spikes in utilization whenever
the auto-update kicks in during peak load, so the option should be turned
off and update statistics should be run via a job during off hours.
The other view was that it didn't matter.
Any thoughts?
Thanks
Bob Castleman
SuccessWare Software
Bob Castleman wrote:
You're right that there is no clear consensus. Some customers do perform
statistics updates after hours. They may be doing this because the
database was around during the SQL 7 days, or because the amount of data
loaded into tables during peak hours does not generally affect the
statistical distribution of data in those tables.
If you have tables that are affected by large loads during work hours
and are worried that queries that access those tables are likely to use
inefficient plans because of outdated statistics, then keep the option
on. Certainly, running a bunch of inefficient queries will put more
stress on the server than allowing SQL Server to update statistics as it
sees fit. OTOH, if your tables are unaffected by large changes during
the day, you can update statistics at night (daily, weekly, or monthly
as you see fit).
It all depends on your data and the queries hitting your data.
What is your situation?
David Gugick
Imceda Software
www.imceda.com
|||
Bob,
Yes, I come up against this a lot. I am a freelance consultant and I
hear both sides of the argument - the "correct" answer is that it
depends on your environment. If leaving auto-update stats on is
acceptable to the users -- i.e. they perceive no degradation in
performance -- then leave it on.
Unless your system is processing many transactions per second - say
10-100 - then perhaps turn it off and schedule update stats manually in a
maintenance window.
Some places I go to have 24x7 shops where there simply isn't a
maintenance window, so they just leave auto-update stats on all the time
anyway. You need to base your decision on how it affects your users.
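For completeness, the toggle and the scheduled alternative look like this (the database name is illustrative):

```sql
-- Turn automatic statistics updates off for one database...
ALTER DATABASE MyDatabase SET AUTO_UPDATE_STATISTICS OFF;

-- ...and instead refresh statistics from an Agent job in the
-- maintenance window.
EXEC sp_updatestats;
```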
Mark Allison, SQL Server MVP
http://www.markallison.co.uk
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602m.html
Bob Castleman wrote:
|||
Our production environment hosts about 125 of our clients in a Citrix farm
against two Active/Passive clusters. Probably 90% of the load occurs between
8:00 AM and 6:00 PM, and we have a defined maintenance window of 12:00 AM to
5:00 AM. No single client has huge processing needs, but the aggregate load
can get pretty heavy. We recently solved a problem that was sucking up
processor cycles and limited the number of clients on a database server. We
want to increase the number of clients per database server, so I am trying
to identify things I can do beforehand that might help keep things under
control. We expect to triple or quadruple the number of clients we are
hosting over the next 24 to 36 months, so I am trying to get everything as
stable as possible now instead of waiting for fires to start.
We are also looking at tools to help us profile usage patterns and the
like, so we can start optimizing the queries and indexing. The application
was not designed for a hosted environment and query optimization has been
ad hoc in the past. It is highly likely that there are numerous inefficient
queries.
Thx,
Bob
"David Gugick" <davidg-nospam@.imceda.com> wrote in message
news:uxFc8EX4EHA.2012@.TK2MSFTNGP15.phx.gbl...
|||
Our transaction rate runs 500-1000 per second. (See my response to Dave
Gugick for more info on our production environment.)
It sounds like this is a peripheral issue rather than one that will show
any significant change. I am probably better off digging deeper into
optimization of the queries.
Thx,
Bob
"Mark Allison" <marka@.no.tinned.meat.mvps.org> wrote in message
news:ejzUXsc4EHA.3820@.TK2MSFTNGP11.phx.gbl...
|||
Bob,
OK, that's quite healthy! :-)
If these transactions are DML statements, then you may find that
auto-update stats has an effect on performance. This effect could be
positive or negative! NOT switching on auto-update stats may actually be
negative in such a high-transaction environment. Is there any way you
can take a Profiler trace and replay it in a test environment,
benchmarking your system with and without auto-update stats? That way you
will know for sure how it is going to affect you.
Obviously you will need similar-spec hardware - not sure how feasible
this is. You may also wish to connect a typical user application to this
test environment while the trace is playing, to see how well it "feels".
Mark Allison, SQL Server MVP
http://www.markallison.co.uk
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602m.html
Bob Castleman wrote:
|||
Bob,
Is it possible for you to email me directly? I have a question for you but
don't know how to reach you. Please remove the "nooospam" from my reply
address.
Thanks
Andrew J. Kelly SQL MVP
"Bob Castleman" <nomail@.here> wrote in message
news:uTl8I7e4EHA.1260@.TK2MSFTNGP12.phx.gbl...
|||
We are actually in the process of creating a test environment that includes
a mini Citrix farm and a clustered SQL setup for precisely this purpose.
It's still not complete. I'll probably put the auto-update issue on a task
list and try addressing it as you suggest. Turning it off just to see what
happens sounds too risky, and it makes much more sense to work out a
reasonable test.
Thx,
Bob
"Mark Allison" <marka@.no.tinned.meat.mvps.org> wrote in message
news:%23U2Xyaf4EHA.2180@.TK2MSFTNGP10.phx.gbl...

Monday, February 13, 2012

Auto numbering field similar to

I've been using MS Access 2000 for a while and have recently switched to MS
SQL 2000. When creating a primary key I am used to MS Access's ability to
automatically enter a number in the ID field when I enter data into my tables.
Does MS SQL have a feature similar to this? I checked all the data types and
the only thing that I see that is close to autonumber is uniqueidentifier. Is
that the same thing?
Walker_Michael wrote:
> I've been using MS Access 2000 for a while and have recently switched
> to MS SQL 2000. When creating a primary key I am used to MS Access's
> ability to automatically enter a number in the ID field when I enter
> data into my tables. Does MS SQL have a feature similar to this? I
> checked all the data types and the only thing that I see that is
> close to autonumber is uniqueidentifier. Is that the same thing?
No, not really. What you want is an IDENTITY column (attached to a
numeric data type) as in:
Create Table Customers (
CustID INT IDENTITY NOT NULL )
Unique identifiers can be used as well, but you need to generate the
value manually using the NEWID() function. They consist of a 16-byte
number, usually displayed in hexadecimal (a GUID). Some SQL Server users
use them as keys, and they are used frequently in replication. An INT
IDENTITY can accommodate more than 2 billion values and is only 4 bytes
as opposed to 16. To return the last identity value inserted, you should
use SCOPE_IDENTITY(). From a stored procedure, you could use:
Create Proc dbo.UpdateCustomer
@CustID INT OUTPUT,
@CustName VARCHAR(50)
as
Begin
If @CustID IS NOT NULL
Update dbo.Customers
Set CustName = @CustName
Where CustID = @CustID
Else
Begin
Insert dbo.Customers (
CustName)
Values (
@CustName )
Set @CustID = SCOPE_IDENTITY()
End
End
To call this procedure:
Declare @CustID INT
Exec dbo.UpdateCustomer @CustID OUTPUT, 'David Gugick'
Select @CustID
David Gugick
Imceda Software
www.imceda.com

Friday, February 10, 2012

Auto Grow/Shrink

I've got a half-terabyte 2000 server with 100+ databases on it. The larger
databases I've taken off of auto shrink. But I am wondering if the 80-90
smaller databases are causing some performance issues with auto shrinks
(which I left on just for ease/convenience).
To check the server out, I turned Profiler on and captured various Auto
Shrink/Grow events. I ran this overnight and into the production hours today
and I got very few of these and so I'm wondering if I did it right.
I simply selected from my trace where event class in 92-95. Is that the
right way to do it? (I want to make sure that I'm not missing these.)
CLM,
Take a look at:
http://www.aspfaq.com/show.asp?id=2471
and
http://www.karaszi.com/SQLServer/info_dont_shrink.asp
HTH
Jerry
"CLM" <CLM@.discussions.microsoft.com> wrote in message
news:A69B6508-CE0A-4AA7-8575-13D593CE1353@.microsoft.com...
> I've got a half-terabyte 2000 server with 100+ databases on it. The
> larger
> databases I've taken off of auto shrink. But I am wondering if the 80-90
> smaller databases are causing some performance issues with auto shrinks
> (which I left on just for ease/convenience).
> To check the server out, I turned Profiler on and captured various Auto
> Shrink/Grow events. I ran this overnight and into the production hours
> today
> and I got very few of these and so I'm wondering if I did it right.
> I simply selected from my trace where event class in 92-95. Is that the
> right way to do it? (I want to make sure that I'm not missing these.)
|||Thx, but these I know. I have an unusual configuration that imo requires
autoshrink (or a lot more disk).
Did I do the querying correctly?
"Jerry Spivey" wrote:

> CLM,
> Take a look at:
> http://www.aspfaq.com/show.asp?id=2471
> and
> http://www.karaszi.com/SQLServer/info_dont_shrink.asp
> HTH
> Jerry
> "CLM" <CLM@.discussions.microsoft.com> wrote in message
> news:A69B6508-CE0A-4AA7-8575-13D593CE1353@.microsoft.com...
>
>
|||CLM,
Yes the 'Data File Auto Grow', 'Data File Auto Shrink', 'Log File Auto
Grow', 'Log File Auto Shrink' event classes are the correct ones to use.
However, I really wouldn't recommend having 80-90 databases auto growing and
auto shrinking as this is likely to cause fragmentation and reduce
performance.
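If the trace was saved to a table, a query along these lines would confirm you captured them. (The trace table name below is a placeholder; the numeric IDs 92-95 are the event class IDs for those four events.)

```sql
-- Event class IDs: 92 = Data File Auto Grow, 93 = Log File Auto Grow,
-- 94 = Data File Auto Shrink, 95 = Log File Auto Shrink
SELECT DatabaseID, EventClass, StartTime, Duration
FROM dbo.MyTraceTable              -- placeholder name for the saved trace table
WHERE EventClass IN (92, 93, 94, 95)
ORDER BY StartTime
```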
HTH
Jerry
"CLM" <CLM@.discussions.microsoft.com> wrote in message
news:8DD5AE5E-2B7D-4891-A21A-31276BBF47F5@.microsoft.com...
> Thx, but these I know. I have an unusual configuration that imo requires
> autoshrink (or a lot more disk).
> Did I do the querying correctly?
> "Jerry Spivey" wrote:
|||I understand. But the problem I have - and I inherited this by the way - is
that most of those databases have one table with a clustered and a
nonclustered index. When I reindex them, the database expands to a little
over twice its size. And I just don't have the space for that to stay that
way.
I've got index defrags on the larger databases, but I can't have that on
every database.
If there's an easier/better way, I'm open to it of course...
"Jerry Spivey" wrote:

> CLM,
> Yes the 'Data File Auto Grow', 'Data File Auto Shrink', 'Log File Auto
> Grow', 'Log File Auto Shrink' event classes are the correct ones to use.
> However, I really wouldn't recommend having 80-90 databases auto growing and
> auto shrinking as this is likely to cause fragmentation and reduce
> performance.
> HTH
> Jerry
> "CLM" <CLM@.discussions.microsoft.com> wrote in message
> news:8DD5AE5E-2B7D-4891-A21A-31276BBF47F5@.microsoft.com...
>
>
|||Hmmm...
1. Extend your array(s) to fit the storage needs of the business
2. Use fewer databases - less space and will also be easier to manage
3. Try using index defrag on the smaller databases
4. Move some of these databases to another server with more space
HTH
Jerry
"CLM" <CLM@.discussions.microsoft.com> wrote in message
news:DB60B945-AB19-488C-8169-ADD24B00FB41@.microsoft.com...
>I understand. But the problem I have - and I inherited this by the way -
>is
> that most of those databases have one table with a clustered and a
> nonclustered index. When I reindex them, the database expands to a little
> over twice its size. And I just don't have the space for that to stay
> that
> way.
> I've got index defrags on the larger databases, but I can't have that on
> every database.
> If there's an easier/better way, I'm open to it of course...
>
> "Jerry Spivey" wrote:
|||Why do you reindex? The following shrink will largely undo the defragmentation that the reindex
performed...
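A quick way to see the effect for yourself is to compare scan density before and after each step. (The table and database names here are placeholders.)

```sql
DBCC SHOWCONTIG ('dbo.MyTable')    -- note the Scan Density figure
DBCC DBREINDEX ('dbo.MyTable')     -- rebuild; scan density climbs toward 100%,
                                   -- but the data file grows
DBCC SHRINKDATABASE (MyDatabase)   -- shrink moves pages around to reclaim space...
DBCC SHOWCONTIG ('dbo.MyTable')    -- ...and scan density drops again
```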
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"CLM" <CLM@.discussions.microsoft.com> wrote in message
news:DB60B945-AB19-488C-8169-ADD24B00FB41@.microsoft.com...
>I understand. But the problem I have - and I inherited this by the way - is
> that most of those databases have one table with a clustered and a
> nonclustered index. When I reindex them, the database expands to a little
> over twice its size. And I just don't have the space for that to stay that
> way.
> I've got index defrags on the larger databases, but I can't have that on
> every database.
> If there's an easier/better way, I'm open to it of course...
>
> "Jerry Spivey" wrote:
|||Oh, I feel like an idiot! I was not thinking about it clearly. I tested
what you said and of course it came to exactly what you said - the scan
density went back to virtually what it was before the reindex!! I appreciate
so much your taking time to answer stupid questions...
"Tibor Karaszi" wrote:

> Why do you reindex? The following shrink will largely undo the defragmentation that the reindex
> performed...
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
>
> "CLM" <CLM@.discussions.microsoft.com> wrote in message
> news:DB60B945-AB19-488C-8169-ADD24B00FB41@.microsoft.com...
>
>
|||You're not the only one who shrinks without realizing the consequences. I'm glad you figured it out.
:-)
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
"CLM" <CLM@.discussions.microsoft.com> wrote in message
news:FF383A38-6E62-4B98-A621-3333A5117696@.microsoft.com...
> Oh, I feel like an idiot! I was not thinking about it clearly. I tested
> what you said and of course it came to exactly what you said - the scan
> density went back to virtually what it was before the reindex!! I appreciate
> so much your taking time to answer stupid questions...
> "Tibor Karaszi" wrote:

Auto generating numeric column

Hi There,

I want to create a column that auto-numbers up to a specified value and then resets. I've tried using an identity column and then using DBCC CHECKIDENT, but this doesn't have the desired effect. Here's an example of what I'm after.

AutoRow
1
2
3
4 --- reset
1
2
3
4 ---reset

Any help would be great,

many thanks

Stuart

An identity field is going to require unique values, so resetting it isn't going to work; you're going to throw a SQL exception.

Are you trying to accomplish this just on the SQL server, or can you do it as part of your application?

|||Unfortunately it needs to be done on the server|||

Check this sample to see whether it helps:

DROP TABLE tbl_testIden
go

CREATE TABLE tbl_testIden (id int identity(1,1), name sysname)
GO

CREATE TRIGGER trg_testIden ON tbl_testIden FOR INSERT, UPDATE
AS
IF (IDENT_CURRENT('tbl_testIden') >= 4)
DBCC CHECKIDENT('tbl_testIden', RESEED, 0)
GO

DECLARE @i INT
SET @i = 1
WHILE (@i < 100)
BEGIN
INSERT INTO tbl_testIden(name) SELECT 'Person#' + CONVERT(VARCHAR(8), @i)
SET @i = @i + 1
END
go
SELECT * FROM tbl_testIden

|||

Hi There,

I managed to get it working using the ROW_NUMBER function. It has some very useful derivatives as well.

ROW_NUMBER simply creates an incremental list which is reset when the PARTITION BY value changes.

ROW_NUMBER() OVER (PARTITION BY ReqPointId ORDER BY ReqPointId DESC) AS RowGroup

I'll look at your approach as it allows me to specify an upper limit.

Many thanks for your time

Regards

Stuart

|||

You are looking for a combination of ROW_NUMBER and %, but without more of the query, I can't help you. Generically...

SELECT ((RowGroup - 1) % 4) + 1 As AutoGen, more columns here

FROM (

SELECT ROW_NUMBER() OVER (something) AS RowGroup, more columns here

) t1

You may even be able to combine the two, though I'm not familiar enough with ROW_NUMBER to say for certain. If it does work, then...

SELECT ((ROW_NUMBER() OVER (something) - 1) % 4) + 1 AS RowGroup, more columns here

FROM somewhere

should work as well. I think most people (myself included) were confused because you said you wanted an autogenerated column. That tends to lead people to believe you are speaking of a table column, not a field in a resultset.
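For what it's worth, here is a self-contained sketch of that combined form (SQL Server 2005 or later; the table and column names are invented, and the "- 1" offset is what makes the cycle start at 1 rather than 2):

```sql
SELECT ReqPointId,
       ((ROW_NUMBER() OVER (ORDER BY ReqPointId) - 1) % 4) + 1 AS AutoRow
FROM dbo.ReqPoints
-- AutoRow cycles 1, 2, 3, 4, 1, 2, 3, 4, ... down the resultset
```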