Sunday, March 11, 2012
automatic dbase backup fails
Hi, just wondering if anyone has any ideas what would cause an automatic
backup to fail when the manual backup works fine? There seems to be plenty
of disk space. Also, there does not appear to be a log created showing any
error messages when the backup fails. The autobackup was set up in Enterprise
Manager under Management/Jobs. Also, the SQL Server Agent is running. Thanks.
Paul G
Software engineer.
Is the backup being sent to a remote share or machine? If so make sure the
account SQL Server is running under has the proper rights to that share.
Andrew J. Kelly SQL MVP
"Paul" <Paul@.discussions.microsoft.com> wrote in message
news:4E523A95-0F6E-403B-904B-B8F134A86055@.microsoft.com...
> Hi just wondering if anyone has any ideas what would cause an automatic
> backup to fail when the manual backup works fine? There seems to be
> plenty
> of disk space. Also there does not to be a log created showing any error
> messages when the backup fails. The autobackup was setup in Enterprise
> manager under Management/jobs. Also the SQL Server Agent is running.
> Thanks.
> --
> Paul G
> Software engineer.
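For what it's worth, a manual backup run from your own login and an Agent job running the same command can behave differently precisely because the security context differs: the job executes under the SQL Server / SQL Server Agent service account. A minimal sketch of a backup to a share (the share path and database name below are hypothetical):

-- Minimal sketch; \\backupserver\sqlbackups and MyDatabase are hypothetical.
-- The SQL Server service account needs write permission on the share.
BACKUP DATABASE MyDatabase
TO DISK = N'\\backupserver\sqlbackups\MyDatabase_full.bak'
WITH INIT, NAME = N'MyDatabase full backup'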
|||The backup should just be going to the same machine as the database, in the
directory c:\program files\microsoft sql server\BACKUP. Thanks for the
information; I will check out the rights/permissions.
Paul G
Software engineer.
"Andrew J. Kelly" wrote:
> Is the backup being sent to a remote share or machine? If so make sure the
> account SQL Server is running under has the proper rights to that share.
> --
> Andrew J. Kelly SQL MVP
>
> "Paul" <Paul@.discussions.microsoft.com> wrote in message
> news:4E523A95-0F6E-403B-904B-B8F134A86055@.microsoft.com...
>
>
|||Backing up to the same drive the database is on is a bad idea. If you lose
the drive you lose both the db and the backups. Have a look in the SQL
Error logs and in the job history and see if there is more info to go on.
Andrew J. Kelly SQL MVP
"Paul" <Paul@.discussions.microsoft.com> wrote in message
news:E621A920-9E90-4401-B759-1806B6F03F8B@.microsoft.com...
> the backup should be just going to the same machine as the database in the
> directory c:\program files\microsoft sql server\BACKUP, thanks for the
> information will check out the rights/permissions.
> --
> Paul G
> Software engineer.
>
> "Andrew J. Kelly" wrote:
|||OK, thanks for the additional information. We have a separate backup of the
drive that occurs daily, but will probably back up to an external device in
the near future.
Paul G
Software engineer.
"Andrew J. Kelly" wrote:
> Backing up to the same drive the database is on is a bad idea. If you loose
> the drive you loose both the db and the backups. Have a look in the SQL
> Error logs and in the job history and see if there is more info to go on.
> --
> Andrew J. Kelly SQL MVP
>
> "Paul" <Paul@.discussions.microsoft.com> wrote in message
> news:E621A920-9E90-4401-B759-1806B6F03F8B@.microsoft.com...
>
>
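If the backups do move to another drive or an external device, a sketch of what that might look like (the drive letter and file names are assumptions), including a quick check that the backup file is readable:

-- Sketch only: E:\SQLBackups is assumed to be on a different physical drive
-- (or an external device) from the data files.
BACKUP DATABASE MyDatabase
TO DISK = N'E:\SQLBackups\MyDatabase_full.bak'
WITH INIT

-- Confirm the backup file is readable without actually restoring it.
RESTORE VERIFYONLY
FROM DISK = N'E:\SQLBackups\MyDatabase_full.bak'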
Friday, February 24, 2012
autogrow of log file possibly cause timeout?
I had a problem with a server instance where an app processing a file and putting the data into the database using a stored procedure was timing out. The app is a service and, in the case of errors, will try to process the file until it has successfully added the data to the database. In the Event Viewer I keep seeing this message during this timeout period:
"Autogrow of file 'mydatabase_log' in database 'mydatabase' was cancelled by user or timed out after 15687 milliseconds. Use ALTER DATABASE to set a smaller FILEGROWTH value for this file or to explicitly set a new file size."
Is there a possibility that this process was causing my SP to time out? Are there any other log or debug files that SQL Express creates that might be helpful in discovering what caused the timeout? Currently the log is at... ouch, 10 gigs, restricted... maybe trying to autogrow from this while restricted is hogging resources?
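The error message itself points at one fix: 10 percent of a 10 GB log is a large, slow allocation, so switching the file to a fixed growth increment keeps each growth event short. A sketch, taking the logical file name from the error message above (the database name is an assumption):

-- Sketch: grow the log in fixed 100 MB steps instead of 10 percent.
-- 'mydatabase' and 'mydatabase_log' follow the error message; adjust as needed.
ALTER DATABASE mydatabase
MODIFY FILE (NAME = mydatabase_log, FILEGROWTH = 100MB)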
As SQL Server Express databases are limited to 4 GB per database, I just wonder why you need 10 GB of logs? Did you consider backing up your database and shrinking your log files?
HTH, Jens K. Suessmeyer.
http://www.sqlserver2005.de
|||When we created the database, we used the default setting for the log file, which was "By 10 percent, restricted growth". Currently we backed up the database, detached the DB, removed the old log file, and reattached the DB. It created a new log file and we set it to a smaller size. As I understand it, the log file holds the uncommitted transactions; is this true? If we want the log file to stay at a certain size, maybe I need to uncheck AutoGrowth? And going back to the original question, if SQL was trying to autogrow a 10 gig file, could this have used all the resources at the moment and timed out any stored procedures running during this autogrowth process?
|||
Hi nattylife,
For instance, assume your log file size is 12 GB and you have set the Auto Growth option to 20%. Whenever your T-log file needs to grow, it will grow by 12 GB * 20%, i.e. by 2.4 GB, and while this process is running it occupies server resources (yes, it will have an effect if this happens at peak time).
If you stop/uncheck/disable Auto Grow, you may be in trouble, because if your T-log needs to grow and Auto Grow is disabled it might result in a SUSPECT status for your database. So don't disable Auto Grow; set it to an appropriate % / MB as per your *requirement/prediction* of your database's growth. BTW, what is the recovery model of your database?
Refer to the T-log architecture topic in BOL to understand more; BOL is your best friend.
Hemantgiri S. Goswami
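The recovery model question matters because under FULL recovery the log keeps growing until log backups let its space be reused. It can be checked with a one-liner (the database name is an assumption):

-- Sketch: check the current recovery model; 'mydatabase' is assumed.
SELECT DATABASEPROPERTYEX('mydatabase', 'Recovery') AS RecoveryModel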
|||Your way of removing the log file is dangerous and not best practice. You should take a backup of your database and shrink the log afterwards, rather than just *deleting* it and letting SQL Server recreate it.
HTH, Jens K. Suessmeyer.
http://www.sqlserver2005.de
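In practical terms, the backup-then-shrink sequence is a log backup followed by a shrink of the log file; a sketch with assumed names and sizes:

-- Sketch of the backup-then-shrink approach; names and the 200 MB target
-- are assumptions. Under FULL recovery, back up the log first so the space
-- inside it becomes reusable, then shrink the physical file.
BACKUP LOG mydatabase TO DISK = N'C:\SQLBackups\mydatabase_log.trn'

DBCC SHRINKFILE (mydatabase_log, 200)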
|||Jens K. Suessmeyer wrote:
Your way of removing the log file is dangerous and not best practise. You should take a backup of your database and shrink the log afterwards, rather than just *deleting* it and let it recreate by SQL Server.
HTH, Jens K. Suessmeyer.
http://www.sqlserver2005.de
We backed up the database before doing any questionable practices. The database is populated from 3 text files that our app receives every day; inside each file are the records for the database. We archive these every day for 90 days in case we have any database issues. After doing some more research, I found how to shrink the log in the CTP, which was the approach I was looking for; I just didn't find it until after the fact. I didn't realize that this log file would grow continuously; I was under the assumption it would eventually get written over. So we are discussing a way to archive the ldf file now too.
|||Hi,
you should consider using the backup functions of SQL Server. You are able to do either full / differential or transaction log backups. A combination of those will help you to make your databases disaster-recoverable.
HTH, Jens K. Suessmeyer.
http://www.sqlserver2005.de
|||I checked the backup options in the CTP. I see where I can do a log backup, but is this a manual process, or when will it do it on a regular basis, since I don't see any option to automate it? If I wanted to automate it, would I need to implement this with my own service or such?
|||Hi,
SQL Server Agent is not shipped with SQL Server Express. An approach could be to script out the backup command (using the functionality of Management Studio) and schedule the execution of the script using an AT command (or any other scheduler), executing the script with SQLCMD.
HTH, Jens K. Suessmeyer.
http://www.sqlserver2005.de
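Concretely, that approach is a .sql file containing the backup command plus a scheduled task that runs it through SQLCMD; a rough sketch with assumed paths, times, and instance name:

-- Contents of a hypothetical C:\Scripts\backup_mydatabase.sql:
BACKUP DATABASE mydatabase
TO DISK = N'C:\SQLBackups\mydatabase_full.bak'
WITH INIT

-- Then schedule it with the Windows scheduler, for example something like:
--   schtasks /create /tn "SQLExpressBackup" /sc daily /st 23:00
--     /tr "sqlcmd -S .\SQLEXPRESS -E -i C:\Scripts\backup_mydatabase.sql"
-- (the instance name, paths, and time are assumptions).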
Thursday, February 16, 2012
Auto update statistics
I've seen conflicting information about having auto-update statistics. One
view was that having this on would cause spikes in utilization whenever the
auto update kicks in during peak load time, and so this option should be
turned off and update statistics should be run via a job during off times.
The other view was that it didn't matter.
Any thoughts?
Thanks
Bob Castleman
SuccessWare Software
Bob Castleman wrote:
> I've seen conflicting information about having auto-update
> statistics. On view was that having this on would cause spikes in
> utilization whenever the auto update kicks in during peak load time,
> and so this option should be turned off and update statistics should
> be run via a job during off times. The other view was that it didn't
> matter.
> Any thoughts?
> Thanks
> Bob Castleman
> SuccessWare Software
You're right that there is no clear consensus. Some customers do perform
statistics updates after hours. They may be doing this because the
database was around during the SQL 7 days or because the amount of data
loaded into tables during peak hours does not generally affect the
statistical distribution of data in those tables.
If you have tables that are affected by large loads during work hours
and are worried that queries that access those tables are likely to use
inefficient plans because of outdated statistics, then keep the option
on. Certainly, running a bunch of inefficient queries will put more
stress on the server than allowing SQL Server to update statistics as it
sees fit. OTOH, if your tables are unaffected by large changes during
the day, you can update statistics at night (daily, weekly, or monthly
as you see fit).
It all depends on your data and the queries hitting your data.
What is your situation?
David Gugick
Imceda Software
www.imceda.com
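If the decision is to refresh statistics off-hours instead of, or in addition to, relying on auto-update, the scheduled job can be as simple as the following sketch (the database and table names are assumptions):

-- Sketch of an off-hours statistics refresh, run from a scheduled job.
USE MyDatabase
EXEC sp_updatestats

-- Or, for a single heavily loaded table, with a full scan for better accuracy:
-- UPDATE STATISTICS dbo.Orders WITH FULLSCAN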
|||Bob,
Yes, I come up against this a lot. I am a freelance consultant and I
hear both sides of the argument - the "correct" answer is that it
depends on your environment. If leaving auto-update stats on is
acceptable to the users -- ie they perceive no degradation in
performance -- then leave it on.
Unless your system is processing many transactions per second - say
10-100 then perhaps turn it off and schedule update stats manually in a
maintenance window.
Some places I go to have 24x7 shops where there simply isn't a
maintenance window, so they just leave auto-update stats on all the time
anyway. You need to base your decision on how it affects your users.
Mark Allison, SQL Server MVP
http://www.markallison.co.uk
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602m.html
Bob Castleman wrote:
> I've seen conflicting information about having auto-update statistics. On
> view was that having this on would cause spikes in utilization whenever the
> auto update kicks in during peak load time, and so this option should be
> turned off and update statistics should be run via a job during off times.
> The other view was that it didn't matter.
> Any thoughts?
> Thanks
> Bob Castleman
> SuccessWare Software
>
|||Our production environment hosts about 125 of our clients in a Citrix farm
against two Active/Passive clusters. Probably 90% of the load occurs between
8:00 AM and 6:00 PM and we have a defined maintenance window of 12:00 AM to
5:00 AM. No single client has huge processing needs, but the aggregate load
can get pretty heavy. We recently solved a problem that was sucking up
processor cycles and limited the number of clients on a database server. We
want to increase the number of clients per database server, so I am trying
to identify things I can do beforehand that might help keep things under
control. We expect to triple or quadruple the number of clients we are
hosting over the next 24 to 36 months so I am trying to get everything as
stable as possible now instead of waiting for fires to start.
We are also looking at tools for helping us profile usage patterns and
things like that so we can start optimizing the queries and indexing. The
application was not designed for a hosted environment and query optimization
has been ad hoc in the past. It is highly likely that there are numerous
inefficient queries.
Thx,
Bob
"David Gugick" <davidg-nospam@.imceda.com> wrote in message
news:uxFc8EX4EHA.2012@.TK2MSFTNGP15.phx.gbl...
> Bob Castleman wrote:
> You're right that there is no clear concensus. Some customers do perform
> statistics updates after hours. They may be doing this because the
> database was around during the SQL 7 days or because the amount of data
> loaded into tables during peak hours does not generall affect the
> statistical distribution of data in those tables.
> If you have tables that are effected by large loads during work hours and
> are worried that queries that access those tables are likely to use
> inefficient plans because of outdated statistics, then keep the option on.
> Certainly, running a bunch of inefficient queries will put more stress on
> the server than allowing SQL Server to update statistics as it sees fit.
> OTOH, if your tables are unaffected by large changes during the day, you
> can update statistics at night (daily, weekly, or monthly as you see fit).
> It all depends on your data and the queries hitting your data.
> What is you situation?
>
> --
> David Gugick
> Imceda Software
> www.imceda.com
|||Our transaction rate runs 500-1000 per second. (see my response to Dave
Gugick for more info on our production environment)
It sounds like this is a peripheral issue, rather than one that will show
any significant change. I probably am better off digging deeper into
optimization of the queries and stuff.
Thx,
Bob
"Mark Allison" <marka@.no.tinned.meat.mvps.org> wrote in message
news:ejzUXsc4EHA.3820@.TK2MSFTNGP11.phx.gbl...
> Bob,
> Yes, I come up against this a lot. I am a freelance consultant and I hear
> both sides of the argument - the "correct" answer is that it depends on
> your environment. If leaving auto-update stats on is acceptable to the
> users -- ie they perceive no degradation in performance -- then leave it
> on.
> Unless your system is processing many transactions per second - say 10-100
> then perhaps turn it off and schedule update stats manually in a
> maintenance window.
> Some places I go to have 24x7 shops where there simply isn't a maintenance
> window, so they just leave auto-update stats on all the time anyway. You
> need to base your decision on how it affects your users.
> --
> Mark Allison, SQL Server MVP
> http://www.markallison.co.uk
> Looking for a SQL Server replication book?
> http://www.nwsu.com/0974973602m.html
>
> Bob Castleman wrote:
|||Bob,
OK, that's quite healthy! :-)
If these transactions are DML statements then you may find that
auto-update stats has an effect on performance. This effect could be
positive or negative! NOT switching on auto-update stats may actually be
negative in such a high transaction environment. Is there any way you
can take a profiler trace and replay it into a test environment and
benchmark your system with and without auto-update stats? You will
know for sure this way how it is going to affect you.
Obviously you will need to have similar spec hardware - not sure how
feasible this is. You may also wish to connect a typical user
application to this test environment while the trace is playing to see
how well it "feels".
Mark Allison, SQL Server MVP
http://www.markallison.co.uk
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602m.html
Bob Castleman wrote:
> Our transaction rate runs 500-1000 per second. (see my response to Dave
> Guigick for more info on our production environment)
> It sounds like this is a peripheral issue, rather than one that will show
> any significant change. I probably am better off digging deeper into
> optimization of the queries and stuff.
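For a controlled test like that, the option can be flipped per database and verified between benchmark runs, so the setting is the only variable (the database name is an assumption):

-- Sketch: toggle auto update statistics for a benchmark run and confirm it.
ALTER DATABASE MyDatabase SET AUTO_UPDATE_STATISTICS OFF

SELECT DATABASEPROPERTYEX('MyDatabase', 'IsAutoUpdateStatistics') AS AutoUpdateStats

-- Re-enable after the comparison run.
ALTER DATABASE MyDatabase SET AUTO_UPDATE_STATISTICS ON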
|||Bob,
Is it possible for you to email me directly? I have a question for you but
don't know how to reach you.
Please remove the "nooospam" from my reply address.
Thanks
Andrew J. Kelly SQL MVP
"Bob Castleman" <nomail@.here> wrote in message
news:uTl8I7e4EHA.1260@.TK2MSFTNGP12.phx.gbl...
> Our production environment hosts about 125 of our clients in a Citrix farm
> against two Active/Passive clusters. Probably 90% of the load occurs
> between 8:00 AM and 6:00 PM and we have a defined maintenance window of
> 12:00 AM to 5:00 AM. No single client has huge processing needs, but the
> aggregate load can get pretty heavy. We recently solved a problem that was
> sucking up proccessor cycles and limited the number of clients on a
> database server. We want to increase the number of clients per database
> server, so I am trying to identify things I can do before hand that might
> help keep things under control. We expect to triple or quadruple the
> number of clients we are hosting over the next 24 to 36 months so I am
> trying to get everything as stable as possible now instead of waiting for
> fires to start.
> We are also looking at tools for helping us profile usage patterns and
> things like that so we can start optimizing the queries and indexing. The
> application was not designed for a hosted environment and query
> optimization has been ad hoc in the past. It is highly likely that there
> are numerous inefficient queries.
> Thx,
> Bob
>
>
> "David Gugick" <davidg-nospam@.imceda.com> wrote in message
> news:uxFc8EX4EHA.2012@.TK2MSFTNGP15.phx.gbl...
>
|||We are actually in the process of creating a test environment that includes
a mini Citrix farm and a clustered SQL setup for precisely this purpose.
It's still not complete. I'll probably put the auto-update issue on a task
list and try addressing it as you suggest. Turning it off just to see what
happens sounds too risky and it makes much more sense to work out a
reasonable test.
Thx,
Bob
"Mark Allison" <marka@.no.tinned.meat.mvps.org> wrote in message
news:%23U2Xyaf4EHA.2180@.TK2MSFTNGP10.phx.gbl...
> Bob,
> OK, that's quite healthy! :-)
> If these transactions are DML statements then you may find that
> auto-update stats has an effect on performance. This effect could be
> positive or negative! NOT switching on auto-update stats may actually be
> negative in such a high transaction environment. Is there any way you can
> take a profiler trace and replay it into a test environment and
> benchmarking your system with and without auto-update stats? You will know
> for sure this way how it is going to affect you.
> Obviously you will need to have similar spec hardware - not sure how
> feasible this is. You may also wish to connect a typical user application
> to this test environment while the trace is playing to see how well it
> "feels".
> --
> Mark Allison, SQL Server MVP
> http://www.markallison.co.uk
> Looking for a SQL Server replication book?
> http://www.nwsu.com/0974973602m.html
>
> Bob Castleman wrote: