Thursday, March 22, 2012
Automatic updating of datetime field
create table t (
id bigint identity(1,1) not null primary key,
name varchar(50),
value varchar(50),
ts datetime not null default getutcdate()
)
go
insert t (name, value) values ('fred', 'bob')
go
update t set value='robert' where id=1 and name='fred'
go
One option would be to use an instead of update trigger.
create trigger update_t on t
instead of update as
update t
set ts = getutcdate(),
    name = inserted.name,
    value = inserted.value
from t
inner join inserted on t.id = inserted.id
go
update t set value='dick' where id=1 and name='fred'
go
Sounds like I've solved my own problem, heh? Well, here's the catch ... you can't know the names of the other columns at the time you write the trigger. That is, you only know that there is a ts field that needs to be updated internally; otherwise you want the update to do exactly what it would normally do.
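One way around the column-name problem: if all the trigger has to do is touch ts, an AFTER UPDATE trigger never needs to restate the other columns at all. A minimal sketch (the trigger name is made up; it assumes the INSTEAD OF trigger above has been dropped, and that RECURSIVE_TRIGGERS is off for the database, which is the default, so the trigger's own update does not re-fire it):

```sql
-- Touches only ts; the base update has already applied the other columns,
-- whatever they happen to be named.
create trigger touch_t on t
after update as
begin
    if not update(ts)  -- skip when ts was set explicitly in the statement
        update t
        set ts = getutcdate()
        from t
        inner join inserted on t.id = inserted.id
end
go
```

This keeps the trigger independent of the table's other columns, so adding or renaming columns later doesn't require rewriting it.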
Any ideas?...also, you don't know what database server it is going to be running on, so it has to be platform-independent.
...and it needs to be fully compatible with the Mayan calendar as well.
...oh yeah, and the final code must be a palindrome that reads the same way forwards as backwards! Yeah, that's it! What a kick-ass application design! Whooooo-eeeeeeee!
Thursday, February 16, 2012
Auto update statistics
In SQL 2000, is there a way of changing the threshold at which auto updating of statistics kicks in? Currently I am working with a db where the stats become extremely poor after a day (as no automatic stats update is taking place) and the time a stored proc takes to compile and run goes from about 5 mins to around 120 mins. An update of stats fixes the problem.
Thanks.
Hello...You could use a query hint inside that procedure if only one is affected by this. You could use the hint to force SQL to use the right index.
Also are you using some default Values for your SP?
If those defaults are "totally different" from the data that's in the table (for example, the default is NULL for a field and you change it to -1 inside the SP), then SQL might also be tempted to pick the wrong plan for the SP.
Thanks for your reply.
I would like to avoid query hints, as quite a number of procedures are affected by the stats problem.
I am not using default values within the SPs; in fact, a lot of the data being processed is fairly similar to the data already in the tables. I don't quite understand why the stats are becoming out of date so quickly. They are not being automatically updated, as the amount of data being added to the database is fairly small (comfortably under the 500 rows + 20% of the table that forces an auto update of stats). That's why I was asking if there is a way of changing the threshold at which an auto update of stats takes place.
Alternatively is there a way of limiting the number of times the optimizer attempts to find a cheap execution plan?
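For context, the usual workaround on SQL 2000, where the auto-update threshold itself is not configurable, is a scheduled statistics job. A minimal sketch (the table name is a placeholder):

```sql
-- Run from a SQL Agent job step, nightly or hourly as needed.
-- dbo.MyBigTable is a placeholder; FULLSCAN is most accurate,
-- while a SAMPLE percentage is cheaper if the maintenance window is tight.
update statistics dbo.MyBigTable with fullscan
go
-- Or refresh statistics for every table in the database in one shot:
exec sp_updatestats
go
```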
Have you considered scheduling a job to update the relevant statistics at regular intervals?

Stefano Stefani - MSFT wrote:
Have you considered scheduling a job to update the relevant statistics at regular intervals?
Hi All,
I am facing a similar situation. When a large number of records are added to certain tables, the query optimizer generates very poor plans for queries that reference those tables. There are a total of 3 tables, containing 120 million, 110 million, and 70 million rows respectively.
Suppose I run a job every hour; what should the sample % be to run the job effectively? Why aren't the auto update stats kicking in promptly? What is the cause of the query optimizer's poor plan selection?
I am using SQL Server 2005 in Windows 2003 Server and we use SAN for data storage.
I would appreciate it if someone could point me to some other useful links.
Thanks in advance for all your help.
Regards,
Jim
Have you figured out your issues?
If not, do you have both Auto Update Statistics and Auto Update Statistics Asynchronously set to True? Check out this link and go down to the section "Maintaining Statistics in SQL Server 2005"; it gives some good information: http://www.microsoft.com/technet/prodtechnol/sql/2005/qrystats.mspx. Also look at the new auto update statistics async feature.
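As a sketch, both of those options can be enabled per database on SQL 2005 like this (the database name is a placeholder):

```sql
-- MyDb is a placeholder name. The async option only takes effect
-- when AUTO_UPDATE_STATISTICS itself is also ON.
alter database MyDb set auto_update_statistics on;
alter database MyDb set auto_update_statistics_async on;
go
```

With the async option on, queries that trip the stats threshold compile against the old statistics instead of waiting for the rebuild, which trades plan freshness for predictable compile times.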