Monday, March 19, 2012
Automatic Range Management - I give up
The documentation describes a process called Automatic Range Management, which is suggested as an easy way to handle reseeding of the identity values. But I have no idea how to set up ARM. I looked through the properties of my publications and subscriptions and couldn't find anything for ARM, nor do the wizards reveal any setting for it.
So, how do you do it?
Roger,
in the publication properties, on the Articles tab, click the ellipsis button, which gives access to the article properties. There is a tab for identities, with a checkbox at the top for automatic range management. Assuming you haven't yet created a subscription, it won't be greyed out and you can then set the size of the range etc.
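For reference, the same option can be set in T-SQL when the article is added to a merge publication. The sketch below assumes SQL Server 2005; the publication, article, and range values are placeholders, not taken from the thread:

```sql
-- Run at the Publisher. Publication/article names are hypothetical.
EXEC sp_addmergearticle
    @publication = N'MyMergePub',                -- placeholder publication
    @article = N'Orders',                        -- placeholder article
    @source_object = N'Orders',
    @identityrangemanagementoption = N'auto',    -- enables Automatic Range Management
    @pub_identity_range = 10000,                 -- identity range reserved for the Publisher
    @identity_range = 1000,                      -- range handed to each Subscriber
    @threshold = 80;                             -- % of a range used before a new one is assigned
```

On SQL Server 2000 the equivalent parameter is `@auto_identity_range = 'true'` rather than `@identityrangemanagementoption`.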
HTH,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)
|||THANK YOU!!!!!!
Sunday, February 19, 2012
auto-commit vs begin transaction overhead
We have an application that has problems with deadlocks. Part of our strategy in handling these deadlocks is resubmitting the transaction. My solution to this part of the strategy is to wrap every stored procedure in a transaction, so this would include reads as well as writes. My architect has raised some concerns with regard to performance if it's implemented this way. I've come across documentation (http://msdn2.microsoft.com/en-us/library/ms187878.aspx) in the SQL Server 2005 Books Online stating that auto-commit happens on every statement anyway, but I haven't come across anything that explicitly states the performance penalty, if there is one. Is there a difference in overhead between calling 'BeginTransaction' for every statement and just letting auto-commit do its magic? Is this implemented in the SqlClient object or at the database level?
|||
Some folks from the database engine team would be best suited to answer your question re: auto-commit vs. explicit transaction performance. Depending on their answer, you might also want to follow up with folks in the .NET Data Access and Storage forum to ensure there are no performance implications in the particular client that you are using.
|||
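If the goal is simply to resubmit transactions chosen as deadlock victims, one alternative to wrapping every procedure is a retry loop around the call. This is a sketch only, assuming SQL Server 2005's TRY/CATCH; `dbo.MyProc` and the retry count are placeholders:

```sql
-- Retry a call up to 3 times if it is chosen as a deadlock victim (error 1205).
-- dbo.MyProc is a hypothetical procedure standing in for the real work.
DECLARE @retries int;
SET @retries = 0;

WHILE @retries < 3
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        EXEC dbo.MyProc;            -- the unit of work to protect
        COMMIT TRANSACTION;
        BREAK;                      -- success: stop retrying
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        IF ERROR_NUMBER() = 1205    -- deadlock victim: retry
            SET @retries = @retries + 1;
        ELSE
            RETURN;                 -- any other error: stop and handle elsewhere
    END CATCH
END
```

Retrying only on error 1205 avoids paying the explicit-transaction cost on reads that never deadlock.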
AFAIK, there is definitely overhead, and how much depends on the traffic on your application and the size of the DB.
There will be a lot of contention, since transactions hold their locks until the transaction commits. You might see your transaction times slow down. If you have a web form with a submit button, you keep the user waiting longer for his transaction to commit when there are more transactions ahead of it. There could also be timeout issues, your CPU could shoot up, and if you use #temp tables your tempdb could blow up.
If you have deadlock scenarios, I would look into which procs cause them and fix those, rather than wrapping every piece of T-SQL in a transaction.
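To find out which procedures are involved, deadlock details can be written to the SQL Server error log with trace flags (real flags, shown here as a sketch to run on the affected instance):

```sql
-- Enable deadlock reporting instance-wide; output appears in the error log.
DBCC TRACEON (1204, -1);  -- lists the nodes and resources in each deadlock
DBCC TRACEON (1222, -1);  -- richer, XML-style detail (SQL Server 2005 and later)
```

The log entries name the sessions, objects, and statements in the deadlock cycle, which usually points straight at the procs to fix.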
And if you really want to measure the overhead, you can use a third-party tool like NetIQ to run a load test and quantify the results.