Thursday, March 29, 2012
automating snapshots
I set up replication from one server to another (push).
The replication is working fine every hour, but I have to start the
snapshots manually.
If I don't, no replication takes place, because there are no changes in the
snapshot.
I want the snapshots to be taken automatically every hour, and then
replicated to the other server.
Is this not the way it works normally? What am I missing?
Riki
It really depends on what type of replication you have set up. If it is
transactional replication, then the snapshot agent runs once to initialize
the subscriber and then is not normally scheduled to run again - changes to
the publisher's data are sent to the subscriber as transactions rather than
ODBC BCP snapshots. If you have snapshot replication set up, then the snapshot
agent should indeed be set up and scheduled to run - before the distribution
agent. This is not typically done regularly, as it will lock tables on the
publisher and will also prevent access to the subscriber tables until the process
completes.
HTH,
Paul Ibison
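A minimal sketch of putting the Snapshot Agent job on an hourly schedule from T-SQL, assuming the agent's job name is 'MyPublication-Snapshot' (a placeholder - use whatever name was generated when the publication was created):
-- run against msdb; 'MyPublication-Snapshot' is a hypothetical Snapshot Agent job name
EXEC msdb.dbo.sp_add_jobschedule
    @job_name = N'MyPublication-Snapshot',
    @name = N'Hourly snapshot',
    @enabled = 1,
    @freq_type = 4,              -- daily
    @freq_interval = 1,
    @freq_subday_type = 8,       -- repeat in units of hours
    @freq_subday_interval = 1,   -- every 1 hour
    @active_start_time = 0       -- starting at 00:00:00
The same schedule can also be set from the agent job's properties in Enterprise Manager.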
|||Thanks Paul!
"Paul Ibison" <Paul.Ibison@.Pygmalion.Com> wrote in message
news:D9819EC4-E490-4BDB-A1B2-9EA49E6D1708@.microsoft.com...
> It really depends on what type of replication you have set up. If it is
> transactional replication, then the snapshot agent runs once to initialize
> the subscriber and then is not normally scheduled to run again - changes to
> the publisher's data are sent to the subscriber as transactions rather than
> ODBC BCP snapshots. If you have snapshot replication set up, then the snapshot
> agent should indeed be set up and scheduled to run - before the distribution
> agent. This is not typically done regularly as it'll lock tables on the
> publisher and prevent access to the subscriber tables also until the process
> completes.
> HTH,
> Paul Ibison
>
Sunday, March 25, 2012
automatically grow file
I had an insert statement that stopped working the other day. I figured out
that it had to do with the fact that "space available" on the database was
very small. I had "automatically grow file" on the db set to 10%, and I had
plenty of disk space. I think that when my client called the insert
statement, the db was trying to automatically grow, but it was taking so
long that the client timed out.
I finally called:
ALTER DATABASE xxx
MODIFY FILE
(NAME = 'xxx_data',
SIZE = 3000MB)
which took a minute and a half, and then the insert statement began working
quickly again.
I have a couple of half-formed ideas on how to deal with this over the long
term.
My db is now 3GB. If I set "automatically grow file" to 1%, instead of 10%,
does that mean that the automatic growing next time will be quicker? Is
the time it takes proportional to the amount of space we are adding?
Maybe I could create a weekly job that looks to see if the space available
on a database is less than 50MB or so, and if so, expands the database size.
I'm not sure how to write this. I can call sp_spaceused, but I don't know
how to stick "unallocated space" into a variable, since sp_spaceused returns
two datasets.
Any other ideas?
Thanks,
George|||There is probably a simpler way to do this, but...
I would follow your weekly idea.
If you run sp_helptext sp_spaceused you will get back the queries used to
run the sp_spaceused command.
I would probably take that code and create a new sproc. In the new sproc,
instead of returning the results, you can check them there and make the
appropriate changes.
HTH
Rick Sawtell
MCT, MCSD, MCDBA|||Sp_helptext is useful. Thanks.
I'm mostly there, but now I am having trouble passing a parameter into the
alter database command.
declare @newsize varchar(8)
...
ALTER DATABASE xyz MODIFY FILE
(NAME = xyz_data, SIZE = @newsize)
Line 5: Incorrect syntax near '@newsize'.
Thanks,
George|||Seems you can't use a variable for the size, quite simply. Try using dynamic SQL to EXEC the
statement instead.
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"George Wynne" <george@.nssco.com> wrote in message news:O5hyQ6FkEHA.3724@.TK2MSFTNGP11.phx.gbl...
> Sp_helptext is useful. Thanks.
> I'm mostly there, but now I am having trouble passing a parameter into the
> alter database command.
> declare @.newsize varchar(8)
> ...
> ALTER DATABASE xyz MODIFY FILE
> (NAME = xyz_data, SIZE = @.newsize)
> Line 5: Incorrect syntax near '@.newsize'.
> Thanks,
> George
>
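Pulling Rick's and Tibor's suggestions together, a rough sketch of the weekly check (database xyz and file xyz_data are the names from the thread; the 50MB threshold and 200MB growth step are arbitrary placeholders, not recommendations):
-- run in the xyz database: grow the data file by 200MB whenever free space drops under 50MB
DECLARE @free_mb int, @new_mb int, @sql nvarchar(500)

SELECT @free_mb = (size - FILEPROPERTY(name, 'SpaceUsed')) / 128   -- sysfiles sizes are 8KB pages; 128 pages = 1MB
FROM sysfiles
WHERE name = 'xyz_data'

IF @free_mb < 50
BEGIN
    SELECT @new_mb = (size / 128) + 200 FROM sysfiles WHERE name = 'xyz_data'
    -- ALTER DATABASE will not accept a variable for SIZE, so build the statement and EXEC it
    SET @sql = 'ALTER DATABASE xyz MODIFY FILE (NAME = xyz_data, SIZE = '
             + CAST(@new_mb AS varchar(10)) + 'MB)'
    EXEC (@sql)
END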
Automatically Generating PDFs With Reports Requiring Parameters
I'm working on many reports that are generated using SQL Server
Reporting Services SP 1 from an ASP.NET 1.1 web application. One of the
things the users would like to do is have the report automatically
generate a PDF when they click on the View Report button after entering
the parameters for the report. Parameters could be a date, a city, a
state, etcetera. However, they would like to avoid having to choose the
format and click the Export link. Is there a way to manipulate the
functionality of the View Report button such that it does this
automatically? Thank you for any insights you can provide.
Jaboo|||How are you accessing the reports?
1) Report Manager
2) URL parameters
3) Web Service
Kulgan.
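If the reports are opened via URL access, the render format can be forced in the query string, which gives the one-click PDF behaviour without the Export link. The server, report path and parameter names below are placeholders:
http://myserver/ReportServer?/SalesReports/MonthlySummary&rs:Command=Render&rs:Format=PDF&City=Boston
From an ASP.NET 1.1 page the usual options are to redirect to such a URL, or to call the Reporting Services web service Render method with "PDF" as the format and stream the result back to the browser.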
Tuesday, March 20, 2012
Automatic Update Not Working
SQL Server 2000. Windows Small Business Server 2003. Windows automatic
update keeps trying to install the same "critical security" update to SQL
Server Desktop Engine. Apparently it is not installing, as I find two events
in the event view each time. The automatic update is attempted almost every
time I log onto the server. Any ideas? The following is the event
information:
Event Type: Error
Event Source: MsiInstaller
Event Category: None
Event ID: 10005
Date: 12/5/2005
Time: 9:54:07 AM
User: GNJPM\administrator
Computer: HQSERVER
Description:
Product: Microsoft SQL Server Desktop Engine -- Internal Error 2727.
DatabaseRepl.185C1D8F_1545_4277_BB64_857D2622DB57
For more information, see Help and Support Center at
http://go.microsoft.com/fwlink/events.asp.
Data:
0000: 7b 36 35 36 35 37 43 35 {65657C5
0008: 39 2d 32 33 41 38 2d 34 9-23A8-4
0010: 39 37 34 2d 42 38 45 30 974-B8E0
0018: 2d 42 41 30 34 45 42 44 -BA04EBD
0020: 30 34 45 34 46 7d 04E4F}
Event Type: Error
Event Source: MsiInstaller
Event Category: None
Event ID: 1023
Date: 12/5/2005
Time: 9:54:07 AM
User: GNJPM\administrator
Computer: HQSERVER
Description:
Product: Microsoft SQL Server Desktop Engine - Update
'{1F506503-186B-4D88-81B3-7113B9457C9D}' could not be installed. Error code
1603. Additional information is available in the log file C:\WMSDEHotfix.log.
For more information, see Help and Support Center at
http://go.microsoft.com/fwlink/events.asp.
Data:
0000: 7b 36 35 36 35 37 43 35 {65657C5
0008: 39 2d 32 33 41 38 2d 34 9-23A8-4
0010: 39 37 34 2d 42 38 45 30 974-B8E0
0018: 2d 42 41 30 34 45 42 44 -BA04EBD
0020: 30 34 45 34 46 7d 20 7b 04E4F} {
0028: 31 46 35 30 36 35 30 33 1F506503
0030: 2d 31 38 36 42 2d 34 44 -186B-4D
0038: 38 38 2d 38 31 42 33 2d 88-81B3-
0040: 37 31 31 33 42 39 34 35 7113B945
0048: 37 43 39 44 7d 20 31 36 7C9D} 16
0050: 30 33 03
Dr. Doug Pruiett
Good News Jail & Prison Ministry
www.goodnewsjail.org|||It could be a lot of different things. Make sure your disk
space is okay. Make sure your temp directory is cleared out.
You may also want to check the following logs for more clues
as to what the problem is:
C:\WINDOWS\sqlsp.log
C:\WINDOWS\sqlstp.log
-Sue
On Mon, 5 Dec 2005 07:10:03 -0800, "Chaplain Doug"
<ChaplainDoug@.discussions.microsoft.com> wrote:
>SQL Server 2000. Windows Small Business Server 2003. Windows automatic
>update keeps trying to install the same "critical security" update to SQL
>Server Desktop Engine. Apparently it is not installing, as I find two events
>in the event view each time. The automatic update is attempted almost every
>time I log onto the server. Any ideas? The following is the event
>information:
>Event Type: Error
>Event Source: MsiInstaller
>Event Category: None
>Event ID: 10005
>Date: 12/5/2005
>Time: 9:54:07 AM
>User: GNJPM\administrator
>Computer: HQSERVER
>Description:
>Product: Microsoft SQL Server Desktop Engine -- Internal Error 2727.
>DatabaseRepl.185C1D8F_1545_4277_BB64_857D2622DB57
>For more information, see Help and Support Center at
>http://go.microsoft.com/fwlink/events.asp.
>Data:
>0000: 7b 36 35 36 35 37 43 35 {65657C5
>0008: 39 2d 32 33 41 38 2d 34 9-23A8-4
>0010: 39 37 34 2d 42 38 45 30 974-B8E0
>0018: 2d 42 41 30 34 45 42 44 -BA04EBD
>0020: 30 34 45 34 46 7d 04E4F}
>Event Type: Error
>Event Source: MsiInstaller
>Event Category: None
>Event ID: 1023
>Date: 12/5/2005
>Time: 9:54:07 AM
>User: GNJPM\administrator
>Computer: HQSERVER
>Description:
>Product: Microsoft SQL Server Desktop Engine - Update
>'{1F506503-186B-4D88-81B3-7113B9457C9D}' could not be installed. Error code
>1603. Additional information is available in the log file C:\WMSDEHotfix.log.
>For more information, see Help and Support Center at
>http://go.microsoft.com/fwlink/events.asp.
>Data:
>0000: 7b 36 35 36 35 37 43 35 {65657C5
>0008: 39 2d 32 33 41 38 2d 34 9-23A8-4
>0010: 39 37 34 2d 42 38 45 30 974-B8E0
>0018: 2d 42 41 30 34 45 42 44 -BA04EBD
>0020: 30 34 45 34 46 7d 20 7b 04E4F} {
>0028: 31 46 35 30 36 35 30 33 1F506503
>0030: 2d 31 38 36 42 2d 34 44 -186B-4D
>0038: 38 38 2d 38 31 42 33 2d 88-81B3-
>0040: 37 31 31 33 42 39 34 35 7113B945
>0048: 37 43 39 44 7d 20 31 36 7C9D} 16
>0050: 30 33 03
Friday, February 24, 2012
Auto-grow option isn't working
We're using SQL2000 on Windows 2000 Server, but this is a problem
we've had on one particular database since SQL7 on NT4.
The database in question is set to autogrow by 10% (currently sitting
at 31Gb total size). However, last week users complained of a
slowdown in performance. When we checked we found that only 14Mb was
free on the database (we thought it would've grown automatically
before then), and when we added an additional 1Gb manually performance
picked up.
Does SQLServer wait until all the space is used up (i.e. 0% free)
before autogrowing? Even at that, we've never actually had the
database grow automatically - we've always had to add space manually.
Settings on this database, and one that does grow automatically,
appear to be the same (have also checked via sp_helpdb). So where
does the problem lie?
Any help you can give would be greatly appreciated.|||Do you have large transactions occurring in that database? If so, this KB
article might apply to your case:
http://support.microsoft.com/defaul...kb;en-us;305635
Simon
"LizP" <liz.porteous@.scottishfriendly.co.uk> wrote in message
news:8170ed5a.0306240753.1dd021f8@.posting.google.com...
> We're using SQL2000 on Windows 2000 Server, but this is a problem
> we've had on one particular database since SQL7 on NT4.
> The database in question is set to autogrow by 10% (currently sitting
> at 31Gb total size). However, last week users complained of a
> slowdown in performance. When we checked we found that only 14Mb was
> free on the database (we thought it would've grown automatically
> before then), and when we added an additional 1Gb manually performance
> picked up.
> Does SQLServer wait until all the space is used up (i.e. 0% free)
> before autogrowing? Even at that, we've never actually had the
> database grow automatically - we've always had to add space manually.
> Settings on this database, and one that does grow automatically,
> appear to be the same (have also checked via sp_helpdb). So where
> does the problem lie?
> Any help you can give would be greatly appreciated.
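As an aside to the KB article above, a common mitigation is to switch from percentage growth to a fixed increment, so each auto-grow is a predictable size rather than 10% of a 31GB file. The database and logical file names below are placeholders:
-- grow in fixed 100MB steps instead of a percentage
ALTER DATABASE MyDb
MODIFY FILE (NAME = MyDb_Data, FILEGROWTH = 100MB)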
Sunday, February 19, 2012
Auto update to sql server tables
I've read about DTS, but have never done anything like that. Would it be worth the time and effort to study? (So far I've created a package, with the import wizard, that doesn't work & I don't have the authority to delete :-)
I know I could create a dataset with my Oracle data and use that to update sql server. But is there a way to schedule an aspx to run automatically? Would this affect performance? The sql server db isn't very big (30-40,000 records), but the Oracle db is & I need to do quite a bit of manipulation to the data.
This is new to me & I don't know what I should be searching for to find help. And if there is a more appropriate place to post this question, please let me know.
Thanks.|||Yes. DTS is one good way to go and it is very easy to do.
Do you need to purge the data in your SQL before each load? Then the account has to have the rights to do so.
You can schedule the DTS to run at whatever time you want.
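A hedged sketch of the scheduling part - a SQL Agent job whose step shells out to dtsrun for a package saved on the server. The job name, package name and schedule below are made up for illustration:
USE msdb
EXEC dbo.sp_add_job         @job_name = N'Nightly Oracle load'
EXEC dbo.sp_add_jobstep     @job_name = N'Nightly Oracle load',
                            @step_name = N'Run DTS package',
                            @subsystem = N'CmdExec',
                            @command = N'dtsrun /S MyServer /E /N "OracleToSqlNightly"'  -- /E = trusted connection
EXEC dbo.sp_add_jobschedule @job_name = N'Nightly Oracle load',
                            @name = N'Nightly at 2am',
                            @freq_type = 4, @freq_interval = 1,   -- daily
                            @active_start_time = 20000            -- HHMMSS, i.e. 02:00:00
EXEC dbo.sp_add_jobserver   @job_name = N'Nightly Oracle load'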
Thursday, February 16, 2012
Auto update statistics
In SQL 2000 is there a way of changing the threshold at which auto updating of statistics kicks in? Currently I am working with a db where the stats are becoming extremely poor after a day (as no automatic stats update is taking place) and the length of time a stored proc takes to compile and run goes from about 5 mins to around 120 mins. An update stats fixes the problem.
Thanks.
Hello...You could use a query hint inside that procedure if only one is affected by this. You could use the hint to force SQL to use the right index.
Also, are you using some default values for your SP?
If those defaults are "totally different" from the data that's in the table (for example, the default is NULL for a field and you change this to -1 inside the SP), then SQL might also be tempted to pick the wrong plan for the SP|||
Thanks for your reply.
I would like to avoid query hints, as quite a number of procedures are affected by the stats problem.
I am not using default values within the SPs in fact a lot of the data being processed is fairly similar to the existing data within the tables. I don't quite understand why the stats are becoming out of date so quickly. They are not being automatically updated as the additional data being added to the database is fairly small (comfortably under the 500 + 20% of rows in the table that forces an auto update stats). That's why I was asking if there was a way of changing the threshold at which an auto update of stats takes place.
Alternatively is there a way of limiting the number of times the optimizer attempts to find a cheap execution plan?
|||Have you considered scheduling a job to update the relevant statistics at regular intervals?|||Stefano Stefani - MSFT wrote:
Have you considered scheduling a job to update the relevant statistics at regular intervals?
Hi All,
I am facing a similar situation. When a large number of records are added to certain tables, the query optimizer generates a very poor plan for queries that reference these tables. There are a total of 3 tables and they contain 120 million, 110 million and 70 million rows respectively.
Let's suppose I run a job every hour; what should be the right sample % to run the job effectively? Why are auto update stats not created promptly? What is the cause of the query optimizer's poor plan selection?
I am using SQL Server 2005 in Windows 2003 Server and we use SAN for data storage.
I would appreciate it if someone can point me to some other useful links.
Thanks in advance for all your help.
Regards,
Jim
|||Have you figured out your issues?
If not, do you have both Auto Update Statistics and Auto Update Statistics Asynchronously set to True? Check out this link and go down to the section "Maintaining Statistics in SQL Server 2005"; it gives some good information: http://www.microsoft.com/technet/prodtechnol/sql/2005/qrystats.mspx. Also look at the new feature of auto update statistics Async.
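For the SQL 2005 setup Jim describes, a sketch of the two suggestions above - the async option plus a scheduled, sampled statistics refresh. The database and table names, and the 10 percent sample, are placeholders rather than recommendations:
-- SQL 2005: let queries keep compiling on the old statistics while the auto update runs in the background
ALTER DATABASE MyDb SET AUTO_UPDATE_STATISTICS_ASYNC ON

-- scheduled job step: refresh statistics on the three large tables with a modest sample
UPDATE STATISTICS dbo.BigTable1 WITH SAMPLE 10 PERCENT
UPDATE STATISTICS dbo.BigTable2 WITH SAMPLE 10 PERCENT
UPDATE STATISTICS dbo.BigTable3 WITH SAMPLE 10 PERCENT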
Auto shrink not working as expected
I'll be the first to admit, I am not a sql expert. I have a nightly job
that backs up the tran logs on my db's and the box is checked to autoshrink
the tran log when it exceeds 100 mb. There are no errors and the shrink is
executed as shown in the log reports but the size doesn't appear to change.
What is really stumping me is if I do it manually, I have to do a tran log
backup, shrink the db. This results in a few mb shrinkage. If I then go
back and do the same thing again (This is consistent on four major DB's on
this server) tran log backup followed by a shrink db it then shrinks the
tranlog as expected. Several of these db's aren't actively being updated at
the backup/shrink time so records aren't being inserted (At least not that I
am aware of) at the time.
Is there something I am not doing or an idea someone may have to do this?
The backup and shrink are being handled via the setup by Enterprise Manager.
Paul Bergson MCT, MCSE, MCSA, Security+, CNE, CNA, CCA
http://www.pbbergs.com
Please no e-mails, any questions should be posted in the NewsGroup
This posting is provided "AS IS" with no warranties, and confers no rights.|||An obvious question is why do you think you need to shrink the files every
day? If they grow every day then all you're doing is slowing down your
database every day because it has to grow the file. Do you tear down your
garage every time you back your car out and build it again when you come
home? This is about the same logic as shrinking the database and log files
daily. You should only shrink the files when you know they aren't going to
grow again.
This posting is provided "AS IS" with no warranties, and confers no rights.
Use of included script samples are subject to the terms specified at
http://www.microsoft.com/info/cpyright.htm
"Paul Bergson" <pbergson@.allete_nospam.com> wrote in message
news:%23CHDBP%23vGHA.4512@.TK2MSFTNGP05.phx.gbl...
> I'll be the first to admin, I am not a sql expert. I have a nightly job
> that backs up the tran logs on my db's and the box is checked to
> autoshrink the tran log when it exceeds 100 mb. There are no errors and
> the shrink is executed as shown in the log reports but the size doesn't
> appear to change. What is really stumping me is if I do it manually, I
> have to do a tran log backup, shrink the db. This results in a few mb
> shrinkage. If I then go back and do the same thing again (This is
> consistent on four major DB's on this server) tran log backup followed by
> a shrink db it then shrinks the tranlog as expected. Several of these
> db's aren't activily being updated at the backup/shrink time so records
> aren't being inserted (At least not that I am aware of) at the time.
> Is there something I am not doing or an idea someone may have to do this?
> The backup and shrink are being handled via the setup by Enterprise
> Manager.
> --
> Paul Bergson MCT, MCSE, MCSA, Security+, CNE, CNA, CCA
> http://www.pbbergs.com
> Please no e-mails, any questions should be posted in the NewsGroup
> This posting is provided "AS IS" with no warranties, and confers no
> rights.
>|||Roger is 100% correct but these may be of interest to you as well:
http://www.karaszi.com/SQLServer/info_dont_shrink.asp Shrinking
considerations
http://www.nigelrivett.net/TransactionLogFileGrows_1.html Log File issues
http://www.support.microsoft.com/?id=317375 Log File Grows too big
http://www.support.microsoft.com/?id=110139 Log file filling up
http://www.support.microsoft.com/?id=315512 Considerations for Autogrow
and AutoShrink
http://www.support.microsoft.com/?id=272318 Shrinking Log in SQL Server
2000 with DBCC SHRINKFILE
http://www.support.microsoft.com/?id=873235 How to stop the log file from
growing
http://www.support.microsoft.com/?id=305635 Timeout while DB expanding
http://www.support.microsoft.com/?id=307487 Shrinking TempDB
Andrew J. Kelly SQL MVP
"Paul Bergson" <pbergson@.allete_nospam.com> wrote in message
news:%23CHDBP%23vGHA.4512@.TK2MSFTNGP05.phx.gbl...
> I'll be the first to admin, I am not a sql expert. I have a nightly job
> that backs up the tran logs on my db's and the box is checked to
> autoshrink the tran log when it exceeds 100 mb. There are no errors and
> the shrink is executed as shown in the log reports but the size doesn't
> appear to change. What is really stumping me is if I do it manually, I
> have to do a tran log backup, shrink the db. This results in a few mb
> shrinkage. If I then go back and do the same thing again (This is
> consistent on four major DB's on this server) tran log backup followed by
> a shrink db it then shrinks the tranlog as expected. Several of these
> db's aren't activily being updated at the backup/shrink time so records
> aren't being inserted (At least not that I am aware of) at the time.
> Is there something I am not doing or an idea someone may have to do this?
> The backup and shrink are being handled via the setup by Enterprise
> Manager.
> --
> Paul Bergson MCT, MCSE, MCSA, Security+, CNE, CNA, CCA
> http://www.pbbergs.com
> Please no e-mails, any questions should be posted in the NewsGroup
> This posting is provided "AS IS" with no warranties, and confers no
> rights.
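For reference, the manual sequence described in the original post boils down to something like the following. The backup path and logical log file name are placeholders, and per the advice in this thread it should be a one-off correction rather than a scheduled routine:
USE MyDb
BACKUP LOG MyDb TO DISK = 'D:\Backups\MyDb_log.trn'   -- placeholder path
DBCC SHRINKFILE (MyDb_log, 2048)                      -- target size in MB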
>|||The transaction log backup is nightly, the attempted shrinking is weekly. I
don't believe a tran log needs to be 10 gig in size, considering the data
file is only 2 gig in size. I would have thought the nightly tran log back
up would force the tran log to re-use from the truncation point forward but
that doesn't seem to be occurring. The goal is to keep the tran log at the max
size that could be used in a day, which is almost 2 gig.
Am I doing something wrong? Probably, but what I don't know.
Paul Bergson MCT, MCSE, MCSA, Security+, CNE, CNA, CCA
http://www.pbbergs.com
Please no e-mails, any questions should be posted in the NewsGroup
This posting is provided "AS IS" with no warranties, and confers no rights.
"Roger Wolter[MSFT]" <rwolter@.online.microsoft.com> wrote in message
news:e%23oT4sCwGHA.1436@.TK2MSFTNGP02.phx.gbl...
> An obvious question is why do you think you need to shrink the files every
> day? If they grow every day then all you're doing is slowing down your
> database every day because it has to grow the file. Do you tear down your
> garage every time you back your car out and build it again when you come
> home? This is about the same logic as shrinking the database and log
> files daily. You should only shrink the files when you know they aren't
> going to grow again.
> --
> This posting is provided "AS IS" with no warranties, and confers no
> rights.
> Use of included script samples are subject to the terms specified at
> http://www.microsoft.com/info/cpyright.htm
> "Paul Bergson" <pbergson@.allete_nospam.com> wrote in message
> news:%23CHDBP%23vGHA.4512@.TK2MSFTNGP05.phx.gbl...
>|||Paul
You don't need to shrink the log, because it is meaningless as it will be
grown again and again. One option is to set up the log file with an appropriate
size so that it does not need to grow frequently.
"Paul Bergson" <pbergson@.allete_nospam.com> wrote in message
news:%23yRZMdGwGHA.1272@.TK2MSFTNGP05.phx.gbl...
> The transaction log backup is nightly, the attempted shrinking is weekly.
> I don't believe a tran log needs to be 10 gig in size, considering the
> data file is only 2 gig in size. I would have thought the nightly tran
> log back up would force the tran log to re-use from the truncation point
> forward but that doesn't seem to be occuring. The goal is to keep the
> tran log the max log that could be used in a day, which is almost 2 gig in
> size.
> Am I doing some thing wrong? Probably, but what I don't know.
> --
> Paul Bergson MCT, MCSE, MCSA, Security+, CNE, CNA, CCA
> http://www.pbbergs.com
> Please no e-mails, any questions should be posted in the NewsGroup
> This posting is provided "AS IS" with no warranties, and confers no
> rights.
> "Roger Wolter[MSFT]" <rwolter@.online.microsoft.com> wrote in message
> news:e%23oT4sCwGHA.1436@.TK2MSFTNGP02.phx.gbl...
>|||I don't think people understand my predicament.
I have four specific db's on my sql server 2000 server. I run these in Full
Recovery mode with nightly tran log and weekly full back ups. The log file
in some instances is more than 5 times the size of the db. I find it hard
to believe that this would be considered normal since a nightly job would
never have more info than the db itself.
If you can provide details as to why this is normal, please do.
Paul Bergson MCT, MCSE, MCSA, Security+, CNE, CNA, CCA
http://www.pbbergs.com
Please no e-mails, any questions should be posted in the NewsGroup
This posting is provided "AS IS" with no warranties, and confers no rights.
"Uri Dimant" <urid@.iscar.co.il> wrote in message
news:%238Ke8jGwGHA.4444@.TK2MSFTNGP05.phx.gbl...
> Paul
> You don't need to shrink the log , because it is meaningless as it will
> be grown againg and again. One option is set up the log file with an
> appropiate size that it does not need to grow frequently .
>
> "Paul Bergson" <pbergson@.allete_nospam.com> wrote in message
> news:%23yRZMdGwGHA.1272@.TK2MSFTNGP05.phx.gbl...
>|||Paul Bergson wrote:
> I don't think people understand my predicament.
> I have four specific db's on my sql server 2000 server. I run these in Full
> Recovery mode with nightly tran log and weekly full back ups. The log file
> in some instances is more than 5 times the size of the db. I find it hard
> to believe that this would be considered normal since a nightly job would
> never have more info than the db itself.
> If you can provide details as to why this is normal, please do.
>
Are you doing something like rebuilding indexes at night? Large imports?
Tracy McKibben
MCDBA
http://www.realsqlguy.com|||Imports can be large
Paul Bergson MCT, MCSE, MCSA, Security+, CNE, CNA, CCA
http://www.pbbergs.com
Please no e-mails, any questions should be posted in the NewsGroup
This posting is provided "AS IS" with no warranties, and confers no rights.
"Tracy McKibben" <tracy@.realsqlguy.com> wrote in message
news:uKZQv8GwGHA.4296@.TK2MSFTNGP06.phx.gbl...
> Paul Bergson wrote:
> Are you doing something like rebuilding indexes at night? Large imports?
>
> --
> Tracy McKibben
> MCDBA
> http://www.realsqlguy.com|||Paul Bergson wrote:
> Imports can be large
>
Ok, that could explain the large transaction log file. Say you're
importing 100,000 new rows of data, all as one transaction. The
transaction log has to be able to hold that entire transaction, in case
it has to roll back.
Tracy McKibben
MCDBA
http://www.realsqlguy.com|||This may turn out to be a moot point. I'm on the phone with the vendor and
they are somehow using the log files to store log history. It sounds like
it is unrelated to the actual logs that are needed for rollback. I don't get
it. I need to get more info if this is the case. It sounds to me like they
have a configuration option which could allow me to control history kept
within this.
Maybe this is normal use, seems odd to me though.
Paul Bergson MCT, MCSE, MCSA, Security+, CNE, CNA, CCA
http://www.pbbergs.com
Please no e-mails, any questions should be posted in the NewsGroup
This posting is provided "AS IS" with no warranties, and confers no rights.
"Tracy McKibben" <tracy@.realsqlguy.com> wrote in message
news:O57rkXHwGHA.1224@.TK2MSFTNGP03.phx.gbl...
> Paul Bergson wrote:
> Ok, that could explain the large transaction log file. Say you're
> importing 100,000 new rows of data, all as one transaction. The
> transaction log has to be able to hold that entire transaction, in case it
> has to roll back.
>
> --
> Tracy McKibben
> MCDBA
> http://www.realsqlguy.com
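One quick way to see how much of a 10 GB log is actually in use before deciding anything:
DBCC SQLPERF(LOGSPACE)   -- reports log file size and the percent of the log in use for every database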
Monday, February 13, 2012
auto resizing column
hi all
i am working on sql reporting 2005
how can i set the column width to autoresize depending on the maximum length of data for that column?
plz help me.
As far as I know, there is no way to autoresize column width. The only property to account for adjusting a cell's dimensions is "CanGrow." If this is set to True, the cell will increase vertically to fit all the data. However, this only adjusts the height and not the width.
Friday, February 10, 2012
auto identity for each Type
I am working on an accounting system using VB.NET and sql server 2005 as a database. The application should be used by multiple users.
I have the following structure:
Voucher: ID (primary), Date,TypeID, ReferenceCode, ....
Type: ID, Code, Name. (the user can add new type anytime!)
(Ex: PV- payment voucher, JV - Journal Voucher ,...)
When adding a voucher the user will choose a type; according to this type (for each year) a counter will be incremented.
for example: PV1, PV2...PV233,... the other type will have its separate counter JV1, JV2 ,...JV4569,..
I am using the SqlTransaction because I am doing other operations that should be transactional with the insertion of the Voucher.
The question is:
What is the best solution to generate a counter for each type? (With a code sample.)
Thanks.|||Do you really need to have the 'PV' and 'JV' before each value? If you could use ints, then you could use identity columns. That's the standard way of doing this.
You can always tack on a JV or PV in the front end if that's the way your boss wants it to look in a report or something.
from BOL:
IDENTITY
Indicates that the new column is an identity column. When a new row is added to the table, Microsoft® SQL Server™ provides a unique, incremental value for the column. Identity columns are commonly used in conjunction with PRIMARY KEY constraints to serve as the unique row identifier for the table. The IDENTITY property can be assigned to tinyint, smallint, int, bigint, decimal(p,0), or numeric(p,0) columns. Only one identity column can be created per table. Bound defaults and DEFAULT constraints cannot be used with an identity column. You must specify both the seed and increment or neither. If neither is specified, the default is (1,1).|||if you were using mysql, this functionality (starting a new auto_increment within each type group) is built in
it's impossible to do this with an IDENTITY column
you will have to generate your own numbers, and i would recommend very strongly against it|||the counter in the question is the ReferenceCode in the Voucher table
Voucher: ID (primary), Date,TypeID, ReferenceCode.
so for each added voucher, and according to the TypeID, the reference code will be generated. Let's say the last counter for the PV type is 230, so the ReferenceCode will be PV231. If the Type is JV and the last counter is 566 then the ReferenceCode will be JV567, and so on.
We must not forget that we are working in a multi-user environment, and the Reference Code should be unique.|||put the JV or PV in another field and concatenate it in the front end. smart numbers are stupid and loved by the accounting types. this kind of thing slows down joins and causes other kinds of pain. i have not seen smart numbers in a project for five years and that was a legacy foxpro app.
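A sketch of the approach the replies recommend - a plain IDENTITY surrogate plus the type code, with the display reference composed only when it is shown. The table and column names follow the thread's description but are illustrative; note this gives one global sequence (PV1, JV2, PV3, ...) rather than an independent counter per type, which is exactly the trade-off argued for above:
CREATE TABLE [Type] (
    ID   int IDENTITY(1,1) PRIMARY KEY,
    Code varchar(5)  NOT NULL,   -- 'PV', 'JV', ...
    Name varchar(50) NOT NULL
)

CREATE TABLE Voucher (
    ID     int IDENTITY(1,1) PRIMARY KEY,   -- the counter lives here and is generated safely under concurrency
    TypeID int NOT NULL REFERENCES [Type](ID),
    [Date] datetime NOT NULL
)

-- compose 'PV231', 'JV4569', ... in a query or in the front end, not in storage
SELECT t.Code + CAST(v.ID AS varchar(10)) AS ReferenceCode, v.[Date]
FROM Voucher v
JOIN [Type] t ON t.ID = v.TypeID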