Sunday, March 25, 2012
Automatically grow file did not function.
I tried to manually enter a value for Space Allocated (MB), but I received the error below:
fcb::ZeroFile(): GetOverLappedResult() failed with error 121.
This error was associated with disk problems in the past. I'd try to verify
the integrity of the disk.
--
Linchi Shea
linchi_shea@.NOSPAMml.com
"Jittima D." <anonymous@.discussions.microsoft.com> wrote in message
news:F310CAB1-E3B8-48A0-B90A-28D5C3273DEE@.microsoft.com...
> My server (SQL 7.0 SP4) hit the problem: Automatically grow file did not
> function.
> I tried to manually input a value into Space Allocated (MB), but I received the error
> below:
> fcb::ZeroFile(): GetOverLappedResult() failed with error 121.
>
automatically grow file
I had an insert statement that stopped working the other day. I figured out
that it had to do with the fact that "space available" on the database was
very small. I had "automatically grow file" on the db set to 10%, and I had
plenty of disk space. I think that when my client called the insert
statement, the db was trying to automatically grow, but it was taking so
long that the client timed out.
I finally called:
ALTER DATABASE xxx
MODIFY FILE
(NAME = 'xxx_data',
SIZE = 3000MB)
which took a minute and a half, and then the insert statement began working
quickly again.
I have a couple of half-formed ideas on how to deal with this over the long
term.
My db is now 3GB. If I set "automatically grow file" to 1%, instead of 10%,
does that mean that the automatic growing next time will be quicker? Is
the time it takes proportional to the amount of space we are adding?
Maybe I could create a weekly job that looks to see if the space available
on a database is less than 50MB or so, and if so, expands the database size.
I'm not sure how to write this. I can call sp_spaceused, but I don't know
how to stick "unallocated space" into a variable, since sp_spaceused returns
two datasets.
Any other ideas?
Thanks,
George|||There is probably a simpler way to do this, but...
I would follow your weekly idea.
If you run sp_helptext sp_spaceused you will get back the queries used to
run the sp_spaceused command.
I would probably take that code and create a new sproc. In the new sproc,
instead of returning the results, you can check them there and make the
appropriate changes.
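For illustration, a minimal sketch of such a check, assuming SQL Server 7.0/2000 where sysfiles and FILEPROPERTY are available (the 50 MB threshold and the PRINT action are placeholders for whatever growth logic you plug in):

```sql
-- Sketch only: free space = allocated pages minus used pages, data files only.
-- sysfiles.size and FILEPROPERTY(..., 'SpaceUsed') are in 8 KB pages,
-- so dividing by 128 converts pages to MB. status & 0x40 flags the log file.
DECLARE @freeMB int
SELECT @freeMB = SUM(size - FILEPROPERTY(name, 'SpaceUsed')) / 128
FROM sysfiles
WHERE (status & 0x40) = 0

IF @freeMB < 50
    PRINT 'Under 50 MB free -- grow the data file here (ALTER DATABASE ... MODIFY FILE)'
```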
HTH
Rick Sawtell
MCT, MCSD, MCDBA|||Sp_helptext is useful. Thanks.
I'm mostly there, but now I am having trouble passing a parameter into the
alter database command.
declare @newsize varchar(8)
...
ALTER DATABASE xyz MODIFY FILE
(NAME = xyz_data, SIZE = @newsize)
Line 5: Incorrect syntax near '@newsize'.
Thanks,
George|||Seems you can't use a variable for the size, quite simply. Try using dynamic SQL to EXEC the
statement instead.
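For example, a sketch of the dynamic SQL approach (database name, file name, and size are illustrative):

```sql
-- Sketch: build the ALTER DATABASE statement as a string, then EXEC it,
-- since ALTER DATABASE does not accept a variable for SIZE directly.
DECLARE @newsize varchar(8), @sql varchar(500)
SET @newsize = '3500'
SET @sql = 'ALTER DATABASE xyz MODIFY FILE '
         + '(NAME = xyz_data, SIZE = ' + @newsize + 'MB)'
EXEC (@sql)
```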
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"George Wynne" <george@.nssco.com> wrote in message news:O5hyQ6FkEHA.3724@.TK2MSFTNGP11.phx.gbl...
> Sp_helptext is useful. Thanks.
> I'm mostly there, but now I am having trouble passing a parameter into the
> alter database command.
> declare @newsize varchar(8)
> ...
> ALTER DATABASE xyz MODIFY FILE
> (NAME = xyz_data, SIZE = @newsize)
> Line 5: Incorrect syntax near '@newsize'.
> Thanks,
> George
Automatically grow database
If the autogrow is set on a database primary file group,
when does the server actually grow the file? Does it wait
for an out of space condition or is it automated?|||It is before an out of space condition. But you can set up a job to automate
it with the ALTER DATABASE statement.
"Kirk" <anonymous@.discussions.microsoft.com> wrote in message
news:5ed001c3df71$7aa9c230$7d02280a@.phx.gbl...
quote:
> If the autogrow is set on a database primary file group
> when does the server actually grow the file? Does it wait
> for an out of space condition or is it automated?
|||The reason I ask is that we have the primary file group set to
autogrow at 100 MB. We have a maintenance plan that runs
every Sunday. For the past two Sundays the job has failed
due to out of space. There is plenty of space on the
drive for the file to grow. Not sure why the job is
failing on a space issue. That is why I ask: on error, does it grow?|||Having autogrow is better than nothing. But better yet, don't leave it
solely for SQL Server. The better way is to size your db, forecast its
growth, and allocate space accordingly. Leave the autogrow on but keep
checking back whether there is need of growing again, and if needed, do it
manually at a not-so-busy time. Autogrow can take time so long that your
application may error out while waiting for the growth (though it's not
likely in your case of autogrow size).
Not sure what your problem is. You have a plan to grow the db file every
Sunday? That doesn't sound right. Which job was failing? What's the
role of the maintenance plan in your problem?
<anonymous@.discussions.microsoft.com> wrote in message
news:126301c3df8b$67613d60$a001280a@.phx.gbl...
quote:
> The reason I ask is that we have the primary file group to
> autogrow at 100mb. We have a maintenance plan that runs
> every Sunday. For the past 2 Sunday's the job has failed
> due to out of space. There is plenty of space on the
> drive for the file to grow. Not sure why the job is
> failing on space issue. That is why I ask if it is on
> error does it grow.
|||Yes, on reaching its current size it will grow by either a
percentage of the current size or a fixed number of MB.
It will take up space on the hard disk, so make sure
you have plenty of disk space.
J
>--Original Message--
>If the autogrow is set on a database primary file group
>when does the server actually grow the file? Does it wait
>for an out of space condition or is it automated?
Thursday, March 22, 2012
automatically grow file
Is it in megabytes or by percent?|||I'm a fan of megabytes, and always make the growth factor a multiple of 64 megabytes if that is practical.
-PatP|||I tend to be a fan of % growth up to around 15-20 GB. After that point, I prefer to carefully monitor and control the growth process myself.
Some other notes/thoughts:
1. Set a max limit on both the log file and the database file
2. Consider fixing the size of the tempdb; of all the dbs, this one will tend to change size most often. I have seen others recommend setting the size of tempdb to 1.5x RAM.
I am by no means the subject expert in this area; read BOL and Inside SQL 2000 carefully and draw your own conclusions from your own needs and requirements.
Regards,
hmscott|||It's vital to know how your data is going to grow, at what rate, and how often. Carefully estimating the needs for your data growth will spare you hours of "watching" its uncontrollable growth. Preallocation of space per device and properly configuring growth (preferably based on the multiple of default disk/controller cluster size) is the way to go. For data and log devices of user databases preallocation should cover at least a month worth of used space growth without affecting the actual size of the file. And of course, don't use percentage, it may get you by for a little while, right until you forget to check it...
automatic grow does not work
Hi all,
we encountered a problem in a testing environment where the database does not grow
even though it is configured to grow automatically. There is SQL Server 2000
installed on two Windows Server 2003 machines in a cluster. The application runs on its
dedicated Windows server, using ADO.NET for data manipulation. When the data file is
full (or only a small amount of free space remains, ~2MB), all the application's db
requests fail with a timeout. When I manually enlarge the data file, the application
starts to work again.
Is there any known issue regarding automatic growth of the database? Does
anyone face this problem?
eXavier|||Hi
How big is your database?
It might take time. For example, if the database is 20GB and you set a 20%
growth rate, that means when the database grows it will increase by 2GB. This
may or may not be what you want.
"eXavier" <fhns@.centrum.cz> wrote in message
news:ekIyJogWFHA.2740@.TK2MSFTNGP14.phx.gbl...
> Hi all,
> we encountered problem on testing environment that database does not grow
> even though it is configured to grow automatically. There is SQL Server
2000
> installed on two Windows Server 2003 in cluster. Application runs on its
> dedicated WinServer using ADO.NET for data manipulation. When data file is
> full (or it remains only small free space ~2MB), all application's db
> requests fail with timeout. When I manually enlarge data file, application
> starts to work again.
> Is there any known issue regarding to automatical grow of database. Does
> anyone face this problem ?
> eXavier
>|||After the restore, the database is about 1.7 GB. I tried setting the growth to
both a fixed amount and a percentage factor, with no success. When I did the SQL
insert from QA, it took about 3 minutes but the file did increase. I have only a
60-second timeout on the ADO connection string; could it be that the connection
timing out before the data is inserted prevents the file
growth from being performed?
"Uri Dimant" <urid@.iscar.co.il> wrote in message
news:%23gBBktgWFHA.2572@.TK2MSFTNGP14.phx.gbl...
> Hi
> How big is your database?
> It might take time .For example if the database is 20GB and you set a 20%
> growth rate that means if the database grows it will increase by 2GB.
This
> may or may not be what you want.
>
>
>
> "eXavier" <fhns@.centrum.cz> wrote in message
> news:ekIyJogWFHA.2740@.TK2MSFTNGP14.phx.gbl...
>|||Well, what about your hardware? Do you have enough space for the database
file?
How many users are connected to the database? Have you checked whether there is
blocking or locking?
Try changing the command timeout to 0.
"eXavier" <fhns@.centrum.cz> wrote in message
news:OS9EcBhWFHA.3840@.tk2msftngp13.phx.gbl...
> After restore, the database is about 1.7 GB, I tried to set growing to
> both - fixed amount and percentage factor with no success. When I did SQL
> insert from QA - it took about 3 minutes but the file increased. I have
only
> 60 seconds timeout on ADO connection string, may it be that timeouting of
> connection prior to data are inserted could cause not performing of file
> growth ?
>
> "Uri Dimant" <urid@.iscar.co.il> wrote in message
> news:%23gBBktgWFHA.2572@.TK2MSFTNGP14.phx.gbl...
>|||First off you should never rely on Autogrow to do the work for you. You
should always have plenty of free space in the files so Autogrow never kicks
in. If a thread invokes the autogrow process and during the growth the user
connection times out it can roll back the growth that it did as well. So it
may not look like it grew when in fact it did and then shrank again. Never
use the default of 10% on anything over a few hundred MBs. And make it just
a few MB, so you are sure it will grow in less than 30 seconds even if you have
a slow disk subsystem.
Andrew J. Kelly SQL MVP
"eXavier" <fhns@.centrum.cz> wrote in message
news:ekIyJogWFHA.2740@.TK2MSFTNGP14.phx.gbl...
> Hi all,
> we encountered problem on testing environment that database does not grow
> even though it is configured to grow automatically. There is SQL Server
> 2000
> installed on two Windows Server 2003 in cluster. Application runs on its
> dedicated WinServer using ADO.NET for data manipulation. When data file is
> full (or it remains only small free space ~2MB), all application's db
> requests fail with timeout. When I manually enlarge data file, application
> starts to work again.
> Is there any known issue regarding to automatical grow of database. Does
> anyone face this problem ?
> eXavier
>|||Thank you for your explanation. In fact, I was facing a similar issue: the
transaction log was growing too fast, and when it reached about 2 GB the
application also started to time out. To avoid this I created a job that
shrinks the transaction log every midnight. Is it good practice to add a
file size check/grow job as well?
eXavier
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:%23f3AFLhWFHA.3176@.TK2MSFTNGP12.phx.gbl...
> First off you should never rely on Autogrow to do the work for you. You
> should always have plenty of free space in the files so Autogrow never
> kicks in. If a thread invokes the autogrow process and during the growth the
> user connection times out it can roll back the growth that it did as well. So
> it may not look like it grew when in fact it did and then shrunk again.
> Never use the default of 10% on anything over a few hundred MB's. And make it
> just a few MB so as you are sure it will grow in less than 30 seconds if you
> have a slow disk subsystem.
> --
> Andrew J. Kelly SQL MVP
>
> "eXavier" <fhns@.centrum.cz> wrote in message
> news:ekIyJogWFHA.2740@.TK2MSFTNGP14.phx.gbl...
>|||Why are you shrinking the files? You just proved to yourself that the
growth was causing problems and yet you still continue to shrink. Have a
look here:
http://www.karaszi.com/SQLServer/info_dont_shrink.asp
Andrew J. Kelly SQL MVP
"eXavier" <fhns@.centrum.cz> wrote in message
news:eIfscSiWFHA.1796@.TK2MSFTNGP15.phx.gbl...
> Thank you for your explanation. In fact, I was facing similar issue, when
> transaction log was growing too fast and when it was about 2 GB, the
> application also started to timeout. To avoid this I created job for
> shrinking transaction log every midnight. Is it good practice to do some
> file size check/grow job yet?
> eXavier
> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
> news:%23f3AFLhWFHA.3176@.TK2MSFTNGP12.phx.gbl...
>|||I shrink the transaction log because the application hangs if the file
becomes too large (~2GB). There is enough space on the disk array (~40GB).
It could be the same problem as with the data file - a timed-out
connection together with autogrow; I'll have to test it.
If I set a max size on the log file, does SQL Server reuse the file from the
beginning? Right now the log size keeps increasing, appending
data at the end - this is the reason for my daily shrink. (I left the default
values for auto grow - i.e. unrestricted growth by 10%; I'll change that.) What
are the best practices for setting the size / max size of the database and log? I expect the data
file size could be 5-10 GB in my application, but I have no idea about the
optimal size of the transaction log, or whether it is better to set a max size or
rather let it grow unrestricted.
The application is highly transactional, but there are typically not many
transactions running in parallel - it processes text-based business messages,
mapping them to the database, and several clients then operate on the data, typically
changing states, again in a DB transaction.
Do you have any tips, or could you recommend some articles or books on this
topic?
Thanks a lot
eXavier
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:uwlMFsiWFHA.796@.TK2MSFTNGP09.phx.gbl...
> Why are you shrinking the files? You just proved to yourself that the
> growth was causing problems and yet you still continue to shrink. Have a
> look here:
> http://www.karaszi.com/SQLServer/info_dont_shrink.asp
> --
> Andrew J. Kelly SQL MVP
>
> "eXavier" <fhns@.centrum.cz> wrote in message
> news:eIfscSiWFHA.1796@.TK2MSFTNGP15.phx.gbl...
>|||> the application hangs if the file
> becomes too large (~2GB)
Yes, because with a 2GB log and 10% growth you are forcing SQL Server
to allocate up to 200MB in a transaction! The best policy is to assess the
correct size under test conditions, then fix the size and leave it. No
shrinking and no autogrow. Log usage should be controlled by the
frequency of log backups, not by shrinking. Shrinking achieves nothing
in a production OLTP environment.
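As a sketch of that log-backup approach (the database name and backup path are illustrative):

```sql
-- Sketch: a frequent log backup empties the inactive part of the log so the
-- space can be reused instead of the file growing. Schedule it, e.g., every
-- 15 minutes; the name and path below are placeholders.
BACKUP LOG xyz TO DISK = 'E:\SQLBackup\xyz_log.trn'
```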
Once you've fixed the size, monitor usage and create a new log file if
you need one rather than increase the size of the existing one. Since
your file system will already be highly fragmented you should probably
start afresh with a new file of the correct size for optimum
performance.
David Portas
SQL Server MVP
--|||> If I 'd set max size of log file, does SQL Server reuses the file from
> beggining ?
If SQL Server were to do that, you could end up in a situation where SQL Server
had to break your sequence of transaction log backup files. If the db is in the full recovery
model, the log is emptied when you do a log backup. If you don't do log backups, run in the
simple recovery model.
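A sketch of that last suggestion (the database name is illustrative):

```sql
-- Sketch: if point-in-time recovery is not required, the simple recovery
-- model lets SQL Server truncate the log automatically at checkpoints.
ALTER DATABASE xyz SET RECOVERY SIMPLE
```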
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"eXavier" <fhns@.centrum.cz> wrote in message news:ufn39WjWFHA.3240@.TK2MSFTNGP10.phx.gbl...[v
bcol=seagreen]
>I shrink the transaction log because the application hangs if the file
> becomes too large (~2GB). There is enough space on disk array (~40GB).
> Probably it could be the same problem as with data file - timeouted
> connection together with autogrow, I'll have to test it.
> If I 'd set max size of log file, does SQL Server reuses the file from
> beggining ? Now it looks like the log size is still increasing, appending
> data at the end - this is the reason for my daily shrink. (I left default
> values for auto grow - i.e. unrestricted grow by 10%, I'll change it.). What
> are best practices for setting size / max size of database/log. I expect data
> file size could be 5-10 GB in my application, but don't have idea about
> optimal size of transaction log and if it is better to set max size or
> rather letting it grow unrestricted.
> The application is highly transactional, but there are typically not much
> transactions running in parallel - it processes text-based business messages
> mapping them to database - several clients then operates on data typically
> changing states again in DB transaction.
> Do you have any tips or could you recomend some articles or books on this
> topic ?
> Thanks a lot
> eXavier
>
> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
> news:uwlMFsiWFHA.796@.TK2MSFTNGP09.phx.gbl...
automatical grow does not work
we encountered problem on testing environment that database does not grow
even though it is configured to grow automatically. There is SQL Server 2000
installed on two Windows Server 2003 in cluster. Application runs on its
dedicated WinServer using ADO.NET for data manipulation. When data file is
full (or it remains only small free space ~2MB), all application's db
requests fail with timeout. When I manually enlarge data file, application
starts to work again.
Is there any known issue regarding to automatical grow of database. Does
anyone face this problem ?
eXavierHi
How big is your database?
It might take time .For example if the database is 20GB and you set a 20%
growth rate that means if the database grows it will increase by 2GB. This
may or may not be what you want.
"eXavier" <fhns@.centrum.cz> wrote in message
news:ekIyJogWFHA.2740@.TK2MSFTNGP14.phx.gbl...
> Hi all,
> we encountered problem on testing environment that database does not grow
> even though it is configured to grow automatically. There is SQL Server
2000
> installed on two Windows Server 2003 in cluster. Application runs on its
> dedicated WinServer using ADO.NET for data manipulation. When data file is
> full (or it remains only small free space ~2MB), all application's db
> requests fail with timeout. When I manually enlarge data file, application
> starts to work again.
> Is there any known issue regarding to automatical grow of database. Does
> anyone face this problem ?
> eXavier
>|||After restore, the database is about 1.7 GB, I tried to set growing to
both - fixed amount and percentage factor with no success. When I did SQL
insert from QA - it took about 3 minutes but the file increased. I have only
60 seconds timeout on ADO connection string, may it be that timeouting of
connection prior to data are inserted could cause not performing of file
growth ?
"Uri Dimant" <urid@.iscar.co.il> wrote in message
news:%23gBBktgWFHA.2572@.TK2MSFTNGP14.phx.gbl...
> Hi
> How big is your database?
> It might take time .For example if the database is 20GB and you set a 20%
> growth rate that means if the database grows it will increase by 2GB.
This
> may or may not be what you want.
>
>
>
> "eXavier" <fhns@.centrum.cz> wrote in message
> news:ekIyJogWFHA.2740@.TK2MSFTNGP14.phx.gbl...
grow
> 2000
is
application
>|||Well, what is about your hardware? Do you have enough space for the database
file?
How many users connected to the database? Have you checked are there
blocking,locking?
Try to change a TimeOut command to 0
"eXavier" <fhns@.centrum.cz> wrote in message
news:OS9EcBhWFHA.3840@.tk2msftngp13.phx.gbl...
> After restore, the database is about 1.7 GB, I tried to set growing to
> both - fixed amount and percentage factor with no success. When I did SQL
> insert from QA - it took about 3 minutes but the file increased. I have
only
> 60 seconds timeout on ADO connection string, may it be that timeouting of
> connection prior to data are inserted could cause not performing of file
> growth ?
>
> "Uri Dimant" <urid@.iscar.co.il> wrote in message
> news:%23gBBktgWFHA.2572@.TK2MSFTNGP14.phx.gbl...
20%
> This
> grow
Server
its
file
> is
> application
Does
>|||First off you should never rely on Autogrow to do the work for you. You
should always have plenty of free space in the files so Autogrow never kicks
in. If a thread invokes the autogrow process and during the growth the user
connection times out it can roll back the growth that it did as well. So it
may not look like it grew when in fact it did and then shrunk again. Never
use the default of 10% on anything over a few hundred MB's. And make it just
a few MB so as you are sure it will grow in less than 30 seconds if you have
a slow disk subsystem.
Andrew J. Kelly SQL MVP
"eXavier" <fhns@.centrum.cz> wrote in message
news:ekIyJogWFHA.2740@.TK2MSFTNGP14.phx.gbl...
> Hi all,
> we encountered problem on testing environment that database does not grow
> even though it is configured to grow automatically. There is SQL Server
> 2000
> installed on two Windows Server 2003 in cluster. Application runs on its
> dedicated WinServer using ADO.NET for data manipulation. When data file is
> full (or it remains only small free space ~2MB), all application's db
> requests fail with timeout. When I manually enlarge data file, application
> starts to work again.
> Is there any known issue regarding to automatical grow of database. Does
> anyone face this problem ?
> eXavier
>|||Thank you for your explanation. In fact, I was facing similar issue, when
transaction log was growing too fast and when it was about 2 GB, the
application also started to timeout. To avoid this I created job for
shrinking transaction log every midnight. Is it good practice to do some
file size check/grow job yet?
eXavier
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:%23f3AFLhWFHA.3176@.TK2MSFTNGP12.phx.gbl...
> First off you should never rely on Autogrow to do the work for you. You
> should always have plenty of free space in the files so Autogrow never
kicks
> in. If a thread invokes the autogrow process and during the growth the
user
> connection times out it can roll back the growth that it did as well. So
it
> may not look like it grew when in fact it did and then shrunk again.
Never
> use the default of 10% on anything over a few hundred MB's. And make it
just
> a few MB so as you are sure it will grow in less than 30 seconds if you
have
> a slow disk subsystem.
> --
> Andrew J. Kelly SQL MVP
>
> "eXavier" <fhns@.centrum.cz> wrote in message
> news:ekIyJogWFHA.2740@.TK2MSFTNGP14.phx.gbl...
grow
is
application
>|||Why are you shrinking the files? You just proved to yourself that the
growth was causing problems and yet you still continue to shrink. Have a
look here:
http://www.karaszi.com/SQLServer/info_dont_shrink.asp
Andrew J. Kelly SQL MVP
"eXavier" <fhns@.centrum.cz> wrote in message
news:eIfscSiWFHA.1796@.TK2MSFTNGP15.phx.gbl...
> Thank you for your explanation. In fact, I was facing similar issue, when
> transaction log was growing too fast and when it was about 2 GB, the
> application also started to timeout. To avoid this I created job for
> shrinking transaction log every midnight. Is it good practice to do some
> file size check/grow job yet?
> eXavier
> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
> news:%23f3AFLhWFHA.3176@.TK2MSFTNGP12.phx.gbl...
> kicks
> user
> it
> Never
> just
> have
> grow
> is
> application
>|||I shrink the transaction log because the application hangs if the file
becomes too large (~2GB). There is enough space on the disk array (~40GB).
It could be the same problem as with the data file - a timed-out
connection together with autogrow; I'll have to test it.
If I set a max size for the log file, does SQL Server reuse the file from
the beginning? Right now the log size keeps increasing, appending data at
the end - this is the reason for my daily shrink. (I left the default
values for autogrow - i.e. unrestricted growth by 10% - I'll change that.)
What are the best practices for setting the size / max size of the database
and log? I expect the data file could be 5-10 GB in my application, but I
have no idea about the optimal size of the transaction log, or whether it
is better to set a max size or rather let it grow unrestricted.
The application is highly transactional, but there are typically not many
transactions running in parallel - it processes text-based business
messages, mapping them to the database - several clients then operate on
the data, typically changing states, again in DB transactions.
Do you have any tips, or could you recommend some articles or books on this
topic?
Thanks a lot
eXavier
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:uwlMFsiWFHA.796@.TK2MSFTNGP09.phx.gbl...
> Why are you shrinking the files? You just proved to yourself that the
> growth was causing problems and yet you still continue to shrink. Have a
> look here:
> http://www.karaszi.com/SQLServer/info_dont_shrink.asp
> --
> Andrew J. Kelly SQL MVP
>
> "eXavier" <fhns@.centrum.cz> wrote in message
> news:eIfscSiWFHA.1796@.TK2MSFTNGP15.phx.gbl...
|||> the application hangs if the file
> becomes too large (~2GB)
Yes, because with a 2GB log and 10% growth you are forcing SQL Server
to allocate up to 200MB in a transaction! Best policy is to assess the
correct size under test conditions, then fix the size and leave it. No
shrinking and no autogrow. Log usage should be controlled by the
frequency of log backups, not by shrinking. Shrinking achieves nothing
in a production OLTP environment.
Once you've fixed the size, monitor usage and create a new log file if
you need one rather than increase the size of the existing one. Since
your file system will already be highly fragmented you should probably
start afresh with a new file of the correct size for optimum
performance.
David Portas
SQL Server MVP
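A sketch of David's policy above (size the log once under test, no shrink, no autogrow) in T-SQL. The database name `MyDb`, the logical file names, and the path are all hypothetical; check `sysfiles` in your database for the real logical names:

```sql
-- Fix the log at a size determined under test conditions, then leave it alone.
ALTER DATABASE MyDb
MODIFY FILE (NAME = 'MyDb_log', SIZE = 1000MB)

-- If monitoring later shows it is too small, add a second log file at the
-- right size rather than growing (and fragmenting) the existing one.
ALTER DATABASE MyDb
ADD LOG FILE (NAME = 'MyDb_log2', FILENAME = 'E:\sql\MyDb_log2.ldf', SIZE = 1000MB)
```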
--
|||> If I'd set a max size for the log file, does SQL Server reuse the file from
> the beginning?
If SQL Server were to do that, you could end up in a situation where SQL Server had to break your
sequence of transaction log backup files. If the db is in full recovery model, the log is emptied
when you do a log backup. If you don't do log backups, run in simple recovery mode.
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"eXavier" <fhns@.centrum.cz> wrote in message news:ufn39WjWFHA.3240@.TK2MSFTNGP10.phx.gbl...
>I shrink the transaction log because the application hangs if the file
> becomes too large (~2GB). There is enough space on disk array (~40GB).
> Probably it could be the same problem as with data file - timeouted
> connection together with autogrow, I'll have to test it.
> If I 'd set max size of log file, does SQL Server reuses the file from
> beggining ? Now it looks like the log size is still increasing, appending
> data at the end - this is the reason for my daily shrink. (I left default
> values for auto grow - i.e. unrestricted grow by 10%, I'll change it.). What
> are best practices for setting size / max siz of database/log. I expect data
> file size could be 5-10 GB in my application, but don't have idea about
> optimal size of transaction log and if it is better to set max size or
> rather letting it grow unrestricted.
> The application is highly transactional, but there are typically not much
> transactions running in parallel - it processes text-based business messages
> mapping them to database - several clients then operates on data typically
> changing states again in DB transaction.
> Do you have any tips or could you recomend some articles or books on this
> topic ?
> Thanks a lot
> eXavier
>
> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
> news:uwlMFsiWFHA.796@.TK2MSFTNGP09.phx.gbl...
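Tibor's point above (log space is freed for reuse by log backups, or by running in simple recovery when log backups aren't taken) looks roughly like this in SQL Server 2000 syntax; the database name and backup path are placeholders:

```sql
-- Full recovery: back up the log regularly so space inside it is reused.
BACKUP LOG MyDb TO DISK = 'D:\backup\MyDb_log.trn'

-- If point-in-time recovery is not needed, simple recovery avoids log
-- growth between backups (SQL Server 2000 syntax):
ALTER DATABASE MyDb SET RECOVERY SIMPLE
```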
automatic grow does not work
We encountered a problem in our testing environment: the database does not grow
even though it is configured to grow automatically. SQL Server 2000 is
installed on two Windows Server 2003 machines in a cluster. The application
runs on its own dedicated Windows server, using ADO.NET for data manipulation.
When the data file is full (or only a small amount of free space ~2MB remains),
all of the application's db requests fail with a timeout. When I manually
enlarge the data file, the application starts to work again.
Is there any known issue regarding automatic grow of the database? Has
anyone faced this problem?
eXavier
|||Hi
How big is your database?
It might take time. For example, if the database is 20GB and you set a 20%
growth rate, that means when the database grows it will increase by 2GB. This
may or may not be what you want.
"eXavier" <fhns@.centrum.cz> wrote in message
news:ekIyJogWFHA.2740@.TK2MSFTNGP14.phx.gbl...
> Hi all,
> we encountered problem on testing environment that database does not grow
> even though it is configured to grow automatically. There is SQL Server 2000
> installed on two Windows Server 2003 in cluster. Application runs on its
> dedicated WinServer using ADO.NET for data manipulation. When data file is
> full (or it remains only small free space ~2MB), all application's db
> requests fail with timeout. When I manually enlarge data file, application
> starts to work again.
> Is there any known issue regarding to automatical grow of database. Does
> anyone face this problem ?
> eXavier
>
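Following Uri's arithmetic, one way to see per file how much space is used and what a percentage autogrow would actually allocate; a sketch for SQL Server 2000, where `sysfiles` and `FILEPROPERTY` report sizes in 8KB pages:

```sql
-- Run in the target database. /128.0 converts 8KB pages to MB.
SELECT name,
       size / 128.0 AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128.0 AS used_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128.0 AS free_mb,
       size / 128.0 * 0.10 AS growth_at_10_percent_mb
FROM sysfiles
```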
|||After restore, the database is about 1.7 GB. I tried setting the growth to
both a fixed amount and a percentage factor, with no success. When I did the
SQL insert from QA it took about 3 minutes, but the file did increase. I have
only a 60 second timeout on the ADO connection string - could the connection
timing out before the data is inserted cause the file growth not to be
performed?
"Uri Dimant" <urid@.iscar.co.il> wrote in message
news:%23gBBktgWFHA.2572@.TK2MSFTNGP14.phx.gbl...
> Hi
> How big is your database?
> It might take time .For example if the database is 20GB and you set a 20%
> growth rate that means if the database grows it will increase by 2GB. This
> may or may not be what you want.
>
>
>
> "eXavier" <fhns@.centrum.cz> wrote in message
> news:ekIyJogWFHA.2740@.TK2MSFTNGP14.phx.gbl...
>
|||Well, what about your hardware? Do you have enough space for the database
file?
How many users are connected to the database? Have you checked whether there
is blocking/locking?
Try changing the command timeout to 0.
"eXavier" <fhns@.centrum.cz> wrote in message
news:OS9EcBhWFHA.3840@.tk2msftngp13.phx.gbl...
> After restore, the database is about 1.7 GB, I tried to set growing to
> both - fixed amount and percentage factor with no success. When I did SQL
> insert from QA - it took about 3 minutes but the file increased. I have only
> 60 seconds timeout on ADO connection string, may it be that timeouting of
> connection prior to data are inserted could cause not performing of file
> growth ?
>
> "Uri Dimant" <urid@.iscar.co.il> wrote in message
> news:%23gBBktgWFHA.2572@.TK2MSFTNGP14.phx.gbl...
>
|||First off you should never rely on Autogrow to do the work for you. You
should always have plenty of free space in the files so Autogrow never kicks
in. If a thread invokes the autogrow process and during the growth the user
connection times out it can roll back the growth that it did as well. So it
may not look like it grew when in fact it did and then shrunk again. Never
use the default of 10% on anything over a few hundred MBs. Make the growth
increment just a few MB, so that you are sure it will grow in less than 30
seconds even if you have a slow disk subsystem.
Andrew J. Kelly SQL MVP
"eXavier" <fhns@.centrum.cz> wrote in message
news:ekIyJogWFHA.2740@.TK2MSFTNGP14.phx.gbl...
> Hi all,
> we encountered problem on testing environment that database does not grow
> even though it is configured to grow automatically. There is SQL Server
> 2000
> installed on two Windows Server 2003 in cluster. Application runs on its
> dedicated WinServer using ADO.NET for data manipulation. When data file is
> full (or it remains only small free space ~2MB), all application's db
> requests fail with timeout. When I manually enlarge data file, application
> starts to work again.
> Is there any known issue regarding to automatical grow of database. Does
> anyone face this problem ?
> eXavier
>
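Andrew's two recommendations (pre-grow the file so autogrow never fires, and replace the 10% default with a small fixed increment) might be applied like this; the database name, file name, and sizes are illustrative only:

```sql
-- Pre-size the data file with plenty of headroom.
ALTER DATABASE MyDb
MODIFY FILE (NAME = 'MyDb_data', SIZE = 5000MB)

-- Safety net: if autogrow does fire, grow by a few MB, not 10%.
ALTER DATABASE MyDb
MODIFY FILE (NAME = 'MyDb_data', FILEGROWTH = 10MB)
```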
>> >> > Does
>> >> > anyone face this problem ?
>> >> >
>> >> > eXavier
>> >> >
>> >> >
>> >>
>> >>
>> >
>> >
>>
>
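Andrew's advice above - pre-size the files so autogrow rarely kicks in, and use a fixed-MB increment rather than the default 10% - can be applied with something like this (database name, file names and sizes are illustrative):

```sql
-- Pre-size the data and log files with room to spare, and use a
-- small fixed growth increment that completes quickly even on a
-- slow disk subsystem.
ALTER DATABASE MyDb MODIFY FILE (NAME = 'MyDb_data', SIZE = 5000MB, FILEGROWTH = 50MB)
ALTER DATABASE MyDb MODIFY FILE (NAME = 'MyDb_log',  SIZE = 1000MB, FILEGROWTH = 50MB)
```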
Saturday, February 25, 2012
Auto-increment PK - Use MS or Grow your own ?
I thought I would throw this out there for some feedback from others.
I'd like to know if you feel using MS auto-increment field is a good
solution these days or should one grow their own ?
Thanks,
Me.|||Definitely use SQL Server's IDENTITY to auto-increment the primary key field.
Then there is no chance of duplicates.|||I can give you a definite Maybe.
I've used both depending on:
* Do you care that numbers may be missing in the sequence if insert
transactions get rolled back?
* Are you making the Primary Key the table's clustered index?
* Do you need to populate the value of that key into other tables within
the same procedure?
<csomberg@.dwr.com> wrote in message
news:1109115558.223984.69950@.z14g2000cwz.googlegro ups.com...
> SQL 2000
> I thought I would throw this out there for some feedback from others.
> I'd like to know if you feel using MS auto-increment field is a good
> solution these days or should one grow their own ?
> Thanks,
> Me.|||This question is bound to start a few of the regulars off on one of
their favorite arguments. Still, to add my tuppence worth, I'd say it's
OK to use it under some circumstances. When and where depends entirely
on the solution you are providing and the data you are working with.
Surrogate keys in my opinion are valid and often much simpler than
a primary key of several columns. Having said that, you do get
instances with 'gaps' in the number sequence so this may not be
something you want. You could go down the route of generating a
surrogate key yourself (your original question), but I would hesitate
if IDENTITY offers you the same advantages as you may introduce more
problems depending on your approach. If possible and where logical, use
a properly defined primary key, but use your head and decide what is
most appropriate.
I know full well that some of the regulars will not agree with me and
that some will. However, it is my opinion and they are entitled to
theirs. I would suggest looking up 'Occam's Razor' for anyone who
disagrees ( http://pespmc1.vub.ac.be/ASC/OCCAM'_RAZOR.html )
Going back to your question, if you 'grow your own' then this seems
reasonable if you are doing something that using IDENTITY doesn't allow
you. If it's the same, then why bother ?
Ryan
csomberg@.dwr.com wrote:
> SQL 2000
> I thought I would throw this out there for some feedback from others.
> I'd like to know if you feel using MS auto-increment field is a good
> solution these days or should one grow their own ?
> Thanks,
> Me.|||If you want an artificial key then use the feature provided - it's the
most efficient method. The harder question is, do you really want an
artificial key?
--
David Portas
SQL Server MVP
--|||(csomberg@.dwr.com) writes:
> I thought I would throw this out there for some feedback from others.
> I'd like to know if you feel using MS auto-increment field is a good
> solution these days or should one grow their own ?
Depends. If you need consecutive numbers, forget about IDENTITY. If you
want high scalability and don't want to have a hot spot on the current
key value, use IDENTITY.
Another situation where IDENTITY is difficult is when you insert many rows
in one table, and then need to know the values for inserts into a child
table.
If none of this applies, it's a toss-up. IDENTITY is somewhat simpler to
use, but there are some gotchas in odd situations. Rolling your own
is simple as well.
A general remark is that whatever method you use, don't use it for
every table. For "top" concepts like customers, orders, products it
may be inevitable. But for derived concepts that refers to other
concepts, there is rarely any need for artificial keys.
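For reference, the two approaches Erland compares look roughly like this (table and column names are made up):

```sql
-- Using IDENTITY: the server assigns the value at insert time.
CREATE TABLE Orders (
    OrderID    int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    CustomerID int NOT NULL
)
INSERT Orders (CustomerID) VALUES (42)
SELECT SCOPE_IDENTITY()   -- the value just assigned in this scope

-- Rolling your own: a one-row-per-sequence counter table, read and
-- incremented in a single atomic statement.
CREATE TABLE Counters (Name sysname NOT NULL PRIMARY KEY, NextVal int NOT NULL)
INSERT Counters (Name, NextVal) VALUES ('Orders', 1)
DECLARE @id int
UPDATE Counters SET @id = NextVal, NextVal = NextVal + 1 WHERE Name = 'Orders'
```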
--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||On 23 Feb 2005 01:16:27 -0800, "Ryan" <ryanofford@.hotmail.com> wrote:
>This question is bound to start a few of the regulars off on one of
>their favorite arguements. Still to add my tuppence worth, I'd say it's
Don't you mean "tupple's worth"? <g>|||On 22 Feb 2005 15:39:18 -0800, csomberg@.dwr.com wrote:
>SQL 2000
>I thought I would throw this out there for some feedback from others.
>I'd like to know if you feel using MS auto-increment field is a good
>solution these days or should one grow their own ?
>Thanks,
>Me.
I often use a mix.
As has been pointed out here, if you need to ensure there are no gaps, IDENTITY is
not your friend. To me, however, the kind of numbering that needs no gaps
should be a logical key, not a physical/surrogate key, so that's no issue.
The cases where I have found IDENTITY to be limiting are as follows.
1. I want to be able to merge data sets from 2 or more separate databases
without using a proprietary replication system. In this case, I like to use
some kind of variation on the GUID. Using the global key as the primary key
keeps merge processes simpler than if the global key is used in addition to a
local primary key.
2. I need to generate sequences of master-detail sets in stored procedures.
In case #2, the problem is that we want to try to use set operations, not
cursors, and there's no good way to figure out the IDs of the master records
created in one query, so you can use them to create matching details in a
subsequent query. On the other hand, if you have a shared counter, you obtain
a count of the master records to be added, get the current counter value, and
update the counter, adding the master-count to its value. Reading and
updating the counter can happen in its own short transaction to reduce blocking
overhead, since we should not care if we create a gap by reserving keys but
never ending up adding the records with those keys.|||"Steve Jorgensen" <nospam@.nospam.nospam> wrote in message
news:1f4q11d4hnd9sa440mst33f4chnis0ffma@.4ax.com...
> On 23 Feb 2005 01:16:27 -0800, "Ryan" <ryanofford@.hotmail.com> wrote:
>>This question is bound to start a few of the regulars off on one of
>>their favorite arguements. Still to add my tuppence worth, I'd say it's
> Don't you mean "tupple's worth"? <g>
From the definition of the DIF file format:
It uses the terms vector and TUPLE.
You may generally interpret vector as column and tuple as row.
From the Mary Poppins Sound Track:
With TUPPENCE for paper and strings
You can have your own set of wings
With your feet on the ground
You're a bird in a flight
With your fist holding tight
To the string of your kite
I hope I've cleared that up,
Oh my ... It seems I'm out of Pinot.|||On Thu, 24 Feb 2005 05:02:55 GMT, "David Rawheiser" <rawhide58@.hotmail.com>
wrote:
>"Steve Jorgensen" <nospam@.nospam.nospam> wrote in message
>news:1f4q11d4hnd9sa440mst33f4chnis0ffma@.4ax.com...
>> On 23 Feb 2005 01:16:27 -0800, "Ryan" <ryanofford@.hotmail.com> wrote:
>>
>>>This question is bound to start a few of the regulars off on one of
>>>their favorite arguements. Still to add my tuppence worth, I'd say it's
>>
>> Don't you mean "tupple's worth"? <g>
>From the definition of the DIF file format:
> It uses the terms vector and TUPLE.
> You may generally interpret vector as column and tuple as row.
>From the Mary Poppins Sound Track:
> With TUPPENCE for paper and strings
> You can have your own set of wings
> With your feet on the ground
> You're a bird in a flight
> With your fist holding tight
> To the string of your kite
>I hope I've cleared that up,
>Oh my ... It seems I'm out of Pinot.
LOL|||Oh no ! I'm going to be humming that all day now ! :-)
> From the Mary Poppins Sound Track:
> With TUPPENCE for paper and strings
> You can have your own set of wings
> With your feet on the ground
> You're a bird in a flight
> With your fist holding tight
> To the string of your kite|||>> I'd like to know if you feel using MS auto-increment field is a good
solution these days or should one grow their own ? <<
What did you want to use it for?
It is fine for adding a reference number to a cursor, which is a
sequential file structure. But you would never use it inside the
schema for anything. That would mean that your data model is all screwed up
and has the PHYSICAL state of the machine mixed with the LOGICAL data
model.
Newbies often use IDENTITY and the like to substitute for the pointer
chains they had in IDMS, IMS, TOTAL and other pre-RDBMS databases.
If they are really screwed up, they use them for keys and do not have
natural keys. There is no way to verify or validate the data, and the
schema loses data integrity.
If I told you that the best key is the 17 digit Hebrew number which God
assigned to all things in creation, you would think I was nuts. There
is no magic universal key; you actually have to do some work when you
design a schema.
We have a lot of problems with terminology on this one, so let me get
that out of the way.
There is no such thing as a "universal, one-size-fits-all" key. Just
as no two sets of entities are the same, the attributes that make them
unique have to be found in the reality of the data. Here is my
classification of types of keys:
                          | natural | artificial | exposed | surrogate
==========================|=========|============|=========|==========
Constructed from reality  |         |            |         |
of the data model         |    Y    |     N      |    N    |    Y
verifiable in reality     |    Y    |     N      |    N    |    N
verifiable in itself      |    Y    |     Y      |    N    |    N
visible to the user       |    Y    |     Y      |    Y    |    N
1) A natural key is a subset of attributes which occur in a table and
act as a unique identifier. They are seen by the user. You can go to
the external reality and verify them. You would also like to have some
validation rule. Example: UPC codes on consumer goods (read the
package barcode) and validate them with a check digit or a
manufacturer's website, geographical co-ordinates (get a GPS).
2) An artificial key is an extra attribute added to the table which is
seen by the user. It does not exist in the external reality, but can
be verified for syntax or check digits inside itself.
Example: the open codes in the UPC scheme which a user can assign to
his own stuff. The check digits still work, but you have to verify
them inside your own enterprise.
If you have to construct a key yourself, it takes time to design them,
to invent a validation rule, etc.
3) An "exposed physical locator" is not based on attributes in the data
model and is exposed to the user. There is no way to predict it or verify
it. The system obtains a value through some physical process in the
storage hardware totally unrelated to the logical data model. Example:
IDENTITY columns and other proprietary, non-relational auto-numbering
devices.
Technically, these are not really keys at all, since they are attributes
of the PHYSICAL storage and are not even part of the LOGICAL data
model. But they are handy for lazy, non-RDBMS programmers who don't
want to research or think! This is the worst way to program in SQL.
4) A surrogate key is system generated to replace the actual key behind
the covers where the user never sees it. It is based on attributes in
the table. Example: Teradata hashing algorithms, pointer chains.
The fact that you can never see it or use it for DELETE and UPDATE or
create it for INSERT is vital. When users can get to them, they will
screw up the data integrity by getting the real keys and these physical
locators out of synch. The system must maintain them.
** Notice that people get "exposed physical locator" and surrogate
mixed up; they are totally different concepts. **
An appeal to authority, with a quote from Dr. Codd: "..Database users
may cause the system to generate or delete a surrogate, but they have
no control over its value, nor is its value ever displayed to them
..."(Dr. Codd in ACM TODS, pp 409-410) and Codd, E. (1979), Extending
the database relational model to capture more meaning. ACM
Transactions on Database Systems, 4(4). pp. 397-434.
This means that a surrogate ought to act like an index; created by the
user, managed by the system and NEVER seen by a user. That means never
used in queries, DRI or anything else that a user does.
Codd also wrote the following:
"There are three difficulties in employing user-controlled keys as
permanent surrogates for entities.
(1) The actual values of user-controlled keys are determined by users
and must therefore be subject to change by them (e.g. if two companies
merge, the two employee databases might be combined with the result
that some or all of the serial numbers might be changed.).
(2) Two relations may have user-controlled keys defined on distinct
domains (e.g. one uses social security, while the other uses employee
serial numbers) and yet the entities denoted are the same.
(3) It may be necessary to carry information about an entity either
before it has been assigned a user-controlled key value or after it has
ceased to have one (e.g. an applicant for a job and a retiree).
These difficulties have the important consequence that an equi-join on
common key values may not yield the same result as a join on common
entities. A solution - proposed in part [4] and more fully in [14] -
is to introduce entity domains which contain system-assigned
surrogates. Database users may cause the system to generate or delete
a surrogate, but they have no control over its value, nor is its value
ever displayed to them...." (Codd in ACM TODS, pp 409-410).
References
Codd, E. (1979), Extending the database relational model to capture
more meaning. ACM Transactions on Database Systems, 4(4). pp. 397-434
The steps for finding a key are
1) Look for an industry standard and the trusted source that maintains
it.
2) Look for a natural key in the attributes. Example: (longitude,
latitude) makes a good key for a geographical location.
3) If you must design a new identifier, plan it carefully -- especially
if people will see and use it. You have to be able to validate it in
application programs, so you need a regular expression, other syntax
rule and/or check digits. You have to be able to verify it in the
reality of the model or with a trusted source.
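As an illustration of step 3, a designed identifier with a syntax rule that can be validated both in application programs and in the schema (the 12-digit pattern is made up for the example):

```sql
-- The CHECK constraint is the syntax rule; a real UPC would also be
-- verified against a trusted source (check digit, manufacturer site).
CREATE TABLE Products (
    upc char(12) NOT NULL PRIMARY KEY
        CHECK (upc LIKE '[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'),
    product_name varchar(50) NOT NULL
)
```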
AutoGrowth - Which data file does SQL prefer to auto grow & why?
If we have multiple datafiles in a filegroup, let's say 8:
How does SQL distribute data? (Will all data for a single table go in
one file, or will it be spread?)
When there is need for space, are all files auto grown uniformly, OR just one
file?
If just one file, will the same file auto grow next time too?
Sorry to put so many questions - just trying to understand the auto grow
feature of SQL.
Advance thanks for sharing the knowledge...
Ranga|||Hi
Check out
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/architec/8_ar_da2_9sab.asp
and
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/createdb/cm_8_des_02_2ak3.asp
Filegroups use a proportional fill strategy across all the files within each
filegroup. If there are multiple files in a filegroup, they do not autogrow
until all the files are full. The file to grow is then chosen using a
round-robin algorithm.
John
"Ranga" wrote:
> Win2k3, SQL 2000, SP4
> If we have multiple datafiles in a filegroup, lets say 8,
> How does SQL distribute data ?(Will all data for a single table will go in
> one file or will it be spread)
> When there is need for space are all files auto grown uniformely OR just one
> file ?
> If just one file, will the same file auto grow next time too ?
> Sorry to put too many questions ? Just trying to understand the Auto grow
> feature of SQL.
> Advance thanks for sharing the knowledge...
> Ranga
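A multi-file filegroup like the one Ranga describes might be created as follows (names, paths and sizes are placeholders). SQL Server then fills all the files proportionally and, once they are full, grows one of them at a time, round-robin:

```sql
CREATE DATABASE MyDb
ON PRIMARY
    (NAME = 'MyDb_data1', FILENAME = 'E:\Data\MyDb1.mdf', SIZE = 500MB, FILEGROWTH = 50MB),
    (NAME = 'MyDb_data2', FILENAME = 'F:\Data\MyDb2.ndf', SIZE = 500MB, FILEGROWTH = 50MB)
LOG ON
    (NAME = 'MyDb_log',   FILENAME = 'G:\Log\MyDb.ldf',   SIZE = 200MB, FILEGROWTH = 50MB)
```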
Friday, February 24, 2012
AutoGrow DB option
Hi all, what I really want to know is what actually tells SQL Server to grow
the database file. I found a post that says something about x MB size, but I
can find no other information. So my question is: what event prompts SQL to
autogrow its database file?
Thanks Andrew|||Whenever a connection executes a query for which space needs to be allocated (read about
extents in Books Online) and there is no free space left in the database file (or files, if that
object is on a filegroup made up of several files).
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"andrew bourne" <andrewbourne@.vardells.com> wrote in message
news:uYmvtb3ZFHA.3320@.TK2MSFTNGP12.phx.gbl...
> Hi all, what i really want to know is what actualy tells sql server to grow the database, file, i
> found a post that says something about xmb size, what can find no other impormation, So my
> question is what event promts sql to autogrow its database file,
> Thanks Andrew
>
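One way to see how close each file is to triggering an autogrow is to compare allocated size with used space. A sketch using the SQL 2000-era sysfiles view (run it in the database you want to check):

```sql
-- size is in 8 KB pages, so divide by 128 to get MB;
-- FILEPROPERTY returns the number of used pages.
SELECT  name,
        size / 128                                     AS allocated_mb,
        FILEPROPERTY(name, 'SpaceUsed') / 128          AS used_mb,
        (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS free_mb
FROM    sysfiles
```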
Friday, February 10, 2012
Auto Grow/Shrink
I've got a half terabyte 2000 server with 100+ databases on it. The larger
databases I've taken off of auto shrink. But I am wondering if the 80-90
smaller databases are causing some performance issues with auto shrinks
(which I left on just for ease/convenience).
To check the server out, I turned Profiler on and captured various Auto
Shrink/Grow events. I ran this overnight and into the production hours today
and I got very few of these, so I'm wondering if I did it right.
I simply selected from my trace where event class in 92-95. Is that the
right way to do it? (I want to make sure that I'm not missing these.)|||CLM,
Take a look at:
http://www.aspfaq.com/show.asp?id=2471
and
http://www.karaszi.com/SQLServer/info_dont_shrink.asp
HTH
Jerry
"CLM" <CLM@.discussions.microsoft.com> wrote in message
news:A69B6508-CE0A-4AA7-8575-13D593CE1353@.microsoft.com...
> I've got a half terabyte 2000 server with 100 + databases on it. The
> larger
> databases I've taken off of auto shrink. But I am wondering if the 80-90
> smaller databases are causing some performance issues with auto shrinks
> (which I left on just for ease/convenience).
> To check the server out, I turned Profiler on and captured various Auth
> Shrink/Grow events. I ran this overnight and into the production hours
> today
> and I got very few of these and so I'm wondering if I did it right.
> I simply selected from my trace where event class in 92-95. Is that the
> right way to do it' (I want to make sure that I'm not missing these.)|||Thx, but these I know. I have an unusual configuration that imo requires
autoshrink (or a lot more disk).
Did I do the querying correctly?
"Jerry Spivey" wrote:
> CLM,
> Take a look at:
> http://www.aspfaq.com/show.asp?id=2471
> and
> http://www.karaszi.com/SQLServer/info_dont_shrink.asp
> HTH
> Jerry
> "CLM" <CLM@.discussions.microsoft.com> wrote in message
> news:A69B6508-CE0A-4AA7-8575-13D593CE1353@.microsoft.com...
> > I've got a half terabyte 2000 server with 100 + databases on it. The
> > larger
> > databases I've taken off of auto shrink. But I am wondering if the 80-90
> > smaller databases are causing some performance issues with auto shrinks
> > (which I left on just for ease/convenience).
> > To check the server out, I turned Profiler on and captured various Auth
> > Shrink/Grow events. I ran this overnight and into the production hours
> > today
> > and I got very few of these and so I'm wondering if I did it right.
> > I simply selected from my trace where event class in 92-95. Is that the
> > right way to do it' (I want to make sure that I'm not missing these.)
>
>|||CLM,
Yes the 'Data File Auto Grow', 'Data File Auto Shrink', 'Log File Auto
Grow', 'Log File Auto Shrink' event classes are the correct ones to use.
However, I really wouldn't recommend having 80-90 databases auto growing and
auto shrinking as this is likely to cause fragmentation and reduce
performance.
HTH
Jerry
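If the trace was written to a file, the check CLM describes can be run like this (the trace path is a placeholder). Event classes 92-95 are the four auto grow/shrink events Jerry lists:

```sql
SELECT  EventClass, DatabaseName, StartTime, Duration
FROM    ::fn_trace_gettable('D:\Traces\autogrow.trc', DEFAULT)
WHERE   EventClass IN (92, 93, 94, 95)   -- data/log file auto grow/shrink
ORDER BY StartTime
```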
"CLM" <CLM@.discussions.microsoft.com> wrote in message
news:8DD5AE5E-2B7D-4891-A21A-31276BBF47F5@.microsoft.com...
> Thx, but these I know. I have an unusual configuration that imo requires
> autoshrink (or a lot more disk).
> Did I do the querying correctly?
> "Jerry Spivey" wrote:
>> CLM,
>> Take a look at:
>> http://www.aspfaq.com/show.asp?id=2471
>> and
>> http://www.karaszi.com/SQLServer/info_dont_shrink.asp
>> HTH
>> Jerry
>> "CLM" <CLM@.discussions.microsoft.com> wrote in message
>> news:A69B6508-CE0A-4AA7-8575-13D593CE1353@.microsoft.com...
>> > I've got a half terabyte 2000 server with 100 + databases on it. The
>> > larger
>> > databases I've taken off of auto shrink. But I am wondering if the
>> > 80-90
>> > smaller databases are causing some performance issues with auto shrinks
>> > (which I left on just for ease/convenience).
>> > To check the server out, I turned Profiler on and captured various Auth
>> > Shrink/Grow events. I ran this overnight and into the production hours
>> > today
>> > and I got very few of these and so I'm wondering if I did it right.
>> > I simply selected from my trace where event class in 92-95. Is that
>> > the
>> > right way to do it' (I want to make sure that I'm not missing these.)
>>|||I understand. But the problem I have - and I inherited this by the way - is
that most of those databases have one table with a clustered and a
nonclustered index. When I reindex them, the database expands to a little
over twice its size. And I just don't have the space for that to stay that
way.
I've got index defrags on the larger databases, but I can't have that on
every database.
If there's an easier/better way, I'm open to it of course...
"Jerry Spivey" wrote:
> CLM,
> Yes the 'Data File Auto Grow', 'Data File Auto Shrink', 'Log File Auto
> Grow', 'Log File Auto Shrink' event classes are the correct ones to use.
> However, I really wouldn't recommend having 80-90 databases auto growing and
> auto shrinking as this is likely to cause fragmentation and reduce
> performance.
> HTH
> Jerry
> "CLM" <CLM@.discussions.microsoft.com> wrote in message
> news:8DD5AE5E-2B7D-4891-A21A-31276BBF47F5@.microsoft.com...
> > Thx, but these I know. I have an unusual configuration that imo requires
> > autoshrink (or a lot more disk).
> > Did I do the querying correctly?
> >
> > "Jerry Spivey" wrote:
> >
> >> CLM,
> >>
> >> Take a look at:
> >>
> >> http://www.aspfaq.com/show.asp?id=2471
> >> and
> >> http://www.karaszi.com/SQLServer/info_dont_shrink.asp
> >>
> >> HTH
> >>
> >> Jerry
> >> "CLM" <CLM@.discussions.microsoft.com> wrote in message
> >> news:A69B6508-CE0A-4AA7-8575-13D593CE1353@.microsoft.com...
> >> > I've got a half terabyte 2000 server with 100 + databases on it. The
> >> > larger
> >> > databases I've taken off of auto shrink. But I am wondering if the
> >> > 80-90
> >> > smaller databases are causing some performance issues with auto shrinks
> >> > (which I left on just for ease/convenience).
> >> > To check the server out, I turned Profiler on and captured various Auth
> >> > Shrink/Grow events. I ran this overnight and into the production hours
> >> > today
> >> > and I got very few of these and so I'm wondering if I did it right.
> >> > I simply selected from my trace where event class in 92-95. Is that
> >> > the
> >> > right way to do it' (I want to make sure that I'm not missing these.)
> >>
> >>
> >>
>
>|||Hmmm...
1. Extend your array(s) to fit the storage needs of the business
2. Use fewer databases - less space and will also be easier to manage
3. Try using index defrag on the smaller databases
4. Move some of these databases to another server with more space
HTH
Jerry
"CLM" <CLM@.discussions.microsoft.com> wrote in message
news:DB60B945-AB19-488C-8169-ADD24B00FB41@.microsoft.com...
>I understand. But the problem I have - and I inherited this by the way -
>is
> that most of those databases have one table with a clustered and a
> nonclustered index. When I reindex them, the database expands to a little
> over twice its size. And I just don't have the space for that to stay
> that
> way.
> I've got index defrags on the larger databases, but I can't have that on
> every database.
> If there's an easier/better way, I'm open to it of course...
>
> "Jerry Spivey" wrote:
>> CLM,
>> Yes the 'Data File Auto Grow', 'Data File Auto Shrink', 'Log File Auto
>> Grow', 'Log File Auto Shrink' event classes are the correct ones to use.
>> However, I really wouldn't recommend having 80-90 databases auto growing
>> and
>> auto shrinking as this is likely to cause fragmentation and reduce
>> performance.
>> HTH
>> Jerry
>> "CLM" <CLM@.discussions.microsoft.com> wrote in message
>> news:8DD5AE5E-2B7D-4891-A21A-31276BBF47F5@.microsoft.com...
>> > Thx, but these I know. I have an unusual configuration that imo
>> > requires
>> > autoshrink (or a lot more disk).
>> > Did I do the querying correctly?
>> >
>> > "Jerry Spivey" wrote:
>> >
>> >> CLM,
>> >>
>> >> Take a look at:
>> >>
>> >> http://www.aspfaq.com/show.asp?id=2471
>> >> and
>> >> http://www.karaszi.com/SQLServer/info_dont_shrink.asp
>> >>
>> >> HTH
>> >>
>> >> Jerry
>> >> "CLM" <CLM@.discussions.microsoft.com> wrote in message
>> >> news:A69B6508-CE0A-4AA7-8575-13D593CE1353@.microsoft.com...
>> >> > I've got a half terabyte 2000 server with 100 + databases on it.
>> >> > The
>> >> > larger
>> >> > databases I've taken off of auto shrink. But I am wondering if the
>> >> > 80-90
>> >> > smaller databases are causing some performance issues with auto
>> >> > shrinks
>> >> > (which I left on just for ease/convenience).
>> >> > To check the server out, I turned Profiler on and captured various
>> >> > Auth
>> >> > Shrink/Grow events. I ran this overnight and into the production
>> >> > hours
>> >> > today
>> >> > and I got very few of these and so I'm wondering if I did it right.
>> >> > I simply selected from my trace where event class in 92-95. Is that
>> >> > the
>> >> > right way to do it' (I want to make sure that I'm not missing
>> >> > these.)
>> >>
>> >>
>> >>
>>|||Why do you reindex? The following shrink will largely undo the defragmentation that the reindex
performed...
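For the smaller databases, an in-place defragmentation avoids the grow-then-shrink cycle entirely. A sketch using SQL 2000's online defrag (database, table and index names are placeholders):

```sql
-- DBCC INDEXDEFRAG defragments in place, in many small transactions,
-- without growing the file the way DBCC DBREINDEX can.
DBCC INDEXDEFRAG (MyDb, 'dbo.MyTable', 'PK_MyTable')
-- Check scan density afterwards:
DBCC SHOWCONTIG ('dbo.MyTable') WITH FAST
```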
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"CLM" <CLM@.discussions.microsoft.com> wrote in message
news:DB60B945-AB19-488C-8169-ADD24B00FB41@.microsoft.com...
>I understand. But the problem I have - and I inherited this by the way - is
> that most of those databases have one table with a clustered and a
> nonclustered index. When I reindex them, the database expands to a little
> over twice its size. And I just don't have the space for that to stay that
> way.
> I've got index defrags on the larger databases, but I can't have that on
> every database.
> If there's an easier/better way, I'm open to it of course...
>
> "Jerry Spivey" wrote:
>> CLM,
>> Yes the 'Data File Auto Grow', 'Data File Auto Shrink', 'Log File Auto
>> Grow', 'Log File Auto Shrink' event classes are the correct ones to use.
>> However, I really wouldn't recommend having 80-90 databases auto growing and
>> auto shrinking as this is likely to cause fragmentation and reduce
>> performance.
>> HTH
>> Jerry
>> "CLM" <CLM@.discussions.microsoft.com> wrote in message
>> news:8DD5AE5E-2B7D-4891-A21A-31276BBF47F5@.microsoft.com...
>> > Thx, but these I know. I have an unusual configuration that imo requires
>> > autoshrink (or a lot more disk).
>> > Did I do the querying correctly?
>> >
>> > "Jerry Spivey" wrote:
>> >
>> >> CLM,
>> >>
>> >> Take a look at:
>> >>
>> >> http://www.aspfaq.com/show.asp?id=2471
>> >> and
>> >> http://www.karaszi.com/SQLServer/info_dont_shrink.asp
>> >>
>> >> HTH
>> >>
>> >> Jerry
>> >> "CLM" <CLM@.discussions.microsoft.com> wrote in message
>> >> news:A69B6508-CE0A-4AA7-8575-13D593CE1353@.microsoft.com...
>> >> > I've got a half-terabyte SQL Server 2000 box with 100+ databases on
>> >> > it. The larger databases I've taken off of auto shrink. But I am
>> >> > wondering if the 80-90 smaller databases are causing some performance
>> >> > issues with auto shrinks (which I left on just for ease/convenience).
>> >> > To check the server out, I turned Profiler on and captured various Auto
>> >> > Shrink/Grow events. I ran this overnight and into the production hours
>> >> > today, and I got very few of these, so I'm wondering if I did it right.
>> >> > I simply selected from my trace where event class is in 92-95. Is that
>> >> > the right way to do it? (I want to make sure that I'm not missing
>> >> > these.)
>> >>
>> >>
>> >>
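As an aside on the trace question quoted above: on SQL Server 2000 a saved Profiler trace file can be read back with ::fn_trace_gettable and filtered to just the four auto-grow/auto-shrink event classes. A minimal sketch, assuming a trace was saved to a file (the path below is hypothetical):

```sql
-- Sketch only: load a saved Profiler trace and keep the four
-- auto-grow/auto-shrink event classes (92-95 on SQL Server 2000).
-- The trace file path is hypothetical; substitute your own.
SELECT StartTime,
       DatabaseID,
       EventClass,   -- 92/93 = Data/Log File Auto Grow
                     -- 94/95 = Data/Log File Auto Shrink
       Duration
FROM ::fn_trace_gettable('C:\traces\autogrow.trc', DEFAULT)
WHERE EventClass BETWEEN 92 AND 95
ORDER BY StartTime
```

If a query like this returns few rows over a full day, the events genuinely aren't firing often; the filter itself matches what was described in the thread.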
|||
Oh, I feel like an idiot! I was not thinking about it clearly. I tested
what you said and of course it came to exactly what you said - the scan
density went back to virtually what it was before the reindex!! I appreciate
so much your taking time to answer stupid questions...
"Tibor Karaszi" wrote:
> Why do you reindex? The following shrink will largely undo the defragmentation that the reindex
> performed...
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
|||
You're not the only one who shrinks without realizing the consequences. I'm glad you figured it out.
:-)
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
"CLM" <CLM@.discussions.microsoft.com> wrote in message
news:FF383A38-6E62-4B98-A621-3333A5117696@.microsoft.com...
> Oh, I feel like an idiot! I was not thinking about it clearly. I tested
> what you said and of course it came to exactly what you said - the scan
> density went back to virtually what it was before the reindex!! I appreciate
> so much your taking time to answer stupid questions...
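The effect described in this thread - a shrink undoing a reindex - is easy to see for yourself by checking scan density before and after. A sketch along these lines, where MyDb and MyTable are hypothetical names for your own database and table:

```sql
-- Sketch: watch a shrink undo a reindex on SQL Server 2000.
-- MyDb and MyTable are placeholders for your own objects.
DBCC DBREINDEX ('MyTable')       -- rebuild indexes; this grows the data file
DBCC SHOWCONTIG ('MyTable')      -- note Scan Density, ideally close to 100%
DBCC SHRINKDATABASE (MyDb)       -- shrink moves pages toward the front of the file
DBCC SHOWCONTIG ('MyTable')      -- Scan Density typically drops back down
```

This is why reindex-then-shrink (or auto shrink on a reindexed database) is largely self-defeating: the shrink relocates pages with no regard for index order, reintroducing the fragmentation the reindex just removed.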