My data warehouse is now four years old and holds about
600 GB of data.
We have had only inserts, no updates, during this period. The PKs
are all clustered indexes and in chronological order.
Some of the tables are as large as 100 GB.
I wanted to check whether the PKs are in good shape, so I ran
DBCC SHOWCONTIG against many of the large tables; 80 % showed a
bad scan density, such as 50 % or 30 %.
I may need to reorganise the PKs.
In BOL it says that comparing the values of Extent Switches and
Extents Scanned is a way to tell how fragmented an index is, but
that this method does not work if the index spans multiple files.
I presume all VLDBs use multiple files for one table in order to
improve physical disk I/O.
My questions: how can I check the fragmentation rate of my
large tables, which span multiple files (4 to 6 files each)?
And what is the best way to reorganise clustered indexes that
are PKs? I have to drop all the FKs in order to reorganise a PK,
don't I?
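To make the first question concrete, this is the kind of check I have in mind (the table name is just an example). The Logical Scan Fragmentation figure in the TABLERESULTS output counts out-of-order pages in the page chain, rather than extent switches, so it should not suffer from the multi-file limitation:

```sql
-- Report fragmentation for every index on one large table.
-- The LogicalFragmentation column of the TABLERESULTS output is
-- the percentage of out-of-order pages in the page chain.
DBCC SHOWCONTIG ('dbo.FactSales')
    WITH TABLERESULTS, ALL_INDEXES, NO_INFOMSGS
```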
I hope to hear your ideas!|||When you say you presume your db spans multiple files, does the
database use more than one file other than the MDF? DBCC DBREINDEX on the
clustered key should reindex your data tables and automatically rebuild your
other nonclustered indexes.
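For example (the table name and fill factor here are just placeholders, adjust them for your schema):

```sql
-- Rebuild every index on the table: an empty index name ('')
-- means all indexes, and 90 is the fill factor to apply.
DBCC DBREINDEX ('MyDW.dbo.FactSales', '', 90)
```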
Some links:
http://www.microsoft.com/technet/co...ql/sql0326.mspx
http://www.microsoft.com/technet/co...ql/sql1014.mspx
http://www.sqlservercentral.com/scr...butions/721.asp
Ray Higdon MCSE, MCDBA, CCNA
--
"didi" <anonymous@.discussions.microsoft.com> wrote in message
news:140c01c40b37$dd2c95d0$3501280a@.phx.gbl...
|||The MDF file is used only for the system tables in all of my
databases (especially when dealing with a VLDB).
The database is over 600 GB, and each table can be nearly 100 GB.
Would DBCC DBREINDEX be a good solution? It will copy the whole
table to a different location without asking!
|||Did those links help?
Ray Higdon MCSE, MCDBA, CCNA
--
"didi" <anonymous@.discussions.microsoft.com> wrote in message
news:148601c40b4b$a0c711b0$3a01280a@.phx.gbl...
|||The links were very good, thank you very much!
Especially the Index Defrag Best Practices article.
So, according to the article, I should measure the fragmentation
level by logical scan fragmentation.
Still, I am not very sure about using DBCC INDEXDEFRAG.
When a table is 100 GB and I run this operation, how much log
space should be allocated? 200 GB? 300 GB?
Usually copying data consumes about 2.5 times the data size in
the log before the data is inserted.
Would DBCC INDEXDEFRAG be the best way in a VLDB environment?
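For reference, what I would be running is something like this (the table name is hypothetical):

```sql
-- Defragment the clustered index (index ID 1) of one table;
-- the first argument 0 means the current database.
DBCC INDEXDEFRAG (0, 'dbo.FactSales', 1)
```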
|||Depending on the uptime your DB needs, you can write scripts to defrag in
chunks. Here is an example using DBCC DBREINDEX (you can alter it to use
INDEXDEFRAG) that backs up the log when needed. I think I got this from MVP
Andrew Kelly, but I am not 100 % sure:
-- Reindex the tables --
SET NOCOUNT ON
DECLARE @TableName VARCHAR(100), @Counter INT
SET @Counter = 1
DECLARE curTables CURSOR STATIC LOCAL
FOR
SELECT Table_Name
FROM Information_Schema.Tables
WHERE Table_Type = 'BASE TABLE'
OPEN curTables
FETCH NEXT FROM curTables INTO @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @TableName = RTRIM(@TableName)
    SELECT 'Reindexing ' + @TableName
    DBCC DBREINDEX (@TableName)
    SET @Counter = @Counter + 1
    -- Back up the log every so often so it does not fill up
    IF @Counter % 10 = 0
    BEGIN
        BACKUP LOG [Presents] TO [DD_Presents_Log]
            WITH NOINIT, NOUNLOAD, NAME = N'Presents Log Backup',
                 NOSKIP, STATS = 10, NOFORMAT
    END
    FETCH NEXT FROM curTables INTO @TableName
END
CLOSE curTables
DEALLOCATE curTables
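To switch the script to INDEXDEFRAG, the DBCC DBREINDEX line would become something like this (assuming you only want the clustered index, which is index ID 1):

```sql
-- Online alternative: defragment only the clustered index
-- (index ID 1) in the current database (0).
DBCC INDEXDEFRAG (0, @TableName, 1)
```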
Ray Higdon MCSE, MCDBA, CCNA
--
"didi" <anonymous@.discussions.microsoft.com> wrote in message
news:159401c40c21$2fc43b60$3a01280a@.phx.gbl...