Tuesday, March 20, 2012

quicker way to create indexes

Hi,

I have a new job. It needs to drop and re-create (by insert) a table
every night. The table contains approximately 3,000,000 (and growing)
records. The insert is fine, runs in 2 minutes. The problem is that
when I create the indexes on the table, it is taking 15-20 minutes.
There is one clustered index and 11 non-clustered. This is a lookup
table that takes many different parameters, so it really needs the
indexes for the user interface to run efficiently. However, the
database owners aren't keen on a job taking 20 minutes to run every
night.
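
(For reference, a job like the one described might look roughly like the sketch below; all object and column names are invented, and the syntax shown is SQL Server 2005.)

-- Hypothetical outline of the current nightly rebuild (all names invented)
IF OBJECT_ID('dbo.FileLookup', 'U') IS NOT NULL
    DROP TABLE dbo.FileLookup;

CREATE TABLE dbo.FileLookup
(
    FileId       int          NOT NULL,
    ClientName   varchar(100) NULL,
    MatterNumber varchar(20)  NULL,
    ClosedDate   datetime     NULL
);

-- The fast part: roughly 3,000,000 rows loaded in about two minutes
INSERT INTO dbo.FileLookup (FileId, ClientName, MatterNumber, ClosedDate)
SELECT FileId, ClientName, MatterNumber, ClosedDate
FROM   dbo.HugeArchive;

-- The slow part: one clustered and eleven non-clustered indexes (15-20 minutes)
CREATE CLUSTERED INDEX IX_FileLookup_FileId
    ON dbo.FileLookup (FileId);

CREATE NONCLUSTERED INDEX IX_FileLookup_ClientName
    ON dbo.FileLookup (ClientName)
    WITH (SORT_IN_TEMPDB = ON);   -- sorting in tempdb sometimes shortens the build

-- ...plus ten more CREATE NONCLUSTERED INDEX statements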

Any ideas?

|||

shelleybobelly (shelleybobelly@.yahoo.com) writes:

Quote:

Originally Posted by

I have a new job. It needs to drop and re-create (by insert) a table
every night. The table contains approximately 3,000,000 (and growing)
records. The insert is fine, runs in 2 minutes. The problem is that
when I create the indexes on the table, it is taking 15-20 minutes.
There is one clustered index and 11 non-clustered. This is a lookup
table that takes many different parameters, so it really needs the
indexes for the user interface to run efficiently. However, the
database owners aren't keen on a job taking 20 minutes to run every
night.


Without knowing much about the data, it's difficult to tell. But I find it
difficult to believe that data changes that much in a lookup table. Then
again, I would not expect a look-up table to have three million rows.
Anyway, rather than dropping and recreating, maybe it's more effective
to load into a staging table, and then update changed rows, insert new
ones, and delete old ones.
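
(In T-SQL, that staging-table pattern might look roughly like the sketch below; the table, column, and key names are invented, and NULL-able columns would need extra handling in the comparisons.)

-- Assumes a staging table with the same layout already exists
TRUNCATE TABLE dbo.FileLookup_Staging;

INSERT INTO dbo.FileLookup_Staging (FileId, ClientName, MatterNumber, ClosedDate)
SELECT FileId, ClientName, MatterNumber, ClosedDate
FROM   dbo.HugeArchive;

-- Update rows that changed (NULL-able columns would need ISNULL or similar here)
UPDATE t
SET    t.ClientName   = s.ClientName,
       t.MatterNumber = s.MatterNumber,
       t.ClosedDate   = s.ClosedDate
FROM   dbo.FileLookup AS t
JOIN   dbo.FileLookup_Staging AS s ON s.FileId = t.FileId
WHERE  t.ClientName   <> s.ClientName
   OR  t.MatterNumber <> s.MatterNumber
   OR  t.ClosedDate   <> s.ClosedDate;

-- Insert rows that are new
INSERT INTO dbo.FileLookup (FileId, ClientName, MatterNumber, ClosedDate)
SELECT s.FileId, s.ClientName, s.MatterNumber, s.ClosedDate
FROM   dbo.FileLookup_Staging AS s
WHERE  NOT EXISTS (SELECT 1 FROM dbo.FileLookup AS t WHERE t.FileId = s.FileId);

-- Delete rows that no longer exist in the source
DELETE t
FROM   dbo.FileLookup AS t
WHERE  NOT EXISTS (SELECT 1 FROM dbo.FileLookup_Staging AS s
                   WHERE s.FileId = t.FileId);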

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx

|||

Right, I agree. If I run it the other way (just update the big lookup
table from the separate tables where there were changes), it runs much
quicker, only deleting/adding a thousand or so records a day. The
owners of this DB are doing things differently, but I guess I'll just
have to 'educate' them on this one. Then I'll create another job to
re-index this table about once a week or so to keep it from
fragmenting.
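
(A weekly re-index along those lines could be as simple as the following, again with the invented names from earlier; on SQL Server 2000 the equivalent would be DBCC DBREINDEX.)

-- SQL Server 2005 syntax; on SQL Server 2000 this would be DBCC DBREINDEX
ALTER INDEX ALL ON dbo.FileLookup
REBUILD WITH (SORT_IN_TEMPDB = ON);

-- Or, as a lighter-weight alternative, defragment in place:
-- ALTER INDEX ALL ON dbo.FileLookup REORGANIZE;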

BTW, this is for a Legal department that has to look up files that can
be up to 20 years old, but they don't have all the information. That's
why there are 3 million records in the lookup table. It is a subset of a HUGE
archive table that contains 255 columns, too.

Thanks Erland, keep up your postings.

|||

Not starting over every night would certainly be the best approach,
but anyway...

One thing you might try is to leave the clustered index on the table,
and order the data by the clustering key before loading. It is
possible that the longer time to load with the index in place will be
more than offset by the time saved in creating the clustered index.
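
(With the invented names from the earlier sketches, that approach might look something like this.)

-- Keep the clustered index; clear the data instead of dropping the table
TRUNCATE TABLE dbo.FileLookup;

-- Drop only the non-clustered indexes before the load (one DROP per index)
DROP INDEX IX_FileLookup_ClientName ON dbo.FileLookup;

-- Load in clustering-key order so the rows arrive pre-sorted for the clustered index
INSERT INTO dbo.FileLookup (FileId, ClientName, MatterNumber, ClosedDate)
SELECT FileId, ClientName, MatterNumber, ClosedDate
FROM   dbo.HugeArchive
ORDER BY FileId;

-- Re-create the non-clustered indexes afterwards
CREATE NONCLUSTERED INDEX IX_FileLookup_ClientName
    ON dbo.FileLookup (ClientName)
    WITH (SORT_IN_TEMPDB = ON);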

Roy Harvey
Beacon Falls, CT

|||"shelleybobelly" <shelleybobelly@.yahoo.comwrote in message
news:1153260795.628994.213350@.h48g2000cwc.googlegr oups.com...

Quote:

Originally Posted by

Right, I agree. If I run it the other way (just update the big lookup
table from the separate tables where there were changes), it runs much
quicker, only deleting/adding a thousand or so records a day. The
owners of this DB are doing things differently, but I guess I'll just
have to 'educate' them on this one. Then I'll create another job to
re-index this table about once a week or so to keep it from
fragmenting.

BTW, this is for a Legal department that has to look up files that can
be up to 20 years old, but they don't have all the information. That's
why there are 3 million records in the lookup table. It is a subset of a HUGE
archive table that contains 255 columns, too.


Two suggestions:

1) Move the indexes to an NDF file on a separate set of physical disks
(see the sketch below). This may help if you're disk I/O bound at all.

2) You may want to consider using full-text indexing for some of this.
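
(A rough sketch of suggestion 1 follows; the database name, file path, and index name are invented.)

-- Add a filegroup on a separate set of physical disks (name and path invented)
ALTER DATABASE LegalLookup ADD FILEGROUP IndexFG;

ALTER DATABASE LegalLookup
ADD FILE
(
    NAME     = LegalLookup_Indexes,
    FILENAME = 'E:\IndexData\LegalLookup_Indexes.ndf',
    SIZE     = 2048MB
)
TO FILEGROUP IndexFG;

-- Build the non-clustered indexes on the new filegroup
CREATE NONCLUSTERED INDEX IX_FileLookup_ClientName
    ON dbo.FileLookup (ClientName)
    ON IndexFG;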

|||

On 18 Jul 2006 13:58:55 -0700, "shelleybobelly"
<shelleybobelly@.yahoo.com> wrote:

Quote:

Originally Posted by

the database owners aren't keen on a job taking 20 minutes to run
every night.


Is it automated? Are there users doing regular work at night? If
the answers are "yes" and "no" respectively, then why do they care
how long it runs?
