Monday, September 08, 2008

The most important DBA skill?

If you were interviewing for a SQL Server DBA position and they asked you, "What is the most important skill that a DBA should have?", what would you say?

Technical skill? Working under pressure? Ability to work as part of a team? Ability to follow the standard procedures and rules that apply to your position?

All of these are good answers, but I think that one of the most important, if not the most important, is people skills.

Before you disagree with me (if you do) let me explain.

I have known lots of people with excellent technical skills. They were smarter than me by a long shot, but they could not communicate with people to save their lives. They were either socially awkward, arrogant, or just plain rude when it came to talking to our customers. This led to customers requesting that DBA X not be assigned to their projects, which in turn led to a very bad situation inside the DBA team. The DBA in question finally left, much to the relief of all.

The same applies to all of the answers given above (although if you are a good team player, your people skills should be OK as well). I would rather hire someone with a little less experience and technical skill who is a great communicator and people person than an expert who cannot relate to people or is rude or arrogant. If you have the right base skills, you can always learn the technical stuff. Learning the people stuff is not so easy.

What do you think? Am I right or do you disagree? Let me know. Thanks!

Jim

Wednesday, September 03, 2008

Generate password in SQL

Here is a nice little stored procedure that generates a random, strong password. I have used it to automate changing the SA password for all production SQL Servers every 30 days.

The process looks up the SA password for the given server from a secure table and then connects to the server. This stored procedure is then used to generate a new password, which is applied to the SA account and saved back to the secure location.

This replaced the manual routine of logging into 30 different production servers every month and doing it all by hand. It worked very well.

Here is the code for the stored procedure. As always, the code is provided as is; use with care.

CREATE PROCEDURE GenPassWord
@len INT = 8
AS

-- Creates a complex password of the passed-in length.
-- If no length is passed in, defaults to 8 characters.
-- Note: @password below holds at most 25 characters.

DECLARE @password VARCHAR(25)
DECLARE @type TINYINT

SET @password = ''

WHILE @len > 0
BEGIN

-- Pick a random character class (1 through 4)
SET @type = ROUND(1 + (RAND() * (3)), 0)

IF @type = 1 -- Append a random lower case letter to @password
SET @password = @password + CHAR(ROUND(97 + (RAND() * (25)), 0))
ELSE IF @type = 2 -- Append a random upper case letter to @password
SET @password = @password + CHAR(ROUND(65 + (RAND() * (25)), 0))
ELSE IF @type = 3 -- Append a random number between 0 and 9 to @password
SET @password = @password + CHAR(ROUND(48 + (RAND() * (9)), 0))
ELSE IF @type = 4 -- Append a random special character to @password
SET @password = @password + CHAR(ROUND(33 + (RAND() * (13)), 0))

SET @len = @len - 1

END

--Here's the result
SELECT @password
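
To show how the generated password might actually be applied, here is a minimal usage sketch. It is my own illustration, not the original automation: it captures the procedure's result set with INSERT ... EXEC and uses SQL 2005's ALTER LOGIN (on SQL 2000 you would use sp_password instead).

-- Minimal usage sketch (assumptions: SQL 2005 ALTER LOGIN syntax,
-- sysadmin rights, and capturing the result set via INSERT ... EXEC)
DECLARE @NewPwd TABLE (pwd VARCHAR(25))
DECLARE @pwd VARCHAR(25), @sql NVARCHAR(200)

-- Generate a 12-character password
INSERT @NewPwd EXEC GenPassWord @len = 12
SELECT @pwd = pwd FROM @NewPwd

-- Apply it to the SA account; QUOTENAME doubles any embedded quotes
SET @sql = N'ALTER LOGIN sa WITH PASSWORD = ' + QUOTENAME(@pwd, '''')
EXEC (@sql)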

Friday, August 29, 2008

Simple SQL Server Security

Do a search on the web for SQL Server security and you will find thousands of documents about best practices and how to secure your server. But many times, the biggest security threats come from inside.


No, I am not talking about a disgruntled employee or a DBA with a side job selling company information on eBay. Don't get me wrong, those are very real dangers and should be considered and protected against.


No, what I am talking about are small innocent mistakes that can cause HUGE problems for you and your company.


A perfect example is not having some elementary things in place like...

  • Change control
  • Build procedures and SOPs
  • Peer review
  • Proactive monitoring

These are all very simple things that should be in place, but they are often missing or overlooked.


Let's look at a simple one: a build checklist. As part of your job, you probably build a couple of SQL Servers a month. Sometimes more, sometimes less. When you build your server (hopefully you have a server department that burns a standard image for the OS), how do you do it? Do you do it from memory, because you have done a couple of hundred installs and are very good at it, or do you have a checklist document that walks you through it?

The answer should be: a checklist document that walks you through it. It is way too easy to be in the middle of installing a SQL Server instance, get called away for some emergency, and then come back and resume the build where you think you left off. The problem is that you skip a step you "know" you have already done, and something gets left in place that should not be there, or something that should be added never is. Having a build document greatly reduces the possibility of missing something vital and helps make sure that every server is exactly the same as all the others.

I know that no process is perfect. Mistakes will be made. The question is how many? I know from experience how easy it is to forget to change the SQL port from the default 1433 to the company-approved XXXXX, and then have security call and ask why after they run their next audit. Not a lot of fun, and potentially very serious if you are in a PCI or similar environment. Much more serious issues can arise too, like blank passwords, default accounts that should have been removed but were not, and so on.

Change control is another. A developer emails you a script and says it has to be run right now due to the emergency nature of the issue. As soon as you see the email you run it and call the developer to let him know the change is done, only to find out that he got tired of waiting for you (since you were in the bathroom for like 5 minutes) and called DBA Bob to run it instead. Now you and Bob have made the same change within minutes of each other without knowing it. Hopefully the multiple runs don't mess anything up, but the worst case is that the ecommerce site is now down and it will take you 20 minutes to fix what took 30 seconds to break. All because you don't have a defined change management process.

Many of you are probably laughing and saying, "Yeah, right, like that really happens." Well, I hope you are, because that means it has never happened to you. Yet. But it happens all too often, even in some fairly large shops that should know better.

So, secure your servers as suggested by Microsoft and the others, but also make sure you have the simple stuff covered as well.

The Devil is always in the details and the small ones are the ones he likes best.

Jim


Tuesday, August 26, 2008

The true cost of sub-select in queries

Yesterday I was working on optimizing a stored procedure. It had several subselects in one big SELECT statement, and that was one of the main issues; one subselect alone was responsible for almost 50% of the relative cost of the stored procedure.

That got me to thinking, what is the real cost of subselects in queries? I "know" they are bad, but just how bad are they? This is probably old news to you all, but I found it very interesting to actually see the cost in how many reads were produced by each type of query. So I did a little test. Here are the parameters.

I created three simple tables (Table_A, Table_B, Table_C) with no indexes. Here is the structure (all three are the same).

ID INT (Identity)
OrderID INT
Fname VARCHAR(50)
Lname VARCHAR(50)



Then I filled each table with random numbers and letters.
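
For reference, here is a rough sketch of how the tables could be created and filled. The 5,000-row count and the CHECKSUM(NEWID()) trick are my own assumptions for illustration, not necessarily how the original test data was generated.

-- Hypothetical setup sketch: Table_B and Table_C are built the same way
CREATE TABLE Table_A (
ID INT IDENTITY(1,1),
OrderID INT,
Fname VARCHAR(50),
Lname VARCHAR(50)
)

DECLARE @i INT
SET @i = 0

WHILE @i < 5000
BEGIN
-- CHECKSUM(NEWID()) is a cheap source of random-ish integers;
-- the GUID text doubles as random letters for the name columns
INSERT Table_A (OrderID, Fname, Lname)
VALUES (ABS(CHECKSUM(NEWID())) % 100000,
LEFT(REPLACE(CAST(NEWID() AS VARCHAR(36)), '-', ''), 10),
LEFT(REPLACE(CAST(NEWID() AS VARCHAR(36)), '-', ''), 10))

SET @i = @i + 1
END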

Then I wrote the following 3 queries to help measure how many reads each type of query would produce.

#1 (subselect with no temp tables)
select * from table_a
where orderID not in (select distinct orderid from table_b)
and orderID not in (select distinct orderid from table_c)

#2 (subselect using temp tables)
Select distinct OrderID into #B from Table_B
Select distinct OrderID into #c from Table_C

select * from table_a
where orderID not in (select orderid from #b)
and orderID not in (select orderid from #c)

drop table #b
drop table #c

#3 (Joins using temp tables)
Select distinct OrderID into #B from Table_B
Select distinct OrderID into #c from Table_C

select * from table_a a
left join #b b on a.orderid = b.orderid
left join #c c on a.orderid = c.orderid
where b.orderid is null and c.orderid is null

drop table #b
drop table #c

Then, using SQL Profiler, here are the results...

Rows in A   Rows in B   Rows in C   Query Type                   Total Reads
5,000       5,000       5,000       SubSelect, no temp table          25,063
5,000       55,000      55,000      SubSelect, no temp table          50,548
5,000       100,000     100,000     SubSelect, no temp table          51,022
50,000      100,000     100,000     SubSelect, no temp table         251,002
5,000       5,000       5,000       SubSelect with temp table         40,505
5,000       55,000      55,000      SubSelect with temp table         40,799
5,000       100,000     100,000     SubSelect with temp table         41,091
50,000      100,000     100,000     SubSelect with temp table        401,273
5,000       5,000       5,000       Join using temp table                476
5,000       55,000      55,000      Join using temp table                799
5,000       100,000     100,000     Join using temp table              1,091
50,000      100,000     100,000     Join using temp table              1,237


To tell you the truth, I was surprised at just how much more IO the subselect versions produced compared to the join version. Just thought I would share.



Friday, August 22, 2008

SQL Server Migration (aka The Big Pain)

Almost everyone has had the pleasure of moving a SQL instance from an old server to a new one. Whatever the reason (lease replacement, hardware upgrade, consolidation), it is never fun.


The last two places I have worked always renamed the new physical server, so the SQL Server name had to change as well. Our SQL Server naming rules went something like this...


  • Location - Purpose - Client - Environment - Numeric ID

So the servers were named...


  • STL_SQL_PROD_ABCInc_001

and the replacement server would always be...


  • STL_SQL_PROD_ABCInc_095 or some such.

So moving the SQL instance was always a headache. But I did discover some tricks to make it easier. Here are some things that I have found very useful.


1. Use sp_help_revlogin. This is a stored procedure you can find on the Microsoft website (there is a 2000 and a 2005 version) that will script out all of the logins and passwords (encrypted, of course) so that you can move them to the new server. The nice thing is that it keeps the original SIDs, so when you restore your databases from old to new, everything will match up and you won't have any orphaned users.
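
In rough outline, the workflow looks like this (a sketch, assuming you have already created the procedure from the Microsoft KB article):

-- On the OLD server: print CREATE statements for every login,
-- including the original SID and the hashed password
EXEC sp_help_revlogin

-- Copy the script it outputs and run it on the NEW server,
-- then restore the user databases; the SIDs will line up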


2. If you are moving a SQL 2000 instance, you might have to move a bunch of DTS packages as well. One instance I migrated had over 1,000 packages! If they are saved at the OS file level, it is not a big deal, but if they are stored in the msdb database, it can be a real hassle. You don't want to back up and restore msdb, because who knows what that will break, and as far as I know there is no good way to script out the packages without messing them up (especially the comments). So here is what I found.


A) Make sure the SIDs for any standard logins match on both servers. You can use the stored procedure sp_help_revlogin (see above) to recreate the login, SID, and password on the new server.


B) Create a linked server from the old server to the new server. Set the security context to be "use login's security context". That way you are yourself (and SA) on both servers.
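
Here is a minimal sketch of that setup, run on the old server ('ServerNew' is a placeholder name; @useself = 'TRUE' is what gives you the "use login's security context" behavior):

-- Register the new server as a linked server (run on the old server)
EXEC sp_addlinkedserver
@server = N'ServerNew',
@srvproduct = N'SQL Server'

-- Map all local logins to themselves on the remote server
EXEC sp_addlinkedsrvlogin
@rmtsrvname = N'ServerNew',
@useself = 'TRUE'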

C) Run this query.
INSERT LinkedServerName.msdb.dbo.sysdtspackages
SELECT * FROM msdb.dbo.sysdtspackages order by name,createdate

The packages are stored in an image column, so they copy over exactly. It might take a while, though; the last time I did this, it took about 10 minutes to copy 189 packages.


3. If you have to copy file shares yourself (hopefully your storage guys will do it for you, but lots of times I have had to do it myself), use RoboCopy. RoboCopy is normally part of the server build and should be on the server already. If not, XCopy (yes, the old DOS command) can be used. Both of these have a bunch of flags you can set so you copy over the permissions for the file shares as well as the data.

There is nothing I enjoy more than having to add 50 users to a shared folder by typing each one in. Maybe there is a way to copy and paste, or some script you could write, but I have not found it yet.

Since finding RoboCopy, I normally do a full copy with permission settings from Server Old to Server New, then set up a bat file to copy just new or changed data after that and run it a couple of times a day (see the sketch below). When you are ready to do your cutover, it should only take a few minutes to copy the last bit of new and/or changed data and you're done.
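
As a rough sketch, the bat file might look something like this; the paths are placeholders, and you should check robocopy /? for the exact flag behavior on your build:

rem Initial full copy: /E includes subfolders (even empty ones) and
rem /COPYALL carries over the NTFS security info along with the data
robocopy \\ServerOld\Data \\ServerNew\Data /E /COPYALL /R:2 /W:5

rem Re-running the same command later only copies files that have
rem changed since the last pass, so the incremental runs finish quickly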


4. I have found that these cutovers usually take place around 3 AM on a Saturday or Sunday morning. The last thing I want to do at that hour is sit around trying to figure out why the application can't connect to the SQL Server. So I strongly "suggest" to the developers that they check all connectivity to the new server, go through and update any stored procedures and DTS packages that reference the old server name, and test each one before the switch. This makes sure their stuff is working and checks my setup as well.

This means more work for you and maybe a few irked emails, but I have found it to be worth it. I would rather have someone complain that I am bugging them every day than have to spend hours rolling back a failed migration and then explaining that failure to my boss and his boss too. I have done that and it is not fun.

Is it really your job as a DBA to ask the developers to test on the new server? No, not really, at least not in most places. Is it worth it if no one else is stepping up to do it? Yes, I think so.


Well, those are some of my helpful hints on how to manage a SQL Server migration. I am sure other people have great tips and ways of doing this, and I would love to hear them.

Leave a comment or send me an email with how you handle it.

Thanks!!

SQL Tokenizer

Sometimes you need to be able to pass a delimited string to a stored procedure and then have the procedure break that string into individual pieces (tokens) and process each.

A good example would be zip codes. You have a stored procedure that takes a zip code and returns information about it. That works great, but now you need to be able to pass in a number of zip codes and return the information for all of them in one report.

Lots of ways to do it, but I found this to be kind of handy. This code will take a delimited string and create a temp table of tokens that can then be processed in a loop. So if you pass in "12345,23456,34567,45678" it will create a temp table that looks like this...

12345
23456
34567
45678

and then you can process each entry. It can easily be turned into a stored procedure or function or just added to existing code.

Please feel free to use and modify this code as you need. As always this code is presented "as is" and you use at your own risk.

DECLARE
@TokenString VARCHAR(500),
@i INT,
@iLength INT,
@Token VARCHAR(500),
@Delimiter CHAR(1),
@Character CHAR(1)

-- Using the zip code example from above
SET @TokenString = '12345,23456,34567,45678'

SET @iLength = LEN(@TokenString)
SET @i = 1
SET @Delimiter = ','
SET @Token = ''


CREATE TABLE #TempToken
(
Token VARCHAR(500)
)


-- Walk the string one character at a time. Each time we hit the
-- delimiter, save the token built so far and start a new one.
WHILE @i <= @iLength
BEGIN

SELECT @Character = SUBSTRING(@TokenString, @i, 1)

IF @Character = @Delimiter
BEGIN
INSERT #TempToken VALUES (@Token)
SET @Token = ''
END
ELSE
BEGIN
SET @Token = @Token + @Character
END

SET @i = @i + 1

END

-- Don't forget the last token (there is no trailing delimiter)
INSERT #TempToken VALUES (@Token)

SELECT * FROM #TempToken

DROP TABLE #TempToken
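
And since it can easily be turned into a function, here is one hedged way to package the same loop as a table-valued function. The function name is my own invention; use whatever fits your naming standards.

CREATE FUNCTION dbo.fn_Tokenize
(
@TokenString VARCHAR(500),
@Delimiter CHAR(1)
)
RETURNS @Tokens TABLE (Token VARCHAR(500))
AS
BEGIN

DECLARE @i INT, @Token VARCHAR(500), @Character CHAR(1)
SET @i = 1
SET @Token = ''

-- Same character-walking loop as above, writing into the return table
WHILE @i <= LEN(@TokenString)
BEGIN
SET @Character = SUBSTRING(@TokenString, @i, 1)

IF @Character = @Delimiter
BEGIN
INSERT @Tokens VALUES (@Token)
SET @Token = ''
END
ELSE
SET @Token = @Token + @Character

SET @i = @i + 1
END

-- Save the final token
INSERT @Tokens VALUES (@Token)

RETURN
END

Then SELECT Token FROM dbo.fn_Tokenize('12345,23456,34567,45678', ',') returns the same four-row result as the temp table version.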

Calculate Business Hours

I have been writing stored procedures for reports on various projects for as long as I have worked with SQL Server, and one thing that always gave me trouble was calculating the number of business days (or hours) that had passed.

A great example is an order fulfillment aging report. Your boss wants to see all the orders that are more than 6 business hours old, because your Service Level Agreement (SLA) states you will have all orders out the door in <= 6 hours. Your normal work day is 9 AM to 5 PM, Monday through Friday. So how do you do it? An order placed Friday night at 8 PM and shipped Monday at 12 PM is 64 hours old, but only 4 business hours old. So how do you calculate that?


I looked in a lot of places and found some "unique" methods of calculating this before I finally built my own. This function is pretty straightforward, and I am sure it can be improved upon greatly, but it should give you a good idea of where to start.

One thing I still need to do is add parameters for the business days and hours. Right now it is hard-coded for M-F, 8 AM till 4 PM. Please feel free to use or modify this code as you might need. All code examples are presented AS IS and should be used at your own risk.


CREATE FUNCTION [dbo].[svf_CalcBusinessHours]
(
@StartDate DATETIME,
@EndDate DATETIME
)

RETURNS DECIMAL(10,2)

AS
/************************************************************************************************
Created 8/21/2008 by Jim Youmans. Code provided as is.

This function takes a start and end date and then counts the minutes based on
the M-F workday of 8 AM till 4 PM. Returns the number of hours in decimal
form (10,2).

*************************************************************************************************/

BEGIN

DECLARE @bFlag INT
DECLARE @TotalMinutes DECIMAL(10,2)
DECLARE @TotalHours DECIMAL(10,2)

SET @TotalMinutes = 0
SET @bFlag = 0

WHILE @bFlag = 0
BEGIN

-- Monday (2) through Friday (6), assuming the default DATEFIRST setting
IF (DATEPART(dw,@StartDate) >= 2) AND (DATEPART(dw,@StartDate) <= 6)
BEGIN
-- Inside the 8 AM - 4 PM business window, so count this minute
IF (DATEPART(hh,@StartDate) >= 8) AND (DATEPART(hh,@StartDate) < 16)
SET @TotalMinutes = @TotalMinutes + 1
END

IF @StartDate >= @EndDate
SET @bFlag = 1

SET @StartDate = DATEADD(n,1,@StartDate)

END

SET @TotalHours = (@TotalMinutes / 60.0)
RETURN @TotalHours

END
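
A quick sanity check, using the Friday-to-Monday example from above (and keeping in mind the function is currently hard-coded to the 8 AM to 4 PM window, not 9 to 5):

-- Order placed Friday 2008-08-22 at 8 PM, shipped Monday 2008-08-25 at noon.
-- Only Monday 8 AM - 12 PM falls inside the window, so this should
-- return roughly 4.00
SELECT dbo.svf_CalcBusinessHours('20080822 20:00', '20080825 12:00')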

Thursday, August 21, 2008

Rebuilding a database's indexes

Many DBAs I know use the Maintenance Plan Wizard to set up reindexing for their databases. In my opinion, this is a mistake.

When you use a maintenance plan to reindex your database, you rebuild every index whether it needs it or not. This is like sending your car in for a tune-up once a week. It is excessive and can cause more harm than good.

For example, let's say we have a large database that is scheduled to be reindexed each night at 1:00 AM, and on average the job takes 7 hours to complete. That means you have a 7-hour window of random timeouts and delays (or so it will appear to your users) caused by your reindexing job locking tables and rebuilding indexes that may or may not need to be rebuilt. (Yes, SQL 2005 Enterprise Edition has online reindexing, but let's assume you don't have Enterprise Edition.)

Wouldn't it make more sense to just rebuild the indexes that are > x% fragmented? Also, if you only have to reindex 25% of the indexes each night instead of 100%, the job should run a lot faster.

So, we need an intelligent reindex stored procedure that will only rebuild those indexes that need to be rebuilt and that keeps track of what indexes are being worked on and how long it takes.

Well, here is a good start on that stored procedure. Please feel free to use and modify it at will. It saves its results to a table, and it also looks at each index's usage stats to see whether the index is being used at all.

I take no responsibility for the use of the below code. Please use at your own risk. (I should not have to say this but just in case: You should never just use posted code without reviewing it yourself and testing the crap out of it.)

This is for SQL 2005, I will post one for SQL 2000 later.


ALTER PROCEDURE [dbo].[ap_RebuildIndexes_2005] (
@maxfrag float = 20.0
, @MinUsage INT = 15
, @databasename varchar(255))

AS

/*
Created by Lara Rubbelke - 7/29/2007

Modified by

9/15/2007 - Jim Youmans - Added logic to write results to a table into a user database. Hardcoded database name, but could easily be passed as a parameter if need be.

8/6/2008 - Jim Youmans - Added the ability to identify and ignore unused indexes. @MinUsage is the sum of User_Seeks + User_Scans + User_Lookups from the DM view sys.dm_db_index_usage_stats

8/8/2008 - Jim Youmans - Added logic to ignore disabled indexes.


ap_RebuildIndexes_2005 is a process which will assess the level of fragmentation of all indexes in a database and reorganize those indexes which fall outside the specified parameters.

ap_RebuildIndexes_2005 accepts the following parameters:
@maxfrag The maximum level of acceptable fragmentation
@maxdensity The minimum level of acceptable density -- not in use
@databasename The database to assess and reorganize

This procedure assumes that a partitioned index will not be processed ONLINE.

*/

SET NOCOUNT ON;

-- Make sure database is set to compatibility level 90 (SQL 2005)

IF NOT EXISTS(select * from master.sys.databases where name = @DatabaseName and compatibility_level=90)
BEGIN
PRINT 'This stored procedure can only run on SQL 2005 databases with the compatibility_level set to 90'
RETURN 0
END


DECLARE
@schemaname sysname
, @objectname sysname
, @indexname sysname
, @indexid int
, @currentfrag float
, @currentdensity float
, @partitionnum varchar(10)
, @partitioncount bigint
, @indextype varchar(18)
, @updatecommand varchar(max)
, @command varchar(max)
, @ID int
, @IndexUsage int
, @SQLCommand varchar(1000)


-- ensure the temporary table does not exist
IF (SELECT object_id('tempdb..#work_to_do')) IS NOT NULL
DROP TABLE #work_to_do;

CREATE TABLE #work_to_do(
IndexID int not null
, IndexName varchar(255) null
, TableName varchar(255) null
, Tableid int not null
, SchemaName varchar(255) null
, IndexType varchar(18) not null
, PartitionNumber varchar(18) not null
, PartitionCount int null
, CurrentDensity float not null
, CurrentFragmentation float not null
, IndexUsage INT
);


-- 8/8/2008 Jim Youmans
-- Since this sp can be run from a central database against other databases on
-- the same server, we need to make sure we are checking the index information
-- for the target database and not the source.
IF OBJECT_ID('tempdb..##tempIndex') IS NOT NULL
DROP TABLE ##tempIndex

SET @SQLCommand = 'SELECT * INTO ##tempIndex FROM ' + @DatabaseName + '.sys.indexes'
EXEC(@SQLCommand)

-- This will only pick up indexes which are >= the @MaxFrag level
INSERT INTO #work_to_do(
IndexID, Tableid, IndexType, PartitionNumber, CurrentDensity, CurrentFragmentation
)
SELECT
fi.index_id
, fi.object_id
, fi.index_type_desc AS IndexType
, cast(fi.partition_number as varchar(10)) AS PartitionNumber
, fi.avg_page_space_used_in_percent AS CurrentDensity
, fi.avg_fragmentation_in_percent AS CurrentFragmentation
FROM sys.dm_db_index_physical_stats(db_id(@databasename), NULL, NULL, NULL, 'SAMPLED') AS fi
LEFT JOIN ##tempIndex i on fi.object_id = i.object_id AND fi.index_id = i.index_id
WHERE fi.avg_fragmentation_in_percent >= @maxfrag
AND fi.page_count > 1000
AND fi.index_id > 0
AND i.is_disabled = 0

DROP TABLE ##tempIndex

--Assign the index names, schema names, table names and partition counts
SET @updatecommand = 'UPDATE #work_to_do SET TableName = o.name, SchemaName = s.name, IndexName = i.Name
,PartitionCount = (SELECT COUNT(*) pcount
FROM '
+ QUOTENAME(@databasename) + '.sys.Partitions p
where p.Object_id = w.Tableid
AND p.index_id = w.Indexid)
FROM '
+ QUOTENAME(@databasename) + '.sys.objects o INNER JOIN '
+ QUOTENAME(@databasename) + '.sys.schemas s ON o.schema_id = s.schema_id
INNER JOIN #work_to_do w ON o.object_id = w.tableid INNER JOIN '
+ QUOTENAME(@databasename) + '.sys.indexes i ON w.tableid = i.object_id and w.indexid = i.index_id';

EXEC(@updatecommand)


-- Look at the indexes selected to see how much they are being used.
UPDATE #Work_To_Do
SET IndexUsage = (us.User_Seeks + us.User_Scans + us.User_Lookups)
FROM sys.dm_db_index_usage_stats us
RIGHT JOIN #work_to_do wtd ON us.index_id = wtd.IndexID and us.object_id = wtd.tableID
WHERE us.Database_ID = DB_ID(@DatabaseName)


--Declare the cursor for the list of tables, indexes and partitions to be processed.
--If the index is a clustered index, rebuild all of the nonclustered indexes for the table.
--If we are rebuilding the clustered indexes for a table, we can exclude the
--nonclustered and specify ALL instead on the table

DECLARE rebuildindex CURSOR FOR
SELECT QUOTENAME(IndexName) AS IndexName
, TableName
, SchemaName
, IndexType
, PartitionNumber
, PartitionCount
, CurrentDensity
, CurrentFragmentation
, IndexUsage
FROM #work_to_do i
ORDER BY TableName, IndexID;


-- Open the cursor.
OPEN rebuildindex;

-- Loop through the tables, indexes and partitions.
FETCH NEXT
FROM rebuildindex
INTO @indexname, @objectname, @schemaname, @indextype, @partitionnum, @partitioncount, @currentdensity, @currentfrag, @IndexUsage;

WHILE @@FETCH_STATUS = 0
BEGIN

-- jdy: 9/15/2007 - Changed this to only do rebuild and not reorganize. Also got rid of hard coded number
-- and used passed in parameter. Also write a log to a logging table to keep track of whats done,
-- when, and how long it took.

SELECT @command = 'ALTER INDEX ' + @indexname + ' ON ' + QUOTENAME(@databasename) +'.' + QUOTENAME(@schemaname) + '.' + QUOTENAME(@objectname) + ' REBUILD';

IF @partitioncount > 1
SELECT @command = @command + ' PARTITION=' + @partitionnum;


INSERT dbo.IndexLog(Databasename, TableName, SchemaName, IndexName, CurrentDensity, CurrentFragmentation,IndexUsage)
Values(@DatabaseName, @objectname, @schemaname, @indexname, @currentdensity, @currentfrag,@IndexUsage)

/*
-- Table structure for the IndexLog table in case you need to create it.
CREATE TABLE [dbo].[IndexLog](
[ID] [int] IDENTITY(1,1) NOT NULL,
[DatabaseName] [varchar](255) NULL,
[TableName] [varchar](255) NULL,
[SchemaName] [varchar](255) NULL,
[IndexName] [varchar](255) NULL,
[CurrentDensity] [float] NULL,
[CurrentFragmentation] [float] NULL,
[StartTime] [datetime] NULL CONSTRAINT [DF_IndexLog_StartTime] DEFAULT (getdate()),
[EndTime] [datetime] NULL,
[IndexUsage] [int] NULL,
[Notes] [varchar](50) NULL
CONSTRAINT [PK_IndexLog] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
*/

-- get the ID number for the newly inserted row.
SET @ID = @@Identity;

-- If index is not used much then make note of it but do not reindex it
IF @IndexUsage < @MinUsage
BEGIN
-- Update Notes but do not reindex
UPDATE dbo.IndexLog SET Notes = 'Index may not be being used' WHERE ID = @ID;
END
ELSE
BEGIN
-- execute the reindex command
EXEC (@command);
END

-- update endtime for log table for this command
UPDATE dbo.IndexLog SET EndTime = GETDATE() WHERE ID = @ID;

SET @ID = NULL;

FETCH NEXT
FROM rebuildindex
INTO @indexname, @objectname, @schemaname, @indextype, @partitionnum, @partitioncount, @currentdensity, @currentfrag, @indexusage;

END;

-- Close and deallocate the cursor.
CLOSE rebuildindex;
DEALLOCATE rebuildindex;

DROP TABLE #work_to_do;

SET NOCOUNT OFF;
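
And a hedged usage example; the database name is a placeholder, and it assumes the dbo.IndexLog table from the comment block above already exists in the database you run the procedure from:

-- Rebuild indexes in MyUserDB that are at least 20% fragmented and have
-- been used at least 15 times since the last SQL Server restart
EXEC dbo.ap_RebuildIndexes_2005
@maxfrag = 20.0,
@MinUsage = 15,
@databasename = 'MyUserDB'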