Reading from log files

If you really like watching log files in real time (I won’t judge you), try the PowerShell command below:

Get-Content <Your log file path> -Wait

That’s it; new lines will now appear in your PowerShell console as they are written to the log file.
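If you don’t want the whole file scrolling past first, Get-Content also takes a -Tail parameter (PowerShell 3.0 or later). A quick sketch, with an example path:

```powershell
# Show only the last 10 lines, then keep streaming new lines as they arrive
Get-Content C:\Logs\app.log -Tail 10 -Wait
```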

Lovely.


Shrinking Databases

Hands up if you’ve had to shrink a database and it has taken aggggeesss… me too. In fact, it was causing quite a problem.

To qualify: I don’t recommend shrinking production databases either. This was a database on a transient server being shrunk to go to a test system.

DBCC SHRINKFILE works by removing pages from the end of the file. To do this it first has to move any pages that are already there (and presumably update pointer records as it does so), and DBCC SHRINKFILE runs on a single thread.
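For reference, a typical shrink looks like this; the file name matches the example database created below, and the 10 MB target size is just an illustration:

```sql
USE ShrinkExample;
GO
-- Shrink the data file towards a 10 MB target,
-- moving pages from the end of the file one at a time on a single thread
DBCC SHRINKFILE (N'ShrinkExample', 10);
```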

After a visit to a local SQL Server user group, a talk by Chris McGowan on indexing highlighted a tool I had not used before: Internals Viewer. Install SSMS 2008 R2, install the add-in, and take a closer look at your file.

For testing, create a brand new database then take a look at it with internals viewer:

CREATE DATABASE [ShrinkExample]
ON PRIMARY ( NAME = N'ShrinkExample', FILENAME='E:\ShrinkExample.mdf', SIZE = 50MB)
LOG ON ( NAME = N'ShrinkExample_log', FILENAME='E:\ShrinkExample_log.ldf',SIZE = 5MB);

Step 1

Now create a few tables with some data in them:

SELECT TOP 480000
IDENTITY(INT,1,1) N
INTO SomeData
FROM syscolumns s1, syscolumns s2;
GO
SELECT TOP 480000
IDENTITY(INT,1,1) N
INTO SomeData2
FROM syscolumns s1, syscolumns s2;
GO
SELECT TOP 480000
IDENTITY(INT,1,1) N
INTO SomeData3
FROM syscolumns s1, syscolumns s2;

and the allocation map should look like this:

Step 3

Create a non-clustered index on SomeData:

CREATE NONCLUSTERED INDEX NCIX_SomeData ON SomeData(N);

and then the allocation map looks like this:
Step 4

So far it all makes sense: each table has been created, they sit near the start of the file, and the non-clustered index is not far after them.

Now watch what happens if we drop SomeData2, rebuild our index on SomeData, then recreate SomeData2.

DROP TABLE SomeData2;
GO
ALTER INDEX NCIX_SomeData ON SomeData REBUILD;
GO
SELECT TOP 480000
IDENTITY(INT,1,1) N
INTO SomeData2
FROM syscolumns s1, syscolumns s2;
GO

Step 5

Now our non-clustered index is right at the end of the file. This causes a problem when you want to shrink the database, because DBCC SHRINKFILE or DBCC SHRINKDATABASE has to move those pages earlier in the file before it can remove pages from the end.

If we rebuild the index yet again:

ALTER INDEX NCIX_SomeData ON SomeData REBUILD;

Step 6

The extents/pages for the index are now nearer the start of the file.

This blog post from Paul S. Randal has a really useful query which lets you estimate where the pages are. It is only an estimate, though, as the first page doesn’t indicate where the rest of the pages are.
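I won’t reproduce Paul’s query here, but the idea behind it is that the undocumented sys.system_internals_allocation_units view exposes a first_page pointer for each allocation unit, which can be decoded into a file/page address. A rough sketch along those lines (the byte decoding assumes the 6-byte little-endian page-pointer format; treat it as illustrative, not authoritative):

```sql
-- Estimate where each table/index starts in the data file,
-- highest first page ID (i.e. nearest the end of the file) first
SELECT
    OBJECT_NAME(p.object_id) AS object_name,
    p.index_id,
    -- first_page is a 6-byte pointer: bytes 6-5 = file ID, bytes 4-1 = page ID
    CONVERT(INT, SUBSTRING(au.first_page, 6, 1) +
                 SUBSTRING(au.first_page, 5, 1)) AS file_id,
    CONVERT(INT, SUBSTRING(au.first_page, 4, 1) +
                 SUBSTRING(au.first_page, 3, 1) +
                 SUBSTRING(au.first_page, 2, 1) +
                 SUBSTRING(au.first_page, 1, 1)) AS first_page_id
FROM sys.partitions AS p
JOIN sys.system_internals_allocation_units AS au
    ON p.partition_id = au.container_id
ORDER BY first_page_id DESC;
```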

After making an exhaustive list of the indexes and tables that were at the end of the file, it turned out that rebuilding all the indexes took only 8 minutes, and creating and dropping clustered indexes on all the heaps so that they moved down the file took only 5. The good thing about the index rebuilds is that they can benefit from parallel execution, so they should outperform DBCC SHRINKFILE.
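For the heaps, the trick is that creating a clustered index rebuilds the table into newly allocated pages, and dropping it again leaves the table as a heap in its new location. Something like this, using one of the example tables from above (the index name is just an illustration):

```sql
-- Move a heap by briefly giving it a clustered index, then dropping it
CREATE CLUSTERED INDEX CIX_Move ON SomeData3 (N);
GO
DROP INDEX CIX_Move ON SomeData3;
```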

For all those having problems with shrinking databases, I hope this helps.