
Wednesday, October 31, 2012

Who Did That? The Default Trace Knows

We had an issue at work the other day where a database name was changed from MyDatabase to MyDatabase DO NOT USE. This change caused the overnight ETL process to fail. The question immediately arose: when was the change made, and who made it? Using the default trace in SQL Server I was quickly able to answer both questions.

The default trace is a system trace that is enabled by default in SQL Server 2005, 2008, and 2012. According to BOL, this trace mostly captures information relating to configuration changes. The 2012 documentation notes that this functionality will be removed in a future version and that we should use Extended Events instead. I, however, find the default trace particularly useful because it is not something I have to configure and enable on a server-by-server basis. For those of you who have viewed the database disk usage report in Management Studio, you are already familiar with the default trace: the autogrow/autoshrink events section of that report is pulled from the default trace.
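You can confirm that the trace is running on a particular server by checking the 'default trace enabled' option in sys.configurations:

  -- 1 = enabled (the default); the option can be toggled with sp_configure
  SELECT name, value_in_use
  FROM sys.configurations
  WHERE name = N'default trace enabled';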

To view the default trace you can open it in SQL Profiler or use the function fn_trace_gettable to query the trace files. Note that the default trace rolls over when the service restarts and whenever the trace file reaches 20 MB. Only 5 trace files are kept, so on a busy system the default trace will not hold much history, but for the incident we had the other morning it was perfect.
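The file path, size limit, and rollover behavior are all visible in the sys.traces catalog view:

  -- max_size is in MB; max_files caps how many rollover files are kept
  SELECT id, [path], max_size, max_files, is_rollover,
         start_time, last_event_time, event_count
  FROM sys.traces
  WHERE is_default = 1;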

Here is a copy of the script I use to query the default trace. I choose to filter out the backup and restore information, EventClass 115, to make the results easier to analyze.

  DECLARE @Path NVARCHAR(260);

  -- Find the directory of the default trace and point at the base file;
  -- fn_trace_gettable will pick up the rollover files from there
  SELECT @Path = REVERSE(SUBSTRING(REVERSE([path]),
         CHARINDEX('\', REVERSE([path])), 260)) + N'log.trc'
  FROM sys.traces
  WHERE is_default = 1;

  SELECT DatabaseName,
         FileName,
         (Duration / 1000)          AS Duration,     -- microseconds to milliseconds
         t.StartTime,
         EndTime,
         EventClass,
         te.name                    AS EventName,
         TextData,
         LoginName,
         ApplicationName,
         HostName,
         (IntegerData * 8.0 / 1024) AS ChangeInSize  -- 8-KB pages to MB
  FROM sys.fn_trace_gettable(@Path, DEFAULT) t
  JOIN sys.trace_events te
    ON t.EventClass = te.trace_event_id
  WHERE t.StartTime > '20121028'    -- adjust to the window you care about
    AND EventClass != 115           -- Audit Backup/Restore Event
  ORDER BY t.StartTime DESC;

By passing DEFAULT as the second parameter of fn_trace_gettable, the function reads the information from all of the rollover trace files, not just the file you point it at. Joining the trace table to the sys.trace_events catalog view lets me pull in the name of each trace event.
Here are the results after changing my database name from “Sample” to “Sample DO NOT USE”.
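If you already know which database was involved you can narrow things down further. This variation (a sketch reusing the @Path variable from the script above; 'Sample' is just my test database) keeps only the object-change events:

  -- Object:Created, Object:Altered, and Object:Deleted cover most
  -- schema and object changes captured by the default trace
  SELECT t.StartTime, te.name AS EventName, t.LoginName,
         t.HostName, t.ApplicationName, t.ObjectName
  FROM sys.fn_trace_gettable(@Path, DEFAULT) t
  JOIN sys.trace_events te
    ON t.EventClass = te.trace_event_id
  WHERE te.name LIKE N'Object:%'
    AND t.DatabaseName = N'Sample'   -- adjust to the affected database
  ORDER BY t.StartTime DESC;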

The default trace can be a very useful tool for finding information about your server instance, and I urge you to investigate it further for yourself. Just bear in mind that, depending on service restarts and the level of activity on the server, the information captured by the default trace may not stick around very long.

Thursday, October 25, 2012

SQL Server NUMA Surprise


Today I welcome myself back to my blog after months of neglect, sorry blog. 

Today, as part of my MCM prep, I decided to dive into soft NUMA and SQL Server. I have read a lot of information about soft NUMA over the years but felt that I needed to go deeper to better understand exactly how and when to configure it.
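For context, here is a minimal sketch of what a soft-NUMA configuration looks like, based on BOL's soft-NUMA topic. The two-node, 8-CPU split is a made-up example (110 in the key path is the SQL Server 2012 version), and the DMV query afterwards shows the node layout the engine ends up with:

  /* Soft NUMA (through SQL Server 2012) is configured in the registry,
     not in T-SQL. Example: splitting 8 CPUs into two 4-CPU nodes.

     HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\110\NodeConfiguration\Node0
         CPUMask (DWORD) = 0x0000000F   -- CPUs 0-3
     HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\110\NodeConfiguration\Node1
         CPUMask (DWORD) = 0x000000F0   -- CPUs 4-7

     The instance must be restarted for the change to take effect. */

  -- Verify the node layout SQL Server sees after the restart
  SELECT node_id, node_state_desc, online_scheduler_count, cpu_affinity_mask
  FROM sys.dm_os_nodes
  WHERE node_state_desc <> N'ONLINE DAC';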

As part of my studies I came across two blog posts from Jonathan Kehayias (Blog | Twitter) regarding NUMA. In his second post, SQL Server and Soft NUMA, Jonathan does a great job of walking through how he calculated the CPU masks to divide a 24-CPU server into 4 soft-NUMA nodes. After reading through the post I was shocked to find that, at the end, Jonathan discovered soft NUMA does not work as documented in BOL and MSDN, which state:
 
"The benefits of soft-NUMA include reducing I/O and lazy writer bottlenecks on computers with many CPUs and no hardware NUMA. There is a single I/O thread and a single lazy writer thread for each NUMA node. Depending on the usage of the database, these single threads may be a significant performance bottleneck. Configuring four soft-NUMA nodes provides four I/O threads and four lazy writer threads, which could increase performance."
After configuring 4 soft-NUMA nodes, SQL Server still created only one lazy writer thread. This is contrary to BOL, which, as you can see above, states that SQL Server will create a lazy writer thread for each soft-NUMA node. In his post Jonathan references a post from Bob Dorr, an escalation engineer in Microsoft Product Support, called How It Works: Soft NUMA, I/O Completion Thread, Lazy Writer Workers and Memory Nodes, in which Bob explains that additional lazy writer threads are created only for hard NUMA nodes, not for soft-NUMA nodes.
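You can check this on your own instance; on the servers I have looked at, the lazy writer threads show up in sys.dm_exec_requests with the command label below, so counting the rows tells you how many you actually have:

  -- One row per lazy writer thread; on hardware NUMA you should see
  -- one per hard NUMA node, regardless of any soft-NUMA configuration
  SELECT session_id, scheduler_id, command
  FROM sys.dm_exec_requests
  WHERE command = N'LAZY WRITER';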

I find this particularly interesting because it goes against everything I studied for the MCITP: Database Administrator certification. Given this new information from Bob Dorr and Jonathan Kehayias, the next question is: why would I implement soft NUMA at all? The answer lies in the I/O completion threads, which SQL Server does create per soft-NUMA node. When we think of I/O completion threads we generally assume they write to the data and transaction log files, but as Bob Dorr points out, I/O completion threads actually handle connection requests and TDS traffic.
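That is where soft NUMA still earns its keep: BOL documents mapping TCP ports to NUMA nodes so that different workloads connect through different nodes, and therefore different I/O completion threads. The sketch below shows the documented bracketed-bitmask syntax; the ports and masks are made-up examples:

  /* In SQL Server Configuration Manager, a node affinity bitmask can be
     appended in square brackets after the TCP port:

         TCP Port: 1433[0x1],1434[0x6]

     Here connections on port 1433 are handled by node 0, while
     connections on port 1434 are spread across nodes 1 and 2. */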

If you are considering implementing soft NUMA, or just want to learn more about NUMA, I urge you to read the aforementioned blog posts from Jonathan and Bob.