
Wednesday, October 31, 2012

Who Did that? The Default Trace Knows

We had an issue at work the other day where a database was renamed from MyDatabase to MyDatabase DO NOT USE. The change caused the overnight ETL process to fail. The question immediately arose, “When did this change and who made it?” Using the default trace in SQL Server, I was quickly able to determine both when the change was made and who made it.

The default trace is a system trace that is enabled by default in SQL Server 2005, 2008, and 2012. According to BOL, this trace captures information mostly relating to configuration changes. The 2012 documentation mentions that this functionality will be removed in a future version and that we should use Extended Events instead. I, however, find the default trace particularly useful because it is not something that I have to configure and enable on a server-by-server basis. For those of you that have viewed the database disk usage report in Management Studio, you are already familiar with the default trace: the autogrow/autoshrink events section of that report is pulled from the default trace.
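If you want to confirm that the default trace is running on an instance before relying on it, you can check the server configuration option and the trace metadata. A quick sketch:

-- Is the 'default trace enabled' option turned on? (value_in_use = 1 means enabled)
SELECT name, value_in_use
FROM   sys.configurations
WHERE  name = 'default trace enabled';

-- Where is the current default trace file being written, and how does it roll over?
SELECT id, [path], max_size, max_files
FROM   sys.traces
WHERE  is_default = 1;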

To view the default trace you can open it in SQL Profiler or use the function fn_trace_gettable to query the trace file. It should be noted that the default trace rolls over to a new file after a service restart and after the trace file reaches 20 MB. Only five trace files are maintained, so on a busy system the default trace will not hold a lot of history, but for the issue we had the other morning the default trace was perfect.

Here is a copy of the script I use to query the default trace. I choose to filter out the backup and restore information, EventClass 115, to make the results easier to analyze.

DECLARE @Path NVARCHAR(260);

-- Build the path to the default trace: everything up to the last backslash,
-- with 'log.trc' appended so all rollover files are read.
SELECT @Path = REVERSE(SUBSTRING(REVERSE([path]),
                   CHARINDEX('\', REVERSE([path])), 260)) + N'log.trc'
FROM   sys.traces
WHERE  is_default = 1;

SELECT DatabaseName,
       FileName,
       (Duration / 1000)          AS Duration,
       t.StartTime,
       EndTime,
       EventClass,
       te.name,
       TextData,
       LoginName,
       ApplicationName,
       HostName,
       (IntegerData * 8.0 / 1024) AS ChangeInSize
FROM   ::fn_trace_gettable(@Path, DEFAULT) t
JOIN   sys.trace_events te
       ON t.EventClass = te.trace_event_id
WHERE  t.StartTime > '20121028'
  AND  EventClass != 115   -- exclude backup/restore events
ORDER BY t.StartTime DESC;

By passing DEFAULT as the second parameter to fn_trace_gettable, the function reads the information from all of the rollover trace files rather than just the one specified. Joining the trace table to the sys.trace_events catalog view allows me to pull the name of each trace event.
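If you are curious which events the default trace actually captures, you can list them with fn_trace_geteventinfo. A quick sketch, assuming the default trace has trace id 1 on your instance (confirm with sys.traces if in doubt):

-- List the distinct events captured by the default trace (trace id 1 assumed)
SELECT DISTINCT te.trace_event_id, te.name
FROM   sys.fn_trace_geteventinfo(1) ei
JOIN   sys.trace_events te
       ON ei.eventid = te.trace_event_id
ORDER BY te.name;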
Here are the results after changing my database name from “Sample” to “Sample DO NOT USE”.

The default trace can be a very useful tool for finding information about your server instance, and I urge you to investigate it further for yourself. Bear in mind, though, that depending on service restarts and the level of activity on the server, the information captured by the default trace may not stick around very long.

Thursday, October 25, 2012

SQL Server NUMA Surprise


Today I welcome myself back to my blog after months of neglect, sorry blog. 

Today as part of my MCM prep I decided to dive into Software NUMA and SQL Server.  I have read a lot of information regarding Software NUMA over the years but felt that I needed to dive deeper into NUMA to better understand exactly how and when to configure it. 

As part of my studies I came across two blog posts from Jonathan Kehayias (Blog | Twitter) regarding NUMA.  In his second post, SQL Server and Soft NUMA, Jonathan does a great job of walking through how he calculated the CPU mask to divide a 24-CPU server into 4 soft-NUMA nodes.  After reading through the blog post I was shocked to find out at the end that Jonathan discovered soft NUMA does not work as reported in BOL and MSDN.
 
"The benefits of soft-NUMA include reducing I/O and lazy writer bottlenecks on computers with many CPUs and no hardware NUMA. There is a single I/O thread and a single lazy writer thread for each NUMA node. Depending on the usage of the database, these single threads may be a significant performance bottleneck. Configuring four soft-NUMA nodes provides four I/O threads and four lazy writer threads, which could increase performance."
After configuring 4 soft-NUMA nodes, SQL Server still created only one lazy writer thread.  This is contrary to BOL, which, as you can see above, states that SQL Server will create a lazy writer for each soft-NUMA node.  In his post Jonathan references a post from Bob Dorr, an escalation engineer at Microsoft in the Product Support division, called How It Works: Soft NUMA, I/O Completion Thread, Lazy Writer Workers and Memory Nodes.  In his post Bob explains that additional lazy writer threads are created only for hard NUMA nodes and not for soft NUMA nodes.
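One quick way to see how many lazy writer threads an instance has actually started is to look at the background sessions in sys.dm_exec_requests, where system tasks report their role in the command column.  A minimal sketch:

-- List the lazy writer threads currently running on the instance
SELECT session_id, command, scheduler_id
FROM   sys.dm_exec_requests
WHERE  command = 'LAZY WRITER';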

I find this particularly interesting because it goes against everything I studied for the MCITP: Database Administrator certification.  Given this new information from Bob Dorr and Jonathan Kehayias, the next question is: why would I implement soft NUMA at all?  Soft NUMA still gives you additional I/O completion threads, one per node.  When we think of I/O completion threads we generally assume they handle writing data to the transaction log and data files, but as Bob Dorr points out, I/O completion threads actually handle connection requests and TDS data.

If you are considering implementing soft NUMA, or just want to learn more about NUMA, I urge you to read the aforementioned blog posts from Jonathan and Bob.

Saturday, June 2, 2012

Index Statistics Norecompute


     I’ve been meaning to write about this one for a while but life has gotten in the way.  There seems to be some confusion over the index setting statistics norecompute so I thought I would write about it in hopes of shedding some light on the subject.

     When you write an ALTER INDEX statement, one of the arguments you may specify is STATISTICS_NORECOMPUTE = ON or OFF.  This setting tells the database engine whether or not it should automatically recompute distribution statistics for the index.

     When you create an index, statistics are automatically created for that index.  By default SQL Server will automatically update statistics once 20% of the rows in the table, plus 500 rows, have been modified.  So on a table with 100,000 rows, once 20,500 rows have changed, the statistics will be recomputed, or in the case of SQL Server 2005 and 2008, the statistics will be marked for updating.  In 2005 and 2008 SQL Server flags statistics as out-of-date and updates them the next time they are accessed.
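If you want to see roughly how close a table is to that threshold, the legacy sys.sysindexes view still exposes a modification counter in SQL Server 2005 through 2012.  A rough sketch using the INF_GROUP table from the test later in this post (rowmodctr is deprecated and approximate, so treat it as an estimate):

-- Approximate number of modifications since statistics were last updated
SELECT OBJECT_NAME(id) AS table_name,
       name            AS index_or_stats_name,
       rowmodctr
FROM   sys.sysindexes
WHERE  id = OBJECT_ID('INF_GROUP');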

     This is where the STATISTICS_NORECOMPUTE setting of ALTER INDEX comes into play.  Setting STATISTICS_NORECOMPUTE to ON for an index disables the automatic updating of statistics for that index once the threshold for changed data has been reached.  This has the same effect as creating statistics with NORECOMPUTE, except you don't have to drop and recreate the statistics to change the setting.  To re-enable the automatic updating of index statistics once it has been disabled, simply rebuild the index with STATISTICS_NORECOMPUTE = OFF.
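Re-enabling the automatic updates looks like this, using the same PK_INF_GROUP index that appears in the test below:

-- Rebuild the index and turn automatic statistics updates back on
ALTER INDEX PK_INF_GROUP ON INF_GROUP
REBUILD WITH ( STATISTICS_NORECOMPUTE = OFF );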

     The confusion around this setting comes from the fact that people think NORECOMPUTE refers to the updating of statistics during an index rebuild.  The thinking is that you can speed up an index rebuild by telling SQL Server not to recompute statistics during the rebuild.  This is not the case.  SQL Server will update index statistics during a rebuild regardless of the NORECOMPUTE setting for the index.
To validate this I ran a test.  I used the sys.stats catalog view along with the STATS_DATE function to return the statistics information for my table, along with the date and time the statistics were last updated.  The STATS_DATE function accepts the object_id and stats_id from sys.stats and returns the date the statistics were last updated.  I ran the following statement:

ALTER INDEX PK_INF_GROUP
ON INF_GROUP
REBUILD
WITH ( STATISTICS_NORECOMPUTE = ON );

Following that I ran this query to view the statistics information:

SELECT *
     , StatDate = STATS_DATE(s.object_id, s.stats_id)
FROM sys.stats s
WHERE object_id = OBJECT_ID('INF_GROUP');

From the results below I can see that, although no_recompute is set for my index, the statistics for the index were recomputed.

     You can also see from these results that the other statistics for my table were not updated during the index rebuild.   By viewing the details of the statistics in SQL Server Management Studio I can further see that the statistics were updated using a full scan and not a sampling of the data.
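You can check the same thing without Management Studio by looking at the statistics header.  A small sketch, again using the PK_INF_GROUP statistics from the test above (if Rows and Rows Sampled match, the last update was a full scan):

-- Show when the statistics were last updated and how many rows were sampled
DBCC SHOW_STATISTICS ('INF_GROUP', 'PK_INF_GROUP') WITH STAT_HEADER;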

     There may be some situations where you would want to disable the automatic updating of statistics.  One that comes to mind is when your data is skewed and sampling does not provide an accurate enough picture of the distribution of data for SQL Server to choose an optimal execution plan.  However, you cannot prevent an index rebuild from updating the index statistics by setting the STATISTICS_NORECOMPUTE parameter to ON during the rebuild.
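If you do decide that manually managed statistics are the right call for a skewed table, one approach is to update them yourself with a full scan and turn off automatic recomputation in the same statement.  A sketch using the same table from the example above:

-- Manually update statistics with a full scan and disable automatic updates
UPDATE STATISTICS INF_GROUP
WITH FULLSCAN, NORECOMPUTE;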

     Kimberly Tripp has blogged extensively about index statistics and I suggest you check out her blog at www.SQLSkills.com/blogs/Kimberly for more in-depth information on statistics.

Monday, April 30, 2012

SQL Saturday #130

This weekend I attended my first SQL Saturday event here in Jacksonville.  I highly recommend that anyone who has an opportunity to attend one of these events go ahead and do so.  The sessions were great and I was also able to network with people who are as enthusiastic about SQL Server as I am.

My company was actually nice enough to spring for the SQL Saturday Pre-Conference, which was a presentation from MVP Kevin Kline.  Kevin gave an excellent all day presentation on Troubleshooting & Performance Tuning SQL Server.  This presentation included an explanation of wait stats, monitoring, DMVs, and many other topics.  Unfortunately I was forced to leave the Pre-Con a bit early to officiate a play-in game in Orlando for the Florida State lacrosse tournament but I still came away with a few takeaways from the presentation.  


The actual SQL Saturday conference was as informative as the Pre-Con.  I followed the DBA track all afternoon, but there were also tracks available for personal development, BI, and BI & Data Warehouse.  My morning started with a presentation by Gareth Swanepoel on columnstore indexes.  The next presentation was given by Bradley Ball, aka @sqlballs, on page and row compression.  I found this presentation very informative because it explained page and row compression at the storage level and included a demo of enabling page and row compression.  Christina Leo was up next on SQL Server internals, followed by SQLRockstar with a presentation on monitoring in a virtual environment.  My old manager, Chad Churchwell, presented next on SQL 2012 availability groups.  I then switched to the BI track to catch a presentation from Brian Knight on developing faster with SQL 2012.  Let me just tell you, I hope I can become as relaxed and natural presenting as Brian Knight is.  I'll have to keep practicing to reach that level, so keep your eye out for one of my presentations coming soon.

All in all a great day and oh yeah the after party was fun too. 


Sunday, April 22, 2012

Welcome

Welcome to my blog.  Here I hope to periodically write posts regarding SQL Server and other IT and perhaps some non-IT related things that I find interesting.  I hope you enjoy the posts.