
Friday, September 6, 2013

Goodbye to MCM/MCSM Certifications

Originally I had planned to blog about my path to the Microsoft Certified Master certification, but given Microsoft's recent decision to cancel the certification, my plans changed. At 1 AM last Friday, the start of Labor Day weekend, Microsoft sent an email to the MCM/MCA community alerting us that, effective October 1st, the advanced certifications will be no more. This abrupt decision, coupled with the timing of the announcement (10 PM PST at the start of a holiday weekend), raises many questions. The fact that it comes almost two weeks after Microsoft announced plans to expand testing for the program makes it even more surprising.

Tim Sneath from Microsoft Learning, responding to comments on a Microsoft Connect item, points out:
The truth is, for as successful as the program is for those who are in it, it reaches only a tiny proportion of the overall community. Only a few hundred people have attained the certification in the last few years, far fewer than we would have hoped. We wanted to create a certification that many would aspire to and that would be the ultimate peak of the Microsoft Certified program, but with only ~0.08% of all MCSE-certified individuals being in the program across all programs, it just hasn't gained the traction we hoped for.

I certainly understand Microsoft's point here, but it seems this could have been handled better. My journey to MCM status took a little over a year once I had decided to go for it. Along the way there were costs for exams, books, and online study materials. I can only imagine there are many people who have started their journey and are now told they have until October 1st to complete it. It seems Microsoft could have given more notice to those who are in the process of trying to achieve MCM or MCSM status. Paul Randal has a survey on his blog asking "Survey: SQL Server MCM cancellation – does it affect you?" From a PR standpoint, Microsoft certainly could have handled this better.

I had hoped to eventually recoup my investment in the MCM certification, but now I wonder whether anyone will recognize or respect the MCM title once the program is cancelled. I also wonder whether everyone will understand what it takes to become a SQL MCM and the depth of knowledge it requires. Only time will tell, but so far it doesn't look good.

Even so, my path to the MCM was a good one. The knowledge I gained made me a better DBA long before I earned the MCM title. I understand not everyone can afford the costs associated with the MCM program, but I would certainly recommend that everyone take a look at the free MCM readiness videos on the Microsoft website. Those videos, along with blogs and books, were the study materials I used.

According to Tim Sneath, Microsoft is looking to modify or create a new advanced certification in the hope of reaching more people. I certainly welcome this and encourage them to do so, but I hope it will still require some sort of hands-on testing that validates a candidate's knowledge and understanding, as opposed to just multiple-choice questions. As others have stated, we don't want to see a "Masters Lite" certification. The Masters certification gives people something to strive toward and allows them to differentiate themselves from their peers; hopefully whatever Microsoft comes up with will still allow this.

I certainly hope Microsoft reconsiders cancelling the advanced certifications.

Thursday, May 2, 2013

SQL Cursors and TempDB

During a meeting yesterday the question was asked, "Aren't cursors stored in TempDB?" My initial response was no, they will be rendered in memory, but the correct answer is... "It depends!" Let me preface the rest of this post by saying that, when possible, you should use set-based operations rather than cursors, as cursors can hurt performance; as with all things SQL, though, it is best to test.

According to Books Online, insensitive/static cursors create a temporary copy of the data accessed by the cursor in TempDB. Likewise, a keyset cursor builds a keyset containing the uniquely identifying columns for the cursor in TempDB. Other cursor types (Fast_Forward, Forward_Only, and Dynamic) do not incur the overhead of storing all of the cursor data in TempDB. Using the DMV sys.dm_db_session_space_usage, I was able to view the page allocations in TempDB for the different cursor types. I noticed there were page allocations in TempDB no matter which type of cursor was chosen, but there were far more page allocations for a keyset cursor than for a Fast_Forward or static cursor.
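If you want to repeat that check yourself, here is a rough sketch of what I mean; the keyset cursor over sys.databases is only an illustration, and the DMV counts reflect completed batches, which is why the cursor is opened in its own batch.

-- Batch 1: declare and open a keyset cursor (global scope by default, so it survives the batch)
DECLARE cur_DBs CURSOR KEYSET FOR
    SELECT name FROM sys.databases WHERE state_desc = 'ONLINE';
OPEN cur_DBs;
GO

-- Batch 2: check this session's TempDB page allocations (counts are 8 KB pages);
-- cursor work tables show up under internal objects
SELECT session_id,
       user_objects_alloc_page_count,
       internal_objects_alloc_page_count
FROM   sys.dm_db_session_space_usage
WHERE  session_id = @@SPID;

CLOSE cur_DBs;
DEALLOCATE cur_DBs;
GO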

When programming cursors I typically declare the cursor as Fast_Forward, which creates a forward-only, read-only cursor with performance optimizations enabled. This option performs better than other cursor options and works well for me, as I rarely need to move backwards through a cursor or modify the cursor data. If I am updating or deleting data through a cursor, then I use a Forward_Only, Dynamic cursor. I often see developers use the ISO syntax when creating cursors, like the following:

DECLARE cur_DBs CURSOR
FOR
SELECT Name
FROM sys.databases d
WHERE d.state_desc = 'ONLINE';

By omitting the T-SQL cursor model from the declaration statement, a keyset, optimistic, non-scrollable cursor is created, which will have its keyset materialized in TempDB.
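For comparison, here is the same cursor with the cursor model spelled out, which is how I would normally write it:

DECLARE cur_DBs CURSOR FAST_FORWARD
FOR
SELECT Name
FROM sys.databases d
WHERE d.state_desc = 'ONLINE';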

So there you have it.  Cursors will cause some page allocations in TempDB, but how many depends on the type of cursor being declared.  Fast_Forward cursors have less overhead and less impact on TempDB.  So when reviewing or writing code, remember to declare the type of cursor being created to reduce overhead.

Tuesday, March 5, 2013

Dynamically Setting Max Server Memory on Cluster

I was recently given the task of adding a new node to our one-node cluster, which hosts two SQL instances.  Due to politics within the company, what was originally going to be a two-node SQL cluster had lingered as a single-node cluster for over a year. The issue I was confronted with was how to maximize memory usage on both nodes of the cluster while ensuring that the two instances do not compete for memory if they happen to be running on the same node.

The solution I came up with was to query each server to determine the physical node that each SQL instance was running on.  If, after running both queries, it is determined that the instances are running on the same node, then max server memory for both instances is reduced to prevent contention.  Conversely, if the instances are running on different nodes, then max server memory is increased to leave just enough for the operating system.  I remembered that changing max server memory on an instance flushes the procedure cache, so I added logic to determine whether max server memory actually needed to be changed or could be left as it was.

There are several ways to get this code to run at startup.  One option is to create a stored procedure that runs the code and have that procedure run at instance startup; to execute it this way, the server must be configured to scan for startup procedures and you must run sp_procoption to flag the stored procedure to run at startup. I chose instead to create a SQL Agent job that runs the code, with its schedule set to run at Agent startup, to avoid reconfiguring the server.  This approach is also more transparent for any future DBAs.
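For anyone who prefers the startup stored procedure route, the setup looks roughly like the sketch below; the procedure name dbo.usp_SetMaxServerMemory is just a placeholder for a procedure (created in master) wrapping the code shown later in this post.

-- 'scan for startup procs' is an advanced option, so expose advanced options first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'scan for startup procs', 1;
RECONFIGURE;

-- Flag the wrapper procedure (placeholder name, must live in master) to run at instance startup
EXEC sp_procoption @ProcName    = N'dbo.usp_SetMaxServerMemory',
                   @OptionName  = 'startup',
                   @OptionValue = 'on';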

In order to change the max server memory setting on the remote instance, I had to create a linked server on both servers using a login that I was sure would have the ALTER SETTINGS permission.
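For completeness, a linked server like the SYS_SQLDW one used in the script below could be set up roughly as follows. The data source, remote login, and password here are placeholders, and the remote login needs ALTER SETTINGS (or membership in serveradmin/sysadmin) to run sp_configure; RPC Out must also be enabled for the EXEC ... AT calls to work.

-- Linked server pointing at the other SQL instance (data source is a placeholder)
EXEC sp_addlinkedserver
     @server     = N'SYS_SQLDW',
     @srvproduct = N'',
     @provider   = N'SQLNCLI',
     @datasrc    = N'MYCLUSTER\SQLDW';

-- Map local logins to a remote login that holds ALTER SETTINGS (placeholder credentials)
EXEC sp_addlinkedsrvlogin
     @rmtsrvname  = N'SYS_SQLDW',
     @useself     = N'FALSE',
     @locallogin  = NULL,
     @rmtuser     = N'MaxMemoryLogin',
     @rmtpassword = N'StrongPasswordHere';

-- EXEC (...) AT requires RPC Out on the linked server
EXEC sp_serveroption @server = N'SYS_SQLDW', @optname = 'rpc out', @optvalue = 'true';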

Below is the code I created.

DECLARE @standalonememory int = 125000;
DECLARE @sharedmemory     int = 55000;
DECLARE @SQLRPTHost       nvarchar(50);
DECLARE @SQLDWHost        nvarchar(50);
DECLARE @SQLRPTmaxmemory  int;
DECLARE @SQLDWmaxmemory   int;

--Retrieve the host name for the SQLRPT instance (returns the physical node name)
SELECT @SQLRPTHost = CAST(SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS nvarchar(50));

--Retrieve the host name for the SQLDW instance
SELECT @SQLDWHost = ComputerNamePhysicalNetBIOS
FROM OPENQUERY (SYS_SQLDW, 'SELECT CAST(SERVERPROPERTY(''ComputerNamePhysicalNetBIOS'') AS nvarchar(50)) AS ComputerNamePhysicalNetBIOS');

--Retrieve the max server memory on the current server
SELECT @SQLRPTmaxmemory = CAST(value AS int)
FROM sys.configurations
WHERE name = 'max server memory (MB)';

--Retrieve the max server memory on the remote server
SELECT @SQLDWmaxmemory = value
FROM OPENQUERY(SYS_SQLDW, 'SELECT CAST(value AS int) AS value
    FROM sys.configurations
    WHERE name = ''max server memory (MB)''');

IF @SQLRPTHost = @SQLDWHost  --Both instances running on the same host
BEGIN
    --Max memory should be set to the shared value; if not, set it
    IF @SQLRPTmaxmemory != @sharedmemory
    BEGIN
        EXEC sp_configure 'Max Server Memory', @sharedmemory;
        RECONFIGURE;
    END

    --Check the memory setting for the SQLDW instance
    IF @SQLDWmaxmemory != @sharedmemory
    BEGIN
        EXEC ('sp_configure ''Max Server Memory'', ' + @sharedmemory + '; RECONFIGURE') AT SYS_SQLDW;
    END
END
ELSE  --Instances are each running on their own node
BEGIN
    --Max memory should be set to the stand-alone value; if not, set it
    IF @SQLRPTmaxmemory != @standalonememory
    BEGIN
        EXEC sp_configure 'Max Server Memory', @standalonememory;
        RECONFIGURE;
    END

    --Check the memory setting for the SQLDW instance
    IF @SQLDWmaxmemory != @standalonememory
    BEGIN
        EXEC ('sp_configure ''Max Server Memory'', ' + @standalonememory + '; RECONFIGURE') AT SYS_SQLDW;
    END
END

Wednesday, October 31, 2012

Who Did that? The Default Trace Knows

We had an issue at work the other day where a database name was changed from MyDatabase to MyDatabase DO NOT USE. This change caused the overnight ETL process to fail. The questions immediately arose: when did this change, and who made it? Using the default trace in SQL Server, I was quickly able to determine both.

The default trace is a system trace that is enabled by default in SQL Server 2005, 2008, and 2012. According to BOL, this trace mostly captures information relating to configuration changes. The 2012 documentation mentions that this functionality will be removed in a future version and that we should use Extended Events instead. I, however, find the default trace particularly useful because it is not something I have to configure and enable on a server-by-server basis. If you have viewed the database Disk Usage report in Management Studio, you are already familiar with the default trace: the autogrow/autoshrink events section of that report is pulled from the default trace.
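If you want to verify that the default trace is still turned on for an instance (it can be disabled), a quick check along these lines works:

-- 1 = enabled, 0 = disabled
SELECT name, value_in_use
FROM   sys.configurations
WHERE  name = 'default trace enabled';

-- Or look at the running trace itself
SELECT id, path, max_size, max_files
FROM   sys.traces
WHERE  is_default = 1;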

To view the default trace you can open it in SQL Profiler or use the function fn_trace_gettable to query the trace file. It should be noted that the default trace rolls over after restarts and whenever the trace file reaches 20 MB. Only five trace files are maintained, so on a busy system the default trace will not hold much history, but for the incident we had the other morning it was perfect.

Here is a copy of the script I use to query the default trace. I chose to filter out the backup and restore information, event class 115, to make the results easier to analyze.

DECLARE @Path NVARCHAR(250);

SELECT @Path = REVERSE(SUBSTRING(REVERSE([path]),
           CHARINDEX('\', REVERSE([path])), 260)) + N'log.trc'
FROM sys.traces
WHERE is_default = 1;

SELECT DatabaseName,
       FileName,
       (Duration / 1000)          AS Duration,
       t.StartTime,
       EndTime,
       EventClass,
       te.name                    AS EventName,
       TextData,
       LoginName,
       ApplicationName,
       HostName,
       (IntegerData * 8.0 / 1024) AS ChangeInSize
FROM ::fn_trace_gettable(@Path, DEFAULT) t
JOIN sys.trace_events te
    ON t.EventClass = te.trace_event_id
WHERE t.StartTime > '20121028'
  AND EventClass != 115
ORDER BY t.StartTime DESC;

By passing DEFAULT as the second parameter of fn_trace_gettable, the function reads the information from all of the rollover trace files. Joining the trace table to the sys.trace_events system view lets me pull the name of each trace event.
Here are the results after changing my database name from “Sample” to “Sample DO NOT USE”.

The default trace can be a very useful tool for finding information about your server instance, and I urge you to investigate it further for yourself. Bear in mind that, depending on service restarts and the level of activity on the server, the information captured by the default trace may not stick around very long.

Thursday, October 25, 2012

SQL Server NUMA Surprise


Today I welcome myself back to my blog after months of neglect, sorry blog. 

Today, as part of my MCM prep, I decided to dive into soft NUMA and SQL Server.  I have read a lot about soft NUMA over the years but felt I needed to dig deeper into NUMA to better understand exactly how and when to configure it. 

As part of my studies I came across two blog posts from Jonathan Kehayias (Blog | Twitter) regarding NUMA.  In his second post on NUMA, SQL Server and Soft NUMA, Jonathan does a great job of walking through how he calculated the CPU mask to divide a 24-CPU server into 4 soft NUMA nodes.  After reading through the post I was shocked to find out at the end that Jonathan discovered soft NUMA does not work as documented in BOL and MSDN. 
 
"The benefits of soft-NUMA include reducing I/O and lazy writer bottlenecks on computers with many CPUs and no hardware NUMA. There is a single I/O thread and a single lazy writer thread for each NUMA node. Depending on the usage of the database, these single threads may be a significant performance bottleneck. Configuring four soft-NUMA nodes provides four I/O threads and four lazy writer threads, which could increase performance."
After configuring four soft NUMA nodes, SQL Server still created only one lazy writer thread.  This is contrary to BOL, which, as you can see above, states that SQL Server will create a lazy writer for each soft NUMA node.  In his post Jonathan references a post from Bob Dorr, an escalation engineer in Microsoft Product Support, called How It Works: Soft NUMA, I/O Completion Thread, Lazy Writer Workers and Memory Nodes. In that post Bob explains that additional lazy writer threads are only created for hardware NUMA nodes, not soft NUMA nodes.

I find this particularly interesting because it goes against everything I studied for the MCITP: Database Administrator certification.  Given this new information from Bob Dorr and Jonathan Kehayias, the next question is: why would I implement soft NUMA at all?  One remaining benefit is that soft NUMA still partitions the I/O completion threads.  When we think of I/O completion threads we generally assume they write data to the transaction log and data files, but as Bob Dorr points out, I/O completion threads actually handle connection requests and TDS traffic. 
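If you are curious how many NUMA nodes your own instance currently sees, whether hardware or soft, a quick look at sys.dm_os_nodes will show them; the dedicated admin connection always appears as an extra node, so it is filtered out here.

SELECT node_id,
       node_state_desc,
       memory_node_id,
       online_scheduler_count
FROM   sys.dm_os_nodes
WHERE  node_state_desc <> 'ONLINE DAC';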

If you are considering implementing soft NUMA, or just want to learn more about NUMA, I urge you to read the aforementioned blog posts from Jonathan and Bob.

Saturday, June 2, 2012

Index Statistics Norecompute


     I’ve been meaning to write about this one for a while, but life has gotten in the way.  There seems to be some confusion over the index setting STATISTICS_NORECOMPUTE, so I thought I would write about it in hopes of shedding some light on the subject.

     When you run an ALTER INDEX statement, one of the arguments you may specify is STATISTICS_NORECOMPUTE, set to ON or OFF.  This setting tells the database engine whether or not it should automatically recompute distribution statistics for the index.  

     When you create an index, statistics are automatically created for it.  By default, SQL Server will automatically update a statistic once roughly 20% of the table's rows plus 500 rows have been modified.  So on a table with 100,000 rows, once about 20,500 rows have changed, the statistics will be recomputed, or in the case of SQL Server 2005 and 2008, the statistic will be marked for updating.   In 2005 and 2008, SQL Server flags statistics as out-of-date and updates them the next time they are accessed. 
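     On later builds (SQL Server 2008 R2 SP2 and 2012 SP1 onward), you can watch those modifications accumulate with sys.dm_db_stats_properties; here is a rough example using the INF_GROUP table from the demo later in this post.

SELECT s.name AS stats_name,
       sp.last_updated,
       sp.rows,
       sp.rows_sampled,
       sp.modification_counter  -- changes since the statistic was last updated
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID('INF_GROUP');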

     This is where the STATISTICS_NORECOMPUTE setting for ALTER INDEX comes into play.  Setting STATISTICS_NORECOMPUTE to ON for an index disables the automatic updating of statistics for that index once the changed-data threshold has been reached.  This has the same effect as creating statistics with NORECOMPUTE, except that you don't have to drop and recreate the statistic to change the setting.  To re-enable automatic updating of index statistics once it has been disabled, simply rebuild the index with STATISTICS_NORECOMPUTE = OFF.
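     Using the index from the demo below, re-enabling automatic updates is just a rebuild with the option flipped:

ALTER INDEX PK_INF_GROUP
ON INF_GROUP
REBUILD
WITH ( STATISTICS_NORECOMPUTE = OFF );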

     The confusion around this setting comes from the fact that people think NORECOMPUTE refers to the updating of statistics during an index rebuild.  The thought is that you can speed up an index rebuild by telling SQL Server not to recompute statistics during the rebuild.  This is not the case.  SQL Server will update index statistics during a rebuild regardless of the NORECOMPUTE setting for the index.
To validate this I ran a test.  I used the sys.stats system view along with the STATS_DATE function to return the statistics information for my table, including the date and time the statistics were last updated.  STATS_DATE accepts the object_id and stats_id from sys.stats and returns the date the statistics were last updated.  I ran the following statement:

ALTER INDEX PK_INF_GROUP
ON INF_GROUP
REBUILD
WITH ( STATISTICS_NORECOMPUTE = ON );

Following that I ran this query to view the statistics information:

SELECT *
     , StatDate = STATS_DATE(s.object_id, s.stats_id)
FROM sys.stats s
WHERE s.object_id = OBJECT_ID('INF_GROUP');

From the results below I can see that, although no_recompute is set for my index, the statistics for the index were still recomputed.
    
 


     You can also see from these results that the other statistics on my table were not updated during the index rebuild.   By viewing the details of the statistics in SQL Server Management Studio, I can further see that the statistics were updated using a full scan rather than a sampling of the data.

     There may be situations where you would want to disable the automatic updating of statistics.  One that comes to mind is when your data is skewed and sampling does not provide an accurate enough picture of the data distribution for SQL Server to choose an optimal execution plan.  However, you cannot prevent an index rebuild from updating the index statistics by setting STATISTICS_NORECOMPUTE to ON during the rebuild.

     Kimberly Tripp has blogged extensively about index statistics and I suggest you check out her blog at www.SQLSkills.com/blogs/Kimberly for more in-depth information on statistics.

Monday, April 30, 2012

SQL Saturday #130

This weekend I attended my first SQL Saturday event here in Jacksonville.  I highly recommend that anyone who has the opportunity to attend one of these events do so.  The sessions were great, and I was able to network with people who are as enthusiastic about SQL Server as I am.

My company was actually nice enough to spring for the SQL Saturday Pre-Conference, which was a presentation from MVP Kevin Kline.  Kevin gave an excellent all-day presentation on Troubleshooting & Performance Tuning SQL Server, covering wait stats, monitoring, DMVs, and many other topics.  Unfortunately I had to leave the Pre-Con a bit early to officiate a play-in game in Orlando for the Florida State lacrosse tournament, but I still came away with several takeaways from the presentation.  


The actual SQL Saturday conference was as informative as the Pre-Con.  I followed the DBA track all day, but there were also tracks available for personal development, BI, and BI & Data Warehouse.  My morning started with a presentation by Gareth Swanepoel on columnstore indexes.  The next presentation was given by Bradley Ball, aka @sqlballs, on page and row compression.  I found it very informative because it explained page and row compression at the storage level and included a demo of enabling page and row compression.  Christina Leo was up next on SQL Server internals, followed by SQLRockstar with a presentation on monitoring in a virtual environment.  My old manager, Chad Churchwell, presented next on SQL 2012 availability groups.  I then switched to the BI track to catch a presentation from Brian Knight on developing faster with SQL 2012.  Let me just tell you, I hope I can become as relaxed and natural a presenter as Brian Knight.  I'll have to keep practicing to reach that level, so keep your eye out for one of my presentations coming soon.

All in all a great day and oh yeah the after party was fun too.