Tuesday, October 30, 2012

Customize the Management Studio Toolbar

Management Studio is a great tool with almost everything you need right at your fingertips.  Notice I said "almost".  I like to customize the toolbars to fit my day-to-day work, because being a DBA means you're busy all day long, and you don't have time to hunt through all the menus or memorize a keystroke for every special function. 

The very first customization I always make is to add the "Hide Results Pane" button to the SQL Editor toolbar.  This option is found under the Window menu and acts as a toggle button to either show or hide the results pane for the query editor.


In SQL Server 2012 Management Studio, from the menu, select Tools, Customize, and then select the Commands tab.   Select the Toolbar radio button, and just to the right of that select SQL Editor from the dropdown menu.  In the Controls section, you will see each command that corresponds to a button on the SQL Editor toolbar.



To add a button to this toolbar, click the "Add Command" button.  From the Categories section select Window, then select "Show Results Pane" from the Commands section, and then click OK.  Remember this is a toggle button, so depending on the display state of the results pane, the button will either say "Show Results Pane" or "Hide Results Pane".  When we're adding the button in the customization dialog, it always shows up as "Show Results Pane". 

Now you should see the "Show Results Pane" button on the Controls list.



Make sure that control is selected and click the "Move Down" or "Move Up" button to reposition it on your toolbar.  Once you have the button where you want it, then just click Close.  Your toolbar has now been customized.



This will save you a few keystrokes and/or mouse clicks when you're trying to find that one thing that's buried in the Management Studio menus.  And don't worry about messing anything up: if you don't like the layout of a toolbar, you can just click the Reset All button in the Customize dialog to restore the default layout.

Tuesday, October 23, 2012

Capture Index Usage Statistics for Better Analysis

Starting with SQL Server 2005, Microsoft introduced Dynamic Management Views (DMVs) to help DBAs see what's going on inside of SQL Server.  One of my favorites is sys.dm_db_index_usage_stats.  It collects counts for different types of index operations and the timestamp at which each was last performed.   All the DMVs are great tools; however, they share the same drawback.  They are memory resident, which means the statistics are reset any time the SQL Server service is restarted or the database is detached, attached, or restored.

The best way to avoid losing this valuable data is to store it in a persistent table.  I typically store data like this within the msdb database, but you can store it in any database.  Below we create a table, dba.IndexUsageStats, to hold each snapshot of the data.


USE msdb;
GO
CREATE SCHEMA dba AUTHORIZATION dbo;
GO
CREATE TABLE dba.IndexUsageStats
  (
     SQLStartTime       DATETIME NULL,
     CaptureTime        DATETIME NULL,
     DatabaseName       NVARCHAR(128) NULL,
     ObjectName         NVARCHAR(128) NULL,
     index_name         NVARCHAR(128) NULL,
     index_type_desc    NVARCHAR(60) NULL,
     index_is_unique    BIT NULL,
     database_id        SMALLINT NULL,
     object_id          INT NULL,
     index_id           INT NULL,
     user_seeks         BIGINT NULL,
     user_scans         BIGINT NULL,
     user_lookups       BIGINT NULL,
     user_updates       BIGINT NULL,
     last_user_seek     DATETIME NULL,
     last_user_scan     DATETIME NULL,
     last_user_lookup   DATETIME NULL,
     last_user_update   DATETIME NULL,
     system_seeks       BIGINT NULL,
     system_scans       BIGINT NULL,
     system_lookups     BIGINT NULL,
     system_updates     BIGINT NULL,
     last_system_seek   DATETIME NULL,
     last_system_scan   DATETIME NULL,
     last_system_lookup DATETIME NULL,
     last_system_update DATETIME NULL
  );
GO

Next we need to create the collector.  This query will pull the current data from the DMV and store it in our table.


DECLARE @CurrentStartTime  AS DATETIME,
        @PreviousStartTime AS DATETIME;

SELECT @CurrentStartTime = sqlserver_start_time
FROM   master.sys.dm_os_sys_info;

USE TestDatabase;

INSERT msdb.DBA.IndexUsageStats
SELECT @CurrentStartTime          AS SQLStartTime,
       CURRENT_TIMESTAMP          AS CaptureTime,
       Db_name()                  AS DatabaseName,
       Object_name(ius.object_id) AS ObjectName,
       i.name                     AS IndexName,
       i.type_desc                AS IndexTypeDesc,
       i.is_unique                AS IndexIsUnique,
       ius.*
FROM   sys.dm_db_index_usage_stats ius
       INNER JOIN sys.indexes i
               ON ius.object_id = i.object_id
                  AND ius.index_id = i.index_id
WHERE  ius.database_id = Db_id()
ORDER  BY ObjectName,
          ius.index_id;
GO

The script can be scheduled with SQL Agent to run on a regular basis.  I prefer a weekly schedule, but any recurring schedule is fine.  If you have more than one user database, then you'll need to run this script against each one.
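If you have many user databases, one option is to loop through them and run the collector dynamically.  This is just a sketch: it assumes the msdb.dba.IndexUsageStats table from above exists, and that you want every online user database.

```sql
-- Sketch: run the collector against every online user database.
-- Assumes msdb.dba.IndexUsageStats (created above) already exists.
DECLARE @db  SYSNAME,
        @sql NVARCHAR(MAX);

DECLARE db_cursor CURSOR FAST_FORWARD FOR
    SELECT name
    FROM   sys.databases
    WHERE  database_id > 4              -- skip the system databases
           AND state_desc = 'ONLINE';

OPEN db_cursor;
FETCH NEXT FROM db_cursor INTO @db;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- USE switches context so DB_NAME()/DB_ID() resolve to the target database.
    SET @sql = N'USE ' + QUOTENAME(@db) + N';
        DECLARE @CurrentStartTime DATETIME;
        SELECT @CurrentStartTime = sqlserver_start_time
        FROM   master.sys.dm_os_sys_info;
        INSERT msdb.dba.IndexUsageStats
        SELECT @CurrentStartTime, CURRENT_TIMESTAMP, DB_NAME(),
               OBJECT_NAME(ius.object_id), i.name, i.type_desc, i.is_unique,
               ius.*
        FROM   sys.dm_db_index_usage_stats ius
               INNER JOIN sys.indexes i
                       ON ius.object_id = i.object_id
                          AND ius.index_id = i.index_id
        WHERE  ius.database_id = DB_ID();';
    EXEC sys.sp_executesql @sql;
    FETCH NEXT FROM db_cursor INTO @db;
END

CLOSE db_cursor;
DEALLOCATE db_cursor;
```

With this version, a single SQL Agent job step covers every database instead of one step per database.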

In the script we capture a bit more than just the DMV data. For example:
  1. SQLStartTime - The time when the SQL Server service started.
  2. CaptureTime - The time when the script captured a snapshot of the DMV data.
  3. DatabaseName, ObjectName, IndexName, IndexTypeDesc, IndexIsUnique are pretty self-explanatory.
Once you have let this job execute over a few weeks, you can then revisit the data and start seeing how your indexes are being utilized over a longer period of time.  This query will aggregate the index statistics using the captured data.


USE msdb;
SELECT Min(SQLStartTime) AS SQLStartTime,
       Max(CaptureTime)  AS CaptureTime,
       ObjectName,
       index_name        AS IndexName,
       index_type_desc   AS IndexTypeDesc,
       index_id,
       Sum(user_seeks)   AS user_seeks,
       Sum(user_scans)   AS user_scans,
       Sum(user_lookups) AS user_lookups,
       Sum(user_updates) AS user_updates
FROM   DBA.IndexUsageStats
WHERE  DatabaseName = 'TestDatabase'
GROUP  BY ObjectName,
          index_name,
          index_type_desc,
          index_id
ORDER  BY ObjectName,
          index_id;
GO

These samples below were taken over a four-month period and clearly show how the indexes have been utilized.  It even points out some indexes (IX_ItemCodeID and IX_PurchasedBy) that may be nothing more than overhead and could be candidates for removal.  These two indexes have not been used for any seeks or scans, but SQL Server still needs to update them every time DML runs against the Purchases table.


As you can see, the more data you have to draw conclusions about your index usage, the better the tuning recommendations you can make for each index.

Tuesday, October 16, 2012

Which SQL Instance is Killing Your CPU?

You have 5 instances running.  How do you know which one is killing your CPU?

As you can see from Windows Task Manager, you have several instances of SQL installed and one of them is using nearly half of the CPU.  To figure this out, you'll need to know the PID (Process ID) number.  

To view the PID number in Windows Task Manager, from the menu select View, then Select Columns, and then check the box for PID.  For our example, it's Process ID 1212 that's using nearly 50% of our CPU.


So how do you correlate that PID to the SQL instance, so you can start the performance investigation? There are a couple of places where you can find this information.

The first place to look is the most recent SQL Server Error Log for each of your instances.  One of the very first entries will be the PID value.

Server process ID is 1212.

The other place is to open the SQL Server Configuration Manager, select SQL Server Services from the left hand side, then right click on an instance and click Properties.  Click the Service tab and you'll see the PID value for that instance.


Both of these methods show you which instance is the culprit.  Now you can drill into that instance to find out why it's killing your CPU.

I prefer to look for this value in the SQL Error Log because I already have Management Studio open ready to investigate the issue.  Keep in mind the PID value will change every time your SQL instance is started.
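And if you can already connect to an instance, you can check its PID from a query window.  The ProcessID server property (available in SQL Server 2008 and later, as far as I'm aware) returns it directly:

```sql
-- Returns the Windows process ID (PID) of this instance's sqlservr.exe.
SELECT SERVERPROPERTY('ProcessID') AS ProcessID;
```

Run this against each instance and compare the results to the PID shown in Task Manager.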


Wednesday, October 10, 2012

Grant Execute Permission on All Stored Procedures

Right out of the box, SQL Server makes it pretty easy to grant SELECT, INSERT, UPDATE, and DELETE to all user tables.  That's accomplished by using the built-in db_datareader (SELECT) and db_datawriter (INSERT, UPDATE, and DELETE) database roles in every user database.  Any user you add to those database roles will be granted those permissions.

But what if you want to grant EXECUTE permission on all of the user stored procedures?  Where's the built-in database role for that?  Nowhere to be found. 

Oh, you can create a database role, add users to that role, and grant EXECUTE permission to that role for each stored procedure you create.  But that's just not as efficient as the built-in roles, mainly because you must always remember to grant the permission to the role every time you create a new stored procedure.
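For reference, that per-procedure approach looks like this (the procedure name here is just a placeholder), and it has to be repeated for every new procedure:

```sql
-- Must be re-run for every stored procedure you create.
GRANT EXECUTE ON OBJECT::dbo.usp_SomeProcedure TO db_execproc;
```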

Here's an easier way to create a database role that's more like the built-in roles.


Create a database role and add a user to that role.

CREATE ROLE db_execproc;
GO
EXEC sp_addrolemember N'db_execproc', N'User1';
GO

Grant EXECUTE permission at the schema level.

GRANT EXECUTE ON SCHEMA::dbo TO db_execproc;
GO

Create a new stored procedure.

CREATE PROCEDURE dbo.p_test
AS
SELECT * FROM dbo.Table1;
GO

Login as the test user and execute the stored procedure.

EXEC dbo.p_test;
GO

This example granted EXECUTE permission to the dbo schema.  Any stored procedures that are created in the dbo schema can be executed by users who are members of the db_execproc database role. You can grant the same permission to other schemas if needed.  
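If you want to confirm the grant is in place, one way (a sketch using the catalog views) is to query sys.database_permissions for schema-scoped permissions held by the role:

```sql
-- List schema-level permissions granted to the db_execproc role.
SELECT pr.name            AS RoleName,
       pe.permission_name,
       pe.state_desc,
       s.name             AS SchemaName
FROM   sys.database_permissions pe
       INNER JOIN sys.database_principals pr
               ON pe.grantee_principal_id = pr.principal_id
       INNER JOIN sys.schemas s
               ON pe.major_id = s.schema_id
WHERE  pe.class_desc = 'SCHEMA'
       AND pr.name = 'db_execproc';
```

You should see one row per schema with permission_name of EXECUTE and state_desc of GRANT.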

This concept is not going to be for everyone and every database, but if you're already using the built-in database roles to manage permissions within a database then this will be a good option for you.

For more details on database roles and grant schema permissions, check out Books Online.



Thursday, October 4, 2012

How to Make SQL Server Listen on Multiple Ports

Have you ever needed an instance of SQL Server to listen on multiple TCP/IP port numbers but didn't know how to implement it?  Well here's a quick step-by-step guide that will work for SQL Server 2005, 2008, and 2012.

Open SQL Server Configuration Manager.

On the left hand side, expand SQL Server Network Configuration.


Select your instance name.


On the right hand side, right click on TCP/IP and select Properties.




Click the IP Addresses tab.

Scroll down to the IPALL section.


Make sure the TCP Dynamic Ports field is empty.


In the TCP Port field, enter each port number separated by commas.




Click OK.

A pop-up message will appear stating that the changes will not take effect until the service is restarted.  Just click OK to close the message.

In the SQL Server Configuration Manager, on the left hand side, select SQL Server Services.


On the right hand side, right click the SQL Server service and select Restart.




After the restart is complete, SQL Server will be listening on all the port numbers you specified.

The easiest way to verify this is to look in the SQL Server Errorlog.


You will see one entry for each port number.


Server is listening on [ 'any' <ipv4> 10000].
Server is listening on [ 'any' <ipv4> 10001].
Server is listening on [ 'any' <ipv4> 10002].
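You can also verify from a client by connecting to one of the ports explicitly and asking the connection which port it arrived on:

```sql
-- Each connection can report the TCP port it came in on.
SELECT local_tcp_port
FROM   sys.dm_exec_connections
WHERE  session_id = @@SPID;
```

For example, connecting with sqlcmd -S tcp:YourServer,10001 (the server name here is a placeholder) and running this query should return 10001.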

If you happen to pick a port number that is already in use, then you will see an error like this in the SQL Server Errorlog, and SQL Server will fail to start.

Server TCP provider failed to listen on [ 'any' <ipv4> 10000]. Tcp port is already in use.