
Looking into DMVs in SQL Server for Performance Tuning May 17, 2015

Posted by fofo in Sql Server, SQL Server 2008, SQL Server 2012, SQL Server 2014.

I have been delivering a Microsoft Certified Course on MS SQL Server 2014 recently, and I was highlighting with several examples and demos the importance of DMOs (DMVs and DMFs) in SQL Server. Through these objects we can get a plethora of information about server state, monitor the health of a server instance, diagnose problems and tune performance.

In this post I will demonstrate with hands-on demos the power of DMVs and show how they can help a DBA or developer identify expensive queries, low-usage indexes, and check the fragmentation levels of indexes. These are very common problems that anyone who deals with a SQL Server database must look out for and troubleshoot. There are two types of DMVs: server scoped, which require the VIEW SERVER STATE permission on the server, and database scoped, which require the VIEW DATABASE STATE permission on the database.

By querying these views with T-SQL statements, a lot of information is made available to the DBA. When you make a call to a DMV, you must use at least a two-part naming convention.
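For example, because all DMOs live in the sys schema, the schema prefix is required:

```sql
-- Two-part name (schema.object) works:
SELECT session_id, connect_time, client_net_address
FROM sys.dm_exec_connections;

-- This fails with "Invalid object name":
-- SELECT * FROM dm_exec_connections;
```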

DMVs can be a very reliable source of information about the performance of your system, but each time your SQL Server is restarted, the data in the views is reset.
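A quick way to see how far back the accumulated DMV data goes is to check when the instance last started (the sqlserver_start_time column is available from SQL Server 2008 onwards):

```sql
-- The DMV counters have been accumulating since this moment:
SELECT sqlserver_start_time
FROM sys.dm_os_sys_info;
```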

I have installed SQL Server 2014 Enterprise edition on my machine, but you can use the SQL Server 2014/2012/2008 Express edition as well (or any other edition).

I am connecting to my local instance through Windows Authentication.

The first query I’m going to run is for identifying fragmentation levels of indexes within the database. 

The more fragmented an index is, the slower it performs. I am going to use the DMV dm_db_index_physical_stats because I am looking at the statistics of how the indexes are physically laid out within SQL Server. The view returns the size and fragmentation details for the data and the indexes of a specified table or view. Fragmentation of indexes and tables can drastically affect the performance of queries and maintenance, so this is a very good DMV to get familiar with.

Before I run my script let me explain a few things regarding Internal fragmentation.

Internal fragmentation occurs if there is unused space between records in a page. This fragmentation occurs through the process of data modifications (INSERT, UPDATE, and DELETE statements) that are made against the table and therefore, to the indexes defined on the table. This unused space causes poor cache utilization and more I/O, which ultimately leads to poor query performance.

I am connecting to a database that I am going to use for all of my demos, and I open a new query window. Type (or copy-paste) the following:

USE mydb
GO

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO

DECLARE @DefaultFillFactor INT
DECLARE @Fillfactor TABLE
 (
 Name VARCHAR(100)
 ,Minimum INT
 ,Maximum INT
 ,config_value INT
 ,run_value INT
 )
INSERT INTO @Fillfactor
 EXEC sp_configure 'fill factor (%)'
SELECT @DefaultFillFactor = CASE WHEN run_value = 0 THEN 100
 ELSE run_value
 END
FROM @Fillfactor

SELECT DB_NAME() AS DatabaseName
 ,QUOTENAME(s.name) AS SchemaName
 ,QUOTENAME(o.name) AS TableName
 ,i.name AS IndexName
 ,stats.index_type_desc AS IndexType
 ,stats.page_count AS [PageCount]
 ,CASE WHEN i.fill_factor > 0 THEN i.fill_factor
 ELSE @DefaultFillFactor
 END AS [Fill Factor]
 ,stats.avg_page_space_used_in_percent
 ,CASE WHEN stats.index_level = 0 THEN 'Leaf Level'
 ELSE 'Nonleaf Level'
 END AS IndexLevel
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED')
 AS stats
INNER JOIN sys.objects AS o ON o.object_id = stats.object_id
INNER JOIN sys.schemas AS s ON s.schema_id = o.schema_id
INNER JOIN sys.indexes AS i ON i.object_id = stats.object_id
 AND i.index_id = stats.index_id
WHERE stats.avg_page_space_used_in_percent <= 85
 AND stats.page_count >= 10
 AND stats.index_id > 0
ORDER BY stats.avg_page_space_used_in_percent ASC
 ,stats.page_count DESC

Execute the code above and see the results.

Have a look at the picture below to see what results I have got.


Let me explain what the code above does.

At the beginning I am just turning on some advanced options and reconfiguring.

I am creating a table variable to hold the fill factor information. I am going to join the DMV with the sys.objects, sys.schemas, and sys.indexes system views just to get more information.

In the Select list, I get information like the database name, the schema name, the table name, the index name, the index type, the page count, the fillfactor, the average page space used in percent and the index level.

Then I am applying some conditions: avg_page_space_used_in_percent less than or equal to 85%, and indexes that are at least 10 pages in size.

Then I am ordering them by the average page space used in percent (this is the average percentage use of pages, which represents internal fragmentation; the higher the value, the better. If this value is lower than 80%, then action should be taken), and then by the page count descending, so the indexes with the most pages come up first. By all means do use this query for identifying fragmentation issues in your databases.
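Once the fragmented indexes are identified, the usual remediation is to reorganize lightly fragmented indexes and rebuild heavily fragmented ones. A minimal sketch, where the table and index names are placeholders you would replace with names from your own results:

```sql
-- Lightly fragmented: reorganize (lightweight, always online)
ALTER INDEX IX_MyIndex ON dbo.MyTable REORGANIZE;

-- Heavily fragmented: rebuild the index pages from scratch
ALTER INDEX IX_MyIndex ON dbo.MyTable REBUILD WITH (FILLFACTOR = 90);
```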

The second query uses the dm_exec_query_stats DMV to identify our top 10 queries ranked by average CPU time. The query output will show us the statements that are the most expensive in terms of CPU resources and overhead.

Knowing which queries these are, I can rewrite them in a way that causes much less overhead.

In a new query window, type (copy-paste the following).

USE master

SELECT TOP 10 query_stats.query_hash AS "Query Hash",
 SUM(query_stats.total_worker_time) / SUM(query_stats.execution_count) AS "Average CPU Time",
 MIN(query_stats.statement_text) AS "SQL Statement"
FROM
 (SELECT EQS.*,
 SUBSTRING(ST.text, (EQS.statement_start_offset/2) + 1,
 ((CASE statement_end_offset
 WHEN -1 THEN DATALENGTH(ST.text)
 ELSE EQS.statement_end_offset END
 - EQS.statement_start_offset)/2) + 1) AS statement_text
 FROM sys.dm_exec_query_stats AS EQS
 CROSS APPLY sys.dm_exec_sql_text(EQS.sql_handle) AS ST) AS query_stats
GROUP BY query_stats.query_hash
ORDER BY 2 DESC

Execute the code above against your database and see the results.
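If you also want to see the cached execution plan of each expensive statement, you can CROSS APPLY sys.dm_exec_query_plan in the same way; a sketch:

```sql
-- Top 10 statements by average CPU time, with their cached plans
SELECT TOP 10
 EQS.total_worker_time / EQS.execution_count AS avg_cpu_time,
 ST.text AS batch_text,
 QP.query_plan -- XML; click it in SSMS to open the graphical plan
FROM sys.dm_exec_query_stats AS EQS
CROSS APPLY sys.dm_exec_sql_text(EQS.sql_handle) AS ST
CROSS APPLY sys.dm_exec_query_plan(EQS.plan_handle) AS QP
ORDER BY avg_cpu_time DESC;
```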

Have a look at the picture below to see my results.


In the next demo I am going to use the dm_os_wait_stats DMV and do some calculations to show the ratio of time spent waiting for a CPU to free up and give us some processing power, versus time spent waiting for another resource, such as memory or the disks, to become available.

In a new query window, type (copy-paste the following).

USE master

SELECT signal_wait_time_ms = SUM(signal_wait_time_ms)
 ,'%signal (cpu) waits' = CAST(100.0 * SUM(signal_wait_time_ms) / SUM(wait_time_ms) AS NUMERIC(20,2))
 ,resource_wait_time_ms = SUM(wait_time_ms - signal_wait_time_ms)
 ,'%resource waits' = CAST(100.0 * SUM(wait_time_ms - signal_wait_time_ms) / SUM(wait_time_ms) AS NUMERIC(20,2))
FROM sys.dm_os_wait_stats

Execute the code above against your database and see the results.

Have a look at the picture below to see my results.


On this server, about 15% of the wait time is spent waiting for processing power to become available, versus almost 85% spent waiting on resources, usually for the disks to catch up and deliver the data we need to load into memory, or even to locate it in memory.

This indicates that I have a powerful enough server as far as CPU is concerned, but I have to look at how much memory is used, what my disk drives are doing, and how well my storage is optimized on this server.
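To drill into which resources the server is actually waiting on, you can list the top wait types from the same DMV; a sketch (the WHERE clause filters out a few benign idle waits):

```sql
SELECT TOP 10
 wait_type,
 wait_time_ms,
 waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'BROKER_TO_FLUSH',
 'SQLTRACE_BUFFER_FLUSH', 'LAZYWRITER_SLEEP')
ORDER BY wait_time_ms DESC;
```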

In this query I am going to use the dm_db_index_usage_stats DMV. I want to look at indexes and see how often they are used. If I have indexes that have not been used since the server's restart, and the server has been running for a long time, then I probably do not need those indexes.

Even when an index is never read, every INSERT, UPDATE or DELETE against the table still has to update that index. The more unused indexes you have, the more time you spend updating and maintaining them, and when it comes to defragmenting, you end up defragmenting indexes you do not even use.

USE mydb

SELECT o.name ,
 indexname = i.name ,
 i.index_id ,
 reads = user_seeks + user_scans + user_lookups ,
 writes = user_updates ,
 rows = ( SELECT SUM(p.rows)
 FROM sys.partitions p
 WHERE p.index_id = s.index_id
 AND s.object_id = p.object_id
 ) ,
 CASE WHEN s.user_updates < 1 THEN 100
 ELSE 1.00 * ( s.user_seeks + s.user_scans + s.user_lookups )
 / s.user_updates
 END AS reads_per_write ,
 'DROP INDEX ' + QUOTENAME(i.name) + ' ON ' + QUOTENAME(c.name) + '.'
 + QUOTENAME(OBJECT_NAME(s.object_id)) AS 'drop statement'
FROM sys.dm_db_index_usage_stats s
 INNER JOIN sys.indexes i ON i.index_id = s.index_id
 AND s.object_id = i.object_id
 INNER JOIN sys.objects o ON s.object_id = o.object_id
 INNER JOIN sys.schemas c ON o.schema_id = c.schema_id
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
 AND s.database_id = DB_ID()
 AND i.type_desc = 'nonclustered'
 AND i.is_primary_key = 0
 AND i.is_unique_constraint = 0
 AND ( SELECT SUM(p.rows)
 FROM sys.partitions p
 WHERE p.index_id = s.index_id
 AND s.object_id = p.object_id
 ) > 20000
ORDER BY reads

Execute the code above against your database and see the results.

Have a look at the picture below to see my results.


The query pulls information accumulated since the server restarted to show which indexes have not been used often. I have placed a filter requiring at least 20,000 rows in a table for the query results to have practical value. You can see that I am looking at reads, writes and their ratio to decide whether an index is worth keeping.

With DMVs we can monitor performance with minimal overhead. There are many DMVs available to us at the moment, and more are added with each new version of SQL Server.

Hope it helps!!!


Looking into the contents of Plan Cache in SQL Server August 9, 2012

Posted by fofo in Sql Server, Sql Server 2005, SQL Server 2008, SQL Server 2012.

In this post I will be demonstrating with hands-on examples how to look into the contents of Plan Cache.  I will be explaining what the Plan Cache (Procedure Cache) is, the process of query optimization and the generation of execution plans in detail. Then I will use some DMVs to create a custom view that will return the contents of the plan cache.

I will be using SQL Server 2012 Developer edition in this post, but feel free to use the SQL Server 2005 or 2008 versions. I will be using the AdventureWorks2012 database. You can download it from the Codeplex website.

Let me talk a bit about query optimisation and T-SQL. T-SQL is declarative by nature. When we write T-SQL statements in a query window in SSMS and execute them, we just say to SQL Server "I want these results back". We do not provide any details on how the results will be returned. If there was nothing else between our T-SQL code and the SQL Server database engine, we simply would not get any results back. Luckily for us there is a very important component, the Query Optimizer, that generates an imperative plan. By imperative I mean detailed. This plan, called the execution plan, is what is actually executed by the relational engine. The query optimiser will not look for a perfect plan. It is a cost-based optimiser that must find an efficient plan. When deciding upon the execution plan, the optimiser takes into consideration the type of operations, statistics (which must always be up to date), indexes, hardware resources (number of CPUs, available memory), SQL Server edition, number of active concurrent connections, and query hints.

The following steps take place when SQL Server receives a query:

  • Parser
  • Algebriser
  • Optimiser
  • Execution plan generation – caching
  • Query execution = execution plan execution

The Parser checks an incoming query, validating it for correct syntax. The query is terminated if a syntax error is detected. On validating a query, the parser generates the parse tree.

The Algebriser takes the parse tree and resolves all the names of the different objects and data types. The output of all these verifications and resolutions is a binary set of data called a query processor tree.

Based on the query processor tree, the Optimiser finds an efficient way to execute the query.

Plans are then cached and reused, so a new plan does not have to be created every time a query is executed. That improves performance significantly, since generating execution plans is costly in both time and resources. So, the first time a query is executed, the plan is created, compiled and placed in the procedure (plan) cache. On subsequent requests for that query, the cached plan is reused.
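You can watch plan reuse happening through the usecounts column of sys.dm_exec_cached_plans: run the same ad hoc query twice and the counter for its plan goes up instead of a new plan being compiled. A sketch:

```sql
-- usecounts increments on every reuse of a cached plan
SELECT ECP.usecounts, ECP.cacheobjtype, ECP.objtype, ST.text
FROM sys.dm_exec_cached_plans AS ECP
CROSS APPLY sys.dm_exec_sql_text(ECP.plan_handle) AS ST
ORDER BY ECP.usecounts DESC;
```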

Connect to an instance of SQL Server. Type (copy-paste) the code below and execute it. Make sure you include the actual execution plan (CTRL+M).

USE [AdventureWorks2012]


SELECT  Sales.SalesOrderDetail.OrderQty ,
Sales.SalesOrderDetail.SpecialOfferID ,
Sales.SalesOrderDetail.UnitPrice ,
Sales.SalesOrderDetail.UnitPriceDiscount ,
Sales.SalesOrderHeader.Status
FROM    Sales.SalesOrderDetail
INNER JOIN Sales.SalesOrderHeader ON Sales.SalesOrderDetail.SalesOrderID = Sales.SalesOrderHeader.SalesOrderID
ORDER BY Sales.SalesOrderDetail.UnitPrice DESC

Have a look at the picture below to see the results returned by the statement above, along with the execution plan.

I will not be looking into the various components of this execution plan and what they mean. You can hover over each of these components and get information like IO cost and CPU cost.

Most SQL Server developers and administrators I know strongly believe that DMVs are the best feature added to SQL Server since 2005, and I feel exactly the same way. Please have a look at this post of mine where I talk about new DMVs and DMFs in SQL Server 2008 SP1 and SQL Server 2012.

Wouldn't it be nice to be able to look into the contents of the cache? We can do that by using various DMVs.

In a new query window, type the following. Do not execute anything yet.

USE AdventureWorks2012
GO

DBCC FREEPROCCACHE
GO

SELECT * FROM sys.dm_exec_cached_plans

SELECT * FROM Person.Person

SELECT * FROM sys.dm_exec_cached_plans

select * from sys.dm_exec_query_plan(0x060005003A0E56049029B9FF0100000001000000000000000000000000000000000000000000000000000000);

select * from sys.dm_exec_sql_text(0x060005003A0E56049029B9FF0100000001000000000000000000000000000000000000000000000000000000)

Execute lines 1-5 first. They clear the contents of the plan cache with the DBCC FREEPROCCACHE command.

Then (in line 7) I use the sys.dm_exec_cached_plans dynamic view that returns a row for each query plan that is cached by SQL Server.

Please have a look in this link for more information about the sys.dm_exec_cached_plans view.

At this time I do not expect to see anything in the plan cache. Then, in line 9, I execute a simple SELECT statement.

An execution plan is generated. I run the sys.dm_exec_cached_plans view again (line 11) and get back a row for this cached plan. From it I find the relevant plan handle.

In my case it is "0x060005003A0E56049029B9FF0100000001000000000000000000000000000000000000000000000000000000".

In your case it will be different so make sure you use yours instead.
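Instead of copying the handle by hand from the grid, you can also look it up by searching the cached SQL text; a sketch (the LIKE pattern is whatever identifies your query):

```sql
SELECT TOP 1 ECP.plan_handle
FROM sys.dm_exec_cached_plans AS ECP
CROSS APPLY sys.dm_exec_sql_text(ECP.plan_handle) AS ST
WHERE ST.text LIKE '%Person.Person%'
 AND ST.text NOT LIKE '%dm_exec_cached_plans%'; -- exclude this very query
```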

Now I want to find the actual execution plan. We can use the sys.dm_exec_query_plan dynamic view. This view returns the plan in XML format for the batch specified by the plan handle.

Please have a look at this link for more information about the sys.dm_exec_query_plan view. I need to provide the handle as an input parameter.

I execute line 13 and get the query plan in XML format for the SQL query "SELECT * FROM Person.Person".

Then I want to find the SQL statement that created that plan. I will use the sys.dm_exec_sql_text dynamic view.

Please have a look at this link for more information about the sys.dm_exec_sql_text dynamic view. Again, I need to provide the handle as an input parameter.

I execute the statement in line 15. The results I get look something like this:

dbid    objectid    number    encrypted    text
5              NULL          NULL                0              SELECT * FROM Person.Person

So I managed to get the handle of the SQL query and then get the execution plan for this query and finally the text for the query itself.

Well, that seems like a lot of work, so I will demonstrate how to combine all these views into a custom helper function and view.

In the same query window, type (copy-paste) and execute the following.

CREATE FUNCTION ExecPlanAndSqlText (@planhandle VARBINARY(MAX))
RETURNS TABLE
AS
RETURN
 (SELECT EST.text, ECP.objtype, ECP.cacheobjtype, ECP.usecounts, ECP.size_in_bytes, EQP.query_plan
 FROM sys.dm_exec_sql_text(@planhandle) AS EST
 CROSS JOIN sys.dm_exec_query_plan(@planhandle) AS EQP
 JOIN sys.dm_exec_cached_plans AS ECP ON ECP.plan_handle = @planhandle);
GO

SELECT * FROM ExecPlanAndSqlText(0x060005003A0E56049029B9FF0100000001000000000000000000000000000000000000000000000000000000);

I first create a helper function that combines all these dynamic management views and their relevant columns, and then I pass the plan handle as an input parameter.

When I run the last SELECT statement I get the following results.

As you can see now I have all the information I need in one function.

In the same query window, type (copy-paste) the code below and execute it.

CREATE VIEW LookIntoPlanCache
AS
SELECT EPAST.*
FROM sys.dm_exec_cached_plans AS ECP
CROSS APPLY ExecPlanAndSqlText(ECP.plan_handle) AS EPAST
GO

SELECT * FROM LookIntoPlanCache

I create a view (LookIntoPlanCache) that collects all the plan handles and passes them to our user defined function. Finally I execute a simple SELECT to see the contents of the view.

When I do that  I get the following results.

Finally I have the SQL query text, the execution plan and the number of times the plan was used, all in one view.

Make sure you use this view to look into the contents of your plan cache. It works on any instance where these DMVs are available.
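A typical use is to filter the view for the most heavily reused plans, for example:

```sql
-- The most frequently reused plans in the cache
SELECT TOP 20 usecounts, objtype, size_in_bytes, text
FROM LookIntoPlanCache
ORDER BY usecounts DESC;
```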

Hope it helps!!!!

Looking into new and enhanced DMVs and DMFs in SQL Server 2012 July 19, 2012

Posted by fofo in Sql Server, SQL Server 2012.

In this post I would like to demonstrate with a few examples how to use the new DMOs (Dynamic Management Objects – DMVs & DMFs) that are available to us with the RTM version of SQL Server 2012. Some of them were present in SQL Server 2008 R2 SP1, but I do not know how many people have noticed their existence.

DMOs were first introduced in SQL Server 2005 and were immediately very popular with developers and administrators. They give us great insight into what is going on inside SQL Server. It would otherwise be very difficult to get all this information so easily. They carry great information on sessions, plan cache, memory, threads, database connections, waits, latches, etc. We can get all this information without placing a heavy burden on the server.

All of them start with dm_ in the name, and they are all stored in the sys schema.
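You can list every DMO your instance exposes directly from the catalog; views show up with type V, functions with IF or TF:

```sql
-- All Dynamic Management Objects on this instance
SELECT name, type_desc
FROM sys.system_objects
WHERE name LIKE 'dm[_]%'
ORDER BY name;
```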

You can use any edition of SQL Server 2012 you want for this example; the Express edition will do just fine. Download the Express edition here.

Connect to the instance of the SQL Server.

The first DMF I will be looking into is sys.dm_os_volume_stats. In SQL Server 2012, this function returns important information about the physical disk volume on which the files of the specified database are stored, such as: the available free space on the volume, the mount point at which the volume is rooted, whether the volume supports sparse files, whether it is currently marked read-only, and whether it is currently compressed.

As input parameters we need to provide the Database Id and the File Id values.

In a new query window, type and execute the following code. Note the results.

USE AdventureWorks2012;

SELECT name,total_bytes,available_bytes,volume_mount_point,file_system_type,is_compressed
FROM sys.database_files df
CROSS APPLY sys.dm_os_volume_stats(db_id(),df.file_id)

The next DMV is sys.dm_os_windows_info. It returns information about the Windows operating system. Type and execute the code below. Note the results.

SELECT * FROM sys.dm_os_windows_info;

The next one I want to talk about is sys.dm_server_services, which returns the list of the SQL Server Windows services. Type and execute the code below. Note the results.

SELECT * FROM sys.dm_server_services

The next one I want to talk about is sys.dm_server_memory_dumps, which returns information about each memory dump file generated by SQL Server.

Type and execute the code below.Note the results.

SELECT * FROM sys.dm_server_memory_dumps;

The next one I want to talk about is sys.dm_server_registry, which returns configuration and installation information stored in the registry for the current SQL Server instance.

Type and execute the code below.Note the results.

SELECT * FROM sys.dm_server_registry;

The next one I want to talk about is sys.dm_exec_query_stats. This is not a new DMV, but it has changed in this version of SQL Server: four new columns have been added.

  • total_rows – the total number of rows returned by the query
  • last_rows – the number of rows returned by the last execution of the query
  • min_rows – the minimum number of rows the query returned over the number of times the plan has executed since it was last compiled
  • max_rows – the maximum number of rows the query returned over the number of times the plan has executed since it was last compiled

I will be using AdventureWorks2012 for this example. I run the first query below. I note that it returns 655 rows.

Then in the second query, I am returning old and new columns from the sys.dm_exec_query_stats for the query where the total number of rows is 655.

Type and execute the code below.

USE AdventureWorks2012;

SELECT        Sales.SalesOrderHeader.OrderDate, Sales.SalesOrderHeader.DueDate, Sales.SalesOrderHeader.ShipDate, Sales.SalesOrderHeader.SubTotal,
Sales.SalesOrderDetail.UnitPrice, Sales.SalesOrderDetail.OrderQty
FROM          Sales.SalesOrderHeader INNER JOIN
Sales.SalesOrderDetail ON Sales.SalesOrderHeader.SalesOrderID = Sales.SalesOrderDetail.SalesOrderID
WHERE Sales.SalesOrderHeader.ShipDate > '2008-07-29'
ORDER by UnitPrice

SELECT execution_count,total_logical_reads,total_logical_writes,total_physical_reads,total_worker_time,total_rows,last_rows,min_rows,max_rows
FROM sys.dm_exec_query_stats
WHERE total_rows = 655
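The new min_rows and max_rows columns are also handy for spotting parameter-sensitive queries: a cached plan whose row counts swing wildly between executions is a candidate for parameter sniffing problems. A sketch:

```sql
-- Plans whose returned row counts vary the most between executions
SELECT TOP 10
 execution_count,
 min_rows,
 max_rows,
 max_rows - min_rows AS row_count_spread
FROM sys.dm_exec_query_stats
WHERE execution_count > 1
ORDER BY row_count_spread DESC;
```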

There are many great posts on the web about DMVs, but one series I really like is Glenn Berry's A DMV a Day.

Hope it helps!!!!