
Looking into DMVs in SQL Server for Performance Tuning May 17, 2015

Posted by fofo in Sql Server, SQL Server 2008, SQL Server 2012, SQL Server 2014.

I have been delivering a Microsoft Certified Course on MS SQL Server 2014 recently, and I was highlighting with several examples and demos the importance of DMOs (DMVs and DMFs) in SQL Server. With these objects we can get a plethora of information about server state, monitor the health of a server instance, diagnose problems, and tune performance.

In this post I will demonstrate with hands-on demos the power of DMVs and show how they can help a DBA or developer identify expensive queries, low-usage indexes, and the fragmentation levels of indexes. These are very common problems that anyone who deals with a SQL Server database must look out for and troubleshoot. There are two types of DMVs: server-scoped, which require the VIEW SERVER STATE permission on the server, and database-scoped, which require the VIEW DATABASE STATE permission on the database.
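As a quick sketch of granting those permissions (the login and user names here are hypothetical):

-- server-scoped DMVs: grant at the server level
GRANT VIEW SERVER STATE TO [monitoring_login];
-- database-scoped DMVs: grant inside the target database
GRANT VIEW DATABASE STATE TO [monitoring_user];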

By querying these views with T-SQL statements, a lot of information is made available to the DBA. When you call a DMV, you must use at least a two-part naming convention.
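For example, a call to the sessions DMV always includes the sys schema prefix:

-- two-part naming: schema (sys) plus the DMV name
SELECT session_id, login_name, status
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;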

DMVs can be a very reliable source of information about the performance of your system, but bear in mind that each time your SQL Server is restarted, the data in the views is reset.
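You can check when the counters were last reset by looking at the instance start time:

-- the DMV counters were reset at this point in time
SELECT sqlserver_start_time FROM sys.dm_os_sys_info;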

I have installed SQL Server 2014 Enterprise edition on my machine, but you can use the SQL Server 2014/2012/2008 Express edition as well (or any other edition).

I am connecting to my local instance through Windows Authentication.

The first query I’m going to run is for identifying fragmentation levels of indexes within the database. 

The more fragmented an index is, the slower it performs. I am going to use sys.dm_db_index_physical_stats (strictly speaking a DMF, since it takes parameters) because I'm looking at the statistics of how the indexes are physically laid out within SQL Server. It returns size and fragmentation details for the data and the indexes of a specified table or view. Fragmentation of indexes and tables can drastically affect the performance of queries and maintenance, so this is a very good object to get familiar with.

Before I run my script, let me explain a few things regarding internal fragmentation.

Internal fragmentation occurs if there is unused space between records in a page. This fragmentation occurs through the process of data modifications (INSERT, UPDATE, and DELETE statements) that are made against the table and therefore, to the indexes defined on the table. This unused space causes poor cache utilization and more I/O, which ultimately leads to poor query performance.

I am connecting to a database that I am going to use for all of my demos, and I open a new query window. Type (or copy-paste) the following:

USE mydb
GO

-- 'fill factor (%)' is an advanced option, so enable advanced options first
EXEC sp_configure 'show advanced options', 1
GO
RECONFIGURE WITH OVERRIDE
GO
DECLARE @DefaultFillFactor INT
-- capture the output of sp_configure 'fill factor (%)' into a table variable
DECLARE @Fillfactor TABLE
 (
 Name VARCHAR(100)
 ,Minimum INT
 ,Maximum INT
 ,config_value INT
 ,run_value INT
 )
INSERT INTO @Fillfactor
 EXEC sp_configure 'fill factor (%)'
SELECT @DefaultFillFactor = CASE WHEN run_value = 0 THEN 100
 ELSE run_value
 END
FROM @Fillfactor 

SELECT DB_NAME() AS DataBaseName
 ,QUOTENAME(s.name) AS SchemaName
 ,QUOTENAME(o.name) AS TableName
 ,i.name AS IndexName
 ,stats.index_type_desc AS IndexType
 ,stats.page_count AS [PageCount]
 ,CASE WHEN i.fill_factor > 0 THEN i.fill_factor
 ELSE @DefaultFillFactor
 END AS [Fill Factor]
 ,stats.avg_page_space_used_in_percent
 ,CASE WHEN stats.index_level = 0 THEN 'Leaf Level'
 ELSE 'Nonleaf Level'
 END AS IndexLevel
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED') AS stats
 INNER JOIN sys.objects AS o ON o.object_id = stats.object_id
 INNER JOIN sys.schemas AS s ON s.schema_id = o.schema_id
 INNER JOIN sys.indexes AS i ON i.object_id = stats.object_id
 AND i.index_id = stats.index_id
WHERE stats.avg_page_space_used_in_percent <= 85
 AND stats.page_count >= 10
 AND stats.index_id > 0 -- skip heaps
ORDER BY stats.avg_page_space_used_in_percent ASC
 ,stats.page_count DESC

Execute the code above and see the results.

Have a look at the picture below to see what results I have got.

[Screenshot: index-frag1]

Let me explain what the code above does.

At the beginning I am just enabling the 'show advanced options' setting and reconfiguring, so that sp_configure can report the instance-wide default fill factor.

I am creating a table variable to hold the fill factor information. Then I join the DMF with the sys.objects, sys.schemas, and sys.indexes system views to get more information.

In the SELECT list, I return the database name, the schema name, the table name, the index name, the index type, the page count, the fill factor, the average page space used in percent, and the index level.

Then I apply conditions: avg_page_space_used_in_percent must be at most 85%, the index must be at least 10 pages in size, and heaps are excluded (index_id > 0).

Then I order by the average page space used in percent (this average percentage use of pages represents internal fragmentation; the higher the value, the better, and if it drops below 80% action should be taken), and then by page count descending, so the indexes with the most pages come up first. By all means use this query for identifying fragmentation issues in your databases.
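Once fragmented indexes have been identified, the usual remedies are REORGANIZE for lightly fragmented indexes and REBUILD for heavier cases. A minimal sketch (IX_myindex and dbo.mytable are hypothetical names):

-- light fragmentation: reorganize the leaf level
ALTER INDEX IX_myindex ON dbo.mytable REORGANIZE;
-- heavier fragmentation: rebuild, optionally setting a new fill factor
ALTER INDEX IX_myindex ON dbo.mytable REBUILD WITH (FILLFACTOR = 90);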

The second query uses the sys.dm_exec_query_stats DMV to identify our top 10 queries ranked by average CPU time. The output shows us the statements that are the most expensive in terms of CPU resources and overhead.

Knowing which queries these are, I can rewrite them in a way that causes much less overhead.

In a new query window, type (or copy-paste) the following.


USE master
GO

SELECT TOP 10 query_stats.query_hash AS "Query Hash",
 SUM(query_stats.total_worker_time) / SUM(query_stats.execution_count) AS "Average CPU Time",
 MIN(query_stats.statement_text) AS "SQL Statement"
FROM
 (SELECT EQS.*,
 -- the offsets are byte offsets into the Unicode batch text, hence the division by 2
 SUBSTRING(ST.text, (EQS.statement_start_offset/2) + 1,
 ((CASE statement_end_offset
 WHEN -1 THEN DATALENGTH(ST.text)
 ELSE EQS.statement_end_offset END
 - EQS.statement_start_offset)/2) + 1) AS statement_text
 FROM sys.dm_exec_query_stats AS EQS
 CROSS APPLY sys.dm_exec_sql_text(EQS.sql_handle) as ST) as query_stats
GROUP BY query_stats.query_hash
ORDER BY 2 DESC;
GO

Execute the code above against your database and see the results.

Have a look at the picture below to see my results.

[Screenshot: query-analysis]

In the next demo I am going to use the sys.dm_os_wait_stats DMV and do some calculations to show the ratio of time spent waiting for a CPU to free up and give us processing power (signal waits) versus time spent waiting for another resource, such as memory or disk, to free up (resource waits).

In a new query window, type (or copy-paste) the following.


USE master
GO
SELECT signal_wait_time_ms = SUM(signal_wait_time_ms)
 ,'%signal (cpu) waits' = CAST(100.0 * SUM(signal_wait_time_ms) / SUM(wait_time_ms) AS NUMERIC(20,2))
 ,resource_wait_time_ms = SUM(wait_time_ms - signal_wait_time_ms)
 ,'%resource waits' = CAST(100.0 * SUM(wait_time_ms - signal_wait_time_ms) / SUM(wait_time_ms) AS NUMERIC(20,2))
FROM sys.dm_os_wait_stats

Execute the code above against your database and see the results.

Have a look at the picture below to see my results.

[Screenshot: wait-stats]

On this server, about 15% of the wait time is spent waiting for processing power to become available, versus almost 85% spent waiting for resources, usually for the disks to catch up and deliver the information we need into memory, or even to find it in memory.

This indicates that I have a powerful enough server as far as CPU is concerned, but I have to look at how much memory is used, what my disk drives are doing, and how well my storage is optimized on this server.
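To dig further, a simple follow-up (a sketch; in a serious analysis you would also filter out benign and idle wait types) is to list the top waits on the instance:

SELECT TOP 10 wait_type, wait_time_ms, signal_wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;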

In this query I am going to use the sys.dm_db_index_usage_stats DMV. I want to look at indexes and see how often they are used. If I have indexes that have not been used since the server's last restart, and the server has been running for a long time, then I probably do not need those indexes.

Even if an index is never read, every time the table is modified by inserting, updating, or deleting a record, the index also has to be updated. The more unused indexes you have, the more time you spend updating and maintaining them, and when it comes to defragmenting indexes, you end up defragmenting indexes that you don't even use.


USE mydb
GO

SELECT o.name ,
 indexname = i.name ,
 i.index_id ,
 reads = user_seeks + user_scans + user_lookups ,
 writes = user_updates ,
 rows = ( SELECT SUM(p.rows)
 FROM sys.partitions p
 WHERE p.index_id = s.index_id
 AND s.object_id = p.object_id
 ) ,
 CASE WHEN s.user_updates < 1 THEN 100
 ELSE 1.00 * ( s.user_seeks + s.user_scans + s.user_lookups )
 / s.user_updates
 END AS reads_per_write ,
 'DROP INDEX ' + QUOTENAME(i.name) + ' ON ' + QUOTENAME(c.name) + '.'
 + QUOTENAME(OBJECT_NAME(s.object_id)) AS 'drop statement'
FROM sys.dm_db_index_usage_stats s
 INNER JOIN sys.indexes i ON i.index_id = s.index_id
 AND s.object_id = i.object_id
 INNER JOIN sys.objects o ON s.object_id = o.object_id
 INNER JOIN sys.schemas c ON o.schema_id = c.schema_id
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
 AND s.database_id = DB_ID()
 AND i.type_desc = 'nonclustered'
 AND i.is_primary_key = 0
 AND i.is_unique_constraint = 0
 AND ( SELECT SUM(p.rows)
 FROM sys.partitions p
 WHERE p.index_id = s.index_id
 AND s.object_id = p.object_id
 ) > 20000
ORDER BY reads

Execute the code above against your database and see the results.

Have a look at the picture below to see my results.

[Screenshot: index-usage]

The query pulls information accumulated since the server last restarted to show which indexes have not been used that often (note that an index with no activity at all since the restart has no row in this DMV and will not appear). I have placed a filter requiring at least 20,000 rows in a table, so that the results have practical value. You can see that I am looking at reads, writes, and their ratio to decide whether it is worth keeping the index.

With DMVs we can monitor performance with minimal overhead. There are many DMVs available to us at the moment, and more are added with each new version of SQL Server.

Hope it helps!!!

DevExpress DXperience Universal Edition review May 24, 2014

Posted by fofo in devexpress.

I am a Microsoft Trainer and a former ASP.NET MVP, and I use Visual Studio to build .NET, and more specifically ASP.NET, applications.

I have built several ASP.NET applications using the ASP.NET & MVC DevExpress controls and libraries.

I have been using the DevExpress DXperience Universal Edition.

This is a subscription package available from DevExpress that includes all the components and libraries. There are other packages such as Enterprise, WinForms, WPF, Silverlight and ASP.NET.

The Universal edition includes DevExtreme, their mobile development framework for Visual Studio. It also includes CodeRush, the eXpressApp Framework and the Business Intelligence Dashboard.

From the ASP.Net MVC controls & features I really liked the Application Themes.
This is an easy and handy way to personalize the look and feel of the web application.

Naturally I use the Data Grid for ASP.NET MVC. It allows me to provide my clients with a very nice user experience, very quickly.

It supports great out-of-the-box functionality like master-detail views and advanced lookups.

The ribbon control is another control I use often, and it has been a great addition for categorizing commands.

The reporting possibilities are endless, and I can create master-detail reports very easily, along with side-by-side reports.

I have been using the DevExpress MVC extensions to build various applications. Recently I built an ASP.NET MVC application that monitors the working days, hours and leave of the staff of a particular organisation, and which is also connected to an access control system (card readers).

Some of the controls I used in this application include:

1) MVCxPivotGrid

2) MVCxGridView

3) MVCxGridView, Exporter, Calendar Control

4) MVCxGridView, MVCxTreeview

These are some screenshots of my application.

[Screenshots: pic1, pic2, pic3, pic4]

I have also written about DevExpress controls in my other development blog (http://weblogs.asp.net/dotnetstories)

In this post I am demonstrating how to bind an XPODataSource control to an ASPxGridView control.

In this post I am demonstrating how to use client-side events to make the user experience of your web application much better, by avoiding unnecessary page flickering and postbacks.

In this post I am demonstrating how to bind data from an ArrayList object to the ASPxGridView control.

In this post I am demonstrating how to implement Master-Detail functionality using the ASPxGridView control.

In this post I am demonstrating how to use the ASPxGridView and its great features, which include sorting, grouping, filtering and summaries.

In conclusion, the DevExpress Universal subscription allows developers to continue creating high-performance Windows and web applications.

I have been able to develop great web experiences for my clients very quickly and efficiently.

Packt Publishing celebrates their 2000th title with an exclusive offer – We’ve got IT covered! March 26, 2014

Posted by fofo in general .net.

Known for their extensive range of pragmatic IT ebooks, Packt Publishing are celebrating their 2000th book title, 'Learning Dart', and they want their customers to celebrate too.
To mark this milestone, Packt Publishing will launch a 'Buy One Get One Free' offer across all eBooks on March 18th, for a limited period only.
'Learning Dart' was selected as a title and published by Packt earlier this year. As a project that aims to revolutionise a language as crucial as JavaScript, Dart is a great example of an emerging technology which aims to support the community and its requirement for constant improvement. The book itself explains how to develop apps using Dart and HTML5 in a model-driven and fast-paced approach, enabling developers to build more complex and high-performing web apps.
David Maclean, Managing Director, explains: 'It's not by chance that this book is our 2000th title. Our customers and community drive demand, and it is our job to ensure that whatever they're working on, Packt provides practical help and support.
At Packt we understand that sometimes our customers want to learn a new programming language pretty much from scratch, with little knowledge of similar language concepts. Other times our customers know a related language fairly well and therefore want a fast-paced primer that brings them up to a competent professional level quickly.
That's what makes Packt different: all our books are specifically commissioned by category experts, based on intensive research of the technology and the key tasks.'
Since 2004, Packt Publishing has been providing practical IT-related information that enables everyone to learn and develop their IT knowledge, from novice to expert.
Packt is one of the most prolific and fastest-growing tech book publishers in the world. Originally focused on open source software, Packt contributes back into the community by paying a royalty on relevant books directly to open source projects. These projects have received over $400,000 as part of Packt's Open Source Royalty Scheme to date.
Their books focus on practicality, recognising that readers are ultimately concerned with getting the job done. Packt's digitally-focused business model allows them to quickly publish up-to-date books in very specific areas across a range of key categories: web development, game development, big data, application development, and more. Their commitment to providing a comprehensive range of titles has seen Packt publish 1054% more titles in 2013 than in 2006.
Erol Staveley, Publisher, says: 'Recent research shows that 88% of our customers are very satisfied with the service, knowing that we offer a wide breadth of titles in a timely manner, and owing to the quality of service they receive, 94% of customers are willing to recommend Packt to friends and family. It's great that we've hit such a significant milestone, and we want to continue delivering this fantastic content to our customers.'
Here are some of the best titles across Packt's main categories, but Buy One, Get One Free will apply across all 2000 titles:
Web Development
Big Data & Cloud
Game Development
App Development

Looking into temporary tables in SQL Server December 4, 2013

Posted by fofo in Sql Server, Sql Server 2005, SQL Server 2008, SQL Server 2012.

I have been delivering a certified course on MS SQL Server 2012 recently and I was asked several questions about temporary tables: how to create them, how to manage them, when to use them, and what their limitations are.

In this post I will try to shed light on this particular issue with lots of hands-on demos.

Temporary tables and table variables make use of the system tempdb database.

I have installed SQL Server 2012 Enterprise edition on my machine, but you can use the SQL Server 2012/2008 Express edition as well.

I am connecting to my local instance through Windows Authentication. First I am going to create a new temporary table and populate it. In a new query window, execute the script below (you can copy-paste it).

USE tempdb
GO

IF OBJECT_ID('tempdb..#footballer') IS NOT NULL

DROP TABLE #footballer;

GO
CREATE TABLE #footballer
 (
 [FootballerID] INT IDENTITY NOT NULL PRIMARY KEY,
 [lastname] [varchar](15) NOT NULL,
 [firstname] [varchar](15) NOT NULL,
 [shirt_no] [tinyint] NOT NULL,
 [position_played] [varchar](30) NOT NULL
 );

GO

SET IDENTITY_INSERT [dbo].[#footballer] ON

GO

INSERT [#footballer] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (1,N'Oliver', N'Regina', 4, N'goalkeeper')
INSERT [#footballer] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (2,N'Alexander', N'Roy', 8, N'goalkeeper')
INSERT [#footballer] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (3,N'Mueller', N'Dewayne', 10, N'defender')
INSERT [#footballer] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (4,N'Buckley', N'Beth', 3, N'midfielder')
INSERT [#footballer] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (5,N'Koch', N'Jolene', 7, N'striker')
GO

SELECT * FROM #footballer

As you can see, there is a # prefix in front of the table name. This table is created in the tempdb database.

Finally I select everything from the temporary table.
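If you want to see where the table actually lives, a quick sketch is to query the tempdb catalog; note that the name is padded with a unique suffix so that different sessions can each have their own #footballer:

SELECT name FROM tempdb.sys.tables WHERE name LIKE '#footballer%';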

Now I open a new query window and try to select everything from the #footballer table (see the query below).


USE tempdb
GO

SELECT * FROM #footballer

You will not receive any results; instead, you will receive an error: Invalid object name '#footballer'.

This is a local temporary table, and it is in scope only in the current connection (session).

We can also create global temporary tables. In a new query window execute the following script.


USE tempdb
GO

IF OBJECT_ID('tempdb..##footballernew') IS NOT NULL

DROP TABLE ##footballernew;

GO
CREATE TABLE ##footballernew
(
[FootballerID] INT IDENTITY NOT NULL PRIMARY KEY,
[lastname] [varchar](15) NOT NULL,
[firstname] [varchar](15) NOT NULL,
[shirt_no] [tinyint] NOT NULL,
[position_played] [varchar](30) NOT NULL
);

GO

SET IDENTITY_INSERT [dbo].[##footballernew] ON

GO

INSERT [##footballernew] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (1,N'Oliver', N'Regina', 4, N'goalkeeper')
INSERT [##footballernew] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (2,N'Alexander', N'Roy', 8, N'goalkeeper')
INSERT [##footballernew] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (3,N'Mueller', N'Dewayne', 10, N'defender')
INSERT [##footballernew] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (4,N'Buckley', N'Beth', 3, N'midfielder')
INSERT [##footballernew] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (5,N'Koch', N'Jolene', 7, N'striker')
GO

SELECT * FROM ##footballernew

We denote a global temporary table with a ## prefix: ##footballernew.

The global temporary table is deleted when all users referencing the table disconnect.

Both global and local temporary tables should be dropped explicitly in code rather than relying on the automatic drop.

A temporary table created in a stored procedure is visible to other stored procedures executed from within the first procedure.
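A minimal sketch of this behavior (the procedure names here are hypothetical):

USE tempdb
GO

CREATE PROCEDURE dbo.InnerProc
AS
BEGIN
 -- sees the temp table created by the calling procedure
 SELECT COUNT(*) AS rows_visible FROM #scratch;
END
GO

CREATE PROCEDURE dbo.OuterProc
AS
BEGIN
 CREATE TABLE #scratch (id INT);
 INSERT INTO #scratch VALUES (1), (2);
 EXEC dbo.InnerProc; -- returns 2
END
GO

EXEC dbo.OuterProc;
GO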

Back to our global temporary table: in a new query window, type the following.


USE tempdb
GO

SELECT * FROM ##footballernew

In this case there will be no error. Global temporary tables persist across sessions (connections).

You can also add columns to temporary tables and alter the definition of existing columns.

In this script I add another column and then alter the definition of an existing column.


USE tempdb
GO

IF OBJECT_ID('tempdb..#footballer') IS NOT NULL

DROP TABLE #footballer;

GO
CREATE TABLE #footballer
 (
 [FootballerID] INT IDENTITY NOT NULL PRIMARY KEY,
 [lastname] [varchar](15) NOT NULL,
 [firstname] [varchar](15) NOT NULL,
 [shirt_no] [tinyint] NOT NULL,
 [position_played] [varchar](30) NOT NULL
 );

GO

SET IDENTITY_INSERT [dbo].[#footballer] ON

GO

INSERT [#footballer] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (1,N'Oliver', N'Regina', 4, N'goalkeeper')
INSERT [#footballer] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (2,N'Alexander', N'Roy', 8, N'goalkeeper')
INSERT [#footballer] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (3,N'Mueller', N'Dewayne', 10, N'defender')
INSERT [#footballer] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (4,N'Buckley', N'Beth', 3, N'midfielder')
INSERT [#footballer] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (5,N'Koch', N'Jolene', 7, N'striker')
GO

ALTER TABLE #footballer
ADD [is_retired] BIT NULL;
GO

ALTER TABLE #footballer
ALTER COLUMN [lastname] [nvarchar](50);
GO

You can use any data type in the column definitions of a temporary table, including user-defined data types.
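One caveat: for a temporary table, the user-defined data type must exist in tempdb itself, because that is where the table is created. A minimal sketch (dbo.ShirtNo is a hypothetical type):

USE tempdb
GO

-- the type must live in tempdb to be usable in a temp table
CREATE TYPE dbo.ShirtNo FROM tinyint NOT NULL;
GO

CREATE TABLE #squad
 (
 player_id INT,
 shirt_no dbo.ShirtNo
 );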

You can also have constraints on temporary tables. If you execute the code below, it will work perfectly fine.


USE tempdb
GO

IF OBJECT_ID('tempdb..#Movies') IS NOT NULL

DROP TABLE #Movies;

GO

CREATE TABLE #Movies
 (
 MovieID INT PRIMARY KEY ,
 MovieName NVARCHAR(50) ,
 MovieRating TINYINT
 )
GO
ALTER TABLE #Movies
 WITH CHECK
 ADD CONSTRAINT CK_Movie_Rating
CHECK (MovieRating >= 1 AND MovieRating <= 5)

But you have to be careful when creating foreign keys: FOREIGN KEY constraints are not enforced on local or global temporary tables.
Execute the script below to see what I mean; the tables are created, but the foreign key is skipped with a warning.

USE tempdb
go

CREATE TABLE #Persons
 (
 P_Id INT NOT NULL ,
 LastName VARCHAR(255) NOT NULL ,
 FirstName VARCHAR(255) ,
 Address VARCHAR(255) ,
 City VARCHAR(255) ,
 PRIMARY KEY ( P_Id )
 )

CREATE TABLE #Orders
(
O_Id int NOT NULL PRIMARY KEY,
OrderNo int NOT NULL,
P_Id int FOREIGN KEY REFERENCES #Persons(P_Id)
)

Please bear in mind that you can create temporary tables with clustered and non-clustered indexes on them.
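For example, a sketch of a temporary table with a clustered primary key and an extra non-clustered index:

CREATE TABLE #idx_demo
 (
 id INT NOT NULL PRIMARY KEY CLUSTERED,
 lastname VARCHAR(50) NOT NULL
 );

CREATE NONCLUSTERED INDEX IX_idx_demo_lastname ON #idx_demo (lastname);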

Let’s investigate the behavior of temporary tables and IDENTITY columns.

If you execute the script below, the first INSERT will fail. This is the same behavior as with regular tables: you cannot specify values for the IDENTITY column unless you set IDENTITY_INSERT ON first.

USE tempdb
GO

IF OBJECT_ID('tempdb..#Persons') IS NOT NULL

DROP TABLE #Persons;

GO

CREATE TABLE #Persons
 (
 P_Id INT PRIMARY KEY CLUSTERED IDENTITY(1,1) ,
 LastName VARCHAR(255) NOT NULL ,
 FirstName VARCHAR(255) ,
 Address VARCHAR(255) ,
 City VARCHAR(255)
 )

--this will not work

INSERT #Persons(P_Id,LastName,FirstName,Address,City) VALUES (1,'Steven','Gerrard','123 liverpool street','liverpool')
SET IDENTITY_INSERT [#Persons] ON

GO

--this will work

INSERT #Persons(P_Id,LastName,FirstName,Address,City) VALUES (1,'Steven','Gerrard','123 liverpool street','liverpool')

Also note that transactions are honored in temporary tables. If I begin an explicit transaction and perform an insert without committing, the row of data is inserted; but if a rollback is then issued, the whole operation is rolled back.

Execute the script below.


USE tempdb
GO

IF OBJECT_ID('tempdb..#Persons') IS NOT NULL

DROP TABLE #Persons;

GO

CREATE TABLE #Persons
 (
 P_Id INT PRIMARY KEY CLUSTERED IDENTITY(1,1) ,
 LastName VARCHAR(255) NOT NULL ,
 FirstName VARCHAR(255) ,
 Address VARCHAR(255) ,
 City VARCHAR(255)
 )

SET IDENTITY_INSERT [#Persons] ON

GO

--this will insert the value

BEGIN TRAN
INSERT #Persons(P_Id,LastName,FirstName,Address,City) VALUES (1,'Steven','Gerrard','123 liverpool street','liverpool')

GO

SELECT * FROM #Persons

--this will rollback the transaction

ROLLBACK TRAN

Hope it helps!!!

Looking into Temp database usage in SQL Server December 4, 2013

Posted by fofo in Sql Server, Sql Server 2005, SQL Server 2008, SQL Server 2012.

I have been delivering a certified course in MS SQL Server 2012 recently and I was asked several questions about Tempdb usage and temporary objects.

In this post I will try to shed light on this particular issue.

Temporary tables and table variables make use of the system tempdb database.

There is only one tempdb system database per SQL Server instance, so if there is heavy usage of temporary objects it can become a point of contention.

When you create an object in a database, space needs to be allocated. This also holds for the tempdb database. There are three types of pages involved in the allocation process in a tempdb data file:

  • Page Free Space (PFS)
  • Shared Global Allocation Map (SGAM)
  • Global Allocation Map (GAM).

When there is heavy page-allocation contention in tempdb, the whole allocation process can suffer and we can experience PAGELATCH waits.

In order to address the issue above, you can configure a number of tempdb data files equal to the number of cores. For example, if you have a system with fewer than 8 cores, e.g. 6, you should set up 6 data files for tempdb. If you have a system with more than 8 cores, you should start with 8 data files, and then, if the contention is still significant, add 4 more data files at a time.
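You can check how many data files tempdb currently has with a query like this sketch:

-- size is stored in 8 KB pages, so size * 8 / 1024 gives MB
SELECT name, physical_name, size * 8 / 1024 AS size_mb
FROM tempdb.sys.database_files
WHERE type_desc = 'ROWS';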

By saying cores in this post I mean logical cores, not physical cores. So if you have 8 physical cores with hyper-threading enabled, you have 16 logical cores.
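To see the number of logical cores SQL Server detects:

SELECT cpu_count AS logical_cpu_count, hyperthread_ratio
FROM sys.dm_os_sys_info;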

I will provide some demos in order to show you what tempdb contention might look like and which wait types occur.

I have installed SQL Server 2012 Enterprise edition on my machine, but you can use the SQL Server 2012/2008 Express edition as well.

I am connecting to my local instance through Windows Authentication. In the snippet below, I create a new database and then a new table with some constraints. In a new query window, execute the following (you can copy-paste it):

CREATE DATABASE mytempdbcontention
GO

USE mytempdbcontention;
GO
 CREATE TABLE dbo.footballer
 (
 [FootballerID] INT IDENTITY NOT NULL PRIMARY KEY,
 [lastname] [varchar](15) NOT NULL,
 [firstname] [varchar](15) NOT NULL,
 [shirt_no] [tinyint] NOT NULL,
 [position_played] [varchar](30) NOT NULL
 );

GO

ALTER TABLE dbo.footballer
ADD CONSTRAINT CK_Footballer_Shirt_No
CHECK (shirt_no >= 1 AND shirt_no <= 11)

GO

ALTER TABLE dbo.footballer
ADD CONSTRAINT CK_Footballer_Position
CHECK (position_played IN ('goalkeeper','defender','midfielder','striker'))
GO

Now I need to populate the table with 50,000 rows. This is the script you need to execute in order to make this happen.

You can download it here. Rename the insert-footballer.doc to insert-footballer.sql and execute the script in a new query window.

Now I need a script that will create tempdb contention. This script creates a temporary object, #footballer, populates it from the footballer table, and then selects from it. Finally it drops the temporary object.


USE mytempdbcontention;
GO

SET NOCOUNT ON;
GO

WHILE 1 = 1
 BEGIN

IF OBJECT_ID('tempdb..#footballer') IS NOT NULL

 DROP TABLE #footballer;

CREATE TABLE #footballer
 (
 [FootballerID] INT IDENTITY NOT NULL PRIMARY KEY,
 [lastname] [varchar](15) NOT NULL,
 [firstname] [varchar](15) NOT NULL,
 [shirt_no] [tinyint] NOT NULL,
 [position_played] [varchar](30) NOT NULL
 );
 INSERT INTO #footballer
 (lastname,
 firstname,
 shirt_no,
 position_played)
 SELECT TOP 4000
 lastname,
 firstname,
 shirt_no,
 position_played

 FROM dbo.footballer;

SELECT lastname
 FROM #footballer;

DROP TABLE #footballer;
 END
 GO

Now I am going to use a .cmd file to create contention on tempdb.

You can download it here. Rename temp-sql.cmd.doc to temp-sql.cmd and execute it (by double-clicking it).

This will create a lot of contention on tempdb. We need to see exactly what this contention is and which wait latches have occurred.

Execute the script below.


USE tempdb
GO

SELECT session_id, wait_duration_ms, wait_type, resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGE%LATCH_%'
 AND resource_description LIKE '2:%' -- resource_description starts with the database id; 2 is tempdb

As you can see from the picture below, I have PAGEIOLATCH_SH wait types. This wait type occurs when a task is waiting on a latch for a buffer that is in an I/O request; the latch request is in Shared mode.

[Screenshot: wait_type]

I have one tempdb data file, the default configuration.

I have 8 cores on this machine, so I will add 7 more tempdb data files of equal size (the MS recommendation).

Execute the script below.


USE [master]
GO

ALTER DATABASE [tempdb]
MODIFY FILE ( NAME = N'tempdev', SIZE = 500 MB ,
FILEGROWTH = 100 MB )
GO

ALTER DATABASE [tempdb]
ADD FILE ( NAME = N'tempdev2', FILENAME = N'd:\DATA\tempdb2.ndf' ,
SIZE = 500 MB , FILEGROWTH = 100 MB )
GO

ALTER DATABASE [tempdb]
ADD FILE ( NAME = N'tempdev3', FILENAME = N'd:\DATA\tempdb3.ndf' ,
SIZE = 500 MB , FILEGROWTH = 100 MB )
GO

ALTER DATABASE [tempdb]
ADD FILE ( NAME = N'tempdev4', FILENAME = N'd:\DATA\tempdb4.ndf' ,
SIZE = 500 MB , FILEGROWTH = 100 MB )
GO

ALTER DATABASE [tempdb]
ADD FILE ( NAME = N'tempdev5', FILENAME = N'd:\DATA\tempdb5.ndf' ,
SIZE = 500 MB , FILEGROWTH = 100 MB )
GO

ALTER DATABASE [tempdb]
ADD FILE ( NAME = N'tempdev6', FILENAME = N'd:\DATA\tempdb6.ndf' ,
SIZE = 500 MB , FILEGROWTH = 100 MB )
GO

ALTER DATABASE [tempdb]
ADD FILE ( NAME = N'tempdev7', FILENAME = N'd:\DATA\tempdb7.ndf' ,
SIZE = 500 MB , FILEGROWTH = 100 MB )
GO

Now run the query again and observe the results.


USE tempdb
GO

SELECT session_id, wait_duration_ms, wait_type, resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGE%LATCH_%'
 AND resource_description LIKE '2:%'

You will see that there are no more PAGELATCH waits, and hence no tempdb contention.

Now stop the temp-sql.cmd script so that the tempdb contention stops.

Hope it helps!!!
