
Temporal Tables in SQL Server 2016 November 26, 2016

Posted by fofo in Sql Server, SQL Server 2016.

This is the second post in a series of posts about the new features of SQL Server 2016. You can find the first post in the series (on JSON support in SQL Server 2016) here.

SQL Server 2016 was officially released back in June 2016.

There are many new features in SQL Server 2016 and in my opinion the most important are:

  • Always Encrypted
  • Query Store
  • Row Level Security
  • Temporal Tables
  • Stretch Database
  • Dynamic Data Masking
  • PolyBase
  • JSON Support

I am going to discuss the “Temporal Tables” in SQL Server 2016 in this post.

I have installed SQL Server 2016 Enterprise edition on my machine, but you can use the SQL Server 2016 Developer edition as well.

You can download the SQL Server 2016 Developer edition by joining Visual Studio Dev Essentials, which is free and simple. After joining, you will be able to download SQL Server 2016 Developer edition to your machine.

Temporal tables are another brilliant feature that makes working with historical data in SQL Server much easier. Temporal is a database feature that was introduced in the ANSI SQL:2011 standard and is now supported in SQL Server 2016.

A system-versioned temporal table is a new type of user table in SQL Server 2016, designed to keep a full history of data changes and allow easy point in time analysis.

Please do not confuse temporal tables with temporary tables.

We can use them to look at historical data in an easy way by having the ability to reconstruct the data at any point in time, or even to compare data from two periods without having this history in our current table.

We can use them to understand business trends over time. This is very important for people in the Sales or Marketing Departments of a big business.

Sometimes developers or DBAs accidentally delete data. Temporal tables can help us recover from accidental changes. We can also debug errors and understand data behavior.

Sometimes, when we delete or update data by accident, the only way to recover it is to restore a valid, recently taken backup.

That takes time; temporal tables give us the ability to recover data in a much easier and more efficient way.

Another important thing to note is that we can either create a new temporal table or alter the schema of an existing table to make it temporal. In order to create a system-versioned table we need:

  1. A table that has a primary key (that goes without saying)
  2. The table option SYSTEM_VERSIONING set to ON
  3. Two non-nullable DATETIME2 columns representing the start and the end of the row’s validity period
  4. A designation of those period columns (PERIOD FOR SYSTEM_TIME)
  5. A linked history table, which holds the past states of the modified rows

Also bear in mind that all DML statements (INSERT, UPDATE, DELETE, SELECT) remain fully valid against a temporal table.

Let’s move on and demonstrate the temporal tables functionality with a hands-on demo.

Type (copy paste) the following query in a new query window and execute it.


CREATE DATABASE TheTemporalDB
GO

USE TheTemporalDB
GO
CREATE TABLE Project 
(
 ProjectID int NOT NULL IDENTITY(1,1) PRIMARY KEY, 
 ProjectName varchar(50) NOT NULL, 
 ManagerID int NULL, 
 ValidFrom datetime2 GENERATED ALWAYS AS ROW START NOT NULL, 
 ValidTo datetime2 GENERATED ALWAYS AS ROW END NOT NULL, 
 PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo) 
)
WITH (SYSTEM_VERSIONING = ON)
GO

In the code above, I create a sample database and a temporal table.

If you look at the picture below you will notice the temporal table and the system-generated history table that is linked with it.

 

[Image: temporal-table-2]

You can see the two tables have exactly the same columns. The ProjectID column in the history table does not have a primary key constraint; the history table schema mirrors the main table schema except for constraints.

Type (copy paste) the following query in a new query window and execute it.


SELECT
 object_id,
 name,
 temporal_type,
 temporal_type_desc,
 history_table_id
FROM sys.tables
WHERE object_id = OBJECT_ID('dbo.Project', 'U')
 OR object_id = (
 SELECT history_table_id
 FROM sys.tables
 WHERE object_id = OBJECT_ID('dbo.Project', 'U')
 )
GO

Executing the code above returns information about both the main table and the temporal history table.

Have a look at the picture below

 

[Image: temporal-table-3]

 

If you want to delete those tables, you must first turn off system versioning and then drop the tables (current and history) individually.

Type (copy paste) the following query in a new query window and execute it.


ALTER TABLE Project SET (SYSTEM_VERSIONING = OFF)
DROP TABLE Project
DROP TABLE [dbo].[MSSQL_TemporalHistoryFor_565577053]

Now let’s recreate our temporal table, this time specifying the name of the history table.

Type (copy paste) the following query in a new query window and execute it.

USE TheTemporalDB
GO
CREATE TABLE Project
(
 ProjectID int NOT NULL IDENTITY(1,1) PRIMARY KEY, 
 ProjectName varchar(50) NOT NULL, 
 ManagerID int NULL, 
 ValidFrom datetime2 GENERATED ALWAYS AS ROW START NOT NULL, 
 ValidTo datetime2 GENERATED ALWAYS AS ROW END NOT NULL, 
 PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo) 
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.ProjectHistory))

The name of the history table is now ProjectHistory.

Now let’s create a simple table, populate it, and then convert it into a temporal table.

Type (copy paste) the following query in a new query window and execute it.


USE TheTemporalDB
GO

ALTER TABLE Project SET (SYSTEM_VERSIONING = OFF)
DROP TABLE Project
DROP TABLE [dbo].[ProjectHistory]

CREATE TABLE Project(
 ProjectId int PRIMARY KEY,
 ProjectName varchar(20) NOT NULL,
 DepartmentName varchar(50) NOT NULL
)
GO

INSERT INTO Project (ProjectId, ProjectName, DepartmentName) VALUES
 (1, 'NopCommerce E-shop', 'E-commerce SubUnit'),
 (2, 'Elastic Search', 'NoSQL DBs SubUnit'),
 (3, 'Adwords Campaign', 'Digital Marketing SubUnit')
GO

I drop the tables I created earlier. I create a simple table and insert some sample values in it.

Now I will convert the table to a temporal table.

Type (copy paste) the following query in a new query window and execute it.


USE TheTemporalDB
GO

ALTER TABLE Project ADD
 StartDate datetime2 GENERATED ALWAYS AS ROW START NOT NULL DEFAULT CAST('1900-01-01 00:00:00.0000000' AS datetime2),
 EndDate datetime2 GENERATED ALWAYS AS ROW END NOT NULL DEFAULT CAST('9999-12-31 23:59:59.9999999' AS datetime2),
 PERIOD FOR SYSTEM_TIME (StartDate, EndDate)
GO

ALTER TABLE Project 
 SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.ProjectHistory))
GO

Then I query the two tables. The history table naturally has no data in it yet.

Then I update (3 times) the first row of the table where ProjectID is 1.

Then I query the two tables again. This time the history table has data in it: all the changes are stored there.

Type (copy paste) the following query in a new query window and execute it.


USE TheTemporalDB
GO

SELECT * FROM [dbo].[Project]
SELECT * FROM [dbo].[ProjectHistory]

UPDATE Project SET ProjectName = 'Custom ASP.Net MVC' WHERE ProjectId = 1


UPDATE Project SET ProjectName = 'ERP Integration' WHERE ProjectId = 1


UPDATE Project SET ProjectName = 'Theme Conversion' WHERE ProjectId = 1


SELECT * FROM [dbo].[Project]
SELECT * FROM [dbo].[ProjectHistory]

Have a look at the picture below.

[Image: temporal-table-4]

You can see that the history table holds all the changes (all 3 updates) to the ProjectName column.
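The real payoff of system versioning is the FOR SYSTEM_TIME clause, which lets us query the table as of a point in time without touching the history table directly. Here is a minimal sketch – the timestamp below is a placeholder, so substitute a UTC time that falls between your updates:

USE TheTemporalDB
GO

-- reconstruct the row as it was at a given point in time (times are stored in UTC)
SELECT ProjectId, ProjectName, StartDate, EndDate
FROM Project
FOR SYSTEM_TIME AS OF '2016-11-26 10:00:00'
WHERE ProjectId = 1

-- return every version of the row, current and historical
SELECT ProjectId, ProjectName, StartDate, EndDate
FROM Project
FOR SYSTEM_TIME ALL
WHERE ProjectId = 1
ORDER BY StartDate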

Finally, one should note that when it comes to triggers and temporal tables, INSTEAD OF triggers are not supported, while AFTER triggers are supported only on the current table, not on the history table.

Tables with foreign key constraints that use cascading updates or deletes are also not supported.

FILESTREAM tables cannot become temporal tables, and In-Memory OLTP tables come with extra restrictions (the history table must be disk-based).

 

Hope it helps!!!

Built-in support for JSON in SQL Server 2016 November 20, 2016

Posted by fofo in Sql Server, SQL Server 2016.

This is going to be the first post in a series of posts about the new features of SQL Server 2016.

SQL Server 2016 was officially released back in June 2016.

There are many new features in SQL Server 2016 and in my opinion the most important are:

  • Always Encrypted
  • Query Store
  • Row Level Security
  • Temporal Tables
  • Stretch Database
  • Dynamic Data Masking
  • PolyBase
  • JSON Support

I am going to discuss the JSON support in SQL Server 2016 in this post.

I have installed SQL Server 2016 Enterprise edition on my machine, but you can use the SQL Server 2016 Developer edition as well. You can download SQL Server 2016 Developer edition for free.

You can download the SQL Server 2016 Developer edition by joining Visual Studio Dev Essentials, which is free and simple. After joining, you will be able to download SQL Server 2016 Developer edition to your machine.

JSON is an acronym for JavaScript Object Notation, and it’s a lightweight format for exchanging data between different source types, similar to how XML is used.

The transferred text is easy to read, and the objects consist of attribute-value pairs.

Let’s look at a simple JSON example:

{"doctors":[
{"firstName":"John", "lastName":"Smith"},
{"firstName":"Anna", "lastName":"Bernard"},
{"firstName":"Peter", "lastName":"Feller"}
]}

Some SQL developers assume that the SQL Server team has provided a built-in JSON data type. That is not true.

JSON text is typically stored in varchar or nvarchar columns, and is indexed as plain text.
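Since there is no dedicated type, a common pattern is to guard a plain NVARCHAR column with a CHECK constraint that uses the ISJSON function (covered later in this post). A small sketch, with hypothetical table and column names:

-- hypothetical table; the CHECK constraint rejects strings that are not valid JSON
CREATE TABLE dbo.ProductCatalog
(
 ProductCatalogID int IDENTITY(1,1) PRIMARY KEY,
 ProductInfo nvarchar(max) NOT NULL,
 CONSTRAINT CK_ProductCatalog_ProductInfo_IsJSON CHECK (ISJSON(ProductInfo) > 0)
);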

SQL Server 2016 provides built-in support for importing, exporting, parsing, and querying JSON documents or fragments.

Hence, we can format SQL Server data or query output as JSON by adding the FOR JSON clause to a select statement.

On the other hand, we can convert JSON text into rows and columns that can feed a table in our database. This is done using the OPENJSON function.

Let us start with our first example.

In all my examples I will be using the AdventureWorks 2016 database. You can download it here.

We will convert the query results against our database to JSON format using the FOR JSON clause.

We will create a SELECT statement from a table and then transform it to JSON.

Open a new query window. Type (copy-paste the following) in that query window. Execute the query.

USE AdventureWorks2016

SELECT Production.Product.Name ,
 Production.Product.StandardCost ,
 Production.Product.ListPrice ,
 Production.Product.Size
FROM Production.Product
WHERE ListPrice > 1800
FOR JSON AUTO

 

I copy and paste the output of the query into an online JSON parser – http://jsonviewer.stack.hu/

This is the output after formatting the JSON output.

[
{"Name": "Road-150 Red, 62", "StandardCost": 2171.2942, "ListPrice": 3578.2700, "Size": "62"},
{"Name": "Road-150 Red, 44", "StandardCost": 2171.2942, "ListPrice": 3578.2700, "Size": "44"},
{"Name": "Road-150 Red, 48", "StandardCost": 2171.2942, "ListPrice": 3578.2700, "Size": "48"},
{"Name": "Road-150 Red, 52", "StandardCost": 2171.2942, "ListPrice": 3578.2700, "Size": "52"},
{"Name": "Road-150 Red, 56", "StandardCost": 2171.2942, "ListPrice": 3578.2700, "Size": "56"},
{"Name": "Mountain-100 Silver, 38", "StandardCost": 1912.1544, "ListPrice": 3399.9900, "Size": "38"},
{"Name": "Mountain-100 Silver, 42", "StandardCost": 1912.1544, "ListPrice": 3399.9900, "Size": "42"},
{"Name": "Mountain-100 Silver, 44", "StandardCost": 1912.1544, "ListPrice": 3399.9900, "Size": "44"},
{"Name": "Mountain-100 Silver, 48", "StandardCost": 1912.1544, "ListPrice": 3399.9900, "Size": "48"},
{"Name": "Mountain-100 Black, 38", "StandardCost": 1898.0944, "ListPrice": 3374.9900, "Size": "38"},
{"Name": "Mountain-100 Black, 42", "StandardCost": 1898.0944, "ListPrice": 3374.9900, "Size": "42"},
{"Name": "Mountain-100 Black, 44", "StandardCost": 1898.0944, "ListPrice": 3374.9900, "Size": "44"},
{"Name": "Mountain-100 Black, 48", "StandardCost": 1898.0944, "ListPrice": 3374.9900, "Size": "48"},
{"Name": "Mountain-200 Silver, 38", "StandardCost": 1265.6195, "ListPrice": 2319.9900, "Size": "38"},
{"Name": "Mountain-200 Silver, 42", "StandardCost": 1265.6195, "ListPrice": 2319.9900, "Size": "42"},
{"Name": "Mountain-200 Silver, 46", "StandardCost": 1265.6195, "ListPrice": 2319.9900, "Size": "46"},
{"Name": "Mountain-200 Black, 38", "StandardCost": 1251.9813, "ListPrice": 2294.9900, "Size": "38"},
{"Name": "Mountain-200 Black, 42", "StandardCost": 1251.9813, "ListPrice": 2294.9900, "Size": "42"},
{"Name": "Mountain-200 Black, 46", "StandardCost": 1251.9813, "ListPrice": 2294.9900, "Size": "46"},
{"Name": "Road-250 Red, 44", "StandardCost": 1518.7864, "ListPrice": 2443.3500, "Size": "44"},
{"Name": "Road-250 Red, 48", "StandardCost": 1518.7864, "ListPrice": 2443.3500, "Size": "48"},
{"Name": "Road-250 Red, 52", "StandardCost": 1518.7864, "ListPrice": 2443.3500, "Size": "52"},
{"Name": "Road-250 Red, 58", "StandardCost": 1554.9479, "ListPrice": 2443.3500, "Size": "58"},
{"Name": "Road-250 Black, 44", "StandardCost": 1554.9479, "ListPrice": 2443.3500, "Size": "44"},
{"Name": "Road-250 Black, 48", "StandardCost": 1554.9479, "ListPrice": 2443.3500, "Size": "48"},
{"Name": "Road-250 Black, 52", "StandardCost": 1554.9479, "ListPrice": 2443.3500, "Size": "52"},
{"Name": "Road-250 Black, 58", "StandardCost": 1554.9479, "ListPrice": 2443.3500, "Size": "58"},
{"Name": "Touring-1000 Yellow, 46", "StandardCost": 1481.9379, "ListPrice": 2384.0700, "Size": "46"},
{"Name": "Touring-1000 Yellow, 50", "StandardCost": 1481.9379, "ListPrice": 2384.0700, "Size": "50"},
{"Name": "Touring-1000 Yellow, 54", "StandardCost": 1481.9379, "ListPrice": 2384.0700, "Size": "54"},
{"Name": "Touring-1000 Yellow, 60", "StandardCost": 1481.9379, "ListPrice": 2384.0700, "Size": "60"},
{"Name": "Touring-1000 Blue, 46", "StandardCost": 1481.9379, "ListPrice": 2384.0700, "Size": "46"},
{"Name": "Touring-1000 Blue, 50", "StandardCost": 1481.9379, "ListPrice": 2384.0700, "Size": "50"},
{"Name": "Touring-1000 Blue, 54", "StandardCost": 1481.9379, "ListPrice": 2384.0700, "Size": "54"},
{"Name": "Touring-1000 Blue, 60", "StandardCost": 1481.9379, "ListPrice": 2384.0700, "Size": "60"}
]

In AUTO mode, the structure of the SELECT statement determines the format of the JSON output. If we want to explicitly specify the format of the output, we can use the PATH mode instead of AUTO.

Open a new query window. Type (copy-paste the following) in that query window. Execute the query.

USE AdventureWorks2016

--AUTO
SELECT Production.Product.Name ,
 Production.Product.StandardCost ,
 Production.Product.ListPrice ,
 Production.Product.Size ,
 Production.ProductModel.Name AS modelname
FROM Production.Product
 INNER JOIN Production.ProductModel ON Production.Product.ProductModelID = Production.ProductModel.ProductModelID
WHERE ListPrice > 1800
 AND ProductModel.Name IN ( 'Road-250' )
 FOR JSON AUTO

--PATH

SELECT Production.Product.Name ,
 Production.Product.StandardCost ,
 Production.Product.ListPrice ,
 Production.Product.Size ,
 Production.ProductModel.Name AS modelname
FROM Production.Product
 INNER JOIN Production.ProductModel ON Production.Product.ProductModelID = Production.ProductModel.ProductModelID
WHERE ListPrice > 922
 AND ProductModel.Name IN ( 'Road-250' )
 FOR JSON PATH

GO
 

This is the output of the AUTO statement.

[
{"Name": "Road-250 Red, 44", "StandardCost": 1518.7864, "ListPrice": 2443.3500, "Size": "44", "Production.ProductModel": [{"modelname": "Road-250"}]},
{"Name": "Road-250 Red, 48", "StandardCost": 1518.7864, "ListPrice": 2443.3500, "Size": "48", "Production.ProductModel": [{"modelname": "Road-250"}]},
{"Name": "Road-250 Red, 52", "StandardCost": 1518.7864, "ListPrice": 2443.3500, "Size": "52", "Production.ProductModel": [{"modelname": "Road-250"}]},
{"Name": "Road-250 Red, 58", "StandardCost": 1554.9479, "ListPrice": 2443.3500, "Size": "58", "Production.ProductModel": [{"modelname": "Road-250"}]},
{"Name": "Road-250 Black, 44", "StandardCost": 1554.9479, "ListPrice": 2443.3500, "Size": "44", "Production.ProductModel": [{"modelname": "Road-250"}]},
{"Name": "Road-250 Black, 48", "StandardCost": 1554.9479, "ListPrice": 2443.3500, "Size": "48", "Production.ProductModel": [{"modelname": "Road-250"}]},
{"Name": "Road-250 Black, 52", "StandardCost": 1554.9479, "ListPrice": 2443.3500, "Size": "52", "Production.ProductModel": [{"modelname": "Road-250"}]},
{"Name": "Road-250 Black, 58", "StandardCost": 1554.9479, "ListPrice": 2443.3500, "Size": "58", "Production.ProductModel": [{"modelname": "Road-250"}]}
]

This is the output of the PATH statement.

[
{"Name": "Road-250 Red, 44", "StandardCost": 1518.7864, "ListPrice": 2443.3500, "Size": "44", "modelname": "Road-250"},
{"Name": "Road-250 Red, 48", "StandardCost": 1518.7864, "ListPrice": 2443.3500, "Size": "48", "modelname": "Road-250"},
{"Name": "Road-250 Red, 52", "StandardCost": 1518.7864, "ListPrice": 2443.3500, "Size": "52", "modelname": "Road-250"},
{"Name": "Road-250 Red, 58", "StandardCost": 1554.9479, "ListPrice": 2443.3500, "Size": "58", "modelname": "Road-250"},
{"Name": "Road-250 Black, 44", "StandardCost": 1554.9479, "ListPrice": 2443.3500, "Size": "44", "modelname": "Road-250"},
{"Name": "Road-250 Black, 48", "StandardCost": 1554.9479, "ListPrice": 2443.3500, "Size": "48", "modelname": "Road-250"},
{"Name": "Road-250 Black, 52", "StandardCost": 1554.9479, "ListPrice": 2443.3500, "Size": "52", "modelname": "Road-250"},
{"Name": "Road-250 Black, 58", "StandardCost": 1554.9479, "ListPrice": 2443.3500, "Size": "58", "modelname": "Road-250"}
]

Now let’s see how we can use the ROOT option.

Open a new query window. Type (copy-paste the following) in that query window. Execute the query.

USE AdventureWorks2016

SELECT Production.Product.Name ,
 Production.Product.StandardCost ,
 Production.Product.ListPrice ,
 Production.Product.Size ,
 Production.ProductModel.Name AS modelname
FROM Production.Product
 INNER JOIN Production.ProductModel ON Production.Product.ProductModelID = Production.ProductModel.ProductModelID
WHERE ListPrice > 1800
 AND ProductModel.Name IN ( 'Road-250' )
 FOR JSON PATH, ROOT('Production.Product')

This is the output of the JSON result after formatting.

{
"Production.Product": [
{"Name": "Road-250 Red, 44", "StandardCost": 1518.7864, "ListPrice": 2443.3500, "Size": "44", "modelname": "Road-250"},
{"Name": "Road-250 Red, 48", "StandardCost": 1518.7864, "ListPrice": 2443.3500, "Size": "48", "modelname": "Road-250"},
{"Name": "Road-250 Red, 52", "StandardCost": 1518.7864, "ListPrice": 2443.3500, "Size": "52", "modelname": "Road-250"},
{"Name": "Road-250 Red, 58", "StandardCost": 1554.9479, "ListPrice": 2443.3500, "Size": "58", "modelname": "Road-250"},
{"Name": "Road-250 Black, 44", "StandardCost": 1554.9479, "ListPrice": 2443.3500, "Size": "44", "modelname": "Road-250"},
{"Name": "Road-250 Black, 48", "StandardCost": 1554.9479, "ListPrice": 2443.3500, "Size": "48", "modelname": "Road-250"},
{"Name": "Road-250 Black, 52", "StandardCost": 1554.9479, "ListPrice": 2443.3500, "Size": "52", "modelname": "Road-250"},
{"Name": "Road-250 Black, 58", "StandardCost": 1554.9479, "ListPrice": 2443.3500, "Size": "58", "modelname": "Road-250"}
]
}

We wanted to specify a root element for our data format, so we used the ROOT option.

The nodes returned by our query are children of the root node Production.Product.
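As a side note, FOR JSON PATH also accepts the WITHOUT_ARRAY_WRAPPER option, which removes the enclosing square brackets when you want a single JSON object instead of an array. A quick sketch:

USE AdventureWorks2016

SELECT TOP 1 Production.Product.Name ,
 Production.Product.ListPrice
FROM Production.Product
WHERE ListPrice > 1800
FOR JSON PATH, WITHOUT_ARRAY_WRAPPER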

We can also convert JSON data to rows and columns.

We can use the OPENJSON function.  There are two ways of calling OPENJSON. We can use it with a default schema. In this case, the function returns a table with one row for each property of the object or for each element of the array.

Open a new query window. Type (copy-paste the following) in that query window. Execute the query.


DECLARE @json NVARCHAR(MAX) = N'
{
 "Name":"Road-155,62",
 "StandardCost":2123,
 "ListPrice":3123,
 "Size":62

}';
 
SELECT * FROM OPENJSON(@json);

The output of this statement is:

key            value          type
Name           Road-155,62    1
StandardCost   2123           2
ListPrice      3123           2
Size           62             2

 

As you can see, the OPENJSON function returns three columns:

key: Property name within the object or index of the element within the array.
value: Property value within the object or value of the array element specified by the index.
type: Value’s data type, represented numerically, as described in the following table:

Numeric value   JSON data type
0               null
1               string
2               number
3               true or false (boolean)
4               array
5               object

We can also use the OPENJSON function with an explicit schema, and in this case the function returns a table with the schema that we define in the WITH clause.

In the WITH clause, we specify the output columns, their types, and the paths of the source properties for each column.

Open a new query window. Type (copy-paste the following) in that query window. Execute the query.


DECLARE @JSON NVARCHAR(MAX);
SET @JSON = N'{"Products":
{"Production.Product":
 [
 {
 "Name": "Road-250 Red, 44",
 "StandardCost": 1518.7864,
 "ListPrice": 2443.3500,
 "Size": "44",
 "ProductModel":
 {
 "modelname": "Road-250"
 }
 },
 {
 "Name": "Road-250 Red, 48",
 "StandardCost": 1518.7864,
 "ListPrice": 2443.3500,
 "Size": "48",
 "ProductModel":
 {
 "modelname": "Road-250"
 }
 }
 ]
 }
}'

SELECT *
FROM OPENJSON(@JSON, '$.Products."Production.Product"')
WITH ( Name nvarchar(50) '$.Name',
 StandardCost decimal(18,2) '$.StandardCost',
 ListPrice decimal(18,2) '$.ListPrice',
 Size int '$.Size',
 modelname nvarchar(100) '$.ProductModel.modelname'
 )

 

We can see that Name, StandardCost, ListPrice, and Size come directly under the Production.Product node (note that, because this key contains a dot, it has to be quoted inside the JSON path), while modelname has another parent, ProductModel, so its path must also mention the ProductModel node.

Finally, let’s have a look at the ISJSON built-in function. This function validates JSON text, so we can find out whether a string contains valid JSON.

In this final example I create a TABLE variable in which a column stores string values holding data formatted as JSON. I populate the TABLE variable with two rows.

By executing the query below we see that it returns 2 rows, because our JSON format is valid.

This is a good function for validating strings that are supposed to be in JSON format before exchanging them with client applications.

 
DECLARE @temp TABLE
 (
 id INT ,
 value NVARCHAR(MAX)
 )

INSERT INTO @temp
VALUES ( 1, '{
 "Name": "Road-150 Red, 62",
 "StandardCost": 2171.2942,
 "ListPrice": 3578.2700,
 "Size": "62"
 }' ), ( 2, '{
 "Name": "Road-150 Red, 44",
 "StandardCost": 2171.2942,
 "ListPrice": 3578.2700,
 "Size": "44"
 }' )

SELECT * FROM @temp
WHERE ISJSON(value) > 0
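Besides ISJSON, SQL Server 2016 also ships the JSON_VALUE function, which extracts a scalar from JSON text, and JSON_QUERY, which extracts an object or an array. A minimal, self-contained sketch:

DECLARE @json NVARCHAR(MAX) = N'{
 "Name": "Road-150 Red, 62",
 "StandardCost": 2171.2942,
 "Sizes": ["44", "48", "62"]
}';

SELECT JSON_VALUE(@json, '$.Name') AS Name, -- a scalar property
 JSON_QUERY(@json, '$.Sizes') AS Sizes -- a JSON fragment (an array)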

JSON support in SQL Server is great and a much anticipated feature.

Hope it helps!!!

Looking into DMVs in SQL Server for Performance Tuning May 17, 2015

Posted by fofo in Sql Server, SQL Server 2008, SQL Server 2012, SQL Server 2014.

I have been delivering a Microsoft Certified Course on MS SQL Server 2014 recently, and I was highlighting with several examples and demos the importance of DMOs (DMVs and DMFs) in SQL Server. With these objects we can get a plethora of information regarding server state, monitor the health of a server instance, diagnose problems, and tune performance.

In this post I will demonstrate with hands-on demos the power of DMVs and show how they can help a DBA or developer identify expensive queries and low-usage indexes, and check the fragmentation levels of indexes. These are very common problems that anyone who deals with a SQL Server database must look out for and troubleshoot. There are two types of DMVs: server-scoped, which require the VIEW SERVER STATE permission on the server, and database-scoped, which require the VIEW DATABASE STATE permission on the database.

By querying these views with T-SQL statements, a lot of information is made available to the DBA. When you make a call to a DMV, you must use a minimum of a two-part naming convention.
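For example, sys.dm_os_wait_stats is always referenced with its schema name:

-- two-part name: the sys schema plus the view name
SELECT TOP 5 wait_type, wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC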

DMVs can be a very reliable source of information about the performance of your system, but each time your SQL Server is restarted, the data in the views is reset.

I have installed SQL Server 2014 Enterprise edition on my machine but you can use the SQL Server 2014/2012/2008 Express edition as well (or any other edition).

I am connecting to my local instance through Windows Authentication.

The first query I’m going to run is for identifying fragmentation levels of indexes within the database. 

The more fragmented an index is, the slower it performs. I am going to use the DMV dm_db_index_physical_stats because I'm looking at statistics on how the indexes are physically laid out within SQL Server. The view returns the size and fragmentation details for the data and the indexes of a specified table or view. Fragmentation of indexes and tables can drastically affect the performance of queries and maintenance, so this is a very good DMV to get familiar with.

Before I run my script let me explain a few things regarding Internal fragmentation.

Internal fragmentation occurs if there is unused space between records in a page. This fragmentation occurs through the process of data modifications (INSERT, UPDATE, and DELETE statements) that are made against the table and therefore, to the indexes defined on the table. This unused space causes poor cache utilization and more I/O, which ultimately leads to poor query performance.

I am connecting to a database that I am going to use for all of my demos and I open a new query window. Type (copy-paste the following)

USE mydb
GO

EXEC sp_configure 'show advanced options', 1
GO
RECONFIGURE WITH OVERRIDE
GO
DECLARE @DefaultFillFactor INT
DECLARE @Fillfactor TABLE
 (
 Name VARCHAR(100)
 ,Minimum INT
 ,Maximum INT
 ,config_value INT
 ,run_value INT
 )
INSERT INTO @Fillfactor
 EXEC sp_configure 'fill factor (%)'
SELECT @DefaultFillFactor = CASE WHEN run_value = 0 THEN 100
 ELSE run_value
 END
FROM @Fillfactor 

SELECT DB_NAME() AS DataBaseName
 ,QUOTENAME(s.name) AS SchemaName
 ,QUOTENAME(o.name) AS TableName
 ,i.name AS IndexName
 ,stats.Index_type_desc AS IndexType
 ,stats.page_count AS [PageCount]
 ,CASE WHEN i.fill_factor > 0 THEN i.fill_factor
 ELSE @DefaultFillFactor
 END AS [Fill Factor]
 ,stats.avg_page_space_used_in_percent
 ,CASE WHEN stats.index_level = 0 THEN 'Leaf Level'
 ELSE 'Nonleaf Level'
 END AS IndexLevel
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED')
 AS stats
 ,sys.objects AS o
 ,sys.schemas AS s
 ,sys.indexes AS i
WHERE o.OBJECT_ID = stats.OBJECT_ID
 AND s.schema_id = o.schema_id
 AND i.OBJECT_ID = stats.OBJECT_ID
 AND i.index_id = stats.index_id
 AND stats.avg_page_space_used_in_percent <= 85
 AND stats.page_count >= 10
 AND stats.index_id > 0
ORDER BY stats.avg_page_space_used_in_percent ASC
 ,stats.page_count DESC

Execute the code above and see the results.

Have a look at the picture below to see what results I have got.

[Image: index-frag1]

Let me explain what the code above does.

At the beginning I am just turning on some advanced options and reconfiguring, so that I can read the server's default fill factor setting.

I am creating a table variable to hold that information. I'm going to join the DMV with the sys.objects, sys.schemas, and sys.indexes system views just to get more information.

In the Select list, I get information like the database name, the schema name, the table name, the index name, the index type, the page count, the fillfactor, the average page space used in percent and the index level.

Then I apply some filter conditions: avg_page_space_used_in_percent at most 85%, at least 10 pages in size for an index, and index_id greater than 0 to exclude heaps.

Then I’m ordering by the average page space used in percent (this is the average percentage of page space actually used, which reflects internal fragmentation; the higher the value, the better, and if it is lower than 80%, action should be taken) and then by the page count descending, so the indexes with the most pages come up first. By all means do use this query for identifying fragmentation issues in your databases.
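Once the query has flagged a fragmented index, the usual remedies are ALTER INDEX REORGANIZE (lighter weight) or REBUILD (more thorough). A sketch with hypothetical index and table names – substitute the ones the query above reports:

-- hypothetical names; use the SchemaName/TableName/IndexName columns from the query above
ALTER INDEX IX_MyIndex ON dbo.MyTable REORGANIZE;

-- or, for heavier fragmentation, rebuild with an explicit fill factor
ALTER INDEX IX_MyIndex ON dbo.MyTable REBUILD WITH (FILLFACTOR = 90);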

The second query uses the dm_exec_query_stats DMV to identify our top 10 queries ranked by average CPU time. The query output will show us the statements that are the most expensive in terms of resources and CPU overhead.

Knowing which are these queries, I can rewrite them in a way that will cause much less overhead.

In a new query window, type (copy-paste the following).


USE master
GO

SELECT TOP 10 query_stats.query_hash AS "Query Hash",
 SUM(query_stats.total_worker_time) / SUM(query_stats.execution_count) AS "Average CPU Time",
 MIN(query_stats.statement_text) AS "SQL Statement"
FROM
 (SELECT EQS.*,
 SUBSTRING(ST.text, (EQS.statement_start_offset/2) + 1,
 ((CASE statement_end_offset
 WHEN -1 THEN DATALENGTH(ST.text)
 ELSE EQS.statement_end_offset END
 - EQS.statement_start_offset)/2) + 1) AS statement_text
 FROM sys.dm_exec_query_stats AS EQS
 CROSS APPLY sys.dm_exec_sql_text(EQS.sql_handle) as ST) as query_stats
GROUP BY query_stats.query_hash
ORDER BY 2 DESC;
GO

Execute the  code above against your database and see the results.

Have a look at the picture below to see my results.

[Image: query-analysis]

In the next demo I am going to use the dm_os_wait_stats DMV and do some calculations to show the ratio of time spent waiting for a CPU to free up and give us processing power, versus time spent waiting for another resource, such as memory or the disks, to free up.

In a new query window, type (copy-paste the following).


USE master
GO
SELECT signal_wait_time_ms = SUM(signal_wait_time_ms)
 ,'%signal (cpu) waits' = CAST(100.0 * SUM(signal_wait_time_ms) / SUM(wait_time_ms) AS NUMERIC(20,2))
 ,resource_wait_time_ms = SUM(wait_time_ms - signal_wait_time_ms)
 ,'%resource waits' = CAST(100.0 * SUM(wait_time_ms - signal_wait_time_ms) / SUM(wait_time_ms) AS NUMERIC(20,2))
FROM sys.dm_os_wait_stats

Execute the  code above against your database and see the results.

Have a look at the picture below to see my results.

[Image: wait-stats]

On this server, about 15% of the time is spent waiting for processing power to become available, versus almost 85% of the time spent waiting for a resource, usually the disks, to catch up and give us the information we need to load into memory, or even to find it in memory.

This indicates that I have a powerful enough server as far as CPU is concerned, but I have to look at how much memory is used, what my disk drives are doing, and how well my storage is optimized on this server.

In this query I am going to use the dm_db_index_usage_stats DMV. I want to look at indexes and see how often they are used. If I have indexes that have not been used since the server's restart, and the server has been running for a long time, then I probably do not need those indexes.

Even when indexes are not being used, every time a table row is inserted, updated, or deleted, the indexes on that table are also updated. The more unused indexes you have, the more time you spend updating and maintaining them. And when it comes to defragmenting indexes, you will be defragmenting indexes that you don't even use.


USE mydb
GO

SELECT o.name ,
 indexname = i.name ,
 i.index_id ,
 reads = user_seeks + user_scans + user_lookups ,
 writes = user_updates ,
 rows = ( SELECT SUM(p.rows)
 FROM sys.partitions p
 WHERE p.index_id = s.index_id
 AND s.object_id = p.object_id
 ) ,
 CASE WHEN s.user_updates < 1 THEN 100
 ELSE 1.00 * ( s.user_seeks + s.user_scans + s.user_lookups )
 / s.user_updates
 END AS reads_per_write ,
 'DROP INDEX ' + QUOTENAME(i.name) + ' ON ' + QUOTENAME(c.name) + '.'
 + QUOTENAME(OBJECT_NAME(s.object_id)) AS 'drop statement'
FROM sys.dm_db_index_usage_stats s
 INNER JOIN sys.indexes i ON i.index_id = s.index_id
 AND s.object_id = i.object_id
 INNER JOIN sys.objects o ON s.object_id = o.object_id
 INNER JOIN sys.schemas c ON o.schema_id = c.schema_id
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
 AND s.database_id = DB_ID()
 AND i.type_desc = 'nonclustered'
 AND i.is_primary_key = 0
 AND i.is_unique_constraint = 0
 AND ( SELECT SUM(p.rows)
 FROM sys.partitions p
 WHERE p.index_id = s.index_id
 AND s.object_id = p.object_id
 ) > 20000
ORDER BY reads

Execute the  code above against your database and see the results.

Have a look at the picture below to see my results.

[Image: index-usage]

The query pulls usage information accumulated since the server restarted to show which indexes have not been used that often. I have placed a filter that requires at least 20,000 rows in a table for the query results to have practical value. You can see that I am looking at reads, writes, and their ratio to decide whether it is worth keeping the index.

With DMVs we can monitor performance with minimal overhead. There are many DMVs available to us at the moment, and more are added with each new version of SQL Server.

Hope it helps!!!

Looking into temporary tables in SQL Server December 4, 2013

Posted by fofo in Sql Server, Sql Server 2005, SQL Server 2008, SQL Server 2012.

I have been delivering a certified course in MS SQL Server 2012 recently and I was asked several questions about temporary tables: how to create them, how to manage them, when to use them, and what their limitations are.

In this post I will try to shed light on this particular issue with lots of hands-on demos.

Temporary tables and table variables make use of the system tempdb database.

I have installed SQL Server 2012 Enterprise edition on my machine but you can use the SQL Server 2012/2008 Express edition as well.

I am connecting to my local instance through Windows Authentication.

First I am going to create a new temporary table and populate it. In a new query window, type (you can copy-paste) and execute the script below.

USE tempdb
GO

IF OBJECT_ID('tempdb..#footballer') IS NOT NULL

DROP TABLE #footballer;

GO
CREATE TABLE #footballer
 (
 [FootballerID] INT IDENTITY NOT NULL PRIMARY KEY,
 [lastname] [varchar](15) NOT NULL,
 [firstname] [varchar](15) NOT NULL,
 [shirt_no] [tinyint] NOT NULL,
 [position_played] [varchar](30) NOT NULL,

);

GO

SET IDENTITY_INSERT [dbo].[#footballer] ON

GO

INSERT [#footballer] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (1,N'Oliver', N'Regina', 4, N'goalkeeper')
INSERT [#footballer] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (2,N'Alexander', N'Roy', 8, N'goalkeeper')
INSERT [#footballer] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (3,N'Mueller', N'Dewayne', 10, N'defender')
INSERT [#footballer] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (4,N'Buckley', N'Beth', 3, N'midfielder')
INSERT [#footballer] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (5,N'Koch', N'Jolene', 7, N'striker')
GO

SELECT * FROM #footballer

As you can see, there is a # prefix in front of the table name. This table is created in tempdb.

Finally I select everything from the temporary table.
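You can verify where the table actually lives: it appears in tempdb's catalog under a name padded with a unique suffix. For example:

SELECT name, create_date
FROM tempdb.sys.tables
WHERE name LIKE '#footballer%';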

Now, if I open a new query window and try to select everything from the #footballer table (see the query below):


USE tempdb
GO

SELECT * FROM #footballer

You will not receive any results. You will receive an error – Invalid object name ‘#footballer’.

This is a local temporary table and it is in scope only in the current connection-session.

We can also create global temporary tables. In a new query window execute the following script.


USE tempdb
GO

IF OBJECT_ID('tempdb..##footballernew') IS NOT NULL

DROP TABLE ##footballernew;

GO
CREATE TABLE ##footballernew
(
[FootballerID] INT IDENTITY NOT NULL PRIMARY KEY,
[lastname] [varchar](15) NOT NULL,
[firstname] [varchar](15) NOT NULL,
[shirt_no] [tinyint] NOT NULL,
[position_played] [varchar](30) NOT NULL,

);

GO

SET IDENTITY_INSERT [dbo].[##footballernew] ON

GO

INSERT [##footballernew] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (1,N'Oliver', N'Regina', 4, N'goalkeeper')
INSERT [##footballernew] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (2,N'Alexander', N'Roy', 8, N'goalkeeper')
INSERT [##footballernew] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (3,N'Mueller', N'Dewayne', 10, N'defender')
INSERT [##footballernew] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (4,N'Buckley', N'Beth', 3, N'midfielder')
INSERT [##footballernew] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (5,N'Koch', N'Jolene', 7, N'striker')
GO

SELECT * FROM ##footballernew

We denote a global temporary table with a ## prefix – ##footballernew.

The global temporary table is deleted when all users referencing the table disconnect.

Both global and local temporary tables should be deleted in code rather than depending on automatic drop.

A temporary table created in a stored procedure is visible to other stored procedures executed from within the first procedure.
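A quick sketch of this visibility rule, with hypothetical procedure names:

USE tempdb
GO

CREATE PROCEDURE dbo.InnerProc
AS
 -- resolves against the temp table created by the calling procedure
 SELECT COUNT(*) AS cnt FROM #footballer;
GO

CREATE PROCEDURE dbo.OuterProc
AS
 CREATE TABLE #footballer (FootballerID INT);
 INSERT INTO #footballer VALUES (1);
 EXEC dbo.InnerProc; -- sees the #footballer created above
GO

EXEC dbo.OuterProc;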

In a new query window type the following.


USE tempdb
GO

SELECT * FROM ##footballernew

In this case there will be no error. Global temporary tables persist across sessions-connections.

You can also add columns to temporary tables and alter the definition of existing columns.

In this script I add another column and then alter the definition of an existing column.


USE tempdb
GO

IF OBJECT_ID('tempdb..#footballer') IS NOT NULL

DROP TABLE #footballer;

GO
CREATE TABLE #footballer
 (
 [FootballerID] INT IDENTITY NOT NULL PRIMARY KEY,
 [lastname] [varchar](15) NOT NULL,
 [firstname] [varchar](15) NOT NULL,
 [shirt_no] [tinyint] NOT NULL,
 [position_played] [varchar](30) NOT NULL,

);

GO

SET IDENTITY_INSERT [dbo].[#footballer] ON

GO

INSERT [#footballer] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (1,N'Oliver', N'Regina', 4, N'goalkeeper')
INSERT [#footballer] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (2,N'Alexander', N'Roy', 8, N'goalkeeper')
INSERT [#footballer] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (3,N'Mueller', N'Dewayne', 10, N'defender')
INSERT [#footballer] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (4,N'Buckley', N'Beth', 3, N'midfielder')
INSERT [#footballer] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (5,N'Koch', N'Jolene', 7, N'striker')
GO

ALTER TABLE #footballer
ADD [is_retired] BIT NULL;
GO

ALTER TABLE #footballer
ALTER COLUMN [lastname] [nvarchar](50);
GO

You can use any data type in the column definitions of a temporary table. You can also use user-defined data types.
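One nuance worth flagging: because the temporary table itself lives in tempdb, a user-defined data type used in its definition must exist in tempdb as well. A sketch, with hypothetical names:

USE tempdb
GO

-- the type must be created in tempdb for a temp table to use it
CREATE TYPE ShirtNumber FROM tinyint NOT NULL;
GO

CREATE TABLE #squad
 (
 player varchar(30) NOT NULL,
 shirt_no ShirtNumber
 );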

You can also have constraints in temporary tables. If you execute the code below, it will work perfectly fine.


USE tempdb
GO

IF OBJECT_ID('tempdb..#Movies') IS NOT NULL

DROP TABLE #Movies;

GO

CREATE TABLE #Movies
 (
 MovieID INT PRIMARY KEY ,
 MovieName NVARCHAR(50) ,
 MovieRating TINYINT
 )
GO
ALTER TABLE #Movies
 WITH CHECK
 ADD CONSTRAINT CK_Movie_Rating
CHECK (MovieRating >= 1 AND MovieRating <= 5)

But you have to be careful when creating foreign keys: FOREIGN KEY constraints are not enforced on local or global temporary tables.
Execute the script below to see what I mean. The foreign key will not be created.

USE tempdb
go

CREATE TABLE #Persons
 (
 P_Id INT NOT NULL ,
 LastName VARCHAR(255) NOT NULL ,
 FirstName VARCHAR(255) ,
 Address VARCHAR(255) ,
 City VARCHAR(255) ,
 PRIMARY KEY ( P_Id )
 )

CREATE TABLE #Orders
(
O_Id int NOT NULL PRIMARY KEY,
OrderNo int NOT NULL,
P_Id int FOREIGN KEY REFERENCES #Persons(P_Id)
)

Please bear in mind that you can create temporary tables with clustered and non-clustered indexes on them.
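For example, adding a non-clustered index to the #footballer table from earlier is perfectly valid:

CREATE NONCLUSTERED INDEX IX_footballer_lastname
ON #footballer (lastname);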

Let’s investigate the behavior of temporary tables and IDENTITY columns.

If you execute the script below, it will fail. This is the same behavior as when executing the same script against regular tables: you cannot specify values for the IDENTITY column. If you choose to do so, you must set IDENTITY_INSERT to ON.

USE tempdb
GO

IF OBJECT_ID('tempdb..#Persons') IS NOT NULL

DROP TABLE #Persons;

GO

CREATE TABLE #Persons
 (
 P_Id INT PRIMARY KEY CLUSTERED IDENTITY(1,1) ,
 LastName VARCHAR(255) NOT NULL ,
 FirstName VARCHAR(255) ,
 Address VARCHAR(255) ,
 City VARCHAR(255)
 )

--this will not work

INSERT #Persons(P_Id,LastName,FirstName,Address,City) VALUES (1,'Steven','Gerrard','123 liverpool street','liverpool')
SET IDENTITY_INSERT [#Persons] ON

GO

--this will work

INSERT #Persons(P_Id,LastName,FirstName,Address,City) VALUES (1,'Steven','Gerrard','123 liverpool street','liverpool')

Also note that transactions are honored in temporary tables. If I begin an explicit transaction with an insert and do not commit, the row of data is inserted; but if a rollback is then issued, the whole operation is rolled back.

Execute the script below.


USE tempdb
GO

IF OBJECT_ID('tempdb..#Persons') IS NOT NULL

DROP TABLE #Persons;

GO

CREATE TABLE #Persons
 (
 P_Id INT PRIMARY KEY CLUSTERED IDENTITY(1,1) ,
 LastName VARCHAR(255) NOT NULL ,
 FirstName VARCHAR(255) ,
 Address VARCHAR(255) ,
 City VARCHAR(255)
 )

SET IDENTITY_INSERT [#Persons] ON

GO

--this will insert the value

BEGIN TRAN
INSERT #Persons(P_Id,LastName,FirstName,Address,City) VALUES (1,'Steven','Gerrard','123 liverpool street','liverpool')

GO

SELECT * FROM #Persons

--this will rollback the transaction

ROLLBACK TRAN

Hope it helps!!!

Looking into Temp database usage in SQL Server December 4, 2013

Posted by fofo in Sql Server, Sql Server 2005, SQL Server 2008, SQL Server 2012.

I have been delivering a certified course in MS SQL Server 2012 recently and I was asked several questions about Tempdb usage and temporary objects.

In this post I will try to shed light on this particular issue.

Temporary tables and table variables make use of the system tempdb database.

There is only one tempdb system database per SQL Server instance, so if there is heavy usage of temporary objects, this database can become a point of contention.

When you create an entry in a database, space needs to be allocated. This is also true for the tempdb database. There are three types of pages involved in the allocation process in the tempdb data file:

  • Page Free Space (PFS)
  • Shared Global Allocation Map (SGAM)
  • Global Allocation Map (GAM).

When there is heavy page allocation contention in tempdb, the whole allocation process can suffer and we can experience PAGELATCH waits.

In order to address the issue above, you can use a number of tempdb data files equal to the number of cores. For example, if you have a system with fewer than 8 cores, e.g. 6, you should set up 6 data files for tempdb. If you have a system with more than 8 cores, you should start with 8 data files for tempdb and then, if the contention is still high, add 4 more data files.

By cores in this post I mean logical cores, not physical cores. So if you have 8 physical cores with hyper-threading enabled, you have 16 logical cores.
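You can check how many logical cores SQL Server sees with the sys.dm_os_sys_info DMV:

-- cpu_count is the number of logical CPUs visible to SQL Server
SELECT cpu_count, hyperthread_ratio
FROM sys.dm_os_sys_info;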

I will provide some demos in order to show you what tempdb contention might look like and which wait types occur.

I have installed SQL Server 2012 Enterprise edition on my machine but you can use the SQL Server 2012/2008 Express edition as well.

I am connecting to my local instance through Windows Authentication.

In the snippet below, I create a new database and then a new table with some constraints. Type (you can copy-paste) and execute it in a new query window.

CREATE DATABASE mytempdbcontention
GO

USE mytempdbcontention;
GO
 CREATE TABLE dbo.footballer
 (
 [FootballerID] INT IDENTITY NOT NULL PRIMARY KEY,
 [lastname] [varchar](15) NOT NULL,
 [firstname] [varchar](15) NOT NULL,
 [shirt_no] [tinyint] NOT NULL,
 [position_played] [varchar](30) NOT NULL,

);

GO

ALTER TABLE dbo.footballer
ADD CONSTRAINT CK_Footballer_Shirt_No
CHECK (shirt_no >= 1 AND shirt_no <= 11)

GO

ALTER TABLE dbo.footballer
ADD CONSTRAINT CK_Footballer_Position
CHECK (position_played IN ('goalkeeper','defender','midfielder','striker'))
GO

Now I need to populate the table with 50,000 rows. This is the script you need to execute in order to make this happen.

You can download it here. Rename the insert-footballer.doc to insert-footballer.sql and execute the script in a new query window.

Now I need a script that will create tempdb contention. This is the script: it creates a temporary object, #footballer, populates it from the footballer table, and then selects from it. Finally it drops the temporary object.


USE mytempdbcontention;
GO

SET NOCOUNT ON;
GO

WHILE 1 = 1
 BEGIN

IF OBJECT_ID('tempdb..#footballer') IS NOT NULL

 DROP TABLE #footballer;

CREATE TABLE #footballer
 (
 [FootballerID] INT IDENTITY NOT NULL PRIMARY KEY,
 [lastname] [varchar](15) NOT NULL,
 [firstname] [varchar](15) NOT NULL,
 [shirt_no] [tinyint] NOT NULL,
 [position_played] [varchar](30) NOT NULL,

);
 INSERT INTO #footballer
 (lastname,
 firstname,
 shirt_no,
 position_played)
 SELECT TOP 4000
 lastname,
 firstname,
 shirt_no,
 position_played

 FROM dbo.footballer;

SELECT lastname
 FROM #footballer;

DROP TABLE #footballer;
 END
 GO

Now I am going to use a .cmd file that will create contention in tempdb.

You can download it here. Rename the temp-sql.cmd.doc to temp-sql.cmd. Make sure you execute it (by double-clicking it).

This will create lots of contention in tempdb. We need to see exactly what this contention is and which wait latches have occurred.

Execute the script below.


USE tempdb
GO

SELECT session_id, wait_duration_ms, wait_type, resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGE%LATCH_%'
AND resource_description LIKE '2:%'

As you can see from the picture below, I have PAGEIOLATCH_SH wait types. This wait type occurs when a task is waiting on a latch for a buffer that is in an I/O request; the latch request is in Shared mode. (The resource_description filter '2:%' matches pages in database_id 2, which is tempdb.)

[Image: wait_type]

I have one tempdb data file – the default configuration.

I have 8 cores on this machine, so I will add 7 more tempdb data files of equal size (the MS recommendation).

Execute the script below.


USE [master]
GO

ALTER DATABASE [tempdb]
MODIFY FILE ( NAME = N'tempdev', SIZE = 500 MB ,
FILEGROWTH = 100 MB )
GO

ALTER DATABASE [tempdb]
ADD FILE ( NAME = N'tempdev2', FILENAME = N'd:\DATA\tempdb2.ndf' ,
SIZE = 500 MB , FILEGROWTH = 100 MB )
GO

ALTER DATABASE [tempdb]
ADD FILE ( NAME = N'tempdev3', FILENAME = N'd:\DATA\tempdb3.ndf' ,
SIZE = 500 MB , FILEGROWTH = 100 MB )
GO

ALTER DATABASE [tempdb]
ADD FILE ( NAME = N'tempdev4', FILENAME = N'd:\DATA\tempdb4.ndf' ,
SIZE = 500 MB , FILEGROWTH = 100 MB )
GO

ALTER DATABASE [tempdb]
ADD FILE ( NAME = N'tempdev5', FILENAME = N'd:\DATA\tempdb5.ndf' ,
SIZE = 500 MB , FILEGROWTH = 100 MB )
GO

ALTER DATABASE [tempdb]
ADD FILE ( NAME = N'tempdev6', FILENAME = N'd:\DATA\tempdb6.ndf' ,
SIZE = 500 MB , FILEGROWTH = 100 MB )
GO

ALTER DATABASE [tempdb]
ADD FILE ( NAME = N'tempdev7', FILENAME = N'd:\DATA\tempdb7.ndf' ,
SIZE = 500 MB , FILEGROWTH = 100 MB )
GO

Now run the query again and observe the results.


USE tempdb
GO

SELECT session_id, wait_duration_ms, wait_type, resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGE%LATCH_%'
AND resource_description LIKE '2:%'

You will see that there are no such wait types, hence no tempdb contention.

Now stop the temp-sql.cmd file so that the tempdb contention stops.

Hope it helps!!!