
Temporal Tables in SQL Server 2016 November 26, 2016

Posted by fofo in Sql Server, SQL Server 2016.

This is the second post in a series of posts about the new features of SQL Server 2016. You can find the first post in the series (Built-in support for JSON in SQL Server 2016) here.

SQL Server 2016 was officially released back in June 2016.

There are many new features in SQL Server 2016 and, in my opinion, the most important are:

  • Always Encrypted
  • Query Store
  • Row Level Security
  • Temporal Tables
  • Stretch Database
  • Dynamic Data Masking
  • PolyBase
  • JSON Support

I am going to discuss the “Temporal Tables” in SQL Server 2016 in this post.

I have installed SQL Server 2016 Enterprise edition on my machine, but you can use the SQL Server 2016 Developer edition as well.

You can download the SQL Server 2016 Developer edition by joining Visual Studio Dev Essentials, which is free and simple. After joining you will be able to download the SQL Server 2016 Developer edition onto your machine.

Temporal tables are another brilliant feature that makes working with historical data in SQL Server much easier. Temporal tables were introduced in the ANSI SQL:2011 standard and are now supported in SQL Server 2016.

A system-versioned temporal table is a new type of user table in SQL Server 2016, designed to keep a full history of data changes and allow easy point in time analysis.

Please do not confuse temporal tables with temporary tables.

We can use them to look at historical data in an easy way by having the ability to reconstruct the data at any point in time, or even to compare data from two periods without having this history in our current table.

We can use them to understand business trends over time. This is very important for people in the Sales or Marketing Departments of a big business.

Sometimes developers or DBAs accidentally delete or change data. Temporal tables can help us recover from such accidental changes. We can also use them to debug errors and understand data behavior.

Without temporal tables, when we delete or update data by accident, often the only way to recover it is to restore a valid, recently taken backup.

That takes time; temporal tables give us the ability to recover the data in a much easier and more efficient way.

Another important thing to note is that we can either create a new temporal table or convert an existing table by altering its schema. In order to create a system-versioned table we need:

  1. A table with a primary key (that goes without saying)
  2. The table option SYSTEM_VERSIONING set to ON
  3. Two non-nullable datetime2 columns representing the start and the end of the row’s validity period
  4. A designation of the period columns (PERIOD FOR SYSTEM_TIME)
  5. A linked history table, which holds the past states of the modified rows

Also bear in mind that the usual T-SQL statements (INSERT, UPDATE, DELETE, SELECT) remain fully valid against a temporal table.

Let’s move on and demonstrate the temporal tables functionality with a hands-on demo.

Type (copy paste) the following query in a new query window and execute it.


CREATE DATABASE TheTemporalDB
GO

USE TheTemporalDB
GO
CREATE TABLE Project 
(
 ProjectID int NOT NULL IDENTITY(1,1) PRIMARY KEY, 
 ProjectName varchar(50) NOT NULL, 
 ManagerID int NULL, 
 ValidFrom datetime2 GENERATED ALWAYS AS ROW START NOT NULL, 
 ValidTo datetime2 GENERATED ALWAYS AS ROW END NOT NULL, 
 PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo) 
)
WITH (SYSTEM_VERSIONING = ON)
GO

In the code above, I create a sample database and a temporal table.

If you look at the picture below you will notice the temporal table and the system-generated history table that is linked with it.

 

temporal-table-2

You can see that the two tables have exactly the same columns. The ProjectID column in the history table, however, does not have a primary key constraint; the history table mirrors the schema of the main table except for its constraints.

Type (copy paste) the following query in a new query window and execute it.


SELECT
 object_id,
 name,
 temporal_type,
 temporal_type_desc,
 history_table_id
FROM sys.tables
WHERE object_id = OBJECT_ID('dbo.Project', 'U')
 OR object_id = (
 SELECT history_table_id
 FROM sys.tables
 WHERE object_id = OBJECT_ID('dbo.Project', 'U')
 )

GO

By executing the code above you get information about both the main table and the linked history table.

Have a look at the picture below

 

temporal-table-3

 

If you want to delete those tables, you must first turn off system versioning and then drop each table individually.

Type (copy paste) the following query in a new query window and execute it.


ALTER TABLE Project SET (SYSTEM_VERSIONING = OFF)
DROP TABLE Project
DROP TABLE [dbo].[MSSQL_TemporalHistoryFor_565577053]

Note that the numeric suffix in the auto-generated history table name will be different on your machine. Now let’s recreate our temporal table, this time specifying the name of the history table ourselves.

Type (copy paste) the following query in a new query window and execute it.

USE TheTemporalDB
GO
CREATE TABLE Project
(
 ProjectID int NOT NULL IDENTITY(1,1) PRIMARY KEY, 
 ProjectName varchar(50) NOT NULL, 
 ManagerID int NULL, 
 ValidFrom datetime2 GENERATED ALWAYS AS ROW START NOT NULL, 
 ValidTo datetime2 GENERATED ALWAYS AS ROW END NOT NULL, 
 PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo) 
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.ProjectHistory))

The name of the history table is now ProjectHistory.

Now let’s create a simple table, populate it, and then convert it to a temporal table.

Type (copy paste) the following query in a new query window and execute it.


USE TheTemporalDB
GO

ALTER TABLE Project SET (SYSTEM_VERSIONING = OFF)
DROP TABLE Project
DROP TABLE [dbo].[ProjectHistory]

CREATE TABLE Project(
 ProjectId int PRIMARY KEY,
 ProjectName varchar(20) NOT NULL,
 DepartmentName varchar(50) NOT NULL
)
GO

INSERT INTO Project (ProjectId, ProjectName, DepartmentName) VALUES
 (1, 'NopCommerce E-shop', 'E-commerce SubUnit'),
 (2, 'Elastic Search', 'NoSQL DBs SubUnit'),
 (3, 'Adwords Campaign', 'Digital Marketing SubUnit')
GO

In the code above, I drop the tables created earlier, create a simple table, and insert some sample rows into it.

Now I will convert the table to a temporal table.

Type (copy paste) the following query in a new query window and execute it.


USE TheTemporalDB
GO

ALTER TABLE Project ADD
 StartDate datetime2 GENERATED ALWAYS AS ROW START NOT NULL DEFAULT CAST('1900-01-01 00:00:00.0000000' AS datetime2),
 EndDate datetime2 GENERATED ALWAYS AS ROW END NOT NULL DEFAULT CAST('9999-12-31 23:59:59.9999999' AS datetime2),
 PERIOD FOR SYSTEM_TIME (StartDate, EndDate)
GO

ALTER TABLE Project 
 SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.ProjectHistory))
GO

Then I query the two tables. The history table naturally has no data in it yet.

Then I update the row with ProjectID 1 three times.

Then I query the two tables again. This time the history table has data in it: every intermediate version of the row has been stored there.

Type (copy paste) the following query in a new query window and execute it.


USE TheTemporalDB
GO

SELECT * FROM [dbo].[Project]
SELECT * FROM [dbo].[ProjectHistory]

UPDATE Project SET ProjectName = 'Custom ASP.Net MVC' WHERE ProjectId = 1


UPDATE Project SET ProjectName = 'ERP Integration' WHERE ProjectId = 1


UPDATE Project SET ProjectName = 'Theme Conversion' WHERE ProjectId = 1


SELECT * FROM [dbo].[Project]
SELECT * FROM [dbo].[ProjectHistory]

Have a look at the picture below.

temporal-table-4

You can see that the history table has all the changes (all 3 updates) in the ProjectName column.
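Since the history is now populated, we can also query the table as of an earlier point in time. SQL Server 2016 lets us add a FOR SYSTEM_TIME clause to the FROM clause and automatically combines the current table with the history table. Below is a minimal sketch against the Project table we just updated; the timestamps are placeholders that you should replace with values taken from the StartDate/EndDate columns of your own rows.

USE TheTemporalDB
GO

-- Reconstruct the row as it was at a specific moment (placeholder timestamp).
SELECT ProjectId, ProjectName, DepartmentName, StartDate, EndDate
FROM Project
FOR SYSTEM_TIME AS OF '2016-11-26 10:00:00'
WHERE ProjectId = 1

-- Return every version of the row that was active inside a given interval.
SELECT ProjectId, ProjectName, StartDate, EndDate
FROM Project
FOR SYSTEM_TIME BETWEEN '2016-11-26 00:00:00' AND '2016-11-26 23:59:59'
WHERE ProjectId = 1
ORDER BY StartDate

-- Return the current and all historical versions together.
SELECT ProjectId, ProjectName, StartDate, EndDate
FROM Project
FOR SYSTEM_TIME ALL
WHERE ProjectId = 1
ORDER BY StartDate
GO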

Finally, one should note that when it comes to triggers and temporal tables, INSTEAD OF triggers are not supported, while AFTER triggers are supported only on the current table and not on the history table.

Tables with foreign key constraints that use cascading updates or deletes are also not supported.

Tables with FILESTREAM columns cannot become temporal tables, since FILESTREAM stores data outside the database. Memory-optimized (In-Memory OLTP) tables, on the other hand, can be system-versioned in SQL Server 2016; their history table is a disk-based table.

 

Hope it helps!!!

Built-in support for JSON in SQL Server 2016 November 20, 2016

Posted by fofo in Sql Server, SQL Server 2016.

This is going to be the first post in a series of posts about the new features of SQL Server 2016.

SQL Server 2016 was officially released back in June 2016.

There are many new features in SQL Server 2016 and, in my opinion, the most important are:

  • Always Encrypted
  • Query Store
  • Row Level Security
  • Temporal Tables
  • Stretch Database
  • Dynamic Data Masking
  • PolyBase
  • JSON Support

I am going to discuss the JSON support in SQL Server 2016 in this post.

I have installed SQL Server 2016 Enterprise edition on my machine, but you can use the SQL Server 2016 Developer edition as well. You can download the SQL Server 2016 Developer edition for free.

You can download the SQL Server 2016 Developer edition by joining Visual Studio Dev Essentials, which is free and simple. After joining you will be able to download the SQL Server 2016 Developer edition onto your machine.

JSON stands for JavaScript Object Notation; it is a lightweight format for exchanging data between different systems, similar to how XML is used.

The transferred text is easy to read, and the objects consist of attribute-value pairs.

Let’s look at a simple JSON example.

 

{"doctors":[
{"firstName":"John", "lastName":"Smith"},
{"firstName":"Anna", "lastName":"Bernard"},
{"firstName":"Peter", "lastName":"Feller"}
]}

Many SQL developers assume that the SQL Server team has provided a built-in JSON data type. That is not true.

JSON text is typically stored in varchar or nvarchar columns, and is indexed as plain text.
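Since there is no dedicated JSON type, a common pattern is to keep the JSON text in an NVARCHAR(MAX) column, protect it with an ISJSON check constraint, and expose frequently queried properties as computed columns that can be indexed like ordinary data. The sketch below is only an illustration; the OrdersWithJson table and its columns are made up for this example, and it uses JSON_VALUE, another of the built-in JSON functions in SQL Server 2016.

CREATE TABLE dbo.OrdersWithJson
(
 OrderId int IDENTITY(1,1) PRIMARY KEY,
 OrderInfo nvarchar(max) NOT NULL,
 -- reject rows whose OrderInfo column does not contain valid JSON
 CONSTRAINT CHK_OrdersWithJson_IsJson CHECK (ISJSON(OrderInfo) > 0)
)
GO

-- Expose one property of the JSON document as a computed column so it can be indexed as plain data.
ALTER TABLE dbo.OrdersWithJson
 ADD CustomerName AS JSON_VALUE(OrderInfo, '$.Customer.Name')
GO

CREATE INDEX IX_OrdersWithJson_CustomerName ON dbo.OrdersWithJson (CustomerName)
GO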

SQL Server 2016 provides built-in support for importing, exporting, parsing, and querying JSON documents or fragments.

Hence, we can format SQL Server data or query output as JSON by adding the FOR JSON clause to a select statement.

In the other direction, we can convert JSON text into rows and columns that can be queried or loaded into a table. This is done using the OPENJSON function.

Let us start with our first example.

In all my examples I will be using the AdventureWorks 2016 database. You can download it here.

We will convert the query results against our database to JSON format using the FOR JSON clause.

We will create a SELECT statement from a table and then transform it to JSON.

Open a new query window. Type (copy-paste the following) in that query window. Execute the query.

USE AdventureWorks2016

SELECT Production.Product.Name ,
 Production.Product.StandardCost ,
 Production.Product.ListPrice ,
 Production.Product.Size
FROM Production.Product
WHERE ListPrice > 1800
FOR JSON AUTO

 

I copy and paste the output of the query into an online JSON parser – http://jsonviewer.stack.hu/.

This is the output after formatting the JSON:

[
{
“Name”: “Road-150 Red, 62”,
“StandardCost”: 2171.2942,
“ListPrice”: 3578.2700,
“Size”: “62”
},
{
“Name”: “Road-150 Red, 44”,
“StandardCost”: 2171.2942,
“ListPrice”: 3578.2700,
“Size”: “44”
},
{
“Name”: “Road-150 Red, 48”,
“StandardCost”: 2171.2942,
“ListPrice”: 3578.2700,
“Size”: “48”
},
{
“Name”: “Road-150 Red, 52”,
“StandardCost”: 2171.2942,
“ListPrice”: 3578.2700,
“Size”: “52”
},
{
“Name”: “Road-150 Red, 56”,
“StandardCost”: 2171.2942,
“ListPrice”: 3578.2700,
“Size”: “56”
},
{
“Name”: “Mountain-100 Silver, 38”,
“StandardCost”: 1912.1544,
“ListPrice”: 3399.9900,
“Size”: “38”
},
{
“Name”: “Mountain-100 Silver, 42”,
“StandardCost”: 1912.1544,
“ListPrice”: 3399.9900,
“Size”: “42”
},
{
“Name”: “Mountain-100 Silver, 44”,
“StandardCost”: 1912.1544,
“ListPrice”: 3399.9900,
“Size”: “44”
},
{
“Name”: “Mountain-100 Silver, 48”,
“StandardCost”: 1912.1544,
“ListPrice”: 3399.9900,
“Size”: “48”
},
{
“Name”: “Mountain-100 Black, 38”,
“StandardCost”: 1898.0944,
“ListPrice”: 3374.9900,
“Size”: “38”
},
{
“Name”: “Mountain-100 Black, 42”,
“StandardCost”: 1898.0944,
“ListPrice”: 3374.9900,
“Size”: “42”
},
{
“Name”: “Mountain-100 Black, 44”,
“StandardCost”: 1898.0944,
“ListPrice”: 3374.9900,
“Size”: “44”
},
{
“Name”: “Mountain-100 Black, 48”,
“StandardCost”: 1898.0944,
“ListPrice”: 3374.9900,
“Size”: “48”
},
{
“Name”: “Mountain-200 Silver, 38”,
“StandardCost”: 1265.6195,
“ListPrice”: 2319.9900,
“Size”: “38”
},
{
“Name”: “Mountain-200 Silver, 42”,
“StandardCost”: 1265.6195,
“ListPrice”: 2319.9900,
“Size”: “42”
},
{
“Name”: “Mountain-200 Silver, 46”,
“StandardCost”: 1265.6195,
“ListPrice”: 2319.9900,
“Size”: “46”
},
{
“Name”: “Mountain-200 Black, 38”,
“StandardCost”: 1251.9813,
“ListPrice”: 2294.9900,
“Size”: “38”
},
{
“Name”: “Mountain-200 Black, 42”,
“StandardCost”: 1251.9813,
“ListPrice”: 2294.9900,
“Size”: “42”
},
{
“Name”: “Mountain-200 Black, 46”,
“StandardCost”: 1251.9813,
“ListPrice”: 2294.9900,
“Size”: “46”
},
{
“Name”: “Road-250 Red, 44”,
“StandardCost”: 1518.7864,
“ListPrice”: 2443.3500,
“Size”: “44”
},
{
“Name”: “Road-250 Red, 48”,
“StandardCost”: 1518.7864,
“ListPrice”: 2443.3500,
“Size”: “48”
},
{
“Name”: “Road-250 Red, 52”,
“StandardCost”: 1518.7864,
“ListPrice”: 2443.3500,
“Size”: “52”
},
{
“Name”: “Road-250 Red, 58”,
“StandardCost”: 1554.9479,
“ListPrice”: 2443.3500,
“Size”: “58”
},
{
“Name”: “Road-250 Black, 44”,
“StandardCost”: 1554.9479,
“ListPrice”: 2443.3500,
“Size”: “44”
},
{
“Name”: “Road-250 Black, 48”,
“StandardCost”: 1554.9479,
“ListPrice”: 2443.3500,
“Size”: “48”
},
{
“Name”: “Road-250 Black, 52”,
“StandardCost”: 1554.9479,
“ListPrice”: 2443.3500,
“Size”: “52”
},
{
“Name”: “Road-250 Black, 58”,
“StandardCost”: 1554.9479,
“ListPrice”: 2443.3500,
“Size”: “58”
},
{
“Name”: “Touring-1000 Yellow, 46”,
“StandardCost”: 1481.9379,
“ListPrice”: 2384.0700,
“Size”: “46”
},
{
“Name”: “Touring-1000 Yellow, 50”,
“StandardCost”: 1481.9379,
“ListPrice”: 2384.0700,
“Size”: “50”
},
{
“Name”: “Touring-1000 Yellow, 54”,
“StandardCost”: 1481.9379,
“ListPrice”: 2384.0700,
“Size”: “54”
},
{
“Name”: “Touring-1000 Yellow, 60”,
“StandardCost”: 1481.9379,
“ListPrice”: 2384.0700,
“Size”: “60”
},
{
“Name”: “Touring-1000 Blue, 46”,
“StandardCost”: 1481.9379,
“ListPrice”: 2384.0700,
“Size”: “46”
},
{
“Name”: “Touring-1000 Blue, 50”,
“StandardCost”: 1481.9379,
“ListPrice”: 2384.0700,
“Size”: “50”
},
{
“Name”: “Touring-1000 Blue, 54”,
“StandardCost”: 1481.9379,
“ListPrice”: 2384.0700,
“Size”: “54”
},
{
“Name”: “Touring-1000 Blue, 60”,
“StandardCost”: 1481.9379,
“ListPrice”: 2384.0700,
“Size”: “60”
}
]

In AUTO mode, the structure of the SELECT statement determines the format of the JSON output. If we want to explicitly specify the format of the output, we can use the PATH mode instead of AUTO.

Open a new query window. Type (copy-paste the following) in that query window. Execute the query.

USE AdventureWorks2016

--AUTO
SELECT Production.Product.Name ,
 Production.Product.StandardCost ,
 Production.Product.ListPrice ,
 Production.Product.Size ,
 Production.ProductModel.Name AS modelname
FROM Production.Product
 INNER JOIN Production.ProductModel ON Production.Product.ProductModelID = Production.ProductModel.ProductModelID
WHERE ListPrice > 1800
 AND ProductModel.Name IN ( 'Road-250' )
 FOR JSON AUTO

--PATH

SELECT Production.Product.Name ,
 Production.Product.StandardCost ,
 Production.Product.ListPrice ,
 Production.Product.Size ,
 Production.ProductModel.Name AS modelname
FROM Production.Product
 INNER JOIN Production.ProductModel ON Production.Product.ProductModelID = Production.ProductModel.ProductModelID
WHERE ListPrice > 922
 AND ProductModel.Name IN ( 'Road-250' )
 FOR JSON PATH

GO
 

This is the output of the AUTO statement.

[
{
“Name”: “Road-250 Red, 44”,
“StandardCost”: 1518.7864,
“ListPrice”: 2443.3500,
“Size”: “44”,
“Production.ProductModel”: [
{
“modelname”: “Road-250”
}
]
},
{
“Name”: “Road-250 Red, 48”,
“StandardCost”: 1518.7864,
“ListPrice”: 2443.3500,
“Size”: “48”,
“Production.ProductModel”: [
{
“modelname”: “Road-250”
}
]
},
{
“Name”: “Road-250 Red, 52”,
“StandardCost”: 1518.7864,
“ListPrice”: 2443.3500,
“Size”: “52”,
“Production.ProductModel”: [
{
“modelname”: “Road-250”
}
]
},
{
“Name”: “Road-250 Red, 58”,
“StandardCost”: 1554.9479,
“ListPrice”: 2443.3500,
“Size”: “58”,
“Production.ProductModel”: [
{
“modelname”: “Road-250”
}
]
},
{
“Name”: “Road-250 Black, 44”,
“StandardCost”: 1554.9479,
“ListPrice”: 2443.3500,
“Size”: “44”,
“Production.ProductModel”: [
{
“modelname”: “Road-250”
}
]
},
{
“Name”: “Road-250 Black, 48”,
“StandardCost”: 1554.9479,
“ListPrice”: 2443.3500,
“Size”: “48”,
“Production.ProductModel”: [
{
“modelname”: “Road-250”
}
]
},
{
“Name”: “Road-250 Black, 52”,
“StandardCost”: 1554.9479,
“ListPrice”: 2443.3500,
“Size”: “52”,
“Production.ProductModel”: [
{
“modelname”: “Road-250”
}
]
},
{
“Name”: “Road-250 Black, 58”,
“StandardCost”: 1554.9479,
“ListPrice”: 2443.3500,
“Size”: “58”,
“Production.ProductModel”: [
{
“modelname”: “Road-250”
}
]
}
]

This is the output of the PATH statement

[
{
“Name”: “Road-250 Red, 44”,
“StandardCost”: 1518.7864,
“ListPrice”: 2443.3500,
“Size”: “44”,
“modelname”: “Road-250”
},
{
“Name”: “Road-250 Red, 48”,
“StandardCost”: 1518.7864,
“ListPrice”: 2443.3500,
“Size”: “48”,
“modelname”: “Road-250”
},
{
“Name”: “Road-250 Red, 52”,
“StandardCost”: 1518.7864,
“ListPrice”: 2443.3500,
“Size”: “52”,
“modelname”: “Road-250”
},
{
“Name”: “Road-250 Red, 58”,
“StandardCost”: 1554.9479,
“ListPrice”: 2443.3500,
“Size”: “58”,
“modelname”: “Road-250”
},
{
“Name”: “Road-250 Black, 44”,
“StandardCost”: 1554.9479,
“ListPrice”: 2443.3500,
“Size”: “44”,
“modelname”: “Road-250”
},
{
“Name”: “Road-250 Black, 48”,
“StandardCost”: 1554.9479,
“ListPrice”: 2443.3500,
“Size”: “48”,
“modelname”: “Road-250”
},
{
“Name”: “Road-250 Black, 52”,
“StandardCost”: 1554.9479,
“ListPrice”: 2443.3500,
“Size”: “52”,
“modelname”: “Road-250”
},
{
“Name”: “Road-250 Black, 58”,
“StandardCost”: 1554.9479,
“ListPrice”: 2443.3500,
“Size”: “58”,
“modelname”: “Road-250”
}
]

Now let’s see how we can use the ROOT option.

Open a new query window. Type (copy-paste the following) in that query window. Execute the query.

USE AdventureWorks2016

SELECT Production.Product.Name ,
 Production.Product.StandardCost ,
 Production.Product.ListPrice ,
 Production.Product.Size ,
 Production.ProductModel.Name AS modelname
FROM Production.Product
 INNER JOIN Production.ProductModel ON Production.Product.ProductModelID = Production.ProductModel.ProductModelID
WHERE ListPrice > 1800
 AND ProductModel.Name IN ( 'Road-250' )
 FOR JSON PATH,ROOT ('Production.Product')

This is the output of the JSON result after formatting.

{
“Production.Product”: [
{
“Name”: “Road-250 Red, 44”,
“StandardCost”: 1518.7864,
“ListPrice”: 2443.3500,
“Size”: “44”,
“modelname”: “Road-250”
},
{
“Name”: “Road-250 Red, 48”,
“StandardCost”: 1518.7864,
“ListPrice”: 2443.3500,
“Size”: “48”,
“modelname”: “Road-250”
},
{
“Name”: “Road-250 Red, 52”,
“StandardCost”: 1518.7864,
“ListPrice”: 2443.3500,
“Size”: “52”,
“modelname”: “Road-250”
},
{
“Name”: “Road-250 Red, 58”,
“StandardCost”: 1554.9479,
“ListPrice”: 2443.3500,
“Size”: “58”,
“modelname”: “Road-250”
},
{
“Name”: “Road-250 Black, 44”,
“StandardCost”: 1554.9479,
“ListPrice”: 2443.3500,
“Size”: “44”,
“modelname”: “Road-250”
},
{
“Name”: “Road-250 Black, 48”,
“StandardCost”: 1554.9479,
“ListPrice”: 2443.3500,
“Size”: “48”,
“modelname”: “Road-250”
},
{
“Name”: “Road-250 Black, 52”,
“StandardCost”: 1554.9479,
“ListPrice”: 2443.3500,
“Size”: “52”,
“modelname”: “Road-250”
},
{
“Name”: “Road-250 Black, 58”,
“StandardCost”: 1554.9479,
“ListPrice”: 2443.3500,
“Size”: “58”,
“modelname”: “Road-250”
}
]
}

We wanted to specify a root element for our data format and we used the ROOT keyword.

The nodes returned by our query are children of the root node Production.Product

We can also convert JSON data to rows and columns.

We can use the OPENJSON function.  There are two ways of calling OPENJSON. We can use it with a default schema. In this case, the function returns a table with one row for each property of the object or for each element of the array.

Open a new query window. Type (copy-paste the following) in that query window. Execute the query.


DECLARE @json NVARCHAR(MAX) = N'
{
 "Name":"Road-155,62",
 "StandardCost":2123,
 "ListPrice":3123,
 "Size":62

}';
 
SELECT * FROM OPENJSON(@json);

The output of this statement is:

key            value          type
Name           Road-155,62    1
StandardCost   2123           2
ListPrice      3123           2
Size           62             2

 

As you can see, the OPENJSON function returns three columns:

key: Property name within the object or index of the element within the array.
value: Property value within the object or value of the array element specified by the index.
type: Value’s data type, represented numerically, as described in the following table:

Numeric value   Data type
0               null
1               string
2               number
3               boolean (true/false)
4               array
5               object
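To see how the key column behaves with a JSON array, here is a small sketch; the Sizes array is made up for this example. When OPENJSON is pointed at an array, the key column holds the zero-based index of each element and the value column holds the element itself.

DECLARE @json NVARCHAR(MAX) = N'{"Name":"Road-155,62","Sizes":[44,48,52,62]}';

-- The second argument is a JSON path that points OPENJSON at the array.
SELECT [key], [value], [type]
FROM OPENJSON(@json, '$.Sizes');

This returns four rows with keys 0 to 3, the values 44, 48, 52 and 62, and type 2 (number) for every element.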

We can also use the OPENJSON function with an explicit schema, and in this case the function returns a table with the schema that we define in the WITH clause.

In the WITH clause, we specify the output columns, their types, and the paths of the source properties for each column.

Open a new query window. Type (copy-paste the following) in that query window. Execute the query.


DECLARE @JSON NVARCHAR(max);
SET @JSON = N'{"Products":
{"Production.Product": 
 [
 {
 "Name": "Road-250 Red, 44",
 "StandardCost": 1518.7864,
 "ListPrice": 2443.3500,
 "Size": "44",
 "ProductModel": 
 {
 "modelname": "Road-250"
 }
 },
 {
 "Name": "Road-250 Red, 48",
 "StandardCost": 1518.7864,
 "ListPrice": 2443.3500,
 "Size": "48",
 "ProductModel": 
 {
 "modelname": "Road-250"
 }
 }
 ]
 }
 }'

 select * from OPENJSON (@JSON, '$.Products."Production.Product"')

 with ( Name nvarchar(50) '$.Name',
 StandardCost decimal(18,2) '$.StandardCost',
 ListPrice decimal(18,2) '$.ListPrice',
 Size int '$.Size',
 modelname nvarchar(100) '$.ProductModel.modelname'
 )

 

We can see that Name, StandardCost, ListPrice and Size come directly under Production.Product, while modelname has another parent, ProductModel, so its path also includes the ProductModel node. Because the Production.Product key contains a dot, it has to be surrounded with double quotes inside the OPENJSON path.

Finally, let’s have a look at the ISJSON built-in function. This function validates JSON text, so we can find out whether a string contains valid JSON.

In this final example I declare a table variable with a column that stores string values holding data formatted as JSON. I populate the table variable with two rows.

Executing the query below returns 2 rows, because the JSON in both rows is valid.

This is a good function for validating strings that are supposed to be in JSON format before exchanging them with client applications.

 
DECLARE @temp TABLE
 (
 id INT ,
 value NVARCHAR(MAX)
 )

INSERT INTO @temp
VALUES ( 1, '{
 "Name": "Road-150 Red, 62",
 "StandardCost": 2171.2942,
 "ListPrice": 3578.2700,
 "Size": "62"
 }' ), ( 2, '{
 "Name": "Road-150 Red, 44",
 "StandardCost": 2171.2942,
 "ListPrice": 3578.2700,
 "Size": "44"
 }' )

 SELECT * FROM @temp
 where ISJSON([Value]) > 0 

JSON support in SQL Server is great and a much anticipated feature.

Hope it helps!!!

Looking into DMVs in SQL Server for Performance Tuning May 17, 2015

Posted by fofo in Sql Server, SQL Server 2008, SQL Server 2012, SQL Server 2014.

I have recently been delivering a Microsoft Certified Course on SQL Server 2014, and I was highlighting, with several examples and demos, the importance of DMOs (DMVs and DMFs) in SQL Server. With these objects we can get a plethora of information about server state, monitor the health of a server instance, diagnose problems, and tune performance.

In this post I will demonstrate with hands-on demos the power of DMVs and show how they can help a DBA or developer identify expensive queries, low-usage indexes, and the fragmentation levels of indexes. These are very common problems that anyone who deals with a SQL Server database must look out for and troubleshoot. There are two types of DMVs: server-scoped, which require the VIEW SERVER STATE permission on the server, and database-scoped, which require the VIEW DATABASE STATE permission on the database.

By querying these views with T-SQL statements, a lot of information is made available to the DBA. When you query a DMV, you must use at least a two-part name, because all the dynamic management objects live in the sys schema.
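As a minimal illustration, the query below uses the two-part name sys.dm_exec_requests to list the requests that are currently executing on the instance:

-- DMVs are referenced with a two-part name: the schema (sys) plus the view name.
SELECT session_id, status, command, wait_type
FROM sys.dm_exec_requests;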

DMVs can be a very reliable source of information about the performance of your system, but keep in mind that each time SQL Server is restarted, the data in the views is reset.

I have installed SQL Server 2014 Enterprise edition on my machine, but you can use the SQL Server 2014/2012/2008 Express edition as well (or any other edition).

I am connecting to my local instance through Windows Authentication.

The first query I’m going to run is for identifying fragmentation levels of indexes within the database. 

The more fragmented an index is, the slower it performs. I am going to use the dm_db_index_physical_stats DMV because I'm looking at the statistics of how the indexes are physically laid out within SQL Server. The view returns size and fragmentation details for the data and the indexes of a specified table or view. Fragmentation of indexes and tables can drastically affect the performance of queries and maintenance, so this is a very good DMV to get familiar with.

Before I run my script let me explain a few things regarding Internal fragmentation.

Internal fragmentation occurs if there is unused space between records in a page. This fragmentation occurs through the process of data modifications (INSERT, UPDATE, and DELETE statements) that are made against the table and therefore, to the indexes defined on the table. This unused space causes poor cache utilization and more I/O, which ultimately leads to poor query performance.

I am connecting to a database that I am going to use for all of my demos and I open a new query window. Type (copy-paste the following)

USE mydb
GO

EXEC sp_configure 'show advanced options', 1
GO
RECONFIGURE WITH OVERRIDE
GO
DECLARE @DefaultFillFactor INT
DECLARE @Fillfactor TABLE
 (
 Name VARCHAR(100)
 ,Minimum INT
 ,Maximum INT
 ,config_value INT
 ,run_value INT
 )
INSERT INTO @Fillfactor
 EXEC sp_configure 'fill factor (%)'
SELECT @DefaultFillFactor = CASE WHEN run_value = 0 THEN 100
 ELSE run_value
 END
FROM @Fillfactor 

SELECT DB_NAME() AS DataBaseName
 ,QUOTENAME(s.name) AS SchemaName
 ,QUOTENAME(o.name) AS TableName
 ,i.name AS IndexName
 ,stats.Index_type_desc AS IndexType
 ,stats.page_count AS [PageCount]
 ,CASE WHEN i.fill_factor > 0 THEN i.fill_factor
 ELSE @DefaultFillFactor
 END AS [Fill Factor]
 ,stats.avg_page_space_used_in_percent
 ,CASE WHEN stats.index_level = 0 THEN 'Leaf Level'
 ELSE 'Nonleaf Level'
 END AS IndexLevel
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED')
 AS stats
 ,sys.objects AS o
 ,sys.schemas AS s
 ,sys.indexes AS i
WHERE o.OBJECT_ID = stats.OBJECT_ID
 AND s.schema_id = o.schema_id
 AND i.OBJECT_ID = stats.OBJECT_ID
 AND i.index_id = stats.index_id
 AND stats.avg_page_space_used_in_percent <= 85
 AND stats.page_count >= 10
 AND stats.index_id > 0
ORDER BY stats.avg_page_space_used_in_percent ASC
 ,stats.page_count DESC

Execute the code above and see the results.

Have a look at the picture below to see what results I have got.

index-frag1

Let me explain what the code above does.

At the beginning I am just turning on 'show advanced options' and reconfiguring, so that sp_configure can report the instance's default fill factor.

I am creating a table variable to hold the fill factor configuration. I then join the DMV with the sys.objects, sys.schemas, and sys.indexes system views to get more information.

In the SELECT list, I get information such as the database name, the schema name, the table name, the index name, the index type, the page count, the fill factor, the average page space used in percent, and the index level.

Then I apply some conditions: avg_page_space_used_in_percent must be at most 85% and the index must be at least 10 pages in size.

Then I order the results by the average page space used in percent (this is the average percentage of page space actually used, which represents internal fragmentation: the higher the value, the better; if it drops below roughly 80%, action should be taken) and then by page count descending, so the indexes with the most pages come up first. By all means use this query to identify fragmentation issues in your databases.
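Once fragmented indexes have been identified, they can be defragmented with ALTER INDEX. Here is a rough sketch only; the rule of thumb (reorganize light fragmentation, rebuild heavy fragmentation) and the index and table names below are placeholders to adjust for your own workload.

USE mydb
GO

-- Light fragmentation: reorganize the index (online and less intrusive).
ALTER INDEX IX_MyTable_MyColumn ON dbo.MyTable REORGANIZE;
GO

-- Heavy fragmentation: rebuild the index, optionally setting a new fill factor.
ALTER INDEX IX_MyTable_MyColumn ON dbo.MyTable REBUILD WITH (FILLFACTOR = 90);
GO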

The second query uses the sys.dm_exec_query_stats DMV to identify our top 10 queries ranked by average CPU time. The output shows us the statements that are the most expensive in terms of CPU resources and overhead.

Knowing which queries these are, I can rewrite them in a way that causes much less overhead.

In a new query window, type (copy-paste the following).


USE master
GO

SELECT TOP 10 query_stats.query_hash AS "Query Hash",
 SUM(query_stats.total_worker_time) / SUM(query_stats.execution_count) AS "Average CPU Time",
 MIN(query_stats.statement_text) AS "SQL Statement"
FROM
 (SELECT EQS.*,
 SUBSTRING(ST.text, (EQS.statement_start_offset/2) + 1,
 ((CASE statement_end_offset
 WHEN -1 THEN DATALENGTH(ST.text)
 ELSE EQS.statement_end_offset END
 - EQS.statement_start_offset)/2) + 1) AS statement_text
 FROM sys.dm_exec_query_stats AS EQS
 CROSS APPLY sys.dm_exec_sql_text(EQS.sql_handle) as ST) as query_stats
GROUP BY query_stats.query_hash
ORDER BY 2 DESC;
GO

Execute the  code above against your database and see the results.

Have a look at the picture below to see my results.

query-analysis

In the next demo I am going to use the sys.dm_os_wait_stats DMV and do some calculations to show the ratio between time spent waiting for a CPU to become available (signal waits) and time spent waiting for another resource, such as memory or disk, to free up (resource waits).

In a new query window, type (copy-paste the following).


USE master
GO
Select signal_wait_time_ms=sum(signal_wait_time_ms)

 ,'%signal (cpu) waits' = cast(100.0 * sum(signal_wait_time_ms) / sum (wait_time_ms) as numeric(20,2))

 ,resource_wait_time_ms=sum(wait_time_ms - signal_wait_time_ms)

 ,'%resource waits'= cast(100.0 * sum(wait_time_ms - signal_wait_time_ms) / sum (wait_time_ms) as numeric(20,2))

From sys.dm_os_wait_stats

Execute the  code above against your database and see the results.

Have a look at the picture below to see my results.

wait-stats

On this server, about 15% of the wait time is spent waiting for processing power to become available, versus almost 85% spent waiting for other resources, usually for the disks to catch up and deliver the data that needs to be loaded into memory, or for the data to be located in memory.

This indicates that the server is powerful enough as far as CPU is concerned, but I have to look at how much memory is used, what my disk drives are doing, and how well my storage is optimized on this server.
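To follow up on the memory side, the sys.dm_os_sys_memory DMV gives a quick snapshot of physical memory on the box. This is only a sketch of one way to start that investigation, not part of the original demo:

-- Snapshot of physical memory and the overall memory state of the server.
SELECT total_physical_memory_kb,
 available_physical_memory_kb,
 system_memory_state_desc
FROM sys.dm_os_sys_memory;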

In the next query I am going to use the sys.dm_db_index_usage_stats DMV. I want to look at indexes and see how often they are used. If there are indexes that have not been used since the server restarted, and the server has been running for a long time, then I probably do not need those indexes.

Even if an index is never read, every time the table is modified by inserting, updating, or deleting a record, the index has to be updated as well. The more unused indexes you have, the more time you spend updating and maintaining them, and when it comes to defragmenting indexes, you end up defragmenting indexes that you don't even use.


USE mydb
GO

SELECT o.name ,
 indexname = i.name ,
 i.index_id ,
 reads = user_seeks + user_scans + user_lookups ,
 writes = user_updates ,
 rows = ( SELECT SUM(p.rows)
 FROM sys.partitions p
 WHERE p.index_id = s.index_id
 AND s.object_id = p.object_id
 ) ,
 CASE WHEN s.user_updates < 1 THEN 100
 ELSE 1.00 * ( s.user_seeks + s.user_scans + s.user_lookups )
 / s.user_updates
 END AS reads_per_write ,
 'DROP INDEX ' + QUOTENAME(i.name) + ' ON ' + QUOTENAME(c.name) + '.'
 + QUOTENAME(OBJECT_NAME(s.object_id)) AS 'drop statement'
FROM sys.dm_db_index_usage_stats s
 INNER JOIN sys.indexes i ON i.index_id = s.index_id
 AND s.object_id = i.object_id
 INNER JOIN sys.objects o ON s.object_id = o.object_id
 INNER JOIN sys.schemas c ON o.schema_id = c.schema_id
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
 AND s.database_id = DB_ID()
 AND i.type_desc = 'nonclustered'
 AND i.is_primary_key = 0
 AND i.is_unique_constraint = 0
 AND ( SELECT SUM(p.rows)
 FROM sys.partitions p
 WHERE p.index_id = s.index_id
 AND s.object_id = p.object_id
 ) > 20000
ORDER BY reads

Execute the  code above against your database and see the results.

Have a look at the picture below to see my results.

index-usage

The query pulls information gathered since the server restarted to show which indexes have not been used very often. I have placed a filter that requires a table to have at least 20,000 rows for the results to have practical value. You can see that I am looking at reads, writes, and their ratio to decide whether it is worth keeping each index.
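If you decide that a reported index really is not worth keeping, the drop statement column of the result already contains the command to remove it. Executed for a hypothetical index, it would look like this:

-- Example only: the index and table names come from the query's drop statement column.
DROP INDEX [IX_SomeUnusedIndex] ON [dbo].[SomeTable]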

With DMVs we can monitor performance with minimal overhead. There are many DMVs available to us at the moment, and more are added with each new version of SQL Server.

Hope it helps!!!

DevExpress DXperience Universal Edition review May 24, 2014

Posted by fofo in devexpress.

I am a Microsoft Trainer and a former ASP.NET MVP, and I use Visual Studio to build .NET and, more specifically, ASP.NET applications.

I have built several ASP.NET applications using the DevExpress ASP.NET & MVC controls and libraries.

I have been using the DevExpress DXperience Universal Edition.

This is a subscription package available from DevExpress that includes all of their components and libraries. There are other packages such as Enterprise, WinForms, WPF, Silverlight and ASP.NET.

The Universal edition includes DevExtreme, their mobile development framework for Visual Studio. It also includes CodeRush, the eXpressApp Framework and the Business Intelligence Dashboard.

From the ASP.Net MVC controls & features I really liked the Application Themes.
This is an easy and handy way to personalize the look and feel of the web application.

Naturally I use the Data Grid for ASP.NET MVC control. It allows me to provide my clients with a very nice user experience very quickly.

It supports great out of the box functionality like master-detail and advanced lookup.

The ribbon control is another control I use often; it has been a great addition for categorizing commands.

The reporting possibilities are endless, and I can create master-detail reports very easily, along with side-by-side reports.

I have been using DevExpress MVC extensions to build various applications. Recently I built an ASP.NET MVC application that monitors the working days/hours/leaves of the staff of a particular organisation and is also connected with an access control system (card readers).

Some of the controls I used in this application include:

1) MVCxPivotGrid

2) MVCxGridView

3) MVCxGridView, Exporter, Calendar Control

4) MVCxGridView, MVCxTreeview

These are some screenshots of my application.

pic1

 

pic2

 

pic3

 

pic4

 

 

I have also written about DevExpress controls in my other development blog (http://weblogs.asp.net/dotnetstories).

In this post I am demonstrating how to bind an XPODataSource control to an ASPxGridView control.

In this post I am demonstrating how to use client-side events that can make the user experience of your web application much better by avoiding unnecessary page flickering and postbacks.

In this post I am demonstrating how to bind data from an ArrayList object to the ASPxGridView control.

In this post I am demonstrating how to implement Master-Detail functionality using the ASPxGridView control.

In this post I am demonstrating how to use the ASPxGridView and its great features that include sorting, grouping, filtering and summaries.

In conclusion, the DevExpress Universal subscription allows developers to keep creating high-performance Windows and web applications.

I have been able to develop great web experiences for my clients very quickly and efficiently.

 

 

Packt Publishing celebrates their 2000th title with an exclusive offer – We’ve got IT covered! March 26, 2014

Posted by fofo in general .net.

Known for their extensive range of pragmatic IT ebooks, Packt Publishing are celebrating their 2000th book title `Learning Dart’– they want their customers to celebrate too.
To mark this milestone Packt Publishing will launch a ‘Buy One Get One Free’ offer across all eBooks on March 18th – for a limited period only.
`Learning Dart’ was selected as a title and published by Packt earlier this year. As a project that aims to revolutionise a language as crucial as JavaScript, Dart is a great example of an emerging technology which aims to support the community and their requirement for constant improvement. The content itself explains how to develop apps using Dart and HTML5 in a model-driven and fast-paced approach, enabling developers to build more complex and high-performing web apps.
David Maclean, Managing Director explains `It’s not by chance that this book is our 2000th title. Our customers and community drive demand and it is our job to ensure that whatever they’re working on, Packt provides practical help and support.
At Packt we understand that sometimes our customers want to learn a new programming language pretty much from scratch, with little knowledge of similar language concepts. Other times our customers know a related language fairly well and therefore want a fast-paced primer that brings them up to a competent professional level quickly.
That’s what makes Packt different: all our books are specifically commissioned by category experts, based on intensive research of the technology and the key tasks.’
Since 2004, Packt Publishing has been providing practical IT-related information that enables everyone to learn and develop their IT knowledge, from novice to expert.
Packt is one of the most prolific and fast-growing tech book publishers in the world. Originally focused on open source software, Packt contributes back into the community paying a royalty on relevant books directly to open source projects. These projects have received over $400,000 as part of Packt’s Open Source Royalty Scheme to date.
Their books focus on practicality, recognising that readers are ultimately concerned with getting the job done. Packt’s digitally-focused business model allows them to quickly publish up-to-date books in very specific areas across a range of key categories – web development, game development, big data, application development, and more. Their commitment to providing a comprehensive range of titles has seen Packt publish 1054% more titles in 2013 than in 2006.
Erol Staveley, Publisher, says `Recent research shows that 88% of our customers are very satisfied with the service knowing that we offer a wide breadth of titles in a timely manner, and owing to the quality of service that they receive 94% of customers are willing to recommend Packt to friends and family. It’s great that we’ve hit such a significant milestone, and we want to continue delivering this fantastic content to our customers.’
Here are some of the best titles across Packt’s main categories – but Buy One, Get One Free will apply across all 2000 titles:
Web Development
Big Data & Cloud
Game Development
App Development