
DDL Triggers are AFTER triggers December 24, 2016

Posted by fofo in Sql Server, Sql Server 2005, SQL Server 2008, SQL Server 2012, SQL Server 2014, SQL Server 2016.

In this post I would like to look into DDL triggers and explain their functionality.

Let me start with a short introduction to triggers.

All SQL Server developers use triggers: a trigger is basically a mechanism that is invoked when a particular action occurs on a particular table.

Triggers consist of

  • A name
  • An action
  • The execution

The maximum size of a trigger name is 128 characters.
The action of a trigger can be either a DML statement (INSERT, UPDATE, or DELETE) or a DDL statement (CREATE, ALTER, DROP).
Therefore, there are two trigger forms: DML triggers and DDL triggers.

 

The AFTER and INSTEAD OF options are two additional options that you can define for a trigger.

  • AFTER triggers fire after the triggering action occurs.
  • INSTEAD OF triggers are executed instead of the corresponding triggering action.
  • AFTER triggers can be created only on tables, while INSTEAD OF triggers can be created on both tables and views (a short sketch of both forms follows below)
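To make the contrast concrete, here is a minimal sketch of both forms (the table, view, and trigger names are hypothetical):

CREATE TABLE dbo.Orders (OrderID INT PRIMARY KEY IDENTITY, Amount DECIMAL(18, 2));
GO

-- AFTER trigger: fires once the INSERT on the table has already happened
CREATE TRIGGER trg_Orders_AfterInsert ON dbo.Orders
AFTER INSERT
AS
 PRINT 'Rows were inserted into dbo.Orders';
GO

CREATE VIEW dbo.vOrders AS SELECT OrderID, Amount FROM dbo.Orders;
GO

-- INSTEAD OF trigger: replaces the INSERT against the view entirely
CREATE TRIGGER trg_vOrders_InsteadOfInsert ON dbo.vOrders
INSTEAD OF INSERT
AS
 INSERT INTO dbo.Orders (Amount) SELECT Amount FROM inserted;
GO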

DDL triggers were introduced in SQL Server 2005. DDL triggers are not INSTEAD OF triggers.

They are implemented as AFTER triggers, which means the operation occurs and is then caught in the trigger.

The operation can then optionally be rolled back, if you put a ROLLBACK statement in the trigger body.
This means they’re not quite as lightweight as you might think. Imagine doing the following:

ALTER TABLE MyTable ADD newcolumn VARCHAR (30) DEFAULT 'default value';

If we have a DDL trigger defined for ALTER_TABLE events (or for DDL_TABLE_EVENTS), the T-SQL batch above will fire it: every row in the table is first expanded to include the new column (as it has a non-null default), and all of that work is then undone when the trigger body rolls the operation back.

Type (copy-paste) the following T-SQL statements in a new query window in SSMS.



CREATE DATABASE sampleDB;

GO


USE sampleDB;

GO

CREATE TABLE Product
 (
 Pid INT PRIMARY KEY IDENTITY ,
 pname NVARCHAR(50) ,
 price DECIMAL(18, 4)
 );
GO

CREATE TRIGGER Not_Alter_Tables ON DATABASE
 FOR ALTER_TABLE
AS
 PRINT 'ALTER statements are not permitted on the tables of this DB'
 SELECT EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]',
 'nvarchar(max)')
 RAISERROR ('Tables cannot be altered in this database.', 16, 1)
 ROLLBACK;
GO

ALTER TABLE dbo.Product ADD newcolumn VARCHAR (30) DEFAULT 'default value'; 

Let me explain what I do in the code above.

I create a sample database and a sample empty table. I then create a trigger “Not_Alter_Tables” on this database; the trigger applies to all the tables in the database.

This trigger is a DDL trigger, hence an AFTER trigger. In the trigger I capture the T-SQL statement that invoked it, print a message, raise an error, and then roll back the operation.

Then I attempt to add a new column in the table.

In this example, when the trigger is invoked, the table will first be expanded to include the new column (as it has a non-null default), and then the operation is rolled back in the trigger body.

Have a look at the picture below.

[image: trigger]

 

In a nutshell, DDL triggers are AFTER triggers and can be quite expensive. In the example above, the better approach is not a DDL trigger but explicit permissions, e.g. REVOKE/DENY, to prevent users from altering the schema of the table.
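For example, a sketch of the permissions-based alternative (the user name is hypothetical):

-- Instead of letting the expensive ALTER happen and then rolling it back,
-- deny the permission up front so the statement never runs at all.
DENY ALTER ON SCHEMA::dbo TO SomeUser;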

Hope it helps!!!


Drop if Exists T-SQL Enhancement statement in SQL Server 2016 December 23, 2016

Posted by fofo in Sql Server, SQL Server 2016.

This is the fourth post in a series of posts about the new features of SQL Server 2016. You can find the first post in the SQL Server 2016 (JSON support in SQL Server 2016) series here.

A post on Row Level Security in SQL Server 2016 can be found here.

A post on Temporal Tables in SQL Server 2016 can be found here.

SQL Server 2016 was officially released back in June 2016.

I have installed SQL Server 2016 Enterprise edition on my machine, but you can use the SQL Server 2016 Developer edition or Express edition as well.

You can download the SQL Server 2016 Developer edition by joining the Visual Studio Dev Essentials program, which is free and simple. After joining, you will be able to download SQL Server 2016 Developer edition to your machine.

If we need to delete a database object, e.g. a table or a trigger, but we don’t know whether it exists, we are used to writing code like “if the table exists, drop it” or “if the trigger exists, drop it”.

There is a new syntax, “IF EXISTS”, in SQL Server 2016 that makes our life easier when it comes to dropping objects.

You can use it to drop any database object you want.
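For instance, a few sketches against other object types (the view, procedure, and index names are hypothetical):

DROP VIEW IF EXISTS dbo.MyView;
DROP PROCEDURE IF EXISTS dbo.MyProcedure;
DROP INDEX IF EXISTS IX_Product_pname ON dbo.Product;

In every case the statement simply does nothing if the object is absent, instead of raising an error.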

Let’s move on and demonstrate the new enhancement with a hands-on demo.

Type (copy paste) the following query in a new query window and execute it.


CREATE DATABASE sampleDB;

GO


USE sampleDB;

GO

CREATE TABLE Product
 (
 Pid INT PRIMARY KEY IDENTITY ,
 pname NVARCHAR(50) ,
 price DECIMAL(18, 4)
 );


SELECT *
FROM sys.tables;

IF OBJECT_ID('dbo.Product', 'U') IS NOT NULL
 DROP TABLE dbo.Product;

SELECT *
FROM sys.tables;


CREATE TABLE Product
 (
 Pid INT PRIMARY KEY IDENTITY ,
 pname NVARCHAR(50) ,
 price DECIMAL(18, 4)
 );

SELECT *
FROM sys.tables;


DROP TABLE IF EXISTS dbo.Product;

SELECT * FROM sys.tables;

Let me explain what I am doing in the code above.

I create a sample database and then create a table in it. Then I query sys.tables, and my table appears in the results since it exists.

Then I use the old syntax to check whether the table exists and drop it.

Then I query sys.tables again, and no rows are returned since the table has been dropped.

Then I create the sample table again. Querying sys.tables once more, my table appears in the results since it has been recreated.

Then I use the new syntax to drop it: DROP TABLE IF EXISTS dbo.Product;

Finally I query sys.tables one last time, and again no rows are returned since the table has been dropped.

This is a handy new T-SQL enhancement in SQL Server 2016.

Hope it helps!!!

Row Level Security in SQL Server 2016 December 10, 2016

Posted by fofo in Sql Server, SQL Server 2016.

This is the third post in a series of posts that highlight the new features of SQL Server 2016. You can find the first two posts in the SQL Server 2016 series (JSON support in SQL Server 2016, temporal tables) here & here.

SQL Server 2016 was officially released back in June 2016.

There are many new features in SQL Server 2016 and in my opinion the most important are:

  • Always Encrypted
  • Query Store
  • Row Level Security
  • Temporal Tables
  • Stretch Database
  • Dynamic Data Masking
  • PolyBase
  • JSON Support

I am going to discuss the “Row-Level Security” in SQL Server 2016 in this post.

I have installed SQL Server 2016 Enterprise edition on my machine, but you can use the SQL Server 2016 Developer edition as well.

You can download the SQL Server 2016 Developer edition by joining the Visual Studio Dev Essentials program, which is free and simple. After joining, you will be able to download SQL Server 2016 Developer edition to your machine.

In many cases in our applications we have a multi-tenant database with a shared schema.

In modern applications there are certain situations where we need to restrict the access to data to our users.

We have salespeople that need to access only their data (sales, reports, targets that apply only to them) in a web sales application.

We need students to only see their data (grades, stats) in an online web university application.

In all the scenarios above we need to prevent users/tenants from accessing data that does not belong to them.

The solution to the problems above was to write custom application code or to create SQL views.

The problem with SQL views is that if we need to protect 3-4 tables, we will spread the business logic all over the place. Obviously this kind of implementation soon becomes difficult to maintain, and the views are prone to runtime errors.

We can also use custom code to implement access restrictions in the application.

This solution is not the best either. If you need to use the same access logic for a database, but in different applications, you may need to do the same work multiple times.

It’s also difficult to maintain, especially as the code base grows. But the most important reason is that in some situations the custom code is actually impossible to use, especially if you do not own the code.

Row-Level Security in SQL Server 2016 was implemented to help us solve these kinds of problems in a much more efficient way. Now we have a mechanism to control who reads and writes data in our database tables, based on criteria that we define.

We can define a plethora of criteria. Row-level security is transparent to the application code; we don’t need to make any changes in the code. All our data access logic is stored within the database.

Once implemented, Row-Level Security applies a security predicate to all queries executed on a table. The predicate dictates which rows a specific user can access.

Let’s start with our hands-on example.

Suppose we have a Projects table. We need project managers to see only the details of the projects they are overseeing and are responsible for.

 

Type (copy paste) the following query in a new query window and execute it.

USE master;
GO

CREATE DATABASE RowLevelSecurity
GO
 
USE RowLevelSecurity
GO

CREATE TABLE dbo.Projects
 (
 ProjectID INT PRIMARY KEY IDENTITY,
 ProjectName NVARCHAR(50) ,
 ProjectManager NVARCHAR(50) ,
 budget DECIMAL(18,4),
 DateDue DATETIME

 );

INSERT dbo.Projects
VALUES ('NopCommerce', 'George', 12512, '12/12/2016'),
 ('Custom Asp.Net MVC', 'George', 24512, '11/11/2017'),
 ('Social Media Marketing', 'Mary', 3562, '03/03/2017'),
 ('Google Adwords Remarketing', 'Mary', 4512, '04/04/2017'),
 ('Google Adwords Search', 'Mary', 9218, '04/06/2017'),
 ('SQL Server Reporting Services', 'Tom', 8765, '04/03/2017'),
 ('SQL Server Upgrade to 2016', 'Tom', 8512, '04/08/2017')
; 


create user George without login;
create user Mary without login;
create user Tom without login;


GRANT SELECT ON dbo.Projects TO George
GRANT SELECT ON dbo.Projects TO Mary
GRANT SELECT ON dbo.Projects TO Tom
 

SELECT * FROM dbo.Projects

In the code above, I create a sample database.

Then I create a Projects table and then insert some sample data. Then I create users without logins and grant them select rights on the table Projects. Finally I query the table to see the results.

Type (copy paste) the following query in a new query window and execute it.


CREATE FUNCTION dbo.fn_securityProjects (@FilterColumnName AS sysname)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS fn_securitypredicate_result
WHERE @FilterColumnName = USER_NAME();
GO

CREATE SECURITY POLICY FilterProjects
ADD FILTER PREDICATE dbo.fn_securityProjects(ProjectManager)
ON dbo.Projects
WITH (STATE = ON);
GO

In the code above I create an inline function and a security policy. My function will be used to filter out rows based on the database user.

The function code above accepts a parameter named @FilterColumnName, and then uses this parameter to filter out the rows based on the database user, using the user_name() function.

This function will be used to filter rows from my Projects table by associating this function with a security policy.

In the security policy I added a FILTER PREDICATE that references my dbo.fn_securityProjects function. By creating this security policy, the SQL Server engine makes sure that every time a database user runs a SQL command referencing the dbo.Projects table, the filter predicate function is also executed.

Now let’s try out our Row-Level Security functionality.

Type (copy paste) the following query in a new query window and execute it.

execute as user = 'George';
select * from dbo.Projects; 
revert;
 
execute as user = 'Mary';
select * from dbo.Projects; 
revert;
 
execute as user = 'Tom';
select * from dbo.Projects; 
revert;
 

Have a look at the picture below.

[image: users-db]

As you can see, by looking at this output, the SELECT statement that was run when the database user was set to George returned 2 rows, each with the value ‘George’ in the ProjectManager column. Mary got 3 rows, and Tom got only the 2 rows associated with his user name.
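Note that the policy can be switched off and on without being dropped, which is handy when an administrator needs to see every row. A minimal sketch:

ALTER SECURITY POLICY FilterProjects WITH (STATE = OFF);
SELECT * FROM dbo.Projects; -- all 7 rows are visible again
ALTER SECURITY POLICY FilterProjects WITH (STATE = ON);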

This is a great new addition to SQL Server 2016 and gives us fine-grained control over our data.

Hope it helps!!!

Temporal Tables in SQL Server 2016 November 26, 2016

Posted by fofo in Sql Server, SQL Server 2016.

This is the second post in a series of posts about the new features of SQL Server 2016. You can find the first post in the SQL Server 2016 (JSON support in SQL Server 2016) series here.

SQL Server 2016 was officially released back in June 2016.

There are many new features in SQL Server 2016 and in my opinion the most important are:

  • Always Encrypted
  • Query Store
  • Row Level Security
  • Temporal Tables
  • Stretch Database
  • Dynamic Data Masking
  • PolyBase
  • JSON Support

I am going to discuss the “Temporal Tables” in SQL Server 2016 in this post.

I have installed SQL Server 2016 Enterprise edition on my machine, but you can use the SQL Server 2016 Developer edition as well.

You can download the SQL Server 2016 Developer edition by joining the Visual Studio Dev Essentials program, which is free and simple. After joining, you will be able to download SQL Server 2016 Developer edition to your machine.

Temporal tables are another brilliant feature that makes working with historical data in SQL Server much easier. Temporal tables were introduced in the ANSI SQL:2011 standard and are now supported in SQL Server 2016.

A system-versioned temporal table is a new type of user table in SQL Server 2016, designed to keep a full history of data changes and allow easy point in time analysis.

Please do not confuse temporal tables with temporary tables.

We can use them to look at historical data in an easy way by having the ability to reconstruct the data at any point in time, or even to compare data from two periods without having this history in our current table.

We can use them to understand business trends over time. This is very important for people in the Sales or Marketing Departments of a big business.

Sometimes developers and DBAs accidentally delete data. Temporal tables can help us recover from accidental changes. We can also debug errors and understand data behavior.

Sometimes, when we make a mistake and delete or update data by accident, the only way to recover the data is to restore a valid, recently taken backup.

That takes time and surely temporal tables give us the ability to recover data in a much easier and efficient way.

Another important thing to note is that we can either create a new temporal table or alter the schema of an existing table. In order to create a system-versioned table we need:

  1. A table that has a primary key (that goes without saying)
  2. The table option SYSTEM_VERSIONING set to ON
  3. Two non-nullable DATETIME2 columns representing the start and the end of the row’s validity period
  4. A designation of the period columns (PERIOD FOR SYSTEM_TIME)
  5. A linked history table, which holds the past states of the modified rows

Also bear in mind that all the usual T-SQL statements (INSERT, UPDATE, SELECT) remain fully valid against a temporal table.

Let’s move on and demonstrate the temporal tables functionality with a hands-on demo.

Type (copy paste) the following query in a new query window and execute it.


CREATE DATABASE TheTemporalDB
GO

USE TheTemporalDB
GO
CREATE TABLE Project 
(
 ProjectID int NOT NULL IDENTITY(1,1) PRIMARY KEY, 
 ProjectName varchar(50) NOT NULL, 
 ManagerID int NULL, 
 ValidFrom datetime2 GENERATED ALWAYS AS ROW START NOT NULL, 
 ValidTo datetime2 GENERATED ALWAYS AS ROW END NOT NULL, 
 PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo) 
)
WITH (SYSTEM_VERSIONING = ON)
GO

In the code above, I create a sample database and a temporal table.

If you look at the picture below you will notice the temporal table and the system-generated history table that is linked with it.

 

[image: temporal-table-2]

You can see the two tables have exactly the same columns. The ProjectID column in the history table does not have a primary key constraint; the history table schema mirrors the main table schema except for constraints.

Type (copy paste) the following query in a new query window and execute it.


SELECT
 object_id,
 name,
 temporal_type,
 temporal_type_desc,
 history_table_id
FROM sys.tables
WHERE object_id = OBJECT_ID('dbo.Project', 'U')
 OR object_id = (
 SELECT history_table_id
 FROM sys.tables
 WHERE object_id = OBJECT_ID('dbo.Project', 'U')
 )

GO

By executing the code above you get information about the main table and the temporal history table.

Have a look at the picture below.

 

[image: temporal-table-3]

 

If you want to delete those tables, you must first turn off system versioning and then drop the two tables individually.

Type (copy paste) the following query in a new query window and execute it.


ALTER TABLE Project SET (SYSTEM_VERSIONING = OFF)
DROP TABLE Project
DROP TABLE [dbo].[MSSQL_TemporalHistoryFor_565577053] -- the numeric suffix is the object_id of the main table, so it will be different on your machine

Now let’s recreate our temporal table, this time specifying the name of a custom history table.

Type (copy paste) the following query in a new query window and execute it.

USE TheTemporalDB
GO
CREATE TABLE Project
(
 ProjectID int NOT NULL IDENTITY(1,1) PRIMARY KEY, 
 ProjectName varchar(50) NOT NULL, 
 ManagerID int NULL, 
 ValidFrom datetime2 GENERATED ALWAYS AS ROW START NOT NULL, 
 ValidTo datetime2 GENERATED ALWAYS AS ROW END NOT NULL, 
 PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo) 
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.ProjectHistory))

The name of the history table is now ProjectHistory.

Now let’s create a simple table, populate it, and then convert it to a temporal table.

Type (copy paste) the following query in a new query window and execute it.


USE TheTemporalDB
GO

ALTER TABLE Project SET (SYSTEM_VERSIONING = OFF)
DROP TABLE Project
DROP TABLE [dbo].[ProjectHistory]

CREATE TABLE Project(
 ProjectId int PRIMARY KEY,
 ProjectName varchar(20) NOT NULL,
 DepartmentName varchar(50) NOT NULL
)
GO

INSERT INTO Project (ProjectId, ProjectName, DepartmentName) VALUES
 (1, 'NopCommerce E-shop', 'E-commerce SubUnit'),
 (2, 'Elastic Search', 'NoSQL DBs SubUnit'),
 (3, 'Adwords Campaign', 'Digital Marketing SubUnit')
GO

I drop the tables I created earlier. I create a simple table and insert some sample values in it.

Now I will convert the table to a temporal table.

Type (copy paste) the following query in a new query window and execute it.


USE TheTemporalDB
GO

ALTER TABLE Project ADD
 StartDate datetime2 GENERATED ALWAYS AS ROW START NOT NULL DEFAULT CAST('1900-01-01 00:00:00.0000000' AS datetime2),
 EndDate datetime2 GENERATED ALWAYS AS ROW END NOT NULL DEFAULT CAST('9999-12-31 23:59:59.9999999' AS datetime2),
 PERIOD FOR SYSTEM_TIME (StartDate, EndDate)
GO

ALTER TABLE Project 
 SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.ProjectHistory))
GO

Then I query the two tables. The history table naturally has no data in it yet.

Then I update (3 times) the first row of the table where ProjectID is 1.

Then I query the two tables again. This time the history table has data in it; all the changes are stored there.

Type (copy paste) the following query in a new query window and execute it.


USE TheTemporalDB
GO

SELECT * FROM [dbo].[Project]
SELECT * FROM [dbo].[ProjectHistory]

UPDATE Project SET ProjectName = 'Custom ASP.Net MVC' WHERE ProjectId = 1


UPDATE Project SET ProjectName = 'ERP Integration' WHERE ProjectId = 1


UPDATE Project SET ProjectName = 'Theme Conversion' WHERE ProjectId = 1


SELECT * FROM [dbo].[Project]
SELECT * FROM [dbo].[ProjectHistory]

Have a look at the picture below.

[image: temporal-table-4]

You can see that the history table has all the changes (all 3 updates) to the ProjectName column.
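Since every change is versioned with timestamps, we can also reconstruct a row as it was at any moment using the FOR SYSTEM_TIME clause. A minimal sketch (the timestamp is hypothetical; pick one that falls between your UPDATE statements, remembering that the period columns are recorded in UTC):

SELECT * FROM dbo.Project
FOR SYSTEM_TIME AS OF '2016-11-26 10:00:00' -- hypothetical point in time
WHERE ProjectId = 1;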

Finally, one should note that when it comes to triggers and temporal tables, INSTEAD OF triggers are unsupported, while AFTER triggers are supported only on the current table and not on the history table.

Tables with foreign key constraints that use cascading updates and deletes are also not supported.

In-Memory OLTP tables and FILESTREAM tables cannot become temporal tables.

 

Hope it helps!!!

Built-in support for JSON in SQL Server 2016 November 20, 2016

Posted by fofo in Sql Server, SQL Server 2016.

This is going to be the first post in a series of posts about the new features of SQL Server 2016.

SQL Server 2016 was officially released back in June 2016.

There are many new features in SQL Server 2016 and in my opinion the most important are:

  • Always Encrypted
  • Query Store
  • Row Level Security
  • Temporal Tables
  • Stretch Database
  • Dynamic Data Masking
  • PolyBase
  • JSON Support

I am going to discuss the JSON support in SQL Server 2016 in this post.

I have installed SQL Server 2016 Enterprise edition on my machine, but you can use the SQL Server 2016 Developer edition as well. You can download SQL Server 2016 Developer edition for free.

You can download the SQL Server 2016 Developer edition by joining the Visual Studio Dev Essentials program, which is free and simple. After joining, you will be able to download SQL Server 2016 Developer edition to your machine.

JSON is an acronym for JavaScript Object Notation, and it’s a lightweight format for exchanging data between different source types, similar to how XML is used.

The transferred text is easy to read, and the objects consist of attribute-value pairs.

Let’s look at a simple JSON example:

 

{"doctors":[
{"firstName":"John", "lastName":"Smith"},
{"firstName":"Anna", "lastName":"Bernard"},
{"firstName":"Peter", "lastName":"Feller"}
]}

Many SQL developers assume that the SQL Server team has provided a built-in JSON data type. That is not true.

JSON text is typically stored in varchar or nvarchar columns, and is indexed as plain text.

SQL Server 2016 provides built-in support for importing, exporting, parsing, and querying JSON documents or fragments.

Hence, we can format SQL Server data or query output as JSON by adding the FOR JSON clause to a SELECT statement.

On the other hand, we can convert JSON into rows that can feed a table in our database. This is done using the OPENJSON function.

Let’s start with our first example.

In all my examples I will be using the AdventureWorks 2016 database. You can download it here.

We will convert the query results against our database to JSON format using the FOR JSON clause.

We will create a SELECT statement from a table and then transform it to JSON.

Open a new query window. Type (copy-paste the following) in that query window. Execute the query.

USE AdventureWorks2016

SELECT Production.Product.Name ,
 Production.Product.StandardCost ,
 Production.Product.ListPrice ,
 Production.Product.Size
FROM Production.Product
WHERE ListPrice > 1800
FOR JSON AUTO

 

I copy and paste the output of the query to an online JSON parser – http://jsonviewer.stack.hu/

This is the output after formatting the JSON:

[
{"Name": "Road-150 Red, 62", "StandardCost": 2171.2942, "ListPrice": 3578.2700, "Size": "62"},
{"Name": "Road-150 Red, 44", "StandardCost": 2171.2942, "ListPrice": 3578.2700, "Size": "44"},
{"Name": "Road-150 Red, 48", "StandardCost": 2171.2942, "ListPrice": 3578.2700, "Size": "48"},
{"Name": "Road-150 Red, 52", "StandardCost": 2171.2942, "ListPrice": 3578.2700, "Size": "52"},
{"Name": "Road-150 Red, 56", "StandardCost": 2171.2942, "ListPrice": 3578.2700, "Size": "56"},
{"Name": "Mountain-100 Silver, 38", "StandardCost": 1912.1544, "ListPrice": 3399.9900, "Size": "38"},
{"Name": "Mountain-100 Silver, 42", "StandardCost": 1912.1544, "ListPrice": 3399.9900, "Size": "42"},
{"Name": "Mountain-100 Silver, 44", "StandardCost": 1912.1544, "ListPrice": 3399.9900, "Size": "44"},
{"Name": "Mountain-100 Silver, 48", "StandardCost": 1912.1544, "ListPrice": 3399.9900, "Size": "48"},
{"Name": "Mountain-100 Black, 38", "StandardCost": 1898.0944, "ListPrice": 3374.9900, "Size": "38"},
{"Name": "Mountain-100 Black, 42", "StandardCost": 1898.0944, "ListPrice": 3374.9900, "Size": "42"},
{"Name": "Mountain-100 Black, 44", "StandardCost": 1898.0944, "ListPrice": 3374.9900, "Size": "44"},
{"Name": "Mountain-100 Black, 48", "StandardCost": 1898.0944, "ListPrice": 3374.9900, "Size": "48"},
{"Name": "Mountain-200 Silver, 38", "StandardCost": 1265.6195, "ListPrice": 2319.9900, "Size": "38"},
{"Name": "Mountain-200 Silver, 42", "StandardCost": 1265.6195, "ListPrice": 2319.9900, "Size": "42"},
{"Name": "Mountain-200 Silver, 46", "StandardCost": 1265.6195, "ListPrice": 2319.9900, "Size": "46"},
{"Name": "Mountain-200 Black, 38", "StandardCost": 1251.9813, "ListPrice": 2294.9900, "Size": "38"},
{"Name": "Mountain-200 Black, 42", "StandardCost": 1251.9813, "ListPrice": 2294.9900, "Size": "42"},
{"Name": "Mountain-200 Black, 46", "StandardCost": 1251.9813, "ListPrice": 2294.9900, "Size": "46"},
{"Name": "Road-250 Red, 44", "StandardCost": 1518.7864, "ListPrice": 2443.3500, "Size": "44"},
{"Name": "Road-250 Red, 48", "StandardCost": 1518.7864, "ListPrice": 2443.3500, "Size": "48"},
{"Name": "Road-250 Red, 52", "StandardCost": 1518.7864, "ListPrice": 2443.3500, "Size": "52"},
{"Name": "Road-250 Red, 58", "StandardCost": 1554.9479, "ListPrice": 2443.3500, "Size": "58"},
{"Name": "Road-250 Black, 44", "StandardCost": 1554.9479, "ListPrice": 2443.3500, "Size": "44"},
{"Name": "Road-250 Black, 48", "StandardCost": 1554.9479, "ListPrice": 2443.3500, "Size": "48"},
{"Name": "Road-250 Black, 52", "StandardCost": 1554.9479, "ListPrice": 2443.3500, "Size": "52"},
{"Name": "Road-250 Black, 58", "StandardCost": 1554.9479, "ListPrice": 2443.3500, "Size": "58"},
{"Name": "Touring-1000 Yellow, 46", "StandardCost": 1481.9379, "ListPrice": 2384.0700, "Size": "46"},
{"Name": "Touring-1000 Yellow, 50", "StandardCost": 1481.9379, "ListPrice": 2384.0700, "Size": "50"},
{"Name": "Touring-1000 Yellow, 54", "StandardCost": 1481.9379, "ListPrice": 2384.0700, "Size": "54"},
{"Name": "Touring-1000 Yellow, 60", "StandardCost": 1481.9379, "ListPrice": 2384.0700, "Size": "60"},
{"Name": "Touring-1000 Blue, 46", "StandardCost": 1481.9379, "ListPrice": 2384.0700, "Size": "46"},
{"Name": "Touring-1000 Blue, 50", "StandardCost": 1481.9379, "ListPrice": 2384.0700, "Size": "50"},
{"Name": "Touring-1000 Blue, 54", "StandardCost": 1481.9379, "ListPrice": 2384.0700, "Size": "54"},
{"Name": "Touring-1000 Blue, 60", "StandardCost": 1481.9379, "ListPrice": 2384.0700, "Size": "60"}
]

In AUTO mode, the structure of the SELECT statement determines the format of the JSON output. If we want to explicitly specify the format of the output, we can use the PATH mode instead of AUTO.

Open a new query window. Type (copy-paste the following) in that query window. Execute the query.

USE AdventureWorks2016

--AUTO
SELECT Production.Product.Name ,
 Production.Product.StandardCost ,
 Production.Product.ListPrice ,
 Production.Product.Size ,
 Production.ProductModel.Name AS modelname
FROM Production.Product
 INNER JOIN Production.ProductModel ON Production.Product.ProductModelID = Production.ProductModel.ProductModelID
WHERE ListPrice > 1800
 AND ProductModel.Name IN ( 'Road-250' )
 FOR JSON AUTO

--PATH

SELECT Production.Product.Name ,
 Production.Product.StandardCost ,
 Production.Product.ListPrice ,
 Production.Product.Size ,
 Production.ProductModel.Name AS modelname
FROM Production.Product
 INNER JOIN Production.ProductModel ON Production.Product.ProductModelID = Production.ProductModel.ProductModelID
WHERE ListPrice > 922
 AND ProductModel.Name IN ( 'Road-250' )
 FOR JSON PATH

GO
 

This is the output of the AUTO statement.

[
{"Name": "Road-250 Red, 44", "StandardCost": 1518.7864, "ListPrice": 2443.3500, "Size": "44", "Production.ProductModel": [{"modelname": "Road-250"}]},
{"Name": "Road-250 Red, 48", "StandardCost": 1518.7864, "ListPrice": 2443.3500, "Size": "48", "Production.ProductModel": [{"modelname": "Road-250"}]},
{"Name": "Road-250 Red, 52", "StandardCost": 1518.7864, "ListPrice": 2443.3500, "Size": "52", "Production.ProductModel": [{"modelname": "Road-250"}]},
{"Name": "Road-250 Red, 58", "StandardCost": 1554.9479, "ListPrice": 2443.3500, "Size": "58", "Production.ProductModel": [{"modelname": "Road-250"}]},
{"Name": "Road-250 Black, 44", "StandardCost": 1554.9479, "ListPrice": 2443.3500, "Size": "44", "Production.ProductModel": [{"modelname": "Road-250"}]},
{"Name": "Road-250 Black, 48", "StandardCost": 1554.9479, "ListPrice": 2443.3500, "Size": "48", "Production.ProductModel": [{"modelname": "Road-250"}]},
{"Name": "Road-250 Black, 52", "StandardCost": 1554.9479, "ListPrice": 2443.3500, "Size": "52", "Production.ProductModel": [{"modelname": "Road-250"}]},
{"Name": "Road-250 Black, 58", "StandardCost": 1554.9479, "ListPrice": 2443.3500, "Size": "58", "Production.ProductModel": [{"modelname": "Road-250"}]}
]

This is the output of the PATH statement.

[
{"Name": "Road-250 Red, 44", "StandardCost": 1518.7864, "ListPrice": 2443.3500, "Size": "44", "modelname": "Road-250"},
{"Name": "Road-250 Red, 48", "StandardCost": 1518.7864, "ListPrice": 2443.3500, "Size": "48", "modelname": "Road-250"},
{"Name": "Road-250 Red, 52", "StandardCost": 1518.7864, "ListPrice": 2443.3500, "Size": "52", "modelname": "Road-250"},
{"Name": "Road-250 Red, 58", "StandardCost": 1554.9479, "ListPrice": 2443.3500, "Size": "58", "modelname": "Road-250"},
{"Name": "Road-250 Black, 44", "StandardCost": 1554.9479, "ListPrice": 2443.3500, "Size": "44", "modelname": "Road-250"},
{"Name": "Road-250 Black, 48", "StandardCost": 1554.9479, "ListPrice": 2443.3500, "Size": "48", "modelname": "Road-250"},
{"Name": "Road-250 Black, 52", "StandardCost": 1554.9479, "ListPrice": 2443.3500, "Size": "52", "modelname": "Road-250"},
{"Name": "Road-250 Black, 58", "StandardCost": 1554.9479, "ListPrice": 2443.3500, "Size": "58", "modelname": "Road-250"}
]

Now let’s see how we can use the ROOT option.

Open a new query window. Type (copy-paste the following) in that query window. Execute the query.

USE AdventureWorks2016

SELECT Production.Product.Name ,
 Production.Product.StandardCost ,
 Production.Product.ListPrice ,
 Production.Product.Size ,
 Production.ProductModel.Name AS modelname
FROM Production.Product
 INNER JOIN Production.ProductModel ON Production.Product.ProductModelID = Production.ProductModel.ProductModelID
WHERE ListPrice > 1800
 AND ProductModel.Name IN ( 'Road-250' )
 FOR JSON PATH,ROOT ('Production.Product')

This is the output of the JSON result after formatting.

{
"Production.Product": [
{"Name": "Road-250 Red, 44", "StandardCost": 1518.7864, "ListPrice": 2443.3500, "Size": "44", "modelname": "Road-250"},
{"Name": "Road-250 Red, 48", "StandardCost": 1518.7864, "ListPrice": 2443.3500, "Size": "48", "modelname": "Road-250"},
{"Name": "Road-250 Red, 52", "StandardCost": 1518.7864, "ListPrice": 2443.3500, "Size": "52", "modelname": "Road-250"},
{"Name": "Road-250 Red, 58", "StandardCost": 1554.9479, "ListPrice": 2443.3500, "Size": "58", "modelname": "Road-250"},
{"Name": "Road-250 Black, 44", "StandardCost": 1554.9479, "ListPrice": 2443.3500, "Size": "44", "modelname": "Road-250"},
{"Name": "Road-250 Black, 48", "StandardCost": 1554.9479, "ListPrice": 2443.3500, "Size": "48", "modelname": "Road-250"},
{"Name": "Road-250 Black, 52", "StandardCost": 1554.9479, "ListPrice": 2443.3500, "Size": "52", "modelname": "Road-250"},
{"Name": "Road-250 Black, 58", "StandardCost": 1554.9479, "ListPrice": 2443.3500, "Size": "58", "modelname": "Road-250"}
]
}

We wanted to specify a root element for our JSON output, so we used the ROOT option.

The nodes returned by our query are children of the root node Production.Product.

We can also convert JSON data to rows and columns.

We can use the OPENJSON function. There are two ways of calling OPENJSON: with a default schema, in which case the function returns a table with one row for each property of the object or for each element of the array, or with an explicit schema, which we will see shortly.

Open a new query window. Type (copy-paste the following) in that query window. Execute the query.


DECLARE @json NVARCHAR(MAX) = N'
{
 "Name":"Road-155,62",
 "StandardCost":2123,
 "ListPrice":3123,
 "Size":62

}';
 
SELECT * FROM OPENJSON(@json);

The output of this statement is:

key           value         type
------------  ------------  ----
Name          Road-155,62   1
StandardCost  2123          2
ListPrice     3123          2
Size          62            2

 

As you can see, the OPENJSON function returns three columns:

key: Property name within the object or index of the element within the array.
value: Property value within the object or value of the array element specified by the index.
type: Value’s data type, represented numerically, as described in the following table:

Numeric value   Data type
0               null
1               string
2               int
3               true or false
4               array
5               object
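For instance, we can use the type column to filter the properties returned; a small sketch against the earlier document:

DECLARE @json NVARCHAR(MAX) = N'
{
 "Name":"Road-155,62",
 "StandardCost":2123,
 "ListPrice":3123,
 "Size":62
}';

-- keep only the string-typed properties (type = 1), i.e. Name
SELECT [key], [value]
FROM OPENJSON(@json)
WHERE [type] = 1;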

We can also use the OPENJSON function with an explicit schema, and in this case the function returns a table with the schema that we define in the WITH clause.

In the WITH clause, we specify the output columns, their types, and the paths of the source properties for each column.

Open a new query window. Type (copy-paste the following) in that query window. Execute the query.


DECLARE @JSON NVARCHAR(max);
SET @JSON = N'{"Products":
{"Production.Product":
 [
 {
 "Name": "Road-250 Red, 44",
 "StandardCost": 1518.7864,
 "ListPrice": 2443.3500,
 "Size": "44",
 "ProductModel":
 {
 "modelname": "Road-250"
 }
 },
 {
 "Name": "Road-250 Red, 48",
 "StandardCost": 1518.7864,
 "ListPrice": 2443.3500,
 "Size": "48",
 "ProductModel":
 {
 "modelname": "Road-250"
 }
 }
 ]
 }
 }'

 -- the key "Production.Product" contains a dot, so it must be quoted inside the path
 select * from OPENJSON (@JSON, '$.Products."Production.Product"')

 with ( Name nvarchar(50) '$.Name',
 StandardCost decimal(18,2) '$.StandardCost',
 Listprice decimal(18,2) '$.ListPrice',
 Size int '$.Size',
 modelname nvarchar(100) '$.ProductModel.modelname'
 )

 

We can see that Name, StandardCost, ListPrice, and Size come directly under Production.Product, while modelname has another parent, ProductModel, so its path also includes the ProductModel node.
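The same path syntax also works with the JSON_VALUE and JSON_QUERY built-in functions, which extract a single scalar value or a JSON fragment respectively; a minimal sketch on a fragment of the document above:

DECLARE @j NVARCHAR(MAX) = N'{"Name":"Road-250 Red, 44","ProductModel":{"modelname":"Road-250"}}';

SELECT JSON_VALUE(@j, '$.Name') AS Name, -- scalar property
 JSON_VALUE(@j, '$.ProductModel.modelname') AS modelname, -- nested scalar
 JSON_QUERY(@j, '$.ProductModel') AS modelobject; -- the JSON object itself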

Finally, let’s have a look at the ISJSON built-in function. This function validates JSON text, so we can find out whether a string contains valid JSON.

In this final example I create a table variable with a column that stores string values holding data formatted as JSON, and I populate it with two rows.

Executing the query below returns 2 rows, because both values are valid JSON.

This is a good function for validating strings that are supposed to have a JSON format before exchanging them with client applications.

 
DECLARE @temp TABLE
 (
 id INT ,
 value NVARCHAR(MAX)
 )

INSERT INTO @temp
VALUES ( 1, '{
 "Name": "Road-150 Red, 62",
 "StandardCost": 2171.2942,
 "ListPrice": 3578.2700,
 "Size": "62"
 }' ), ( 2, '{
 "Name": "Road-150 Red, 44",
 "StandardCost": 2171.2942,
 "ListPrice": 3578.2700,
 "Size": "44"
 }' )

 SELECT * FROM @temp
 where ISJSON([Value]) > 0 
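
A common follow-on use (a sketch; the table and constraint names are hypothetical) is to enforce valid JSON at write time with a CHECK constraint, so invalid documents are rejected on INSERT:

CREATE TABLE dbo.ProductJson
 (
 id INT PRIMARY KEY ,
 value NVARCHAR(MAX) ,
 CONSTRAINT CK_ProductJson_value_is_json CHECK (ISJSON(value) > 0)
 );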

JSON support in SQL Server is great and a much anticipated feature.

Hope it helps!!!