
DDL Triggers are AFTER triggers December 24, 2016

Posted by fofo in Sql Server, Sql Server 2005, SQL Server 2008, SQL Server 2012, SQL Server 2014, SQL Server 2016.

In this post I would like to look into DDL triggers and explain their functionality.

Let me start with a short introduction on triggers

All SQL Server developers use triggers: a mechanism that is invoked automatically when a particular action occurs on a particular table.

Triggers consist of

  • A name
  • An action
  • The execution

The maximum size of a trigger name is 128 characters.
The action of a trigger can be either a DML statement (INSERT, UPDATE, or DELETE) or a DDL statement (CREATE, ALTER, DROP).
Therefore, there are two trigger forms: DML triggers and DDL triggers.

 

The AFTER and INSTEAD OF options are two additional options that you can define for a trigger.

  • AFTER triggers fire after the triggering action occurs.
  • INSTEAD OF triggers are executed instead of the corresponding triggering action.
  • AFTER triggers can be created only on tables, while INSTEAD OF triggers can be created on both tables and views
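The difference can be sketched with two minimal DML triggers (table and view names here are hypothetical, and the view is assumed to select from the table):

```sql
-- Hypothetical objects: a table dbo.Customer and a view dbo.CustomerView over it.

-- An AFTER trigger fires once the INSERT has already happened:
CREATE TRIGGER trg_Customer_AfterInsert ON dbo.Customer
AFTER INSERT
AS
    PRINT 'Row(s) were inserted into dbo.Customer';
GO

-- An INSTEAD OF trigger replaces the triggering action entirely,
-- and is also the way to make a view accept INSERTs:
CREATE TRIGGER trg_CustomerView_Insert ON dbo.CustomerView
INSTEAD OF INSERT
AS
    INSERT INTO dbo.Customer (name)
    SELECT name FROM inserted;
GO
```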

DDL triggers were introduced in SQL Server 2005. DDL triggers are not INSTEAD OF triggers.

They are implemented as AFTER triggers, which means the operation occurs and is then caught in the trigger.

The operation can optionally be rolled back, if you put a ROLLBACK statement in the trigger body.
This means they’re not quite as lightweight as you might think. Imagine doing the following:

ALTER TABLE MyTable ADD newcolumn VARCHAR (30) DEFAULT 'default value';

If we have a DDL trigger defined for ALTER_TABLE events, or DDL_TABLE_EVENTS, the T-SQL batch above will fire it, but only after every row in the table has been expanded to include the new column (as it has a non-null default); only then is the operation rolled back by your trigger body.

Type (copy-paste) the following T-SQL statements in a new query window in SSMS.



CREATE DATABASE sampleDB;

GO


USE sampleDB;

GO

CREATE TABLE Product
 (
 Pid INT PRIMARY KEY
 IDENTITY ,
 pname NVARCHAR(50) ,
 price DECIMAL(18, 4)
 );

CREATE TRIGGER Not_Alter_Tables ON DATABASE
 FOR ALTER_TABLE
AS
 PRINT 'ALTER statements are not permitted on the tables of this DB'
 SELECT EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]',
 'nvarchar(max)')
 RAISERROR ('Tables cannot be altered in this database.', 16, 1) 
 ROLLBACK;
GO

ALTER TABLE dbo.Product ADD newcolumn VARCHAR (30) DEFAULT 'default value'; 

Let me explain what I do in the code above.

I create a sample database. I create a sample empty table. I create a trigger "Not_Alter_Tables" on this database. This trigger applies to all the tables in the database.

This trigger is a DDL trigger, hence an AFTER trigger. In the trigger I capture the T-SQL statement that invoked the trigger, I print a statement, raise an error and then rollback the operation.

Then I attempt to add a new column in the table.

In this example, when the trigger is invoked, the table will already have been expanded to include the new column (as it has a non-null default), and then the operation is rolled back in the trigger body.

Have a look at the picture below.

[Image: trigger]

 

In a nutshell, DDL triggers are AFTER triggers and can be quite expensive. For the scenario above, the better approach is not a DDL trigger but explicit permissions, e.g. REVOKE/DENY, to prevent users from altering the schema of the table.
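A minimal sketch of that permissions-based alternative (the user name is hypothetical):

```sql
-- Deny schema changes up front instead of rolling them back in a trigger.
-- 'someuser' is a hypothetical database user.
DENY ALTER ON SCHEMA::dbo TO someuser;

-- Any ALTER TABLE by someuser on dbo objects now fails immediately,
-- without the data movement and rollback cost of an AFTER DDL trigger.
```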

Hope it helps!!!

Looking into DMVs in SQL Server for Performance Tuning May 17, 2015

Posted by fofo in Sql Server, SQL Server 2008, SQL Server 2012, SQL Server 2014.

I have been delivering a Microsoft Certified Course in MS SQL Server 2014 recently and I was highlighting with several examples and demos the importance of DMOs (DMVs and DMFs) in SQL Server. We can get a plethora of information regarding server state and monitor the health of a server instance, diagnose problems and tune performance with those objects.

In this post I will demonstrate with hands-on demos the power of DMVs and show how they can help a DBA or developer to identify expensive queries, low-usage indexes, and the fragmentation levels of indexes. These are very common problems that anyone who deals with a SQL Server database must look out for and troubleshoot. There are two types of DMVs: server scoped, which require the VIEW SERVER STATE permission on the server, and database scoped, which require the VIEW DATABASE STATE permission on the database.

By querying these views with T-SQL statements, a lot of information is made available to the DBA. When you make a call to a DMV, you must use a minimum of a two-part naming convention.

DMVs can be a very reliable source of information about the performance of your system, but each time your SQL Server is restarted, the data in the views is reset.
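Because the counters reset on restart, it is worth checking how far back the numbers go. For example:

```sql
-- The DMV data covers the period since this timestamp
-- (note the two-part name: schema sys plus the view name).
SELECT sqlserver_start_time
FROM sys.dm_os_sys_info;
```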

I have installed SQL Server 2014 Enterprise edition on my machine, but you can use the SQL Server 2014/2012/2008 Express edition as well (or any other edition).

I am connecting to my local instance through Windows Authentication.

The first query I’m going to run is for identifying fragmentation levels of indexes within the database. 

The more fragmented an index is, the slower it performs. I am going to use the DMV dm_db_index_physical_stats because I’m looking at the statistics of how the indexes are physically laid out within SQL Server. The view returns the size and fragmentation details for the data and the indexes of a specified table or view. Fragmentation of indexes and tables can drastically affect performance of queries and maintenance, so this is a very good DMV to get familiar with.

Before I run my script let me explain a few things regarding Internal fragmentation.

Internal fragmentation occurs if there is unused space between records in a page. This fragmentation occurs through the process of data modifications (INSERT, UPDATE, and DELETE statements) that are made against the table and therefore, to the indexes defined on the table. This unused space causes poor cache utilization and more I/O, which ultimately leads to poor query performance.

I am connecting to a database that I am going to use for all of my demos and I open a new query window. Type (copy-paste) the following:

USE mydb
GO

EXEC sp_configure 'show advanced options', 1
GO
RECONFIGURE WITH OVERRIDE
GO
DECLARE @DefaultFillFactor INT
DECLARE @Fillfactor TABLE
 (
 Name VARCHAR(100)
 ,Minimum INT
 ,Maximum INT
 ,config_value INT
 ,run_value INT
 )
INSERT INTO @Fillfactor
 EXEC sp_configure 'fill factor (%)'
SELECT @DefaultFillFactor = CASE WHEN run_value = 0 THEN 100
 ELSE run_value
 END
FROM @Fillfactor 

SELECT DB_NAME() AS DataBaseName
 ,QUOTENAME(s.name) AS SchemaName
 ,QUOTENAME(o.name) AS TableName
 ,i.name AS IndexName
 ,stats.Index_type_desc AS IndexType
 ,stats.page_count AS [PageCount]
 ,CASE WHEN i.fill_factor > 0 THEN i.fill_factor
 ELSE @DefaultFillFactor
 END AS [Fill Factor]
 ,stats.avg_page_space_used_in_percent
 ,CASE WHEN stats.index_level = 0 THEN 'Leaf Level'
 ELSE 'Nonleaf Level'
 END AS IndexLevel
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED')
 AS stats
 ,sys.objects AS o
 ,sys.schemas AS s
 ,sys.indexes AS i
WHERE o.OBJECT_ID = stats.OBJECT_ID
 AND s.schema_id = o.schema_id
 AND i.OBJECT_ID = stats.OBJECT_ID
 AND i.index_id = stats.index_id
 AND stats.avg_page_space_used_in_percent <= 85
 AND stats.page_count >= 10
 AND stats.index_id > 0
ORDER BY stats.avg_page_space_used_in_percent ASC
 ,stats.page_count DESC

Execute the code above and see the results.

Have a look at the picture below to see what results I have got.

[Image: index-frag1]

Let me explain what the code above does.

At the beginning I am just turning on some advanced options and reconfiguring.

I am creating a temporary table variable to hold some information. I’m going to join the DMV with other objects, schemas, and indexes (system views) just to get more information.

In the Select list, I get information like the database name, the schema name, the table name, the index name, the index type, the page count, the fillfactor, the average page space used in percent and the index level.

Then I apply some filter conditions: avg_page_space_used_in_percent at most 85%, and an index size of at least 10 pages.

Then I order by the average page space used in percent (this is the average percentage use of pages and represents internal fragmentation; the higher the value, the better. If this value is lower than 80%, action should be taken), and then by page count descending, so the indexes with the most pages come up first. By all means do use this query for identifying fragmentation issues in your databases.
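Once the fragmented indexes are identified, a common rule of thumb (my addition, not part of the query above; index and table names are hypothetical) is to REORGANIZE lightly fragmented indexes and REBUILD heavily fragmented ones:

```sql
-- Light fragmentation: REORGANIZE is an online, low-impact operation.
ALTER INDEX IX_Product_pname ON dbo.Product REORGANIZE;

-- Heavy fragmentation: REBUILD recreates the index, optionally
-- resetting the fill factor at the same time.
ALTER INDEX IX_Product_pname ON dbo.Product
REBUILD WITH (FILLFACTOR = 90);
```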

The second query uses the dm_exec_query_stats DMV to identify our top 10 queries ranked by average CPU time. The output will show us the statements that are the most expensive in terms of CPU resources and overhead.

Knowing which are these queries, I can rewrite them in a way that will cause much less overhead.

In a new query window, type (copy-paste the following).


USE master
GO

SELECT TOP 10 query_stats.query_hash AS "Query Hash",
 SUM(query_stats.total_worker_time) / SUM(query_stats.execution_count) AS "Average CPU Time",
 MIN(query_stats.statement_text) AS "SQL Statement"
FROM
 (SELECT EQS.*,
 SUBSTRING(ST.text, (EQS.statement_start_offset/2) + 1,
 ((CASE statement_end_offset
 WHEN -1 THEN DATALENGTH(ST.text)
 ELSE EQS.statement_end_offset END
 - EQS.statement_start_offset)/2) + 1) AS statement_text
 FROM sys.dm_exec_query_stats AS EQS
 CROSS APPLY sys.dm_exec_sql_text(EQS.sql_handle) as ST) as query_stats
GROUP BY query_stats.query_hash
ORDER BY 2 DESC;
GO

Execute the code above against your database and see the results.

Have a look at the picture below to see my results.

[Image: query-analysis]

In the next demo I am going to use the dm_os_wait_stats DMV and do some calculations to show the ratio of time spent waiting for a CPU to free up and give us some processing power, versus waiting for another resource, such as memory or the disks, to free up.

In a new query window, type (copy-paste the following).


USE master
GO
Select signal_wait_time_ms=sum(signal_wait_time_ms)

 ,'%signal (cpu) waits' = cast(100.0 * sum(signal_wait_time_ms) / sum (wait_time_ms) as numeric(20,2))

 ,resource_wait_time_ms=sum(wait_time_ms - signal_wait_time_ms)

 ,'%resource waits'= cast(100.0 * sum(wait_time_ms - signal_wait_time_ms) / sum (wait_time_ms) as numeric(20,2))

From sys.dm_os_wait_stats

Execute the code above against your database and see the results.

Have a look at the picture below to see my results.

[Image: wait-stats]

On this server, about 15% of the wait time is spent waiting on processing power to become available, versus almost 85% spent waiting on resources, usually for the disks to catch up and deliver the information we need to load into memory, or even to find it in memory.

This indicates that I have a powerful enough server as far as CPU is concerned, but I have to look at how much memory is used, what my disk drives are doing, and how well my storage is optimized on this server.

In this query I am going to use the dm_db_index_usage_stats DMV. I want to look at indexes and see how often they are used. If I have indexes that have not been used since the server’s restart, and the server has been running for a long time, then I probably do not need those indexes.

Even if an index is not being used, every time the table is modified by inserting, updating, or deleting a record, that index is also updated. The more unused indexes you have, the more time you spend updating and maintaining them; and when it comes to defragmenting, you end up defragmenting indexes you don’t even use.


USE mydb
GO

SELECT o.name ,
 indexname = i.name ,
 i.index_id ,
 reads = user_seeks + user_scans + user_lookups ,
 writes = user_updates ,
 rows = ( SELECT SUM(p.rows)
 FROM sys.partitions p
 WHERE p.index_id = s.index_id
 AND s.object_id = p.object_id
 ) ,
 CASE WHEN s.user_updates < 1 THEN 100
 ELSE 1.00 * ( s.user_seeks + s.user_scans + s.user_lookups )
 / s.user_updates
 END AS reads_per_write ,
 'DROP INDEX ' + QUOTENAME(i.name) + ' ON ' + QUOTENAME(c.name) + '.'
 + QUOTENAME(OBJECT_NAME(s.object_id)) AS 'drop statement'
FROM sys.dm_db_index_usage_stats s
 INNER JOIN sys.indexes i ON i.index_id = s.index_id
 AND s.object_id = i.object_id
 INNER JOIN sys.objects o ON s.object_id = o.object_id
 INNER JOIN sys.schemas c ON o.schema_id = c.schema_id
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
 AND s.database_id = DB_ID()
 AND i.type_desc = 'nonclustered'
 AND i.is_primary_key = 0
 AND i.is_unique_constraint = 0
 AND ( SELECT SUM(p.rows)
 FROM sys.partitions p
 WHERE p.index_id = s.index_id
 AND s.object_id = p.object_id
 ) > 20000
ORDER BY reads

Execute the code above against your database and see the results.

Have a look at the picture below to see my results.

[Image: index-usage]

The query pulls information accumulated since the server restarted to show which indexes have not been used that often. I have placed a filter requiring at least 20,000 rows in a table for the results to have practical value. You can see that I look at reads, writes and their ratio to decide whether an index is worth keeping.

With DMVs we are monitoring performance with minimal overhead. There are many DMVs available to us at the moment, and more are added with each new version of SQL Server.

Hope it helps!!!

Looking into temporary tables in SQL Server December 4, 2013

Posted by fofo in Sql Server, Sql Server 2005, SQL Server 2008, SQL Server 2012.

I have been delivering a certified course in MS SQL Server 2012 recently and I was asked several questions about temporary tables: how to create them, how to manage them, when to use them, and what their limitations are.

In this post I will try to shed light on this particular issue with lots of hands-on demos.

Temporary tables and table variables make use of the system tempdb database.

I have installed SQL Server 2012 Enterprise edition on my machine, but you can use the SQL Server 2012/2008 Express edition as well.

I am connecting to my local instance through Windows Authentication, and in a new query window I type (you can copy-paste) the scripts below.

First I am going to create a new temporary table and populate it. Execute the script below.

USE tempdb
GO

IF OBJECT_ID('tempdb..#footballer') IS NOT NULL

DROP TABLE #footballer;

GO
CREATE TABLE #footballer
 (
 [FootballerID] INT IDENTITY NOT NULL PRIMARY KEY,
 [lastname] [varchar](15) NOT NULL,
 [firstname] [varchar](15) NOT NULL,
 [shirt_no] [tinyint] NOT NULL,
 [position_played] [varchar](30) NOT NULL,

);

GO

SET IDENTITY_INSERT [dbo].[#footballer] ON

GO

INSERT [#footballer] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (1,N'Oliver', N'Regina', 4, N'goalkeeper')
INSERT [#footballer] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (2,N'Alexander', N'Roy', 8, N'goalkeeper')
INSERT [#footballer] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (3,N'Mueller', N'Dewayne', 10, N'defender')
INSERT [#footballer] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (4,N'Buckley', N'Beth', 3, N'midfielder')
INSERT [#footballer] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (5,N'Koch', N'Jolene', 7, N'striker')
GO

SELECT * FROM #footballer

As you can see there is a # prefix in front of the table name. This table will be created in tempdb.

Finally I select everything from the temporary table.

Now, if I open a new query window and try to select everything from the #footballer table (see the query below):


USE tempdb
GO

SELECT * FROM #footballer

I will not get any results; instead I will receive an error: Invalid object name ‘#footballer’.

This is a local temporary table and it is in scope only in the current connection-session.

We can also create global temporary tables. In a new query window execute the following script.


USE tempdb
GO

IF OBJECT_ID('tempdb..##footballernew') IS NOT NULL

DROP TABLE ##footballernew;

GO
CREATE TABLE ##footballernew
(
[FootballerID] INT IDENTITY NOT NULL PRIMARY KEY,
[lastname] [varchar](15) NOT NULL,
[firstname] [varchar](15) NOT NULL,
[shirt_no] [tinyint] NOT NULL,
[position_played] [varchar](30) NOT NULL,

);

GO

SET IDENTITY_INSERT [dbo].[##footballernew] ON

GO

INSERT [##footballernew] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (1,N'Oliver', N'Regina', 4, N'goalkeeper')
INSERT [##footballernew] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (2,N'Alexander', N'Roy', 8, N'goalkeeper')
INSERT [##footballernew] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (3,N'Mueller', N'Dewayne', 10, N'defender')
INSERT [##footballernew] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (4,N'Buckley', N'Beth', 3, N'midfielder')
INSERT [##footballernew] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (5,N'Koch', N'Jolene', 7, N'striker')
GO

SELECT * FROM ##footballernew

We denote a global temporary table with a ## prefix: ##footballernew.

The global temporary table is deleted when all users referencing the table disconnect.

Both global and local temporary tables should be deleted in code rather than depending on automatic drop.

A temporary table created in a stored procedure is visible to other stored procedures executed from within the first procedure.
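A minimal sketch of that visibility rule (procedure and table names are hypothetical):

```sql
-- The inner procedure references #work even though it does not create it;
-- deferred name resolution binds it to the caller's #work at run time.
CREATE PROCEDURE dbo.InnerProc
AS
    SELECT id FROM #work;
GO

CREATE PROCEDURE dbo.OuterProc
AS
    CREATE TABLE #work (id INT);
    INSERT INTO #work VALUES (1);
    EXEC dbo.InnerProc;   -- sees #work and returns its row
GO

EXEC dbo.OuterProc;
```

If dbo.InnerProc is executed on its own, outside dbo.OuterProc, the reference to #work fails because no such table exists in that session.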

In a new query window type the following.


USE tempdb
GO

SELECT * FROM ##footballernew

In this case there will be no error. Global temporary tables persist across sessions-connections.

You can also add columns to temporary tables and alter the definition of existing columns.

In this script I add another column and then alter the definition of an existing column.


USE tempdb
GO

IF OBJECT_ID('tempdb..#footballer') IS NOT NULL

DROP TABLE #footballer;

GO
CREATE TABLE #footballer
 (
 [FootballerID] INT IDENTITY NOT NULL PRIMARY KEY,
 [lastname] [varchar](15) NOT NULL,
 [firstname] [varchar](15) NOT NULL,
 [shirt_no] [tinyint] NOT NULL,
 [position_played] [varchar](30) NOT NULL,

);

GO

SET IDENTITY_INSERT [dbo].[#footballer] ON

GO

INSERT [#footballer] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (1,N'Oliver', N'Regina', 4, N'goalkeeper')
INSERT [#footballer] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (2,N'Alexander', N'Roy', 8, N'goalkeeper')
INSERT [#footballer] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (3,N'Mueller', N'Dewayne', 10, N'defender')
INSERT [#footballer] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (4,N'Buckley', N'Beth', 3, N'midfielder')
INSERT [#footballer] ([FootballerID], [lastname], [firstname], [shirt_no], [position_played]) VALUES (5,N'Koch', N'Jolene', 7, N'striker')
GO

ALTER TABLE #footballer
ADD [is_retired] BIT NULL;
GO

ALTER TABLE #footballer
ALTER COLUMN [lastname] [nvarchar](50);
GO

You can use any data type for column definitions in a temporary table. You can also use user-defined data types.
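One caveat worth verifying on your own version (my note, not from the course material): an alias data type used in a temporary table must exist in tempdb, because that is where the table is actually created.

```sql
USE tempdb
GO

-- Create the alias type in tempdb so the temp table below can use it.
CREATE TYPE dbo.ShirtNo FROM tinyint NOT NULL;
GO

CREATE TABLE #squad
 (
 player NVARCHAR(50),
 shirt dbo.ShirtNo
 );
```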

You can also have constraints on temporary tables. If you execute the code below, it will work perfectly fine.


USE tempdb
GO

IF OBJECT_ID('tempdb..#Movies') IS NOT NULL

DROP TABLE #Movies;

GO

CREATE TABLE #Movies
 (
 MovieID INT PRIMARY KEY ,
 MovieName NVARCHAR(50) ,
 MovieRating TINYINT
 )
GO
ALTER TABLE #Movies
 WITH CHECK
 ADD CONSTRAINT CK_Movie_Rating
CHECK (MovieRating >= 1 AND MovieRating <= 5)

But you have to be careful when creating foreign keys: FOREIGN KEY constraints are not enforced on local or global temporary tables.
Execute the script below to see what I mean. The foreign key will not be created.

USE tempdb
go

CREATE TABLE #Persons
 (
 P_Id INT NOT NULL ,
 LastName VARCHAR(255) NOT NULL ,
 FirstName VARCHAR(255) ,
 Address VARCHAR(255) ,
 City VARCHAR(255) ,
 PRIMARY KEY ( P_Id )
 )

CREATE TABLE #Orders
(
O_Id int NOT NULL PRIMARY KEY,
OrderNo int NOT NULL,
P_Id int FOREIGN KEY REFERENCES #Persons(P_Id)
)

Please bear in mind that you can create temporary tables with clustered and non-clustered indexes on them.
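For example (a minimal sketch):

```sql
CREATE TABLE #idx_demo
 (
 id INT NOT NULL,
 city VARCHAR(255)
 );

-- Both clustered and non-clustered indexes work on temp tables.
CREATE CLUSTERED INDEX CIX_idx_demo_id ON #idx_demo (id);
CREATE NONCLUSTERED INDEX IX_idx_demo_city ON #idx_demo (city);
```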

Let’s investigate the behavior of temporary tables and IDENTITY columns.

If you execute the script below, it will fail. This is the same behavior as with regular tables: you cannot specify values for the IDENTITY column. If you choose to do so, you must set IDENTITY_INSERT ON.

USE tempdb
GO

IF OBJECT_ID('tempdb..#Persons') IS NOT NULL

DROP TABLE #Persons;

GO

CREATE TABLE #Persons
 (
 P_Id INT PRIMARY KEY CLUSTERED IDENTITY(1,1) ,
 LastName VARCHAR(255) NOT NULL ,
 FirstName VARCHAR(255) ,
 Address VARCHAR(255) ,
 City VARCHAR(255)
 )

--this will not work

INSERT #Persons(P_Id,LastName,FirstName,Address,City) VALUES (1,'Steven','Gerrard','123 liverpool street','liverpool')
SET IDENTITY_INSERT [#Persons] ON

GO

--this will work

INSERT #Persons(P_Id,LastName,FirstName,Address,City) VALUES (1,'Steven','Gerrard','123 liverpool street','liverpool')

Also note that transactions are honored in temporary tables. If I begin an explicit transaction and insert a row without committing, the row is inserted; but if a rollback is then issued, the whole operation is rolled back.

Execute the script below.


USE tempdb
GO

IF OBJECT_ID('tempdb..#Persons') IS NOT NULL

DROP TABLE #Persons;

GO

CREATE TABLE #Persons
 (
 P_Id INT PRIMARY KEY CLUSTERED IDENTITY(1,1) ,
 LastName VARCHAR(255) NOT NULL ,
 FirstName VARCHAR(255) ,
 Address VARCHAR(255) ,
 City VARCHAR(255)
 )

SET IDENTITY_INSERT [#Persons] ON

GO

--this will insert the value

BEGIN TRAN
INSERT #Persons(P_Id,LastName,FirstName,Address,City) VALUES (1,'Steven','Gerrard','123 liverpool street','liverpool')

GO

SELECT * FROM #Persons

--this will rollback the transaction

ROLLBACK TRAN

Hope it helps!!!

Looking into Temp database usage in SQL Server December 4, 2013

Posted by fofo in Sql Server, Sql Server 2005, SQL Server 2008, SQL Server 2012.

I have been delivering a certified course in MS SQL Server 2012 recently and I was asked several questions about Tempdb usage and temporary objects.

In this post I will try to shed light on this particular issue.

Temporary tables and table variables make use of the system tempdb database.

There is only one tempdb system database per SQL Server instance, so heavy usage of temporary objects can make it a point of contention.

When you create an object in a database, space needs to be allocated. This is also true for the tempdb database. There are three types of pages involved in the allocation process in the tempdb data file:

  • Page Free Space (PFS)
  • Shared Global Allocation Map (SGAM)
  • Global Allocation Map (GAM).

When there is heavy page-allocation contention in tempdb, the whole allocation process can suffer and we can experience PAGELATCH waits.

In order to address the issue above, you can use a number of tempdb data files equal to the number of cores. For example, if you have a system with fewer than 8 cores, e.g. 6, you should set up 6 data files for tempdb. If you have a system with more than 8 cores, you should start with 8 data files and then, if contention is still high, add 4 more.

By cores in this post I mean logical cores, not physical cores. For example, if you have 8 physical cores with hyper-threading enabled, the operating system sees 16 logical cores.
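You can check how many logical cores the instance actually sees with:

```sql
-- cpu_count is the number of logical CPUs available to SQL Server;
-- hyperthread_ratio is the ratio of logical to physical cores per socket.
SELECT cpu_count AS logical_cores,
       hyperthread_ratio
FROM sys.dm_os_sys_info;
```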

I will provide some demos in order to show you what tempdb contention might look like and which wait latches occur.

I have installed SQL Server 2012 Enterprise edition on my machine, but you can use the SQL Server 2012/2008 Express edition as well.

I am connecting to my local instance through Windows Authentication, and in a new query window I type (you can copy-paste) the scripts below.

In this snippet, I create a new database and then create a new table with some constraints.

CREATE DATABASE mytempdbcontention
GO

USE mytempdbcontention;
GO
 CREATE TABLE dbo.footballer
 (
 [FootballerID] INT IDENTITY NOT NULL PRIMARY KEY,
 [lastname] [varchar](15) NOT NULL,
 [firstname] [varchar](15) NOT NULL,
 [shirt_no] [tinyint] NOT NULL,
 [position_played] [varchar](30) NOT NULL,

);

GO

ALTER TABLE dbo.footballer
ADD CONSTRAINT CK_Footballer_Shirt_No
CHECK (shirt_no >= 1 AND shirt_no <= 11)

GO

ALTER TABLE dbo.footballer
ADD CONSTRAINT CK_Footballer_Position
CHECK (position_played IN ('goalkeeper','defender','midfielder','striker'))
GO

Now I need to populate the table with 50,000 rows. This is the script you need to execute in order to make this happen.

You can download it here. Rename the insert-footballer.doc to insert-footballer.sql and execute the script in a new query window.

Now I need a script that will create tempdb contention. This is the script that creates a temporary object, #footballer, populates it from the footballer table, and then selects from it. Finally it drops the temporary object.


USE mytempdbcontention;
GO

SET NOCOUNT ON;
GO

WHILE 1 = 1
 BEGIN

IF OBJECT_ID('tempdb..#footballer') IS NOT NULL

 DROP TABLE #footballer;

CREATE TABLE #footballer
 (
 [FootballerID] INT IDENTITY NOT NULL PRIMARY KEY,
 [lastname] [varchar](15) NOT NULL,
 [firstname] [varchar](15) NOT NULL,
 [shirt_no] [tinyint] NOT NULL,
 [position_played] [varchar](30) NOT NULL,

);
 INSERT INTO #footballer
 (lastname,
 firstname,
 shirt_no,
 position_played)
 SELECT TOP 4000
 lastname,
 firstname,
 shirt_no,
 position_played

 FROM dbo.footballer;

SELECT lastname
 FROM #footballer;

DROP TABLE #footballer;
 END
 GO

Now I am going to use a .cmd file to create contention in tempdb.

You can download it here. Rename the temp-sql.cmd.doc to temp-sql.cmd. Make sure you execute it (by double-clicking it).

This will create lots of contention in tempdb. We need to see exactly what this contention is and which wait latches have occurred.

Execute the script below.


USE tempdb
go
SELECT session_id, wait_duration_ms,wait_type, resource_description
from sys.dm_os_waiting_tasks
where wait_type like 'PAGE%LATCH_%'

and

resource_description like '2:%'

As you can see from the picture below, I have PAGEIOLATCH_SH wait types. This wait type occurs when a task is waiting on a latch for a buffer that is in an I/O request; the latch request is in Shared mode.

[Image: wait_type]

I have one tempdb data file, the default configuration.

I have 8 cores in this machine, so I will add 7 more tempdb data files of equal size (the MS recommendation).

Execute the script below.


USE [master]
GO

ALTER DATABASE [tempdb]
MODIFY FILE ( NAME = N'tempdev', SIZE = 500 MB ,
FILEGROWTH = 100 MB )
GO

ALTER DATABASE [tempdb]
ADD FILE ( NAME = N'tempdev2', FILENAME = N'd:\DATA\tempdb2.ndf' ,
SIZE = 500 MB , FILEGROWTH = 100 MB )
GO

ALTER DATABASE [tempdb]
ADD FILE ( NAME = N'tempdev3', FILENAME = N'd:\DATA\tempdb3.ndf' ,
SIZE = 500 MB , FILEGROWTH = 100 MB )
GO

ALTER DATABASE [tempdb]
ADD FILE ( NAME = N'tempdev4', FILENAME = N'd:\DATA\tempdb4.ndf' ,
SIZE = 500 MB , FILEGROWTH = 100 MB )
GO

ALTER DATABASE [tempdb]
ADD FILE ( NAME = N'tempdev5', FILENAME = N'd:\DATA\tempdb5.ndf' ,
SIZE = 500 MB , FILEGROWTH = 100 MB )
GO

ALTER DATABASE [tempdb]
ADD FILE ( NAME = N'tempdev6', FILENAME = N'd:\DATA\tempdb6.ndf' ,
SIZE = 500 MB , FILEGROWTH = 100 MB )
GO

ALTER DATABASE [tempdb]
ADD FILE ( NAME = N'tempdev7', FILENAME = N'd:\DATA\tempdb7.ndf' ,
SIZE = 500 MB , FILEGROWTH = 100 MB )
GO

Now run the query again and observe the results.


USE tempdb
go
SELECT session_id, wait_duration_ms,wait_type, resource_description
from sys.dm_os_waiting_tasks
where wait_type like 'PAGE%LATCH_%'

and

resource_description like '2:%'

You will see that there are no wait types, hence no tempdb contention.

Now stop the temp-sql.cmd file so the tempdb contention stops.

Hope it helps!!!

Ownership chain in SQL Server November 26, 2013

Posted by fofo in Sql Server, Sql Server 2005, SQL Server 2008, SQL Server 2012.

I have been delivering a certified course in MS SQL Server 2012 recently and I was asked a very interesting question about SQL Server stored procedures and ownership chain.

Security is a big thing in SQL Server and can be implemented in various levels.

Before we move on, we should define some basic terms that are very important when we talk about security in SQL Server.

A principal accesses data. It must be authenticated by SQL Server, which basically means the principal must “prove” that it is who it claims to be.

A principal is a user, a group, or an application granted permission to access securables. So what is a securable? A securable is an object to which access privileges are granted.

A privilege is an action that a principal is allowed to perform on a securable.

I want to show you an example where we deny a user access to the actual table data but grant it through a view or a stored procedure. UserA owns tableA and creates StoredProcA, which gets data from tableA. If UserB is granted permission to execute StoredProcA, then even though he does not have direct access to tableA, he can access the table indirectly. This is the concept of ownership chaining.

I have installed SQL Server 2012 Enterprise edition on my machine, but you can use the SQL Server 2012/2008 Express edition as well.

I am connecting to my local instance through Windows Authentication, and in a new query window I type (you can copy-paste) the script below.


USE master
GO

--Create a sample database

CREATE DATABASE mysampledb;
GO

-- create a login

CREATE LOGIN login1 WITH PASSWORD ='login1', CHECK_POLICY = OFF;

USE mysampledb;
go

--Create a User for that login
CREATE USER thelogin1 FOR LOGIN login1;

--see the user that you are now (dbo)
SELECT user
--create table and populate it with sample data

CREATE TABLE dbo.person

(id INT IDENTITY (1,1),
 firstname varchar(40),
 lastname VARCHAR(50)

);
go

INSERT dbo.person VALUES ('nick','kenteris'),('maria','negri'),('apostolis','kaperis'),('kostas','gekas');

CREATE PROCEDURE dbo.getpersons
AS SELECT id,firstname,lastname FROM dbo.person
go

--give permissions to execute the procedure to the thelogin1 user

GRANT EXECUTE ON dbo.getpersons TO thelogin1

--change the current user as thelogin1

EXECUTE AS USER = 'thelogin1'

--check if we are indeed thelogin1 user

SELECT USER

--fails
SELECT * FROM dbo.person

--works

EXECUTE dbo.getpersons

--get back to the dbo user

REVERT;

--see if the user is indeed dbo

SELECT USER

--we change the stored proc to use dynamic sql

ALTER PROCEDURE dbo.getpersons
AS
EXEC('SELECT id,firstname,lastname FROM dbo.person')
GO

--change the current user as thelogin1

EXECUTE AS USER = 'thelogin1'

--check to see if indeed the user is now thelogin1

SELECT USER

--this will fail

EXECUTE dbo.getpersons

--get back to the dbo user
REVERT;

--check to see if we are dbo user

SELECT user

--this will work

ALTER PROCEDURE dbo.getpersons
WITH EXECUTE AS OWNER
AS
EXEC('SELECT id,firstname,lastname FROM dbo.person')
GO

EXECUTE AS USER = 'thelogin1'

--check to see if we are dbo user

SELECT USER

--this will work

EXECUTE dbo.getpersons

Let me explain what I am doing in this bit of code:

  • I create a database, mysampledb.
  • I create a login, login1.
  • I create a user, thelogin1, for that login.
  • I create a new table, dbo.person, and populate it with sample data.
  • I create a stored procedure, dbo.getpersons.
  • I grant EXECUTE permission on the stored procedure to the user thelogin1.
  • I then execute the statements as user thelogin1.
  • If I try to access the table directly, I fail. That makes sense.
  • Executing the stored procedure as thelogin1 works: the unbroken ownership chain gives indirect access to the table.
  • I revert back to the dbo user.
  • I alter the stored procedure to use dynamic SQL in its body.
  • I change the user to thelogin1 again.
  • Executing the stored procedure now fails. The dynamic SQL statement causes a break in the ownership chain.
  • I revert back to the dbo user.
  • I alter the stored procedure once more, this time WITH EXECUTE AS OWNER.
  • I change the user to thelogin1.
  • Finally, I execute the stored procedure again and it works.

Hope it helps!!!