
Looking into Structured Logging with Serilog in an ASP.Net Core application April 16, 2017

Posted by fofo in ASP.Net Core, logging.

In this blog post I am going to look into logging, and more specifically into structured logging. There is no need to stress how important logging is in any .Net application.

I am going to talk about structured logging with Serilog and Seq, and I am going to create an ASP.Net Core application to demonstrate logging with Serilog.

Let me give you a definition of what structured logging is before we go any further.

Structured logging is a way of logging where, instead of writing just plain text to our log files, we write meaningful log data: properties that are relevant to our application together with values for those properties. This is a huge plus, because once we have structured data we can go on and write queries against it.

When logging to a text file, we log all sorts of information at different levels: warnings, errors, critical messages etc. In order to get meaningful information out of it we have to use external tools or regular expressions. In many cases the logged data is spread across many log files, so to get the data we want we need either to merge all those files or run the same regular expressions across all of them.

With structured logging we can write different, more intelligent queries. We can query and get data for a particular order, we can get error log messages where the order value is more than 5,000 USD, or we can get data regarding failed login attempts.

One of the most important logging libraries is the Serilog logging library. When building your application, you can install the Serilog NuGet package, and once you have installed it you can start writing simple log messages. These could be messages such as information messages or more serious error messages, and when you configure Serilog in your application you decide where you want your log output to be written.

We could write to rolling text files, which means a new text file will be created automatically each day, or we could write our log messages to a NoSQL data store like RavenDB or Seq. We can configure multiple destinations to be used by the same logger, so when we write a log message it will automatically be written to all the configured output stores.
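As a sketch, configuring one logger with multiple sinks could look like the following (assumes the Serilog, Serilog.Sinks.RollingFile and Serilog.Sinks.Seq NuGet packages; the file path is made up):

```csharp
using Serilog;

class Program
{
    static void Main()
    {
        // One logger, multiple sinks: every log event goes to all of them.
        Log.Logger = new LoggerConfiguration()
            .WriteTo.RollingFile("logs\\myapp-{Date}.txt") // a new file per day
            .WriteTo.Seq("http://localhost:5341")          // structured events to Seq
            .CreateLogger();

        Log.Information("Application started");
        Log.CloseAndFlush(); // make sure buffered events are written before exit
    }
}
```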

When we’re using Serilog there are a number of different places we can write our application log data to, such as the console, rolling files or single files. We can also write log data to MongoDB, RavenDB and Seq. I have mentioned Seq a few times, so let me explain what Seq is.

Seq runs as a Windows service and allows us to configure our applications, via Serilog, to log structured messages to it over HTTP or HTTPS. When our applications log structured messages to the Seq server, it stores the named properties/values so we can query them later. The Seq Windows service also provides a web interface, so we can connect to it via a browser. This browser application allows us to view log messages and write queries against them.

You can download Seq here.

Follow the instructions of the wizard to install the Seq Windows service. When you install it you should see something similar to the picture below. The default port Seq listens on for events is 5341, but in my case I chose port 5343.

 

Before we begin with our hands-on example I must dive into the mechanics of Serilog.

Besides plain information messages, we can log structured data with Serilog. We can do that by using named properties and values, e.g.

Log.Information("Added user {UserName}, Team {UserTeam}", name, team);

You can see that between the braces we have UserName and UserTeam; these are named properties in Serilog.

When we define named properties we need to give them values, so after we’ve defined the message template containing these named properties, we then pass the actual values for the name and the team.

When we embed structured data in our message here, if the underlying sink supports it, these named properties and their associated values will actually be stored in the underlying data store allowing them to be queried. If you want to see all the available sinks currently supported click here.

Serilog has a number of different concepts of data types, the first of which is what it defines as scalar data types. Serilog sees these scalar data types as things that can be reliably stored without needing to alter their representation in any way. The scalar data types include booleans, numbers, strings, dates and times, as well as a few other things.

In the Boolean category we’ll just have the bool type. In the numbers category we have the basic numbers.

In the string category we have the string type, and for dates and times we have the DateTime, DateTimeOffset, and TimeSpan types. For the others we have Guids and URIs, as well as nullable versions of these scalar types. When Serilog sees that we’re using one of these values for a named property, it will just store its basic representation without doing any further processing on the value.

As well as scalar values, Serilog will store collections and objects. If the type implements IEnumerable, it will iterate through the contents of the IEnumerable and store the values. Serilog will also store the contents of a dictionary for us, but with the caveat that the key of the dictionary must be one of the types that Serilog considers scalar, for example a string.
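A minimal sketch of both cases (assumes the Serilog and Serilog.Sinks.Console packages; the property names are made up):

```csharp
using System.Collections.Generic;
using Serilog;

class Demo
{
    static void Main()
    {
        Log.Logger = new LoggerConfiguration().WriteTo.Console().CreateLogger();

        // An IEnumerable is stored as a sequence of its elements.
        var teams = new[] { "Panathinaikos", "Barcelona" };
        Log.Information("Favourite teams: {Teams}", teams);

        // A dictionary is stored too, as long as its key is a scalar type (here, string).
        var points = new Dictionary<string, int> { ["Home"] = 3, ["Away"] = 1 };
        Log.Information("Points per venue: {Points}", points);

        Log.CloseAndFlush();
    }
}
```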

Now that we have a good understanding of structured logging and Serilog, we can create a simple ASP.Net Core application. Launch Visual Studio 2015 and create a default ASP.Net Core web application. Build and run the project.

The application out of the box should look something like this

 

I am going to install some packages now.

I will install Serilog, Serilog.Extensions.Logging and Serilog.Sinks.Seq.

I will use the Package Manager Console to do that. Have a look at the picture below.

 

We need to do some configurations in the Startup.cs file.

 

public class Startup
{
    public Startup(IHostingEnvironment env)
    {
        var builder = new ConfigurationBuilder()
            .SetBasePath(env.ContentRootPath)
            .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
            .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
            .AddEnvironmentVariables();

        if (env.IsDevelopment())
        {
            // This will push telemetry data through the Application Insights pipeline faster, allowing you to view results immediately.
            builder.AddApplicationInsightsSettings(developerMode: true);
        }
        Configuration = builder.Build();

        Log.Logger = new LoggerConfiguration()
            .MinimumLevel.Debug()
            .Enrich.FromLogContext()
            .WriteTo.Seq("http://localhost:5343")
            .CreateLogger();
    }

The Log.Logger configuration block at the end of the constructor is what I added to the Startup class.

I also need to add a line of code in the Configure method of the Startup.cs file.

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    loggerFactory.AddConsole(Configuration.GetSection("Logging"));
    loggerFactory.AddDebug();
    loggerFactory.AddSerilog();

    app.UseApplicationInsightsRequestTelemetry();

    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
        app.UseBrowserLink();
    }
    else
    {
        app.UseExceptionHandler("/Home/Error");
    }

    app.UseApplicationInsightsExceptionTelemetry();

    app.UseStaticFiles();

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}

I added the loggerFactory.AddSerilog(); line to add Serilog to the ILoggerFactory that is passed into the Configure() method.

Now if we build and run the application again, in the Seq browser window interface we will see something like the picture below.

All the events (all detailed events until the Home page is fully served) are logged with Serilog and we can see them in Seq. We can search through them very easily. Have a look at the picture below.

 

 

In order to use Serilog to log our custom events we simply need to start logging from our own controllers.

In my sample application so far I only have the Home controller. To start logging from my controller, I take a dependency on type ILogger&lt;HomeController&gt; through the constructor.

public class HomeController : Controller
{
    readonly ILogger<HomeController> _logger;

    public HomeController(ILogger<HomeController> logger)
    {
        _logger = logger;
    }

    public IActionResult Index()
    {
        _logger.LogInformation("Hello, world!");
        return View();
    }

    public IActionResult About()
    {
        ViewData["Message"] = "Your application description page.";

        return View();
    }

    public IActionResult Contact()
    {
        ViewData["Message"] = "Your contact page.";

        return View();
    }

    public IActionResult Error()
    {
        return View();
    }
}

 

I have added the logger field, the constructor and the LogInformation call. Build and run your application. Switch to the Seq browser window and search for “Hello”.

Have a look at the picture below.

 

We can pass an object as a property that is of type IEnumerable. Serilog will treat this property as a collection.

public IActionResult Index()
{
    var team = new[] { "Panathinaikos", "Barcelona", "Liverpool" };
    _logger.LogInformation("My fav team is {Team}", team);
    _logger.LogInformation("Hello, world!");
    return View();
}

I added the team array and the corresponding LogInformation call in the Index method of the Home controller above.

Build and run the project again.

When I switch to the Seq browser window I see something like the picture below.

 

Let me create a simple class object and log its values.

Firstly, create a folder “Models” in your project. Then add the following class in it.

public class ContactModel
{
    public string Name { get; set; }
    public string Surname { get; set; }
    public string Email { get; set; }
}

Then, inside the Index method of your HomeController.cs, add the following code.

public IActionResult Index()
{
    //var team = new[] { "Panathinaikos", "Barcelona", "Liverpool" };
    //_logger.LogInformation("My fav team is {Team}", team);
    //_logger.LogInformation("Hello, world!");

    var contact = new ContactModel();

    contact.Name = "Nick";
    contact.Surname = "kantzelis";
    contact.Email = "nikolaosk@admin.com";

    // The @ destructuring operator stores the whole object, so we pass the object itself.
    _logger.LogInformation("Contact is {@ContactModel}", contact);

    return View();
}

Build and run the project again.

When I switch to the Seq browser window I see something like the picture below.

 

There are different logging levels available to us in Serilog. We can use verbose events to log very low-level debugging and internal information about what our code’s doing. Then we have debug level of events, to log events related to the control logic, so which modules or branches of the application are being used, as well as diagnostic information that might help us diagnose faults, but not necessarily as low-level as the verbose events.

We can think of the information level as the ability to log events that are of some interest to external parties; these high-level events could be something like a user having just placed an order for a certain value.

Next, we have the warning level events. We can use this level to log information about potential problems that may be occurring in the application, so even if the system is not currently experiencing any actual errors, we can use this log level to give us advanced warning about things that could potentially break in the future.

Next, we have the error level events, which allow us to log information about unexpected failures in the system. If there has been an error in the system, but we are still able to continue running, we can use the error level. If, however, a failure has occurred that has actually stopped the system from being able to process any more information, we can use the fatal level. This level allows us to log information about critical system failures.
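The levels above map to the following Serilog calls (a minimal sketch; the messages and values are made up, and the console sink assumes the Serilog.Sinks.Console package):

```csharp
using Serilog;

class Levels
{
    static void Main()
    {
        Log.Logger = new LoggerConfiguration()
            .MinimumLevel.Verbose() // let every level through for the demo
            .WriteTo.Console()
            .CreateLogger();

        Log.Verbose("Entering order pipeline, raw payload size {Size}", 512);
        Log.Debug("Taking the discount branch for customer {CustomerId}", 42);
        Log.Information("Order {OrderId} placed for {Total} USD", 1001, 250.0m);
        Log.Warning("Order queue length {Length} is approaching its limit", 950);
        Log.Error("Payment provider rejected order {OrderId}", 1001);
        Log.Fatal("Lost connection to the order database, shutting down");

        Log.CloseAndFlush();
    }
}
```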

There are various ways you can format your messages, as we saw earlier in our examples. Formatting is another great strength of Serilog.
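For example, text-based sinks accept an output template that controls how each event is rendered, and standard .NET format specifiers work inside named properties (a sketch; assumes the Serilog.Sinks.Console package):

```csharp
using Serilog;

class Formatting
{
    static void Main()
    {
        // The output template controls how each event is rendered by the sink.
        Log.Logger = new LoggerConfiguration()
            .WriteTo.Console(
                outputTemplate: "{Timestamp:HH:mm:ss} [{Level}] {Message}{NewLine}{Exception}")
            .CreateLogger();

        // A .NET format specifier applied to a named property.
        Log.Information("Order total is {Total:0.00} USD", 1234.5m);

        Log.CloseAndFlush();
    }
}
```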

You can also log events selectively by using filters. For example, we could write something like this:

Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Debug()
    .Enrich.FromLogContext()
    .WriteTo.Seq("http://localhost:5343")
    // Matching lives in the Serilog.Filters namespace
    .Filter.ByExcluding(Matching.WithProperty<int>("Count", c => c > 30))
    .CreateLogger();

In this post we looked into structured logging, Serilog, sinks, Seq, data types supported by Serilog, formatting options of Serilog and filtering capabilities of Serilog.

Hope it helps!!!


Introduction to Entity Framework Core April 1, 2017

Posted by fofo in Entity framework.

In this post I am going to look into Entity Framework Core, present the new features, and cover the similarities and differences from EF6.

I hope that the people who read this post are familiar with what EF is. In a nutshell, EF is the official data access technology platform from Microsoft. It is a mature platform, since it is almost 9 years old: a set of .NET APIs for accessing data from any data store.

Going into more details Entity Framework fits into a category of data access technologies called Object Relational Mappers or ORMs.

ORMs reduce the friction between how data is structured in a relational database and how you define your classes. Without an ORM, we typically have to write a lot of code to transform database results into instances of the classes. An ORM allows us to express our queries using our classes, and then the ORM builds and executes the relevant SQL for us, as well as materializing objects from the data that came back from the database. By using an ORM we can really eliminate redundant data interaction tasks and enhance developer productivity. Instead of writing the relevant SQL to target whatever relational database you’re working with, Entity Framework uses the LINQ syntax that’s part of the .NET framework. LINQ to Entities allows developers to use a strongly-typed query language regardless of which database they’re targeting.

When you use EF Core in your application, you first need to create your domain classes. These are pure .Net classes and have nothing to do with EF Core. Then you use Entity Framework Core APIs to define a data model based on those domain classes, and to write and execute LINQ to Entities queries against those classes. When you need to save data back to the database you use the SaveChanges method. EF Core keeps track of the state of the objects it is aware of and determines the SQL it needs to save the changes. Entity Framework transforms your LINQ to Entities queries into SQL, executes that SQL, and then creates objects from the query results.
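That workflow can be sketched as follows (a minimal, hypothetical model; the class names and connection string are made up, and the Microsoft.EntityFrameworkCore.SqlServer package is assumed):

```csharp
using System.Linq;
using Microsoft.EntityFrameworkCore;

// A plain domain class - nothing EF-specific on it.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The EF Core data model is defined via a DbContext and DbSets.
public class ShopContext : DbContext
{
    public DbSet<Product> Products { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder options)
        => options.UseSqlServer(@"Server=(localdb)\mssqllocaldb;Database=ShopDemo;Trusted_Connection=True;");
}

class Program
{
    static void Main()
    {
        using (var db = new ShopContext())
        {
            db.Products.Add(new Product { Name = "Keyboard" });
            db.SaveChanges(); // EF Core generates and executes the INSERT

            // LINQ to Entities is translated to SQL and materialized back into objects.
            var first = db.Products.OrderBy(p => p.Name).FirstOrDefault();
        }
    }
}
```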

As you may know, Entity Framework development was moved to CodePlex and became open source. It has since been moved to GitHub; the URL is github.com/aspnet/EntityFramework.

You can download and experiment with the latest version, you can clone the repository, add your own fixes/features, and then submit those as pull requests to become part of Entity Framework Core. All pull requests that are submitted from the community are examined from the ADO.Net team before becoming part of EF Core.

Another thing I wanted to mention is that different people call EF with different names. Let me be clear on that so that there is no confusion.

EF Core was released in late June 2016. During development it was called Entity Framework 7, but in January 2016 its name was changed to Entity Framework Core. At the same time, ASP.NET 5 was renamed to ASP.NET Core.

EF Core is not simply an update from EF6, it’s a different kind of Entity Framework, it is a complete rewrite.

Developers who have invested time and effort in EF6 and have projects on EF6 should not worry, as it will continue to be actively supported.

Entity Framework Core can run on .NET Core. .NET Core can run on the CoreCLR, and the CoreCLR runs natively not only on Windows but also on Mac and Linux.

EF Core can also run inside the full .NET Framework that is any version that is 4.5.1 or newer.

Entity Framework Core is a brand new set of APIs and it doesn’t have all of the features that you might be used to with Entity Framework so it’s important to understand that before starting a  new project with Entity Framework Core.

If you want to target cross-platform or UWP apps then you have no choice but to use Entity Framework Core. For .NET apps that you want to run on Windows, you can still use Entity Framework 6. If you are building an ASP.NET Core application that will run on Windows, you can also still use Entity Framework 6, so bear that in mind as well.

There are Entity Framework features that will never be part of Entity Framework Core; for example, EF Core has no support for a designer-based model: there is no EDMX and no Entity Framework Designer support.

Having said that, in EF Core you can still define a data model with classes and a DbContext. The DbContext API is still there, and so are DbSets.

You can also create and migrate to the database as your model changes, and you can still query with LINQ to Entities. Entity Framework continues to track changes to entities in memory.

There are some new features in EF Core that were not available in earlier versions. In EF Core we have the ability to do batch inserts, updates, and deletes.

We can specify unique foreign keys in entities and LINQ queries have become smarter and more efficient.

In EF Core there is an In Memory provider that makes it really easy to build automated tests using Entity Framework without hitting the database.
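A sketch of using the In Memory provider (assumes the Microsoft.EntityFrameworkCore.InMemory package and a DbContext with a constructor accepting DbContextOptions; the model here is hypothetical):

```csharp
using Microsoft.EntityFrameworkCore;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ShopContext : DbContext
{
    public ShopContext(DbContextOptions<ShopContext> options) : base(options) { }
    public DbSet<Product> Products { get; set; }
}

class Program
{
    static void Main()
    {
        // No real database involved - everything lives in memory, ideal for tests.
        var options = new DbContextOptionsBuilder<ShopContext>()
            .UseInMemoryDatabase(databaseName: "ShopTests")
            .Options;

        using (var db = new ShopContext(options))
        {
            db.Products.Add(new Product { Name = "Mouse" });
            db.SaveChanges();
        }
    }
}
```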

EF Core makes it really easy to use inversion of control patterns and dependency injection. EF Core has the ability to populate backing fields, not just properties, and it supports mapping to IEnumerables.

Hope you have a better understanding of EF Core now.

Hope it helps!!!

 

DDL Triggers are AFTER triggers December 24, 2016

Posted by fofo in Sql Server, Sql Server 2005, SQL Server 2008, SQL Server 2012, SQL Server 2014, SQL Server 2016.

In this post I would like to look into DDL triggers and explain their functionality.

Let me start with a short introduction to triggers.

All SQL Server developers use triggers: a trigger is basically a mechanism that is invoked when a particular action occurs on a particular table.

Triggers consist of

  • A name
  • An action
  • The execution

The maximum size of a trigger name is 128 characters.
The action of a trigger can be either a DML statement (INSERT, UPDATE, or DELETE) or a DDL statement (CREATE, ALTER, DROP)
Therefore, there are two trigger forms: DML triggers and DDL triggers.

 

The AFTER and INSTEAD OF options are two additional options that you can define for a trigger.

  • AFTER triggers fire after the triggering action occurs.
  • INSTEAD OF triggers are executed instead of the corresponding triggering action.
  • AFTER triggers can be created only on tables, while INSTEAD OF triggers can be created on both tables and views

DDL triggers were introduced in SQL Server 2005. DDL triggers are not INSTEAD OF triggers.

They are implemented as AFTER triggers, which means the operation occurs and is then caught in the trigger.

The operation can then optionally be rolled back, if you put a ROLLBACK statement in the trigger body.
This means they’re not quite as lightweight as you might think. Imagine doing the following:

ALTER TABLE MyTable ADD newcolumn VARCHAR (30) DEFAULT 'default value';

If we have a DDL trigger defined for ALTER_TABLE events, or for DDL_TABLE_EVENTS, the T-SQL batch above will fire the trigger, and every row in the table will first be expanded to include the new column (as it has a non-null default) before the operation is rolled back by your trigger body.

Type (copy paste) the following T-SQL statements in a new query window in SSMS.



CREATE DATABASE sampleDB;

GO


USE sampleDB;

GO

CREATE TABLE Product
 (
 Pid INT PRIMARY KEY
 IDENTITY ,
 pname NVARCHAR(50) ,
 price DECIMAL(18, 4)
 );

CREATE TRIGGER Not_Alter_Tables ON DATABASE
    FOR ALTER_TABLE
AS
    PRINT 'ALTER statements are not permitted on the tables of this DB';
    SELECT EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]',
        'nvarchar(max)');
    RAISERROR ('Tables cannot be altered in this database.', 16, 1);
    ROLLBACK;
GO

ALTER TABLE dbo.Product ADD newcolumn VARCHAR (30) DEFAULT 'default value'; 

Let me explain what I do in the code above.

I create a sample database and a sample empty table. I then create a trigger “Not_Alter_Tables” on this database. This trigger applies to all the tables in the database.

This trigger is a DDL trigger, hence an AFTER trigger. In the trigger I capture the T-SQL statement that invoked the trigger, I print a statement, raise an error and then rollback the operation.

Then I attempt to add a new column in the table.

In this example, when the trigger is invoked, the table will already have been expanded to include the new column (as it has a non-null default), and then the operation is rolled back in the trigger body.

Have a look at the picture below.


 

In a nutshell, DDL triggers are AFTER triggers and can be quite expensive. In the example above, the best approach is not to use a DDL trigger at all, but to use explicit permissions, e.g. REVOKE/DENY, to prevent users from altering the schema of the table.

Hope it helps!!!

Drop if Exists T-SQL Enhancement statement in SQL Server 2016 December 23, 2016

Posted by fofo in Sql Server, SQL Server 2016.

This is the fourth post in a series of posts about the new features of SQL Server 2016. You can find the first post in the SQL Server 2016 (JSON support in SQL Server 2016) series here.

A post on Row Level Security in SQL Server 2016 can be found here.

A post on Temporal Tables in SQL Server 2016 can be found here.

SQL Server 2016 was officially released back in June 2016.

I have installed SQL Server 2016 Enterprise edition in my machine but you can use the SQL Server 2016 Developer edition or Express edition as well.

You can download the SQL Server 2016 Developer edition by joining the Visual Studio Dev Essentials which is free and simple. After joining you will be able to download SQL Server 2016 developer edition in your machine.

If we need to delete a database object, e.g. a table or a trigger, but we don’t know whether it exists, we are used to writing code like “if the table exists, drop it” or “if the trigger exists, drop it”.

There is new “IF EXISTS” syntax in SQL Server 2016 that makes our life easier when it comes to dropping objects.

You can use it to drop any database object you want.

Let’s move on and demonstrate the new enhancement with a hands-on demo.

Type (copy paste) the following query in a new query window and execute it.


CREATE DATABASE sampleDB;

GO


USE sampleDB;

GO

CREATE TABLE Product
 (
 Pid INT PRIMARY KEY
 IDENTITY ,
 pname NVARCHAR(50) ,
 price DECIMAL(18, 4)
 );


SELECT *
FROM sys.tables;

IF OBJECT_ID('dbo.Product', 'U') IS NOT NULL
 DROP TABLE dbo.Product;

SELECT *
FROM sys.tables;


CREATE TABLE Product
 (
 Pid INT PRIMARY KEY
 IDENTITY ,
 pname NVARCHAR(50) ,
 price DECIMAL(18, 4)
 );

SELECT *
FROM sys.tables;


DROP TABLE IF EXISTS dbo.Product;

SELECT * FROM sys.tables;

Let me explain what I am doing in the code above.

I create a sample database and then create a table in it. Then I query the sys.tables and my table appears in the results since it exists.

Then I use the old syntax to check if the table exists and drop it.

Then I query again the sys.tables and there are no rows returned since the table is dropped.

Then I create the sample table again. Again I query the sys.tables and my table appears in the results since it has been recreated.

Then I use the new syntax to drop it “DROP TABLE IF EXISTS dbo.Product;”

Then I query again the sys.tables and there are no rows returned since the table is dropped.

This is a handy new T-SQL enhancement in SQL Server 2016.

Hope it helps!!!

Row Level Security in SQL Server 2016 December 10, 2016

Posted by fofo in Sql Server, SQL Server 2016.

This is the third post in a series of posts that highlight the new features of SQL Server 2016. You can find the first two posts in the series (JSON support in SQL Server 2016, temporal tables) here & here.

SQL Server 2016 was officially released back in June 2016.

There are many new features in SQL Server 2016, and in my opinion the most important are:

  • Always Encrypted
  • Query Store
  • Row Level Security
  • Temporal Tables
  • Stretch Database
  • Dynamic Data Masking
  • PolyBase
  • JSON Support

I am going to discuss the “Row-Level Security” in SQL Server 2016 in this post.

I have installed SQL Server 2016 Enterprise edition in my machine but you can use the SQL Server 2016 Developer edition as well.

You can download the SQL Server 2016 Developer edition by joining the Visual Studio Dev Essentials which is free and simple. After joining you will be able to download SQL Server 2016 developer edition in your machine.

In many cases in our applications we have a multi-tenant database with a shared schema.

In modern applications there are certain situations where we need to restrict the access to data to our users.

We have salespeople that need to access only their own data (sales, reports, targets that apply only to them) in a web sales application.

We need students to only see their data (grades, stats) in an online web university application.

In all the scenarios above we need to prevent users/tenants from accessing data that does not belong to them.

The solution to the problems above was to write custom application code or to create SQL views.

The problem with SQL views is that if we need to protect 3-4 tables, we will spread the business logic all over the place. This kind of implementation soon becomes difficult to maintain, and the views are prone to run-time errors.

We can also use custom code to implement access restrictions in the application.

This solution is not the best either. If you need to use the same access logic for a database, but in different applications, you may need to do the same work multiple times.

It’s also difficult to maintain, especially as the code base grows. But the most important reason is that in some situations the custom code is actually impossible to use, especially if you do not own the code.

Row-Level Security in SQL Server 2016 was implemented to help us solve these kinds of problems in a much more efficient way. Now we have a mechanism to control who reads and writes data in our database tables, based on criteria that we define.

We can define a plethora of criteria. Row-level security is transparent to the application code: we don’t need to make any changes in the code, and all our data access logic is stored within the database.

Row-Level Security, once implemented, applies a security predicate to all queries executed on a table. The predicate dictates which rows a specific user has access to.

Let us start with our hands-on example.

We want project managers (supposedly we have a Projects table) to be able to look only into the details of the projects they oversee and are responsible for.

 

Type (copy paste) the following query in a new query window and execute it.

USE master;
GO

CREATE DATABASE RowLevelSecurity
GO
 
USE RowLevelSecurity
GO

CREATE TABLE dbo.Projects
 (
 ProjectID INT PRIMARY KEY IDENTITY,
 ProjectName NVARCHAR(50) ,
 ProjectManager NVARCHAR(50) ,
 budget DECIMAL(18,4),
 DateDue DATETIME

 );

INSERT dbo.Projects
VALUES ('NopCommerce', 'George', 12512, '12/12/2016'),
 ('Custom Asp.Net MVC', 'George', 24512, '11/11/2017'),
 ('Social Media Marketing', 'Mary', 3562, '03/03/2017'),
 ('Google Adwords Remarketing', 'Mary', 4512, '04/04/2017'),
 ('Google Adwords Search', 'Mary', 9218, '04/06/2017'),
 ('SQL Server Reporting Services', 'Tom', 8765, '04/03/2017'),
 ('SQL Server Upgrade to 2016', 'Tom', 8512, '04/08/2017')
; 


create user George without login;
create user Mary without login;
create user Tom without login;


GRANT SELECT ON dbo.Projects TO George
GRANT SELECT ON dbo.Projects TO Mary
GRANT SELECT ON dbo.Projects TO Tom
 

SELECT * FROM dbo.Projects

In the code above, I create a sample database.

Then I create a Projects table and then insert some sample data. Then I create users without logins and grant them select rights on the table Projects. Finally I query the table to see the results.

Type (copy paste) the following query in a new query window and execute it.


CREATE FUNCTION dbo.fn_securityProjects (@FilterColumnName AS sysname)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS fn_securitypredicate_result
WHERE @FilterColumnName = USER_NAME();


CREATE SECURITY POLICY FilterProjects
ADD FILTER PREDICATE dbo.fn_securityProjects(ProjectManager)
ON dbo.Projects
WITH (STATE = ON); 

In the code above I create an inline function and a security policy. My function will be used to filter out rows based on the database user.

The function code above accepts a parameter named @FilterColumnName, and then uses this parameter to filter out the rows based on the database user, using the user_name() function.

This function will be used to filter rows from my Projects table by associating this function with a security policy.

In the security policy I added a FILTER PREDICATE that references my dbo.fn_securityProjects function. By creating this security policy, the SQL Server engine makes sure that every time a database user runs a SQL command that references the dbo.Projects table, the filter predicate function is also executed.

Now let’s try out our Row-Level Security functionality.

Type (copy paste) the following query in a new query window and execute it.

execute as user = 'George';
select * from dbo.Projects; 
revert;
 
execute as user = 'Mary';
select * from dbo.Projects; 
revert;
 
execute as user = 'Tom';
select * from dbo.Projects; 
revert;
 

Have a look at the picture below


As you can see, by looking at this output, the SELECT statement that was run when the database user was set to George returned 2 rows and each row had the value ‘George’ in the ProjectManager column. The user Mary returned 3 rows and Tom only had 2 rows associated with his user name.

This is a great new addition to SQL Server 2016 and helps us have fine-grained control over our data.

Hope it helps!!!