
Set up Python for SQL Server


Summary

In SQL Server 2016, Microsoft extended the SQL Server engine with the ability to execute external code written in the R language. SQL Server 2017 added the Python language to the mix and 2019 integrated Java. The new functionality is part of SQL Server’s Machine Learning Services.
This blog explains how to set up SQL Server 2019 so we can use ML services and
the Python programming language.

Prepare the environment for Python

The first step is to add Machine Learning Service Extensions / Python to our SQL Server environment.

Figure 1, ML services

The next step is to allow SQL Server to execute external scripts by turning on the external scripts enabled configuration option at the server level.

sp_configure 
     @configname = 'show advanced options'
    ,@configvalue = 1
GO
RECONFIGURE WITH OVERRIDE;
GO
 
sp_configure 
     @configname = 'external scripts enabled'
    ,@configvalue = 1            
GO
RECONFIGURE WITH OVERRIDE;
GO

At this stage, we should be ready to go.
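To verify that the option is now active (a quick sanity check; value_in_use should be 1), we can query sys.configurations:

SELECT [name]
      ,[value]
      ,value_in_use
FROM sys.configurations
WHERE [name] = N'external scripts enabled';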

Let’s explore how the SQL Server engine relates to the above-mentioned ML (Machine Learning) services. The two components are “bound” by the Extensibility framework.

Launchpad

The Launchpad service, one of several services in SQL Server, is responsible for managing and executing external scripts – in this case, scripts written in Python. It instantiates an external script session as a separate, satellite process.


Figure 2, Extensibility framework

The purpose of the extensibility framework is to provide an interface between the SQL Server engine and Python (and other languages). Figure 2 shows how different components interact and demonstrates security concepts.
From the super high-level point of view, the process goes like this:

  1. Our application calls the system stored procedure sys.sp_execute_external_script. The proc is designed to encapsulate code written in one of the external languages, Python in this case. The stored proc executes within the SQL Server engine – Sqlservr.exe. The engine knows that the code it needs to run contains a program written in Python and therefore starts the Launchpad.exe process. Sqlservr.exe uses the Named Pipes protocol (often used for communication between processes on the same machine) to pass the code to be executed.
  2. The Launchpad service then loads the DLL specific to the language that needs to be run. In this case, it’s PythonLauncher.dll.
  3. PythonLauncher.dll runs Python.exe/dll, which compiles the program to bytecode and executes it.
  4. Python.exe then talks to yet another program, BxlServer.exe. Bxl stands for Binary Exchange Language. This program coordinates with the Python runtime to manage exchanges of data and storage of the working results.
  5. sqlsatellite.dll communicates with the SQL Server engine over the TCP/IP protocol and the ODBC interface. It retrieves input data sets, sends back result sets and Python’s standard I/O streams – stdout and stderr.
  6. BxlServer.exe communicates with Python.exe, passing messages back and forth from SQL Server through sqlsatellite.dll.
  7. SQL Server gets the result set, closes the related tasks and processes, and sends the results back to the client, e.g. SSMS.
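
Before moving on, here is a minimal call that exercises the whole chain described above – the engine, Launchpad, the Python launcher and the Python runtime. The script simply prints a message, which comes back through the stdout stream and appears in the SSMS Messages tab:

EXEC sys.sp_execute_external_script
     @language = N'Python'
    ,@script   = N'
import sys
print("Hello from Python", sys.version)
';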

Security

Security architecture for the extensibility framework can be observed in Figure 2.

  • The client uses Windows/SQL login to establish a connection with SQL Server and execute Python code encapsulated in sys.sp_execute_external_script stored proc.
  • Launchpad service runs as a separate process under its own account – NT Service\MSSQLLaunchpad$<instanceName>. The Windows account has all the necessary permissions to run external scripts and is a member of the SQLRUserGroup2019 Windows group.
    How to interact with the file system (import/export CSV, XLSX, etc.) or access resources outside the server environment is explained later in this blog.
  • When Launchpad invokes the appropriate runtime environment – in our case, PythonLauncher.dll – it initiates an AppContainer and assigns the process to it. By default, the Launchpad service creates 21 containers, AppContainer00 through AppContainer20. The number of containers can be changed through the SQL Server Configuration Manager/SQL Server Launchpad/Advanced/Security Contexts Count property.
    AppContainer is an internal mechanism that provides an isolated execution environment for the applications assigned to it. An application running in an AppContainer can only access resources specifically granted to it. This approach prevents the application from influencing, or being influenced by, other application processes.
  • Launchpad maps the user (used to make the connection) to an AppContainer that now defines credentials unique to the user/application pair. This also means that the application, in this case launchpad.exe, cannot impersonate db users and gain more access than it should have.

So, in a nutshell: a client connects to SQL Server using an SQL/Windows login mapped to a db user. The user runs sys.sp_execute_external_script with the Python code. SQL Server runs launchpad.exe, which runs under its own Windows account. Launchpad then initiates one of the AppContainers.

Firewall rules – how to access data from the Internet

As mentioned before, there are 21 separate application containers – AppContainer00 through AppContainer20. These objects restrict access for the applications assigned to them, in this case launchpad.exe.

Figure 3, Windows Firewall outbound rules related to the AppContainers

If we disable these rules, our Python code, encapsulated in sys.sp_execute_external_script, will be able to communicate with the “outside world” and e.g. get up-to-date FX (Foreign Exchange) rates, the Bitcoin price in different currencies, etc. – see the forex-python module. We can then, for example, describe the result and send it back to the client/top-level procedure. It is also possible to “join” the data to a T-SQL query result, within the same query batch, and then send it back to the caller – amazing stuff 🙂

EXEC sys.sp_execute_external_script
        @language = N'Python' --specify language to use
        ,@script   = N'
from pandas_datareader import data

start_date = "2021-11-20"
end_date = "2021-12-01"

UsToAus = data.DataReader("DEXUSAL", "fred",start=start_date, end=end_date).reset_index()

print(UsToAus)
'       
       ,@output_data_1_name =N'UsToAus'
WITH RESULT SETS  (([Date] DATE,[US to AUS]  FLOAT) )


In the example above, the code connects to FRED (Federal Reserve Bank of St. Louis, Missouri, USA) and pulls the exchange rates between the USD and  AUD for a given timeframe. 

File system accessibility – how to import/export result

Another cool thing that you can do with Python is easily export results to disk, e.g. save a query result (report) in the form of an Excel file to a location where it can be picked up (pulled) by authorised users. It is also possible to read e.g. a lookup table – extended customer information, states, employee information, etc. – from disk and join it with the T-SQL query result set, again within the same sp call.
To enable interaction with the file system we need to allow “SQL Server’s” Python to access specific folders. By default, Python can only access its working directory and its subdirectories, in this case c:\Program Files\Microsoft SQL Server\MSSQL15.SQL2019\PYTHON_SERVICES\. As mentioned before, the Launchpad.exe service runs under its own account. It starts the python.exe runtime environment in a process, again, under the same account. Finally, it instantiates a new AppContainer object to contain its activities. This container now acts as a Windows app with very limited access. Windows apps can access only those resources (files, folders, registry keys, and DCOM interfaces) to which they have been explicitly granted access (“ALL APPLICATION PACKAGES”).

Figure 4, ALL APPLICATION PACKAGES access rights
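
For illustration, below is a minimal sketch of such an export. It assumes a folder, e.g. C:\SqlExports, that has already been granted the ALL APPLICATION PACKAGES permissions shown in Figure 4, and a hypothetical dbo.Shippers table:

EXEC sys.sp_execute_external_script
     @language = N'Python'
    ,@script   = N'
# InputDataSet holds the result of the @input_data_1 query as a pandas data frame
# the target folder is an assumed path, granted to ALL APPLICATION PACKAGES beforehand
InputDataSet.to_csv("C:\\SqlExports\\shippers.csv", index=False)
'
    ,@input_data_1 = N'SELECT ShipperId, ShipperName FROM dbo.Shippers'; --hypothetical table/columns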

Now, we can use Pivoting with Python script from another blog to export results to disk.

Figure 5, export results to disk

Memory allocations

It is important to understand how ML Services fits into the SQL Server memory allocation scheme to be able to assess the possible impact it can have on database server performance.

As we can see from Figure 2,  Python code runs outside of the processes managed by Sql Server. This means that it doesn’t use memory allocated to Sql Server.

The query below shows that, by default, the maximum memory that ML services can use, outside the memory allocated to SQL Server, is 20%, e.g. with 256 GB total RAM and 200 GB allocated to SQL Server, ML can use up to 56 * 0.2 = ~11 GB.

SELECT * 
FROM sys.resource_governor_external_resource_pools
WHERE [name] = N'default'


Figure 6, External resource pools

We can increase the max_memory_percent to e.g 50% by altering  the “default” external resource pool

ALTER EXTERNAL RESOURCE POOL "default" 
   WITH (max_memory_percent = 50);
GO
ALTER RESOURCE GOVERNOR 
RECONFIGURE;
GO

File locations and configurations

  • launchpad.exe: ..\MSSQL15.SQL2019\MSSQL\Binn\Launchpad.exe
  • pythonlauncher.log: ..\MSSQL15.SQL2019\MSSQL\Log\ExtensibilityLog\pythonlauncher.log
  • Config file (Python): ..\MSSQL15.SQL2019\MSSQL\Binn\pythonlauncher.config
    Contains the Python home directory, encoding standard, environment path, working directory (this is where Python stores intermediate results during calculations), etc.

Python version

SQL Server 2019 supports Python 3.7.1. The package comes with the Anaconda 4.6.8 distribution. The language runtimes and various packages e.g  pandas, are installed under the root directory:

c:\Program Files\Microsoft SQL Server\MSSQL15.SQL2019\PYTHON_SERVICES

To check Python version use command prompt:

c:\Program Files\Microsoft SQL Server\MSSQL15.SQL2019\PYTHON_SERVICES>python --version

To check Anaconda distribution version:

c:\Program Files\Microsoft SQL Server\MSSQL15.SQL2019\PYTHON_SERVICES\Scripts>conda.exe --version

But wait, there is more 🙂

With SQL Server 2019 CU3+ (latest CU) we can use the Language Extension feature to “add” different versions of Python to the mix. In fact, we can add any language extension.
For example, I use Python – version 3.9.7 that is installed as a part of the Anaconda 4.10.3 distribution, for work, not related to Sql Server platform. Wouldn’t it  be cool if it was possible to use that installation in SQL Server?

Note:  There are two packages developed by Microsoft, specifically for out-of-the-box Python installation: revoscalepy and microsoftml. These packages are not available in the distribution mentioned above . 

A language extension is a set of DLLs written in C++ that acts as a bridge between SQL Server and the external runtime. This technology allows us to use relational data (query result sets) in the external code. In a nutshell, the current Python language extension is open-source code available here. However, the code currently supports only versions 3.7.x. So, to be able to use Python 3.9 as the external code, we need to build our own “bridge”.

Here is an excellent article on how to do it. It looks complex , but once you start you’ll not be able to stop – it’s just so much fun.

Figure 7, Python 3.9 Language extension for SQL Server

In the next blog, I’ll present various dev tools we can use to construct our Python code in Sql Server. 🙂

Thanks for reading.

Dean Mincic

PIVOT, Multi Pivot & Dynamic Pivot in SQL Server


Summary

Pivoting is a technique used to rotate (transpose) rows to columns. It turns the unique values from one column in one table or table expression into multiple columns in another table. SQL Server 2005 introduced the PIVOT operator as a syntax extension for table expressions in the FROM clause. The PIVOT relational operator is a T-SQL proprietary operator and is not part of the ANSI SQL Standard.

PIVOT operator structure

Rotating(Pivoting) one table or table expression into another  table requires three different elements

  1. Grouping element
  2. Aggregating element
  3. Spreading element

The PIVOT operator accepts only Aggregating and Spreading elements. To avoid possible logical errors we must have a clear understanding of all three parameters, especially the Grouping element.

The following example demonstrates the three elements in action.

Let’s say we want to present the sum of freight(Shipping cost) values per order year for each country that ordered our products.
Set up dbo.Orders_TestPivot table. The script can be found here.

The PIVOT queries below transpose columns from a table expression (ShipCountry, Freight, and OrderYear) into a new table.
The queries are logically identical although they use different types of table expressions. The first uses a derived table and the second uses a Common Table Expression (CTE).
More on table expressions can be found here.
My personal preference is the CTE version, so I’ll use that in the following examples. 🙂

Derived table version:
SELECT PVT.Shipcountry
      ,PVT.[2018]
      ,PVT.[2019]
      ,PVT.[2020]  
FROM (
      SELECT ord.Freight
            ,OrderYear
            ,ord.Shipcountry
      FROM dbo.Orders_testPivot ord    
    ) AS tabExpr 
PIVOT ( 
        SUM(Freight)  
        FOR OrderYear 
        IN ([2018],[2019],[2020]) 
        ) AS PVT

Common Table Expression (CTE) version:

;WITH tabExpr AS 
(
    SELECT ord.Shipcountry
          ,ord.Freight
          ,OrderYear
    FROM dbo.Orders_testPivot ord    
)
    SELECT  PVT.Shipcountry
           ,PVT.[2018]
           ,PVT.[2019]
           ,PVT.[2020]   
    FROM tabExpr
    PIVOT ( 
           SUM(Freight)  
           FOR OrderYear 
           IN ([2018],[2019],[2020]) 
          ) AS PVT;

The figure below visually maps the elements of the PIVOT operator and the final result set.

Figure 1, PIVOT Operation

My personal way of thinking when creating a PIVOT query is;

  1. Sketch the final result set and visualise all three elements required for the PIVOT operation
  2. Define a table expression(CTE) that returns:
    1. Spreading element – what we want to see on columns – OrderYear
    2. Aggregate element – what we want to see in the intersection of each row and column – Freight
    3. Grouping element* – what we want to see on rows – ShipCountry
  3. Add  PIVOT operator. The pivot operator returns a table result – in our example, the table result has alias PVT.
    1. Include aggregate function applied to the aggregate element – SUM(Freight).
    2. Include the FOR clause and the spreading column – FOR OrderYear.
    3. Specify the IN clause and the list of distinct, comma-separated values that appear in the spreading element: [2018],[2019],[2020]. In our example, we have a list of irregular identifiers* that need to be delimited.
      If we added a non-existing value to the IN list, e.g. [2099], the query would execute with no error but with NULL aggregated values 🙂
    4. Specify an alias for the PIVOT result table – PVT
  4. Specify the final SELECT. The columns are selected from the PIVOT result table. The sequence of the selected columns is not relevant.

Note: Irregular identifiers:
We use identifiers to name (identify) SQL Server’s objects, i.e. stored procedures, tables, views, constraints, column names, attributes, etc. There is a set of rules for creating identifiers, e.g. the first character cannot be numeric, so 2018 is an irregular identifier. To be able to use irregular identifiers we need to “fix” their boundaries/limits, or delimit them. To do that we can use double quotation marks – "2018" – or the T-SQL specific square brackets – [2018]. More about SQL Server identifiers can be found here.

An interesting thing about the PIVOT operator is that it does not include a grouping element. The grouping element is “everything else” that is not a spreading or an aggregating element. In our example, the grouping element is the ShipCountry column selected in the table expression.
If we selected e.g ShipCity along with ShipCountry as the two columns that are not a spreading or an aggregate element, the result would be different.

;WITH tabExpr AS 
(
    SELECT ord.Shipcountry --grouping element 
          ,ord.Freight   
          ,OrderYear
          ,ord.Shipcity  --grouping element
    FROM dbo.Orders_testPivot ord    
)
    SELECT  PVT.Shipcountry
           ,PVT.[2018]
           ,PVT.[2019]
           ,PVT.[2020]   
    FROM tabExpr
    PIVOT ( 
            SUM(Freight)  
            FOR OrderYear 
            IN ([2018],[2019],[2020]) 
        ) AS PVT;


Figure 2, Group By ShipCountry and ShipCity

This behavior can cause logical errors, especially if we apply the PIVOT operator directly on a table.

In the next experiment, we are not using a table expression to prepare the data set for the PIVOT operator. Instead, PIVOT now operates over the entire table. It implicitly (automatically) groups data by all columns except the OrderYear and Freight columns. As we can see in Figure 3, the query produces an unexpected result.

--the skewed result
SELECT PVT.Shipcountry
      ,PVT.[2018]
      ,PVT.[2019]
      ,PVT.[2020]  
FROM dbo.Orders_testPivot
PIVOT ( 
       SUM(Freight)  
       FOR OrderYear 
        IN ([2018],[2019],[2020]) 
      ) AS PVT;


Figure 3, PIVOT operation directly on a table

To avoid possible logical errors, it is always a good practice to first construct a table expression with the implicitly defined PIVOT elements(grouping, spreading, and aggregating), and then apply the PIVOT operator to the prepared data set.

Horizontal Aggregates

Horizontal aggregates are aggregates across different columns per group.

It would be nice to add a total freight per country across the spreading element – the order years. Here is the base table definition.

There are several ways to achieve this, e.g. we can simply check for NULL values and add the column values – total_freight = ISNULL(PVT.[2018], 0) + ISNULL(…

However, it would be cool to implement it by using Table constructor to create a virtual correlated query that summarizes freight data for all order years, per row – uh, that was a mouthful 🙂 

Here is what we want to achieve:

We can extend the original pivot CTE from above with the total_freight column.

;WITH tabExpr AS 
(
  SELECT ord.Shipcountry,
         ord.Freight,
         ord.OrderYear
  FROM dbo.Orders_testPivot ord
)
SELECT PVT.Shipcountry,
       PVT.[2018],
       PVT.[2019],
       PVT.[2020],
       total_freight = ( SELECT SUM(tab.freight_per_year)
                         FROM (VALUES (PVT.[2018]),
                                      (PVT.[2019]),
                                      (PVT.[2020])
                               ) AS tab(freight_per_year) )
FROM tabExpr
PIVOT(
      SUM(Freight)
      FOR OrderYear IN ([2018], [2019], [2020])
     ) AS PVT;

Multi aggregate pivot

A PIVOT operator can handle only one aggregate element at a time.  This means that if we want to use more aggregate elements we need to add more PIVOT operators to our query – a PIVOT operator per aggregate element 😐
In the previous example, our aggregate element was Freight when we calculated the total shipping costs in different countries per year.
This time, we want to calculate the average value of the orders placed in different countries per year and add the results to our query.
Figure 4 shows the desired result

Figure 4, Multi aggregate PIVOT- two aggregate elements

From the result, we can see that the second result set is just “appended” to the first. Basically, we just combined the two PIVOT results using an INNER JOIN table operator and an equality predicate on the ShipCountry column.
The final query uses column aliases to indicate the different data sets.
Figure 6, Multi aggregate PIVOT operation

The query in Figure 6 can be found here.
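
As a rough sketch (not necessarily the exact query behind Figure 6), the two PIVOT results can be combined along these lines – one PIVOT sums Freight, the other averages OrderValue, and the two results are joined on ShipCountry:

SELECT  f.Shipcountry
       ,[Freight 2018]  = f.[2018]
       ,[Freight 2019]  = f.[2019]
       ,[Freight 2020]  = f.[2020]
       ,[AvgValue 2018] = v.[2018]
       ,[AvgValue 2019] = v.[2019]
       ,[AvgValue 2020] = v.[2020]
FROM (
        SELECT Shipcountry, [2018],[2019],[2020]
        FROM ( SELECT ord.Shipcountry, ord.Freight, ord.OrderYear
               FROM dbo.Orders_testPivot ord ) AS t
        PIVOT ( SUM(Freight) FOR OrderYear IN ([2018],[2019],[2020]) ) AS pvt
     ) AS f
INNER JOIN (
        SELECT Shipcountry, [2018],[2019],[2020]
        FROM ( SELECT ord.Shipcountry, ord.OrderValue, ord.OrderYear
               FROM dbo.Orders_testPivot ord ) AS t
        PIVOT ( AVG(OrderValue) FOR OrderYear IN ([2018],[2019],[2020]) ) AS pvt
     ) AS v
   ON v.Shipcountry = f.Shipcountry;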

Dynamic PIVOT

A disadvantage of the PIVOT operator is that its IN clause only accepts a static list of spreading values. It does not support e.g a sub-query as input. This means that we need to know in advance all the distinct values in the spreading element. The “hard coding” may not necessarily be a problem in cases when we deal with a spreading element with known spreading values e.g OrderYear.
Going back to the first example, we can easily expand the IN list with the spreading values that are not available yet.

/* A future order */
INSERT INTO dbo.Orders_testPivot (Custid,Orderdate,Shipperid,OrderValue,Freight,Shipname,Shipaddress,Shipcity,Shipregion,Shipcountry)
    VALUES (51,'20210201',2,$220,$350,N'Future shipment :)',N'10178 106 St',N'Edmonton',N'Alberta',N'Canada');
GO

;WITH tabExpr AS 
(
    SELECT ord.Shipcountry
          ,ord.Freight   
          ,OrderYear
    FROM dbo.Orders_testPivot ord    
)
    SELECT  PVT.Shipcountry
           ,PVT.[2018]
           ,PVT.[2019]
           ,PVT.[2020]
           ,PVT.[2021]
           ,PVT.[2022]
    FROM tabExpr
    PIVOT ( 
            SUM(Freight)  
            FOR OrderYear 
            IN ([2018],[2019],[2020],[2021],[2022]) 
        ) AS PVT;

Things get more complex when we cannot predict all possible spreading values. In these situations, we can first design a query that will give us a distinct list of spreading values and then use that list to dynamically construct the final PIVOT query, the Dynamic Pivot.
A typical scenario in which we use Dynamic pivoting is when transposing attributes of an EAV*(Entity-Attribute-Value) data model.

EAV* is one of the open-schema data models (XML, JSON …) that, in some cases, can provide more flexibility than the relational model. Here is an interesting post about EAV.

Let’s say we have a list of Products. Each product is different and can have a specific set of attributes. e.g a bicycle can have a specific type of tires and a hard drive can have a specific capacity..etc. Business frequently adds new products and product attributes. In the next example, I used a simplified EAV model to store the products. The table script can be found here.

Our next task is to return a row for each distinct product, a column for each distinct product attribute, and in the intersection of each product and attribute, we want to see the value of the attribute.

Figure 7 shows the desired output for all products and for a specific product
Figure 7, Dynamic pivot result

In this scenario, we cannot know all the possible Attributes(the spreading element values). Moreover, the list of attributes is constantly changing, so hard-coding the IN list is no longer an option.
The following is a  dynamic pivot query that can give us the result in Figure 7.

DECLARE @sprdElements AS NVARCHAR(MAX) --comma separated, delimited, distinct list of product attributes
        ,@tSql AS NVARCHAR(MAX)        --query text
        ,@ObjectName VARCHAR(255);     --specific product name

SET @ObjectName = NULL -- 'BMC Road Bike' --specific product

--comma separated list of attributes for a product
;WITH dsitSpreadElList AS
(
    SELECT DISTINCT Attribute
    FROM Products
    WHERE ObjectName = @ObjectName
        OR @ObjectName IS NULL
)
SELECT @sprdElements = COALESCE(@sprdElements+', ','')+'['+ CAST( Attribute AS NVARCHAR(255))+']'
--SELECT @sprdElements = STRING_AGG('['+Attribute+']',',') --Available in SQL2017+
FROM dsitSpreadElList;

--print @sprdElements

SET @tSql =N';WITH TabExp AS
             (
                SELECT ObjectName -- grouping element
                      ,Attribute  -- spreading element
                      ,[Value]    -- aggregating element
                FROM dbo.Products
                WHERE ObjectName = @ObjName
                  OR @ObjName IS NULL
	     )
             SELECT ObjectName,'+@sprdElements +N'
             FROM TabExp
             PIVOT (
                    MAX([Value])
                    FOR Attribute IN (' + @sprdElements +N') 
                    ) AS pvt';

 EXEC sys.sp_executesql
     @stmt = @tSql
    ,@params = N'@ObjName VARCHAR(255)'
    ,@ObjName = @ObjectName;

NOTE: To extract a known Attribute value, in this case, we can use MAX() or MIN() aggregate functions. Both functions will operate on a single value and will return a single value. Keep in mind that MIN and MAX as well as all other aggregate functions except COUNT(*), ignore NULL values.

The new attributes will be automatically handled by the dynamic query.

INSERT INTO dbo.Products
   VALUES ('BMC Road Bike',
           'Gearing',
           CAST(CAST('Triple chain-ring 50/39/30' AS VARCHAR(255)
                    ) AS SQL_VARIANT
               )
   );

A couple of versions of the dynamic query can be downloaded here.

Conclusion

Pivoting is a technique used to transpose rows to columns. PIVOT is a T-SQL proprietary operator and is not part of the ANSI Standard. The PIVOT operator accepts two parameters: the spreading element, or what we want to see on columns, and the aggregating element, or what we want to see in the intersection of each distinct row and column. The grouping element is the third parameter involved in a pivot operation. It is what we want to see on rows. The grouping element is not a formal part of the PIVOT operator and represents all columns that are not defined as spreading or aggregating elements. The implicit nature of the grouping element can lead to logical errors. This is why it is recommended to construct a table expression for the PIVOT operator that provides exactly the three elements needed for the operation.
A PIVOT operator is limited to only one aggregate function. To perform multi aggregate pivot we need to introduce a PIVOT operator per aggregation.
The IN clause of the PIVOT operator accepts only a hard-coded, comma-separated list of spreading element values. In situations when the values are not known, we use dynamic SQL to construct the query.

 

Thanks for reading.

Dean Mincic

Mutex in Sql Server



Summary

RDBMS systems are multi-user systems that serve many clients at the same time. Being able to process large numbers of requests is so important that we often trade data consistency to improve concurrency. However, there are situations when access to a particular segment of code needs to be serialized. This is similar to the critical section in concurrent programming, where concurrent access to shared resources can lead to unexpected behavior. To protect the code in e.g. C# we use lock/monitor, mutex, or semaphore, and in SQL Server we use dummy lock tables, isolation levels/lock hints, or application locks. This post presents four different ways of protecting critical code in SQL Server.

What is Mutex?

Mutex stands for mutually exclusive. It is a construct used to serialize access to shared resources. A mutex is a locking mechanism that prevents race conditions by allowing only one process/thread at a time to access the protected code (critical section).

Thread safe code – c# example

The next example shows a simple use of the mutex class to serialize access to a “critical section”.
The c# console application code can be found here.

The program performs the division of two random numbers. The operation that follows sets operands, num1, and num2 values to 0. This is done 5 times in a For loop.

for (int i = 0; i < 5; i++)
{
   num1 = rndNum.Next(1, 5); //min value 1, max value 5
   num2 = rndNum.Next(1, 5);

   result = (num1 / num2);

   num2 = 0;
   num1 = 0;                 
}

The critical section is executed concurrently by multiple threads*, causing a DivideByZeroException. Thread1 (t1) and Thread2 (t2) started executing the code at almost the same time. (t1) performed the division and assigned the value 0 to the num2 variable. At the same time, (t2) was in the middle of the division when (t1) set the divisor (num2) value to 0. The new condition caused the divide-by-zero exception.

*Note: The runtime environment starts execution of the program with the Main () method in one thread and then creates three more threads using System.Threading.Thread class.

To avoid this situation we need to serialize access to the code above. One way to do that is to use the Mutex class to provide exclusive access to the critical section. (un-comment mutex objects in the code)

... 
public static Mutex m = new Mutex();

m.WaitOne(); //thread waits until its safe to enter
 for (int i = 0; i < 5; i++)
 {
    num1 = rndNum.Next(1, 5); //min value 1, max value 5
    num2 = rndNum.Next(1, 5);

    result = (num1 / num2);

    num2 = 0;
    num1 = 0;                 
 }

//releases ownership of the mutex and unblocks other threads that are 
//trying to gain ownership of the mutex
m.ReleaseMutex();

Now, threads (t1) and (t2) execute the code one at a time without causing the exception.
The code is now “thread safe”. 🙂

Mutex in SQL Server

There are several ways to serialize access to a critical section in SQL Server. Although some approaches are more proper than others, it’s good to understand them all because sometimes a specific situation can limit our options.

Set up test environment

The code used in the experiments can be downloaded by following the links below.

  • Test table – The main test table is used to simulate the effects of the concurrent inserts. (download here)
  • Dummy table – table used to present one of the mutex implementation techniques. (download here)
  • ITVF – an inline function used to track table locks requested by the concurrent connections.
    (download here)

There are many scenarios that can be used to demonstrate the effects of concurrent query execution on the critical section i.e Lost updates, double inserts etc. In this post I’ll focus on the concurrent inserts only.

The base query for the following experiments. (also available here)

WAITFOR TIME '14:13:40';

    SET XACT_ABORT ON;
 
    DECLARE @spId VARCHAR(1000) = '54,64';

    DECLARE @ShipperId INTEGER
           ,@IdentifierValue VARCHAR(250);
     
    SET @ShipperId = 50009;
    SET @IdentifierValue = 4; -- add a new IdentifierValue
 
    BEGIN TRANSACTION SerializeCode

        SELECT * FROM dbo.itvfCheckLocks(@spId) --get metadata
        IF NOT EXISTS ( 
                        SELECT 1
                        FROM   dbo.ShipperIdentifier
                        WHERE  ShipperId = @ShipperId
                            AND IdentifierValue = @IdentifierValue
                      )
        BEGIN
            SELECT * FROM dbo.itvfCheckLocks(@spId) --get metadata
            WAITFOR DELAY '00:00:00.10'; --wait 100ms
            INSERT INTO dbo.ShipperIdentifier (
                                        ShipperId
                                       ,IdentifierValue)
                    VALUES (@ShipperId, @IdentifierValue)
 
            SELECT * FROM dbo.itvfCheckLocks(@spId)  --get metadata
        END

    COMMIT TRANSACTION;

Essentially, the query logic encapsulates a read query and a write query, with the latter being executed only if the former returns an empty set.

In this scenario, more than one connection is trying to insert a unique combination of ShipperId and IdentifierValue into the table. Only one “unique” insert is allowed and that is enforced by the Unique constraint on the two columns.

I’ll be executing the query from the context of the two different SSMS sessions. To simulate concurrent code execution, before each experiment, we define the exact time when the code will run i.e. WAITFOR TIME ’14:13:40′. We also need to capture the two SIDs (session IDs) for which we want to collect metadata i.e. DECLARE @spId VARCHAR(1000) = ‘54,64’;

Insert race condition

An insert race condition is a situation where multiple instances of the same code execute a conditional insert at the same time. The condition for the insert can evaluate to true for all concurrent calls causing multiple inserts of the same values. This can lead to logical errors – duplicate rows or violation of Unique constraint/Primary key etc.

So, lets execute the base query code as is and demonstrate the Insert race condition.

Figure 1, Constraint violation caused by the insert race condition

Table hints and isolation levels

The first* method to serialize access to a critical section is to use combinations of table hints and/or isolation levels. This will permit only one code execution at a time.

*NOTE: The only reason why I put this method as the first solution is because, for me personally, it was the most interesting approach to research. However, in production environment, depending on the situation, I would probably first try to implement Application locks explained in the following section.

The previous, unsuccessful attempt to insert a new row follows the sequence presented in Figure 2 below. The list is compiled using dbo.itvfCheckLocks outputs.


Figure 2, Locking pattern – key violation error

One of the first things that comes to mind is to elevate the transaction isolation level.
If we used the REPEATABLE READ transaction isolation level, the outcome would be exactly the same. The isolation level would keep S locks, if acquired during the first read query, until the end of the transaction. That way repeatable read prevents inconsistent analysis, aka the non-repeatable read anomaly. However, in this case, there won’t be any S locks acquired and held because the requested row (ShipperId=50009, IdentifierValue=4) does not exist.

The next isolation level is SERIALIZABLE. For this test, I’ll use the (HOLDLOCK) table hint. The hint acts as the SERIALIZABLE transaction isolation level, only the scope of the isolation is reduced to the table level. However, as opposed to e.g. NOLOCK, HOLDLOCK “holds” its locks (sticks to its guns 🙂 ) until the end of the transaction.

The complete code can be found here.

... 
SELECT 1
FROM  dbo.ShipperIdentifier WITH(HOLDLOCK)
WHERE  ShipperId = @ShipperId
...

“Reset” the dbo.ShipperIdentifier table, run the test query again, and observe the results.

Figure 3, Deadlock situation and SERIALIZABLE isolation level

This time Session 54 successfully completed the insert and Session 64 was chosen to be a deadlock victim. Figure 4 shows the deadlock diagram.

Figure 4 – the deadlock diagram – serializable isolation level

NOTE: I’ve used MS Profiler to get the graphical plan. Use the Deadlock graph, Lock:Deadlock and Lock:Deadlock events. Once you get the event, right-click on the Deadlock graph row/ Extract event data to save the diagram for further analysis

If we correlate information from Figure 4 and dbo.itvfCheckLocks we can conclude the following;

Similar to the situation presented in Figure 2, both sessions used the NCI (non-clustered index) to access the requested information. During the read phase, both sessions acquired IS locks on the index page where IdentifierValue = 4 (and the subsequent key, IdentifierValue = 5) is supposed to be. The sessions also acquired RangeS-S locks on the NCI key range, IdentifierValue = 5. The locks are compatible and both sessions evaluated the conditional expression to TRUE.
During the “write” phase – the INSERT query – both sessions acquired X locks on the two new rows to be inserted in the clustered index – Session (64) on a new Id=10001 and Session (54) on Id=10002.
Now, in order to acquire an X lock on the new row(s) to be inserted in the NCI, the existing RangeS-S locks must first be converted to RangeI-N locks. This is the point where the deadlock happens – see Figure 4. Because RangeI-N and RangeS-S are not compatible, Sessions 64 and 54 wait on each other to release their RangeS-S locks. After a certain period of time, in this case, the SQL Server engine decided to “kill” Session (64) and let (54) successfully finish.

The idea is to acquire non-compatible locks during the read phase and to keep the locks until the end of the transaction – see lock compatibility matrix. We can use the UPDLOCK table hint to force the lock manager to use U locks instead of S locks, in our case RangeS-U instead of RangeS-S.
Change the test query code and reset the test environment. Find the new code here.

... 
SELECT 1
FROM  dbo.ShipperIdentifier WITH(UPDLOCK,HOLDLOCK)
WHERE  ShipperId = @ShipperId
...

Figure 5, UPDLOCK and HOLDLOCK

If we run the concurrent code again, we’ll see that one of the sessions acquired a RangeS-U lock on the non-clustered index key (ShipperId=50009, ShipperIdentifier=5, and Id=3). Both sessions acquired IU locks on the NCI page which “hosts” the above key (IU locks are compatible). The other session now must WAIT until the first session releases the RangeS-U lock before it enters the conditional branching and performs the read query.
The first session releases the RangeS-U lock at the end of the transaction. At this point, the new row (ShipperIdentifier = 4) has already been inserted in the table (NCI and CI). The blocked session can now continue and acquire its own RangeS-U and IU locks. This time the read query can find the requested row. The conditional expression evaluates to FALSE and skips the INSERT query.

We managed to serialize access to the critical code by acquiring non-compatible locks at the beginning of the process and holding the locks until the end of the code segment.

Application locks

Another way to prevent concurrent access to a critical section in SQL Server is to use Application locks. This “special” type of lock is designed to serialize access to a critical section purely from the code perspective – very much like the mutex in C# demonstrated earlier.

Application locks are a mechanism that allows an application to acquire an app-lock on a critical section within a transaction or a connection(session). The locks do not affect tables/pages/rows but purely the code they encompass.

The available application lock modes are: S (Shared), IS (Intent Shared), U (Update), X (Exclusive), and IX (Intent Exclusive). The rules follow the standard lock compatibility matrix. More on application locks can be found here.

Application locks are implemented through system-stored procedures:

  • sys.sp_getapplock – used to acquire locks
    • @Resource: specifies the case-sensitive name of the application lock.
    • @LockMode: specifies the lock mode – S, IS, U, X, IX
    • @LockOwner: specifies the scope of the lock – Transaction or Session
    • @LockTimeout: specifies the timeout in milliseconds. The stored proc will return an error if it cannot acquire the lock in this interval
    • @DbPrincipal: specifies the security context. The caller must be a member of one of the following security objects
        • database_principal
        • dbo – special database user-principal
        • db_owner – fixed db role
        • (DEFAULT – public db role)
  • sys.sp_releaseapplock – used to release locks
    • @Resource: specifies the case-sensitive name of the application lock to release
    • @LockOwner: specifies the scope of the lock – Transaction or Session
    • @DbPrincipal: specifies the security context
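
Below is a minimal sketch of the pattern (not the full test query linked below), showing how the critical section can be wrapped in an exclusive, transaction-scoped application lock; the resource name is arbitrary:

BEGIN TRANSACTION SerializeCode;

    DECLARE @lockResult INT;

    --try to acquire an exclusive app lock on an arbitrary resource name
    EXEC @lockResult = sys.sp_getapplock
         @Resource    = 'CriticalSection_ShipperIdentifier' --hypothetical resource name
        ,@LockMode    = 'Exclusive'
        ,@LockOwner   = 'Transaction'
        ,@LockTimeout = 1500; --wait up to 1.5s

    IF @lockResult >= 0 --0 = granted, 1 = granted after waiting for other locks
    BEGIN
        /* ... the conditional SELECT / INSERT critical section goes here ... */

        --optional for transaction-owned locks; they are released at COMMIT/ROLLBACK anyway
        EXEC sys.sp_releaseapplock
             @Resource  = 'CriticalSection_ShipperIdentifier'
            ,@LockOwner = 'Transaction';
    END

COMMIT TRANSACTION;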

In the next example, we use a new test query that implements application locks. Reset the test environment and concurrently execute two instances of the new test query.

My concurrent sessions were 54 and 56. Even though they executed at the same time, Session 54 acquired the app lock first, making the second session (Sid=56) wait until 54 released the app lock resource. The allowed wait time (@LockTimeout) is set to 1.5s.
Below is the output of the query execution.

Figure 6, Session 54 – Application locks
Figure 7, Session 56 – Application locks

As we can see, the application lock has serialized access to the critical section within an explicit transaction. Application lock did not affect the “standard” data locking routine defined by the transaction’s isolation level and the query itself. The lock used a non-compatible mode (@LockMode = ‘Exclusive’) which prevented concurrent access.
If we used one of the compatible lock modes – ‘Shared‘, ‘IntentShared‘, or ‘IntentExclusive‘ – the test would fail, causing a Violation of UNIQUE KEY constraint UC_ShippierId_IdentifierValue … similar to the one presented in Figure 1.

My personal opinion is that this is the cleanest way to serialize access to a critical section.

The next two methods are more workarounds than proper solutions.

Dummy lock tables

This method includes a dummy table, a table that is not part of the database schema(at least not logically). Its sole purpose is to be exclusively locked by one of the concurrent sessions allowing only one session to access the subsequent code/queries at a time.

To execute the test, reset the test table and use the dummy table SQL script from here.

... 
    BEGIN TRANSACTION SerializeCode
        SELECT LockMe = COUNT(*) FROM dbo.DummyLock WITH(TABLOCKX); --exclusive table lock
...

In my experiment, I had two sessions, Sid=66 and, Sid=65. The former had exclusively locked the dummy table before Sid=65 requested the lock. This pattern ensured that only one session could execute the protected code at a time.
Similar to Application locks, the dummy table routine does not restrict access to the objects (tables, views.. etc.) within a critical section, through different access paths. i.e. Session(88) attempts to update a row in dbo.ShipperIdentifier table during the above action. The concurrent update will follow standard Transaction isolation level rules regardless of the status of the dummy table.
Figure 8, Dummy table pattern

Figure 9, Dummy table – blocked session

Tables and Loops

The last method encapsulates a critical section in an infinite loop.  A conditional branching within the loop checks for the existence of a dummy table (or a global aka double hash, temporary table). If the table does not exist, the current session will be able to access the “protected code” and subsequently drop the table and exit the loop. However, if the table already exists, the concurrent session(s) will keep looping, constantly checking if the table still exists. Once the table gets dropped by the current session(the only session that can access the DROP TABLE code), a concurrent session will be able to create a table and access the critical section.

As mentioned before, this method is more of a workaround than a proper solution and can introduce a number of performance issues i.e. excessive drop/create table actions, increased CPU workload, etc.

....
WHILE(1 = 1)
BEGIN 
    IF OBJECT_ID('dbo.LockCodeSection','U') IS NULL
    BEGIN
        BEGIN TRY 
            CREATE TABLE dbo.LockCodeSection(LockMe BIT)
 
            BEGIN TRANSACTION SerializeCode
 ....

The complete script can be found here. Reset the test environment and run the script in two separate SSMS sessions at the same time.

Figure 10, Tables and Loops 🙂

From the output, we can see that Session Sid=66 was the first one to create the dummy table and access the critical section. At the same time, Session Sid=65 was constantly trying to enter the code segment by checking the existence of the dummy table. It made 8486 attempts to access the critical section. Finally, it accessed the code in a serial manner without causing any constraint violation.

Conclusion

Sometimes access to particular segments of code needs to be serialized between concurrent client connections. A protected segment of code is also known as a critical section. In concurrent programming, we use objects/constructs like mutex, semaphore, or locks in order to serialize threads’ access to shared resources, making the code thread-safe. In SQL programming, critical sections/queries are of the declarative type, usually describing what we want to achieve but not how. Therefore, serializing SQL code, i.e. one or more queries encapsulated in an explicit transaction, comes down to ensuring that only one session/connection can access the same code through the same object, i.e. the same stored procedure, at the same time. However, the same protected code can be concurrently accessed by other sessions through different objects, i.e. views, other stored procs, dynamic queries, etc.
SQL Server’s application locks closely resemble mutexes in application programming. Implemented through a couple of system stored procedures, application locks are easy to understand and implement. There are many different ways to achieve the same goal, e.g. by controlling the types of locks (UPDLOCK) and/or mimicking the behavior of the ANSI transaction isolation levels applied only to specific table(s) (SERIALIZABLE aka HOLDLOCK) within a critical section. Other solutions may seem like workarounds, implementing more imperative approaches such as Tables and Loops.

Thanks for reading.

Dean Mincic

Recursive CTE

Recursive Common Table Expressions, table expressions and more…


Summary

Common Table Expressions were introduced in SQL Server 2005. They represent one of several types of table expressions available in Sql Server. A recursive CTE is a type of CTE that references itself. It is usually used to resolve hierarchies.
In this post I will try to explain how CTE recursion works, where it sits within the group of table expressions available in Sql Server and a few case scenarios where the recursion shines.

Table Expressions

A table expression is a named query expression that represents a relational table. SQL Server supports four types of table expressions:

  • Derived tables
  • Views
  • ITVF (Inline Table Valued Functions aka parameterised views)
  • CTE (Common Table Expressions)
    • Recursive CTE

In general, table expressions are not materialised on disk. They are virtual tables present only in RAM (they may be spilled to disk as a result of e.g. memory pressure, the size of a virtual table, etc.). The visibility of table expressions may vary, i.e. views and ITVFs are db objects visible at the database level, whereas their scope is always a single SQL statement – table expressions cannot operate across different SQL statements within a batch.

The benefits of table expressions are not related to query execution performance but to the logical aspect of the code!

Derived Tables

Derived tables are table expressions also known as sub-queries. The expressions are defined in the FROM clause of an outer query. The scope of derived tables  is always the outer query.

The following code represents a derived table called AUSCust.

SELECT AUSCust.*
FROM (
       SELECT  custid
              ,companyname
       FROM dbo.Customers
       WHERE country = N'Australia'
) AS AUSCust;
--AUSCust is a derived table

The derived table AUSCust is visible only to the outer query and the scope is limited to the sql statement.

Views

Views (sometimes referred to as virtual relations) are reusable table expressions. A view definition is stored as a SQL Server object along with objects such as user-defined tables, triggers, functions, stored procedures, etc.
The main advantage of views over other types of table expressions is their re-usability, i.e. derived queries and CTEs have a scope limited to a single statement.
Views are not materialised, meaning that the rows produced by views are not stored permanently on disk. Indexed views in SQL Server (similar to, but not the same as, materialised views on other db platforms) are a special type of view that can have their result set permanently stored on disk – more on indexed views can be found here.

Just a few basic guidelines on how to define SQL Views.

  • SELECT * in the context of a view definition behaves differently than when used as a query element in a batch.

    CREATE VIEW dbo.vwTest
    AS
       SELECT *
       FROM dbo.T1
       ...

    The view definition will include all columns from the underlying table, dbo.T1, at the time of the view creation. This means that if we change the table schema (i.e. add and/or remove columns) the changes will not be visible to the view – the view definition will not automatically change to support the table changes. This can cause errors in situations when e.g. a view tries to select non-existent columns from an underlying table.
    To fix the problem, we can use one of two system procedures: sys.sp_refreshview or sys.sp_refreshsqlmodule (see the example after this list).
    To prevent this behavior, follow the best practice and explicitly name the columns in the definition of the view.

  • Views are table expressions and therefore cannot be ordered. Views are not cursors! It is possible, though, to “abuse” the TOP/ORDER BY construct in the view definition in an attempt to force sorted output, e.g.
    CREATE VIEW dbo.MyCursorView 
    AS 
      SELECT TOP(100 PERCENT) * 
      FROM dbo.SomeTable
      ORDER BY column1 DESC

    The query optimiser will discard the TOP/ORDER BY since the result of a table expression is always a table – selecting TOP(100 PERCENT) doesn’t make any sense anyway. The idea behind table structures is derived from a concept in relational database theory known as a Relation.

  • During the processing of a query that references a view, the query from the view definition gets unfolded, or expanded, and implemented in the context of the main query. The consolidated code (query) will then be optimised and executed.
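
As mentioned in the first bullet, refreshing a view’s metadata after an underlying table change is a one-liner (dbo.vwTest is the example view from above):

--refresh the view metadata after dbo.T1 schema changes
EXEC sys.sp_refreshview @viewname = N'dbo.vwTest';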

ITVF (Inline Table Valued Functions)

ITVFs are reusable table expressions that support input parameters. The functions can be treated as parameterised views.
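
A minimal sketch of an ITVF, reusing the dbo.Customers columns from the derived table example above (the function name is illustrative only):

CREATE FUNCTION dbo.fnCustomersByCountry (@country NVARCHAR(40))
RETURNS TABLE
AS
RETURN
(
    SELECT  custid
           ,companyname
    FROM dbo.Customers
    WHERE country = @country
);
GO
--used just like a view, but with a parameter
SELECT * FROM dbo.fnCustomersByCountry(N'Australia');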

CTE (Common Table Expressions)

Common table expressions are similar to derived tables but with several important advantages.

A CTE is defined using a WITH statement, followed by a table expression definition. To avoid ambiguity (T-SQL uses the WITH keyword for other purposes, i.e. WITH ENCRYPTION, etc.), the statement preceding the CTE’s WITH clause MUST be terminated with a semicolon. This is not necessary if the WITH clause is the very first statement in a batch (i.e. in a VIEW/ITVF definition).

NOTE: The semicolon statement terminator is supported by the ANSI standard and it is highly recommended to use it as part of T-SQL programming practice.
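
For example, the derived table query from earlier can be rewritten as a CTE – the WITH clause names the table expression up front instead of nesting it in the FROM clause:

;WITH AUSCust AS
(
    SELECT  custid
           ,companyname
    FROM dbo.Customers
    WHERE country = N'Australia'
)
SELECT AUSCust.*
FROM AUSCust;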

Recursive CTE

SQL Server has supported recursive querying capabilities, implemented through recursive CTEs, since version 2005 (Yukon).

Elements of a recursive CTE

  1. Anchor member(s) – query definitions that:
    1. return a valid relational result table
    2. are executed ONLY ONCE at the beginning of query execution
    3. are always positioned before the first recursive member definition
    4. the last anchor member must be followed by the UNION ALL operator. The operator combines the last anchor member with the first recursive member
  2. UNION ALL multi-set operator. The operator combines the result sets produced by the anchor and recursive members
  3. Recursive member(s) – query definitions that:
    1. return a valid relational result table
    2. have a reference to the CTE name. The reference to the CTE name logically represents the previous result set in a sequence of executions, i.e. the first “previous” result set in the sequence is the result the anchor member returned
  4. CTE invocation – the final statement that invokes the recursion
  5. Fail-safe mechanism – the MAXRECURSION option prevents the database system from entering an infinite loop. This is an optional element.
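
Assuming the dbo.Employees table used later in this post (with at least EmpId and MgrId columns), the elements map onto a query roughly like this:

;WITH cteEmployees AS
(
    --1. anchor member: the top-level manager(s), executed only once
    SELECT emp.EmpId, emp.MgrId
    FROM dbo.Employees emp
    WHERE emp.MgrId IS NULL

    UNION ALL --2. combines the anchor member with the recursive member

    --3. recursive member: references the CTE name, i.e. the previous result set
    SELECT emp.EmpId, emp.MgrId
    FROM dbo.Employees emp
        INNER JOIN cteEmployees cte
            ON emp.MgrId = cte.EmpId
)
SELECT EmpId, MgrId          --4. CTE invocation
FROM cteEmployees
OPTION (MAXRECURSION 100);   --5. fail-safe (100 is also the default)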

Termination check

The CTE’s recursive member has no explicit recursion termination check.
In many programming languages, we can design a method that calls itself – a recursive method. Every recursive method needs to be terminated when certain conditions are satisfied. This is explicit recursion termination. After this point the method begins to return values. Without a termination point, recursion can end up calling itself “endlessly”.
The CTE’s recursive member termination check is implicit, meaning that the recursion stops when no rows are returned from the previous CTE execution.

Here is a classic example of a recursion in imperative programming. The code below calculates the factorial of an integer using a recursive function(method) call.

...
static void Main(string[] args)
{
     int Number = 0; 
     long Result;   
     ...
     //function call
     Result = CalculateFactorial(Number); 
     ...        
}
public static long CalculateFactorial(int number)
{
    //Termination check (the factorial of 0 is 1)
    if (number == 0)
      return 1;

    //Recursive call
    return number * CalculateFactorial(number - 1);
}

Complete console program code can be found here.

MAXRECURSION

As mentioned above, recursive CTEs, as well as any recursive operation, may cause infinite loops if not designed correctly. This situation can have a negative impact on database performance. The SQL Server engine has a fail-safe mechanism that does not allow infinite executions.

By default, the number of times the recursive member can be invoked is limited to 100 (this does not count the once-off anchor execution). The code will fail upon the 101st execution of the recursive member.

Msg 530, Level 16, State 1, Line xxx
The statement terminated. The maximum recursion 100 has been exhausted before statement completion.

The number of recursions is managed by the MAXRECURSION n query option. The option can override the default number of maximum allowed recursions. The parameter (n) represents the recursion level: 0 <= n <= 32767.

Important note: MAXRECURSION 0 disables the recursion limit!

Figure 1 shows an example of a recursive CTE with its elements


Figure 1, Recursive CTE elements

Declarative recursion is quite different from traditional, imperative recursion. Apart from the different code structure, we can observe the difference between the explicit and the implicit termination check. In the CalculateFactorial example, the explicit termination point is clearly defined by the condition: if (number == 0) then return 1.
In the case of the recursive CTE above, the termination point is implicitly defined by the INNER JOIN operation, more specifically by the result of the logical expression in its ON clause: ON e.MgrId = c.EmpId. The result of the table operation drives the number of recursions. This will become clearer in the following sections.

Use recursive CTE to resolve Employee hierarchy

There are many scenarios in which we can use recursive CTEs, e.g. to separate elements, etc. The most common scenario I have come across during many years of sequeling has been using recursive CTEs to resolve various hierarchical problems.

The Employee tree hierarchy is a classic example of a hierarchical problem that can be solved using Recursive CTEs.

Example

Let’s say we have an organisation with 12 employees. The following business rules apply:

  • An employee must have unique id, EmpId
    • enforced by: Primary Key constraint on EmpId column
  • An employee can be managed by 0 or 1 manager.
    • enforced by: PK on EmpId, FK on MgrId and NULLable MgrId column
  • A manager can manage one or more employees.
    • enforced by: Foreign Key constraint(self referenced) on MgrId column
  • A manager cannot manage himself.
    • enforced by: CHECK constraint on MgrId column

The tree hierarchy is implemented in a table called dbo.Employees. The scripts can be found here.


Figure 2, Employees table

Let’s present the way recursive CTEs operate by answering the question: Who are the direct and indirect subordinates of the manager with EmpId = 3?

From the hierarchy tree in Figure 2  we can clearly see that Manager (EmpId = 3) directly manages employees; EmpId=7, EmpId=8 and EmpId=9 and indirectly manages; EmpId=10, EmpId=11 and EmpId=12.

Figure 3 shows the EmpId=3 hierarchy and the expected result. The code can be found here.


Figure 3, EmpId=3 direct and indirect subordinates
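
The linked test query follows this general shape – a sketch using the cte1 name referenced in the walkthrough below; the exact column list in the original script may differ:

;WITH cte1 AS
(
    --anchor: the manager we start from
    SELECT e.EmpId, e.MgrId
    FROM dbo.Employees e
    WHERE e.EmpId = 3

    UNION ALL

    --recursive member: direct reports of the previous result set
    SELECT e.EmpId, e.MgrId
    FROM dbo.Employees e
        INNER JOIN cte1 c
            ON e.MgrId = c.EmpId
)
SELECT EmpId, MgrId
FROM cte1
WHERE EmpId <> 3; --keep only the subordinates; the anchor row itself is also part of cte1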

So, how did we get the final result?

The recursive part in the current iteration always references its previous result – the result of the previous iteration. That result is a table expression (or virtual table) called cte1 (the table on the right side of the INNER JOIN). As we can see, cte1 contains the anchor part as well. In the very first run (the first iteration), the recursive part cannot reference its previous result because there was no previous iteration. This is why, in the first iteration, only the anchor part executes, and it executes only once during the whole process. The anchor query result set gives the recursive part its previous result in the second iteration. The anchor acts as a flywheel, if you will 🙂

The final result builds up through iterations i.e Anchor result + iteration 1 result + iteration 2 result …

The logical execution sequence

The test query is executed by following the logical sequence below:

  1. The SELECT statement outside the cte1 expression invokes the recursion. The anchor query executes and returns a virtual table called cte1. The recursive part returns an empty table since it has no previous result yet. Remember, the expressions in the set-based approach are evaluated all at once.
    Figure 4, cte1 value after 1st iteration
  2. The second iteration begins. This is the first recursion. The anchor part played its part in the first iteration and from now on returns only empty sets. However, the recursive part can now reference its previous result (the cte1 value after the first iteration) in the INNER JOIN operator. The table operation produces the result of the second iteration as shown in the figure below.
    Figure 5, cte1 value after 2nd iteration
  3. Second iteration produces a non-empty set, so the process continues with the third iteration – the second recursion. Recursive element now references the cte1 result from the second iteration.

    Figure 6, cte1 value after 3rd iteration
  4. An interesting thing happens in the 4th iteration – the third recursion attempt. Following the previous pattern, the recursive element uses the cte1 result from the previous iteration. However, this time there are no rows returned as a result of the INNER JOIN operation, and the recursive element returns an empty set. This is the implicit termination point mentioned before. In this case, INNER JOIN’s logical expression evaluation dictates the number of recursions.
    Because the last cte1 result is an empty result-set, the 4th iteration(or 3rd recursion) is “canceled” and the process is successfully finished.

    Figure 7, The final iteration

    The logical cancellation of the 3rd recursion (the last recursion, which produced an empty result-set, does not count) will become clearer in the recursive CTE execution plan analysis section that follows. We can add the OPTION(MAXRECURSION 2) query option at the end of the query, which limits the number of allowed recursions to 2. The query will produce the correct result, proving that only two recursions are required for this task.
    Note: From the physical execution perspective, the result-set is progressively (as rows bubble up) sent to the network buffers and back to the client application.

Finally, the answer to the question above is:
There are six employees who directly or indirectly report to EmpId = 3. Three employees, EmpId=7, EmpId=8 and EmpId=9, are direct subordinates, and EmpId=10, EmpId=11 and EmpId=12 are indirect subordinates.

Knowing the mechanics of recursive CTE, we can easily solve the following problems.

  • find all the employees who are hierarchically above EmpId = 10 (code here)
  • find EmpId=8’s direct and second-level subordinates (code here)

In the second example, we control the depth of the hierarchy by restricting the number of recursions.
The anchor element gives us the first level of the hierarchy – in this case, the direct subordinates. Each recursion then moves one hierarchy level down from the first level. In the example, the starting point is EmpId=8 and his/her direct subordinates. The first recursion moves one more level down the hierarchy, where EmpId=8’s second-level subordinates “live”. A sketch of one way to do this follows below.
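A sketch of one way to implement this – a recursion counter in the recursive member limits the depth (the linked script may take a different approach):

DECLARE @EmpId INT = 8; --the starting point

;WITH cte1 AS
(
    SELECT e.EmpId, e.MgrId, recLvl = 0        --anchor: EmpId=8's direct subordinates
    FROM dbo.Employees e
    WHERE e.MgrId = @EmpId

    UNION ALL

    SELECT e.EmpId, e.MgrId, recLvl = c.recLvl + 1
    FROM dbo.Employees e
        INNER JOIN cte1 c
            ON e.MgrId = c.EmpId
    WHERE c.recLvl < 1                         --allow a single recursion => second level only
)
SELECT EmpId, MgrId, recLvl
FROM cte1
OPTION (MAXRECURSION 2); --generous fail-safe; the recLvl filter does the actual depth limiting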

Circular reference problem

One of the interesting things with hierarchies is that the members of a hierarchy can form a closed loop, where the last element in the hierarchy references the first element. The closed loop is also known as a circular reference.
In cases like this, the implicit termination point, like the INNER JOIN operation explained earlier, simply will not work because it will always return a non-empty result-set for the next recursion to work on. The recursive part will keep rolling until it hits Sql Server’s fail-safe, the MAXRECURSION query option.

To demonstrate a circular reference situation using the previously set up test environment, we’ll need to

  • Remove Primary and Foreign key constraints from dbo.Employees table to allow the closed loops scenarios.
  • Create a circular reference (EmpId=10 will manage his indirect manager , EmpId = 3)
  • Extend the test query used in the previous examples, to be able to analyse hierarchy of the elements in the closed loop.

The extended test query can be found here.

Before continuing with the circular ref. example, let’s see how the extended test query works. Comment out the WHERE clause predicates (the last two lines) and run the query against the original dbo.Employees table.

...
  )
   SELECT cte1.EmpId
          ,cte1.MgrId
          ,recLvl
          ,pth
          ,isCircRef
   FROM cte1
 --WHERE cte1.isCircRef = 1 --returns only circular ref. hierarchies
 --  AND cte1.recLvl <= 2; --limits final result to two recursions

Figure 8, Detecting existence of circular loops in hierarchies

The result of the extended query is exactly the same as the result presented in the previous experiment in Figure 3. The output is extended to include the following columns

  • pth – graphically represents the current hierarchy. Initially, within the anchor part, it simply adds the first subordinate to MgrId=3, the manager we’re starting from. Each recursive element then takes the previous pth value and adds the next subordinate to it.
  • recLvl – represents the current level of recursion. The anchor execution is counted as recLvl=0.
  • isCircRef – detects the existence of a circular reference in the current hierarchy (row). As a part of the recursive element, it searches for the existence of an EmpId that was previously included in the pth string,
    e.g. if the previous pth looks like 3->8->10 and the current recursion adds “->3” (3->8->10->3), meaning that EmpId=3 is not only an indirect superior of EmpId=10 but is also EmpId=10’s subordinate – an “I am your boss’s boss, and you are my boss” kind of situation 😐 A sketch of how these three columns might be put together is shown below.
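A rough sketch of how the three columns might be built inside cte1 (the body elided as “...” in the snippet above); the exact expressions in the linked script may differ:

DECLARE @EmpId INT = 3;

;WITH cte1 AS
(
    SELECT  e.EmpId
           ,e.MgrId
           ,recLvl    = 0  --anchor execution counts as recursion level 0
           ,pth       = CAST(CONCAT(e.MgrId, '->', e.EmpId) AS VARCHAR(1000))
           ,isCircRef = 0
    FROM dbo.Employees e
    WHERE e.MgrId = @EmpId

    UNION ALL

    SELECT  e.EmpId
           ,e.MgrId
           ,recLvl    = c.recLvl + 1
           ,pth       = CAST(CONCAT(c.pth, '->', e.EmpId) AS VARCHAR(1000))
            --flag the row if the EmpId we are about to add already appears in the path
           ,isCircRef = CASE WHEN CONCAT('->', c.pth, '->') LIKE CONCAT('%->', e.EmpId, '->%')
                             THEN 1 ELSE 0 END
    FROM dbo.Employees e
        INNER JOIN cte1 c
            ON e.MgrId = c.EmpId
)
SELECT cte1.EmpId
      ,cte1.MgrId
      ,recLvl
      ,pth
      ,isCircRef
FROM cte1
--WHERE cte1.isCircRef = 1 --returns only circular ref. hierarchies
--  AND cte1.recLvl <= 2; --limits final result to two recursions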

Let’s now make the necessary changes to dbo.Employees to see the extended test query in action.

Remove PK and FK constraints to allow circular references and add a “bad boy circular ref” to the table.

ALTER TABLE dbo.Employees
    DROP CONSTRAINT FK_MgrId_EmpId
        ,CONSTRAINT PK_EmpId;
GO
--insert circ ref.
INSERT INTO dbo.Employees (EmpId,MgrId)
    VALUES (3,10) -- EmpId 10 is managing EmpId 3
GO

Run the extended test query and analyse the results (don’t forget to un-comment the previously commented WHERE clause at the end of the script).
The script will execute 100 recursions before it gets interrupted by the default MAXRECURSION. The final result is restricted to two recursions, .. AND cte1.recLvl <= 2;, which is all that is required to resolve EmpId=3’s hierarchy.

The figure below shows a closed-loop hierarchy, the “maximum allowed number of recursions exhausted” error, and the output that reveals the closed loop.

Figure 10, Circular reference detected

A few notes about the circular reference script.
The script is just an idea of how to find closed loops in hierarchies. It reports only the first occurrence of a circular reference – try removing the WHERE clause and observe the result.
In my opinion, the script (or a similar version of it) can be used in a production environment, e.g. for troubleshooting purposes or to prevent the creation of circular references in an existing hierarchy. However, it needs to be secured by an appropriate MAXRECURSION n, where n is the expected depth of the hierarchy.

The script is non-relational and relies on a traversal technique. It is always the best approach to use declarative constraints (PK, FK, CHECK, …) to prevent any closed loops in the data.

Execution plan analysis

This segment explains how Sql Server’s query optimiser (QO) implements a recursive CTE. There is a common pattern that the QO uses when constructing the execution plan. Run the original test query and include the actual execution plan.

Like the test query, the execution plan has two branches: the anchor branch and the recursive branch. The Concatenation operator, which implements the UNION ALL operator, connects the results from the two parts, forming the query result.

Let’s try to reconcile the logical execution sequence mentioned before and the actual implementation of the process.


Figure 11, Recursive CTE execution plan

Following the data flow (right to left direction) the process looks like:

Anchor element (executed only once)

  1. Clustered Index Scan operator – system performs index scan. In this example, it applies expression MgrId = @EmpId as a residual predicate. Selected rows(columns EmpId and MgrId) are passed (row by row) back to the previous operator.
  2. Compute Scalar – the operator adds a column to the output. In this example, the added column’s name is [Expr1007] and it represents the number of recursions. The column has an initial value of 0; [Expr1007]=0
  3. Concatenation – combines inputs from the two branches. In the first iteration, the operator receives rows only from the anchor branch. It also changes the names of the output columns. In this example the new column names are:
    1. [Expr1010] = [Expr1007] or [Expr1009]*   (*[Expr1009] holds the number of recursions assigned in the recursive branch; it has no value in the first iteration)
    2. [Recr1005] = EmpId(from the anchor part) or EmpId(from the recursive part)
    3. [Recr1006] = MgrId(from the anchor part) or MgrId (from the recursive part)
  4. Index Spool (Lazy Spool) This operator stores the result received from the Concatenation operator in a worktable. It has property “Logical Operation” set to “Lazy Spool”. This means that the operator returns its input rows immediately and does not accumulate all rows until it gets the final result set (Eager Spool) . The worktable is structured as a clustered index with the key column [Expr1010] – the recursion number. Because the index key is not unique, the system adds an internal, 4 byte uniquifier to the index key to ensure that all rows in the index are, from the physical implementation perspective, uniquely identifiable. The operator also has property “With Stack” set to “True” which makes this version of the spool operator a Stack Spool  A Stack Spool operator always has two components –  an Index Spool that builds the index structure and a Table Spool that acts as a consumer of the rows stored in the worktable that was built by the Index Spool.
    At this stage, the Index Spool operator returns rows to the SELECT operator and stores the same rows in the worktable.
  5. SELECT operator returns EmpId and MgrId ([Recr1005] , [Recr1006]). It excludes [Expr1010] from the result. The rows are sent to the network buffer as they arrive from the operators downstream

After exhausting all rows from the Index Scan operator, the Concatenation operator switches context to the recursive branch. The anchor branch will not be executed again during the process.

Recursive element

  1. Table Spool (Lazy Spool). The operator has no inputs and, as mentioned in (4) acts as a consumer of the rows produced by the Index Spool and stored in a clustered worktable. It has property “Primary Node” set to 0 which points to the Index Spool Node Id. It highlights the dependency of the two operators. The operator
    1. removes the rows it read in the previous recursion. This is the first recursion, so there are no previously read rows to be deleted. The worktable contains three rows (Figure 4).
    2. reads rows sorted by the index key + uniquifier in descending order. In this example, the first row read is EmpId=9, MgrId=3.

    Finally, the operator renames the output column names. [Recr1003] =[Recr1005],  [Recr1004] =[Recr1006] and [Expr1010] becomes [Expr1008].
    NOTE: The table spool operator may be observed as the cte1 expression on the right side of the INNER JOIN (figure 4)

  2. Compute Scalar – the operator adds 1 to the current number of recursions previously stored in column [Expr1007]. The result is stored in a new column, [Expr1009]: [Expr1009] = [Expr1007] + 1 = 0 + 1 = 1. The operator outputs three columns – the two from the Table Spool ([Recr1003] and [Recr1004]) and [Expr1009].
  3. Nested Loop(I) – the operator receives rows from its outer input, which is the Compute Scalar from the previous step, and then uses [Recr1003] (the EmpId from the Table Spool operator) as a residual predicate in the Index Scan operator positioned in the loop’s inner input. The inner input executes once for each row from the outer input.
  4. Index Scan operator returns all qualified rows from dbo.Employees table (two columns; EmpId and MgrId) to the nested loop operator.
  5. Nested Loop(II): The operator combines [Exp1009] from the outer input and EmpId and MgrId from the inner input and passes the three column rows to the next operator.
  6. Assert – the operator checks for conditions that require the query to be aborted with an error message. In the case of recursive CTEs, the Assert operator implements the “MAXRECURSION n” query option. It checks whether or not the recursive part has reached the allowed number (n) of recursions. If the current number of recursions, [Expr1009] (see step 2 above), is greater than n, the operator returns 0, causing a run-time error. In this example, Sql Server uses its default MAXRECURSION value of 100, and the expression looks like: CASE WHEN [Expr1009] > 100 THEN 0 ELSE NULL END. If we decide to exclude the fail-safe by adding MAXRECURSION 0, the Assert operator will not be included in the plan.
  7. Concatenation combines inputs from the two branches. This time it receives input from the recursive part only and outputs columns/rows as shown in the step 3.
  8. Index Spool (Lazy Spool) adds the output from the Concatenation operator to the worktable and then passes it to the SELECT operator. At this point the worktable contains a total of 4 rows: three rows from the anchor execution and one from the first recursion. Following the clustered index structure of the worktable, the new row is stored at the end of the worktable.
  9. The process now resumes from step 1 of the recursive element. The Table Spool operator removes the previously read rows (the first three rows) from the worktable and reads the last inserted row, the fourth row.

Conclusion

A CTE (Common Table Expression) is a type of table expression available in Sql Server. A CTE is an independent table expression that can be named and referenced once or more in the main query.
One of the most important uses of CTEs is to write recursive queries. Recursive CTEs always follow the same structure – an anchor query, the UNION ALL multi-set operator, a recursive member, and the statement that invokes the recursion. A recursive CTE is declarative recursion and as such has different properties than its imperative counterpart, e.g. the declarative recursion termination check is implicit in nature – the recursion process stops when the previous iteration of the cte returns no rows.

 

Thanks for reading.

Dean Mincic

Bookmark lookup tipping point

Bookmark lookup critical point


Summary

It is common for production environments to have queries – query plans – that use non-covered, non-clustered indexes in combination with a bookmark (Key or RID) lookup operator. The combination of the two physical operators is one way the query optimiser can use a non-covered index to provide the information required by a query. However, sometimes, for the same query, the query optimiser decides to scan the whole (clustered or heap) table instead. This drastic change in the plan shape may have a negative impact on our query performance.
This post attempts to explain the mechanism behind the QO’s decision on when to switch between the two plan shapes. The concept is known as The Tipping Point and represents the point at which the number of page reads required by the bookmark lookup operator makes a clustered index/heap table scan less expensive than the non-clustered index seek.

How bookmark lookup works

Before diving into the tipping point specifics, it would be good to understand how the bookmark lookup operator works in combination with a non-clustered, non-covered index. A bookmark lookup (Key or RID) is a physical operator used to find data rows in the base table (clustered index or heap) using a clustered index key (Key lookup) or a row locator (RID lookup).
Let’s create a test environment we can use throughout the blog.

Create test environment

Create a test table

--DB compatibility level 150
DROP TABLE IF EXISTS dbo.TestBookmarkLookupthreshold;
GO

CREATE TABLE dbo.TestBookmarkLookupthreshold(
		EmployeeId INT IDENTITY(1,1)
		     PRIMARY KEY CLUSTERED     --4
		,[Name]  CHAR(120) NOT NULL    --120
		,Surname CHAR(120) NOT NULL    --120
		,Notes   CHAR(245)             --245
		,SearchValue INT  NOT NULL     --4
		     INDEX NCI_SearchValue UNIQUE NONCLUSTERED(SearchValue) 
)
GO

Insert 100,000 rows

;WITH generateRows AS
(
	SELECT TOP(100000) n = ROW_NUMBER() OVER(ORDER BY (SELECT NULL))	
	FROM sys.columns c
		CROSS APPLY sys.columns c1
			CROSS APPLY sys.columns c2
	ORDER BY (SELECT NULL)
)
INSERT INTO DBO.TestBookmarkLookupthreshold (
			   [Name]
			   ,Surname
			   ,Notes
			   ,SearchValue
)
	SELECT [Name]  = 'MyNameIs_'+CAST(n AS VARCHAR(10))
	      ,Surname = 'MyLastNameIs_' + CAST(n+100 AS VARCHAR(10))
	      ,Notes   = 'MyNotes..'
	      ,SearchValue = n
	FROM generateRows
	ORDER BY n
GO

The test object’s properties that are interesting for our experiment:

  • Unique clustered index on EmployeeId.
  • Unique, non-clustered index on the SearchValue column.
  • SearchValue column contains unique, ever increasing integer values. The  values match EmployeeId values.
  • The row size is exactly 500 bytes: 493 bytes are used by the five fixed-length columns, plus 7 bytes of row overhead. More on this can be found here.

Key lookup scenario

The query below returns 500 rows (all columns) for a range predicate.

DBCC TRACEON(652);
GO
  SET STATISTICS XML,IO  ON;
	SELECT *
	FROM dbo.TestBookmarkLookupthreshold
	WHERE SearchValue BETWEEN 1000 AND 1499
  SET STATISTICS XML,IO  OFF
DBCC TRACEOFF(652);
GO

Note: Trace flag 652 disables page pre-fetching scans (read-ahead). Disabling this storage engine feature will help us reconcile the number of I/O operations reported by STATISTICS IO with the number of rows selected by the query. More on the trace flag later in the blog.

Analyse key lookup query plan

The figure below consolidates three sets of information related to our test query – a graphical execution plan shape, basic properties of the two physical operators and the number of IO reads performed on the test table.


Figure 1, Key lookup, index seek plan

We read the query plan as the following sequence of events.

  • The query optimiser chose a key lookup/non-clustered index seek routine to implement the query request.
  • The Nested Loop operator requested, for its outer input, all valid rows (rows that pass the seek predicate ..SearchValue BETWEEN 1000 AND 1499..) from the NCI_SearchValue index. The index seek (index bTree traverse) was executed once, resulting in 500 rows and two columns – SearchValue and EmployeeId. The latter also acts as a pointer to the full-size rows stored in the clustered index leaf level.
  • The Nested Loop operator requested, through its inner input, the rest of the columns selected by the query – Name, Surname and Notes. The search (key lookup operator) was executed 500 times, once per row in the outer input, returning a new set of 500 rows – a row per key search. Each execution traversed the clustered index bTree structure using EmployeeId as a seek predicate, in order to pin-point the qualifying rows.
  • For each key lookup search, the Nested Loop operator combined the two outputs – the SearchValue and EmployeeId from the outer input with the Name, Surname and Notes from the inner input – forming the shape of the final result set.

The next thing we need to understand is the relationship between the number of I/O reads required to implement the above routine and the number of rows processed.

Figure 1 shows that the number of I/O reads required for the operation was 1503 logical reads. A logical read is when Sql Server processes an 8Kb page from a RAM memory segment called the buffer cache. If the requested page is not already in the buffer cache, the storage engine needs to perform a physical read operation in order to get the same 8Kb page from disk.
The properties of the two physical operators (NCI seek and Key lookup) show that the system read 500 rows from the non-clustered structure in one go and performed 500 operations on the clustered index, returning a row per operation.

Now we need to dive a little bit deeper into Sql Server’s storage protocol in order to find all the physical pages that were processed during the operations. The query below gives us a high-level overview of the index structures used in the test. The non-clustered index bTree has two levels and a total of 175 pages, whereas the clustered index bTree has three levels and a total of 6273 pages.

SELECT idx.[name]
	  ,idx.[type_desc]
	  ,pstats.index_level
	  ,pstats.index_depth
	  ,pstats.avg_record_size_in_bytes --all fixed length data type columns	  
	  ,pstats.record_count	  
	  ,pstats.page_count
	  ,idx.index_id
	  ,TotalPageNo = SUM(pstats.page_count) OVER(PARTITION BY idx.index_id) --total no of pages per index
 FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID(N'dbo.TestBookmarkLookupthreshold','U'), NULL, NULL , 'DETAILED') pstats
     LEFT OUTER JOIN sys.indexes idx
        ON idx.object_id = pstats.object_id
           AND idx.index_id = pstats.index_id;

Figure 3, the total number of pages per index 

The next query gives us a detailed view of the index pages across bTree levels – The Doubly Linked List data structure.

SELECT  idx.[name]
       ,idx.[type_desc]
       ,aloc.page_type
	   ,aloc.page_type_desc
	   ,aloc.previous_page_page_id
	   ,aloc.allocated_page_page_id
	   ,aloc.next_page_page_id
	   ,aloc.extent_page_id
	   ,idx.index_id
	   ,aloc.page_level
FROM sys.dm_db_database_page_allocations(DB_ID(), OBJECT_ID(N'dbo.TestBookmarkLookupthreshold','U'), NULL, NULL , 'DETAILED') aloc
LEFT OUTER JOIN sys.indexes idx
        ON idx.object_id = aloc.object_id
           AND idx.index_id = aloc.index_id
WHERE aloc.page_type IS NOT NULL 
   AND aloc.index_id = 1 --2 (0 - heap, 1- clustered index , > 1 non-clustered index
ORDER BY aloc.page_level DESC

Finally, the following query gives us a sneak-peek of the actual rows  on the index/data pages.

DBCC  TRACEON(3604); --allows DBCC PAGE command to send its output to SSMS
GO
DBCC PAGE ('Test2019DB', 1,832,3) --replace db name/page_id with your values
DBCC TRACEOFF(3604);
/*
DBCC PAGE ( { database_name | database_id | 0}, file_number, page_number
[,print_option ={0|1|2|3} ]) [WITH TABLERESULTS]
*/
GO

Data access pattern

The following diagram represents data access pattern used by the key lookup routine.

--test query
SELECT *
FROM dbo.TestBookmarkLookupthreshold
WHERE SearchValue BETWEEN 1000 AND 1499


Figure 2, Nonclustered index & key lookup internals

Data access pattern analysis

Our next goal is to reconcile the total number of logical I/O reads previously reported by STATISTICS IO (1503) with the diagram above.
Let’s find out the total number of I/O read operations required to generate one row (out of a total of 500 rows). The following sequence of events is derived from Figure 2.

  1. The Nested Loop operator requests all eligible rows from the index seek operator. The non-clustered index is a non-covered index and can provide only the SearchValue and EmployeeId columns. The index seek operator uses the two seek predicates to find the range of values, SearchValue >=1000 AND SearchValue <=1499.
  2. Non-clustered index traverse. Index seek operator reads the Root page(PageId=1:16448) of the non-clustered index making the first I/O read (NoOfReads = 1).
  3. Index seek operation, following the lower boundary(SearchValue = 1000) of the search range, finds a pointer(PageId) which points to the index leaf level page which contains the full(two columns) index row. (ROOT PAGE: SearchValue = 597, PageId = 1:16456). It also knows that the PageId=1:16456 alone cannot provide complete range of the requested values but only SearchValues >=1000 AND SearchValues <1157. It “remembers” the following pointer, PageId = 1:16457 which can provide the rest of the values, SearchValue >=1157 and SearchValue <= 1499.
  4. Index seek operator performs the second I/O read(PageId =1:16457) following Path (A).  The total number of I/O reads is now (NoOfReads = 2).
  5. After storing the first 165 rows found on PageId =1:16456 in memory, the operator follows Path (B). The operation is known as a “partial index scan“. The operator knows that the subsequent page (PageId=1:16457) contains the rest of the requested rows (335 rows). The current page also has pointers to the previous and next pages (doubly linked list). Path (B) makes the third read (NoOfReads = 3).
  6. The Nested Loop operator has now received all 500 requested rows from its outer input, the NCI index seek operator.
  7. Nested Loop operator now performs the Key Lookup operation over clustered index, 500 times, once per row collected from the outer input.
  8. Clustered index traverse (singleton search). On its very first execution, the key lookup operator uses the first row from the Nested Loop outer input, (EmployeeId = 1000) and performs its first page read(PageId = 1:2680). The page is the root level of clustered index bTree. The operation follows Path(C) increasing the total number of reads (NoOfReads = 4).
  9. The clustered root page provides a pointer (PageId = 1:832) which points to the first index intermediate level. The page maps all EmployeeIds between NULL and less than 4305. The row with EmployeeId=1000 falls into that range. Following Path (D), the operator makes its second read and increases the total number of reads (NoOfReads = 5).
  10. Intermediate page 1:832 provides information about a leaf level page(PageId=1:1029) that contains the first full row.
  11. The process now follows Path (E) and makes its final, third clustered index read (NoOfReads = 6).
  12. The full row is then passed from the Nested Loop operator to the SELECT operator.
  13. The Nested Loop operator continues to progress through the list of 500 rows from its outer input, repeating steps 8 – 11 until all 500 rows have been processed.

The total number of reads is
Total No Of Reads = Index Seek operation (3 reads) + 500 * Key Lookup operation (3 reads) = 3 + 500 * 3 = 1503.
This is an exact match with the number of logical reads reported by STATISTICS IO.

Important observations

From the storage-level perspective, one of the main differences between the two access patterns is that the clustered index seek (partial scan) is a sequential read I/O operation, whereas a key lookup (singleton clustered index seek) is a random read I/O operation. Sequential reads are generally less expensive (fewer mechanical movements at the disk level) than random reads. This is also true for RAM/SSD although they don’t have moving parts. This is a very high-level observation on data storage systems. 🙂

The number of random reads depends on the size of the row. The wider the row, the more random reads the key lookup needs to perform to get the same result-set. More about this later in the post.

Read ahead optimisation

Earlier in the blog I used TRACEFLAG 652 to disable page pre-fetching, aka the read-ahead functionality. Read-ahead is an asynchronous I/O mechanism built to overcome the gap between CPU and I/O performance. Because the CPU is many times faster than any storage system, Sql Server’s storage engine tries to read up to 64 sequential pages (8 extents) before they are requested by a query. This produces more logical reads, but on the other hand is faster than performing physical reads only when required. Although the number of read-ahead pages may vary, the mechanism reads pages from the index intermediate level(s) in order to identify the leaf-level pages to be read in advance. In our case, the functionality would, if not turned off, have “added a few” extra pages to the STATISTICS IO report and we wouldn’t be able to reconcile the reads with the diagram in Figure 2.
Try running the same test query without turning on the trace flag, as sketched below. The number of logical reads should be greater than 1503.
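Something along these lines (the exact read-ahead figures will vary):

SET STATISTICS IO ON;
  SELECT *
  FROM dbo.TestBookmarkLookupthreshold
  WHERE SearchValue BETWEEN 1000 AND 1499;
SET STATISTICS IO OFF;
-- with trace flag 652 off, the STATISTICS IO output also reports read-ahead reads,
-- and the logical reads no longer reconcile exactly with the 1503 calculated above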

The tipping point

The tipping point represents the critical number of rows after which the query optimiser decides to perform a clustered index scan instead of the non-clustered index seek/key lookup data access pattern.
As previously shown, in the non-clustered index/key lookup scenario, the number of rows requested by a query relates to the number of random reads. The main concern when determining the tipping point is actually the number of data pages that need to be “randomly” read from the clustered index leaf level – one read per requested row. In our example this number is 500. The calculation excludes the clustered index root/intermediate-level page reads and the non-clustered index reads altogether.
The number of pages that represents the tipping point lies between 1/4 and 1/3 of the clustered index data pages (leaf level). If a query requests fewer than 1/4 (25%) of the clustered index leaf-level pages, the query optimiser is most likely to allow random reads and the non-clustered index/key lookup data access pattern. However, if a query requests more than 1/3 (33%) of the pages, the query optimiser will implement a clustered index scan data access pattern.
Figure 3, The tipping point

Knowing that a random read corresponds to a selected row, we can express the tipping point as a number of rows;

                      1/4 (no. of data pages)  <=  Tipping point (rows)  <=  1/3 (no. of data pages)

In our example, the tipping point is expected to be somewhere between 1562 and 2083 rows.
So where exactly is our tipping point?
One approach is to apply a binary search algorithm to the tipping point interval and use trial and error until the query plan changes.
The other approach is to construct some kind of a program to do that for you 🙂

SET NOCOUNT ON;
GO

DECLARE @start INT
       ,@end INT 
       ,@IndexScan NVARCHAR(1000);

SELECT @start = 6250.00 / 4
      ,@end = 6250.00 / 3;

SELECT [Tipping point range lower boundary] = @start
      ,[Tipping point range higher boundary] = @end;


;WHILE @start <= @end
BEGIN

	SET FMTONLY ON	
		SELECT * --thisIsMyLittleQuery
		FROM DBO.TestBookmarkLookupthreshold
		WHERE SearchValue <= @start
		OPTION (RECOMPILE)
	SET FMTONLY OFF

	;WITH XMLNAMESPACES (default 'http://schemas.microsoft.com/sqlserver/2004/07/showplan' )
	,individualPlans AS 
	(
		SELECT  DatabaseName  = DB_NAME(s2.[dbid])
			   ,ProcName      = OBJECT_NAME(s2.objectid, s2.[dbid])
			   ,sql_statement = SUBSTRING( s2.[text]
										   ,(s1.statement_start_offset / 2) + 1 --/2: 2 bytes per character; s2.[text] is of type nvarchar(max). Start position is byte 0
										   ,( 
												(CASE -- check for the end of the batch
													WHEN s1.statement_end_offset = -1 THEN  DATALENGTH(s2.[text]) --end of the last query in the batch  -1 represents the end of a batch
													ELSE s1.statement_end_offset
												 END) 
												- s1.statement_start_offset
											 ) / 2 + 1
								 )
				,query_plan = CAST(s3.query_plan AS XML) 
		FROM    sys.dm_exec_query_stats AS s1
			CROSS APPLY sys.dm_exec_sql_text(s1.[sql_handle]) AS s2
				CROSS APPLY sys.dm_exec_text_query_plan(s1.plan_handle, s1.statement_start_offset, s1.statement_end_offset) s3 -- get the query plans
		WHERE s2.text LIKE '%SELECT * --thisIsMyLittleQuery%'
	)
	SELECT @IndexScan = query_plan.value('(/ShowPlanXML/BatchSequence/Batch/Statements/StmtSimple/QueryPlan/RelOp/@PhysicalOp)[1]','nvarchar(max)')
	FROM individualPlans
	WHERE sql_statement LIKE 'SELECT * --thisIsMyLittleQuery%';

	IF @IndexScan = 'Clustered Index Scan'
	BEGIN
		SELECT [TippingPoint (NoOfrows)] = @start
		BREAK;
	END 

	SET @start +=1;

END

The script runs the same query for different SearchValue values. The values are within the expected tipping point range, starting from the lower boundary. The query result is suppressed by the FMTONLY ON session setting. OPTION(RECOMPILE)* in this context ensures that the value of the local variable @start is known to the query optimiser when it creates the execution plan for the query*. For each query run, the program checks the current query plan’s RelOp element. If the element’s @PhysicalOp attribute is set to ‘Clustered Index Scan‘, the program terminates and selects the current SearchValue value. That value is the tipping point we are looking for.
Figure 4, The exact Tipping point number of rows

Note: Instead of using OPTION(RECOMPILE), we could use dynamic string execution.

/*
SET FMTONLY ON	
   SELECT * --thisIsMyLittleQuery
   FROM DBO.TestBookmarkLookupthreshold
   WHERE SearchValue <= @start
   OPTION (RECOMPILE)
SET FMTONLY OFF
*/
DECLARE @dynamicSql NVARCHAR(1000) =
      N'SET FMTONLY ON
        SELECT * --thisIsMyLittleQuery
        FROM DBO.TestBookmarkLookupthreshold
        WHERE SearchValue <= ' + CAST(@start AS NVARCHAR(10)) + N'
        SET FMTONLY OFF';
--the INT variable has to be converted to a string before concatenation, and the final statement must be SET FMTONLY OFF
EXEC (@dynamicSql);

This approach constructs and optimises the query at run-time. In this case, the local variable @start gets evaluated and is treated as a literal within the dynamic string. It is most likely that the query plan will not be parameterised and the individual plans (one per execution) will be cached as Adhoc plans. This may lead to plan cache pollution, but that is a topic for a separate blog 🙂
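A quick way to observe the effect, reusing the DMVs from the script above (a sketch):

SELECT  c.cacheobjtype
       ,c.objtype            --'Adhoc' - one cached entry per distinct literal value
       ,NoOfExecutions = c.usecounts
       ,t.[text]
FROM sys.dm_exec_cached_plans c
    CROSS APPLY sys.dm_exec_sql_text(c.plan_handle) t
WHERE c.objtype = 'Adhoc'
    AND t.[text] LIKE '%thisIsMyLittleQuery%';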

Let’s check the tipping point number of rows.

-- check the tipping point TP = 1677, Row Size 500bytes.
DBCC TRACEON(652);
GO
  SET STATISTICS XML,IO  ON;
	SELECT *
	FROM dbo.TestBookmarkLookupthreshold
	WHERE SearchValue < 1677;
  GO
	SELECT *
	FROM dbo.TestBookmarkLookupthreshold
	WHERE SearchValue < 1678;
  GO
  SET STATISTICS XML,IO  OFF;
DBCC TRACEOFF(652);
GO

Figure 5, Query plan change

In terms of the number of pages, the tipping point is expected in the range from 25% to 33% of the total number of clustered index data pages (leaf level). In our example, the range was between 1562 and 2083 pages.
From the number-of-rows point of view, the tipping point was 1677 rows, which is (1677 / 100000) * 100 = ~1.7% of the total number of rows in the table (clustered index leaf level). This means that Sql Server is very conservative about when to use the bookmark lookup data access pattern, although the percentage of rows depends on the row size and probably other conditions, e.g. memory pressure, parallel query execution, etc.

Tipping point & row size

As mentioned above, the number of random reads in non-clustered/key lookup access pattern depends on the size of a row. The wider the row the more random reads key lookup needs to perform to get the same result-set.

Let’s increase our test table row size from 500 bytes (493 bytes + 7 bytes overhead) to 1000 bytes. The new row size will expand the table’s footprint.

ALTER TABLE dbo.TestBookmarkLookupthreshold
	ALTER COLUMN Notes CHAR(745);
GO

ALTER INDEX NCI_SearchValue ON dbo.TestBookmarkLookupthreshold
	REBUILD;
GO
--this is why it's always better to name db constraints/indexes :)
ALTER INDEX [PK__TestBook__7AD04F11C1954CB9] ON dbo.TestBookmarkLookupthreshold
	REBUILD;
GO

The number of clustered index data pages required to store 100,000 rows has now doubled – 12502 pages to be precise (Figure 3 query). The tipping point is now expected to be in the range between (1/4) * 12502 and (1/3) * 12502, or 3125 and 4167 rows.

If we run the query from Figure 4, we’ll get the exact tipping point: 3228 rows.

-- check the tipping point TP = 3228 , Row Size 1000bytes.
DBCC TRACEON(652);
GO
  SET STATISTICS XML,IO  ON;
	SELECT *
	FROM dbo.TestBookmarkLookupthreshold
	WHERE SearchValue < 3228;
  GO
	SELECT *
	FROM dbo.TestBookmarkLookupthreshold
	WHERE SearchValue < 3229;
  GO
  SET STATISTICS XML,IO  OFF;
DBCC TRACEOFF(652);
GO


Figure 6, Query plan change (row size 1000bytes)

The interesting thing here is that now, with the wider rows, the tipping point represents (3228/100000)*100 = ~3.2% of the total number of rows in the table, which is almost double the ~1.7% calculated for the 500-byte rows.

The tipping point experiments can be put in the context of a cached plan. If we wrap our test query into a stored procedure and make the local variable @start the stored proc’s input parameter, on the very first sp call the query optimiser will create and cache the query plan using the value passed in. Subsequent calls reuse that plan regardless of the parameter values they pass, which may or may not sit on the same side of the tipping point – the classic parameter sniffing scenario covered in the next post.

Conclusion

The concept known as The Tipping Point represents the point at which the number of page reads required by the bookmark lookup operator makes a clustered index/heap table scan less expensive than the non-clustered index seek. In this context, a bookmark lookup operator (Key or RID) is coupled with a non-clustered, non-covered index – the Index Seek operator. The latter performs sequential I/O operations whereas the former performs a number of random-access I/O read operations. Random I/O reads are generally more expensive than sequential I/O reads regardless of the storage system (mechanical HDD, SSD, RAM, etc.). The query optimiser allows the bookmark lookup/index seek data access pattern only if the number of clustered index pages that need to be randomly accessed does not exceed 1/4 of the total number of clustered index data pages (leaf level). If the number of pages exceeds 1/3 of the total number of clustered index data pages, the query optimiser will choose a clustered index scan data access pattern instead. This is also true for the RID Lookup/Table scan access pattern when the table is a heap.
The range of data pages between 1/4 (25%) and 1/3 (33.3%) of the total data pages defines The Tipping Point space. In this scenario, the number of randomly accessed pages relates to the total number of selected rows. However, 25% – 33% of the pages represents only a fraction of the total number of rows – for a 500-byte row size, between 1.6% and 2%. The range also depends on the row size. For the same number of rows and with the row size set to 1000 bytes, the range increases to 3% – 4% of the total number of rows.

I wish to thank my dear colleague and great SQL Server enthusiast Jesin Jayachandran for inspiring me to write this blog.

Thanks for reading.

Dean Mincic

Conditional branching and OPTION(Recompile)

Conditional branching, OPTION(Recompile), and procedure plan


Summary

There are many cases when programmers use conditional branching in TSQL code to execute different queries or similar queries with different predicates based on a certain condition. In this post, I’ll try to explain how Query Optimizer handles queries in different conditional branches and how it relates to the option(recompile) hint and the procedure plan.

Conditional branching in stored procedures

Our TSQL code may implement logic that uses conditional branching to decide what business rule to apply. Take for example a simple, non-production process that selects all orders and their details associated with a productId. If the product is not included in any of the sales Orders, the code returns nothing or a warning message.

Create test data

The script below creates a sample table with the following ProductId data distribution.


Figure 1, ProductId data distribution 

The figure above reads as follows:
ProductId = 0 participates in 100 orders. The number of orders makes up 0.1% of all orders. The same applies to ProductIds 100, 200, 300 … 4900 – 50 different ProductIds.
ProductId = 40000 participates in 20,000 orders. The number of orders makes up 20% of all orders. The same applies to ProductIds 60000 and 80000 – 3 different ProductIds.

/*SQL Server 2019 CTP3, Compatibility level 150 */

--create a test table
DROP TABLE IF EXISTS dbo.TestBranchPlans
GO

CREATE TABLE dbo.TestBranchPlans(  
    OrderDetailId INT  IDENTITY(1,1)
       PRIMARY KEY CLUSTERED
   ,ProductId INT 
       INDEX NCI_ProductId
   ,filler CHAR(10)							
       DEFAULT('abcdefghij')							   
)
GO

--insert test data
;WITH getNumbers AS
(
    SELECT TOP(100000) 
        -- generate 100K records starting from 0
        rn = ROW_NUMBER() OVER(ORDER BY (c.[object_id])) -1 
    FROM sys.all_columns c, sys.all_columns c1

),setDistribution AS 
(
    SELECT  
      rn
     ,CASE 
        -- values 0 to 100 for the first 5000 rows. (0.1% per value)
        WHEN rn < 5000 THEN rn%100
        -- values 0 to 500 for the rows between 5000 and 10000 (0.5% per value) 
        WHEN rn < 10000 THEN rn % 500    
        -- values 0 to 10K for the rows between 10K and 50K (10% per value) 
        WHEN rn < 50000 THEN rn % 10000
        -- values 0 to 20K for the rows between 10K and 100K (20% per value) 
        ELSE rn %20000                  
      END as dstrb   	   
    FROM  getNumbers
)

INSERT INTO dbo.TestBranchPlans WITH(TABLOCKX) (ProductId)
    SELECT  
      CASE  --consolidate distributed values i.e make ProdId=200 100 times ..etc
        WHEN (dstrb = 0) THEN  rn 
        ELSE rn - dstrb 
      END as ProductId
    FROM setDistribution 
GO

The script used to check data distribution …

;WITH x AS
(
    SELECT  NoOfProducts = COUNT(*)
            ,ProductId
    FROM TestBranchPlans
    GROUP BY ProductId
)
SELECT [ProductId Distribution] = x.NoOfProducts
      ,[ProdId from] = MIN(ProductId)
      ,[ProdId To] = MAX(ProductId)
      ,NoOfDiffProducts = COUNT(x.ProductId)
      ,[Percentage] = CAST((NoOfProducts/100000.00 ) * 100.00 AS DECIMAL(3,1))
FROM x
GROUP BY NoOfProducts

Test stored procedure

CREATE OR ALTER PROCEDURE dbo.TestCodeBranching
    @ProductId INT = NULL

AS
BEGIN
    SET XACT_ABORT ON;
    SET NOCOUNT ON;
     
    IF NOT EXISTS (SELECT ProductId
                   FROM  dbo.TestBranchPlans
                   WHERE ProductId = @ProductId) 
       BEGIN 
         --RAISERROR('There are no orders for the ProductID',10,1) WITH NOWAIT;
         RETURN;
       END 		
    ELSE 
       SELECT OrderDetailId
              ,ProductId
              ,filler
       FROM dbo.TestBranchPlans
       WHERE ProductId = @ProductId
    RETURN;
END
GO

Experiment 1
Proc. plan is generated for all branch paths

The first experiment shows that QO (query optimizer) builds query plans for all code branches regardless of which one is executed on the very first sproc call. This is expected since the procedure plan( a set of query execution plans) is generated and cached without knowing which code path will be chosen.

Execute the stored proc without passing the @ProductId parameter value. The parameter is optional and will have the default value of NULL.

EXEC dbo.TestCodeBranching 
GO 100

--check the plan
SELECT   DatabaseName = DB_NAME(x.dbid)
        ,ObjectName = OBJECT_NAME(x.objectid)
        ,c.cacheobjtype
        ,c.objtype
        ,NoOfExecutions = c.usecounts
        ,x.query_plan
        ,y.text
FROM    sys.dm_exec_cached_plans c
    CROSS APPLY sys.dm_exec_query_plan(plan_handle) x -- xml plan for the entire cache
    CROSS APPLY sys.dm_exec_sql_text(c.plan_handle) y
WHERE c.objtype = 'Proc'
    AND y.[text] LIKE '%TestCodeBranching%'
GO

Notes:
GO 100 is to ensure that the plan stays in memory for some time. This is not needed for server-level environments.
The 2nd query selects the procedure’s cached plan (this is a set of estimated plans – no run-time values 🙂 ). In this example, the complete proc plan has two main query plans branched from the T-SQL (COND WITH QUERY) operator.


Figure 2, Proc plan contains query plans for all code branches

The cached procedure plan shows that the second branch query plan is optimized by using the same parameter value (ProductId = NULL).

Note: The estimated number of rows is 1. Because the initial NULL value is not represented by a RANGE_HI_KEY, SQL Server will use the AVG_RANGE_ROWS value (stats histogram) instead. If we used e.g. ProductId = 2008 instead of NULL, the estimated number of rows would be 100 – use DBCC SHOW_STATISTICS(‘dbo.TestBranchPlans’, ‘NCI_ProductId’) to observe this behavior.
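For example:

-- inspect the histogram; NULL (or any value that is not a RANGE_HI_KEY step)
-- falls back to the AVG_RANGE_ROWS estimate
DBCC SHOW_STATISTICS('dbo.TestBranchPlans', 'NCI_ProductId') WITH HISTOGRAM;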

The stored procedure call did not return any rows and the execution plan was built for @ProductId = NULL. All subsequent calls may get sub-optimal plans, as presented below.

DBCC FREEPROCCACHE
GO
EXEC dbo.TestCodeBranching -- 0 rows
GO
EXEC dbo.TestCodeBranching --0.1% 100 rows
    @ProductId = 200;
GO
EXEC dbo.TestCodeBranching  --0.5% 500 rows
    @ProductId = 5500;
GO
EXEC dbo.TestCodeBranching --10% 10000 rows
    @ProductId = 20000;
GO
EXEC dbo.TestCodeBranching --20% 20000 rows
    @ProductId = 60000;
GO

Figure 2, Plan stability problem (Parameter sniffing problem)

*Accuracy[%] = (No of Actual Rows /  No. Of estimated rows ) * 100
This feature is available in SSMS 18.x+

Accuracy = 100%  – Ideal case, The estimated number of rows was spot on
Accuracy <100% – Overestimate. The estimated number of rows is higher than the actual no. of rows
Accuracy>100% – Underestimate. The estimated number of rows is lower than the actual number of rows.

Figure 2 shows the negative effect of the cached, sub-optimal procedure plan, on the subsequent procedure calls.

Unstable procedure plan

The previous experiment showed how SQL Server builds query plans for all code paths without knowing which one will be executed. The query optimizer uses the value passed into the @ProductId parameter to create query plans for all queries in the batch that reference it. In the test, we called the stored procedure without passing @ProductId. The absence of the value instructed the code to use the parameter’s optional value, @ProductId = NULL. As a consequence, the QO used an atypical value to build and cache a sub-optimal query plan, which then had a negative impact on all subsequent procedure calls.
But the effect could be the same even if we passed any value to @ProductId.
Figure 1 shows that the values in the ProductId column are not evenly distributed (which is usually true for the production systems 🙂 ). However,  most of the ProductIds, 76% (50 out of 66 different ProductIds) return the same number of rows(100 rows). There are 15% ProductIds (10 out of 66) that return 500 rows and only 3% ProductIds (3 out of 66) that return 10,000 and 20,000 rows.

Let’s say that our procedure call pattern assumes a similar probability of passing “small”, more selective*(returns only 100 rows), and “big”, less selective(returns 20,000 rows) ProductId values.

*Selectivity represents the uniqueness of values in a column. High selectivity = high uniqueness = low number of matching values. Selectivity = (rows that pass the predicate / total rows in the table). Selectivity[ProductId=200] = 100 / 100,000 =0.001(high selectivity)  and [ProductId = 40000] = 20000/100000 = 0.2 (low selectivity)

A cached procedure plan compiled for a “small” ProductId has a negative impact on the procedure executions with a “big” ProductId and vice versa.

DBCC FREEPROCCACHE
GO
SET STATISTICS XML ON 
    EXEC dbo.TestCodeBranching
        @ProductId = 200
SET STATISTICS XML OFF 
GO 
--check the plan
SELECT   DatabaseName = DB_NAME(x.dbid)
        ,ObjectName = OBJECT_NAME(x.objectid)
        ,c.cacheobjtype
        ,c.objtype
        ,NoOfExecutions = c.usecounts
        ,x.query_plan
        ,y.text
FROM    sys.dm_exec_cached_plans c
    CROSS APPLY sys.dm_exec_query_plan(plan_handle) x -- xml plan for the entire cache
    CROSS APPLY sys.dm_exec_sql_text(c.plan_handle) y
WHERE   c.objtype = 'Proc'
       AND y.[text] LIKE '%TestCodeBranching%'
GO
SET STATISTICS XML ON 
    EXEC dbo.TestCodeBranching
        @ProductId = 40000
SET STATISTICS XML OFF


Figure 3, Cached plan for a small ProductId

We would get similar results if we passed a “big” ProductId first, and then made a procedure call passing a “small” value to the parameter, only this time the cached procedure plan would work in favor of the “big” parameter.
This situation is known as the “parameter sniffing” problem. The result is an unstable procedure plan.
One of several different ways to resolve the problem is to instruct the query processor to recompile the statement in question, on every procedure execution.

OPTION(RECOMPILE)

OPTION(RECOMPILE) is a statement-level hint that instructs the query processor to pause the batch execution, discard any cached query plan for the statement, and build a new plan – this time using the run-time values (parameters, local variables, …) and performing “constant folding” with the passed-in parameter values.

Experiment 2
Conditional branching and OPTION(RECOMPILE)

In this experiment, I’ll use OPTION(RECOMPILE) to stabilize the procedure plan. Let’s repeat the last test, but this time we instruct the query processor to recompile the statement in question

... 
  ELSE 
    SELECT OrderDetailId
           ,ProductId
           ,filler
    FROM dbo.TestBranchPlans
    WHERE ProductId = @ProductId
    OPTION(RECOMPILE)
..


Figure 4, Stable procedure plan with Option(Recompile)

Note: Using OPTION(RECOMPILE) to stabilize the plan comes at a certain cost. It adds a small overhead to the query compile time and can have some impact on the CPU. It is a good idea, if possible, to re-write/decouple the stored proc in order to prevent high variations in the procedure plans – one possible decoupling approach is sketched below.
Personally, I’ve witnessed great results with OPTION(RECOMPILE) in frequently executed stored procedures with the plan stability problem.
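One possible decoupling sketch (the sub-procedure name is illustrative, and it is still subject to sniffing its own first parameter value – decoupling only removes the “plans built for all branches” aspect):

CREATE OR ALTER PROCEDURE dbo.TestCodeBranching_GetOrders --illustrative name
    @ProductId INT
AS
BEGIN
    SET NOCOUNT ON;

    SELECT OrderDetailId
          ,ProductId
          ,filler
    FROM dbo.TestBranchPlans
    WHERE ProductId = @ProductId;
END
GO
-- ...and call EXEC dbo.TestCodeBranching_GetOrders @ProductId = @ProductId
-- from the ELSE branch of dbo.TestCodeBranching instead of the inline SELECT.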

Where is my procedure plan?

In the next test, we’ll run our stored procedure with the option(recompile), with a parameter value that does not match any existing ProductId value. This call will execute the first code branch and exit the batch.

DBCC FREEPROCCACHE
GO
    EXEC dbo.TestCodeBranching
        @ProductId = 721
GO 
--check the plan
SELECT   DatabaseName = DB_NAME(x.dbid)
        ,ObjectName = OBJECT_NAME(x.objectid)
        ,c.cacheobjtype
        ,c.objtype
        ,NoOfExecutions = c.usecounts
        ,x.query_plan
        ,y.text
FROM    sys.dm_exec_cached_plans c
    CROSS APPLY sys.dm_exec_query_plan(plan_handle) x -- xml plan for the entire cache
    CROSS APPLY sys.dm_exec_sql_text(c.plan_handle) y
WHERE c.objtype = 'Proc'
    AND y.[text] LIKE '%TestCodeBranching%'
GO


Figure 5, Incomplete procedure plan 

So, now we need to answer the question: “Why are we not getting our cached procedure plan (the cached plan is NULL)?”    🙂

Deferred Query compilation

When a client app executes a stored procedure for the first time, the query processor may decide not to create query plans for each individual query. This behavior is called Deferred Query Compilation. Some of the possible reasons are related to the conditional branching.
If the first call does not execute a code branch that contains at least one OPTION(RECOMPILE) statement (there may be more than one statement in a code branch), the query plans for that branch will not be generated and cached. This makes the procedure plan, as a set of individual query plans, incomplete.
Dynamic management function sys.dm_exec_query_plan(plan_handle) returns all individual query plans for the entire batch. If one or more query plans are not generated, the function returns NULL. This makes sense since we do not have a complete procedure plan.
The following test demonstrates this behavior.

1. Create a new test stored proc dbo.TestCodeBranching1

CREATE OR ALTER PROCEDURE dbo.TestCodeBranching1
    @ProductId INT = NULL
AS
BEGIN
    SET XACT_ABORT ON;
    SET NOCOUNT ON;
     
    IF NOT EXISTS (SELECT ProductId
                   FROM  dbo.TestBranchPlans
                   WHERE ProductId = @ProductId) 
        RETURN;
    ELSE 
    BEGIN 
        SELECT OrderDetailId
              ,ProductId
              ,filler
        FROM dbo.TestBranchPlans
        WHERE ProductId = @ProductId;

        SELECT 'hello there'
        OPTION(RECOMPILE);
/*
Test the same scenario using a temp table (replace the two queries above with the commented code
        CREATE  TABLE #TEMP(IOrderDetailId INT
              ,ProductId INT
              ,filler CHAR(10))

        INSERT INTO #TEMP 	
            SELECT OrderDetailId
                  ,ProductId
                  ,filler
            FROM dbo.TestBranchPlans
            WHERE ProductId = @ProductId
*/
          
        SELECT TOP(100) column_id, [name]
        FROM sys.columns 
        ORDER BY column_id DESC;	
    END 
    RETURN;
END
GO

2. Run the script below.

DBCC FREEPROCCACHE
GO
    EXEC dbo.TestCodeBranching1
        @ProductId = 40001
GO 

--check the plan
SELECT   DatabaseName = DB_NAME(x.dbid)
        ,ObjectName = OBJECT_NAME(x.objectid)
        ,c.cacheobjtype
        ,c.objtype
        ,NoOfExecutions = c.usecounts
        ,x.query_plan
        ,y.text
FROM    sys.dm_exec_cached_plans c
    CROSS APPLY sys.dm_exec_query_plan(plan_handle) x -- xml plan for the entire cache
    CROSS APPLY sys.dm_exec_sql_text(c.plan_handle) y
WHERE c.objtype = 'Proc'
    AND y.[text] LIKE '%TestCodeBranching1%'
GO

--EXEC sp_describe_first_result_set 
SELECT  DatabaseName  = DB_NAME(s2.[dbid])
       ,ProcName      = OBJECT_NAME(s2.objectid, s2.[dbid])
       ,sql_statement = SUBSTRING( s2.[text]
                                   ,(s1.statement_start_offset / 2) + 1 --/2 2bytes per character S2.text is of type nvarchar(max). Start position is 0byte
                                   ,( 
                                        (CASE -- check for the end of a batch
                                            WHEN s1.statement_end_offset = -1 THEN  DATALENGTH(s2.[text]) --end of the last query in the batch  -1 represents the end of a batch
                                            ELSE s1.statement_end_offset
                                         END) 
                                        - s1.statement_start_offset
                                     ) / 2 + 1
                         )
        ,query_plan = CAST(s3.query_plan AS XML) 
FROM    sys.dm_exec_query_stats AS s1
    CROSS APPLY sys.dm_exec_sql_text(s1.[sql_handle]) AS s2
        CROSS APPLY sys.dm_exec_text_query_plan(s1.plan_handle, s1.statement_start_offset, s1.statement_end_offset) s3 -- get the query plans
WHERE OBJECT_NAME(s2.objectid, s2.dbid) = 'TestCodeBranching1'
ORDER BY  s1.[sql_handle]
         ,s1.statement_start_offset
         ,s1.statement_end_offset

Results
Figure 6, Individual query plans

The sequence of events is as follows:

  • The first branch was executed during the very first stored procedure call.
  • The query processor finds an Option(recompile) statement within the second code branch and decides not to create execution plans for any of the queries in the code path.
  • Dynamic management fn, sys.dm_exec_query_plan(plan_handle) did not return the cached procedure plan because the plan was incomplete.
  • Dynamic management function sys.dm_exec_text_query_plan(plan_handle, statement_start_offset, statement_end_offset) returned all available single query plans within the batch. In our case, we have only one cached query plan. The plan belongs to the code path that was executed on the first call.

How does the sys.dm_exec_text_query_plan query work?

The query collects data from the following objects:
  • sys.dm_exec_query_stats – gets various statistical information for cached query plans. We use the sql_handle column (uniquely identifies the batch or stored procedure that the query is part of), plan_handle (uniquely identifies the query execution plan for the queries that belong to the batch), and statement_start_offset / statement_end_offset (define, in bytes, the starting and ending position of a query within the batch).
  • sys.dm_exec_sql_text – gets the text of the batch of queries identified by sql_handle. It also provides info about the proc name, database, etc.
  • sys.dm_exec_text_query_plan – returns the execution plan for a batch of statements or for specific statement(s) in the batch. statement_start_offset (0 = beginning of the batch) and statement_end_offset (-1 = end of the batch) define the start and end position of the query within the batch identified by plan_handle.

Conclusion

Conditional branching as an imperative construct in TSQL has a specific treatment by SQL Server’s query processor. A procedure batch with conditional branching may be optimized and cached for all code paths regardless of which branch is executed on the first procedure call. This may produce sub-optimal plans for a number of queries within the non-executed code branches. It is important to be aware of the behavior in order to prevent potential sub-optimal query executions.
Sql Server may choose not to compile statements (deferred query compilation) within the non-executed code paths if at least one of the queries within such a code path has the OPTION(RECOMPILE) hint – this is also true for temp tables. This will make the procedure plan (as a set of query plans) incomplete, hence the sys.dm_exec_query_plan function returns NULL for the plan handle. However, queries from the executed code branch will be cached and their query plans will be available through sys.dm_exec_text_query_plan.

Thanks for reading.

Dean Mincic

It’s time for Datetime

It’s time for Datetime


Summary

This post will try to explain the intricacies behind SQL Server’s Datetime/Datetime2 data type. The document will show the difference between the storage protocol used to store the data type and the way SQL Server interprets strings before storing them in datetime. It also mentions the visual representation of the data type by the client applications.
The differentiation between these three aspects can help us better understand and use datetime data types in everyday work.

DATETIME data type

The DATETIME data type has been around since the early days of SQL Server. It is widely used across old and new applications. In SQL Server 2008, Microsoft introduced a new set of superior data types for handling dates and times: Datetime2, Date, Time, and Datetimeoffset. Despite Microsoft’s recommendations to use the new data types, it seems to me that there are still a lot of post-SQL Server 2008 applications that use the old DATETIME.

The storage protocol

The DATETIME data type uses 8 bytes to store the date and time parts of the datetime information. The 8 bytes are used to store two 4-byte integers;

  • The first 4 bytes store the DATE part. The information is stored as an integer value that represents the number of days before(-) or after(+) January 01, 1900.
  • The last 4 bytes (from left to right) store the TIME part. The information is stored as an integer value that represents the number of 1/300-second intervals (ticks) since midnight (00:00:00.000). This is always a positive integer value.

Tips and Ticks

In computer hardware everything ticks. The CPU has its own rhythm set by the CPU cycles and other hardware components have their own pace. The internal clock in computers keeps accurate time and it’s based on specific hardware components. This article may be a good starting point for those who want to know more about timekeeping in SQL Server and in PCs in general.

The TIME part of DATETIME has a resolution of 300 ticks per second, displayed with three decimal places. The precision is not down to 1 ms (millisecond) since there are only 300 units in a second, not 1,000. To express a tick in milliseconds we can say that

1 tick = 1s / 300,
1 tick = 1000ms / 300 = 3.3ms.

So, in the case of SQL Server’s DATETIME, a tick is one 300th of a second, or ~3.3 milliseconds.

Examples

The example below extracts the date and time 4-byte integers from the data type and proves the above analysis.

DECLARE @DTime DATETIME 
       ,@DTime_HEX BINARY(8)          --datetime binary
       ,@DTime_DATEpart_HEX BINARY(4) --date part binary
       ,@DTime_TIMEpart_HEX BINARY(4) --time part binary
       ,@DTime_DATEpart_INT INT       --date part integer (no of days before or after 1900-01-01)
       ,@DTime_TIMEpart_INT INT       --number of ticks since midnight 00:00:00.000	   
       ,@DTime_TIMEpart_DEC_ms DECIMAL(38,3); --number of milliseconds since midnight; 1 tick = 3.3333333333(3) ms

SET @DTime ='20190712 08:18:00.973';---GETDATE(); --get current datetime

--split the datetime 8bytes into 4b Date and 4b Time
SET  @DTime_HEX =  CAST(@DTime AS BINARY(8)); --hex datetime

SET  @DTime_DATEpart_HEX = SUBSTRING(@DTime_HEX,1,4); --first 4 bytes (hex date)
SET  @DTime_TIMEpart_HEX = SUBSTRING(@DTime_HEX,5,4); --last 4 bytes (hex time)

SET  @DTime_DATEpart_INT = CAST(@DTime_DATEpart_HEX AS INTEGER); --first 4 bytes (integer)
SET  @DTime_TIMEpart_INT = CAST(@DTime_TIMEpart_HEX AS INTEGER); --last 4 bytes (integer)

SET @DTime_TIMEpart_DEC_ms = @DTime_TIMEpart_INT * 3.3333333333;


--analyse DATE part
SELECT [Current datetime]     = @DTime
      ,[datetime Size(Bytes)] = DATALENGTH(@DTime)
      ,[datetime Binary(hex)] = @DTime_HEX

      --date
      ,[Date_part(hex)]      = @DTime_DATEpart_HEX
      ,[Date_part(Integer)]  = @DTime_DATEpart_INT --days from 1900-01-01
      ,[DaysFrom 1900.01.01(datediff)] = DATEDIFF(DAY,'19000101',@DTime) -- confirm the difference

      --time
      ,[Time_part(hex)]     = @DTime_TIMEpart_HEX
      ,[Time_part(Integer)]  = @DTime_TIMEpart_INT --ticks from midnight 
      ,[Time_part(ms)]       = @DTime_TIMEpart_DEC_ms --milliseconds from midnight
      ,[Hours]               = FLOOR( (@DTime_TIMEpart_DEC_ms/1000/60/60) %24) --hours from time part
      ,[Minutes]             = FLOOR( (@DTime_TIMEpart_DEC_ms/1000/60) %60 )   --minutes from time part
      ,[Seconds]             = FLOOR( (@DTime_TIMEpart_DEC_ms/1000) %60 )      --seconds from time part

      ,[1/300 of a second]   = CAST ( (@DTime_TIMEpart_DEC_ms / 1000) % 1 AS DECIMAL(3,3)) --datetime's precision 1/300 of a second


Figure 1, DATETIME storage protocol

Add a day trick.

Knowing that the “first” 4 bytes represent the number of days from the base date (January 01, 1900), we can easily add/subtract (n) days just by adding integer values.

SELECT CurrentDate = GETDATE()
       ,TwoDaysAhead = GETDATE() + 2
       ,ThreeDaysAgo = GETDATE() - 3
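
A small extra illustration of my own – since the integer arithmetic is really “days” arithmetic, fractions add fractions of a day:

SELECT HalfADayLater = GETDATE() + 0.5; --adds 12 hours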

The date and time presentation

The way applications present dates may be quite different, e.g. June 26, 2019, 26.06.2019, 06-26-19, etc. However, this should be decoupled from the way SQL Server stores the data type. For DATETIME it is always 8 bytes (2 x 4-byte integers). As shown before, there is no ambiguity in how the storage engine keeps the date and time. However, very often Client applications pass strings to the database, and the strings represent date and time in a certain format. SQL Server has to convert these strings before it can store the information in the DATETIME data type.

Insert strings as date and time

Take, for example, an application that sends the date as ’12/06/50′. This may be generated by a date-time picker control and sent to SQL Server as a string.
Now, SQL Server needs to figure out how to interpret the date before storing it in a DATETIME column, variable, or parameter. The string value looks ambiguous since it can represent a few different dates: the 12th of June 1950, December the 6th 2050, the 12th of June 3050, etc.

CREATE OR ALTER PROCEDURE dbo.uspGetDate		
    @myDate DATETIME
AS
BEGIN
    SET XACT_ABORT ON;
    SET NOCOUNT ON;

    SELECT @myDate;

    RETURN;
END

Let’s call the stored proc passing the date parameter: EXEC dbo.uspGetDate @myDate = ’12/06/50′
My SSMS returns 1950-12-06 00:00:00.000.  Again, I am not sure if 12 represents December or the 12th day of June. Why does 50 happen to be 1950 and not, say, 2050? Why does the time part show midnight? The next section will try to answer these questions.

  1. Midnight. If we skip the time part of the DATETIME couple, the second 4-byte integer gets the value of 0 – 00:00:00.000
    DECLARE @DTimeTest DATETIME;
    
    SET @DTimeTest = '20190709'--no time part
    SELECT @DTimeTest , [Date]=SUBSTRING(CAST(@DTimeTest AS BINARY(8)),1,4),[Time]=SUBSTRING(CAST(@DTimeTest AS BINARY(8)),5,4)
  2. If we insert only the time part of the DATETIME couple, the first 4-byte integer gets the value of 0, representing the starting point for counting the number of days – January 01, 1900
    DECLARE @DTimeTest DATETIME;
    
    SET @DTimeTest = '12:23:34.456' -- no date part
    SELECT @DTimeTest , [Date]=SUBSTRING(CAST(@DTimeTest AS BINARY(8)),1,4),[Time]=SUBSTRING(CAST(@DTimeTest AS BINARY(8)),5,4)
  3. What’s a month and what’s a day?  The way SQL Server interprets a string value when converting it to DATETIME is to use a pre-defined order of date elements: dmy, mdy, etc. This sequence (DATEFORMAT) tells SQL Server which positions in the string represent the month, the day, and the year. DATEFORMAT depends on the default language used on the connection level – the language defines the sequence of date elements. On the instance(server) level, the default language is set to us_english.
    --Default Sql Server language
    SELECT * FROM sys.syslanguages WHERE [alias]='English'
    
    --read the current session language settings
    DBCC USEROPTIONS;
    
    --capture the Login event during the application login

    From the queries above, we can learn that the DATEFORMAT SQL Server expects when trying to convert ’12/06/50′ into a DATETIME is mdy (month-day-year). Following this rule, SQL Server converts ’12/06′ to the 6th of December. We can change the mdy sequence in six different ways (the six permutations of the three letters 🙂 ) using the DATEFORMAT set option.

    SET DATEFORMAT DMY;
    GO
    EXECUTE dbo.uspGetDate @myDate = '12/06/50'

    Now, SSMS returns 1950-06-12 00:00:00.000 – the 12th of June. We can also use the names of the months – see BOL for more examples.

    EXECUTE dbo.uspGetDate @myDate = '12/06/50';
    EXECUTE dbo.uspGetDate @myDate = '06 December 50';
    EXECUTE dbo.uspGetDate @myDate = 'December 06 50';
    EXECUTE dbo.uspGetDate @myDate = 'Dec 06 50';
    ... etc
  4. Why is 50 interpreted as 1950? There is a setting called “Two-digit Year Cutoff” and it has the default value of 2049. The setting is available on the Server level (Advanced settings) and, for contained databases only, on the database level.
    --minimum 1753, max 9999
    EXECUTE sys.sp_configure 
            @configname = 'two digit year cutoff' 
            ,@configvalue = 2049;
    RECONFIGURE;
    
    --only contained databases
    ALTER DATABASE MyContainedDB
        SET TWO_DIGIT_YEAR_CUTOFF =2050;
    GO

    The setting defines how SQL Server interprets a two-digit year.
    – A two-digit year that is less than or equal to the last two digits of the cutoff year gets the same first two digits as the cutoff year.
    – A two-digit year that is greater than the last two digits of the cutoff year gets first two digits that are one less than the cutoff year’s first two digits.

    In our case, the two-digit year is 50. 50 is greater than 49, the last two digits of the cutoff year 2049. Following the rule, SQL Server subtracts 1 from 20 (20 – 1 = 19) to get the first two digits of the year. That’s why our date has the year 1950 (see the quick check below).
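
    A quick check of the rule (my own example), assuming the default cutoff of 2049 and the default us_english DATEFORMAT (mdy):

    SELECT [Year 49] = CAST('01/01/49' AS DATETIME)  --2049-01-01 (49 <= 49)
          ,[Year 50] = CAST('01/01/50' AS DATETIME); --1950-01-01 (50 >  49)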

ISO Standard

As mentioned above, converting strings to the DATETIME data type can be ambiguous and may lead to logical errors. To avoid confusion, we can use a string format that follows the ISO standard. The format ALWAYS represents date and time as YYYYMMDD hh:mm:ss.nnn, regardless of the DATEFORMAT and/or LANGUAGE settings.

DECLARE @isoDTime DATETIME;
SET @isoDTime ='19501206 13:30:55.123';

SELECT @isoDTime;
--The date will always be the 6th of December 1950

Round my Ticks

Another interesting thing about DATETIME is that the precision is 1/300 of a second. The rounding happens on the third decimal place of the seconds (00:00:00.00x), which is ALWAYS 0, 3, or 7 :).
So, let’s test this behavior.

DECLARE @iWantMyPrecisionBack DATETIME; 
SET @iWantMyPrecisionBack ='19710626 07:30:55.995';
SELECT @iWantMyPrecisionBack;

The stored DATETIME value is rounded to 1971-06-26 07:30:55.997.

The script below shows the rounding in a visual manner.

;WITH getNums AS --generate 100 numbers
(
    SELECT TOP(100)
           rn = ROW_NUMBER() OVER(ORDER BY (SELECT NULL))
    FROM sys.columns, sys.columns c
    ORDER BY (SELECT NULL)
), getLZrs AS --generate the expected results (date and time represented as a string)
(
     SELECT TOP(100)
        rnZ = IIF(rn<10,'00',IIF(rn<100,'0',''))+ cast (rn AS VARCHAR(3))
     FROM getNums
     ORDER BY rnZ
)

SELECT  DateTime_Expected =  '20191206 13:30:55.' +rnZ
       ,DateTime_Rounded = CONVERT(DATETIME,'20191206 13:30:55.' +rnZ) --rounding    
FROM getLZrs


Figure 2, DATETIME rounding

Minimum date quirk

And yes, there is another unusual thing about the datetime data type. We learned that the starting point for storing the DATE part of DATETIME is 01 January 1900, or 19000101 in the ISO standard format. So, if we want to store a date before the starting point, say 01 January 1753, the storage engine stores a negative number of days from 01 January 1900.

DECLARE @minDate DATETIME =  '17530101'; --'17521231'

SELECT Date = @minDate
      ,Date_hex = SUBSTRING(CAST(@minDate AS BINARY(8)),1,4) -- 4byte date part
      ,Date_int = CAST(SUBSTRING(CAST(@minDate AS BINARY(8)),1,4) AS INTEGER) --no of days integer
      ,Check_diff = DATEDIFF(DAY,'19000101','17530101') --just checking

The output shows -53690 days from the start date. The number matches the DATEDIFF output. Now, if we try to store just a day earlier, ’31 December 1752′, the conversion fails with
Msg 242, Level 16, State 3, Line 184
The conversion of a varchar data type to a datetime data type resulted in an out-of-range value.

Hmm, we are definitely not running out of 4-byte integers, so why 1753?
The reason seems to be more of a historical nature. On the 2nd of September 1752, Great Britain replaced the Julian with the Gregorian calendar, which at that point differed from the original by 11 days. So, the “next” day was the 14th of September :). In order to avoid confusion when calculating the number of days between a date before 02.09.1752 and a date after, Sybase engineers decided not to allow dates before 1753. Simple as that 🙂

Datetime presentation

SQL Server can present information stored as the datetime data type as a string in many different ways. The formatted strings that represent dates are usually sent to Client applications, e.g. for reporting purposes. This should be completely decoupled from the storage protocol mentioned before. For presentation purposes, SQL Server provides the CONVERT function. The function’s third, optional parameter defines the output format of the string that represents the converted DATETIME. The script below shows the idea behind the formatted strings.

DECLARE @mydate DATETIME= '20190710 09:30:55.123' --iso standard used to insert date into a variable

SELECT  [SSMS - style=121] = @mydate
       ,[CONVERT date style=109] = CONVERT(VARCHAR(30),@mydate,109)
       ,[CONVERT date style=108] = CONVERT(VARCHAR(30),@mydate,108) --only time

I found the following script very useful for a quick overview of the different formatting styles.

DECLARE @dt DATETIME = GETDATE() 
;WITH getStyleNo AS
(
    SELECT TOP(200)
          StyleId     = ROW_NUMBER() OVER(ORDER BY (SELECT NULL))
        ,[DateFormat] = TRY_CONVERT(NVARCHAR(40),@dt,ROW_NUMBER() OVER (ORDER BY (SELECT NULL)))
    FROM sys.columns,sys.columns c
    ORDER BY (SELECT NULL)
)
SELECT *
FROM getStyleNo
WHERE getStyleNo.[DateFormat] IS NOT NULL

The full list of the formatting style codes can be found here.

DATETIME2 data type

It would be impossible to conclude this blog without mentioning DATETIME’s younger brother, DATETIME2.
The new data type was introduced in SQL Server 2008 as a part of a new set of date/time data types and functions.  After the release, Microsoft strongly recommended new date and time data types:
“Use the time, date, datetime2, and datetimeoffset data types for new work. These types align with the SQL Standard. They are more portable. time, datetime2, and datetimeoffset provide more seconds precision. datetimeoffset provides time zone support for globally deployed applications.”

My personal opinion is that programmers, now over a decade since datetime2 was first introduced,  still prefer DATETIME. Not sure why, but this is something I see in the industry regardless of the size and/or complexity of the SQL applications.

The main functional advantages of datetime2 over DATETIME are:

  • higher precision: 100 nanoseconds (1 ns = 10^-9 s, 100 ns = 10^-7 s)
  • larger date range: from 01 January 0001 00:00:00.0000000 to 31 December 9999 23:59:59.9999999
  • ANSI SQL compliant
  • optional user-specified precision (0-7), e.g. DATETIME2(3) gives a precision of 1/1000 of a second – see the quick example below.
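
A quick sketch of my own showing the wider range and the user-specified precision (the example values are arbitrary):

DECLARE @dt2_min DATETIME2(3) = '0001-01-01 00:00:00.001'      --1 ms precision, minimum year 0001
       ,@dt2_max DATETIME2(7) = '9999-12-31 23:59:59.9999999'; --100 ns precision, maximum value

SELECT [datetime2(3)] = @dt2_min
      ,[datetime2(7)] = @dt2_max;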

Datetime2 ticks better

The precision of DATETIME2 is 100 ns (nanoseconds), or 10^-7 s (1/10,000,000 of a second), hence the 7 decimal places in the TIME part of the data type.
The following test shows the storage protocol used for the datetime2 data type.

DECLARE @DTime DATETIME2
       ,@DTime_HEX BINARY(9)          --datetime binary
       ,@DTime_DATEpart_HEX BINARY(3) --date part binary
       ,@DTime_TIMEpart_HEX BINARY(5) --time part binary
       ,@DTime_DATEpart_HEX_rev BINARY(3) --reversed binary date part
       ,@DTime_TIMEpart_HEX_rev BINARY(5) --reversed binary time part
       
       ,@DTime_DATEpart_INT INT       --date part integer (no of days before or after 1900-01-01)
       ,@DTime_TIMEpart_INT BIGINT    --number of ticks since midnight 00:00:00.0000000	   
       ,@DTime_TIMEpart_DEC_ns BIGINT --number of nanoseconds since midnight; 1 tick = 100ns
       ,@Precision BINARY(1);

    SET @DTime ='2019-07-16 11:02:06.7695434' --SYSDATETIME(); --get current datetime

    --split the datetime (8+1) bytes into the 1-byte precision, 3-byte date and 5-byte time parts
    SET  @DTime_HEX =  CAST(@DTime AS BINARY(9)); --hex datetime

    --precision (1st byte)
    SET  @Precision = SUBSTRING(@DTime_HEX,1,1); -- first byte is precision 0,1,2...7

    --time (bytes 2-6 reversed)
    SET @DTime_TIMEpart_HEX     = SUBSTRING(@DTime_HEX,2,5)
    SET @DTime_TIMEpart_HEX_rev = CAST(REVERSE(@DTime_TIMEpart_HEX) AS BINARY(5))
    SET @DTime_TIMEpart_INT     = CAST(@DTime_TIMEpart_HEX_rev AS BIGINT) --number of ticks

    --date (bytes 7-9 reversed)
    SET  @DTime_DATEpart_HEX     = SUBSTRING(@DTime_HEX,7,3)
    SET  @DTime_DATEpart_HEX_rev = CAST(REVERSE(@DTime_DATEpart_HEX) AS BINARY(3))
    SET  @DTime_DATEpart_INT     = CAST(@DTime_DATEpart_HEX_rev AS INTEGER)

    --convert ticks into nano-seconds
    SET @DTime_TIMEpart_DEC_ns = @DTime_TIMEpart_INT * 100  --@DTime_TIMEpart_INT  --No of ticks since midnight. 1tick =100ns

    --analyse DATE part
    SELECT [Current datetime]     = @DTime
          ,[datetime Size(Bytes)] = DATALENGTH(@DTime) -- +1b for the precision		
          ,[datetime Binary(hex)] = @DTime_HEX

          ,[Precision] = @Precision
      
          --date
          ,[Date_part(hex)]          = @DTime_DATEpart_HEX 
          ,[Date_part(hex) reversed] = @DTime_DATEpart_HEX_rev
          ,[Date_part(Integer)]      = @DTime_DATEpart_INT  --days from January 01 0001
          ,[DaysFrom 00001.01.01(datediff)] = DATEDIFF(DAY,'00010101',@DTime) -- confirm the difference

    SELECT 
          --time
           [Time_part(hex)]          = @DTime_TIMEpart_HEX
          ,[Time_part(hex)_reversed] = @DTime_TIMEpart_HEX_rev
          ,[Time_part(Integer/ticks)]= @DTime_TIMEpart_INT --ns from midnight
          ,[Time_part(ns)]           = @DTime_TIMEpart_DEC_ns --ns from midnight
          ,[Hours]                   = (@DTime_TIMEpart_DEC_ns /1000000000/60/60 ) % 24 --/1000000000 gets seconds from ns; 1s = 10^9 ns
          ,[Minutes]                 = (@DTime_TIMEpart_DEC_ns/1000000000/60) %60       --minutes from time part
          ,[Seconds]                 = (@DTime_TIMEpart_DEC_ns/1000000000) %60          --seconds from time part
          ,[Precision(100ns)]        = CAST((@DTime_TIMEpart_DEC_ns/1000000000.0000000) % 1 AS DECIMAL(8,7)) --datetime's precision 100ns - 7 decimal places

GO


Figure 3, datetime2 storage protocol

There are a few interesting things in the way the SQL Server storage engine deals with datetime2.

  • The maximum length of the data type is 8 bytes, SELECT dt2_maxSize = DATALENGTH(SYSDATETIME()). The size required to store datetime2 in a binary data type is 9 bytes. The one extra byte is used to store the precision*.
  • Bytes are stored in reversed order – more details on the famous argument between Lilliputians can be found in an excellent post here 🙂
  • The date part is stored in the “last” 3bytes.
  • The storage size required to store the Time part may vary between 3 – 5 bytes depending on the requested precision.

*NOTE: DATETIME2(n) is a fixed-length data type. Depending on the scale (n), the SQL Server storage engine uses 6-8 bytes to physically store the information on the data page. The extra byte is not needed there since the precision cannot vary across different rows.

Configurable precision

The younger brother gives us the ability to control the data type’s precision. As mentioned above, the maximum precision is 100 ns, or 7 decimal places in the time part. If not specified, DATETIME2 uses the maximum precision.
The query below shows different precisions of the same datetime2 value.

DECLARE @tsql NVARCHAR(MAX);

SET @tsql = N'DECLARE @dtime2 DATETIME2 = SYSDATETIME(); ' + CHAR(13);

;WITH precisionNums AS 
(
    SELECT TOP(8) n= ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) - 1
    FROM sys.columns c
)
SELECT TOP 100 PERCENT @tsql +=N'SELECT [Precision] = N''DATETIME2('+CAST(n AS NCHAR(1))+')'', [value] = CAST(@dtime2 AS DATETIME2('+CAST(n AS NCHAR(1))+N')), [DATALENGTH]=DATALENGTH(CAST(@dtime2 AS DATETIME2('+CAST(n AS NCHAR(1))+N')))'
                             +CHAR(13)+ IIF(n=0,N'',N'UNION ALL'+CHAR(13))
FROM precisionNums
ORDER BY n DESC

--PRINT @tsql
EXEC(@tsql)
Precision    value                       DATALENGTH
------------ --------------------------- -----------
DATETIME2(7) 2019-07-17 11:23:17.1738815 8
DATETIME2(6) 2019-07-17 11:23:17.1738820 8
DATETIME2(5) 2019-07-17 11:23:17.1738800 8
DATETIME2(4) 2019-07-17 11:23:17.1739000 7
DATETIME2(3) 2019-07-17 11:23:17.1740000 7
DATETIME2(2) 2019-07-17 11:23:17.1700000 6
DATETIME2(1) 2019-07-17 11:23:17.2000000 6
DATETIME2(0) 2019-07-17 11:23:17.0000000 6

Datetime2(3) is “more precise” than DATETIME since it provides 1 millisecond (1/1000 of a second) precision rather than DATETIME’s 1/300 of a second precision.

The add-a-day trick does not work with DATETIME2.

DECLARE @dt2 DATETIME2 = SYSDATETIME();
SELECT CurrentDate = @dt2+ 1;

Msg 206, Level 16, State 2, Line 106
Operand type clash: datetime2 is incompatible with int
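
A workaround sketch of mine – DATEADD works for both data types:

DECLARE @dt2 DATETIME2 = SYSDATETIME();
SELECT Tomorrow = DATEADD(DAY, 1, @dt2);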

Minimum date quirk or not?

Sybase engineers decided not to “mess” with the Julian to Gregorian calendar switch of 02.09.1752 (just for the record, 09 is September 🙂 ), so they limited DATETIME with a minimum date and set the value to 01.01.1753.
Datetime2 has no similar limitation and allows dates starting from the “beginning of time”, 01.01.0001. Now, we can ask a philosophical question: “How many days are there between 01.09.1752 (midnight) and 03.09.1752 (midnight)?”

SELECT DATEDIFF(DAY,CAST('17520901' AS DATETIME),CAST('17520903' AS DATETIME))
SELECT DATEDIFF(DAY,'17520901','17520903') --DATEDIFF implicitly casts string literals as a datetime2 type

So, what is the best result: Conversion failure, 2 days or 14 days?

Conclusion

The DATETIME data type is widely used across many SQL Server applications. It has its quirks and features, as well as a modern and more powerful successor – datetime2. Both data types unambiguously store date and time. This should be decoupled from the way our programs present these values. The presentation is always in the form of a formatted string, e.g. ’01 September 05′, ‘June 01, 1998’, etc. This is how SSMS or other Client applications present date and time. Passing string values that represent dates into DATETIME data types may be unclear since SQL Server needs to figure out what comes first: day, month, or year. Different Client applications may use different components to retrieve date/time information passed through the UI. Those components may format dates in different ways, so we need to be sure that our database understands the formats properly, e.g. 01.11.19 should be the 11th of January 2019, not, say, the 1st of November 1819 🙂

Thanks for reading.

Dean Mincic

Data providers and User Options


Summary

Some time ago I decided to write a quick post about ANSI_WARNINGS, one of SQL Server’s user options. Halfway through, I discovered many interesting things around the mechanisms that set the Client connection/session user settings. That was definitely more fun to research and blog about 🙂
SQL Server’s configuration settings include several settings called User options. Those options, along with some other settings, define the user connection environment related to query processing. The options define e.g. how queries handle 3VL (Three-Valued Logic – ANSI_NULLS) or how they enforce atomicity of explicit transactions (XACT_ABORT), etc.
There are a few levels where the options can be set: Server, Database, and session level. The values can also be independently set by the SQL Server engine and the data providers after the Client connection has been successfully established. Different DB tools like SSMS may have their own settings on top of the previously mentioned ones. This blog aims to shed some light on the processes that change the connection/session user option settings.

SQL Server Configuration Settings overview

Sql Server’s configuration settings can be set on three different levels.

  1. Server configuration settings
  2. Database configuration settings
    2.1 Connection settings set by Sql Server/ Data providers during the Database Login process/after the successfully established Client connection.
  3. SQL Server session(connection) configuration settings (SET statements on a session level)

For most of the settings, the values can be set on more than one level. The overlap introduces precedence: values set on the lower levels override those set on the higher levels, e.g. a QUOTED_IDENTIFIER setting value defined on the session level (3) overrides the same setting value on the database (2) and/or Server level (1).
Sometimes the concept may be confusing – and this is only my personal feeling – since there are a number of different ways to assign values to the same settings, and on a few different levels.

Some of the settings can only be set on certain levels, e.g. Max worker threads (configures the number of worker threads that are available to SQL Server processes) can be set only on the server (instance) level, while AUTO_CREATE_STATISTICS (if not already available, the Query Optimizer creates statistics on individual columns used in a predicate) can only be set on the database level* – see the sketch below.
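
A minimal sketch of the two levels (illustrative values only; TestDB is the sample database used later in this post):

--instance level: max worker threads is an advanced option
EXEC sys.sp_configure @configname = 'show advanced options', @configvalue = 1;
RECONFIGURE;
EXEC sys.sp_configure @configname = 'max worker threads', @configvalue = 0; --0 = let SQL Server decide
RECONFIGURE;

--database level
ALTER DATABASE TestDB
    SET AUTO_CREATE_STATISTICS ON;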

NOTE: All database settings are inherited from the model db during the database creation.

As mentioned, this post will be focusing only on a sub-set of the server settings, the User Options settings.

User Options

Figure 1 below shows the categories of the user options and their values. The bit settings are presented as decimal numbers – this will be explained later in the blog.


Figure 1, User Options

User Options can be set up on three different levels; Server, Database and Session level.

Server level

Server level defines User Option settings for all Client connections. User options on this level can be managed in a few different ways.

  • sys.sp_configure , system stored procedure
EXEC sys.sp_configure 
    @configname = 'user_options'


Figure 2, Default User Option(s)

The system stored procedure manages server configuration settings.
@configname is an option name or, in our case, the name of a group of options. config_value is a combination of the values presented in Figure 1. The default config_value is 0, representing the default state of the user settings – all set to OFF.
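
The currently configured and currently active values can also be read from sys.configurations (a quick check of my own):

SELECT [name], [value], value_in_use
FROM sys.configurations
WHERE [name] = N'user options';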

More about config_value

The config_value is a small, positive integer. This means that the maximum value that can be used is 32768 – 1 = 32767, or 2^15 – 1. One bit is used for the integer sign.

The config_value is also a 2-byte (16-bit) bitmap used to store the user setting values. The first 15 bits are used to store the values (although the 1st one – DISABLE_DEF_CNST_CHK – has been discontinued since SQL Server 2012).

Let’s say we want to turn ON the following user settings:

ANSI_WARNINGS  (decimal value 8)
ARITHABORT (decimal value 64) and
CONCAT_NULL_YIELDS_NULL (decimal value 4096)

Figure 3 is a graphical representation of the 2-byte bitmap and the corresponding binary and decimal values. The last bit (binary 2^15, decimal 32768) is not in use.

Figure 3, User Options – config_value

To turn ON the corresponding bits, we just add the decimal values: 8 + 64 + 4096 = 4168 (or use bitwise OR, i.e. SELECT (8 | 64 | 4096)). We then pass the value to the @configvalue parameter of the sys.sp_configure system stored procedure.

EXEC sys.sp_configure @configname = 'user_options'
                     ,@configvalue = 4168
RECONFIGURE;

This means that all subsequent client connections may have these settings turned ON. More on this in the following sections.

  • SSMS GUI, Server Settings/Properties/Connections

Another way to manage User Options on the server level is by using SSMS tool. Figure 4 shows the user_options accessed through the UI.


Figure 4, User Options – SSMS

The default state of the user option values on the Server level  is all OFF.

Database level

Similar to the Server level user options, the database level user options define the user option values for all the Clients that connect to the particular database. The database settings are supposed to override the same settings set on the higher, server level.
Database level user options, as well as server level user options, are by default all set to OFF.
NOTE: The database level user options become more interesting in the context of contained SQL Server (2012+) databases or their Cloud counterpart, the Azure SQL Database.

To change user options on the database level use ALTER DATABASE statement.

ALTER DATABASE TestDB
 SET QUOTED_IDENTIFIER ON
     ,NUMERIC_ROUNDABORT ON 
GO

To check the current user option values we can use system functions or system views.

SELECT DATABASEPROPERTYEX('TestDB','IsNumericRoundAbortEnabled')

--OR

SELECT [name]
      ,is_ansi_warnings_on
      ,is_ansi_padding_on
      ,is_ansi_nulls_on
      ,is_ansi_null_default_on
      ,is_arithabort_on
      ,is_numeric_roundabort_on
      ,is_quoted_identifier_on
      ,is_cursor_close_on_commit_on
      ,is_concat_null_yields_null_on
FROM sys.databases 
WHERE [name] = N'TestDB'

Figure 5 shows a sub-set of the User Options available on the database level.

Database user options settings
Figure 5, Database level user options

The following user options are available on the server level but not on the database level.
– ARITHIGNORE
– NOCOUNT
– XACT_ABORT
– IMPLICIT_TRANSACTIONS

ANSI NULL Default (ANSI_NULL_DEFAULT) represents the Server level  ANSI_NULL_DFLT_ON and ANSI_NULL_DFLT_OFF.

By default, server level user option settings should be overridden by the corresponding database level settings. However, for most of the options this does not matter in practice: the data providers responsible for establishing connections between client applications and the database will override the settings defined on both levels.

Connection settings set by Sql Server/ Data provider

I was not able to find many white papers that cover the SQL Server data providers’ role in configuring user options. The following conclusions are based on my personal research and experiments.

For a start, let’s set up a simple Extended Events session that reports on the following events (a sketch of such a session follows the list);

  • Login event. The main event field of interest will be the options_text field.
  • sql_batch_starting and sql_batch_completed – sql_text field
  • sql_statement_starting and sql_statement_completed – sql_text field

The last two events will be used in the subsequent experiments.
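
A minimal sketch of such a session (my own script; the session name is mine, and I assume the login event’s collect_options_text customizable column is available to expose the options_text field):

CREATE EVENT SESSION TraceUserOptions ON SERVER
    ADD EVENT sqlserver.login( SET collect_options_text = (1)
        ACTION (sqlserver.client_app_name, sqlserver.session_id) ),
    ADD EVENT sqlserver.sql_batch_starting     ( ACTION (sqlserver.sql_text) ),
    ADD EVENT sqlserver.sql_batch_completed    ( ACTION (sqlserver.sql_text) ),
    ADD EVENT sqlserver.sql_statement_starting ( ACTION (sqlserver.sql_text) ),
    ADD EVENT sqlserver.sql_statement_completed( ACTION (sqlserver.sql_text) )
    ADD TARGET package0.ring_buffer
WITH (STARTUP_STATE = OFF);
GO
ALTER EVENT SESSION TraceUserOptions ON SERVER STATE = START;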


Figure 6, Login event

Now, let’s connect to a test database using a simple powershell script and observe the Login event attributes

## Create a simple connection to Sql Server using .NET Framework Data Provider for Sql Server

$sqlConn = New-Object System.Data.SqlClient.SqlConnection;

$sqlConn.ConnectionString = @('Data Source=tcp:(local);' `
                             +'Initial Catalog=TestDB;' `
                             +'Integrated Security=True;' `
                             +'Pooling=False;' `
                             +'Application Name=TestDataProviderConnSettings;');

## Open and Close the connection. 
$sqlConn.Open(); ##This is where we capture the Login event
$sqlConn.Close();   
$sqlConn.Dispose();


Figure 7, options_text field value

By analysing the output we can observe a couple of interesting things.

  • The Server/Database user settings have been overwritten by the new values i.e ANSI_PADDING is originally set to OFF on both levels and then turned ON during the SqlConnection.Open() process.
  • Some other connection settings are set i.e dateformat, transaction isolation level ..etc

In addition, I compared the user option values set by different  data providers;

  • .NET Framework Data Provider for Sql Server/ODBC/OLEDB (pshell script)
  •  ODBC v3 and v7 (Python and sqlCmd)
  •  Microsoft JDBC Driver 7.0 for SQL Server (java app)

The results were always exactly the same – the identical set of user option values.

Now, the question I was trying to answer was: what process set the values presented in Figure 7? The options_text event field description says:
“Occurs when a successful connection is made to the Server. This event is fired for new connection or when connections are reused from a connection pool”
This may suggest that the data providers are responsible for setting the option values at some stage during the login process.
To investigate further, I ran a PowerShell script similar to the one used in the previous test. This time I used the ODBC Data Source Administrator (for the 32-bit environment) to trace the ODBC communication (function calls) between the Client app and SQL Server.

## Create a simple connection to Sql Server using .NET Framework Data Provider for ODBC
$sqlConn = New-Object System.Data.Odbc.OdbcConnection;
$sqlConn.ConnectionString =  @('Driver={SQL Server Native Client 11.0};' `
                             +'Server=tcp:(local);' `
                             +'Database=TestDB;' `
                             +'Trusted_Connection=yes;');
##Note: Connection pooling is disabled for all ODBC connections
##      see odbcad32.exe (or odbcad64.exe) Connection Pooling Tab


## Open and Close the connection
$sqlConn.Open();
$sqlConn.Close();   
$sqlConn.Dispose();


Figure 8, ODBC Administrator – Trace ODBC function calls

The trace did not show any ODBC function calls that set up the user options captured by the Login event. It was a “nice and clean” log.
In the next experiment, I used a simple sqlcmd.exe script to connect to the same database. The sqlcmd utility (at least ver. 14.0.1, the one I have on my laptop) uses ODBC Driver 13 for SQL Server. This is visible in the ODBC trace file – Figure 10.

C:\Users>sqlcmd -S tcp:(local) -d TestDB

Again, no “interesting” function calls during the login phase. However, there were two function calls (SQLSetStmtAttrW) that took place after the connection was successfully established.

Figure 9, Post Login events

Figure 10, ODBC Trace snapshot  – SQLSetStmtAttrW() function

The log shows that there were two sets of changes on the user options  initiated by data providers;

1. Pre-login/During the login process (captured by the Login event)
2. Post-login changes (captured by sql_batch/sql_statement events)

Post-login changes

In this experiment I’ve tried to consolidate and document the post-login user option values set by different providers. The idea is to execute a simple program using different data providers that will:

  1. connect to the test database
  2. execute a view that selects the user option values for the current session.
  3. output the results of the view
    i. e C:\Users>sqlcmd -S tcp:(local) -d TestDB -q “select * from dbo.vwGetSessionSETOptions”

The view definition:

CREATE OR ALTER VIEW [dbo].[vwGetSessionSETOptions]
AS 
	SELECT TAB.OptionName
		  ,Setting = IIF(TAB.Setting = 0,'OFF','ON')
	FROM (VALUES ('DISABLE_DEF_CNST_CHK'    ,(1 & @@OPTIONS))	   --not in use
				,('ANSI_WARNINGS'           ,(8 & @@OPTIONS))	   --iso settings statements
				,('ANSI_PADDING'            ,(16 & @@OPTIONS))
				,('ANSI_NULLS'              ,(32 & @@OPTIONS))
				,('ANSI_NULL_DFLT_ON'       ,(1024 & @@OPTIONS))
				,('ANSI_NULL_DFLT_OFF'      ,(2048 & @@OPTIONS))
				,('ARITHABORT'              ,(64 & @@OPTIONS))	   --query execution statements
				,('ARITHIGNORE'             ,(128 & @@OPTIONS))
				,('NUMERIC_ROUNDABORT'      ,(8192 & @@OPTIONS))
				,('NOCOUNT'                 ,(512 & @@OPTIONS))
				,('QUOTED_IDENTIFIER'       ,(256 & @@OPTIONS))	   --miscellaneous statements
				,('CURSOR_CLOSE_ON_COMMIT'  ,(4 & @@OPTIONS))
				,('CONCAT_NULL_YIELDS_NULL' ,(4096 & @@OPTIONS))
				,('XACT_ABORT'              ,(16384 & @@OPTIONS))  --transaction statements
				,('IMPLICIT_TRANSACTIONS'   ,(2 & @@OPTIONS))				
	) TAB(OptionName,Setting)
UNION ALL
	SELECT '-------','-------'
UNION ALL 
	SELECT  'PROGRAM_NAME: ' +[program_name] + '** CLIENT_INTERFACE_NAME : '+client_interface_name 
	       ,'CLIENT_VERSION: ' +CAST(client_version as varchar(3))
		FROM sys.dm_exec_sessions
		WHERE session_id = @@SPID
GO

The output shows the user options that would be applied to any batch, stored proc, function, or ad-hoc query executed within the active connection (or, from SQL Server’s perspective, the @@SPID session). The view also includes the data provider’s details available through the sys.dm_exec_sessions dynamic management view.

Figure 11, Post-login user options values

From the result (the figure above may be overkill 🙂 ) we can conclude that data providers can change some of the user settings, and that can produce unexpected results. Consider the scenarios below;

  • If we execute a Python script that e.g. inserts some rows into a table…
import pyodbc;
#import pytds 
conn = pyodbc.connect('Driver={SQL Server};'
                      'Server=tcp:localhost;'
                      'Database=TestDB;'
                      'Trusted_Connection=yes;'
                      'Pooling=False')

cursor = conn.cursor()
cursor.execute('INSERT INTO dbo.testImplicitTran(Id) SELECT 511')

… the INSERT will NOT be committed since the data provider has set IMPLICIT_TRANSACTIONS to ON (and the script never commits the implicitly opened transaction). To see how SQL Server implements the post-login change, run the script along with the Extended Events session – Figure 9.

  • If we execute an sqlCmd query to i.e insert some values in a table that has a filtered index, we’ll get the following error.
C:\Users>sqlcmd -S tcp:(local) -d TestDB -q "INSERT INTO dbo.testQIdentifierOFF SELECT 555"

Msg 1934, Level 16, State 1, Server ENETT-NB290, Line 1 INSERT failed because the following SET options have incorrect settings: ‘QUOTED_IDENTIFIER’. Verify that SET options are correct for use with indexed views and/or indexes on computed columns and/or filtered indexes and/or query notifications and/or XML data type methods and/or spatial index operations.

Other data providers may change other settings; e.g. the dbLib data provider sets the ANSI_WARNINGS user option to OFF. This may cause logical errors that are difficult to debug.
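
A quick sketch of my own showing the kind of silent problem ANSI_WARNINGS OFF can introduce – the string is truncated without an error being raised:

SET ANSI_WARNINGS OFF;

DECLARE @t TABLE (val VARCHAR(5));
INSERT INTO @t (val) VALUES ('1234567890'); --silently truncated to '12345', no error raised
SELECT val FROM @t;

SET ANSI_WARNINGS ON;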

SSMS post login changes

SSMS application performs its own set of post-login changes. The changes can be managed through the application UI on two levels;

  • Current session level (query editor) – the changes affect only the current session.
    • Right-click on the query editor area / Query Options / Execution – ANSI
  • Application level – the changes affect all sessions.
    • Tools / Options / Query Execution / SQL Server / ANSI

Developers should be aware of the fact that the same query executed from SSMS and from a Client application may behave differently and produce different results, e.g. the INSERT query executed through the Python script above would be committed if executed in SSMS.
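
A quick way (my own sketch) to see which settings the current session actually runs with, regardless of the tool or data provider that opened the connection, is to query sys.dm_exec_sessions:

SELECT ansi_nulls, ansi_padding, ansi_warnings, arithabort,
       quoted_identifier, concat_null_yields_null, transaction_isolation_level
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID;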

Pre-login changes

As mentioned before (Figure 7), there are nine user settings set by the data providers (or the SQL Server engine) during the login process. In the next experiment I’ll try to show how the nine Server level settings get affected/overridden by the pre-login changes – loosely speaking 🙂

Initially, all server level user settings are turned OFF – Figure 2 – and that includes the 9 options mentioned before.
Let’s see what happens on the session level if we set ALL Server level user options to ON. For the experiment I’ll use a PowerShell script, similar to the one used before, to connect to SQL Server.

## Create a simple connection to Sql Server using .NET Framework Data Provider for Sql Server
$sqlConn = New-Object System.Data.SqlClient.SqlConnection;

$sqlConn.ConnectionString = @('Data Source=tcp:(local);' `
                             +'Initial Catalog=TestDB;' `
                             +'Integrated Security=True;' `
                             +'Pooling=False;' `
                             +'Application Name=TestDataProviders;');

## Open and Close the connection
$sqlConn.Open();

###$sqlCmd = New-Object System.Data.OleDb.OleDbCommand;
$sqlCmd = $sqlConn.CreateCommand()
$sqlCmd.CommandText = "SELECT * FROM dbo.vwGetSessionSETOptions";

$result = $sqlCmd.ExecuteReader();

$table=New-Object System.Data.DataTable;
$table.Load($result);

$format = @{Expression={$_.OptionName};Label="Option Name";width=90},@{Expression={$_.Setting};Label="Setting"; width=50}

$table | Format-Table $format;   

$sqlConn.Close();   
$sqlConn.Dispose();

Before executing the script, let’s change the server level user settings and check the effects of the changes through the SSMS GUI, Figure 4.

DECLARE @user_settingsBitMask SMALLINT;

SET @user_settingsBitMask = ( POWER(2,15)- 1 ) -- all 15 settings turned on
                             - 1024  --turn off ANSI_NULL_DFLT_ON
							 - 64    --turn off ARITHABORT 
EXEC sys.sp_configure 
	 @configname = 'user_options' 
	,@configvalue = @user_settingsBitMask; --31679
GO
RECONFIGURE;

NOTE: Some of the user settings are mutually exclusive.

  • ANSI_NULL_DFLT_ON and ANSI_NULL_DFLT_OFF
  • ARITHABORT and ARITHIGNORE

The figure below shows the session level user options settings output for the two Server level user options settings scenarios. The snapshot combines two outputs.

Figure 12, Session level user settings

As we can see, the user options business is pretty unclear and confusing. Combining findings from the previous experiments, I compiled a table that may explain the sequence in which the user options are set. But before that, just one more thing to mention – the ANSI_DEFAULTS setting. The setting controls (sets to ON) a group of user option values. This is to provide the ANSI standard behavior for the options below;

  • ANSI_NULLS
  • ANSI_PADDING
  • ANSI_WARNINGS
  • QUOTED_IDENTIFIER
  • ANSI_NULL_DFLT_ON
  • CURSOR_CLOSE_ON_COMMIT
  • IMPLICIT_TRANSACTIONS

The status of the ANSI_DEFAULTS can be checked using the query below.

SELECT ansi_defaults
FROM sys.dm_exec_sessions 
WHERE session_id = @@SPID

When checked, the status is usually 0 (not set) because some of the user settings from the list get overridden by the pre-login/post-login processes mentioned before.

Figure 13, User Option settings sequence

During the application login attempt, the data provider/SQL Server engine turns ON the set of user options grouped in the ANSI_DEFAULTS setting. This overrides the user options set on the server level (Phase 1). Right after the change, another set of changes overrides a couple of the ANSI_DEFAULTS options and a few server level user options (Phase 2). At this stage the login process is finished and 9 out of the 15 option values are reported by the login xEvent (options_text field). The color of the user options at the final session level shows the level each value came from.

*ARITHABORT and ARITHIGNORE show different behavior.

  • The options are mutually exclusive if set on the server level (only one of them can be set to ON).
  • If one of the settings is set on the server level, that setting will not be changed through the login process.
  • If neither of the settings is set on the server level, ARITHABORT will be set to ON during Phase 2.
  • Only the ARITHABORT value will be reported by the login xEvent.

Data providers like ODBC, dbLib perform additional changes to the user options after successfully establishing a database connection.

Session level setting

Once connected, Client code can override the previously set user options by using SET statements directly in the code. This is the final override of the user option values 🙂 . The scope of the settings is the current session (or, from the Client’s perspective, the current connection). The following code shows a few user options set in a stored procedure’s header…

CREATE OR ALTER PROCEDURE dbo.uspSetUserOptions
AS
BEGIN
	SET ANSI_DEFAULTS ON;
	SET IMPLICIT_TRANSACTIONS OFF;
	SET ARITHABORT ON;
	SET QUOTED_IDENTIFIER ON;

	SET NOCOUNT ON;
	SET XACT_ABORT ON;

	-- code ....

	RETURN;
END

Conclusion

The User Options are a set of configurable elements that affect query behavior, e.g. handling 3VL (three-valued logic), division-by-zero situations, data truncation rules, etc. As SQL developers we need to be aware of the environment in which our queries are being executed. There are several levels where user options can be set: Server, Database, and session level. The session level has precedence over the other two levels. However, SQL Server can set its own default user option values during and after the login phase – when a client application makes an attempt to connect to SQL Server. Data providers can also independently change user option settings, during and after the login process. The impact of these changes can be logical errors, such as inserts/updates/deletes that always get rolled back (implicit transactions set to ON), or code execution errors, such as insert failures on a table with a filtered index. There are scenarios where user option settings cause sneaky logical errors that are very difficult to debug, e.g. what if NULL = NULL suddenly evaluates to true (ANSI_NULLS OFF)?
The session level provides a place where we can set user option values that override all previously set options. Designing templates for SQL Server objects (stored procedures, functions, etc.) that include the option settings in the header may be a good way to put the whole user option confusion to rest 🙂

Thanks for reading.

Dean Mincic

 

 

Statistics used in the cached execution plans – Stored Procedures


Summary

The query optimization process sometimes requires an understanding of how SQL Server’s query engine compiles, re-compiles, and executes SQL batches. Some of the most important elements used by the Query Optimizer when constructing a good enough query execution plan are the “interesting statistics” – the statistical information the optimizer loads and uses during plan construction.
This blog attempts to explain what the “interesting statistics” are, when they are updated, and how the statistical information relates to the query recompilation process. The topic is related to temporary table statistics when used in stored procedures.

Batch compilation and recompilation

To begin with, let’s analyze the batch compilation/recompilation diagram (By Arun Marathe, Jul 2004, Batch Compilation, Recompilation and Plan Caching Issues in SQL Server 2005). The idea is to create a set of experiments that will capture the behavior of a stored procedure through the different phases of the query compilation/recompilation process, particularly those related to the statistics that are used to generate the execution plan.


Figure 1, Batch Compilation/Recompilation diagram

I’ve used the AdventureWorks database to set up the test environment and MS Profiler to capture various Events relevant to the experiments.

MS Profiler events

    1. Attention (Errors and Warnings)
    2. Auto Stats (Performance)
    3. SP:CacheHit (Stored Procedures)
    4. SP:CacheInsert  (Stored Procedures)
    5. SP:CacheMiss  (Stored Procedures)
    6. SP:CacheRemove  (Stored Procedures)
    7. SP:Completed  (Stored Procedures)
    8. SP:Recompile  (Stored Procedures)
    9. SP:Starting  (Stored Procedures)
    10. RPC: Starting (Stored Procedures)*
    11. RPC:Completed (Stored Procedures)*
    12. SP:StmtStarting  (Stored Procedures)
    13. SQL:StmtRecompile (TSQL)
    14. SQL:StmtStarting  (TSQL)

Database Objects

Set the AdventureWorks DB compatibility level to 140 – SQL Server 2017. This version provides easy access to the information about the interesting statistics saved with the query plan (SSMS – SELECT plan operator, Properties, OptimizerStatsUsage).

Below is the set of SQL Server object definitions used for the testing.

USE master
GO
ALTER DATABASE AdventureWorks
    SET COMPATIBILITY_LEVEL = 140;
GO

USE AdventureWorks
GO

/* dbo.SalesOrderDetail table */
DROP TABLE IF EXISTS dbo.SalesOrderDetail
GO

SELECT *
    INTO dbo.SalesOrderDetail
FROM Sales.SalesOrderDetail;
GO

--add primary, clustered key
ALTER TABLE dbo.SalesOrderDetail
    ADD CONSTRAINT PK_SalesOrderDetailID PRIMARY KEY CLUSTERED (SalesOrderDetailID);  
GO

--NCI on ProductID
CREATE NONCLUSTERED INDEX IX_SalesOrderDetail_ProductID 
    ON dbo.SalesOrderDetail (ProductID);
GO

/* dbo.Products table */
DROP TABLE IF EXISTS dbo.Products
GO
    
SELECT *
    INTO dbo.Products
FROM Production.Product
GO

--add primary, clustered key
ALTER TABLE dbo.Products
    ADD CONSTRAINT PK_ProductID PRIMARY KEY CLUSTERED(ProductID)
GO

--NCI on ListPrice
CREATE NONCLUSTERED INDEX NCI_Products_ListPrice
    ON dbo.Products (ListPrice)
        INCLUDE([Name],ProductNumber)
GO

/* dbo.TestQueryExecution stored proc*/
DROP PROCEDURE IF EXISTS dbo.TestQueryExecution;
GO

CREATE PROCEDURE dbo.TestQueryExecution
          @ProductID INT
         ,@ProdName NVARCHAR(50)
         ,@MinLinePrice MONEY = $100
AS
BEGIN
    SET XACT_ABORT ON; 
    SET NOCOUNT ON

    --query1
    SELECT   d.CarrierTrackingNumber,
             d.OrderQty,
             d.UnitPrice,     
             d.ProductID,
             p.[Name]
    FROM    dbo.SalesOrderDetail d
        INNER JOIN dbo.Products p
            ON d.ProductID = p.ProductID
    WHERE d.ProductID = @ProductID 

    --query2
    SELECT [Name]
           ,ProductNumber
           ,ListPrice
           ,Color
    FROM dbo.Products
    WHERE ListPrice >= @MinLinePrice
        AND  [Name] LIKE (@ProdName +'%')

    RETURN;
END

Information about the statistics/indexes on the tables can be retrieved using the queries below.

USE AdventureWorks
go

--sys.stats
SELECT
     TableName = OBJECT_NAME(sp.object_id)
    ,[Statistic ID] = sp.stats_id -- If statistics correspond to an index, the stats_id value is the same as the index_id value in the sys.indexes.
    ,[Statistics] = s.[name] 
    ,[Last Updated] = sp.last_updated 
    ,sp.[rows]
    ,sp.rows_sampled
    ,sp.unfiltered_rows
    ,Modifications = sp.modification_counter
    ---
    ,s.object_id
FROM sys.stats AS s
OUTER APPLY sys.dm_db_stats_properties (s.object_id,s.stats_id) AS sp
WHERE s.object_id IN (OBJECT_ID(N'dbo.SalesOrderDetail'), (OBJECT_ID(N'dbo.Products')));

--sys.sysindexes
SELECT TableName = OBJECT_NAME(i.id)
      ,IndexName = i.[name]
      ,i.indid --If statistics correspond to an index, the stats_id value in the sys.stats is the same as the index_id
      ,IndexDesc = CASE 
                        WHEN i.indid = 0 THEN 'HEAP'
                        WHEN i.indid = 1 THEN 'CLUSTERED'
                        WHEN i.indid > 1 THEN 'NONCLUSTERED'
                   END
      ,i.rowcnt
      ,i.dpages
      ,i.rowmodctr
FROM sys.sysindexes i
WHERE i.id  IN (OBJECT_ID(N'dbo.SalesOrderDetail'), (OBJECT_ID(N'dbo.Products')));

The following examples assume the default settings for the SQL Server options related to statistics:
– AUTO_CREATE_STATISTICS ON
– AUTO_UPDATE_STATISTICS ON
– AUTO_UPDATE_STATISTICS_ASYNC OFF

A bit of theory first before proceeding with the tests. : )

colmodctr

colmodctr is an ever-increasing counter that tracks the changes made on tables (a counter per column, excluding non-persisted computed columns). colmodctr is not transactionally consistent, which means that it is not affected by rolled-back changes, i.e. if a transaction inserts 10 rows in a table and then rolls back, the counter will still report 10 changes.
SQL Server statistics (automatically/manually created/updated) on a column(s) will store the snapshot value of the colmodctr for the leftmost column in the stats blob.
The counter is very important since it’s one of the elements that drive the query recompilation decisions related to the “statistics changed” recompile reason.

colmodctr counter can be accessed through the following system views.


Figure 2, colmodctr, system views – standard and hidden

One way to access the hidden tables is to open a separate SSMS instance, close Object Explorer, and create a single connection using the Server name ADMIN:(local) – the dedicated admin connection (DAC).
NOTE: The structure of the hidden tables and the tables’ accessibility are not documented and may be changed in future versions.

Recompile thresholds (RT)

The RT concept defines the number of changes to a table column that need to happen before the statistical information on that column is considered stale.
The changes include column value changes through DML operations such as INSERT, UPDATE, DELETE… e.g. inserting 10 new rows in a table is counted as 10 changes (identified by the colmodctr counters mentioned before).
If the table has no statistical information, i.e. a HEAP table with no indexes and no manually created statistics, and the query plans that reference the table do not load/automatically create interesting statistics, the only relevant change for the RT crossing test is the change in the number of rows inserted and/or deleted.

| colmodctr(current) – colmodctr(snapshot) |  >= RT

or

 | cardinality(current) – cardinality(snapshot) |  >= RT

current     – refers to the current value of the modification counter
snapshot – refers to the value of the mod. counter captured during the last plan compilation(re-compilation).
cardinality* – the number of rows in the table.

Cardinality

In mathematics, the cardinality of a set is defined as the number of elements in a SET.  A SET is an unordered collection of elements in which each element is unique.

In an RDBMS (see RDBMS fundamentals), data is presented in the form of a table. An RDBMS table has its roots in a structure called a Relation (attributes ~ columns, tuple ~ row). The number of tuples is represented by the cardinality of the relation – a relation does not have duplicate tuples.
However, the table structure deviates from the strict rules and allows, e.g., duplicate rows. This means that cardinality represents the number of rows in a table only if the table has no duplicates.
Sometimes in the literature we find that the cardinality of a table represents the number of unique rows out of the total number of rows. So, cardinality in this context represents uniqueness.

Cardinality may also represent the uniqueness of data values in a particular column – the lower the cardinality, the lower the selectivity of a value and the more duplicated values in the column. It is a measure that defines the density of a column – column density is the reciprocal of the column’s cardinality, so the more selective the values, the lower the density. Then there is a metric called the Density Vector that consists of the densities of the individual columns… super interesting stuff, but not for this post 🙂

In a different context, cardinality is a way to define the relationship between two entities in a data model. It is also known as the degree of relationship, i.e. 1-1, 1-m, m-n.

 

The Threshold Crossing  Test evaluates to TRUE if the number of changes is greater than the predefined RT value (see Figure 3)

Recompilation thresholds(RT) for all the tables referenced in the query are stored along with the query plan.

RT depends on the table type(permanent vs temporary) and the number of rows in the table.


Figure 3, Recompile thresholds

Special case. RT = 1 if the table has 0 rows (with or without statistics)

NOTE: Starting from SQL Server 2008 R2 SP1, Microsoft introduced TF2371. The trace flag activates a dynamic recompile threshold calculation – the higher the number of rows in a table, the lower the RT. The functionality is implemented to allow automatic statistics updates to kick in more frequently for big tables. E.g. the default RT for a 10,000-row table is 500 + 0.20*10,000 = 2,500 – the number of changes required to trigger a query recompile. For a table with 100M rows, the default RT is 20,000,500. For some applications that RT may be too high, resulting in sub-optimal plans due to the lack of query recompilation. Hence the TF2371.
Starting from SQL Server 2016, the TF2371 behavior is turned on by default.
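
On instances older than SQL Server 2016, the dynamic threshold behavior can be enabled with the trace flag (a sketch – with the usual caution around global trace flags):

DBCC TRACEON (2371, -1); --enable globally
DBCC TRACESTATUS (2371); --confirm the flag is active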

Here are a couple of examples to illustrate Figure 3.
If there is a table A that contains 230 rows, the RT for the table will be set to 500. This means that if we e.g. insert 500 rows, the total number of rows (c) changes to 730 (c >= 230 + 500), which is enough changes to make the table’s statistics stale.
The change itself does not mean much if there are no queries that reference the table : )
The query plans may or may not initiate the auto-statistics creation on the specific table columns. Also, the referenced tables may not have any statistical information i.e HEAP table with no non-clustered indexes.

Experiments

Experiment 1 (stats change before query execution)

In this experiment, we will make “enough” changes to the ListPrice column (dbo.Products table) BEFORE running the stored procedure for the first time.
The column is the key column of NCI_Products_ListPrice, the non-clustered index, and has statistical information attached to it (the stats object has the same name as the NCI).

Let’s begin the experiment by creating the test objects and checking the statistical information on the tables.

Step 1, Check the initial stats/rowmodctr information

Figure 4, Initial rowmodctr information

Step 2, Check stats BLOB and make changes on dbo.Products table

Run the DBCC  command below before and after the UPDATE to confirm that there were no changes in the stats BLOB information.

DBCC SHOW_STATISTICS ('dbo.Products', NCI_Products_ListPrice) 
    WITH STAT_HEADER;
GO

BEGIN TRANSACTION  
    UPDATE dbo.Products  --504 changes (all rows)
        SET ListPrice += ListPrice * 0.10 --add 10% to the price
    UPDATE TOP(106) dbo.Products --106 changes
        SET ListPrice += $10.00 --add $10 to the prices
ROLLBACK TRANSACTION;

NOTE: rowmodctr is not transactionally consistent.


Figure 5, stats BLOB information


Figure 6, rowmodctr after the initial dbo.Products update

The changes are detected and available through SQL Server’s metadata.

Step 3, Run the stored procedure and observe the captured events by the Profiler.

EXEC dbo.TestQueryExecution
       @ProductID = 897
      ,@ProdName = N'G'
      ,@MinLinePrice = $0;

Figure 7, Statistics refresh

Following the batch compilation diagram, we can identify the following steps.

  1. The Cache Lookup step resulted in the SP:CacheMiss event. The dbo.TestQueryExecution stored proc does not exist in the cache.
  2. Query Compilation Begins. The SQL Server engine is about to load all of the “interesting statistics”. The loaded statistics can be retrieved from the Actual Execution Plan, the SELECT physical operator – OptimizerStatsUsage property.
  3. The Query engine checks if any of the loaded interesting statistics are stale. If yes, the system stops the batch compilation process and refreshes the statistics. In our case the system has:
    • Identified the number of changes made on the ListPrice column. From the stats/index information gathered after the initial update, the number of changes (rowmodctr/Modifications) is 610.
    • Performed the RT crossing test. The test passed since the number of changes (610) exceeded the RT for tables with more than 500 rows: RT = 500 + 0.20 * 504 ~ 601, and 601 < 610.
    • Executed StatMan, an internal process that automatically maintains statistics. The process updated the stale statistics NCI_Products_ListPrice on the dbo.Products table:
      SELECT StatMan([SC0], [SC1]) --from MS Profiler
      FROM   (SELECT  TOP 100 PERCENT [ListPrice] AS [SC0]
                                     ,[ProductID] AS [SC1]
              FROM  [dbo].[Products] WITH (READUNCOMMITTED)
              ORDER BY [SC0],[SC1]) AS _MS_UPDSTATS_TBL
      OPTION (MAXDOP 16);

      If we check the stats blob from Step 2, we will see that the Updated column changed its value to the current date – the stats blob has been updated.
      The AutoStats event reported the UPDATE of the statistics with EventSubClass = 1 – Other. More on the event can be found here.

  4. Query Optimiser starts to generate the query plan – a plan for each query statement.
    • The second query in the batch has a predicate on the Name column of the dbo.Products table. In an attempt to make better cardinality estimates on the rows that need to be processed, Query Optimiser decided to automatically create statistical information on the column.
      The system stops the batch compilation process and again executes the StatMan process to create the new statistics.
      SELECT StatMan([SC0])
      FROM  ( SELECT TOP 100 PERCENT [Name] AS [SC0]
              FROM  [dbo].[Products] WITH (READUNCOMMITTED)
              ORDER BY [SC0]) AS _MS_UPDSTATS_TBL
      OPTION (MAXDOP 16);

      After creating the stats, QO decided not to use it  : (
      Below is the list of the “interesting statistics” loaded during the Query compilation process. The list does not include the automatically created stats on the Name column (a query to list the auto-created stats follows after this list).

      As a result of the updated statistics on the ListPrice column, the rowmodctr for the column was reset.

    • QO sets the new recompilation thresholds(RT) for all tables used in the queries.
      1. RT(dbo.SalesOrderDetail) = 500 + 0.20 * 121317 = 24763.4 (~24764)
      2. RT(dbo.Products) = 500 + 0.20 * 504 = 600.8 (~601)
        This means that QO will initiate a query recompile due to the “Statistics changed” reason if
        1. dbo.SalesOrderDetail has
          1. 24764 or more inserted/deleted rows, or
          2. 24764 or more changes on the SalesOrderDetailID and ProductID columns
        2. dbo.Products has
          1. 601 or more inserted rows, or
          2. 601 or more changes on the ProductID, ListPrice, and Name columns
  5. The query execution starts. The query plans are constructed and cached. SP:CacheInsert event reported that the stored procedure has been cached.
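A quick way to verify the outcome of steps 3 and 4 above (the refreshed NCI_Products_ListPrice statistics and the automatically created stats on the Name column) is a sketch like the one below – auto-created stats get a system-generated _WA_Sys_% name:

SELECT  StatsName   = s.[name]
       ,AutoCreated = s.auto_created
       ,LastUpdated = STATS_DATE(s.[object_id], s.stats_id)
FROM sys.stats AS s
WHERE s.[object_id] = OBJECT_ID(N'dbo.Products');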

Experiment 2 (stats change during the query execution)

In this experiment, we will make “enough” changes to the Name column (dbo.Products table) HALFWAY THROUGH the stored procedure execution.

Step 1 Set up the environment

  • Run the script to reset the test environment
  • Add a WAITFOR statement between the two queries in the stored procedure
... 
SELECT   d.CarrierTrackingNumber,--query1
             d.OrderQty,
             d.UnitPrice,     
             d.ProductID,
             p.[Name]
    FROM    dbo.SalesOrderDetail d
        INNER JOIN dbo.Products p
            ON d.ProductID = p.ProductID
    WHERE d.ProductID = @ProductID 

    WAITFOR DELAY '00:00:06' -- buy some time to change statistics
    
    SELECT [Name]--query2
           ,ProductNumber
           ,ListPrice
           ,Color
    FROM dbo.Products
    WHERE ListPrice >= @MinLinePrice
        AND  [Name] LIKE (@ProdName +'%')
...
  • Use PowerShell to execute the stored procedure. Add HostName property. Use the HostName to capture only the events related to the PS call. This will prevent MS Profiler from capturing events related to the UPDATE statement that will run in parallel.
    PS C:\Users\dean.mincic> Invoke-Sqlcmd -ServerInstance "." `
                                           -Database "AdventureWorks" `
                                           -HostName "experiment" `
                                           -Query "EXEC dbo.TestQueryExecution `
                                                      @ProductID =897 `
                                                      ,@ProdName = N'G' `
                                                      ,@MinLinePrice = 0.00;"
  • Add a HostName filter to the Profiler trace (HostName LIKE experiment)

Step 2, Run the PowerShell cmdlet, switch to SSMS, and run the UPDATE query below. The queries will generate enough changes to make the automatically created statistics on the Name column stale.

BEGIN TRANSACTION  
    UPDATE dbo.Products  --504 changes (all rows)
        SET [Name] +='_ABC' 
    UPDATE TOP(106) dbo.Products --106 changes (random rows)
        SET [Name] +='_ABC'
COMMIT TRANSACTION

Step 3. Analyze the captured MS Profiler trace.
Figure 8, Query recompile

  • The first thing that is different from the last run is the SP:CacheHit event. The event shows that our stored procedure was found in the Plan cache. The previously set RTs and the interesting statistics are part of the cached information.
    NOTE: Auto-created statistic on the Name column was not used during the initial query compilation – the stats are not part of the interesting stats.
  • This time there were no changes on the columns that would initiate statistics updates, no new auto-created stats, and the existing cached query plan did not need to be recompiled for the “statistics changed” reason. The process proceeds with the query execution.
  • The first query is successfully executed followed by the  WAITFOR statement. During the statement execution (6 seconds delay) a separate query has made enough changes on the Name column(dbo.Products) to pass the RT crossing test for the table and flag the auto-created statistics on the column as stale. Even if not used by QO during the plan generation, the stats are marked as stale.
  • (1) The query execution stops at the  “Any stats stale?”  step. The System initiates the query recompile process – SP: Recompile due to 2 – Statistics changed reason. The event is followed by the statement level SQL:StmtRecompile event which indicates that only the second query needs to be recompiled.
  • (2) Again, the StatMan process kicks in and updates the stale statistics. The RTs are set (in this case the number of rows has not changed, hence the RTs stayed the same). The rowmodctr value for the Name column is reset to 0.
  • (3) The AutoStats event reported the Statistics Update with EventSubClass = 1 – Other.
  • (4) The SP:StmtStarting event reports that the second query has been recompiled and the batch execution continues.

Experiment 3 (tables with no stats on columns)

The experiment demonstrates how queries get recompiled when referencing tables with no statistics. The recompiles due to the “statistics changed” reasons are initiated by the RT-table cardinality crossing test results only.
As previously mentioned, the cardinality-based RT crossing test is defined as

 | cardinality(current) – cardinality(snapshot) |  >= RT
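Table cardinality here is simply the current row count, which can be read e.g. from sys.dm_db_partition_stats. A minimal sketch, using the dbo.testRecomp test table created in the next step:

SELECT  TableName   = OBJECT_NAME(ps.[object_id])
       ,Cardinality = SUM(ps.row_count)
FROM sys.dm_db_partition_stats AS ps
WHERE ps.[object_id] = OBJECT_ID(N'dbo.testRecomp')
    AND ps.index_id IN (0, 1)  -- heap or clustered index
GROUP BY ps.[object_id];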

Let’s create a test table and a stored procedure to perform the above experiment.

Step 1, set up the test environment

--a heap table - no statistics
DROP TABLE IF EXISTS dbo.testRecomp;
GO
CREATE TABLE dbo.testRecomp(id INT
                            ,filler CHAR(100)
                            );
GO
--test sp
DROP PROCEDURE IF EXISTS dbo.testRecompile;
GO
CREATE PROCEDURE dbo.testRecompile
AS
BEGIN
    SET XACT_ABORT ON;
    SET NOCOUNT ON;
  
    SELECT TOP(100) Average = AVG(t.id)
                   ,NoOfRows =  COUNT(t1.id)
    FROM dbo.testRecomp t
       CROSS JOIN dbo.testRecomp t1 
   RETURN;
END 
GO

.. and insert some data into the table…

--add 230 rows, RT will be set to 500
INSERT INTO dbo.testRecomp(id,filler)
    SELECT TOP(230) c.column_id,c.[name]
    FROM master.sys.columns c1 ,master.sys.columns  c;
GO

The initial statistical information looks like this (find how to retrieve the metadata related to the statistical information at the beginning of the post)


Figure 9, rowmodctr with no statistical information

Step 2, Run the stored proc for the first time. The RT is set to 500.

EXEC dbo.testRecompile;

Step 3, Make enough changes to the table to pass the cardinality crossing test. Insert 500 rows. Do not use an explicit transaction yet.

--BEGIN TRANSACTION
    INSERT INTO dbo.testRecomp 
        SELECT TOP(500)ABS(CHECKSUM(NEWID())) % 500,  b.[name] + '_' + cast (ROW_NUMBER() OVER(ORDER BY (SELECT NULL )) AS varchar(5))
        FROM sys.columns a, sys.columns b
--COMMIT TRANSACTION; --ROLLBACK TRANSACTION;

Step 4, Run the stored procedure again and observe the query execution behavior in Profiler.

Figure 10, Query recompile, table cardinality change – no stats

  • The new rowmodctr information looks like

    The new number of rows (rowcnt) is recorded along with the number of changes, rowmodctr=730. In this case, the rowmodctr value is not relevant since the RT crossing test depends only on changes in the table cardinality. This will be more visible if we ROLLBACK the row insertion operation which will be covered later.
  • The second execution followed the “Cache lookup = Success” path (see the batch compilation diagram) and stopped at the very last step, “Any stats stale?“.
  • At this stage, the system has detected that the RT cardinality crossing test has passed due to the number of changes(new rows) inserted in the table.
  • The system stopped the execution process and initiated the stored proc/statement recompile – SP:Recompile, SQL:StmtRecompile.  As in the previous examples, the reason for the recompile was 2 – Statistics changed.
    NOTE: The recompile process is not followed by the StatMan process since the query does not have any statsBlob information to be refreshed/created.

Experiment 3.1 (rowmodctr not in use)

The next example shows that the RT cardinality crossing test is not related to rowmodctr as it may seem from the previous example where the number of changes followed table cardinality changes.

  • Follow the steps from the previous example.
  • Execute the INSERT query from Step 3 within an explicit transaction, this time rolling it back
BEGIN TRANSACTION
    INSERT INTO dbo.testRecomp 
        SELECT TOP(500)ABS(CHECKSUM(NEWID())) % 500,  b.[name] + '_' + cast (ROW_NUMBER() OVER(ORDER BY (SELECT NULL )) AS varchar(5))
        FROM sys.columns a, sys.columns b
ROLLBACK TRANSACTION;
  • Observe that there are no query recompiles due to the “statistics changed” reason since there were no table cardinality changes – the ROLLBACK “canceled” the row insertions.
  • The statistical information shows that rowmodctr = 720 – the counter registered the changes even though the insert was rolled back, yet no recompile was triggered.

Conclusion

The query compilation, execution, and recompilation sequence includes, among other steps, loading the interesting statistics – the statistical information on different table columns that the Query Optimiser may find useful when creating a good plan – and auto-creating statistical information on the columns that participate in e.g. a WHERE filter, GROUP BY etc.
The SQL Server query engine also checks the validity of the loaded statistical information during the initial stored procedure compilation and again during the stored procedure execution phase. If the loaded statistics are found to be stale during compilation, the engine pauses the compilation, refreshes (re-samples) the loaded statistical information, and then continues the compilation process. If the engine detects stale loaded statistics during the execution phase, it stops the execution, refreshes (re-samples) the statistics, and restarts the compilation process – a query recompilation. The recompiles are done per query, not per batch.
The examples in this blog showed that the statistical information can be automatically maintained by the queries that use it. Statistics can also be maintained manually.
To mark statistics as “stale”, QO uses the Recompile Threshold (RT) crossing test. The test tracks the number of changes on the significant (leftmost) columns within the statistics BLOBs. The information is stored in an ever-increasing, non-transactionally consistent counter – “rowmodctr”. The RTs are stored per table within the compiled query.
For tables with no statistics at all, the RT crossing test is based only on the changes in the table cardinality (the number of rows).

 

Thanks for reading.

Dean Mincic

Client Requests & Sql events in Sql Server


Summary

Very often we, the sequel guys, use MS Profiler/SQL Server's Extended Events to intercept and analyse the traffic between client applications and our precious Sql Server. Sometimes we capture an hour/day of traffic, and sometimes we just want to capture a slow running stored procedure's input param values. We are also involved in data layer code reviews in order to keep Client data requests as optimal as possible. This post is a sort of internal notes/documentation about the different types of requests that can be sent from Clients to Sql Server and the corresponding events visible in MS Profiler. The article also gives a high-level overview of the space between ADO.NET and Sql Server.

MDAC – Microsoft Data Access Components

To start off, let's first take a brief look at the MDAC framework and where ADO.NET sits in the stack. MDAC technology allows applications to communicate with almost any data-store.
MDAC architecture implements three logical layers:

  • Programming interface layer (ADO, ADO.NET)
  • DB access layer (ODBC, OLEDB, .NET Managed providers)
  • Data store

All of the layers are accessible through the MDAC API. (+ MS Sql Server network library)*


Figure 1, MDAC architecture

NOTE: Microsoft Network Library (Net-lib) is also part of MDAC and sits between the TDS (Tabular Data Stream) Formatter and the network protocol. The DLLs are specifically used by SQL Server to communicate with the Clients on the network level. It supports the following network protocols: Named Pipes, TCP/IP, Shared Memory, and VIA.

ADO.NET

ADO.NET provides a generic interface(set of classes that expose data access services)  to a number of different data-stores (SQL Server,CSV, Excel..) from .NET applications. It is a set of libraries included in MS .NET Framework.
The main objectives of ADO.NET are to:

  • connect to a data source
  • submit queries
  • process results
  • provide a powerful disconnected data structure to perform data operations offline.

There are two main  components of ADO.NET for accessing and manipulating data

    • .NET Framework data providers – a set of class libraries that understand how to interact with a specific data platform, e.g. Sql Server, or with a common data layer, i.e. OLE DB. Data providers transport data between proprietary data platforms and the generic ADO.NET data layer. The ADO.NET framework includes three providers:
      • The MS Sql Server Provider – exposed through the System.Data.SqlClient namespace
      • The OLE DB provider – exposed through the System.Data.OleDb namespace.
      • The ODBC provider – exposed through the System.Data.Odbc namespace.
    • DataSet – a class(System.Data namespace) that provides a disconnected representation of result sets from the Data Source. The object provides a consistent relational programming model regardless of the data source. It includes objects like: DataTable, DataRow,  DataView, DataColumn, PrimaryKey ..etc

The key classes within each provider(each provider has its own specific class names for the same objects) include:

  • Command (SqlCommand),
  • Connection (SqlConnection),
  • DataAdapter (SqlDataAdapter)
  • DataReader (SqlDataReader).


Figure 2, ADO.NET components, high overview

TDS (Tabular data stream)

TDS is an application layer protocol used to transfer tsql commands between client and server. The protocol includes facilities for authentication and identification, encrypted negotiation, issuing of Sql batch (language events) and stored procedure (RPC) calls, and managing and distributing the outcome of atomic transactions. TDS describes the names, types and optional descriptions of the rows being returned. It was developed by Sybase in 1984 and adopted by Microsoft in 1990. The MS-TDS (Microsoft's version of the protocol) specification can be found here.

Figure 3,Communication flow in TDS

More on how the Client talks to Sql Server

The Client communicates with Sql Server in a layered fashion. Each layer exchanges information with its neighboring layer, and sometimes layers can be bypassed. Only the Transport protocol layer enables physical communication between the Client and Sql Server; for all other layers the exchange of information is done locally – in RAM and between different DLLs.
The layers are:

  1. Application Layer (C# code, etc)
  2. High level data access API (ADO,ADO.NET…)
  3. Client data interface(OLEDB,ODBC, db-lib ..)
  4. Client TDS Formatter. The tsql commands sent to Sql Server have to be in TDS format.
  5. Client net protocol (Shared memory, TCP/IP sockets, net-lib)
  6. Client transport protocol (TCP/IP ..)
  7. Server transport protocol (TCP/IP ..)
  8. Server net protocol (Shared memory, TCP/IP sockets, net-lib)
  9. Server data interface
  10. SQL Server

NOTES: The App level (A) can bypass the ADO.NET level (B) if we directly access the Client data interface (direct API calls to OLE DB, ODBC, etc.).
The Client net protocol (E) and the Client transport protocol (F) can be bypassed if the Client and Sql Server communicate through the Shared Memory protocol. In this scenario the Client and SQL Server run on the same machine, and the components exchange information through shared RAM – the network elements are not required.
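A quick way to see which path a particular connection actually took is to query sys.dm_exec_connections – a local connection typically reports Shared memory as its transport. A minimal sketch:

SELECT  c.session_id
       ,c.net_transport  -- Shared memory, TCP, Named pipe ...
       ,c.protocol_type  -- TSQL (TDS)
       ,c.auth_scheme
FROM sys.dm_exec_connections AS c
WHERE c.session_id = @@SPID;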

Client data requests

The only way clients can interact with Sql Server is by sending requests that contain TSQL commands for the database engine. The requests are sent over MS-TDS protocol(Microsoft’s version of TDS).
There are three main forms of requests:

  • Batch requests
  • RPC (Remote Procedure Requests)
  • Bulk Load Request

In the following examples I'll use ADO.NET components to present the different types of Client requests, and MS Profiler to capture and analyse the events which correspond to the requests.

Events

When Sql Server receives a message from a client (through TDS), it can be thought of as an event occurring – see Figure 3. First, the client sends a connection request (a login event) and gets back a success or failure response. Once connected, client requests typically fall into one of two categories:

  • Language events
  • RPC events

Language Events

A language event is a tsql batch sent from the client to the server. From the ADO.NET perspective, a tsql batch* is a set of tsql commands defined in the CommandText property of the SqlCommand class. The command can be executed using one of the methods: ExecuteReader, ExecuteScalar, ExecuteNonQuery, and ExecuteXmlReader.

The CommandType enum property of the SqlCommand class specifies how the command string is interpreted. The field values can be:

  • CommandType.StoredProcedure (contains the name of a stored procedure or a scalar function)
  • CommandType.Text (contains a tsql batch – one or more tsql statements, no GO directives)
  • CommandType.TableDirect (contains a table name. Only supported by the .NET Framework Data Provider for OLE DB)

NOTE: A tsql batch can be defined as one or more tsql statements before the GO* directive (end of batch signal). This is specific to Microsoft SQL stack tools, i.e. SSMS, sqlcmd, osql (isql in Sybase). It is not part of the tsql language and may not be recognised by other similar tools. It is not permitted to use the GO directive within the CommandText property.
After a TDS request reaches SQL Server, the db engine will create a TASK to handle the request. One task handles only one batch (a task may spawn a number of sub-tasks to support parallel execution). When ADO.NET “executes” a batch (one or more tsql commands), one TASK will handle the request. When SSMS executes e.g. two batches (the same tsql command twice – GO 2, or two tsql statements divided by the GO terminator), the Sql Server engine (SQLOS) allocates two separate TASKS for the job. The tasks can be observed while the batches are running, as sketched below.
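A minimal sketch to observe the tasks allocated for the currently running requests (run it from a separate session while the batches execute):

SELECT  r.session_id
       ,r.command
       ,r.[status]
       ,t.task_address
       ,t.task_state
FROM sys.dm_exec_requests AS r
    INNER JOIN sys.dm_os_tasks AS t
        ON  t.session_id = r.session_id
        AND t.request_id = r.request_id
WHERE r.session_id <> @@SPID; -- exclude this monitoring query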

Non-parameterised batch request

This type of request does not have parameters. It can contain local variable declarations. The request is treated by Sql Server engine as an ad-hoc query.
Ad-hoc queries can be cached and reused only if a subsequent batch matches exactly – an exact textual match, e.g.

SELECT * FROM dbo.Products WHERE [name]='bread';
--and
SELECT * FROM dbo.Products WHERE [name]='butter';

The query plans above are compiled and cached separately. This behavior can lead to the problem known as “cache pollution”.
The Sql Server optimiser may decide to automatically parameterise an ad-hoc query, but it's very conservative when making those decisions. The query template must be considered safe, meaning that the query plan does not change even if the actual parameter values change – the parameterisation must not degrade query performance. Also, the query engine decides on the auto-parameter data type (smallint in the example below). The data type may be changed due to a change in the input parameter size (PurchaseOrderId = 3420 fits into 2 bytes – smallint, but PurchaseOrderId = 210 fits into 1 byte – tinyint). In that case the optimiser caches another copy of the plan that includes the new data type.
In addition, many query elements disallow automatic parameterisation, such as JOIN, IN, INTO, DISTINCT, TOP… just to name a few.

--this query may be autoparameterised since it uses the same plan regardless of the 
--predicate's value (PurchaseOrderID is a clustered primary key)
USE AdventureWorks2012
GO
SELECT PurchaseOrderID
      ,[Status]
      ,OrderDate
      ,SubTotal
FROM Purchasing.PurchaseOrderHeader
WHERE PurchaseOrderID = 3420; --843, --1 ..
--the plan template looks like
/*
(@1smallint)SELECT[PurchaseOrderID],[Status],[OrderDate],[SubTotal] 
            FROM [Purchasing].[PurchaseOrderHeader] 
            WHERE [PurchaseOrderID]=@1
*/

-- the query below uses a predicate with an uneven distribution of values. This means that the
-- plan shape depends on the predicate value and therefore the auto parameterisation would degrade 
-- the query's performance
USE AdventureWorks2012
GO
SELECT  ProductID
       ,PurchaseOrderID
       ,OrderQty
 FROM Purchasing.PurchaseOrderDetail 
 WHERE ProductID = 707 --319 --391
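To check how the two queries above ended up in the plan cache – the first one auto-parameterised (objtype = Prepared), the second one as an ad-hoc plan – a sketch along the lines of the cache query used later in this post:

SELECT  cp.usecounts
       ,cp.cacheobjtype
       ,cp.objtype  -- Prepared = auto-parameterised, Adhoc = exact-text plan
       ,txt.[text]
FROM sys.dm_exec_cached_plans cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) txt
WHERE txt.[text] LIKE '%PurchaseOrder%'
    AND txt.[text] NOT LIKE '%dm_exec_cached_plans%';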

Let’s execute the second query through a simple console application. The predicate value will be passed as a part of the batch. Pass 707 as a command line argument (ProductId = 707).
NOTE: We can also use SSMS to run the batch.

using System;
using System.Data;
using System.Data.SqlClient;

namespace TestClientRequest_batch
{
    class Program
    {
        static void Main(string[] args)
        {
            int ProductId = 0;

            //get arguments
            if (args.Length != 0)
            {
                ProductId = Convert.ToInt16(args[0]);
            }

            using (SqlConnection conn = new SqlConnection())
            {
                conn.ConnectionString = @"Data Source=tcp:(local);
                                          Initial Catalog=AdventureWorks2012;
                                          Integrated Security=True;
                                          Application Name=TestClientRequests";

                using (SqlCommand cmd = new SqlCommand())
                {
                    cmd.Connection = conn;
                    cmd.CommandType = CommandType.Text;
                    cmd.CommandText = @"SELECT  ProductID
                                               ,PurchaseOrderID
                                               ,OrderQty
                                        FROM Purchasing.PurchaseOrderDetail 
                                        WHERE ProductID= " + ProductId.ToString();
                    conn.Open();

                    //use sql reader to execute the query 
                    using (SqlDataReader dr = cmd.ExecuteReader())
                    {

                        Console.WriteLine("ProductID PurchaseOrderID OrderQty\t");
                        while (dr.Read())
                        {
                            Console.WriteLine(dr[0].ToString() + "\t " + dr[1].ToString() + "\t\t\t" + dr[2].ToString());
                        }
                    }
                }
                Console.ReadKey(); //prevent auto close
            }
        }
    }
}

Figure 4, MS Profiler – tsql batch execution events 

The Client request is executed as a tsql batch. SQL:BatchStarting signals that the batch execution is starting. The SQL:StmtStarting event was fired for each statement in the batch (once in this example).

RPC Events

There are two basic ADO.NET client request types which invoke RPC(Remote Procedure Call) events

  • Parameterised batch request – uses sys.sp_executesql, Sql Server's system extended stored proc
  • Stored procedure and  scalar user defined function call

NOTE: The RPC (Remote Procedure Call) name came from Sybase and, in the context of the Client request, represents a type of formatting of the query inside a TDS packet before it is sent to Sql Server. The requests are pre-formatted by the driver/provider, i.e. OLEDB, ODBC, JDBC, etc.
Windows RPC (Remote Procedure Call) is a completely different technology, which implements interprocess communication (IPC) that allows applications to talk to each other locally or over the network. More on that RPC here.

Parameterised batch request

A parameterised request has parameter values passed to the tsql statements. Parameter input is treated as a literal value, not as executable code.
This type of request significantly improves the way Sql Server processes the tsql commands. The sql batch separates the “body” of the query from the data values managed by that query.
Let's change the SqlCommand object from the previous example to use a parameterised tsql statement and see what happens.

...
using (SqlCommand cmd = new SqlCommand())
{
    cmd.Connection = conn;
    cmd.CommandType = CommandType.Text;
    cmd.CommandText = @"SELECT  ProductID
                               ,PurchaseOrderID
                               ,OrderQty
                        FROM Purchasing.PurchaseOrderDetail 
                        WHERE ProductID= @ProductId";

    SqlParameter ProdId = new SqlParameter("@ProductId", SqlDbType.Int);
    cmd.Parameters.Add(ProdId).Value = ProductId;

    conn.Open();
...

Figure 5, MS Profiler – parameterised batch RPC event

This time ADO.NET constructs the request as an RPC (Remote Procedure Call) type. The client sends an RPC message data stream (TDS protocol level) to Sql Server. The message contains the procedure name (or, in this case, the ProcID) and the parameters. The ProcID identifies Sql Server's internal sp to be executed. In the example above ProcID = 10, which maps to the sys.sp_executesql extended stored procedure in Sql Server.

Figure 6, sys.sp_executesql extended stored procedure

NOTE: We cannot use SSMS to invoke RPC event types since it’s not possible to send Parameters as a part of the TDS binary stream.
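For reference, the RPC sent by the provider is roughly equivalent to the sys.sp_executesql call below. Running it from SSMS produces a language event rather than an RPC event, but it reuses the same parameterised plan:

EXEC sys.sp_executesql
     N'SELECT  ProductID
              ,PurchaseOrderID
              ,OrderQty
       FROM Purchasing.PurchaseOrderDetail
       WHERE ProductID = @ProductId;'
    ,N'@ProductId int'
    ,@ProductId = 707;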

The parameterised batch request results in a cached plan, just like a regular stored procedure call.

SELECT   cp.usecounts
        ,cp.cacheobjtype
        ,cp.objtype
        ,txt.[text]
        ,qp.query_plan
FROM    sys.dm_exec_cached_plans cp
    CROSS APPLY sys.dm_exec_sql_text(plan_handle) txt
    CROSS APPLY sys.dm_exec_query_plan(plan_handle) qp
WHERE   cacheobjtype = 'Compiled Plan'
    AND [text] NOT LIKE '%dm_exec_cached_plans%'
    AND [text] LIKE '%FROM Purchasing.PurchaseOrderDetail%'

Figure 7, parameterised batch request – cached plan

Stored procedure call

This type of RPC request is similar to the parameterised batch request. The main difference is that the CommandType enum property of the SqlCommand class is set to “StoredProcedure“, and the CommandText property value represents the name* of the stored procedure.

Quick note: In order to avoid cache misses/compile locks, use the two-part naming convention when referencing sql objects – dbo.uspGetPurchaseOrderDetails. More about sp caching can be found here.

Let’s encapsulate previous tsql batch in a stored proc…

USE AdventureWorks2012;
GO
CREATE PROCEDURE dbo.uspGetPurchaseOrderDetails
     @ProductId int 
AS 
BEGIN
     SET NOCOUNT ON;

     SELECT   ProductID
             ,PurchaseOrderID
             ,OrderQty
     FROM Purchasing.PurchaseOrderDetail 
     WHERE ProductID= @ProductId;  

     RETURN;
END

and then modify CommandType and CommandText properties as follows…

.....
using (SqlCommand cmd = new SqlCommand())
                {
                    cmd.Connection = conn;
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.CommandText = @"dbo.uspGetPurchaseOrderDetails";

                    SqlParameter ProdId = new SqlParameter("@ProductId", SqlDbType.Int);
                    cmd.Parameters.Add(ProdId).Value = ProductId;

                    conn.Open();
...

Figure 8, MS Profiler – RPC Stored proc call

Using the RPC:Starting and/or SP:Starting events, we can find out the value of the stored procedure's input parameter(s). It is interesting to notice the absence of the stored procedure's ObjectId within the RPC call. The initial request doesn't know what ID (object_id) is used by Sql Server to identify the stored procedure.

SELECT  o.[name]
        ,o.[object_id]
        ,spdef.[definition]
FROM sys.objects o 
    INNER JOIN  sys.all_sql_modules spdef
        ON spdef.[object_id] = o.[object_id]
WHERE o.[name] = N'uspGetPurchaseOrderDetails' 
/* --result
name                        ObjectId      Definition
-----------------------------------------------------
uspGetPurchaseOrderDetails  1364199910    CREATE PROC..

*/

Not directly related to the topic, but nevertheless an interesting observation, is the SP:CacheMiss event. The event describes the failed cache lookup for the stored procedure on the very first run, as the plan was not yet generated and stored in the cache.

Now, let’s execute a stored procedure that has a nested sp.

--------------------- top level stored proc --------------
USE AdventureWorks2012
GO

ALTER PROCEDURE dbo.uspGetPurchaseOrderDetails
        @ProductId int 
       ,@ProdName dbo.[Name] OUTPUT --NVARCHAR(50)
AS 
BEGIN
    SET NOCOUNT ON;
      
    SELECT   ProductID
            ,PurchaseOrderID
            ,OrderQty
    FROM Purchasing.PurchaseOrderDetail 
    WHERE ProductID= @ProductId

    --get the product name
    EXEC dbo.uspGetProductName
            @ProductId
           ,@ProductName = @ProdName OUTPUT      
    RETURN;
END
go 
--------------------- nested stored proc --------------

CREATE  PROCEDURE dbo.uspGetProductName
      @ProductId INT
     ,@ProductName dbo.[Name] OUTPUT --NVARCHAR(50)
AS
BEGIN
    SET NOCOUNT ON;

    IF @ProductId IS NULL
        RETURN;

    SELECT @ProductName = [Name]
    FROM Production.Product 
    WHERE ProductID = @ProductId;
    
    RETURN;
END 
-----------------------------------------------------

… add an output parameter to the code ..

...
using (SqlCommand cmd = new SqlCommand())
{
          cmd.Connection = conn;
          cmd.CommandType = CommandType.StoredProcedure;
          cmd.CommandText = @"dbo.uspGetPurchaseOrderDetails";

          SqlParameter ProdId = new SqlParameter("@ProductId", SqlDbType.Int);
          SqlParameter ProdName = new SqlParameter("@ProdName", SqlDbType.NVarChar, -1); // -1 = nvarchar(max)
               
          ProdName.Direction = ParameterDirection.Output;
                    
          cmd.Parameters.Add(ProdId).Value = ProductId;
          cmd.Parameters.Add(ProdName);

          conn.Open();

          //use sql reader to execute the query 
          using (SqlDataReader dr = cmd.ExecuteReader())
          {
               Console.WriteLine("ProductID PurchaseOrderID OrderQty\t");
               while (dr.Read())
               {
                 Console.WriteLine(dr[0].ToString() + "\t " + dr[1].ToString() + "\t\t" + dr[2].ToString());
               }
                        
           }
//data reader object must be closed before retrieving the output parameter
Console.WriteLine(ProdName.Value.ToString());
}
....

From the Profiler's trace below we can see that only one RPC request was initiated, for the top-level stored procedure.

Figure 9, RPC request and nested stored procedures

From the trace above we can observe a few interesting things

  • Only the first, top level stored procedure call is initiated through the RPC event. This is the request initiated by ADO.NET.
  • The nested stored procedure call was captured through SP:Starting and SP:StmtStarting events. The input parameter values are not visible(more about this later).
  • The RPC Output Parameter* event is triggered AFTER the RPC:Completed event. This explains why we need to close the data reader object before retrieving the output parameter (see the C# code above).

NOTE: When the RPC is invoked, some or all of its parameters are designated as output parameters. All output parameters will have values returned from the server. For each output parameter, there is a corresponding return value, sent via the RETURNVALUE token. More about this can be found here.

How to capture nested stored procedure parameter values

In the example above, the nested stored procedure call was captured through the SP:Starting and SP:StmtStarting events. It is not possible to capture the nested procedure's input parameter values only by examining these two events. Sql Profiler is not a debugging tool, and it shows only what has been executed. This means that it can show only the parameter values that are initially passed from the data layer, not the ones assigned during the procedure execution.
So how can we capture a nested stored procedure's input parameter values? Well, we can get the values if:

  • The top level stored proc parameter(which we can get through the RPC event) value is passed unchanged to the nested sp.
  • The nested stored proc parameters are passed as literals, e.g. EXECUTE dbo.MyStoredProc @name = 'John Doe', @Age = 42

However, there is another, more interesting way to capture the parameter values.

We can include the Showplan XML Statistics Profile event class in our trace. The event class fires when Sql Server executes an SQL statement and shows complete compile-time and run-time data, including the parameter values we are interested in.

Figure 10, Showplan XML Statistics Profile event class

We can export the plan (XML format) and then search for the ParameterList element.

<ColumnReference  Column="@ProductName" 
                  ParameterDataType="nvarchar(50)" 
                  ParameterCompiledValue="NULL" 
                  ParameterRuntimeValue="N'HL Crankarm'">
</ColumnReference>
<ColumnReference  Column="@ProductId" 
                  ParameterDataType="int" 
                  ParameterCompiledValue="(319)" 
                  ParameterRuntimeValue="(319)">
</ColumnReference>

Now we can analyse the parameters’ runtime/compiled values and the hunt for the parameter sniffing issues can begin 🙂

NOTE: Showplan XML Statistics Profile is an expensive event to track. It may be a good idea to use server-side tracing instead of the Profiler UI.
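As an alternative (my own suggestion, not covered in this post), an Extended Events session on query_post_execution_showplan captures the same actual-plan XML, including the ParameterList, and can be tightly filtered – it is still expensive, so keep the session short. A minimal sketch:

CREATE EVENT SESSION CaptureActualPlans ON SERVER
ADD EVENT sqlserver.query_post_execution_showplan
(
    WHERE (sqlserver.database_name = N'AdventureWorks2012') -- narrow the scope as much as possible
)
ADD TARGET package0.event_file (SET filename = N'CaptureActualPlans');
GO
ALTER EVENT SESSION CaptureActualPlans ON SERVER STATE = START;
-- ... reproduce the stored procedure call, then stop and drop the session
ALTER EVENT SESSION CaptureActualPlans ON SERVER STATE = STOP;
DROP EVENT SESSION CaptureActualPlans ON SERVER;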

Conclusion

In my opinion, it is beneficial for Sql database developers to be familiar with the database access patterns from the client end. Knowing how different components, i.e. ADO.NET, OLEDB, ODBC, TDS etc., fit together can help when making design decisions. The way the Client constructs its data requests dictates the database engine behavior, i.e. Sql Server treats non-parameterised queries as ad-hoc queries, whereas it caches the execution plans for parameterised queries the same way as it does for stored procedures. Understanding how different events (captured by Profiler or Extended Events) relate to the client requests can help us find and understand the root causes of slow running queries, excessive locking etc. that may not be on the Sql Server side but in the data layer.

Thanks for reading

Dean Mincic