

Statistics used in the cached execution plans – Stored Procedures


Summary

The query optimisation process sometimes requires an understanding of how SQL Server's query engine compiles, recompiles and executes SQL batches. Some of the most important elements used by the Query Optimiser when constructing a good enough execution plan are the "interesting statistics" – the statistical information the optimiser loads and relies on during plan compilation.
This blog attempts to explain what the "interesting statistics" are, when they are updated and how the statistical information relates to the query recompilation process. The topic is related to Temporary tables statistics when used in stored procedures.

Batch compilation and recompilation

To begin with, let's analyse the batch compilation/recompilation diagram (by Arun Marathe, Jul 2004, Batch Compilation, Recompilation and Plan Caching Issues in Sql Server 2005). The idea is to create a set of experiments that will capture the behaviour of a stored procedure through the different phases of the query compilation/recompilation process, particularly those related to the statistics that are used to generate the execution plan.


Figure 1, Batch Compilation/Recompilation diagram

I’ve used the AdventureWorks database to set up the test environment and MS Profiler to capture the various events relevant to the experiments.

MS Profiler events

    1. Attention (Errors and Warnings)
    2. Auto Stats (Performance)
    3. SP:CacheHit (Stored Procedures)
    4. SP:CacheInsert (Stored Procedures)
    5. SP:CacheMiss (Stored Procedures)
    6. SP:CacheRemove (Stored Procedures)
    7. SP:Completed (Stored Procedures)
    8. SP:Recompile (Stored Procedures)
    9. SP:Starting (Stored Procedures)
    10. RPC:Starting (Stored Procedures)*
    11. RPC:Completed (Stored Procedures)*
    12. SP:StmtStarting (Stored Procedures)
    13. SQL:StmtRecompile (TSQL)
    14. SQL:StmtStarting (TSQL)

Database Objects
Set the AdventureWorks DB compatibility level to 140 – SQL Server 2017. The version provides easy access to the information about the interesting statistics saved with the query plan (SSMS – SELECT plan operator, Properties, OptimizerStatsUsage).

Below is the set of Sql Server object definitions used for the testing.
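The original object definitions are not reproduced here, so below is a hedged sketch consistent with the narrative. The object and index names (dbo.Products, dbo.SalesOrderDetail, NCI_Products_ListPrice, dbo.TestQueryExecution) come from the text; the database name, column lists and the procedure's predicates are assumptions.

-- Hedged sketch of the test objects (not the original script)
USE AdventureWorks2017;   -- assumed name of the AdventureWorks copy in use
GO
DROP TABLE IF EXISTS dbo.Products;
SELECT ProductID, [Name], ListPrice
INTO dbo.Products                          -- 504 rows
FROM Production.Product;

DROP TABLE IF EXISTS dbo.SalesOrderDetail;
SELECT SalesOrderID, SalesOrderDetailID, ProductID, OrderQty, LineTotal
INTO dbo.SalesOrderDetail                  -- 121,317 rows
FROM Sales.SalesOrderDetail;

ALTER TABLE dbo.Products
    ADD CONSTRAINT PK_Products PRIMARY KEY CLUSTERED (ProductID);
CREATE NONCLUSTERED INDEX NCI_Products_ListPrice
    ON dbo.Products (ListPrice);
CREATE UNIQUE CLUSTERED INDEX CI_SalesOrderDetail
    ON dbo.SalesOrderDetail (SalesOrderDetailID);
CREATE NONCLUSTERED INDEX NCI_SalesOrderDetail_ProductID
    ON dbo.SalesOrderDetail (ProductID);
GO
CREATE OR ALTER PROCEDURE dbo.TestQueryExecution
AS
BEGIN
    SET NOCOUNT ON;
    -- Query 1: uses the ListPrice statistics (NCI_Products_ListPrice)
    SELECT  sod.SalesOrderDetailID, sod.OrderQty, p.ListPrice
    FROM    dbo.SalesOrderDetail AS sod
            INNER JOIN dbo.Products AS p ON p.ProductID = sod.ProductID
    WHERE   p.ListPrice > 1000;             -- assumed predicate

    -- Query 2: predicate on the Name column (triggers auto-created stats)
    SELECT  p.ProductID, p.[Name], p.ListPrice
    FROM    dbo.Products AS p
    WHERE   p.[Name] LIKE N'Mountain%';     -- assumed predicate
END;
GO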

Information about the statistics/indexes on the tables can be retrieved using the queries below.
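One possible way to retrieve that information: sys.stats with sys.dm_db_stats_properties exposes the documented modification counter, while the legacy sys.sysindexes view exposes the rowmodctr/rowcnt columns shown in the figures below.

SELECT  OBJECT_NAME(s.object_id)  AS table_name,
        s.name                    AS stats_name,
        s.auto_created,
        sp.last_updated,
        sp.rows,
        sp.modification_counter
FROM    sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE   s.object_id IN (OBJECT_ID(N'dbo.Products'), OBJECT_ID(N'dbo.SalesOrderDetail'));

SELECT  OBJECT_NAME(si.id) AS table_name,
        si.name            AS index_or_stats_name,
        si.rowcnt,
        si.rowmodctr
FROM    sys.sysindexes AS si
WHERE   si.id IN (OBJECT_ID(N'dbo.Products'), OBJECT_ID(N'dbo.SalesOrderDetail'));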

The following examples assume the default settings for the SQL Server options related to statistics:

  • AUTO_CREATE_STATISTICS ON
  • AUTO_UPDATE_STATISTICS ON
  • AUTO_UPDATE_STATISTICS_ASYNC OFF

A bit of theory first before proceeding with the tests. : )

colmodctr

colmodctr is an ever-increasing counter that tracks the changes made on tables (a counter per column, excluding non-persisted computed columns). colmodctr is not transactionally consistent, which means that it is not affected by rolled-back changes i.e. if a transaction inserts 10 rows in a table and then rolls back, the counter will still report 10 changes.
SQL Server statistics (automatically/manually created/updated) on a column(s) store the snapshot value of the colmodctr for the leftmost column in the stats-blob.
The counter is very important since it is one of the elements that drives the query recompilation decisions related to the "statistics changed" reasons.

colmodctr counter can be accessed through the following system views.


Figure 2, colmodctr, system views – standard and hidden

One way to access the hidden tables is to open a separate SSMS instance, close Object Explorer and create a single connection using the Dedicated Admin Connection, i.e. Server name: ADMIN:(local)
NOTE: The structure of the hidden tables and the tables’ accessibility are not documented and may change in future versions.

Recompile thresholds (RT)

The RT concept defines the number of changes on a table column needed to mark the statistical information on that column as stale.
The changes include column value changes through DML operations such as INSERT, UPDATE, DELETE i.e. inserting 10 new rows in a table is counted as 10 changes (tracked by the colmodctr counters mentioned before).
If the table has no statistical information, i.e. a HEAP table with no indexes and no manually created statistics, and the query plans that reference the table do not load/automatically create interesting statistics, the only relevant change for the RT crossing test is the change in the number of rows inserted and/or deleted.

 | colmodctr(current) – colmodctr(snapshot) |  >= RT

or

 | cardinality(current) – cardinality(snapshot) |  >= RT

current – refers to the current value of the modification counter
snapshot – refers to the value of the modification counter captured during the last plan compilation (recompilation).
cardinality* – the number of rows in the table.

*cardinality has different meanings in different contexts: cardinality may represent the uniqueness of data values in a particular column – the lower the cardinality, the more duplicated values in the column.
Cardinality is also a way to define the relationship between two entities in a data model. It is also known as the degree of relationship, i.e. 1:1, 1:M, M:N.

The Threshold Crossing Test evaluates to TRUE if the number of changes is greater than or equal to the predefined RT value (see Figure 3).

Recompilation thresholds(RT) for all the tables referenced in the query are stored along with the query plan.

RT depends on the table type(permanent vs temporary) and the number of rows in the table.


Figure 3, Recompile thresholds

Special case. RT = 1 if the table has 0 rows (with or without statistics)

NOTE: Starting from SQL Server 2008 R2 SP1, Microsoft introduced trace flag TF2371. The trace flag activates a dynamic recompile threshold calculation – the more rows a table has, the lower the RT becomes relative to the static formula. The functionality is implemented to allow automatic statistics updates to kick in more frequently for big tables. i.e. the static RT for a 10,000 row table is 500 + 0.20*10,000 = 2,500 – the number of changes required to trigger a query recompile. For a table with 100M rows, the static RT is 20,000,500. For some applications that RT may be too high, resulting in sub-optimal plans due to the lack of query recompilation. Hence TF2371.
Starting from SQL Server 2016, the TF2371 behaviour is turned on by default.

Here are a couple of examples to illustrate Figure 3.
If table A contains 230 rows, the RT for the table will be set to 500. This means that if we insert 500 rows, the total number of rows (c) changes to 730 (c >= 230 + 500), which is enough changes to make the table's statistics stale.
The change itself does not mean much if there are no queries that reference the table : )
The query plans may or may not initiate auto-statistics creation on specific table columns. Also, the referenced tables may not have any statistical information, i.e. a HEAP table with no non-clustered indexes.

Experiments

Experiment 1 (stats change before query execution)

In this experiment we will make "enough" changes to the ListPrice column (dbo.Products table) BEFORE running the stored procedure for the first time.
The column is the key column of NCI_Products_ListPrice, a non-clustered index, and has statistical information attached to it (the stats object name is the same as the NCI).

Let’s begin the experiment by creating the test objects and checking the statistical information on the tables.

Step 1, Check the initial stats/rowmodctr information

Figure 4, Initial rowmodctr information

Step 2, Check stats BLOB and make changes on dbo.Products table

Run the DBCC  command below before and after the UPDATE to confirm that there were no changes in the stats BLOB information.
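A sketch of the check and the update: the stats-blob header can be inspected with DBCC SHOW_STATISTICS; the UPDATE statement itself is an assumption (per the figures, the original changes produced 610 modifications on ListPrice in total).

-- Stats BLOB header for the ListPrice statistics (run before and after the UPDATE)
DBCC SHOW_STATISTICS (N'dbo.Products', NCI_Products_ListPrice) WITH STAT_HEADER;

-- Make "enough" changes on the ListPrice column (assumed statement; the
-- original activity resulted in 610 modifications according to the figures)
UPDATE dbo.Products
   SET ListPrice = ListPrice * 1.05;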

NOTE: rowmodctr is not transactionally consistent.

Figure 5, stats BLOB information

Figure 6, rowmodctr after the initial dbo.Products update

The changes are detected and available through Sql Server’s metadata.

Step 3, Run the stored procedure and observe the events captured by the Profiler.

Figure 7, Statistics refresh

Following the batch compilation diagram we can identify the following steps.

  1. Cache Lookup step resulted in the SP:CacheMiss event. The dbo.TestQueryExecution stored procedure does not exist in the cache.
  2. Query Compilation Begins. The SQL Server engine is about to load all of the “interesting statistics”. The loaded statistics can be retrieved from the Actual Execution Plan, the SELECT physical operator – OptimizerStatsUsage property.
  3. The query engine checks if any of the loaded interesting statistics are stale. If yes, the system stops the batch compilation process and refreshes the statistics. In our case the system has:
    • Identified the number of changes made on the ListPrice column. From the stats/index information gathered after the initial update, the number of changes (rowmodctr/Modifications) is 610
    • Performed the RT crossing test. The test passed since the number of changes (610) exceeded the RT for tables with more than 500 rows: RT = 500 + 0.20 * 504 ≈ 601, and 601 < 610
    • Executed StatMan, an internal process which automatically maintains statistics. The process updated the stale statistics NCI_Products_ListPrice on the dbo.Products table

      If we check the stats blob from the Step 2, we will see that the Updated column changed its value to the current date – the stats blob has been updated.
      The AutoStats event reported the UPDATE of the statistics with EventSubClass = 1 – Other. More on the event can be found here.

  4. Query Optimiser starts to generate the query plan – a plan for each query statement.
    • The second query in the batch has a predicate on the Name column of the dbo.Products table. In an attempt to make better cardinality estimates for the rows that need to be processed, the Query Optimiser decided to automatically create statistical information on the column.
      The system stops the batch compilation process and again executes the StatMan process to create the new statistics.

      After creating the stats, QO decided not to use it  : (
      Below is the list of the “interesting statistics” loaded during the Query compilation process. The list does not include automatically created stats on the Name column.

      As a result of the updated statistics on the ListPrice column, the rowmodctr for the column was reset.

    • QO sets the new recompilation thresholds(RT) for all tables used in the queries.
      1. RT(dbo.SalesOrderDetail) = 500 + 0.20 * 121,317 = 24,763.4 (~24,764)
      2. RT(dbo.Products) = 500 + 0.20 * 504 = 600.8 (~601)
        This means that QO will initiate a query recompile due to the “Statistics changed” reason if
        1. dbo.SalesOrderDetail has
          1. 24,764 or more inserted/deleted rows
          2. 24,764 or more changes on the SalesOrderDetailID, ProductID columns
        2. dbo.Products has
          1. 601 or more inserted rows
          2. 601 or more changes on the ProductID, ListPrice, Name columns
  5. The query execution starts. The query plans are constructed and cached. SP:CacheInsert event reported that the stored procedure has been cached.

Experiment 2 (stats change during the query execution)

In this experiment we will make “enough” changes to the Name column (dbo.Products table) HALFWAY THROUGH the stored procedure execution.

Step 1 Set up the environment

  • Run the script to reset the test environment
  • Add a WAITFOR statement between the two queries in the stored procedure (see the sketch after this list)
  • Use PowerShell to execute the stored procedure. Add HostName property. Use the HostName to capture only the events related to the PS call. This will prevent MS Profiler from capturing events related to the UPDATE statement that will run in parallel.
  • Add an ApplicationName filter to the Profiler trace (ApplicationName LIKE experiment)
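A sketch of the altered procedure for the second bullet above, assuming the same procedure body as in Experiment 1; the 6-second delay matches the delay mentioned in the trace analysis further down.

CREATE OR ALTER PROCEDURE dbo.TestQueryExecution
AS
BEGIN
    SET NOCOUNT ON;
    SELECT  sod.SalesOrderDetailID, sod.OrderQty, p.ListPrice
    FROM    dbo.SalesOrderDetail AS sod
            INNER JOIN dbo.Products AS p ON p.ProductID = sod.ProductID
    WHERE   p.ListPrice > 1000;

    WAITFOR DELAY '00:00:06';   -- give the concurrent UPDATE time to run

    SELECT  p.ProductID, p.[Name], p.ListPrice
    FROM    dbo.Products AS p
    WHERE   p.[Name] LIKE N'Mountain%';
END;
GO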

Step 2, Run the PowerShell cmdlet, switch to SSMS and run the UPDATE queries below. The queries will generate enough changes to make the automatically created statistics on the Name column stale.
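The exact original UPDATE queries are not shown, but a statement like the one below, run while the procedure is inside the WAITFOR delay, is one way to exceed the ~601-change threshold on the Name column (the filter and the GO 2 batch repetition are assumptions).

-- Run in SSMS while the stored procedure is waiting
UPDATE dbo.Products
   SET [Name] = [Name];   -- touches 504 rows per run
GO 2                      -- repeat the batch to exceed the ~601-change RT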

Step 3. Analyse the captured MS Profiler trace.
Figure 8, Query recompile

  • The first thing that is different from the last run is the SP:CacheHit event. The event shows that our stored procedure was found in the Plan cache. The previously set RTs and the interesting statistics are part of the cached information.
    NOTE: Auto created statistic on the Name column was not used during the initial query compilation – the stats are not part of the interesting stats.
  • This time there were no changes on the columns that would initiate statistics updates, no new auto created stats and the existing cached query plan does not need to be recompiled due to “statistic changed” reasons. The process proceeds with the query execution.
  • The first query is successfully executed, followed by the WAITFOR statement. During the statement execution (6 seconds delay) a separate query made enough changes on the Name column (dbo.Products) to pass the RT crossing test for the table and to flag the auto-created statistics on the column as stale. Even though they were not used by QO during the plan generation, the stats are marked as stale.
  • (1) The query execution stops at the “Any stats stale?” step. The system initiates the query recompile process – SP:Recompile due to the 2 – Statistics changed reason. The event is followed by the statement level SQL:StmtRecompile event which indicates that only the second query needs to be recompiled.
  • (2) Again, the StatMan process kicks in and updates the stale statistics. The RTs are set (in this case the number of rows has not changed, hence the RTs stayed the same). The rowmodctr value for the Name column is reset to 0.
  • (3) The AutoStats event reported a Statistics Update with EventSubClass = 1 – Other
  • (4) The SP:StmtStarting event reports that the second query has been recompiled and the batch execution continues.

Experiment 3 (tables with no stats on columns)

The experiment demonstrates how queries get recompiled when referencing tables with no statistics. The recompiles due to the “statistics changed” reasons are initiated by the RT-table cardinality crossing test results only.
As previously mentioned, the cardinality based RT crossing test is defined as

 | cardinality(current) – cardinality(snapshot) |  >= RT

Let’s create a test table and a stored procedure to perform the above experiment.

Step 1, set up the test environment

Add some data to the table..
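A hedged sketch of the whole setup (the object names, column list and filler values are assumptions): a heap with no indexes and no statistics, a procedure whose query does not trigger auto-created column statistics, and 230 initial rows – a figure consistent with the rowcnt = 730 shown after 500 more rows are inserted in Step 3.

DROP TABLE IF EXISTS dbo.TestTableNoStats;
CREATE TABLE dbo.TestTableNoStats
(
    Id      INT         NOT NULL,
    Filler  VARCHAR(50) NULL
);
GO
CREATE OR ALTER PROCEDURE dbo.TestNoStats
AS
BEGIN
    SET NOCOUNT ON;
    SELECT COUNT(*) AS NoOfRows
    FROM   dbo.TestTableNoStats;   -- no column predicate, so no auto-created stats
END;
GO
-- Add some data (230 rows)
INSERT INTO dbo.TestTableNoStats (Id, Filler)
SELECT TOP (230) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)), 'test row'
FROM sys.all_columns;
GO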

The initial statistical information looks like this (see the beginning of the post for how to retrieve the metadata related to the statistical information).


Figure 9, rowmodctr with no statistical information

Step 2, Run the stored proc for the first time. The RT is set to 500.

Step 3, Make enough changes to the table to pass the cardinality crossing test. Insert 500 rows. Do not use an explicit transaction yet.

Step 4, Run the stored procedure again and observe the query execution behaviour in Profiler.

Figure 10, Query recompile, table cardinality change – no stats

  • The new rowmodctr information looks like

    The new number of rows (rowcnt) is recorded along with the number of changes, rowmodctr = 730. In this case the rowmodctr value is not relevant since the RT crossing test depends only on the changes in the table cardinality. This will become more visible when we ROLLBACK the row insertion, which is covered later.
  • The second execution followed the “Cache lookup = Success” path (see the batch compilation diagram) and failed at the very last step, “Any stats stale?“.
  • At this stage, the system has detected that the RT cardinality crossing test has passed due to the number of changes(new rows) inserted in the table.
  • The system stopped the execution process and  initiated the stored proc/statement recompile – SP:Recompile, SQL:StmtRecompile.  As in the previous examples, the reason for the recompile was 2 – Statistics changed.
    NOTE: The recompile process is not followed by the StatMan process since the query does not have any statsBlob information to be refreshed/created.

Experiment 3.1 (rowmodctr not in use)

The next example shows that the RT cardinality crossing test is not related to rowmodctr as it may seem from the previous example where the number of changes followed table cardinality changes.

  • Follow the steps from the previous example.
  • Execute the INSERT query from Step 3 within an explicit transaction and roll it back (see the sketch after this list)
  • Observe that there are no query recompiles due to “statistics changed” since there were no table cardinality changes – the ROLLBACK “cancelled” the row insertions.
  • The statistical information shows that the rowmodctr = 720.
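A sketch of the rolled-back insert referenced in the first bullet (row count and filler values assumed):

BEGIN TRANSACTION;

    INSERT INTO dbo.TestTableNoStats (Id, Filler)
    SELECT TOP (500) 1000 + ROW_NUMBER() OVER (ORDER BY (SELECT NULL)), 'rolled back row'
    FROM sys.all_columns;

ROLLBACK TRANSACTION;   -- cardinality is unchanged, but rowmodctr still moves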

Conclusion

The query compilation, execution and recompilation sequence, among other steps, includes loading the interesting statistics – the statistical information on different table columns that the Query Optimiser may find useful when creating a good plan – and auto-creating statistical information on the columns that participate in e.g. WHERE filters, GROUP BY etc.
The SQL Server query engine also checks the validity of the loaded statistical information during the initial stored procedure compilation and again during the stored procedure execution phase. If the loaded statistics are found to be stale during compilation, the engine pauses the stored procedure compilation, refreshes (re-samples/updates) the loaded statistical information and continues the compilation process. If the query engine detects stale loaded statistics during the execution phase, the process stops, refreshes (re-samples/updates) the statistics and restarts the compilation process – query recompilation. The recompiles are done per query, not per batch.
The examples in this blog showed that the statistical information can be automatically maintained by the queries that use it. Statistics can also be maintained manually.
To mark statistics as “stale”, QO uses the Recompile Threshold (RT) crossing test. The test tracks the number of changes on the significant (leftmost) columns within the statistics BLOBs. The information is stored in an ever-increasing, non-transactionally-consistent counter – “rowmodctr”. The RTs are stored per table within the compiled query.
For tables with no statistics, the RT crossing test can be based only on the changes in the number of rows in the table.

 

Thanks for reading.

Dean Mincic


Client Requests & Sql events in Sql Server


Summary

Very often we, the sequel guys, use MS Profiler/SQL Server Extended Events to intercept and analyse the traffic between client applications and our precious SQL Server. Sometimes we capture an hour/day of traffic, and sometimes we just want to capture a slow running stored procedure’s input parameter values. We are also involved in data layer code reviews in order to keep client data requests as optimal as possible. This post is a sort of internal notes/documentation about the different types of requests that can be sent from clients to SQL Server and the corresponding events visible in MS Profiler. The article also gives a high-level overview of the space between ADO.NET and SQL Server.

MDAC – Microsoft Data Access Components

To start off, let’s first take a brief look at the MDAC framework and where ADO.NET sits in the stack. MDAC technology allows applications to communicate with almost any data-store.
MDAC architecture implements three logical layers:

  • Programming interface layer (ADO, ADO.NET)
  • DB access layer (ODBC, OLEDB, .NET Managed providers)
  • Data store

All of the layers are accessible through the MDAC API. (+ MS Sql Server network library)*


Figure 1, MDAC architecture

NOTE: The Microsoft Network Library (Net-lib) is also part of MDAC and sits between the TDS (Tabular Data Stream) formatter and the network protocol. The DLLs are specifically used by SQL Server to communicate with the clients on the network level. It supports the network protocols: Named Pipes, TCP/IP, Shared Memory and VIA.

ADO.NET

ADO.NET provides a generic interface (a set of classes that expose data access services) to a number of different data-stores (SQL Server, CSV, Excel..) from .NET applications. It is a set of libraries included in the MS .NET Framework.
The main objectives of ADO.NET are to:

  • connect to a data source
  • submit queries
  • process results
  • provide a powerful disconnected data structure to perform data operations offline.

There are two main  components of ADO.NET for accessing and manipulating data

    • .NET Framework data providers – a set of class libraries that understand how to interact with a specific data platform e.g. Sql Server, or with a common data layer i.e. OLE DB. Data providers transport data between proprietary data platforms and the generic ADO.NET data layer. The ADO.NET framework includes three providers:
      • The MS Sql Server Provider – exposed through the System.Data.SqlClient namespace
      • The OLE DB provider – exposed through the System.Data.OleDb namespace.
      • The ODBC provider – exposed through the System.Data.Odbc namespace.
    • DataSet – a class(System.Data namespace) that provides a disconnected representation of result sets from the Data Source. The object provides a consistent relational programming model regardless of the data source. It includes objects like: DataTable, DataRow,  DataView, DataColumn, PrimaryKey ..etc

The key classes within each provider(each provider has its own specific class names for the same objects) include:

  • Command (SqlCommand),
  • Connection (SqlConnection),
  • DataAdapter (SqlDataAdapter)
  • DataReader (SqlDataReader).


Figure 2, ADO.NET components, high overview

TDS (Tabular data stream)

TDS is an application layer protocol used to transfer tsql commands between client and server. The protocol includes facilities for authentication and identification, encrypted negotiation, issuing of Sql batch (language events) and stored procedure (RPC) calls, and managing and distributing the outcome of atomic transactions. TDS describes the names, types and optional descriptions of the rows being returned. It was developed by Sybase in 1984 and adopted by Microsoft in 1990. The MS-TDS (Microsoft’s version of the protocol) specification can be found here.

Figure 3,Communication flow in TDS

More on how the Client talks to Sql Server

Client communicates with Sql Server in a layered fashion. Each layer exchanges information with its neighboring layer. Sometimes the layers can be bypassed.  Only the Transport protocol layer enables physical communication between Client and Sql Server. For all other layers the exchange of information is done locally – in RAM memory and  between different dlls.
The layers are:

  1. Application Layer (C# code, etc)
  2. High level data access API (ADO,ADO.NET…)
  3. Client data interface(OLEDB,ODBC, db-lib ..)
  4. Client TDS Formatter. The tsql commands sent to SQL Server have to be in TDS format.
  5. Client net protocol (Shared memory, TCP/IP sockets, net-lib)
  6. Client transport protocol (TCP/IP ..)
  7. Server transport protocol (TCP/IP ..)
  8. Server net protocol (Shared memory, TCP/IP sockets, net-lib)
  9. Server data interface
  10. SQL Server

NOTES: The application layer (1) can bypass the ADO.NET level (2) if we directly access the client data interface (direct API calls to OLE DB, ODBC etc).
The client net protocol (5) and client transport protocol (6) can be bypassed if the Client and SQL Server communicate through the Shared Memory protocol. In this scenario the Client and SQL Server run on the same machine. The components exchange information through the RAM memory space. The network elements are not required.

Client data requests

The only way clients can interact with Sql Server is by sending requests that contain TSQL commands for the database engine. The requests are sent over MS-TDS protocol(Microsoft’s version of TDS).
There are three main forms of the requests

  • Batch requests
  • RPC (Remote Procedure Call) requests
  • Bulk load requests

In the following examples I’ll use ADO.NET components to present the different types of client requests. I’ll use MS Profiler to capture and analyse the events which correspond to the requests.

Events

When SQL Server receives a message from a client (through TDS), it can be thought of as an event occurring – see Figure 3. First, the client sends a connection request (a login event) and gets back a success or failure response. Once connected, client requests typically fall into one of two categories:

  • Language events
  • RPC events

Language Events

A language event is a tsql batch sent from the client to the server. From the ADO.NET perspective a tsql batch* is a set of tsql commands defined in the CommandText property of the SqlCommand class. The command can be executed using one of the methods: ExecuteReader, ExecuteScalar, ExecuteNonQuery and ExecuteXmlReader.

The CommandType enum property of the SqlCommand class specifies how the command string is interpreted. The field values can be:

  • CommandType.StoredProcedure (contains the name of a stored procedure or a scalar function)
  • CommandType.Text (contains a tsql batch – one or more tsql statements, no GO directives)
  • CommandType.TableDirect (contains a table name. Only supported by the .NET Framework Data Provider for OLE DB)

NOTE: A tsql batch can be defined as one or more tsql statements before the GO* directive (batch terminator). This is specific to the Microsoft SQL stack tools i.e. SSMS, sqlcmd, osql (and Sybase’s isql). GO is not part of the tsql language and may not be recognised by other similar tools. It is not permitted to use the GO directive within the CommandText property.
After a TDS request reaches SQL Server, the db engine will create a TASK to handle the request. One task handles only one batch (a task may spawn a number of sub-tasks to support parallel execution). When ADO.NET “executes” a batch (one or more tsql commands), one TASK will handle the request. When SSMS executes e.g. two batches (the same tsql command twice – GO 2, or two tsql statements divided by the GO terminator), the SQL Server engine (SQLOS) allocates two separate TASKs for the job.

Non-parameterised batch request

This type of request does not have parameters. It can contain local variable declarations. The request is treated by the SQL Server engine as an ad-hoc query.
Ad-hoc queries can be cached and reused only if a subsequent batch matches exactly – an exact textual match, e.g. the pair of batches below.
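A hedged illustration (the AdventureWorks table and the literal values are assumptions): two ad-hoc batches that differ only in the literal value and formatting, so each gets its own cached plan.

SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 870;
GO
SELECT *  FROM Sales.SalesOrderDetail WHERE ProductID = 707;
GO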

The query plans above are compiled and cached separately. This behaviour can lead to the problem known as “plan cache pollution”.
The SQL Server optimiser may decide to automatically parameterise an ad-hoc query, but it’s very conservative when making those decisions. The query template must be considered safe, meaning that the same query plan doesn’t change even if the actual parameter values change – the parameterisation must not degrade query performance. Also, the query engine decides on the auto-parameter data type (smallint in the example below). The data type may be changed due to a change in the input parameter size (PurchaseOrderID = 3420 can fit in a 2-byte smallint, but PurchaseOrderID = 210 fits into a 1-byte tinyint). In that case the optimiser caches another copy of the plan that includes the new data type.
In addition, many query elements disallow automatic parameterisation, such as JOIN, IN, INTO, DISTINCT, TOP… just to name a few.

Let’s execute the second query through a simple console application. The predicate value will be passed as a part of the batch. Pass 707 as a command line argument (ProductId = 707).
NOTE: We can also use SSMS to run the batch.

Figure 4, MS Profiler – tsql batch execution events 

The Client request is executed as a tsql batch. SQL:BatchStarting signals that the batch execution is starting.  SQL:StmtStarting event was fired for each statement in the batch(once in this example).

RPC Events

There are two basic ADO.NET client request types which invoke RPC (Remote Procedure Call) events:

  • Parameterised batch request – uses SQL Server’s sys.sp_executesql system extended stored procedure
  • Stored procedure and scalar user defined function calls

NOTE: The RPC (Remote Procedure Call) name came from Sybase and, in the context of the client request, represents a type of formatting of the query inside a TDS packet before it is sent to SQL Server. The requests are pre-formatted by the driver/provider i.e. OLEDB, ODBC, JDBC etc.
Windows RPC (Remote Procedure Call) is a completely different technology which implements interprocess communication (IPC) that allows applications to talk to each other locally or over the network. More on Windows RPC here.

Parameterised batch request

A parameterised request has parameter values passed to the tsql statement separately. The parameter input is treated as a literal value, not as executable code.
This type of request significantly improves the way SQL Server processes the tsql commands. The sql batch separates the “body” of the query from the data values managed by that query.
Let’s change the SqlCommand object from the previous example to use a parameterised tsql statement and see what happens.
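Since all sketches in this post are T-SQL, instead of the C# change itself here is roughly what the parameterised request resolves to on the server side once the provider wraps it in a sys.sp_executesql RPC call (the parameter name is an assumption):

EXEC sys.sp_executesql
     @stmt   = N'SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = @ProductId;',
     @params = N'@ProductId INT',
     @ProductId = 707;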

Figure 5, MS Profiler – parameterised batch RPC event

This time ADO.NET constructs the request as an RPC (Remote Procedure Call) type. The client sends an RPC message data stream (TDS protocol level) to SQL Server. The message contains the procedure name (or, in this case, the ProcID) and the parameters. ProcID identifies the SQL Server internal stored procedure to be executed. In the example above ProcID = 10. The id value maps to the sys.sp_executesql extended stored procedure in SQL Server.

Figure 6, sys.sp_executesql extended stored procedure

NOTE: We cannot use SSMS to invoke RPC event types since it’s not possible to send Parameters as a part of the TDS binary stream.

The parameterised batch request results with a cached plan, just like a regular stored procedure call.

Figure 7, parameterised batch request – cached plan

Stored procedure call

This type of RPC request is similar to the parameterised batch request. The main difference is that the commandType enum property of the SqlCommand Class is set to “StoredProcedure“, and the CommandText property value represents the name* of the stored procedure.

Quick note: In order to avoid cache misses/compile locks, use the two part naming convention when referencing  sql objects – dbo.uspGetPurchaseOrderDetails. More about sp caching can be found here.

Let’s encapsulate the previous tsql batch in a stored procedure…
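A hedged sketch of such a procedure. The name follows the two-part naming advice above; the column list and body are assumptions based on the earlier parameterised statement.

CREATE OR ALTER PROCEDURE dbo.uspGetPurchaseOrderDetails
    @ProductId INT
AS
BEGIN
    SET NOCOUNT ON;
    SELECT  SalesOrderID, SalesOrderDetailID, OrderQty, UnitPrice
    FROM    Sales.SalesOrderDetail
    WHERE   ProductID = @ProductId;
END;
GO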

and then modify CommandType and CommandText properties as follows…

Figure 8, MS Profiler – RPC Stored proc call

Using the RPC:Starting and/or SP:Starting events we can find out the value of the stored procedure’s input parameter(s). It is interesting to notice the absence of the stored procedure’s ObjectId within the RPC call. The initial request doesn’t know what ID (object_id) SQL Server uses to identify the stored procedure.

Not directly related to the topic, but a nevertheless interesting observation, is the SP:CacheMiss event. The event describes the failed cache lookup for the stored procedure on the very first run, as the plan was not yet generated and stored in the cache.

Now, let’s execute a  stored procedure that has a nested sp.

… add an output parameter to the code ..
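A hedged sketch of that setup: a wrapper procedure with an OUTPUT parameter that calls the previous procedure as a nested stored procedure (the wrapper name and the counting logic are assumptions).

CREATE OR ALTER PROCEDURE dbo.uspGetPurchaseOrderDetailsWrapper
    @ProductId INT,
    @NoOfRows  INT OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    -- nested stored procedure call
    EXEC dbo.uspGetPurchaseOrderDetails @ProductId = @ProductId;

    SELECT @NoOfRows = COUNT(*)
    FROM   Sales.SalesOrderDetail
    WHERE  ProductID = @ProductId;
END;
GO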

From the Profiler’s trace below we can see that only one RPC request was initiated, for the top level stored procedure.

Figure 9, RPC request and nested stored procedures

From the trace above we can observe a few interesting things

  • Only the first, top level stored procedure call is initiated through the RPC event. This is the request initiated by ADO.NET.
  • The nested stored procedure call was captured through SP:Starting and SP:StmtStarting events. The input parameter values are not visible(more about this later).
  • The RPC Output parameter* event is triggered AFTER the RPC:Completed event. This explains why we need to close the DataReader object before retrieving the output parameter (see the C# code above)

NOTE: When the RPC is invoked, some or all of its parameters are designated as output parameters. All output parameters will have values returned from the server. For each output parameter, there is a corresponding return value, sent via the RETURNVALUE token. More about this can be found here.

How to capture nested stored procedure parameter values

In the example below, the nested stored procedure call was captured through the SP:Starting and SP:StmtStarting events. It is not possible to capture the nested procedure’s input parameter values only by examining these two events. SQL Profiler is not a debugging tool and it shows only what has been executed. This means that it can show only the parameter values that are initially passed from the data layer, not those assigned during the procedure execution.
So how can we capture nested stored procedure input parameter values? Well, we can get the values if:

  • The top level stored proc parameter(which we can get through the RPC event) value is passed unchanged to the nested sp.
  • The nested stored proc parameters are passed as literals i.e  .. EXECUTE dbo.MyStoredProc @name='John Doe', @Age=42 ..

However, there is another, more interesting way to capture the parameter values.

We can include the Showplan XML Statistics Profile event class in our trace. The event class fires when SQL Server executes an SQL statement and shows complete, compile-time data, including the parameter values we are interested in.

Figure 10, Showplan XML Statistic Profile event class

We can export the plan(XML format) and then search for the ParameterList element.
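As a hedged alternative to manual export, the ParameterList element can also be pulled out of cached plans with T-SQL/XQuery. Note the difference: cached plans expose the compiled parameter values only, while the runtime values appear in the actual plan captured by the Showplan event; the procedure name in the filter is an assumption.

;WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT  OBJECT_NAME(st.objectid, st.dbid)                       AS object_name,
        prm.n.value('@Column', 'sysname')                       AS parameter_name,
        prm.n.value('@ParameterCompiledValue', 'nvarchar(200)') AS compiled_value
FROM    sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle)   AS st
CROSS APPLY qp.query_plan.nodes('//ParameterList/ColumnReference') AS prm(n)
WHERE   st.objectid = OBJECT_ID(N'dbo.uspGetPurchaseOrderDetails');  -- assumed proc name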

Now we can analyse the parameters’ runtime/compiled values and the hunt for the parameter sniffing issues can begin 🙂

NOTE: Showplan XML Statistics Profile is an expensive event to track. It may be a good idea to use server side tracing instead of the Profiler UI.

Conclusion

In my opinion, it is beneficial for SQL database developers to be familiar with the database access patterns from the client end. Knowing how the different components i.e. ADO.NET, OLEDB, ODBC, TDS etc. fit together can help when making design decisions. The way the client constructs its data requests dictates the database engine behaviour i.e. SQL Server treats non-parameterised queries as ad-hoc queries, whereas it caches the execution plans for parameterised queries the same way as it does for stored procedures. Understanding how the different events (captured by Profiler or Extended Events) relate to the client requests can help us find and understand the root causes of slow running queries, excessive locking etc. that may not be on the SQL Server side but in the data layer.

Thanks for reading

Dean Mincic

 

 

 


Sysname datatype and NULL column constraint


Sysname is a system supplied user-defined data type that is functionally equivalent to NVARCHAR(128). This unicode data type is used to reference database object names. The maximum length of SQL Server object names is 128 unicode characters, with the exception of local temp tables*.

*The maximum length of a local temporary table name is 116 (N’#’ + 115) unicode characters. The “missing” 12 characters belong to the hexadecimal value appended to the name. The value makes the temp table names unique across a SQL Server instance.

When we create a table, by default, the columns can store NULL values.
The dynamic query below creates a table with 34 columns. Each column is of a different SQL Server data type.


Figure 1, Create table script
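A reduced, hedged version of the idea (the original dynamic script created 34 columns, one per data type; only a few representative columns are shown here, and all names except COL26_sysname are assumptions). No explicit NULL/NOT NULL constraints are specified.

USE tempdb;
GO
DROP TABLE IF EXISTS dbo.DefaultColumnConstraints;
CREATE TABLE dbo.DefaultColumnConstraints
(
    COL01_int       int,
    COL02_varchar   varchar(10),
    COL03_datetime  datetime,
    COL26_sysname   sysname,      -- NOT NULL by default
    COL31_timestamp timestamp     -- rowversion; NOT NULL by default
);
GO
-- The insert fails on the sysname column with Msg 515 (see below)
INSERT INTO dbo.DefaultColumnConstraints (COL01_int, COL02_varchar, COL03_datetime, COL26_sysname)
VALUES (NULL, NULL, NULL, NULL);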

We didn’t explicitly specify the NOT NULL column constraints. This means, by default, that it’s possible to store NULL values in the columns (the columns are nullable). However, column COL26_sysname refuses to follow the rule 🙂

Msg 515, Level 16, State 2, Line 108
Cannot insert the value NULL into column ‘COL26_sysname’, table ‘tempdb.dbo.DefaultColumnConstraints’; column does not allow nulls. INSERT fails.

The statement has been terminated.

A column of the SYSNAME data type is not nullable by default. To allow NULL values to be stored in the column, we need to explicitly specify the nullability of the column.
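For example:

ALTER TABLE dbo.DefaultColumnConstraints
    ALTER COLUMN COL26_sysname sysname NULL;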

After the change the insert query executes with no errors.

Figure 2, Nullable Columns

Timestamp is another data type that does not allow NULL values by default.

The query below selects all non-nullable columns (by default).
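One hedged way to get this from the metadata, assuming sys.types.is_nullable reflects the default nullability of each system type:

SELECT  name, system_type_id, is_nullable
FROM    sys.types
WHERE   is_user_defined = 0
  AND   is_nullable = 0;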


Figure 3, Not nullable datatypes by default

Side note: 
TIMESTAMP is the data type synonym for the ROWVERSION data type (SQL Server 2012+). A ROWVERSION is an automatically generated, unique 8-byte binary number within a database. It is simply an incrementing number generally used to indicate a row version.

Columns of the ROWVERSION data type are automatically assigned a new, unique binary number. It is not possible to insert any value except NULL (which then gets replaced with a proper binary number). Column NULL constraints do not apply to the ROWVERSION data type.


Figure 4, Rowversion data type

A question for the audience would be: why is rowversion the only SQL Server system data type that is represented by its synonym, timestamp, in sequel’s metadata?

Figure 5, timestamp vs rowversion

Conclusion

By default, all SQL Server system data types are nullable when assigned to table columns – the exceptions being sysname and timestamp (rowversion). In the case of SYSNAME, the default behaviour does not allow NULL values. It is a good practice to explicitly include NULL constraints (nullability) when creating a table.

Thanks for reading.

 

ORDER BY – Changes everything

SQL programming has a few unique aspects such as three-valued logic (3VL), set based programming logic and the logical processing order. Deviation from the programming patterns based on the unique nature of the language usually leads to poorly performing SQL code.

RDBMS fundamentals

SQL Server as an RDBMS (Relational Database Management System), and its SQL dialect TSQL (Transact-Structured Query Language), have a strong foundation in mathematics. TSQL, as SQL Server’s language for data management and manipulation, is based on two cornerstones of mathematics – set theory and predicate logic.

Edgar F. Codd was a computer scientist who invented the relational model for database management systems. He used mathematical theories (set theory and predicate logic) to create a general theory of data management.
IBM’s System R is the very first database system built on Codd’s relational model. In order to “force” DB vendors to follow the relational principles when designing db systems, Codd published his 12 rules which describe a relational database.
Various database vendors (Oracle, Sybase, Sql Server..) implemented the relational principles in a similar but different way. This means that the db systems “slightly” deviate from Codd’s relational theory, and the theory itself “slightly” deviates* from the original set theory and predicate logic – but that’s ok 🙂

*e.g. one of the differences between predicate logic and the relational model is that the former supports two-valued logic (True/False), whereas the relational model supports three-valued logic (True, False and Unknown).

Although tsql querying design patterns are derived from relational algebra, TSQL does not strictly follow the relational rules. One of the differences, which is also related to this article, is that a relation is not sorted → there is no relevance to the order of the elements in a set. However, TSQL allows us to sort (ORDER BY) the final result-set (the one that is returned to the client). We are allowed to replace the relational result (the “final product” of the relational operations in a relational database based on relational theory) with a non-relational result – a CURSOR. That is why ORDER BY “changes everything” 🙂

Logical query processing sequence

To understand the ORDER BY operation, it is important to understand the logical query processing sequence. The concept is unique to the SQL world and defines the order of execution of the different query segments. It’s also known as the theoretical execution order because the sequence may be changed by the SQL Server optimiser due to many different reasons i.e. data access patterns, query simplification etc. The query sequence:

  1. FROM – The FROM phase identifies the query’s source tables. This phase processes CROSS JOIN, ON Filter and in case of OUTER JOINs it adds the non-matched rows back to the result-set.
  2. WHERE Filter. Removes all those rows from the previous stage for which the predicate evaluates false.
  3. GROUP BY – Performs grouping and aggregation calculations. This phase generates one row per group.
  4. HAVING – This is the last of the three query filters. It filters the results of aggregate functions.
  5. SELECT – Processes the elements in the Select clause i.e Evaluates expressions within Select statement and/or removes duplicates(DISTINCT) and/or filters the specified percentage/number of rows(TOP) defined by the ORDER BY clause. The phase returns TABLE.
  6. ORDER BY – Sorts the rows according to the list of columns specified and returns CURSOR.
  7. FOR XML/FOR JSON – this phase converts the tabular result from the SELECT statement into XML/JSON output.

For the queries that include SET operations, the logical sequence will be.

  1. Query 1
  2. SET operation (union, except, intersect) or multi-set Union All
  3. Query 2
  4. Order By

Order by and …

The ORDER BY operation guarantees an ordered result set – a cursor. TSQL deviates from the ANSI standard and allows us to sort results using columns and/or expressions based on columns that are not part of the SELECT statement (this is not true if the query uses the DISTINCT clause). The ANSI standard allows sorting only by the columns mentioned in the select statement. ORDER BY can also be used as a logical sorting operator when operating as a part of the TOP/OFFSET-FETCH functions.

Because ORDER BY converts query result-set into a CURSOR, it cannot be used to define table expressions;

  • Views
  • ITVF(Inline table valued functions aka parameterised views)
  • Derived tables
  • Subqueries
  • CTE(common table expressions)

… unless used as a logical sorting operator with TOP, OFFSET-FETCH, FOR XML/JSON.

The CURSOR must be returned to the client application that is able to “consume” cursor records, one at a time and in order.

ORDER BY treats NULL values as equal. NULLs are sorted FIRST in ASC order.

SET operations (Union, Except, Intersect) and multi-set Union All

When using the set operations (UNION, EXCEPT, INTERSECT) and/or the multi-set operation UNION ALL, we cannot use the ORDER BY operator in the individual statements. A query such as the one below fails in the parsing phase.
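A hedged illustration (the table and column names are assumptions): an ORDER BY inside an individual statement of a set operation does not parse.

SELECT CustomerId FROM dbo.Customers ORDER BY CustomerId
INTERSECT
SELECT CustomerId FROM dbo.Orders;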

Msg 156, Level 15, State 1, Line 24
Incorrect syntax near the keyword ‘INTERSECT’

Sql Server’s SET operations correspond to operators defined in mathematical set theory. Therefore, the operands must be sets – represented as Tables (Set → Relation → Table)
ORDER BY changes query output to something that ANSI SQL defined as CURSOR. Set operations do not support ordered structures.

We can use the ORDER BY clause to format the FINAL result-set for presentation purposes. The ORDER BY in the example below logically does not belong to the last query. It operates on (Set1 ∪ Set2).
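A sketch of that pattern (table and column names assumed): the final ORDER BY sorts the combined result of the set operation, not the last query alone.

SELECT CustomerId FROM dbo.Customers
UNION
SELECT CustomerId FROM dbo.Orders
ORDER BY CustomerId;   -- sorts (Set1 U Set2)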

TOP/OFFSET-FETCH

The TOP option is unique to tsql and does not follow ANSI guidelines. The directive simply allows us to select a number of rows or a percentage of rows to return. The option is supposed to be paired up with ORDER BY – always.
This time ORDER BY does not serve a presentation purpose. It serves as a logical “TOP sorting” operator. It is supposed to answer the question “TOP according to what order?”. The result of a TOP/ORDER BY operation is a table – not a cursor.
Unfortunately, tsql allows us to use TOP without ORDER BY, and in that case the query returns a random, non-deterministic TOP(n) rows / TOP(n) PERCENT result set.

It is a good practice to specify ORDER BY(SELECT NULL) if our intention is to select random and/or non-deterministic TOP(n) rows. Query Optimiser will remove the logical ordering during the simplification phase. The purpose of this is to let other developers know that this was done intentionally.

Test data:
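The original test data is not reproduced here; below is one possible shape of it, with assumed values consistent with the tests that follow (customers 100, 102 and 105 share the same OrderDate, so a TOP(2) … ORDER BY OrderDate is non-deterministic and WITH TIES adds an extra row).

DROP TABLE IF EXISTS #testTopOrderBy;
CREATE TABLE #testTopOrderBy
(
    CustomerId INT           NOT NULL,
    OrderDate  DATE          NOT NULL,
    OrderValue DECIMAL(10,2) NOT NULL
);
INSERT INTO #testTopOrderBy (CustomerId, OrderDate, OrderValue)
VALUES (100, '2024-01-01', 10.00),
       (102, '2024-01-01', 20.00),
       (105, '2024-01-01', 30.00),
       (110, '2024-02-01', 40.00),
       (120, '2024-03-01', 50.00);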

Test 1: The query returns two randomly chosen rows(TOP without ORDER BY)

Test 2: The query returns a non-deterministic result – “non unique” ORDER BY is used aka. non-deterministic Order By.

The query “decided” to select Customers 100 and 105 but not 102. In this case, if we had a clustered index on customerId, the query would probably select customers 100 and 102.

Test 3: Two ways to make the “TOP(n)” query to return a deterministic result.

(1) The deterministic ORDER BY provides more specific information about which TOP(2) order dates to select. Now the query knows exactly which TOP(2) rows to select. A combination of ORDER BY columns that provides a deterministic result-set is called a tiebreaker.

(2) The “WITH TIES” option guarantees a deterministic result based on the ORDER BY column(s). WITH TIES must have ORDER BY defined. In the case of a non-deterministic ORDER BY, WITH TIES will add all relevant rows to the result-set in order to provide determinism. In the example above it added an extra row.

OFFSET-FETCH (tsql 2012+)

OFFSET-FETCH is an ANSI SQL supported feature. Basically, the function is an advanced version of the TOP function. The option allows us to select (fetch) only a window or a page of results from the result set.
As opposed to the similar TOP function, OFFSET-FETCH cannot operate without ORDER BY. On the other hand, OFFSET-FETCH does not support WITH TIES and therefore cannot guarantee a deterministic result set.
A query like the one below fails because OFFSET-FETCH always operates with ORDER BY:
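A sketch of such a failing query, using the test table sketched earlier (the error messages quoted below are what SQL Server returns for a missing ORDER BY).

SELECT CustomerId, OrderDate
FROM   #testTopOrderBy
OFFSET 0 ROWS FETCH FIRST 2 ROWS ONLY;   -- no ORDER BY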

Msg 102, Level 15, State 1, Line 25 Incorrect syntax near ‘0’. Msg 153, Level 15, State 2, Line 25 Invalid usage of the option FIRST in the FETCH statement.

If we want to select two random rows from a table, we can use the ORDER BY (SELECT NULL) pattern mentioned earlier, e.g.
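SELECT CustomerId, OrderDate
FROM   #testTopOrderBy
ORDER BY (SELECT NULL)                   -- intentionally non-deterministic
OFFSET 0 ROWS FETCH FIRST 2 ROWS ONLY;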

The query below will return the same result as the query from the Test3.

Same as TOP, OFFSET-FETCH returns a table therefore it can be used in other table expressions, e.g.. (the query does not make much sense – its only for testing purposes …:)

..and this will fail. Table operators (JOIN and the non-ANSI APPLY, PIVOT and UNPIVOT) operate only between tables (table expressions).

Msg 1033, Level 15, State 1, Line 38
The ORDER BY clause is invalid in views, inline functions, derived tables, subqueries, and common table expressions, unless TOP, OFFSET or FOR XML is also specified.

VIEWS

A view is a named virtual table that is defined by a query and used as a table.
A view represents a table, and a table is a logical entity that has no order to its rows. This is the reason why it’s not allowed to use the presentation ORDER BY in a view. VIEWS ARE NOT ORDERED!
The view definition below will fail.

Test 4. Views are not ordered
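A minimal sketch of such a definition (the view name comes from the error message below; the column list is an assumption):

CREATE OR ALTER VIEW dbo.TestViews
AS
SELECT  [name], create_date
FROM    sys.objects
ORDER BY create_date DESC;   -- fails with Msg 1033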

Msg 1033, Level 15, State 1, Procedure TestViews, Line 7 [Batch Start Line 1]
The ORDER BY clause is invalid in views, inline functions, derived tables, subqueries, and common table expressions, unless TOP, OFFSET or FOR XML is also specified.

Now, some programmers might think that they can somehow override the RDBMS fundamentals by defining a view with a TOP clause specifying all 100% of the rows and an ORDER BY.

Test 5, Views with TOP 100% ordered rows
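The same sketch with TOP (100) PERCENT added so the definition parses:

CREATE OR ALTER VIEW dbo.TestViews
AS
SELECT TOP (100) PERCENT
        [name], create_date
FROM    sys.objects
ORDER BY create_date DESC;   -- removed by the optimiser during simplification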

This will pass the parsing stage and will be stored as a valid view definition. In reality the view definition does not make much sense (see the TOP/ORDER BY example above). During the simplification phase, the query optimiser will remove the TOP 100%/ORDER BY as unnecessary logic.

The result is not ordered by create_date DESC.
In some cases, i.e. when we use TOP and ORDER BY on the clustered index key columns, the result may appear to be sorted – QO may decide to use a sorted table scan. Again, there is no guarantee that the returned rows will be ordered.
e.g. during the final SELECT phase, QO may decide to use SQL Server’s storage engine advanced scan functionality, also known as merry-go-round scan or scan piggybacking, to return the same rows to two or more concurrent sessions. Without diving into the details of how the advanced scan works (will cover that interesting optimisation mechanism in one of the next posts), the different sessions will have the same result-sets but in a different order.

The presentation ORDER BY will convert un-ordered view result into an ordered CURSOR.

Window Functions

Window functions were introduced in the ANSI SQL-2003 standard. Microsoft partially implemented the functionality in tSql 2005 and expanded it to the full range of window functions in tSql 2012.
A window function operates on a number of rows (a window) in a set (table). Window functions use ORDER BY as a logical operator which defines the behaviour of the aggregate operations performed by the functions. In this scenario, ORDER BY, as a part of the function, does not perform the final sort.

It is important to understand where Window function sits in the logical query execution sequence.

(5) SELECT
(5.1) Evaluate expressions in the SELECT statement
(5.2) WINDOWS Fn can be executed from here…
(5.3) DISTINCT
(5.4) TOP
(5.5) ….to here

(6) ORDER BY (presentation)

The example below demonstrates how window functions are positioned in the logical query exec. sequence.

Test Data.

The query below sums the OrderValues per OrderPackDescription. The OrderPackDesc = NULL will be summarised as a distinct value, resulting in OrderValue = 10. The ISNULL(OrderPackDesc,’Media Pack’) in the SELECT list merely relabels the NULL value as “Media Pack”.

This could lead to a logical error. We want to find out the sums of the order values per order pack. We also want to treat the missing OrderPacks as ‘Media Pack’. The previous query sums the “missing order packs” separately. This is a typical logical error. The error would be more difficult to spot if we had OrderValue = 190 for the unknown OrderPackDesc. In that case we would get a total of 190 instead of 380 for ‘Media Pack’.
Taking into consideration the query execution sequence, we can modify our query as follows:
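A self-contained sketch of the corrected query (the table and values are assumptions: the ‘Media Pack’ rows sum to 190 and the NULL row to 10, as discussed above). The ISNULL expression is moved into the PARTITION BY so it is evaluated before the window aggregate; the alias matches the note below.

DECLARE @Orders TABLE (OrderId INT, OrderPackDesc NVARCHAR(30), OrderValue DECIMAL(10,2));
INSERT INTO @Orders VALUES
    (1, N'Media Pack', 100.00),
    (2, N'Media Pack',  90.00),
    (3, NULL,           10.00),   -- the "missing" order pack
    (4, N'Gift Pack',   50.00);

SELECT DISTINCT
       ISNULL(OrderPackDesc, N'Media Pack') AS OrderPackDescEXPR,
       SUM(OrderValue) OVER (PARTITION BY ISNULL(OrderPackDesc, N'Media Pack')) AS TotalOrderValue
FROM   @Orders;   -- 'Media Pack' now totals 200, 'Gift Pack' 50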

…OVER(PARTITION BY ISNULL(OrderPackDesc,’Media Pack’) …) is evaluated BEFORE the window function.
Now our window function partition knows how to treat the missing values before the aggregation part, and the execution sequence works in our favour 🙂
The example also shows that DISTINCT executes after the window functions.
NOTE: It is not possible to use the alias OrderPackDescEXPR in other expressions within the same query, like … OVER(PARTITION BY OrderPackDescEXPR …). The limitation is another unique aspect of the SQL language – all-at-once operations.
E.g. in most programming languages, to swap values between variables we use a third, temp variable: A→temp, B→A, temp→B. In SQL we can swap table column values without using the temp var storage.

Window aggregate functions

Starting from SQL Server 2012 the ORDER BY clause can be used along with the window aggregate functions. ORDER BY dictates how the aggregate functions operate within their window partitions. It is important to understand ORDER BY in this context to be able to avoid logical errors.

Create some test data

The following query explains how ORDER BY affects SUM – window aggregate function
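A self-contained sketch combining the test data and the query (names and values assumed): adding ORDER BY to the OVER clause turns SUM into a cumulative (running) total per partition.

DECLARE @Sales TABLE (CustomerId INT, OrderDate DATE, OrderValue DECIMAL(10,2));
INSERT INTO @Sales VALUES
    (100, '2024-01-01', 10.00),
    (100, '2024-02-01', 20.00),
    (100, '2024-03-01', 30.00),
    (200, '2024-01-15', 50.00),
    (200, '2024-02-15', 25.00);

SELECT CustomerId,
       OrderDate,
       OrderValue,
       SUM(OrderValue) OVER (PARTITION BY CustomerId)                    AS TotalPerCustomer,
       SUM(OrderValue) OVER (PARTITION BY CustomerId ORDER BY OrderDate) AS RunningTotalPerCustomer
FROM   @Sales
ORDER BY CustomerId, OrderDate;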

ORDER BY activates cumulative aggregation on the ordered column over the defined partition. The same applies to the other aggregate functions: MAX, MIN, AVG and COUNT.
Failing to understand this behaviour can lead to logical errors 🙂

CONCLUSION

The main purpose of ORDER BY is to force a query to return an ordered result – a CURSOR. This changes the relational nature of the SQL language – query result-sets are NEVER ordered. The presentation ORDER BY is an expensive operation (it needs to collect the entire result-set and put it in the requested order). The same syntax (ORDER BY) is used as a logical operator for TOP/OFFSET-FETCH/window functions/the FOR XML formatter. In those cases, the purpose of the operator is completely different and it DOES NOT guarantee ordered output.

Thank you for reading.

Semi Joins, anti-joins and Nulls in Sql Server


SQL joins are table operators (binary operations in relational algebra) used to combine columns from one or more tables. The expression (predicate) that defines the columns used to join the tables is called the join predicate. The result of a join is a set (a relational database implementation of a set).
The ANSI standard recognises five types of joins: Inner Join, Left Outer Join, Right Outer Join, Full Outer Join and Cross Join.
Joins are typically used to retrieve data from normalised tables in a relation, e.g. one-to-many, zero-or-one-to-many etc., usually with an equality predicate between primary and foreign key columns.
One of the most complex tasks for the Query Optimiser is “join ordering” i.e. finding the optimal join sequence when constructing the execution plan (a query requesting data from n tables requires n-1 joins).

Semi join is one of a few operators in relational algebra that does not have a representation in the TSQL language. Some of the “missing” operators are:

  • Semi join
  • Anti-join (anti semi join)
  • Natural join
  • Division

Semi join is a type of join whose result-set contains only columns from one of the “semi-joined” tables. Each row from the first table (the left table if it is a Left Semi Join) will be returned at most once if it is matched in the second table. Duplicate rows from the first table will be returned if they are matched once in the second table. A distinct row from the first table will be returned no matter how many times it is matched in the second table.
Below is the pseudo-code representation of the above statement.

Figure, Semi join pseudo-code

An anti semi join does the exact opposite. The join will select any rows from the first table that do not have at least one matching row in the second table.

The SQL Server engine has three physical (showplan) operators that can be used to perform semi join/anti semi join logical operations when recognised by the Query Optimiser.

The table below maps the physical operators and the semi join algorithms that they support.
Figure, Physical operators and the semi join algorithms they support
*The full list of the Sql Server showplan operators can be found here.

There are a number of scenarios when the Query Optimiser decides to implement a semi join algorithm to optimise a query request. Typically, the logical operations that represent semi joins are: IN, NOT IN, EXISTS, NOT EXISTS. The EXCEPT and INTERSECT set operators may use the same physical semi join operators to perform different logical operations.

The following examples illustrate a few of these scenarios.

Set up the test environment:
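The original setup script is not shown; below is a hedged sketch. The table and column names come from the predicates discussed later (tab1.ProductName, tab2.Name); the values, including the NULLs in both tables, are assumptions consistent with the NOT IN discussion below.

DROP TABLE IF EXISTS dbo.tab1, dbo.tab2;
CREATE TABLE dbo.tab1 (ProductName NVARCHAR(50) NULL);
CREATE TABLE dbo.tab2 ([Name]      NVARCHAR(50) NULL);

INSERT INTO dbo.tab1 (ProductName)
VALUES (N'EPROM'), (N'EPROM'), (N'Flash Memory'), (N'Hard disk'), (N'Graphics card'), (NULL);

INSERT INTO dbo.tab2 ([Name])
VALUES (N'EPROM'), (N'Flash Memory'), (N'Hard disk'), (N'SSD disk'), (NULL);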

Figure, tab1 and tab2 contents
Left Semi Join

The Left Semi Join operator returns each row from the first (top table in the execution plan) input where there is at least one matching row in the second(bottom) input.

Figure, Left Semi Join
Left Anti Semi Join

The Left Anti Semi Join operator returns each row from the first (top) input when there are no matching rows in the second (bottom) input.

Figure, Left Anti Semi Join

Both of the queries use the same physical operator – Nested Loops (Left Anti Semi Join) – to perform different logical operations. The second query performs a logical Left Anti Semi Join whereas the first query performs an operation based on the Difference of Sets (Set A – Set B) operation.
The set operators EXCEPT, INTERSECT and UNION treat NULL values as equal (non-distinct), whereas the EXISTS operator evaluates NULL=NULL as FALSE (even though the 3VL result is UNKNOWN). More about that behaviour here.
Also, it is worth mentioning that all set operators (except the multi-set operator UNION ALL) remove duplicates.

Right Semi Join

The Right Semi Join operator returns each row from the second (bottom) input when there is a matching row in the first (top) input.

Figure, Right Semi Join

NOTE: The INTERSECT set operator treats NULL values as equal and therefore a NULL value appears in the result set(as being one of the  common values in tab1 and tab2) .

Right Anti Semi Join

The Right Anti Semi Join operator outputs each row from the second (bottom) input when a matching row in the first (top) input does not exist.

Figure, Right Anti Semi Join

Anti semi Joins, NOT IN and NULLs

Again, things get “special” when it comes to working with NULLs.
If we execute the Left Anti Semi Join query again, but this time using NOT IN instead of EXISTS, we get an empty result-set. This is one of the common logical errors that can cause unexpected results. Both versions are sketched below.
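A sketch of the two versions, using the tab1/tab2 tables sketched above: NOT EXISTS returns the tab1 rows with no match in tab2, whereas NOT IN returns nothing because tab2.Name contains a NULL.

-- Left anti semi join via NOT EXISTS
SELECT t1.ProductName
FROM   dbo.tab1 AS t1
WHERE  NOT EXISTS (SELECT 1 FROM dbo.tab2 AS t2 WHERE t2.[Name] = t1.ProductName);

-- The same intent via NOT IN – empty result-set because of the NULL in tab2.Name
SELECT t1.ProductName
FROM   dbo.tab1 AS t1
WHERE  t1.ProductName NOT IN (SELECT t2.[Name] FROM dbo.tab2 AS t2);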

The image below shows the query execution plans, predicate property of the significant physical operators and the result sets.

Figure, Anti join with NOT IN and NULLs

Just as a reminder, the Nested Loop algorithm compares each row from the OUTER table (tab1 in the example) to each row  from the INNER table (tab2). Only the rows that satisfy the join predicate will be returned.

The query result sets should contain all rows from the first (top) input when there are no matching rows in the second (bottom) input. The NULLs are treated as distinct. The results should be the same, but they are not. Why?

The answer lies in the way the operator NOT IN was implemented.

Query1 (NOT IN)
From the Nested Loops’ Predicate property we can see that the operator uses a sequence of OR logical expressions to implement the request.

t1.ProductName IS NULL
OR t2.Name IS NULL
OR t1.ProductName = t2.Name

To make the tab1 table rows qualify for the output, the results of all the expressions in the predicate must evaluate to FALSE.

The expression t2.Name IS NULL will always evaluate to TRUE for every iteration, resulting in an empty result set. This is because of a NULL value in the tab2.Name column.

This is important to know in order to avoid possible logical errors. One way to prevent the behaviour is to set the NOT NULL property on the tab2.Name column. The other way is to use the ISNULL() function in the query to prevent NULL expressions.
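Hedged sketches of both workarounds (against the assumed table definitions above):

-- Option 1: disallow NULLs in the joined column (possible only after the existing
-- NULLs have been removed or replaced).
ALTER TABLE dbo.tab2 ALTER COLUMN [Name] NVARCHAR(50) NOT NULL;

-- Option 2: neutralise the NULLs inside the subquery, e.g. with ISNULL()
-- (or simply filter them out with WHERE t2.[Name] IS NOT NULL).
SELECT t1.ProductName
FROM dbo.tab1 AS t1
WHERE t1.ProductName NOT IN (SELECT ISNULL(t2.[Name], N'') FROM dbo.tab2 AS t2);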

Query2 (“Correlated” NOT IN)
The query uses the same execution plan as the NOT IN query, except that the Nested Loops’ Predicate property now adds an extra AND expression to the sequence of OR expressions.

t2.Name = t1.ProductName
AND (t1.ProductName IS NULL
     OR t2.Name IS NULL
     OR t2.Name = t1.ProductName)

The same logic applies as with the first query. The predicate (as a list of logical expressions) must evaluate to FALSE for the tab1 rows to qualify for the output. As before, one of the OR-ed expressions (t2.Name IS NULL) evaluates to TRUE for every iteration. However, the additional AND-ed expression (t2.Name = t1.ProductName) may evaluate to FALSE, making the whole predicate FALSE and allowing the qualifying rows to be returned.

The query returns the correct result.
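The “correlated” NOT IN referred to above presumably has a shape along these lines (reconstructed from the predicate shown, so treat it as an assumption):

-- Correlating the subquery removes the unrelated NULLs from the comparison,
-- so the anti semi join can qualify rows again.
SELECT t1.ProductName
FROM dbo.tab1 AS t1
WHERE t1.ProductName NOT IN (SELECT t2.[Name]
                             FROM dbo.tab2 AS t2
                             WHERE t2.[Name] = t1.ProductName);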

Query3 (“Hard-coded” NOT IN)
From the lack of a Constant Scan Predicate property in the execution plan, we can conclude that the Query Optimiser has decided to return an empty result set, knowing that at least one of the hard-coded NOT IN values is NULL.
To be able to get the Predicate property and analyse its logic, we can run the same query, but this time without the NULL value in the NOT IN list, i.e.
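Reconstructed from the predicate listed below, the query presumably looks like this:

SELECT t1.ProductName
FROM dbo.tab1 AS t1
WHERE t1.ProductName NOT IN (N'EPROM', N'Flash Memory', N'Hard disk', N'SSD disk');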

The Constant Scan operator has the Predicate property now.

LeftAntiJoinNOTIN_HC

The predicate is constructed of n not-equal expressions (n is the number of distinct values in the NOT IN list, in this case 4) and n-1 AND operators.

t1.ProductName <> 'EPROM'
AND t1.ProductName <> 'Flash Memory'
AND t1.ProductName <> 'Hard disk'
AND t1.ProductName <> 'SSD disk'
-- AND t1.ProductName <> NULL  -- if we had NULL here, the predicate would evaluate to UNKNOWN for every row, resulting in an empty result set.

There are a couple of interesting points here. The first one is the result-set.

The query excludes the row with the NULL t1.ProductName from the result set. The reason for excluding the NULL is the way the “hard-coded NOT IN” was implemented.
The NULL values from the left table will be excluded, whereas a NULL value in the NOT IN list will cause an empty result set.
In this example, the algorithm excludes the tab1.ProductName NULL value by evaluating the predicate as UNKNOWN, i.e.

NULL <> 'EPROM'   (1)
AND NULL <> 'Flash Memory'   (2)
AND NULL <> 'Hard disk'   (3)
AND NULL <> 'SSD disk'   (4)

UNKNOWN (1) AND UNKNOWN (2) AND UNKNOWN (3) AND UNKNOWN (4) is UNKNOWN

It is good to know the differences in the results to prevent possible logical errors.

The other interesting thing is the number of expressions in the predicate, which directly depends on the number of hard-coded literals. The question is: is there a maximum number of literals that can be processed? The number is related to the maximum length of a string containing SQL statements (the batch size). The Maximum Capacity Specifications for SQL Server can be found here. Another limiting factor may be running out of stack space (a stack overflow), causing errors 8623 and/or 8632.

Query4(NOT EXISTS) 
The NOT EXISTS operator works in a straightforward way here. All rows from tab1 that do not match any row from tab2 (i.e. where the correlated predicate t1.ProductName = t2.Name never evaluates to TRUE) will be returned. NULLs are treated as distinct.
The query returns the correct result.
Conclusion

A semi join is a join that has no direct representation in T-SQL. The result of the operation contains only columns from one of the tables. Every returned row from the first table must be matched at least once in the second table, and a row from the first table is returned only once even if the second table contains more than one match. Semi joins are usually implemented using the IN or EXISTS operators. Any of the three physical operators that Sql Server uses to perform joins can be used to implement semi joins. The set operators EXCEPT and INTERSECT may use the same Semi Join physical operators to perform set operations.
The logical anti semi join operation may be implemented using the NOT IN or NOT EXISTS operators. The NOT IN operator may cause unexpected results when it operates on tables that contain NULL values in the joined columns.

Thanks for reading.

Dean Mincic


Orphaned DB Users and Sql Server logins


One of the common tasks in the DB community is moving databases from one Sql Server instance to another.
For simple environments, it can be just a matter of backing up a database on one server/instance and restoring it on another.

Sometimes, even if we successfully restore the database on a new server, create all the necessary logins and change the application connection string to point to the new instance, we still get “Database not accessible” or “Login failed” error messages.

The common reason for the errors is a disconnection between the db users and their logins – the orphaned DB users.
The technique we usually use to re-connect the users is to call a system stored procedure that will do the Login–DbUser remapping.

Why does the problem occur in the first place, and what can we do to prevent it? The following is a little bit of theory and a couple of examples that illustrate the problem.

In Sql Server, there are eleven different database user types. The type I am focusing on today is: a DB User based on a SQL Server authenticated Login. The complete list can be found here.

Every principal (entity that can request SQL Server resources) has two main properties:

  • Principal ID or ID
  • Security ID or SID

The scope of influence of a principal depends on the level the principal operates on, e.g. Sql Server level principals operate on an Sql Server instance level, while Database level principals operate on a database level.

  • Logins are principals whose scope is the Sql Server instance
  • DB Users are principals whose scope is the database
DB Users – based on Sql Server Logins

When we create an Sql Server Login, Sql Server assigns an ID and a SID to the created principal.

  • ID – (INT) – Uniquely identifies the Login as an Sql Server securable (a resource managed by Sql Server). The ID is generated by Sql Server.
  • SID – (VARBINARY(85)) – Uniquely identifies the security context of the Login. The security context depends on how the identifier was created. The Login SIDs can be created by:

    • Windows User/Group. The SID will be unique across the User/group domain and will be created by the domain.
    • Sql Server. The SID is unique within Sql Server and created by Sql Server – used in the example below.
    • Certificate or asymmetric key. (cryptography in Sql Server will be covered in one of the following posts)

The information about the Logins is stored in the master database.

The following code will create a few Sql objects to illustrate the problem.
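The script itself isn’t reproduced here; a minimal sketch could look like the one below. The database names come from the case scenario further down, while the login names (Login1, Login2) and the passwords are hypothetical.

-- Two test databases and two SQL Server authenticated logins (assumed names).
CREATE DATABASE TestOrphanedUsers_1;
CREATE DATABASE TestOrphanedUsers_2;
GO
CREATE LOGIN Login1 WITH PASSWORD = N'StrongP@ssw0rd1';
CREATE LOGIN Login2 WITH PASSWORD = N'StrongP@ssw0rd2';
GO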

The logins are stored in the master database and can be viewed using sys.server_principals system view:
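For example (filtering on the hypothetical login names used above):

-- SQL Server authenticated logins and their identifiers, stored in master.
SELECT name, principal_id, [sid], type_desc
FROM sys.server_principals
WHERE type = 'S'                 -- SQL Server authenticated logins
  AND name LIKE N'Login%';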

Query Results (the identifiers may be different on different PCs)
Logins_srvPrincipals

Now we need to create a few database users.

As mentioned before, there are eleven different types of database users. For this exercise we’ll create db users based on Sql Server logins.
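A sketch of the user creation, reusing the hypothetical logins from above (the user names are kept the same as the login names):

USE TestOrphanedUsers_1;
GO
CREATE USER Login1 FOR LOGIN Login1;
CREATE USER Login2 FOR LOGIN Login2;
GO
USE TestOrphanedUsers_2;
GO
CREATE USER Login1 FOR LOGIN Login1;
CREATE USER Login2 FOR LOGIN Login2;
GO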

Sql Server has assigned Principal IDs and Security IDs to the newly created users.

  • ID – (INT) – Uniquely identifies the db user as a database securable.
  • SID – (VARBINARY(85)) – Uniquely identifies the security context of the User. The security context depends on how the identifier was created – in this example the db Users’ security context depends on the Logins, and therefore the User SIDs will match the Login SIDs.

The information about the database users is stored at the database level and can be viewed using the sys.database_principals system view.
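For example, for the first database:

USE TestOrphanedUsers_1;
GO
-- Database users and their identifiers, stored in the database itself.
SELECT name, principal_id, [sid], type_desc
FROM sys.database_principals
WHERE type = 'S'                 -- users based on SQL Server logins
  AND name LIKE N'Login%';
GO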

Users_DB1

..and for the second database..

Users_DB2

The diagram below shows the relations between Logins and Users.

LoginsUsers

Image 1, Logins/Users mapping

Case Scenario:
An application uses two databases, TestOrphanedUsers_1 and TestOrphanedUsers_2. We decided to move the application’s backend to a new instance by backing up and restoring the two databases on the new server.
The restored databases contain all of the previously defined db users, since those principals are a part of the databases. The original server logins were not transferred because they belong to the original master database.
At this stage the logins are not mapped to the users and the application is not able to access the backend.

To simulate the scenario, we’ll remove the previously created logins.
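For example:

-- Dropping the logins leaves the database users behind; they become orphaned.
USE master;
GO
DROP LOGIN Login1;
DROP LOGIN Login2;
GO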

If we removed the logins using the SSMS UI, we would get a message:

DropLoginMsgSSMS

The users left in the databases are now called “orphaned users”. Users without the corresponding logins cannot be used to access the databases. This situation mimics a database restore on a new Sql Server instance.

The next step is to create new logins. At this stage we can do two things.

  • Create new logins (as we did before). Sql Server’s engine will assign new SIDs to the logins. These identifiers will not match the existing user SIDs, and consequently we’ll have to remap the Logins to the Users (to make the SIDs match). To make a match, the process will replace the old user SIDs with the new Login ones.

Using the principals’ metadata we can see the mismatch between the SIDs.

Login SIDs:
Logins_srvPrincipalsNEW_notMatch

To restore the previous mapping (see Image 1) we need to remap the orphaned users as:
Logins_UsersMapping

.. using the sys.sp_change_users_login system stored procedure.

This will replace the user SIDs with the new Login SIDs.
The stored procedure also supports the Auto_Fix action type, which can be used in the specific scenario where we create a missing login with the same name as the orphaned user it is intended for.
More information about the procedure can be found here.

NOTE: sys.sp_change_users_login is deprecated and may be removed in a future version of the database engine.
Microsoft recommends using ALTER USER … WITH LOGIN instead.
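A sketch of both options, run in each affected database (the names follow the hypothetical set-up above):

USE TestOrphanedUsers_1;
GO
-- List the orphaned users in the current database.
EXEC sys.sp_change_users_login @Action = 'Report';

-- Remap an orphaned user to a login; the user SID is replaced with the login SID.
EXEC sys.sp_change_users_login @Action = 'Update_One',
                               @UserNamePattern = N'Login1',
                               @LoginName = N'Login1';

-- The recommended alternative.
ALTER USER Login2 WITH LOGIN = Login2;
GO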

  • Create the new logins explicitly specifying SIDs that match the existing db user SIDs, as sketched below.
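A hedged sketch of this second approach; the SID value below is a placeholder, the real value has to be read from the restored database first.

-- Read the orphaned user's SID from the restored database...
USE TestOrphanedUsers_1;
SELECT [sid] FROM sys.database_principals WHERE name = N'Login1';

-- ...and create the login with exactly that SID (hypothetical value shown).
USE master;
CREATE LOGIN Login1
WITH PASSWORD = N'StrongP@ssw0rd1',
     SID = 0x95B2434D59B04A4EAD1E9212DE1BE03B;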

Now we can test the mapping using the Logins to access the databases.

Conclusion:

The database users created for (or based on) Sql Server authenticated logins must have a valid link to the logins. The link is the Security IDentifier (SID) varbinary.
Due to the different scopes of the principals, db Users, when restored on a different server, may become disconnected (orphaned) from the corresponding Logins (the new logins may have new SIDs that don’t match the original db User SIDs). In this situation the applications are not able to connect to the database.
To remap the principals we can use two approaches: the system stored procedure sys.sp_change_users_login, to change the db user SIDs to match the new Login SIDs, or creating the new Logins with the original user SIDs.
It’s worth mentioning that the latter may cause the “Supplied parameter sid is in use” error if the specified SID is already in use by an existing Login.

Thanks for reading.

Dean Mincic


Short–Circuit evaluation in Sql Server


Short-circuit (SC) evaluation, or minimal evaluation, is the way programming languages evaluate the arguments involved in Boolean expressions.
SC means that the arguments are evaluated from left to right only until the value of the expression is determined. The rest of the arguments (if any are left) will not be evaluated.
For example, C# evaluates the arguments (if the logical operators && and || are used) from left to right. The evaluation of the expression stops as soon as the outcome is determined.

With Sql Server (and RDBMS systems in general) things are different when it comes to SC. In short, in Sql Server we cannot rely on minimal evaluation. Essentially, this is because of the declarative nature of the TSQL language. In TSQL we almost always describe only WHAT to do (e.g. give me the list of all orders processed on a particular day, from a specific geographic area, made by customers who… etc.) but not HOW to do it – the Query Optimiser (QO) and other Sql Server engine components will find the optimal way to fulfil the request.
Therefore, the order of evaluation of the query predicates may be sequential, from left to right, but it’s not guaranteed. The QO may change the ordering and/or push the predicate evaluation down the query tree, e.g. to a data access operator (e.g. an index seek).
The following example shows a few case scenarios of how Sql Server evaluates predicates (logical expressions).

Create a test table:
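The original script isn’t shown; the sketch below creates a table that would reproduce the behaviour described in the tests (the table name dbo.Currency and the data are assumptions; the column names come from the predicates used below).

DROP TABLE IF EXISTS dbo.Currency;
GO
CREATE TABLE dbo.Currency
(
    CurrencyId NCHAR(3) NOT NULL
        CONSTRAINT PK_Currency PRIMARY KEY CLUSTERED,  -- enforced by a unique clustered index
    [Name]     NVARCHAR(10) NOT NULL
);
GO
INSERT INTO dbo.Currency (CurrencyId, [Name])
    VALUES (N'HA!', N'98765'),   -- Name converts to INT successfully
           (N'AUD', N'Dollar');  -- Name would fail an implicit conversion to INT
GO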

Test 1: Conjunctive predicates (multiple predicates separated by AND Logical operators) and predicate evaluation sequence.
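The test query isn’t reproduced; it presumably combines the two predicates with AND, along these lines:

-- Name = 98765 forces an implicit conversion of the Name column to INT, yet the
-- query succeeds because the seek predicate on CurrencyId is evaluated first.
SELECT CurrencyId, [Name]
FROM dbo.Currency
WHERE [Name] = 98765
  AND CurrencyId = N'HA!';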

The query executes with no error.

Test1Plan
Figure 1: Conjunctive predicates – second expression evaluated first

Switching the predicate places will have no effect on the QO’s reasoning.
If we ran the same query without the second expression, the query would fail due to a conversion error.

Successful query: the Query Optimiser knows that the second expression, CurrencyId = N’HA!’, refers to a column with a Primary Key constraint (enforced by a unique clustered index) and can therefore evaluate to true at most once. The QO decides to “push” the second predicate (CurrencyId = N’HA!’) down the query tree to the Index Seek data access operator, making it a “Seek Predicate”.

If the Seek operator does not find the clustered index key N’HA!’, the WHERE filter evaluates to false, making the first expression redundant. The first expression is shown in the Predicate section of the execution plan only for consistency purposes.

On the other hand, if the operator finds the row, the first predicate (Name = 98765) will be evaluated. The QO will then try to convert the column value (N’98765’) to INT and then compare it with the given argument (98765). If the conversion succeeds (like in the example), the query executes with no errors. However, if the conversion fails, the query returns an error.

Test 1 shows that we cannot know which expression will be evaluated first (if it is evaluated at all), and that we must not rely on minimal evaluation.

Test 2: Disjunctive predicates (multiple predicates separated by OR logical operators) and predicate evaluation sequence.

The query below will execute with no conversion errors.
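A sketch of such a query; the exact always-true expression used in the original isn’t shown, so 1 = 1 stands in for the “A = A” predicate mentioned below:

-- One branch of the OR is a tautology, so the optimiser removes the whole WHERE
-- clause during simplification and no conversion of the Name column ever happens.
SELECT CurrencyId, [Name]
FROM dbo.Currency
WHERE [Name] = 98765
   OR 1 = 1;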

The successful execution of the query may lead us to believe that Sql Server uses minimal evaluation, but that is not the case.

If the predicates switch places, the query will still run with no errors. This wouldn’t be possible if the Name = 98765 predicate were evaluated first, due to the conversion error. The answer lies, again, in the way the QO handles the query. In this case, during the optimisation, the QO simplifies the query by excluding all the predicates, knowing that A = A will always evaluate to true for all rows. It simply uses a table scan operator to return all rows.

Test1PlanOR
Figure 2: Disjunctive predicates, query simplification and SC

Test 2 again shows that the sequence of predicate evaluation is controlled by the QO and is not short-circuited.

Sidenote: the query below will fail due to a conversion error (the order of the predicates is irrelevant). The QO cannot use a clustered index seek because of the OR operator. Even if we had an index on the Name column, the index wouldn’t be used because of the implicit conversion on the column.
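A sketch of a query that fails this way:

-- Fails with error 245 (conversion of an nvarchar value to data type int),
-- because the Name column must now be converted for every row.
SELECT CurrencyId, [Name]
FROM dbo.Currency
WHERE [Name] = 98765
   OR CurrencyId = N'HA!';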

Conclusion:

It is not recommended to rely on minimal evaluation in Sql Server (I would say in any DB engine) because of the declarative nature of the SQL language. The Query Optimiser, one of Sql Server’s relational engine components, will transform/simplify the query and, during the process, may reorganise the WHERE predicates, making the order of the logical expressions irrelevant.

Thanks for reading.

Dean Mincic