
Statistics used in the cached execution plans – Stored Procedures


Summary

The query optimization process sometimes requires an understanding of how SQL Server’s query engine compiles, recompiles, and executes SQL batches. Among the most important elements the Query Optimizer uses when constructing a good enough execution plan are the “interesting statistics” – the statistical information it loads during plan compilation.
This blog attempts to explain what the “interesting statistics” are, when they are updated, and how the statistical information relates to the query recompilation process. The topic is related to Temporary tables statistics when used in stored procedures.

Batch compilation and recompilation

To begin with, let’s analyze the batch compilation/recompilation diagram (By Arun Marathe, Jul 2004, Batch Compilation, Recompilation and Plan Caching Issues in SQL Server 2005). The idea is to create a set of experiments that will capture the behavior of a stored procedure through the different phases of the query compilation/recompilation process, particularly those related to the statistics that are used to generate the execution plan.


Figure 1, Batch Compilation/Recompilation diagram

I’ve used the AdventureWorks database to set up the test environment and MS Profiler to capture various Events relevant to the experiments.

MS Profiler events

    1. Attention (Errors and Warnings)
    2. Auto Stats (Performance)
    3. SP:CacheHit (Stored Procedures)
    4. SP:CacheInsert (Stored Procedures)
    5. SP:CacheMiss (Stored Procedures)
    6. SP:CacheRemove (Stored Procedures)
    7. SP:Completed (Stored Procedures)
    8. SP:Recompile (Stored Procedures)
    9. SP:Starting (Stored Procedures)
    10. RPC:Starting (Stored Procedures)
    11. RPC:Completed (Stored Procedures)
    12. SP:StmtStarting (Stored Procedures)
    13. SQL:StmtRecompile (TSQL)
    14. SQL:StmtStarting (TSQL)

Database Objects
Set the AdventureWorks DB compatibility level to 140 – SQL Server 2017. This version provides easy access to the information about the interesting statistics saved with the query plan (SSMS – SELECT plan operator, Properties, OptimizerStatsUsage).

Below is the set of SQL Server object definitions used for the testing.

USE master
GO
ALTER DATABASE AdventureWorks
    SET COMPATIBILITY_LEVEL = 140;
GO

USE AdventureWorks
GO

/* dbo.SalesOrderDetail table */
DROP TABLE IF EXISTS dbo.SalesOrderDetail
GO

SELECT *
    INTO dbo.SalesOrderDetail
FROM Sales.SalesOrderDetail;
GO

--add primary, clustered key
ALTER TABLE dbo.SalesOrderDetail
    ADD CONSTRAINT PK_SalesOrderDetailID PRIMARY KEY CLUSTERED (SalesOrderDetailID);  
GO

--NCI on ProductID
CREATE NONCLUSTERED INDEX IX_SalesOrderDetail_ProductID 
    ON dbo.SalesOrderDetail (ProductID);
GO

/* dbo.Products table */
DROP TABLE IF EXISTS dbo.Products
GO
    
SELECT *
    INTO dbo.Products
FROM Production.Product
GO

--add primary, clustered key
ALTER TABLE dbo.Products
    ADD CONSTRAINT PK_ProductID PRIMARY KEY CLUSTERED(ProductID)
GO

--NCI on ListPrice
CREATE NONCLUSTERED INDEX NCI_Products_ListPrice
    ON dbo.Products (ListPrice)
        INCLUDE([Name],ProductNumber)
GO

/* dbo.TestQueryExecution stored proc */
DROP PROCEDURE IF EXISTS dbo.TestQueryExecution;
GO

CREATE PROCEDURE dbo.TestQueryExecution
          @ProductID INT
         ,@ProdName NVARCHAR(50)
         ,@MinLinePrice MONEY = $100
AS
BEGIN
    SET XACT_ABORT ON; 
    SET NOCOUNT ON

    --query1
    SELECT   d.CarrierTrackingNumber,
             d.OrderQty,
             d.UnitPrice,     
             d.ProductID,
             p.[Name]
    FROM    dbo.SalesOrderDetail d
        INNER JOIN dbo.Products p
            ON d.ProductID = p.ProductID
    WHERE d.ProductID = @ProductID 

    --query2
    SELECT [Name]
           ,ProductNumber
           ,ListPrice
           ,Color
    FROM dbo.Products
    WHERE ListPrice >= @MinLinePrice
        AND  [Name] LIKE (@ProdName +'%')

    RETURN;
END

Information about the statistics/indexes on the tables can be retrieved using the queries below.

USE AdventureWorks
go

--sys.stats
SELECT
     TableName = OBJECT_NAME(sp.object_id)
    ,[Statistic ID] = sp.stats_id -- If statistics correspond to an index, the stats_id value is the same as the index_id value in the sys.indexes.
    ,[Statistics] = s.[name] 
    ,[Last Updated] = sp.last_updated 
    ,sp.[rows]
    ,sp.rows_sampled
    ,sp.unfiltered_rows
    ,Modifications = sp.modification_counter
    ---
    ,s.object_id
FROM sys.stats AS s
OUTER APPLY sys.dm_db_stats_properties (s.object_id,s.stats_id) AS sp
WHERE s.object_id IN (OBJECT_ID(N'dbo.SalesOrderDetail'), (OBJECT_ID(N'dbo.Products')));

--sys.sysindexes
SELECT TableName = OBJECT_NAME(i.id)
      ,IndexName = i.[name]
      ,i.indid --If statistics correspond to an index, the stats_id value in the sys.stats is the same as the index_id
      ,IndexDesc = CASE 
                        WHEN i.indid = 0 THEN 'HEAP'
                        WHEN i.indid = 1 THEN 'CLUSTERED'
                        WHEN i.indid > 1 THEN 'NONCLUSTERED'
                   END
      ,i.rowcnt
      ,i.dpages
      ,i.rowmodctr
FROM sys.sysindexes i
WHERE i.id  IN (OBJECT_ID(N'dbo.SalesOrderDetail'), (OBJECT_ID(N'dbo.Products')));

The following examples assume the default settings for the SQL Server options related to statistics:

  • AUTO_CREATE_STATISTICS ON
  • AUTO_UPDATE_STATISTICS ON
  • AUTO_UPDATE_STATISTICS_ASYNC OFF
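
The current settings can be confirmed with a quick check against sys.databases (a minimal sketch; the option columns shown are the ones exposed by SQL Server 2017):

--check the statistics-related database options
SELECT  [name]
       ,is_auto_create_stats_on
       ,is_auto_update_stats_on
       ,is_auto_update_stats_async_on
FROM sys.databases
WHERE [name] = N'AdventureWorks';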

A bit of theory first before proceeding with the tests. : )

colmodctr

colmodctr is an ever-increasing counter that tracks the changes made on tables (a counter per column, excluding non-persisted computed columns). colmodctr is not transactionally consistent, which means that it is not affected by rolled-back changes, i.e. if a transaction inserts 10 rows into a table and then rolls back, the counter will still report 10 changes.
SQL Server statistics (automatically/manually created/updated) on a column(s) store a snapshot value of the colmodctr for the leftmost column in the stats blob.
The counter is very important since it is one of the elements that drive query recompilation decisions related to the “statistics changed” reasons.

colmodctr counter can be accessed through the following system views.


Figure 2, colmodctr, system views – standard and hidden

One way to access the hidden tables is to open a separate SSMS instance, close Object Explorer, and create a single connection using the DAC server name, i.e. ADMIN:(local)
NOTE: The structure of the hidden tables and the tables’ accessibility are not documented and may change in future versions.
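
For reference, below is a sketch of a query against one of the hidden base tables (sys.sysrscols, whose rcmodified column holds the per-column modification counter). It must be run over the DAC (ADMIN:) connection described above and, being undocumented, the table/column layout used here is an assumption that may change between versions:

--run over the DAC (ADMIN:) connection only - undocumented metadata
SELECT  TableName  = OBJECT_NAME(p.[object_id])
       ,rs.rscolid                   --column id within the rowset
       ,colmodctr = rs.rcmodified    --per-column modification counter
FROM sys.sysrscols AS rs
    INNER JOIN sys.partitions AS p
        ON rs.rsid = p.hobt_id       --rowset id -> heap or b-tree id
WHERE p.[object_id] = OBJECT_ID(N'dbo.Products')
    AND p.index_id IN (0,1);         --base rowset (heap or clustered index)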

Recompile thresholds (RT)

The RT concept defines the number of changes that need to be made on a table column before the statistical information on that column is marked as stale.
The changes include column value changes through DML operations such as INSERT, UPDATE, DELETE… i.e. inserting 10 new rows into a table counts as 10 changes (identified by the colmodctr counters mentioned before).
If the table does not have any statistical information, i.e. a HEAP table with no indexes and no manually created statistics, and the query plans that reference the table do not load/automatically create interesting statistics, the only relevant change when performing the RT crossing test will be the change in the number of rows inserted and/or deleted.

 | colmodctr(current) – colmodctr(snapshot) |  >= RT

or

 | cardinality(current) – cardinality(snapshot) |  >= RT

current     – refers to the current value of the modification counter
snapshot – refers to the value of the mod. counter captured during the last plan compilation(re-compilation).
cardinality* – the number of rows in the table.

Cardinality

In mathematics, the cardinality of a set is defined as the number of elements in a SET.  A SET is an unordered collection of elements in which each element is unique.

In an RDBMS (see RDBMS fundamentals), data is presented in the form of a table. An RDBMS table has its roots in a structure called a Relation (attributes ~ columns, tuple ~ row). The number of tuples is represented by the cardinality of the relation – a relation does not have duplicate tuples.
However, the table structure deviates from these strict rules and allows, e.g., duplicate rows. Strictly speaking, cardinality represents the number of rows in a table only if the table has no duplicates.
Sometimes in the literature we find that the cardinality of a table represents the number of unique rows out of the total number of rows. So, cardinality in this context represents uniqueness.

Cardinality may also represent the uniqueness of data values in a particular column – the lower the cardinality, the lower the selectivity of a value and the more duplicate values in the column. It is a measure that defines the density of a column – column density is the reciprocal of the column’s cardinality, so the more selective the values, the lower the density. Then there is a metric called the Density Vector that consists of the densities of the individual columns… super interesting stuff, but not for this post 🙂
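
As a quick illustration, using the test tables created above, column density can be computed manually and compared with the density vector stored in the index statistics (a minimal sketch):

--density = 1 / number of distinct values in the column
SELECT ListPriceDensity = 1.0 / COUNT(DISTINCT ListPrice)
FROM dbo.Products;

--compare with the "All density" values stored in the statistics object
DBCC SHOW_STATISTICS ('dbo.Products', NCI_Products_ListPrice)
    WITH DENSITY_VECTOR;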

In a different context, cardinality is a way to define the relationship between two entities in a data model. It is also known as the degree of relationship, i.e. 1-1, 1-m, m-n.

 

The Threshold Crossing Test evaluates to TRUE if the number of changes is greater than or equal to the predefined RT value (see Figure 3).

Recompilation thresholds(RT) for all the tables referenced in the query are stored along with the query plan.

RT depends on the table type(permanent vs temporary) and the number of rows in the table.


Figure 3, Recompile thresholds

Special case. RT = 1 if the table has 0 rows (with or without statistics)

NOTE: Starting from SQL Server 2008 R2 SP1, Microsoft introduced trace flag TF2371. The trace flag activates dynamic recompile threshold calculation – the higher the number of rows in a table, the lower the RT. The functionality is implemented to allow automatic statistics updates to kick in more frequently for big tables. i.e. The RT for a 10,000-row table is 500 + 0.20*10,000 = 2,500 – the number of changes required to trigger a query recompile. For a table with 100M rows, the RT is 20,000,500. For some applications that RT may be too high, resulting in sub-optimal plans due to the lack of query recompilation. Hence TF2371.
Starting from SQL Server 2016, the TF2371 behavior is turned on by default.

Here are a couple of examples to illustrate Figure 3.
If there is a table A that contains 230 rows, the RT for the table will be set to 500. This means that if we, for example, insert 500 rows, the total number of rows (c) will change to 730 (c >= 230+500), which is enough changes to make the table’s statistics stale.
The change itself does not mean much if there are no queries that reference the table : )
The query plans may or may not initiate auto-statistics creation on specific table columns. Also, the referenced tables may not have any statistical information at all, i.e. a HEAP table with no non-clustered indexes.
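
To get a feel for how close a statistics object is to its recompile threshold, the modification counters can be compared with the static 500 + 0.20*n formula. A minimal sketch, ignoring the TF2371 dynamic thresholds and the small-table cases:

--compare modification counters with the static RT formula (500 + 0.20 * rows)
SELECT  TableName     = OBJECT_NAME(s.[object_id])
       ,[Statistics]  = s.[name]
       ,sp.[rows]
       ,Modifications = sp.modification_counter
       ,StaticRT      = 500 + 0.20 * sp.[rows]
       ,RTCrossed     = CASE WHEN sp.modification_counter >= 500 + 0.20 * sp.[rows]
                             THEN 'YES' ELSE 'NO'
                        END
FROM sys.stats AS s
    CROSS APPLY sys.dm_db_stats_properties (s.[object_id], s.stats_id) AS sp
WHERE s.[object_id] IN (OBJECT_ID(N'dbo.SalesOrderDetail'), OBJECT_ID(N'dbo.Products'));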

Experiments

Experiment 1 (stats change before query execution)

In this experiment, we will make “enough” changes to the ListPrice column (dbo.Products table) BEFORE running the stored procedure for the first time.
The column is the key column in NCI_Products_ListPrice, the non-clustered index, and has statistical information attached to it (the stats object name is the same as the NCI).

Let’s begin the experiment by creating the test objects and checking the statistical information on the tables.

Step 1, Check the initial stats/rowmodctr information

Figure 4, Initial rowmodctr information

Step 2, Check stats BLOB and make changes on dbo.Products table

Run the DBCC  command below before and after the UPDATE to confirm that there were no changes in the stats BLOB information.

DBCC SHOW_STATISTICS ('dbo.Products',NCI_Products_ListPrice) 
    WITH STAT_HEADER
GO
BEGIN TRANSACTION  
    UPDATE dbo.Products  --504 changes
        SET ListPrice += ListPrice * 0.10 --add 10% to the price
    UPDATE TOP(106) dbo.Products --106 changes
        SET ListPrice += $10.00 --add $10 to the prices
ROLLBACK TRANSACTION

NOTE: rowmodctr is not transactionally consistent.


Figure 5, stats BLOB information


Figure 6, rowmodctr after the initial dbo.Products update

The changes are detected and available through SQL Server’s metadata.

Step 3, Run the stored procedure and observe the captured events by the Profiler.

EXEC dbo.TestQueryExecution
       @ProductID =897
      ,@ProdName = N'G'
     ,@MinLinePrice = $0

Figure 7, Statistics refresh

Following the batch compilation diagram, we can identify the following steps.

  1. The Cache Lookup step resulted in the SP:CacheMiss event – the dbo.TestQueryExecution stored procedure does not exist in the cache.
  2. Query compilation begins. The SQL Server engine is about to load all of the “interesting statistics”. The loaded statistics can be retrieved from the Actual Execution Plan, the SELECT physical operator – OptimizerStatsUsage property.
  3. The query engine checks if any of the loaded interesting statistics are stale. If yes, the system stops the batch compilation process and refreshes the statistics. In our case the system has 
    • Identified the number of changes made on the ListPrice column. From the stats/index information gathered after the initial update, the number of changes (rowmodctr/Modifications) is 610
    • Performed the RT crossing test. The test passed since the number of changes (610) exceeded the RT for tables with more than 500 rows: RT = 500 + 0.20 * 504 ~ 601, 601 < 610
    • Executed StatMan, an internal process that automatically maintains statistics. The process updated the stale statistics NCI_Products_ListPrice on the dbo.Products table
      SELECT StatMan([SC0], [SC1]) --from MS Profiler
      FROM   (SELECT  TOP 100 PERCENT [ListPrice] AS [SC0]
                                     ,[ProductID] AS [SC1]
              FROM  [dbo].[Products] WITH (READUNCOMMITTED)
              ORDER BY [SC0],[SC1]) AS _MS_UPDSTATS_TBL
      OPTION (MAXDOP 16);

      If we check the stats blob from Step 2, we will see that the Updated column changed its value to the current date – the stats blob has been updated.
      The AutoStats event reported the UPDATE of the statistics with EventSubClass = 1 – Other. More on the event can be found here.

  4. Query Optimiser starts to generate the query plan – a plan for each query statement.
    • The second query in the batch has a predicate on the Name column of the dbo.Products table. In an attempt to make better cardinality estimates of the rows that need to be processed, the Query Optimiser decided to automatically create statistical information on the column.
      The system stops the batch compilation process and again executes the StatMan process to create the new statistics.
      SELECT StatMan([SC0])
      FROM  ( SELECT TOP 100 PERCENT [Name] AS [SC0]
              FROM  [dbo].[Products] WITH (READUNCOMMITTED)
              ORDER BY [SC0]) AS _MS_UPDSTATS_TBL
      OPTION (MAXDOP 16);

      After creating the stats, QO decided not to use it  : (
      Below is the list of the “interesting statistics” loaded during the Query compilation process. The list does not include automatically created stats on the Name column.

      As a result of the updated statistics on the ListPrice column , the rowmodctr for the column was reset. 

    • QO sets the new recompilation thresholds(RT) for all tables used in the queries.
      1. RT(dbo.SalesOrderDetail) = 500 + 0.20*(121317) = 24763.4 (~24764)
      2. RT(dbo.Products) = 500 + 0.20*(504) = 600.8 (~601)
        This means that QO will initiate a query recompile due to the “Statistics changed” reason if
        1. dbo.SalesOrderDetail
          1. 24764 or more rows are inserted/deleted
          2. 24764 or more changes are made on the SalesOrderDetailID, ProductID columns
        2. dbo.Products
          1. 601 or more rows are inserted/deleted
          2. 601 or more changes are made on the ProductID, ListPrice, and Name columns
  5. The query execution starts. The query plans are constructed and cached. SP:CacheInsert event reported that the stored procedure has been cached.

Experiment 2 (stats change during the query execution)

In this experiment, we will make “enough” changes to the Name column (dbo.Products table) HALFWAY THROUGH the stored procedure execution.

Step 1 Set up the environment

  • Run the script to reset the test environment
  • Add a WAITFOR statement between the two queries in the stored procedure
... 
SELECT   d.CarrierTrackingNumber,--query1
             d.OrderQty,
             d.UnitPrice,     
             d.ProductID,
             p.[Name]
    FROM    dbo.SalesOrderDetail d
        INNER JOIN dbo.Products p
            ON d.ProductID = p.ProductID
    WHERE d.ProductID = @ProductID 

    WAITFOR DELAY '00:00:06' -- buy some time to change statistics
    
    SELECT [Name]--query2
           ,ProductNumber
           ,ListPrice
           ,Color
    FROM dbo.Products
    WHERE ListPrice >= @MinLinePrice
        AND  [Name] LIKE (@ProdName +'%')
...
  • Use PowerShell to execute the stored procedure. Add HostName property. Use the HostName to capture only the events related to the PS call. This will prevent MS Profiler from capturing events related to the UPDATE statement that will run in parallel.
    PS C:\Users\dean.mincic> Invoke-Sqlcmd -ServerInstance "." `
                                           -Database "AdventureWorks" `
                                           -HostName "experiment" `
                                           -Query "EXEC dbo.TestQueryExecution `
                                                      @ProductID =897 `
                                                      ,@ProdName = N'G' `
                                                      ,@MinLinePrice = 0.00;"
  • Add a HostName filter to the Profiler trace (HostName LIKE experiment)

Step 2, Run the PowerShell cmdlet, switch to SSMS, and run the UPDATE query below. The queries will generate enough changes to make the automatically created statistics on the Name column stale.

BEGIN TRANSACTION  
    UPDATE dbo.Products  --504 changes (all rows)
        SET [Name] +='_ABC' 
    UPDATE TOP(106) dbo.Products --106 changes (random rows)
        SET [Name] +='_ABC'
COMMIT TRANSACTION

Step 3. Analyze the captured MS Profiler trace.
Figure 8, Query recompile

  • The first thing that is different from the last run is the SP:CacheHit event. The event shows that our stored procedure was found in the Plan cache. The previously set RTs and the interesting statistics are part of the cached information.
    NOTE: Auto-created statistic on the Name column was not used during the initial query compilation – the stats are not part of the interesting stats.
  • This time there were no changes on the columns that would initiate statistics updates, no new auto-created stats and the existing cached query plan does not need to be recompiled due to “statistic changed” reasons. The process proceeds with the query execution.
  • The first query is successfully executed followed by the  WAITFOR statement. During the statement execution (6 seconds delay) a separate query has made enough changes on the Name column(dbo.Products) to pass the RT crossing test for the table and flag the auto-created statistics on the column as stale. Even if not used by QO during the plan generation, the stats are marked as stale.
  • (1) The query execution stops at the “Any stats stale?” step. The system initiates the query recompile process – SP:Recompile due to the 2 – Statistics changed reason. The event is followed by the statement-level SQL:StmtRecompile event, which indicates that only the second query needs to be recompiled.
  • (2) Again, the StatMan process kicks in and updates the stale statistics. The RTs are set (in this case the number of rows has not changed, hence the RTs stayed the same). The rowmodctr value for the Name column is reset to 0.
  • (3) The AutoStats event reported a Statistics Update having EventSubClass = 1 – Other
  • (4) The SP:StmtStarting event reports that the second query has been recompiled and the batch execution continues.

Experiment 3 (tables with no stats on columns)

The experiment demonstrates how queries get recompiled when referencing tables with no statistics. The recompiles due to the “statistics changed” reasons are initiated by the RT-table cardinality crossing test results only.
As previously mentioned, the cardinality-based RT crossing test is defined as

 | cardinality(current) – cardinality(snapshot) |  >= RT

Let’s create a test table and a stored procedure to perform the above experiment.

Step 1, set up the test environment

--a heap table - no statistics
DROP TABLE IF EXISTS dbo.testRecomp;
GO
CREATE TABLE dbo.testRecomp(id INT
                            ,filler CHAR(100)
                            );
GO
--test sp
DROP PROCEDURE IF EXISTS dbo.testRecompile;
GO
CREATE PROCEDURE dbo.testRecompile
AS
BEGIN
    SET XACT_ABORT ON;
    SET NOCOUNT ON;
  
    SELECT TOP(100) Average = AVG(t.id)
                   ,NoOfRows =  COUNT(t1.id)
    FROM dbo.testRecomp t
       CROSS JOIN dbo.testRecomp t1 
   RETURN;
END 
GO

.. and insert some data into the table…

--add 230 rows, RT will be set to 500
INSERT INTO dbo.testRecomp(id,filler)
    SELECT TOP(230) c.column_id,c.[name]
    FROM master.sys.columns c1 ,master.sys.columns  c;
GO

The initial statistical information looks like this (find how to retrieve the metadata related to the statistical information at the beginning of the post)


Figure 9, rowmodctr with no statistical information

Step 2, Run the stored proc for the first time. The RT is set to 500.

EXEC dbo.testRecompile;

Step 3, Make enough changes to the table to pass the cardinality crossing test. Insert 500 rows. Do not use an explicit transaction yet.

--BEGIN TRANSACTION
    INSERT INTO dbo.testRecomp 
        SELECT TOP(500)ABS(CHECKSUM(NEWID())) % 500,  b.[name] + '_' + cast (ROW_NUMBER() OVER(ORDER BY (SELECT NULL )) AS varchar(5))
        FROM sys.columns a, sys.columns b
--COMMIT TRANSACTION; --ROLLBACK TRANSACTION;

Step 4, Run the stored procedure again and observe the query execution behavior in Profiler.

Figure 10, Query recompile, table cardinality change – no stats

  • The new rowmodctr information looks like

    The new number of rows (rowcnt) is recorded along with the number of changes, rowmodctr=730. In this case, the rowmodctr value is not relevant since the RT crossing test depends only on changes in the table cardinality. This will be more visible if we ROLLBACK the row insertion operation which will be covered later.
  • The second execution followed the “Cache lookup = Success” path (see the batch compilation diagram) and stopped at the very last step, “Any stats stale?“.
  • At this stage, the system detected that the RT cardinality crossing test had passed due to the number of changes (new rows) inserted in the table.
  • The system stopped the execution process and initiated the stored proc/statement recompile – SP:Recompile, SQL:StmtRecompile. As in the previous examples, the reason for the recompile was 2 – Statistics changed.
    NOTE: The recompile process is not followed by the StatMan process since the query does not have any statsBlob information to be refreshed/created.

Experiment 3.1 (rowmodctr not in use)

The next example shows that the RT cardinality crossing test is not related to rowmodctr, as it may have seemed from the previous example, where the number of changes happened to follow the table cardinality changes.

  • Follow the steps from the previous example.
  • Execute the INSERT query  from Step 3 within an explicit transaction
BEGIN TRANSACTION
    INSERT INTO dbo.testRecomp 
        SELECT TOP(500)ABS(CHECKSUM(NEWID())) % 500,  b.[name] + '_' + cast (ROW_NUMBER() OVER(ORDER BY (SELECT NULL )) AS varchar(5))
        FROM sys.columns a, sys.columns b
ROLLBACK TRANSACTION;
  • Observe that there are no query recompiles due to the “statistics changed” reason since there were no table cardinality changes – the ROLLBACK “canceled” the row insertions.
  • The statistical information shows that rowmodctr = 720.

Conclusion

The query compilation, execution, and recompilation sequence, among other steps, includes loading the interesting statistics – the statistical information on different table columns that the Query Optimiser may find useful when creating a good plan – and auto-creating statistical information on the columns that participate in e.g. WHERE filters, GROUP BY, etc.
The SQL Server query engine also checks the validity of the loaded statistical information during the initial stored procedure compilation and again during the stored procedure execution phase. If the loaded statistics are found to be stale during compilation, the engine pauses the stored procedure compilation, refreshes (re-samples/updates) the loaded statistical information, and continues the compilation process. If the query engine detects stale loaded statistics during the execution phase, the process stops, refreshes (re-samples/updates) the statistics, and restarts the compilation process – query recompilation. The recompiles are done per query, not per batch.
The examples in this blog showed that statistical information can be automatically maintained by the queries that use it. Statistics can also be maintained manually.
To mark statistics as “stale”, QO uses the Recompile Threshold (RT) crossing test. The test tracks the number of changes on the significant (leftmost) columns within the statistics BLOBs. The information is stored in an ever-increasing, non-transactionally consistent counter – “rowmodctr”. The RTs are stored per table and within the compiled query.
The RT crossing test can also be based solely on changes in the number of rows in a table.

 

Thanks for reading.

Dean Mincic

Temporary tables statistics when used in stored procedures


Summary

Sometimes when we design solutions that implement complex business rules we tend to use temporary objects, temporary tables in particular. Decoupling complex queries into smaller “intermediate” result sets may help the optimiser come up with a better plan since it needs to resolve simpler queries. It can also make code more readable and maintainable. This approach, of course, needs to be carefully planned since the excessive use of temporary objects may degrade query performance, deviate from set-based design principles, and do more damage than good. Some of the common patterns of “when to break down big queries” can be found here.
Depending on the task, we might decide to use temporary tables over table variables. This may be due to the different scope and/or the fact that temp tables are more “robust” and support statistics.

This blog explores specific case scenarios, including temp tables used in stored procedures and the unique behavior of related statistical information that can lead to suboptimal query plans.

Sample data

Let’s create a couple of test tables and a stored procedure that we’ll use throughout the article.
Platform: Microsoft SQL Server 2017 (RTM) Developer Ed.

DROP TABLE IF EXISTS dbo.Transactions --fk constr.
GO
DROP TABLE IF EXISTS  dbo.Products
GO
--Products table
CREATE TABLE dbo.Products(
     ProductID INTEGER IDENTITY(1,1) NOT NULL
        CONSTRAINT PK_ProductID PRIMARY KEY CLUSTERED
    ,[Name] NVARCHAR(256) NOT NULL
    ,ListPrice DECIMAL(10,2) NOT NULL
    ,ModifiedDate DATETIME2
        INDEX NCI_Name([Name])
)
GO
--Populate Products table with 500 distinct products
;WITH getRowNums AS
(
    SELECT TOP 500
         RowNo = ROW_NUMBER() OVER(ORDER BY(SELECT NULL) )   
        ,[Name]
    FROM sys.columns c
),getDistribution AS 
(
    SELECT  [Name] = CASE 
                         WHEN rws.RowNo % 5 =0   THEN N'A' -- 100 
                         WHEN rws.RowNo % 99 = 0 THEN N'B' -- 4 
                         WHEN rws.RowNo % 36 = 0 THEN N'C' -- 10 
                         WHEN rws.RowNo % 14 = 0 THEN N'E' -- 27 
                         WHEN rws.RowNo % 499 = 0 THEN 'Z' -- 1 
                     ELSE N'F' --358 products with a name that starts with letter 'F'
                 END + rws.[Name] + CAST(rws.RowNo AS NVARCHAR(3))
            ,Price = ABS(CHECKSUM(NEWID())) % 44.23 
            ,ModifiedDate = DATEADD(MINUTE,RowNo,SYSDATETIME()) 
    FROM getRowNums rws
)
INSERT INTO Products([Name],ListPrice,ModifiedDate)
    SELECT [Name],Price,ModifiedDate
    FROM getDistribution

-- check the product names distribution
/* 
SELECT [Name Starts With..] = LEFT([Name],1)
      ,[Distribution] = COUNT([Name])
FROM dbo.Products
GROUP BY LEFT([Name],1)
ORDER BY [Distribution] ASC
*/
-- Order Transactions table
DROP TABLE IF EXISTS dbo.Transactions
GO
CREATE TABLE dbo.Transactions(
    OrderId INT IDENTITY(1000,1) NOT NULL
        CONSTRAINT PK_OrderId PRIMARY KEY CLUSTERED
    ,ProductId INT
        CONSTRAINT FK_ProductId_ProductsProductId 
            REFERENCES dbo.Products(ProductID)
    ,OrderQuantity INT
    ,ModifiedDate DATETIME2
  ,INDEX NCI_ProductId(ProductId)
)
GO
--each product was ordered 500 times
INSERT INTO dbo.Transactions(ProductID
                            ,OrderQuantity
                            ,ModifiedDate
)
    SELECT ProductID
          ,Quantity = ABS(CAST(NEWID() AS BINARY(6)) % 10) + 1 --random int between 1-10
          ,ModifiedDate = SYSDATETIME()
    FROM dbo.Products p
     CROSS JOIN (SELECT TOP 500 r = 1
                 FROM sys.columns)  x
GO

--stored proc: 
--Show the number of orders for products whose names begin with certain letters.
DROP PROCEDURE IF EXISTS dbo.testTempTableStats
GO
CREATE PROCEDURE dbo.testTempTableStats(
    @StartsWith NVARCHAR(5)
)
AS
BEGIN
    CREATE TABLE #temp( 
            ProductId INTEGER
            ,ProductName NVARCHAR(256)
    )
    
    INSERT INTO #temp
        SELECT pr.ProductID
               ,pr.[Name]
        FROM dbo.Products pr
        WHERE pr.[Name] LIKE @StartsWith + '%'                 

        SELECT t.ProductName
              ,NoOfOrders = COUNT(DISTINCT tr.OrderId) 
        FROM dbo.Transactions tr
            INNER JOIN #temp t
                ON tr.ProductId = t.ProductId
        GROUP BY t.ProductName

    DROP TABLE #temp;
END

The code above is a “complex query” that we decided to decouple by using a temp table. First, we store all relevant products in the temp table and then join the table with a big table in order to calculate the required aggregates. This is one of the common temp table use cases. We expect our stored procedure to be called with widely varying parameters, i.e. for all product names that start with ‘B’ (4 products) we expect 2,000 matching rows that need to be aggregated (500 orders per product), and for those products whose name starts with ‘A’ (100 products), we expect 25x more – 50,000.
Since temp tables support statistics and our DB is set up to auto-create/auto-update statistics, we also expect to get good plans for the varying input parameters. In other words, we expect the query plan to be recompiled if needed. In addition, we also expect the query to recompile due to correctness-related reasons (the temp table is created and destroyed every time we run the query).

Test scenario

Run the code from Session1.

DBCC FREEPROCCACHE
GO
EXEC testTempTableStats
        @StartsWith = N'C';
GO


Figure 1, The plan created for the first input parameter

As we can see, the plan is good. The temp table cardinality estimates are good, the Nested Loop physical operator estimates are aligned with the actual number of rows, and the Sort operator was granted enough memory to perform the distinct sort.
Now, say another client runs the same stored proc in the context of a different session with a different parameter.
Run the code from Session2

EXEC testTempTableStats
        @StartsWith = N'A';


Figure 2, The original plan did not change for the new parameter

The Session2 client gets a horrible plan. Starting from the temp table cardinality, everything seems to be off. The Nested Loop physical operator estimates are now totally wrong, and the Sort operator does not have enough granted memory to perform the sort operation and needs to spill 231 pages (~2MB) to disk.
If we execute the same stored proc in the context of a session passing the @StartsWith parameter value N’F’, we’ll get the same plan, but again with wrong estimates and even worse Sort operator spills – see the call below.
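
For reference, the third call (both the procedure and the parameter value come from the examples above):

EXEC testTempTableStats
        @StartsWith = N'F';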

Analysis

From the previous example, we can see that the first passed parameter dictates the query plan structure. The plan “never changes” for consecutive calls. This looks like a parameter sniffing problem, but our query predicates do not include any parameters.
To investigate and explain this behavior let’s start by monitoring the following parameters:

  • Query compilation/recompilation  – Using MS Profiler
  • Temp table’s automatically created statistics – Using database console command SHOW_STATISTICS.

Set up Profiler
Include the following events in Profiler’s trace


Figure 3, Profiler – events and properties

Add temp table statistic tracking to the query

ALTER PROCEDURE dbo.testTempTableStats
   ...
   ..         
        SELECT t.ProductName
              ,NoOfOrders = COUNT(DISTINCT tr.OrderId) 
        FROM dbo.Transactions tr
            INNER JOIN #temp t
                ON tr.ProductId = t.ProductId
        GROUP BY t.ProductName

    -- Show automatically created statistics for the column "ProductName"
    DBCC SHOW_STATISTICS ('tempdb.dbo.#Temp', ProductName) 
         WITH STAT_HEADER
             ,HISTOGRAM;

    DROP TABLE #temp;
END

Start the trace and run the stored procedure again.
Note: Altering a stored procedure automatically removes the existing cached batch.

EXEC testTempTableStats @StartsWith = N'C';

The query result shows auto-created statistics on the temp table’s ProductName column.
Figure 4, First parameter statistics

The plan created is the same as on Figure1

The Profiler trace shows the following sequence of events:

Figure 5, Profiler -The first plan compilation

This plan is exactly what we expected to see.
The unit of compilation and caching is a batch – in this case, the content of our stored procedure. During batch compilation, the individual statements are compiled one after another.

Short detour: What happens when we create a stored proc? After hitting F5 (executing CREATE PROCEDURE …), SQL Server parses the procedure definition for syntactical accuracy. If the statements are syntactically correct, the text of the stored procedure is stored in an internal table accessible through the sys.sql_modules system view. At this stage, it is possible to reference non-existing objects within the batch, i.e. it is possible to include a query that inserts data into a non-existing table, or to select data from a non-existing function. Later, when we execute the stored procedure for the first time, the process known as “Deferred Name Resolution” will check the names of the referenced objects and consequently initiate the recompile of the query segment that references them.

Follow steps 1- 6 on the trace above.

  1. (not captured by the trace). The execution environment has performed the initial batch compilation. Each query within the batch is compiled separately(Starting from SQL Server Yukon :). The initial compilation was incomplete and only a skeleton plan was created – CREATE TABLE query is the only one fully compiled. The second, INSERT query and the following SELECT query could not be compiled since the temp table did not exist before the very first execution. The skipped compilations are deferred until the execution time.
  2. Batch execution started and the first query was executed (a brand new temp table was created)
  3. The second query (INSERT INTO..#Temp) has to be recompiled due to Correctness related reasons. The EventSubClass property describes the cause of Recompilations
        • A new object(temp table) has been created. The query recompile has been initiated.
        • Query Optimiser loads all “interesting” statistics that can help build a good plan. The statistic belongs to the tables included in the query. If needed, QO will pause the process to automatically create additional statistics on the columns that can help build the plan*. For this query, QO does not need to auto-create stats.
  4. The INSERT INTO …#Temp.. query is now recompiled and executed.
  5. The third query(the one we are analysing) has to be recompiled due to similar reasons as the previous query. After loading all “interesting” stats – which belong to dbo.Transactions, QO decided to stop the execution process and to auto-create statistics on #temp table columns: ProductId and ProductName
  6. StatMan – an internal process, that creates auto-stats on the ProductId column(#temp table). The ProductId column is used in the JOIN predicate.
  7. StatMan creates auto-stats on the ProductName column(#temp table). The ProductName column is used for the grouping. The stat header and histogram are selected in the query output.
    Auto-stats naming convention: _WA_Sys_ prefix. In our example: _WA_Sys_00000002_AA7DA731, where 00000002 is the column ordinal position and AA7DA731 is the hexadecimal #Temp table object id:
      SELECT CAST(0xAA7DA731 AS INTEGER) /* Object_id = -1434605775 */
  8. The SELECT query is now recompiled and executed.
  9. (Batch execution successfully completed)

*NOTE: _WA stands for Waterloo, Canada (Sybase, the father of SQL Server, was developed in Waterloo). This is mentioned in exam 70-462.

Clear Profiler’s trace and execute the same stored procedure, this time with parameter  ‘A’

EXEC testTempTableStats @StartsWith = N'A';

Figure 6, Stale statistics

Strangely enough, the statistical information (header and histogram) is totally wrong, i.e. the table cardinality should be 100, not 10, and the histogram steps should show product names that start with ‘A’, not ‘C’.

Figure 6 – Profiler, No query recompiles

The trace shows no recompiles due to schema changes (CREATE TABLE #Temp) and no statistics updates. From the optimality point of view, the query engine executed the plan created for the first query execution (input param ‘C’). The result set is correct, but it seems that everything else is wrong. The first question is why we are missing recompiles due to schema changes (a brand new temp table was created and populated with new values). The second question is why the auto-created statistics on the temp table were never updated. Is this why our query never gets recompiled due to plan optimality-related reasons?

Temporary tables caching

The temp table caching mechanism may explain the lack of schema-related recompiles in the previous example. This may also explain the stale statistics.
The caching feature was introduced in SQL Server 2005. This functionality provides the caching of temporary objects(temp tables, table variables, and TVFs) across repeated calls of routines(Stored procedures, triggers, and UDFs).
In short, when a stored procedure execution ends, SQL Server truncates* and renames the table, keeping only one IAM page and one data page. The structure will be reused by subsequent calls instead of allocating new pages from scratch when the object is created again.

NOTE: For temp objects smaller than 8MB, the truncation happens immediately after module execution ends. For larger temp objects, SQL Server performs a “deferred drop” and immediately returns control to the application.

The caching mechanism works for temp tables that are created using CREATE TABLE or SELECT INTO statements. Caching is not possible when there is explicit DDL on a temp table after it is created, i.e. ALTER #table ADD CONSTRAINT, CREATE STATISTICS** on table columns, or when there is a named constraint on a temp table (this is generally a bad idea since it can lead to constraint name conflicts). Temporary tables are also not cached if they are part of dynamic SQL or an ad-hoc batch.

NOTE: Statistics created using an explicit CREATE STATISTICS statement are not linked to a cached temporary object. Auto-Created statistics(like the ones presented in the example) are linked to a cached temp object. UPDATE STATISTICS does not prevent temp table caching.

The following query shows that there is one cached temp object currently not in use

SELECT  mcc.[name],
        mcc.[type],
        mcc.pages_kb,
        mcc.pages_in_use_kb,
        mcc.entries_count,
        mcc.entries_in_use_count
FROM sys.dm_os_memory_cache_counters AS mcc
WHERE mcc.[type] = N'CACHESTORE_TEMPTABLES'
name                      type                   pages_kb  pages_in_use_kb  entries_count  entries_in_use_count
------------------------  ---------------------  --------  ---------------  -------------  --------------------
Temp Tables & Table Vars  CACHESTORE_TEMPTABLES  16        0                1              0

We can track the temp table name by adding SELECT OBJECT_ID(‘tempdb.dbo.#temp’) to the example stored proc. This will show that the temp table object_id never changes. When the stored procedure completes execution, an internal process changes the name of the temp table to a hexadecimal form. This will be the case even if we explicitly drop the temp table at the end of the sproc batch.
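
The renamed, cached object can be observed directly in tempdb’s metadata once the stored procedure completes. A quick check (the hexadecimal name is derived from the object_id and will differ on every system):

--after dbo.testTempTableStats completes, the cached temp table remains in tempdb
--under a hexadecimal name, e.g. #AA7DA731
SELECT  t.[name]
       ,t.[object_id]
       ,t.create_date
FROM tempdb.sys.tables AS t
WHERE t.[name] LIKE N'#%';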

If a stored procedure is executed concurrently, multiple separate cached objects may be created in tempdb.  There is a cached temp object per execution context.

This feature explains why our plan has not been recompiled due to schema change-related reasons. But what about statistics? As previously mentioned, the query processor is expected to detect the statistical changes that can affect the cached plan’s validity and adapt to them. But it didn’t happen. Why?

Recompilation Reasons

There are many different reasons that can cause a query to be recompiled(EventSubClass Figure 5). The list of all possible recompilation reasons (SP:Recompile event) can be retrieved using the query below.

SELECT xemv.map_key,
       xemv.map_value
FROM sys.dm_xe_map_values AS xemv
WHERE xemv.name = N'statement_recompile_cause'
ORDER BY xemv.map_key;

For the purpose of this analysis, we are interested only in Schema changed and Statistic changed reasons. Temp table caching can explain the absence of the former reason.
Updating statistics (both manual and auto-update) may trigger the Optimality(data) related recompilation of the plans that use these statistics.

RT – Recompile thresholds

SQL Server query processor is designed to adapt the cached query plans to the data changes. The changes are tracked by statistical information(histograms) linked to the “interesting” table columns. Statistical information changes may cause query recompilation due to plan optimality-related reasons.
During the query optimisation phase, the Query processor identifies and loads the “interesting” statistics. This helps QP to create good enough plans. In our case, QP loads the following, existing statistics.

  • NCI_Name (Statistics on dbo.Products Name column created as a part of the NCI index on the column)
  • NCI_ProductId(Statistics on dbo.Transactions ProductId column also created with the NCI index on the column)
  • (Auto Created)#temp table auto-created statistics on #temp.ProductName and #temp.ProductId

Use the “undocumented”  trace flag to find out the interesting statistics used in our example:

-- use execution plan XML output to find 
-- <ModTrackingInfo> tag
DBCC FREEPROCCACHE
GO
DBCC TRACEON(8666)
    EXEC dbo.testTempTableStats
        @StartsWith = 'C'
    GO
DBCC TRACEOFF(8666)
GO

The table contents are constantly changing (INSERT, UPDATE, DELETE, MERGE). SQL Server query processor tracks those changes in order to estimate their impact  on the existing cached plans.
For each “interesting statistics” loaded, Query processor creates a snapshot of a counter that counts the number of table modifications. Generally, every table referenced in a query will have such counters loaded.
The table contents are tracked:

  • Directly – using table cardinality metadata information
  • Indirectly – using statistics on table columns.

Each column has an RT (recompilation threshold) associated with it. RT is a function of table cardinality.

 RT = f(n), n -number of rows in a table (table cardinality)

Essentially, RT for a table defines the frequency with which queries that refer to the table recompile.

During the first query compilation, for each interesting statistics, a snapshot value of a counter(colmodctr) that counts a number of table modification is stored – a counter per column that participates in the interesting statistics.

Along with colmodctr snapshot values, Query processor sets the Recompile thresholds for every colmodctr created.
Before using an existing cached plan, the Query optimiser performs The threshold crossing test defined by the formula below.

 | colmodctr(current) – colmodctr(snapshot) |  >= RT

current     – refers to the current value of the modification counter
snapshot – refers to the value of the mod. counter captured during the last plan compilation(recompilation).
If the threshold crossing test evaluates to TRUE (the number of changes exceeds the pre-defined RT), for any of the interesting statistics, the query plan is recompiled.
The threshold crossing test will be performed even if the query processor does not load any interesting statistics. In that case, the test will be based simply on table cardinalities.

 | cardinality(current) – cardinality(snapshot) |  >= RT

The following table shows how RT is calculated for permanent and temporary tables.

Temporary tables
  Table cardinality (n)   Recompile Threshold (RT)   Cardinality that passes the RT test
  n < 6                   6                          >= (n + 6)
  6 <= n <= 500           500                        >= (n + 500)
  n > 500                 500 + 0.20*n               >= (n + 500 + 0.20*n)

Permanent tables
  Table cardinality (n)   Recompile Threshold (RT)   Cardinality that passes the RT test
  6 <= n <= 500           500                        >= (n + 500)
  n > 500                 500 + 0.20*n               >= (n + 500 + 0.20*n)


colmodctr

colmodctr is the name of a counter which counts the number of modifications that a table has undergone. The counter is per column and is non-transactional (if we insert 10 rows and roll back the transaction, the colmodctr value will increase by 20). The counter is also ever-increasing.

So far, temp table caching can explain why our query does not get recompiled due to schema changes. Now, let’s find out why our query uses stale statistics and why it does not perform optimality-related query recompilation.

Automatically created statistics are cached along with the cached temp tables. In this case, auto-created statistics are NOT reset at the end of the stored procedure execution cycle nor at the beginning of a consecutive stored procedure execution.
This is why statistics on a cached temporary table may belong to one of the previous stored procedure calls and be completely unrelated to the contents of the current temp table.
If we used a “normal” (dbo.Temp) table instead of the #temp table, the table would be dropped and recreated on each stored procedure call. Consequently, auto-created statistics would be repeatedly dropped and created, which would keep them always up to date, as sketched below.
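
For comparison, here is a minimal sketch of that permanent-table variant (the procedure name dbo.testPermTableStats is hypothetical; dbo.Temp is the permanent table mentioned above). Dropping and recreating the table on every call keeps its auto-created statistics current, at the cost of losing temp object caching and triggering schema-change recompiles on each execution:

--permanent "temp" table variant - auto-created statistics are dropped/recreated with the table
CREATE OR ALTER PROCEDURE dbo.testPermTableStats(
    @StartsWith NVARCHAR(5)
)
AS
BEGIN
    DROP TABLE IF EXISTS dbo.Temp;

    CREATE TABLE dbo.Temp(
         ProductId   INTEGER
        ,ProductName NVARCHAR(256)
    );

    INSERT INTO dbo.Temp
        SELECT pr.ProductID
              ,pr.[Name]
        FROM dbo.Products pr
        WHERE pr.[Name] LIKE @StartsWith + '%';

    SELECT t.ProductName
          ,NoOfOrders = COUNT(DISTINCT tr.OrderId)
    FROM dbo.Transactions tr
        INNER JOIN dbo.Temp t
            ON tr.ProductId = t.ProductId
    GROUP BY t.ProductName;

    DROP TABLE dbo.Temp;
END
GO

Note that this approach also introduces concurrency concerns (all sessions would share the same dbo.Temp table), which is one of the reasons temp tables are used in the first place.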

The modification counters that count changes on the interesting columns – in our example, columns ProductName and ProductId – will not cause query recompiles even if the number of changes passes the RT threshold test. This behavior is unique to temp tables (cached or non-cached*) when used within stored procedures.

Note*: To prevent temp table caching, perform a DDL operation on the newly created temp table. e.g

...
    CREATE TABLE #temp( 
             ProductId INTEGER)
    ALTER TABLE #temp
        ADD ProductName NVARCHAR(256)
..
/* -- this change will cause Err 2767, Could not locate statistics 'ProductName' in the system catalogs
   -- starting from the second execution
 DBCC SHOW_STATISTICS ('tempdb.dbo.#Temp', ProductName) 
             WITH STAT_HEADER
                 ,HISTOGRAM;

*/

Err 2767: The statistics will be internally present, but not available through sp_helpstats, DBCC SHOW_STATISTICS, and tempdb.sys.stats. My assumption would be that the auto-created stats BLOB entry gets “disconnected” from the temp table’s object_id -> for non-cached temp tables, the object_id changes on every sp run.
What is also interesting with the non-cached table is that it does not force our query to be recompiled due to schema change reasons, as we might expect.

Back to the original example: to be able to track the modification counters, expand the dbo.testTempTableStats stored procedure with the dynamic query below.

--tSQL 2017 syntax
CREATE OR ALTER PROCEDURE dbo.testTempTableStats(
    @StartsWith NVARCHAR(5)
)
AS
BEGIN
    CREATE TABLE #temp( 
            ProductId INTEGER
            ,ProductName NVARCHAR(256)
    )
    
    INSERT INTO #temp
        SELECT pr.ProductID
               ,pr.[Name]
        FROM dbo.Products pr
        WHERE pr.[Name] LIKE @StartsWith + '%'                 
 
        SELECT t.ProductName
              ,NoOfOrders = COUNT(DISTINCT tr.OrderId) 
        FROM dbo.Transactions tr
            INNER JOIN #temp t
                ON tr.ProductId = t.ProductId
        GROUP BY t.ProductName

    ---sys.dm_db_stats_properties works only in the current db context :(
    EXECUTE( N'USE tempdb; 
                  SELECT  [Object] = object_name(s.object_id)
                            ,s.[name]
                            ,sp.last_updated
                            ,sp.[rows]
                            ,s.auto_created
                            ,sp.modification_counter 
                  FROM sys.stats s 
                  CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
                  WHERE s.object_id = object_id(''#temp'',''U'');'
    );

    -- Show automatically created statistics for the column "ProductName"
    DBCC SHOW_STATISTICS ('tempdb.dbo.#Temp', ProductName) 
         WITH STAT_HEADER
             ,HISTOGRAM;
 
    DROP TABLE #temp;
END

Now execute the query with the parameter ‘C’

DBCC FREEPROCCACHE
GO
EXEC testTempTableStats
        @StartsWith = N'C';
GO

Figure 7, colmodctr – modification counters

On the first query execution, the query optimiser sets up the RT values for the two auto-created statistics columns. The recompilation thresholds are based on the temp table’s cardinality (NoOfRows = 10). Based on the previously mentioned formula (6 <= n <= 500, RT = 500), in order to pass the RT crossing test we’ll need at least 510 (500+10) changes on either of the two columns to initiate query recompilation. This means that if we execute the stored proc 6 times passing parameter ‘A‘ (which populates the temp table with 100 rows), we will reach 6*100 = 600 changes, which is more than we need to pass the test.

EXEC dbo.testTempTableStats 
             @StartsWith = 'A'
GO 6

The 6th execution shows the following results, but not the expected recompile.


Figure 8, colmodctr – modification counters do not initiate query recompiles

The table below shows colmodctr counter’s progress:

Figure 9, modification counter progress

As we can see, the query was not recompiled due to optimality reasons. Non-cached temp tables show the same behavior. The colmodctr simply does not work for temp tables within stored procedures.
If we repeat the experiment, but this time pass parameter ‘B’ (param ‘B’ results in 4 rows in the temp table) for the very first stored procedure call, we will get a similar output as in Figure 7. The RT value is now set to 6 (n < 6, RT = 6), which means that we need at least 10 (6+4) changes on the interesting temp table columns to initiate query recompilation.
If we execute the same stored procedure (with param ‘B’), say, two more times, the modification counters will show 16 changes and, as expected, there is no recompile.
But if we execute the stored procedure with a parameter that generates 10 or more rows in the temp table, i.e. ‘C’ or ‘E’ (10 and 27 rows respectively), our query will recompile.

DBCC FREEPROCCACHE
GO
-- this will set up RT = 6 (the RT test passes at cardinality 10)
EXEC testTempTableStats 
        @StartsWith = 'B'
GO
-- modification counter does not initiate recompiles
EXEC testTempTableStats 
    @StartsWith = 'B'
GO 2 -- no recompiles

DBCC FREEPROCCACHE
GO
-- this will set up RT = 6 (the RT test passes at cardinality 10)
EXEC testTempTableStats 
        @StartsWith = 'B'
GO
-- a parameter that generates 10 or more rows in the temp table does initiate a query recompile
EXEC testTempTableStats 
    @StartsWith = 'C' -- 'E', [A-Z],'A' ...

The experiment shows that the Query processor does track temp table cardinality changes.
The threshold crossing test will be based on temp table cardinality.

 | cardinality(current) – cardinality(snapshot) |  >= RT

In our case RT =6, the expected current table cardinality is 10, | 10 – 4 | >=6, and the  RT crossing test evaluates to TRUE.

The interesting thing about this is that if our code uses a temp table with a relatively stable number of rows, i.e. ~500, it is likely that we’ll end up with one plan that never changes due to optimality reasons. If our sp creates a query plan for the @StartsWith = ‘C’ parameter that sets up RT=500, the query optimizer will expect the temp table’s cardinality to increase to at least 510 rows, which will never happen if our table has a maximum of 500 rows. The difference is even more visible in the case when the query optimizer sets RT to 500 + 20%*n (n = total number of rows – see the RT tables above). The expected cardinality that will initiate a recompile is then n + 500 + 0.2*n.

Possible solutions

WITH RECOMPILE

One of the obvious solutions for this problem would be to create a stored procedure WITH RECOMPILE. This will

  • Prevent temp table caching
  • Prevent stored procedure query plan caching
  • Recompile all queries in the stored procedure on each execution.
--tSQL 2017 syntax
CREATE OR ALTER PROCEDURE dbo.testTempTableStats(
    @StartsWith NVARCHAR(5)
)
WITH RECOMPILE
AS
BEGIN
..

This method may work out of the box, but it may also be expensive since the whole batch, which can contain quite a few complex queries, must compile on each call; then, during execution, separate queries may be recompiled due to the previously mentioned reasons.

EXECUTE .. WITH RECOMPILE

In situations where we know that the cached query plan works for the most common range of input parameters, and we are able to identify those parameters for which the plan would be sub-optimal, we can create a wrapper stored procedure that controls the main stored procedure’s execution properties.

--wrapper stored procedure
CREATE OR ALTER PROCEDURE testTempTableStatsWrapper(
      @StartsWith NVARCHAR(5)
)
AS
BEGIN
    SET NOCOUNT ON;

    IF @StartsWith IN ('Z','B','C','E')
        EXECUTE testTempTableStats
                @StartsWith
    ELSE  
        EXECUTE testTempTableStats
                @StartsWith
            WITH RECOMPILE
    RETURN;
END

A stable plan will be cached for the parameters Z, B, C, and E, and a fresh, non-cached plan will be constructed each time we pass any parameter value other than the four mentioned above.

OPTION RECOMPILE and UPDATE STATISTICS

To overcome the problem of stale statistics, we can add OPTION RECOMPILE to our original query. This will force our query to recompile.

--tSQL 2017 syntax
CREATE OR ALTER PROCEDURE dbo.testTempTableStats(
    @StartsWith NVARCHAR(5)
)
AS
BEGIN
    ..
        SELECT t.ProductName
              ,NoOfOrders = COUNT(DISTINCT tr.OrderId) 
        FROM dbo.Transactions tr
            INNER JOIN #temp t
                ON tr.ProductId = t.ProductId
        GROUP BY t.ProductName
        OPTION (RECOMPILE);
...

 

If we repeat the first test and pass parameters ‘C’ and ‘F’ respectively, we can see that the ‘F’ param‘s OPTION (RECOMPILE) updates only the temp table’s cardinality, from 10 rows to 358 rows. The process skips updating the auto-created statistics on ProductName and ProductId.

DBCC FREEPROCCACHE; 
GO
EXEC dbo.testTempTableStats @StartsWith = 'C'
GO
EXEC dbo.testTempTableStats @StartsWith = 'F'
GO


Figure 10, Stale statistics, correct cardinality

OPTION(RECOMPILE) forces the query to recompile, but it does not detect stale statistics. It also “activates” rowmodctr (modification_counter). In this case, after the first execution (param ‘C’), RT is set to 500. QO needs to detect at least 510 changes on the significant columns to initiate the recompile. In our case, when the modification counter reaches 510+ changes (two consecutive calls passing param ‘F’), including temp table truncates, QO will update the statistics as part of the query recompilation.

1st Execution, Param “C”: RowNo=10; RT=500; modification_counter = 0
2nd Execution, Param “F”: RowNo=358; RT=500; modification_counter = 10 (truncate) + 358 = 368
3rd Execution, Param “F”: RowNo=358; RT=500; modification_counter = 358 (truncate) + 368 = 726 -> RT crossing test evaluates to TRUE -> Update Statistics.

For non-cached temp tables, OPTION RECOMPILE does update auto-created statistics as well.
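As a reminder, a temp table is not cached if, for example, we run DDL on it after it has been created or define a named constraint on it. The sketch below only illustrates that; the index name is made up.

--a temp table created like this will not be cached (DDL after creation prevents caching)
CREATE TABLE #temp( 
        ProductId INTEGER
        ,ProductName NVARCHAR(256)
);

CREATE NONCLUSTERED INDEX ix_temp_ProductId --DDL after the CREATE TABLE statement
    ON #temp (ProductId);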

We can force the query optimizer to update auto-created statistics using UPDATE STATISTICS  #temp. The complete solution would be …

--tSQL 2017 syntax
CREATE OR ALTER PROCEDURE dbo.testTempTableStats(
    @StartsWith NVARCHAR(5)
)
AS
BEGIN
    CREATE TABLE #temp( 
            ProductId INTEGER
            ,ProductName NVARCHAR(256)
    )
    
    INSERT INTO #temp
        SELECT pr.ProductID
               ,pr.[Name]
        FROM dbo.Products pr
        WHERE pr.[Name] LIKE @StartsWith + '%'                 
 
        UPDATE STATISTICS #temp;        

        SELECT t.ProductName
              ,NoOfOrders = COUNT(DISTINCT tr.OrderId) 
        FROM dbo.Transactions tr
            INNER JOIN #temp t
                ON tr.ProductId = t.ProductId
        GROUP BY t.ProductName
        OPTION(RECOMPILE);
 
    ---sys.dm_db_stats_properties works only off the current db context :(
    EXECUTE( N'USE tempdb; 
                  SELECT  [Object] = object_name(s.object_id)
                            ,s.[name]
                            ,sp.last_updated
                            ,sp.[rows]
                            ,s.auto_created
                            ,sp.modification_counter 
                  FROM sys.stats s 
                  CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
                  WHERE s.object_id = object_id(''#temp'',''U'');'
    );
 
    -- Show automatically created statistics for the column "ProductName"
    DBCC SHOW_STATISTICS ('tempdb.dbo.#Temp', ProductName) 
         WITH STAT_HEADER
             ,HISTOGRAM;
 
    DROP TABLE #temp;
END

It is a bit of an unusual place to put the UPDATE STATISTICS statement, since we expect auto-created statistics to be constructed during the query compilation/recompilation process, not before it. But if we put the statement after the query, UPDATE STATISTICS would have no effect on the preceding query.

BATCH MODE MEMORY GRANT FEEDBACK (SQL Server 2017+)

This solution is an experimental one. In SQL Server 2017 Microsoft introduced Adaptive Query Processing, a set of new query processing features aimed at improving query performance. The new features (v1.0) are

  • Batch mode memory grant feedback
  • Batch mode adaptive joins and
  • Interleaved execution

More about this interesting set of features can be found here.
The first two features are available in queries that reference one or more objects with columnstore indexes. As opposed to the row-mode execution style, where rows are passed/processed between iterators (operators) one at a time, batch processing operates on a 64KB structure (8 pages). The allocated memory space can host between 64 and 900 rows, depending on the number of columns selected.
To activate the features (and the batch processing), we can create a “dummy” columnstore index on a table with a single int column and LEFT JOIN it to our query. The new join changes neither the query logic nor the result-set.

--Create a dummy columnstore index 
DROP TABLE IF EXISTS dbo.dummyColumnstore;
GO
CREATE TABLE dbo.dummyColumnstore(i int)
GO
CREATE NONCLUSTERED COLUMNSTORE INDEX nc_csi_id 
    ON dbo.dummyColumnstore(i)
GO

--add an extra LEFT JOIN to the original query
CREATE OR ALTER PROCEDURE dbo.testTempTableStats(
    @StartsWith NVARCHAR(5)
)
AS
BEGIN

    CREATE TABLE #temp( 
            ProductId INTEGER
            ,ProductName NVARCHAR(256)
    )
    
    INSERT INTO #temp
        SELECT pr.ProductID
               ,pr.[Name]
        FROM dbo.Products pr
        WHERE pr.[Name] LIKE @StartsWith + '%'                 

        SELECT t.ProductName
              ,NoOfOrders = COUNT(DISTINCT tr.OrderId) 
        FROM dbo.Transactions tr
            INNER JOIN #temp t
                ON tr.ProductId = t.ProductId
            LEFT OUTER JOIN dbo.dummyColumnstore dcc -- columnstore idx
                ON tr.ProductId = dcc.i
        GROUP BY t.ProductName;

    DROP TABLE #temp;

    RETURN;
END

If we run the original test again, first with param ‘C’ and then with param ‘F’, we’ll notice a couple of interesting changes in the query execution plan.

DBCC FREEPROCCACHE; 
GO
EXEC dbo.testTempTableStats @StartsWith = 'C'
GO
EXEC dbo.testTempTableStats @StartsWith = 'F'
GO

The first execution creates a new plan:

Figure 11, Adaptive Query Processing features

We can notice the new Adaptive Join operator and the new batch execution mode for the Hash Match Aggregate and Compute Scalar operators.
The next execution (param ‘F’) results in an underestimated memory grant for the Hash Match Aggregate operator. This is due to the stale statistics, as shown earlier.
Now, if we execute the same query again (param ‘F’), the Batch mode memory grant feedback feature will fix the problem by allocating enough memory for the operation, preventing data spills on the Aggregate operator. However, the original problem with the stale statistics remains.
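One way to observe the feedback between executions is to compare the granted and the ideal memory for the cached statement. This is a sketch only; the memory grant columns are available in sys.dm_exec_query_stats, and the filter on the query text is an assumption.

--compare the granted vs. the ideal memory between executions (illustration only)
SELECT  qs.execution_count
       ,qs.last_grant_kb        --memory granted for the last execution
       ,qs.last_used_grant_kb   --memory actually used during the last execution
       ,qs.last_ideal_grant_kb  --memory the query would ideally need
       ,st.[text]
FROM sys.dm_exec_query_stats qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
WHERE st.[text] LIKE N'%dbo.Transactions%';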

Conclusion

Temporary tables, when used in stored procedures, may end up with completely wrong statistics. This can lead to performance degradation due to sub-optimal plans, including inaccurate memory grants, wrong temp table cardinality estimates, etc. This is partially due to the temp table caching mechanism. The feature is extremely useful since it provides more efficient temp table reuse across frequent stored procedure calls, particularly in OLTP workloads. Between stored procedure calls, the column modification counters on the interesting temp table columns do not initiate query recompiles (even if the RT crossing test evaluates to true). However, the RT crossing test that depends on the temp table’s cardinality changes, if evaluated to true, will initiate a query recompile.
Forcing the Query optimizer to perform a statement recompile does not fully resolve the problem. For cached temp tables, it fixes the cardinality estimates but does not update the relevant statistics. For non-cached tables, it resolves the problem (everything is up to date). Combining UPDATE STATISTICS #tempTable and OPTION RECOMPILE is one way to work around the problem.
The example demonstrates a common scenario in which a temp table is used to reduce query plan complexity. The table stores a relatively small number of rows and drives a more complex query. If the cached plan is sensitive to the cardinality and statistical distribution of the values in the temp table, plan reuse may result in stored procedure performance degradation.
For larger temp tables with widely differing numbers of rows on every execution, it is likely that the main query will recompile due to temp table cardinality changes.
For query plans that are less dependent on the temp table statistics and cardinality, we might consider using table variables instead (lightweight, no statistics, cardinality presented as 1 row, etc.).

 

Thanks for reading.

Dean Mincic