Tag Archives: sqlserver

Set up Python for SQL Server


Summary

In SQL Server 2016, Microsoft extended the SQL Server engine with the ability to execute external code written in the R language. SQL Server 2017 added the Python language to the mix, and SQL Server 2019 integrated Java. The new functionality is part of SQL Server's Machine Learning Services.
This blog explains how to set up SQL Server 2019 so we can use ML Services and
the Python programming language.

Prepare the environment for Python

The first step is to add Machine Learning Service Extensions / Python to our SQL Server environment.

Figure 1, ML services

The next step is to allow SQL Server to execute external scripts by turning on the external scripts enabled option at the server level.

sp_configure 
     @configname = 'show advanced options'
    ,@configvalue = 1
GO
RECONFIGURE WITH OVERRIDE;
GO
 
sp_configure 
     @configname = 'external scripts enabled'
    ,@configvalue = 1            
GO
RECONFIGURE WITH OVERRIDE;
GO

At this stage, we should be ready to go.
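Before moving on, it is worth confirming that the setting took effect and that the external runtime responds. The snippet below is a quick sanity check, not part of the original setup steps – it assumes the SQL Server Launchpad service is running (a service restart may be required after enabling the option).

-- Confirm the server-level option is active
SELECT [name], value_in_use
FROM sys.configurations
WHERE [name] = N'external scripts enabled';

-- Smoke test: run a trivial Python script through the external script host
EXEC sys.sp_execute_external_script
     @language = N'Python'
    ,@script   = N'print("Python is up and running inside SQL Server")';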

Let’s explore how the SQL Server engine relates to the above-mentioned ML (Machine Learning) Services. The two components are “bound” by the Extensibility framework.

Launchpad

The Launchpad service, one of several services in SQL Server, has the job of managing and executing external scripts – in this case, scripts written in Python. It instantiates an external script session as a separate, satellite process.


Figure 2, Extensibility framework

The purpose of the extensibility framework is to provide an interface between the SQL Server engine and Python (and other languages). Figure 2 shows how different components interact and demonstrates security concepts.
From the super high-level point of view, the process goes like this:

  1. Our application calls the system stored procedure sys.sp_execute_external_script. The proc is designed to encapsulate code written in one of the external languages, Python in this case. The stored proc executes within the SQL Server engine – Sqlservr.exe. The engine knows that the code it needs to run contains a program written in Python and therefore starts the Launchpad.exe process. Sqlservr.exe uses the Named Pipes protocol (often used for communication between processes on the same machine) to pass the code to be executed.
  2. The Launchpad service then initiates the DLL specific to the language that needs to be run. In this case, it’s PythonLauncher.dll.
  3. PythonLauncher.dll runs Python.exe/dll, which compiles the program to bytecode and executes it.
  4. Python.exe then talks to yet another program, BxlServer.exe. Bxl stands for Binary Exchange Language. This program coordinates with the Python runtime to manage the exchange of data and the storage of working results.
  5. sqlsatellite.dll communicates with the SQL Server engine over the TCP/IP protocol and the ODBC interface. It retrieves input data sets*, and sends back result sets and Python’s standard I/O streams – stdout and stderr.
  6. BxlServer.exe communicates with Python.exe, passing messages back and forth between SQL Server and Python through sqlsatellite.dll.
  7. SQL Server gets the result set, closes the related tasks and processes, and sends the results back to the client, e.g. SSMS. A minimal round-trip example is sketched after this list.
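To make the data exchange in steps 4–6 concrete, here is a minimal round-trip sketch (not from the original post): SQL Server sends an input data set to the Python runtime, and the script hands it straight back as the output data set using the default data-frame names InputDataSet/OutputDataSet.

EXEC sys.sp_execute_external_script
     @language = N'Python'
    ,@script = N'OutputDataSet = InputDataSet'  -- echo the input data frame back to SQL Server
    ,@input_data_1 = N'SELECT CAST(1 AS INT) AS Id, CAST(''hello'' AS VARCHAR(10)) AS Msg'
WITH RESULT SETS ((Id INT, Msg VARCHAR(10)));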

Security

Security architecture for the extensibility framework can be observed in Figure 2.

  • The client uses Windows/SQL login to establish a connection with SQL Server and execute Python code encapsulated in sys.sp_execute_external_script stored proc.
  • Launchpad service runs as a separate process under its own account – NT Service\MSSQLLaunchpad$<instanceName>. The Windows account has all the necessary permissions to run external scripts and is a member of the SQLRUserGroup2019 Windows group.
    How to interact with the file system (import/export csv, xlsx, …) or access resources outside the server environment is explained later in this blog.
  • Launchpad, when it invokes the appropriate runtime environment – in our case, PythonLauncher.dll – initiates an AppContainer and assigns the process to it. By default, the Launchpad service creates 21 containers, AppContainer00 – AppContainer20. The number of containers can be changed through the Sql Server Configuration Manager/Sql Server Launchpad/Advanced/Security Contexts Count property.
    An AppContainer is an internal mechanism that provides an isolated execution environment for the applications assigned to it. An application running in an AppContainer can only access resources specifically granted to it. This approach prevents the application from influencing, or being influenced by, other application processes.
  • Launchpad maps the User (used to make the connection) to an AppContainer that now defines credentials unique to the user/application pair. This also means that the application, in this case launchpad.exe, cannot impersonate db users and gain more access than it should have.

So, in a nutshell, a client connects to SQL Server using a SQL/Windows login mapped to a db user. The user runs sys.sp_execute_external_script with the Python code. SQL Server starts launchpad.exe, which runs under its own Windows account. The Launchpad service then initiates one of the twenty-one AppContainers.

Firewall rules – how to access data from the Internet

As mentioned before, there are 21 separate application containers – AppContainer00 – AppContainer20. These objects restrict access for the applications assigned to them, in this case launchpad.exe.

Figure 3, Windows Firewall outbound rules related to the AppContainers

If we disable these rules, our Python code, encapsulated in sys.sp_execute_external_script, will be able to communicate with the “outside world” and e.g. get up-to-date FX (Foreign Exchange) rates, the Bitcoin price in different currencies etc. – see the forex-python module. We can then e.g. describe the result and send it back to the client/top-level procedure. It is also possible to “join” the data to a T-SQL query result, within the same query batch, and then send it back to the caller – amazing stuff 🙂

EXEC sys.sp_execute_external_script
        @language = N'Python' --specify language to use
        ,@script   = N'
from pandas_datareader import data

start_date = "2021-11-20"
end_date = "2021-12-01"

UsToAus = data.DataReader("DEXUSAL", "fred",start=start_date, end=end_date).reset_index()

print(UsToAus)
'       
       ,@output_data_1_name =N'UsToAus'
WITH RESULT SETS  (([Date] DATE,[US to AUS]  FLOAT) )


In the example above, the code connects to FRED (Federal Reserve Bank of St. Louis, Missouri, USA) and pulls the exchange rates between the USD and AUD for a given timeframe.

File system accessibility – how to import/export result

Another cool thing you can do with Python is easily export results to disk, e.g. save a query result (report) in the form of an Excel file to a location where it can be picked up (pulled) by authorised users. It is also possible to read e.g. a lookup table – extended customer information, states, employee information etc. – from disk and join it with a T-SQL query result set, again within the same sp call.
To enable interaction with the file system we need to allow “SQL Server’s” Python to access specific folders. By default, Python can only access its working directory and its subdirectories, in this case c:\Program Files\Microsoft SQL Server\MSSQL15.SQL2019\PYTHON_SERVICES\. As mentioned before, the Launchpad.exe service runs under its own account. It starts the python.exe runtime environment in a process, again, under the same account. Finally, it instantiates a new AppContainer object to contain its activities. This container now acts as a Windows app with very limited access. Windows apps can access only those resources (files, folders, registry keys, and DCOM interfaces) to which they have been explicitly granted access (“ALL APPLICATION PACKAGES”).

Figure 4, ALL APPLICATION PACKAGES access rights

Now, we can use Pivoting with Python script from another blog to export results to disk.

Figure 5, export results to disk
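As a minimal illustration (a sketch only, not the script from the other blog), the call below writes a query result to a CSV file from within the external script. The folder C:\SqlExport and the dbo.Employees table are placeholders – the target folder must first be granted access to “ALL APPLICATION PACKAGES” as shown in Figure 4.

EXEC sys.sp_execute_external_script
     @language = N'Python'
    ,@script = N'
# write the input data frame to disk; the path is a placeholder folder
InputDataSet.to_csv(r"C:\SqlExport\employees.csv", index=False)
'
    ,@input_data_1 = N'SELECT EmployeeId, LastName FROM dbo.Employees';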

Memory allocations

It is important to understand how ML Services fits into the SQL Server memory allocation scheme, to be able to assess the possible impact it can have on database server performance.

As we can see from Figure 2,  Python code runs outside of the processes managed by Sql Server. This means that it doesn’t use memory allocated to Sql Server.

The query below shows that, by default, the maximum memory that ML Services can use, outside the memory allocated to SQL Server, is 20%, e.g. total RAM 256GB, SQL Server allocated memory = 200GB, ML Services can use up to 56GB * 0.2 = ~11GB.

SELECT * 
FROM sys.resource_governor_external_resource_pools
WHERE [name] = N'default'


Figure 6, External resource pools

We can increase max_memory_percent to e.g. 50% by altering the “default” external resource pool.

ALTER EXTERNAL RESOURCE POOL "default" 
   WITH (max_memory_percent = 50);
GO
ALTER RESOURCE GOVERNOR 
RECONFIGURE;
GO
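To see whether the external pool actually gets anywhere near the limit, we can query its runtime counters. This is a sketch only; check sys.dm_resource_governor_external_resource_pools on your build for the exact columns available.

-- Runtime statistics for the external resource pool(s)
SELECT [name]
      ,max_memory_percent
      ,peak_memory_kb          -- highest memory used by external processes since the stats were reset
      ,active_processes_count
FROM sys.dm_resource_governor_external_resource_pools;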

File locations and configurations

  • launchpad.exe: ..\MSSQL15.SQL2019\MSSQL\Binn\Launchpad.exe
  • pythonlauncher.log: ..\MSSQL15.SQL2019\MSSQL\Log\ExtensibilityLog\pythonlauncher.log
  • Config file (Python): ..\MSSQL15.SQL2019\MSSQL\Binn\pythonlauncher.config
    Python home directory, encoding standard, env path, working directory – this is where Python stores intermediate results during calculations, etc.

Python version

SQL Server 2019 supports Python 3.7.1. The package comes with the Anaconda 4.6.8 distribution. The language runtime and various packages, e.g. pandas, are installed under the root directory:

c:\Program Files\Microsoft SQL Server\MSSQL15.SQL2019\PYTHON_SERVICES

To check the Python version, use the command prompt:

c:\Program Files\Microsoft SQL Server\MSSQL15.SQL2019\PYTHON_SERVICES>python --version

To check Anaconda distribution version:

c:\Program Files\Microsoft SQL Server\MSSQL15.SQL2019\PYTHON_SERVICES\Scripts>conda.exe --version
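We can also check the version of the runtime that SQL Server actually invokes from T-SQL itself – a quick sketch:

EXEC sys.sp_execute_external_script
     @language = N'Python'
    ,@script   = N'
import sys
print(sys.version)  # prints the Python version used by ML Services to stdout
';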

But wait, there is more 🙂

With SQL Server 2019 CU3+ (latest CU) we can use the Language Extensions feature to “add” different versions of Python to the mix. In fact, we can add any language extension.
For example, I use Python 3.9.7, installed as part of the Anaconda 4.10.3 distribution, for work not related to the SQL Server platform. Wouldn’t it be cool if it was possible to use that installation in SQL Server?

Note: There are two packages developed by Microsoft specifically for the out-of-the-box Python installation: revoscalepy and microsoftml. These packages are not available in the distribution mentioned above.

A language extension is a set of DLLs written in C++ that act as a bridge between SQL Server and the external runtime. This technology allows us to use relational data (query result sets) in the external code. In a nutshell, the current Python language extension is an open-source project available here. However, the code currently supports only versions 3.7.x. So, to be able to use Python 3.9 as the external code, we need to build our own “bridge”.

Here is an excellent article on how to do it. It looks complex, but once you start you won’t be able to stop – it’s just so much fun.

Figure 7, Python 3.9 Language extension for SQL Server

In the next blog, I’ll present various dev tools we can use to construct our Python code in Sql Server. 🙂

Thanks for reading.

Dean Mincic

What is Collation in SQL Server and how it works


Summary

Collation is one of those settings in SQL Server that most developers would rather not think about, going with the defaults instead. At some point during the production life of an application, collations may decide to “strike back”, causing unpredictable errors and frustration. This blog is a high-level overview of SQL Server’s collation, how it works and what the basic idea behind it is.

Character encoding,  strings, and code points

Words in a text are created from characters. Characters are grouped into character sets, aka repertoires.
Computers, because they only deal with numbers, use character encoding to represent character sets. Each character is encoded (represented) as a unique number, also known as a code point, e.g. the letter “A” might be encoded as decimal 65, or 0100 0001 in binary.
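A quick sketch of the idea in T-SQL – ASCII()/UNICODE() return a character’s code point, and CHAR() goes the other way:

SELECT ASCII('A')      AS CodePoint_A      -- 65
      ,CHAR(65)        AS CharFromCode     -- A
      ,UNICODE(N'ч')   AS CodePoint_Che;   -- 1095 (Cyrillic che)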

Character string data type stores a sequence of characters. In terms of length, SQL Server offers two types of character string data types:

  • Fixed-length : CHAR(n), NCHAR(n)
  • Variable-length : VARCHAR(n), NVARCHAR(n) ,VARCHAR(max), NVARCHAR(max), text*, ntext*

*text and ntext are deprecated and will be removed in a future version of SQL Server

VARCHAR(max) and NVARCHAR(max) aka the LOB data types are variable-length data types that can store up to 2GB of data.

Broadly speaking, there are two main standards for mapping code points to characters: non-Unicode and Unicode.

Non-Unicode

In SQL Server, most non-Unicode characters require 1 byte(1) of storage space. The characters are encoded as numbers between 0 and 255. This means that a maximum of 256 distinct, non-Unicode encoded characters can be stored in a single byte, e.g. the character D is encoded as decimal 68 (binary 01000100) and m is encoded as decimal 109 (binary 01101101).
The problem with this is that the total number of characters in all the world’s languages exceeds the 256 available code points. So how to cram thousands of different characters into just one byte? Well, someone came up with the idea of Code pages.

(1) Most of the code pages available in SQL Server support only single-byte character sets. However, there are several code pages that allow for double-byte character sets – characters that require 2 bytes of storage space. SQL Server 2019 supports a new code page for the UTF-8 character encoding. This code page supports characters that may require 1, 2, 3, or 4 bytes of storage space.
This means that e.g. VARCHAR(50), where 50 represents the number of bytes, depending on the collation, may not be able to store a 50-character non-Unicode string, despite the common perception.
For char(n)/varchar(n), (n) represents the storage size in bytes, and for nchar(n)/nvarchar(n) it is the size in byte-pairs – in neither case is it the number of characters that can be stored, i.e. try to cram this bad boy “😆” character into, say, NCHAR(1).
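A tiny sketch of that last point: the emoji is a supplementary character that needs 4 bytes (a surrogate pair), so NCHAR(1) can hold only half of it.

SELECT DATALENGTH(N'😆')                    AS EmojiBytes    -- 4 bytes (surrogate pair)
      ,DATALENGTH(CAST(N'😆' AS NCHAR(1)))  AS NChar1Bytes;  -- 2 bytes: only half of the pair fits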

Code page

A code page is basically a character set that represents an ordered list of 256 different code points that define characters specific to a group of similar languages. As of SQL Server 2019, there are 17 + 1 different code pages available.

SELECT [CodePage] = COLLATIONPROPERTY([name],'CodePage') 
      ,[No. of Collations] = COUNT(*)
FROM sys.fn_helpcollations()
GROUP BY COLLATIONPROPERTY([name],'CodePage');


Figure 1, Code Pages 

Each code page has many collations associated with it (one or more collations can define rules such as case sensitivity and sorting over the same code page). CodePage = 0 encapsulates collations that only support Unicode. These collations cannot be used for non-Unicode encoded characters – see an example here. More on the collations in the next paragraph.

The ASCII character Set

As mentioned above, in the non-Unicode world, different code pages support characters relevant to different languages. The 256 available code points are split into two groups.

  • Standard ASCII character set – code points from 0 to 127 (decimal). These code points require 7 bits of storage space, where each of the 128 possible bit patterns represents a unique character. But we have one more bit that we can use in a byte …
  • Extended ASCII character set – code points from 128 to 255 (decimal). This is an 8-bit character set where the code points represent different characters depending on the code page.

The good thing is that the standard ASCII character set contains all the characters (alphabet, numbers, punctuation symbols, etc.) that we need to write in the English language. On the other hand, the extended ASCII character set covers characters specific to languages such as Greek, Spanish, Serbian, Hebrew, Turkish, etc. This means that non-Unicode encoding covers most of the world’s languages, except Chinese, Japanese, and Korean, which require more than one byte to store a character.

The figure below shows a comparison between Cyrillic and Greek ASCII character set vs. the first 255 Unicode code points. T-SQL code used for this experiment along with the output (txt) files can be found here:  T-sql script, ASCII_Cyrillic_output, ASCII_Greek_output, and Unicode_output.

Figure 2. The first 128 Unicode code points are the same as ASCII

The first 128 ASCII characters (code points 0-127) share the same code points across different code pages and in the Unicode world. This means that, from the encoding perspective, it is irrelevant whether we go with the non-Unicode or the Unicode standard as long as our application uses only the first 128 ASCII characters. However, from a database perspective, we also need to take into consideration SQL Server collations, as these “bad boys” define not only code pages but also the rules on how to sort and compare characters.

Figure 2 shows that e.g. the character “w” is always encoded as decimal 119 (hex 0x77), regardless of the code page or Unicode – see ASCII and Unicode. Characters from 128 to 255 may have different encodings depending on the code page and/or Unicode.

If a system uses non-Unicode encoding to store e.g. Cyrillic characters specific to a particular language, we need to be sure that both the client and SQL Server use the same code page to encode text. Otherwise, the client app may send e.g. the Cyrillic character “ч” (like ch in chocolate), and SQL Server, if set up to use a Greek code page, may decode and store it as the Greek “χ” (chi) – Figure 2.
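A minimal sketch of that mix-up: the two characters happen to share the single-byte value 0xF7, so the same stored byte decodes as “ч” under a Cyrillic collation (code page 1251) and as “χ” under a Greek collation (code page 1253).

SELECT CAST(0xF7 AS VARCHAR(1)) COLLATE Cyrillic_General_CI_AS AS CodePage1251   -- ч
      ,CAST(0xF7 AS VARCHAR(1)) COLLATE Greek_CI_AS            AS CodePage1253;  -- χ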

Unicode

Some time ago, someone came up with the idea of creating a universal character encoding standard that would contain all possible characters from all the world’s languages and more. The idea was for each character to have its own unique code point that never changes, e.g. Cyrillic “ч” is always encoded as decimal 1095 and the Greek letter “χ” is always encoded as decimal 967. It basically means that the code points are the same regardless of platform, device, or application. The Unicode standard is maintained by the Unicode Consortium.

There are several different implementations of the Unicode standard, depending on the way code points (numbers) are stored in memory:

  • UTF-8 – By far the most popular implementation. It is a variable-length encoding that requires up to 4 bytes (32 bits) per character. It uses 1 byte per character for the standard ASCII characters and 2, 3, or 4 bytes for others.
  • UTF-16 – SQL Server’s default Unicode implementation. Although the standard defines code points up to 1,114,111, SQL Server’s NCHAR/NVARCHAR can store the Unicode characters only* in the code point range from 0 to 65,535. This is also known as the BMP – Basic Multilingual Plane. Every character in this range requires 2 bytes of storage space, hence NCHAR(1) requires 2 bytes. Characters above this range require 4 bytes.
  • UTF-32 – As opposed to the variable-length encodings, this one always uses 32 bits (4 bytes) per character. Hmm, that can waste a lot of space.

*Characters with code points above decimal 65,535 are called Supplementary Characters (SC). SQL Server 2012 introduced a new set of collations that enable the Unicode data types NVARCHAR, NCHAR, and SQL_VARIANT to store the whole Unicode range: the BMP and the Supplementary Characters.
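A small sketch of what the SC-enabled collations change – with a non-SC collation the surrogate pair is counted as two characters, with an _SC collation it is counted as one:

SELECT LEN(N'😆' COLLATE Latin1_General_100_CI_AS)    AS Len_NoSC  -- 2 (two UTF-16 code units)
      ,LEN(N'😆' COLLATE Latin1_General_100_CI_AS_SC) AS Len_SC;   -- 1 (one supplementary character)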

Collations

SQL Server collations have two main purposes:

  • To implement Code pages for the non-Unicode characters. This does not apply to the Unicode characters, because, as mentioned above, they have their own code points that are “set in stone”.
  • To define rules on how to sort and compare characters. These rules apply to both non-Unicode and Unicode characters.

Different collations can implement the same code page, e.g. from the code above (Figure 1) we can see that there are 894 different collations based on the same code page, 1252. Each of the 894 collations implements different sorting/comparison rules for the same code page. The example below demonstrates how the result of the LIKE operator depends on the collation of its operands.

USE master
GO
IF DB_ID(N'testCollation') IS NULL  -- create the demo database if it does not already exist
    CREATE DATABASE testCollation;
GO
ALTER DATABASE testCollation
    COLLATE Latin1_General_100_CI_AI;
GO

USE testCollation
GO

DROP TABLE IF EXISTS #compareNonUnicode;
GO
CREATE TABLE #compareNonUnicode(txtCI VARCHAR(20) COLLATE Latin1_General_100_CI_AI
                                ,txtCS VARCHAR(20) COLLATE  Latin1_General_100_CS_AS );
GO

INSERT INTO #compareNonUnicode (txtCI,txtCS)
    SELECT 'Hello World','Hello World';
GO

SELECT *
FROM #compareNonUnicode
WHERE txtCS LIKE 'h%' -- no match
GO

SELECT *
FROM #compareNonUnicode
WHERE txtCI LIKE 'h%' --match
GO

The comparison rules apply to the Unicode data types as well – try using NVARCHAR(20) instead of VARCHAR(20) in the code above.

There is a set of collations that support only the Unicode encodings and cannot be set at the database level. These collations have Code Page = 0 (see Figure 1). Here is an interesting example of how these collations work with non-Unicode data.

Sorting rules

SQL Server collations can be divided into two sets depending on the sorting rules they implement.

  • Windows collations – based on the Windows OS system locale. These collations use the same algorithms for sorting Unicode and non-Unicode data.
  • SQL collations – based on previous versions of SQL Server. This set of collations uses different algorithms for sorting Unicode and non-Unicode data, which can produce different results when comparing the same character string encoded as non-Unicode and Unicode. SQL collation names begin with “SQL_%“.

This script lists all SQL and Windows collation sets available in SQL Server.

--SQL and Windows collation sets
SELECT CodePage = COLLATIONPROPERTY([name],'CodePage')
                 ,CollationSet = IIF(name LIKE '%SQL%','SQL Collation','Windows Collation')
                ,[name]
                ,[description]
FROM sys.fn_helpcollations()
ORDER BY CodePage ,CollationSet 

--No. of SQL vs. Windows collation sets per code page
SELECT [CodePage] = COLLATIONPROPERTY([name],'CodePage')
       ,NoOfWinCollations = SUM(IIF([name] NOT LIKE '%SQL%',1,0))
       ,SQLCollations = SUM(IIF([name] LIKE '%SQL%',1,0))
       ,TotalNo = COUNT(*)
FROM sys.fn_helpcollations()
GROUP BY COLLATIONPROPERTY([name],'CodePage')
ORDER BY [CodePage]

Both the Windows and SQL collation sets also support binary collations (‘%_BIN’ or ‘%_BIN2’). Binary collations compare characters by comparing their code points.

Windows vs SQL Collation sorting quirks

It is interesting that, when installing a brand new SQL Server instance, the default collation is initially set to SQL_Latin1_General_CP1_CI_AS, if the OS is using the U.S. English Locale.
From the SQL Server 2000 Retired Technical documentation, we can learn that the DB installer chooses the Windows collation that supports the Windows locale of the computer on which the instance of SQL Server is being installed. This is followed by the note below:

“The Setup program does not set the instance default collation to the Windows collation Latin1_General_CI_AS if the computer is using the U.S. English locale. Instead, it sets the instance default collation to the SQL collation SQL_Latin1_General_Cp1_CI_AS. This may change in a future release“.
Well, it hasn’t been changed yet. 🙂

A later document states:
During SQL Server setup, the default installation collation setting is determined by the operating system (OS) locale. For backward compatibility reasons, the default collation is set to the oldest available version that’s associated with each specific locale. To take full advantage of SQL Server features, change the default installation settings to use Windows collations. For example, for the OS locale “English (United States)” (code page 1252), the default collation during setup is SQL_Latin1_General_CP1_CI_AS, and it can be changed to its closest Windows collation counterpart, Latin1_General_100_CI_AS_SC.

Different sorting rules

This example demonstrates how an SQL collation uses different sorting rules for non-Unicode strings. The query output shows the different sort orders for the same values encoded as Unicode and non-Unicode, e.g. the SQL collation sorts the non-Unicode “Mountain Bike A-F100” before “Mountain Bike ABC” because it treats the hyphen as a separate character, whereas the Windows collation, for the same strings, uses “word sort” rules that ignore the hyphen, hence “Mountain Bike ABC” sorts before “Mountain Bike A-F100”. A minimal reproduction is sketched after Figure 3.


Figure 3, Windows vs SQL Collation sorting rules
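The sketch below (not the original demo script) reproduces the behaviour with two literals; the COLLATE clause forces the comparison rules used by each ORDER BY.

DECLARE @t TABLE (ProductName VARCHAR(50));
INSERT INTO @t VALUES ('Mountain Bike ABC'), ('Mountain Bike A-F100');

SELECT ProductName AS [SQL collation - string sort]
FROM @t
ORDER BY ProductName COLLATE SQL_Latin1_General_CP1_CI_AS;  -- hyphen treated as a character: A-F100 first

SELECT ProductName AS [Windows collation - word sort]
FROM @t
ORDER BY ProductName COLLATE Latin1_General_100_CI_AS;      -- hyphen ignored: ABC first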

If, for some reason, we want to return a cursor* to the client, the order of the cursor elements may differ depending on the collation and the encoding of the sorted column(s).

Note: A query with a presentation ORDER BY clause, i.e. one that is not coupled with the TOP clause, returns an object with rows organized in a specific order. ANSI recognizes such an object as a CURSOR.

The query optimizer cannot use proper statistics

Now, when we think about sorting, we cannot avoid thinking about indexes. The next “quirk” demonstrates how a simple query that returns a single row, with an index on the searched column and with a predicate that appears to be SARGable, may “decide” to do a full clustered index scan instead of the expected index seek/key lookup.

The example uses a simple Python code that executes a parameterized batch request* against a single table. The code can be found here and the sample table and data SQL script are here.

Note: The Client requests & SQL events in SQL Server post explains parameterised batch requests.

The example shows how exactly the same query can behave differently depending on the database collation. If the DB uses the default SQL collation – not the one recommended by Microsoft for the OS locale “English (United States)” (code page 1252), the most commonly used locale – the query will be implemented through a sub-optimal plan.

Fire up MS Profiler and include the following events:

  • Performance:  Showplan XML Statistics Profile
  • Sessions: Existing Connection
  • Stored Procedures: RPC Completed
  • TSQL: Exec Prepared SQL, Prepare SQL, SQL:StmtCompleted, Unprepare SQL

With the testCollation database set to use SQL_Latin1_General_CP1_CI_AS, we have the following output.

Figure 4, SQL collation and sub-optimal plan

So, the client code executes a parameterized batch. It passes a Unicode parameter value, “Goldberg“. The query executes in the context of the testCollation DB, set to use the SQL_Latin1_General_CP1_CI_AS collation. The collation implements different sorting rules for Unicode and non-Unicode characters. On the left side of the predicate, we have non-Unicode values (column “LastName“) that we compare to a Unicode value.

declare @p1 int
set @p1=1
exec sp_prepexec @p1 output,N'@P1 nvarchar(16)',N'SELECT * FROM dbo.Employees WHERE LastName = @P1;',N'Goldberg'
select @p1
--note: @p1 != @P1; @p1 is the prepared query handle (int value).

The Query Optimiser, because of the different sorting rules, cannot compare the values and access the available index through a seek operation. Instead, it decides to perform a full scan on the clustered index. Using a residual predicate on the Clustered Index Scan operator, it implicitly converts each LastName value to NVARCHAR(50), the Unicode value, before comparing it with the query parameter value N'Goldberg'.

Interestingly enough, if we set up our test DB to use Windows-based collation(Latin1_General_100_CI_AS_SC), for the same OS locale, the batch request will be implemented as expected (Index seek/Key lookup).

Change the T-SQL script to set the Windows collation on the testCollation db, and repeat the test.

...
ALTER DATABASE testCollation
  --COLLATE SQL_Latin1_General_CP1_CI_AS;
    COLLATE Latin1_General_100_CI_AS_SC; --windows collation;
GO
...

Following a similar execution sequence, the Query optimizer was able to use the Index on the searched column and construct a more optimal plan.

Figure 5, Windows collation produces an optimal plan

From the experiments above, we can see that we need to be aware of the DB collation characteristics and how they may affect query execution.

It is worth mentioning that it’s always a good idea to define proper parameter data types when making RPC calls, whether it is a sproc or a parameterized batch. In this case, it was supposed to be varchar(50).

..
query = "SELECT * FROM dbo.Employees WHERE LastName = CAST(? AS VARCHAR(50));"
..

Comparing a non-Unicode string to a Unicode string – in this case, the LastName VARCHAR column to an NVARCHAR value – requires an implicit conversion. This conversion follows the data type precedence rules, which say that the varchar value must be converted to nvarchar, the type with higher precedence. This means that the query must convert every LastName value to nvarchar before evaluating the predicate LastName = N'Goldberg'. However, QO is still able to utilize the index seek. More on that in the next segment.
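As a quick illustration (a sketch against the dbo.Employees table used above, with the DB set to the SQL collation), the literal’s type alone decides whether the implicit conversion is needed:

SELECT * FROM dbo.Employees WHERE LastName = 'Goldberg';   -- varchar literal: straight index seek, no conversion
SELECT * FROM dbo.Employees WHERE LastName = N'Goldberg';  -- nvarchar literal: LastName is implicitly converted first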

More on the ODBC param. batch requests and dynamic index seek

This is not directly related to the topic of this post, but I found it super interesting, so here it is 🙂

In addition to the query behavior presented above, we can observe a few more interesting things.

ODBC Driver

The ODBC driver used by Python’s pyodbc library implements parameterized batch requests using the sys.sp_prepexec system stored proc – see Figure 4. The sproc combines sys.sp_prepare and sys.sp_execute. It is interesting that the plan generated by sys.sp_prepexec does not “sniff” the first parameter passed – in our case, Mr. “Goldberg”. We would expect QO to use histogram info to find out how many Goldbergs there are in the LastName column. In this particular case, it would use AVG_RANGE_ROWS = 1.
Instead, QO used the General (All) Density (aka Density Vector) = 0.003703704. The reciprocal of the density vector represents the number of unique LastNames (1/0.003703704 = 270). So how many Goldbergs did QO estimate? Density Vector * Table Cardinality –> 0.003703704 x 296 = 1.0963. This information is available in the properties of the Index Seek operator.
If we run the same code using e.g. the .NET Framework Data Provider for SQL Server (pshell example), the parameterized batch will be implemented using the sys.sp_executesql system sproc, and QO will use histogram data to make a more accurate estimate of the number of qualifying rows for the “LastName” predicate.
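To check those numbers yourself, the statistics behind the LastName index can be inspected directly. The statistics/index name below is hypothetical – use the one that actually exists on your dbo.Employees table.

-- Density vector for the LastName statistics (hypothetical statistics name)
DBCC SHOW_STATISTICS ('dbo.Employees', 'IX_Employees_LastName') WITH DENSITY_VECTOR;
-- Estimate used by the sp_prepexec plan: density * table cardinality = 0.003703704 * 296 = ~1.0963 rows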

Dynamic index seek

The shape of the plan presented in Figure 5, includes ConstantScan and ComputeScalar operators which interact with the Index Seek operator. This plan shape implements an index seek with a dynamic seek range data access pattern.
Let’s generate the same plan in SSMS.

DBCC TRACEON(2486);
    SET STATISTICS XML ON;
       DECLARE @p1 INT
       SET @p1=-1
       EXEC sp_prepexec @p1 OUTPUT,N'@P1 nvarchar(16)',N'SELECT * FROM dbo.Employees WHERE LastName = @P1 OPTION(RECOMPILE);',N'Goldberg'
    SET STATISTICS XML OFF;
DBCC TRACEOFF(2486);

EXEC sys.sp_unprepare @p1
GO

The undocumented trace flag 2486 exposes the expression values (Expr1003, Expr1004, and Expr1005) assigned during runtime. The values are visible in the XML query plan.

<ValueVector>
    <ColumnReference Column="Expr1004" />
    <ColumnReference Column="Expr1005" />
    <ColumnReference Column="Expr1003" />
</ValueVector>
<ScalarOperator ScalarString="GetRangeThroughConvert(N'Goldberg',N'Goldberg',(62))">
    <Intrinsic FunctionName="GetRangeThroughConvert">
    <ScalarOperator>
        <Const ConstValue="N'Goldberg'" />
    </ScalarOperator>
    <ScalarOperator>
        <Const ConstValue="N'Goldberg'" />
    </ScalarOperator>
    <ScalarOperator>
        <Const ConstValue="(62)" />
    </ScalarOperator>
    </Intrinsic>
</ScalarOperator>

An interesting thing about this output is that it exposes an intrinsic (built-in) function, GetRangeThroughConvert. The purpose of this function is to narrow down the set of LastName candidates for the final result (the dynamic seek range). It is visible in the Seek Predicates segment of the plan. This significantly reduces the number of LastName column values to be implicitly converted from VARCHAR to NVARCHAR. Without this optimization, QO would have to go with a full clustered index scan, performing the conversion for all LastName values.
Once the function reduces the number of LastName candidates, the system performs the implicit conversion through the residual predicate.

Figure 6, GetRangeThroughConvert built-in function

The only thing missing is Expr1003 = 62. The number (a bitmask) defines the actual test performed – a dynamic seek range is always presented in a generic way: Start: Column > Expr, End: Column < Expr. In this case, 62 includes the Expr values in the interval, narrowing down the seek range to just one value, Mr. “Goldberg”.

Conclusion

Words in a text are created from characters. Characters are encoded as numbers. Some complex characters may be represented as a combination of two or more numbers. Non-Unicode encodings generally support 256 different characters, the first 128 being the ASCII charset. The second “half” depends on the code page. Different code pages define different sets of characters. Unicode implements a universal character encoding standard where each character has its unique, set-in-stone code point.
Collations in SQL Server have two main purposes: to implement code pages for non-Unicode characters and to define sorting and comparison rules for Unicode and non-Unicode characters. Conversions between different collations may result in errors or in sub-optimal query execution plans.
It is very important that the application code and the database are on the same page (pun intended) and that the same characters are understood in the same way.

Thanks for reading.

Dean Mincic

Bookmark lookup tipping point



Summary

It is common for production environments to have queries – query plans – that use non-covered, non-clustered indexes in combination with a bookmark (Key or RID) lookup operator. This combination of physical operators is one way the query optimiser can use a non-covered index to provide the information required by a query. However, sometimes, for the same query, the query optimiser decides to scan the whole (clustered or heap) table instead. This drastic change in the plan shape may have a negative impact on query performance.
This post attempts to explain the mechanism behind the QO decision on when to switch between the two plan shapes. The concept is known as The Tipping Point and represents the point at which the cost of the page reads required by the bookmark lookup operator exceeds the cost of a clustered index/heap table scan, i.e. the point at which the scan becomes less expensive than the non-clustered index seek.

How bookmark lookup works

Before diving into the tipping point specifics, it is good to understand how the bookmark lookup operator works in combination with a non-clustered, non-covered index. A bookmark lookup (Key or RID) is a physical operator used to find data rows in the base table (clustered or heap) using a clustered index key (Key lookup) or a row locator (RID lookup).
Let’s create a test environment we can use throughout the blog.

Create test environment

Create a test table

--DB compatibility level 150
DROP TABLE IF EXISTS dbo.TestBookmarkLookupthreshold;
GO

CREATE TABLE dbo.TestBookmarkLookupthreshold(
		EmployeeId INT IDENTITY(1,1)
		     PRIMARY KEY CLUSTERED     --4
		,[Name]  CHAR(120) NOT NULL    --120
		,Surname CHAR(120) NOT NULL    --120
		,Notes   CHAR(245)             --245
		,SearchValue INT  NOT NULL     --4
		     INDEX NCI_SearchValue UNIQUE NONCLUSTERED(SearchValue) 
)
GO

Insert 100,000 rows

;WITH generateRows AS
(
	SELECT TOP(100000) n = ROW_NUMBER() OVER(ORDER BY (SELECT NULL))	
	FROM sys.columns c
		CROSS APPLY sys.columns c1
			CROSS APPLY sys.columns c2
	ORDER BY (SELECT NULL)
)
INSERT INTO DBO.TestBookmarkLookupthreshold (
			   [Name]
			   ,Surname
			   ,Notes
			   ,SearchValue
)
	SELECT [Name]  = 'MyNameIs_'+CAST(n AS VARCHAR(10))
	      ,Surname = 'MyLastNameIs_' + CAST(n+100 AS VARCHAR(10))
	      ,Notes   = 'MyNotes..'
	      ,SearchValue = n
	FROM generateRows
	ORDER BY n
GO

The test object properties that are interesting for our experiment:

  • Unique clustered index on EmployeeId.
  • Unique, non-clustered index on the SearchValue column.
  • The SearchValue column contains unique, ever-increasing integer values. The values match the EmployeeId values.
  • The row size is exactly 500 bytes: 493 bytes used by the five fixed-length columns + 7 bytes row overhead. More on this can be found here.

Key lookup scenario

The query below returns 500 rows (all columns) for a range predicate.

DBCC TRACEON(652);
GO
  SET STATISTICS XML,IO  ON;
	SELECT *
	FROM dbo.TestBookmarkLookupthreshold
	WHERE SearchValue BETWEEN 1000 AND 1499
  SET STATISTICS XML,IO  OFF
DBCC TRACEOFF(652);
GO

Note: Trace flag 652 disables page pre-fetching scans (read-ahead). Disabling this storage engine feature will help us reconcile the number of I/O operations reported by STATISTICS IO with the number of rows selected by the query. More on the trace flag later in the blog.

Analyse key lookup query plan

The figure below consolidates three sets of information related to our test query – the graphical execution plan shape, the basic properties of the two physical operators, and the number of I/O reads performed on the test table.


Figure 1, Key lookup, index seek plan

We read the query plan as the following sequence of events:

  • The query optimiser chose a key lookup/non-clustered index seek routine to implement the query request.
  • The Nested Loops operator requested, for its outer input, all valid rows (rows that pass the seek predicate ..SearchValue BETWEEN 1000 AND 1499..) from the NCI_SearchValue index. The index seek (index b-tree traverse) was executed once, resulting in 500 rows and two columns – SearchValue and EmployeeId. The latter also acts as a pointer to the full-size rows stored in the clustered index leaf level.
  • The Nested Loops operator requested, through its inner input, the rest of the columns selected by the query – Name, Surname and Notes. The search (key lookup operator) was executed 500 times, once per row in the outer input, returning a new set of 500 rows – a row per key search. Each execution traversed the clustered index b-tree structure using EmployeeId as a seek predicate, in order to pinpoint the qualifying rows.
  • For each key lookup search, the Nested Loops operator combined the two outputs – the SearchValue and EmployeeId from the outer input with the Name, Surname and Notes from the inner input – forming the shape of the final result set.

The next thing we need to understand is the relationship between the number of I/O reads required to implement the above routine and the number of rows processed in the process.

Figure 1 shows that the operation required 1503 logical reads. A logical read is a situation when SQL Server processes an 8KB page from a RAM memory segment called the buffer cache. If the requested page is not already in the buffer cache, the storage engine needs to perform a physical read operation in order to get the same 8KB page from the disk.
The properties of the two physical operators (NCI seek and Key lookup) show that the system read 500 rows from the non-clustered structure in one go and performed 500 operations on the clustered index, returning a row per operation.

Now we need to dive a little bit deeper into SQL Server’s storage internals in order to find all the physical pages that were processed during the operations. The query below gives us a high-level overview of the index structures used in the test. The non-clustered index b-tree has two levels and a total of 175 pages, whereas the clustered index b-tree has three levels and a total of 6273 pages.

SELECT idx.[name]
	  ,idx.[type_desc]
	  ,pstats.index_level
	  ,pstats.index_depth
	  ,pstats.avg_record_size_in_bytes --all fixed length data type columns	  
	  ,pstats.record_count	  
	  ,pstats.page_count
	  ,idx.index_id
	  ,TotalPageNo = SUM(pstats.page_count) OVER(PARTITION BY idx.index_id) --total no of pages per index
 FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID(N'dbo.TestBookmarkLookupthreshold','U'), NULL, NULL , 'DETAILED') pstats
     LEFT OUTER JOIN sys.indexes idx
        ON idx.object_id = pstats.object_id
           AND idx.index_id = pstats.index_id;

Figure 3, the total number of pages per index 

The next query gives us a detailed view of the index pages across the b-tree levels – the doubly linked list data structure.

SELECT  idx.[name]
       ,idx.[type_desc]
       ,aloc.page_type
	   ,aloc.page_type_desc
	   ,aloc.previous_page_page_id
	   ,aloc.allocated_page_page_id
	   ,aloc.next_page_page_id
	   ,aloc.extent_page_id
	   ,idx.index_id
	   ,aloc.page_level
FROM sys.dm_db_database_page_allocations(DB_ID(), OBJECT_ID(N'dbo.TestBookmarkLookupthreshold','U'), NULL, NULL , 'DETAILED') aloc
LEFT OUTER JOIN sys.indexes idx
        ON idx.object_id = aloc.object_id
           AND idx.index_id = aloc.index_id
WHERE aloc.page_type IS NOT NULL 
   AND aloc.index_id = 1 --2 (0 - heap, 1- clustered index , > 1 non-clustered index
ORDER BY aloc.page_level DESC

Finally, the following query gives us a sneak peek at the actual rows on the index/data pages.

DBCC  TRACEON(3604); --allows DBCC PAGE command to send its output to SSMS
GO
DBCC PAGE ('Test2019DB', 1,832,3) --replace db name/page_id with your values
DBCC TRACEOFF(3604);
/*
DBCC PAGE ( { database_name | database_id | 0}, file_number, page_number
[,print_option ={0|1|2|3} ]) [WITH TABLERESULTS]
*/
GO

Data access pattern

The following diagram represents data access pattern used by the key lookup routine.

--test query
SELECT *
FROM dbo.TestBookmarkLookupthreshold
WHERE SearchValue BETWEEN 1000 AND 1499


Figure 2, Nonclustered index & key lookup internals

Data access pattern analysis

Our next goal is to reconcile the total number of logical I/O reads previously reported by STATISTICS IO (1503) with the diagram above.
Let’s find out the total number of I/O read operations required to generate one row (out of a total of 500 rows). The following sequence of events is derived from Figure 2.

  1. The Nested Loops operator requests all eligible rows from the index seek operator. The non-clustered index is a non-covered index and can provide only the SearchValue and EmployeeId columns. The index seek operator uses the two seek predicates to find the range of values, SearchValue >= 1000 AND SearchValue <= 1499.
  2. Non-clustered index traverse. The index seek operator reads the root page (PageId = 1:16448) of the non-clustered index, making the first I/O read (NoOfReads = 1).
  3. The index seek operation, following the lower boundary (SearchValue = 1000) of the search range, finds a pointer (PageId) which points to the index leaf-level page that contains the full (two-column) index row (ROOT PAGE: SearchValue = 597, PageId = 1:16456). It also knows that PageId = 1:16456 alone cannot provide the complete range of requested values, but only SearchValues >= 1000 AND SearchValues < 1157. It “remembers” the following pointer, PageId = 1:16457, which can provide the rest of the values, SearchValue >= 1157 AND SearchValue <= 1499.
  4. The index seek operator performs the second I/O read (PageId = 1:16456) following Path (A). The total number of I/O reads is now (NoOfReads = 2).
  5. After storing the first 165 rows found on PageId = 1:16456 in memory, the operator follows Path (B). The operation is known as a “partial index scan“. The operator knows that the subsequent page (PageId = 1:16457) contains the rest of the requested rows (335 rows). The current page also has pointers to the previous and next pages (doubly linked list). Path (B) makes the third read (NoOfReads = 3).
  6. The Nested Loops operator has now received all 500 requested rows from its outer input, the NCI index seek operator.
  7. The Nested Loops operator now performs the Key Lookup operation over the clustered index, 500 times, once per row collected from the outer input.
  8. Clustered index traverse (singleton search). On its very first execution, the key lookup operator uses the first row from the Nested Loops outer input (EmployeeId = 1000) and performs its first page read (PageId = 1:2680). The page is the root level of the clustered index b-tree. The operation follows Path (C), increasing the total number of reads (NoOfReads = 4).
  9. The clustered root page provides a pointer (PageId = 1:832) which points to the first index intermediate level. The page maps all EmployeeIds between NULL and less than 4305. The row with EmployeeId = 1000 falls into this range. Following Path (D), the operator makes its second read and increases the total number of reads (NoOfReads = 5).
  10. The intermediate page 1:832 provides information about the leaf-level page (PageId = 1:1029) that contains the first full row.
  11. The process now follows Path (E) and makes its final, third clustered index read (NoOfReads = 6).
  12. The full row is then passed from the Nested Loops operator to the SELECT operator.
  13. The Nested Loops operator continues to progress through the list of 500 rows from its outer input, repeating steps 8-11 until all 500 rows have been processed.

The total number of reads is
Total No Of Reads = Index Seek operation (3 reads) + 500 * Key Lookup operation (3 reads) = 3 + 500 * 3 = 1503.
This is an exact match with the number of logical reads reported by STATISTICS IO.

Important observations

From the storage level perspective, one of the main differences between the two access patterns is that the clustered index seek (partial scan) is a sequential read I/O operation, whereas the key lookup (singleton clustered index seek) is a random read I/O operation. Sequential reads are generally less expensive (fewer mechanical movements at the disk level) than random reads. This is also true for RAM/SSD, although they don’t have moving parts. This is a very high-level observation on data storage systems. 🙂

The number of random reads depends on the row size. The wider the row, the more random reads the key lookup needs to perform to get the same result set. More about this later in the post.

Read ahead optimisation

Earlier in the blog I used trace flag 652 to disable page pre-fetching, aka read-ahead. Read-ahead is an asynchronous I/O mechanism built to overcome the gap between CPU and I/O performance. Because the CPU is many times faster than any storage system, SQL Server’s storage engine tries to read up to 64 sequential pages (8 extents) before they are requested by a query. This results in more logical reads, but is faster than performing physical reads only when required. Although the number of read-ahead pages may vary, the mechanism reads pages from the index intermediate level(s) in order to identify the leaf-level pages to be read in advance. In our case, the functionality would, if not turned off, add a few extra pages to the STATISTICS IO report, and we wouldn’t be able to reconcile the reads with the diagram in Figure 2.
Try running the same test query without turning on the trace flag. The number of logical reads should be greater than 1503.

The tipping point

The tipping point is the critical number of rows after which the query optimiser decides to perform a clustered index scan instead of the non-clustered index/key lookup data access pattern.
As previously shown, in the non-clustered index/key lookup scenario, the number of rows requested by a query relates to the number of random reads. The main concern when determining the tipping point is actually the number of data pages that need to be “randomly” read from the clustered index leaf level – a read per requested row. In our example this number is 500. The approach excludes the clustered index non-leaf level page reads and the non-clustered index reads altogether.
The number of pages that represents the tipping point is between 1/4 and 1/3 of the clustered index data pages (leaf level). If a query requests less than 1/4 (25%) of the number of clustered index leaf-level pages, the query optimiser is most likely to allow random reads and the non-clustered index/key lookup data access pattern. However, if a query requests more than 1/3 (33%) of the pages, the query optimiser will implement a clustered index scan data access pattern.
Figure 3, The tipping point

Knowing that a random read corresponds to a selected row, we can express the tipping point as a number of rows;

                      1/4 (no. of data pages)  <=  Tipping point (rows)  <=  1/3 (no. of data pages)

In our example, the tipping point is expected to be somewhere between 1562 and 2083 rows.
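The boundaries can be computed directly from the leaf-level page count of the clustered index – a small sketch using the same DMV as in the Figure 3 query:

-- Expected tipping-point range derived from the clustered index leaf-level page count
SELECT LeafLevelPages   = page_count
      ,TippingPointLow  = page_count / 4.0   -- ~25% of the data pages
      ,TippingPointHigh = page_count / 3.0   -- ~33% of the data pages
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.TestBookmarkLookupthreshold','U'), 1, NULL, 'DETAILED')
WHERE index_level = 0;  -- leaf level of the clustered index (index_id = 1)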
So where exactly is our tipping point?
One approach is to apply binary search algorithm to the tipping point interval and perform trial and error approach until the query plan changes.
The other approach is to construct some kind of a program to do that for you 🙂

SET NOCOUNT ON;
GO

DECLARE @start INT
       ,@end INT 
       ,@IndexScan NVARCHAR(1000);

SELECT @start = 6250.00 / 4
      ,@end = 6250.00 / 3;

SELECT [Tipping point range lower boundary] = @start
      ,[Tipping point range higher boundary] = @end;


;WHILE @start <= @end
BEGIN

	SET FMTONLY ON	
		SELECT * --thisIsMyLittleQuery
		FROM DBO.TestBookmarkLookupthreshold
		WHERE SearchValue <= @start
		OPTION (RECOMPILE)
	SET FMTONLY OFF

	;WITH XMLNAMESPACES (default 'http://schemas.microsoft.com/sqlserver/2004/07/showplan' )
	,individualPlans AS 
	(
		SELECT  DatabaseName  = DB_NAME(s2.[dbid])
			   ,ProcName      = OBJECT_NAME(s2.objectid, s2.[dbid])
			   ,sql_statement = SUBSTRING( s2.[text]
										   ,(s1.statement_start_offset / 2) + 1 --/2 - 2 bytes per character; s2.[text] is of type nvarchar(max). Start position is byte 0
										   ,( 
												(CASE -- check for the end of the batch
													WHEN s1.statement_end_offset = -1 THEN DATALENGTH(s2.[text]) -- -1 represents the end of the batch (last query in the batch)
													ELSE s1.statement_end_offset
												 END) 
												- s1.statement_start_offset
											 ) / 2 + 1
								 )
				,query_plan = CAST(s3.query_plan AS XML) 
		FROM    sys.dm_exec_query_stats AS s1
			CROSS APPLY sys.dm_exec_sql_text(s1.[sql_handle]) AS s2
				CROSS APPLY sys.dm_exec_text_query_plan(s1.plan_handle, s1.statement_start_offset, s1.statement_end_offset) s3 -- get the query plans
		WHERE s2.text LIKE '%SELECT * --thisIsMyLittleQuery%'
	)
	SELECT @IndexScan = query_plan.value('(/ShowPlanXML/BatchSequence/Batch/Statements/StmtSimple/QueryPlan/RelOp/@PhysicalOp)[1]','nvarchar(max)')
	FROM individualPlans
	WHERE sql_statement LIKE 'SELECT * --thisIsMyLittleQuery%';

	IF @IndexScan = 'Clustered Index Scan'
	BEGIN
		SELECT [TippingPoint (NoOfrows)] = @start
		BREAK;
	END 

	SET @start +=1;

END

The script runs the same query for different SearchValue values. The values are within the expected tipping point range, starting from the lower boundary. The query result is suppressed by the FMTONLY ON session setting. OPTION(RECOMPILE)* in this context ensures that the value of the local variable @start is known to the Query Optimiser when creating the execution plan for the query. For each query run, the program checks the current query plan’s RelOp element. If the element’s @PhysicalOp attribute is set to ‘Clustered Index Scan‘, the program terminates and selects the current SearchValue value. That value represents the tipping point we are looking for.
Figure 4, The exact Tipping point number of rows

Note: Instead of using OPTION(RECOMPILE) we could use dynamic string execution.

/*
SET FMTONLY ON	
   SELECT * --thisIsMyLittleQuery
   FROM DBO.TestBookmarkLookupthreshold
   WHERE SearchValue <= @start
   OPTION (RECOMPILE)
SET FMTONLY OFF
*/
EXEC ('SET FMTONLY ON
       SELECT * --thisIsMyLittleQuery
       FROM DBO.TestBookmarkLookupthreshold
       WHERE SearchValue <= ' + CAST(@start AS VARCHAR(10)) + '
       SET FMTONLY OFF')

This approach constructs and optimises the query at run-time. In this case, the local variable @start gets evaluated and is treated as a literal within the dynamic string. It is most likely that the query plan will not be parameterised and the individual plans (one per execution) will be cached as ad hoc plans. This may lead to plan cache pollution, but that is a topic for a separate blog 🙂

Let’s check the tipping point number of rows.

-- check the tipping point TP = 1677, Row Size 500bytes.
DBCC TRACEON(652);
GO
  SET STATISTICS XML,IO  ON;
	SELECT *
	FROM dbo.TestBookmarkLookupthreshold
	WHERE SearchValue < 1677;
  GO
	SELECT *
	FROM dbo.TestBookmarkLookupthreshold
	WHERE SearchValue < 1678;
  GO
  SET STATISTICS XML,IO  OFF;
DBCC TRACEOFF(652);
GO

Figure 5, Query plan change

In terms of the number of pages, the tipping point is expected in the range from 25% to 33% of the total number of clustered index data pages (leaf level). In our example, the range was between 1562 and 2083 pages.
From the number-of-rows point of view, the tipping point was 1677 rows, which is (1677 / 100000) * 100 = ~1.7% of the total number of rows in the table (clustered index leaf level). This means that SQL Server is very conservative about when to use the bookmark lookup data access pattern, although the percentage of rows depends on the row size and probably other conditions, e.g. memory pressure, parallel query execution, etc.

Tipping point & row size

As mentioned above, the number of random reads in the non-clustered index/key lookup access pattern depends on the size of a row. The wider the row, the more random reads the key lookup needs to perform to get the same result set.

Let’s increase our test table row size from 500 bytes (493 bytes + 7 bytes overhead) to 1000 bytes. The new row size will expand the table’s footprint.

ALTER TABLE dbo.TestBookmarkLookupthreshold
	ALTER COLUMN Notes CHAR(745);
GO

ALTER INDEX NCI_SearchValue ON dbo.TestBookmarkLookupthreshold
	REBUILD;
GO
--this is why it's always better to name db constraints/indexes :)
ALTER INDEX [PK__TestBook__7AD04F11C1954CB9] ON dbo.TestBookmarkLookupthreshold
	REBUILD;
GO

The number of clustered index data pages required to store 100,000 rows is now doubled, 12502 to be precise (Figure 3 query). The tipping point is now expected to be in the range between (1/4) * 12502 and (1/3) * 12502, or 3125 and 4167 rows.

If we run the query from Figure 4, we’ll get the exact tipping point, 3228 rows.

-- check the tipping point TP = 3228 , Row Size 1000bytes.
DBCC TRACEON(652);
GO
  SET STATISTICS XML,IO  ON;
	SELECT *
	FROM dbo.TestBookmarkLookupthreshold
	WHERE SearchValue < 3228;
  GO
	SELECT *
	FROM dbo.TestBookmarkLookupthreshold
	WHERE SearchValue < 3229;
  GO
  SET STATISTICS XML,IO  OFF;
DBCC TRACEOFF(652);
GO


Figure 6, Query plan change (row size 1000bytes)

The interesting thing here is that now, with the wider rows, the tipping point represents (3228 / 100000) * 100 = ~3.2% of the total number of rows in the table, which is almost double the 1.7% calculated for the 500-byte rows.

The tipping point experiments can also be put in the context of a cached plan. If we wrap our test query in a stored procedure and make the local variable @start the procedure’s input parameter, on the very first call the query optimiser will create and cache the query plan using the parameter value passed (parameter sniffing). Subsequent calls will reuse that cached plan regardless of whether the new parameter values fall below or above the tipping point – which is exactly how a seek/key lookup plan compiled for a small range can end up performing a huge number of random reads.
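A small sketch of that scenario (the procedure name is made up for illustration):

CREATE OR ALTER PROCEDURE dbo.usp_TestTippingPoint  -- hypothetical wrapper around the test query
	@start INT
AS
	SELECT *
	FROM dbo.TestBookmarkLookupthreshold
	WHERE SearchValue <= @start;
GO
EXEC dbo.usp_TestTippingPoint @start = 50;    -- first call: plan compiled and cached for ~50 rows (seek + key lookup)
EXEC dbo.usp_TestTippingPoint @start = 5000;  -- reuses the cached seek/lookup plan, now well past the tipping point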

Conclusion

The concept known as The Tipping Point represents the point at which the number of page reads required by the bookmark lookup operator makes a clustered index/heap table scan less expensive than the non-clustered index seek. In this context, a bookmark lookup operator (Key or RID) is coupled with a non-clustered, non-covered index – the Index Seek operator. The latter performs sequential I/O operations, whereas the former performs a number of random-access I/O read operations. Random I/O reads are generally more expensive than sequential I/O reads regardless of the storage system (mechanical HDD, SSD, RAM, etc.). The query optimiser allows the bookmark lookup/index seek data access pattern only if the number of clustered index pages that need to be randomly accessed does not exceed 1/4 of the total number of clustered index data pages (leaf level). If the number of pages exceeds 1/3 of the total number of clustered index data pages, the query optimiser will choose a clustered index scan instead. This is also true for the RID lookup/table scan access pattern when the table is a heap.
The range of data pages between 1/4 (25%) and 1/3 (33.3%) of the total data pages defines The Tipping Point space. In this scenario, the number of randomly accessed pages relates to the total number of selected rows. However, 25% – 33% of the pages represents only a fraction of the total number of rows – for a 500-byte row size, between 1.6% and 2%. The range also depends on the row size. For the same number of rows and with the row size set to 1000 bytes, the range increases to 3% – 4% of the total number of rows.

I wish to thank my dear colleague and a great SQL Server enthusiast, Jesin Jayachandran, for inspiring me to write this blog.

Thanks for reading.

Dean Mincic