Adding non-NULL value columns ONLINE

Adding a non-null column to a big table on a busy database can be tricky business. It might seem like a simple task of altering the table and adding a column (an integer data type column – 4 bytes more or less, who cares), but it’s not. DBAs are always extra careful when performing these sorts of actions since they can have serious implications for the database applications. For example, adding a non-null column to a busy table which contains 10M rows means that every row must be updated with a value. The table may be accessed by many different applications many times in a short period of time. These requests can be seriously blocked by the alter table/add a NOT NULL column/update the column with a value sequence, causing application timeouts, deadlocks and all sorts of headaches.

One of the common approaches is to add a NULL column, then update the column values across all rows using a batch update scenario (update N rows per transaction), and then alter the column to be non-null. This operation is usually performed “offline”, or when the table is “quiet”.
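That batch-update approach can be sketched as follows (a sketch only – the table/column names are hypothetical, and the batch size and delay are arbitrary values you would tune for your own workload):

```sql
--1. add the column as NULL first (a metadata-only change)
ALTER TABLE dbo.BusyTable
    ADD NewCol INT NULL;
GO
--2. backfill the values in small batches to keep each transaction short
WHILE 1 = 1
BEGIN
    UPDATE TOP (10000) dbo.BusyTable
        SET NewCol = 46
    WHERE NewCol IS NULL;

    IF @@ROWCOUNT = 0 BREAK;   --nothing left to backfill
    WAITFOR DELAY '00:00:01';  --let the concurrent workload breathe
END
GO
--3. finally, disallow NULLs (this validates the column values)
ALTER TABLE dbo.BusyTable
    ALTER COLUMN NewCol INT NOT NULL;
```

Keeping each UPDATE transaction small limits lock escalation and keeps blocking of the concurrent workload short.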

Starting with SQL Server 2012 (Enterprise Edition), the operation is not “painful” any more. Adding column(s) to a busy table is now instantaneous and has less impact on OLTP operations. In fact, the operation can be performed online.
This still requires taking into consideration other factors that can impact the operation, such as: the data type of the new column, row length etc.

The task for today will be to add a new, non-null column of integer data type with the default value 46 to a busy table that has 1M rows 🙂

Test table

IF OBJECT_ID('dbo.addANonNullColOnline','U') IS NOT NULL
    DROP TABLE dbo.addANonNullColOnline
GO
--create a test table
SELECT TOP 1000000 id = IDENTITY(INTEGER,1,1)
                   ,val = CAST(c2.[name] +'AAAA' AS VARCHAR(4)) --fill all 4 bytes
                            COLLATE DATABASE_DEFAULT
    INTO dbo.addANonNullColOnline
FROM sys.columns c1
    CROSS JOIN sys.columns c2
        CROSS JOIN sys.columns c3
GO
--set one value to NULL
UPDATE dbo.addANonNullColOnline
    SET val = NULL
WHERE id = 2
id          val
----------- -----
1           bitp
2           NULL
3           colg
4           hbco
5           maxi
6           null
7           offs
...
...
1000000     runt

The table contains 1M rows. One row, id=2, has a NULL value in column “val”. The rest of the column values take the maximum available space of 4 bytes.
NOTE: VARCHAR stores non-Unicode characters, e.g. VARCHAR(4) = ’AAAA’ takes 4 bytes (1 byte per non-Unicode character).
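A quick way to confirm the storage size of a value is DATALENGTH (the NVARCHAR comparison is included just for contrast):

```sql
SELECT DATALENGTH('AAAA')  AS VarcharBytes   --4 bytes, 1 byte per character
      ,DATALENGTH(N'AAAA') AS NvarcharBytes; --8 bytes, 2 bytes per character
```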

The query below provides basic information about the table rows:

SELECT index_level
       ,page_count
       ,record_count
       ,min_record_size_in_bytes
       ,max_record_size_in_bytes
       ,avg_record_size_in_bytes
       ,SizeInMB = ROUND( ((record_count * avg_record_size_in_bytes) /1024.00 )/1024.00,0)
FROM sys.dm_db_index_physical_stats(db_id('test'),object_id('dbo.addANonNullColOnline','U'),null,null,'detailed')

The metadata information shows that the “shortest” row (id=2) takes 11 bytes of disk space, and the “longest” rows (all other rows) take 19 bytes. The whole table (all rows) takes around 18MB of disk space.
Before we add a new, not null column to the table, let’s show how the rows are physically stored.

Rows physical storage

In this case, rows are stored using the FixedVar storage format. The name comes from the order in which the Sql Server storage engine stores different data types – fixed data types (integer, decimal, char..) first and then variable data types (nvarchar, varchar, varbinary..). The format has been used since Sql Server 7.0 (Sphinx), released in 1998 🙂
The FixedVar format internally adds a number of additional bytes to every row on a page. The extra space per row is used to store information about the row itself, such as whether the row has been deleted/ghosted, the number of columns, columns with NULL values, the positions of the variable columns if any, and the timestamp and pointer to the version store if optimistic locking is enabled, etc.
For the fixed columns, storage engine will add:

6bytes + CEILING(TotalNoOfColumns / 8.00bytes)

..and for the variable part of the row

2bytes (if at least one of the variable columns has a non-null value) + 2bytes * NoOfVariableNonNULLColumns.

In our example, the minimum row size (11 bytes, row id=2) and the maximum row size (all other rows, 19 bytes) will be..

--min_record_size_in_bytes
SELECT [FixedPart: 6 + CEILING(2/8.00)] = 6 + CEILING(2/8.00)
      ,[FixedColumnValue(INT) bytes] = 4
      ,'---'='---'
      ,[VariablePart: 0 + 2*0] = 0 + 2*0 --no non-null variable columns
      ,[VariableColValue(VARCHAR(4)) bytes] = NULL
      ,TotalRowSize = 6 + CEILING(2/8.00) + 4 + 0
GO
--max_record_size_in_bytes
SELECT  [FixedPart: 6 + CEILING(2/8.00)] = 6 + CEILING(2/8.00)
       ,[FixedColumnValue(INT) bytes] = 4
       ,'---'='---'
       ,[VariablePart: 2 + 2*1] = 2 + 2*1
       ,[VariableColValue(VARCHAR(4)) bytes] = 4
       ,TotalRowSize = 6 + CEILING(2/8.00) + 4 + 4 + 4
GO

The image below shows the structure of a FixedVar row (Pro Sql Server Internals by Dimitri Korotkevitch)

Now, down to the bits and bytes. Let’s find the first data page (out of 2598) that is allocated to the table and try to reconcile the min and max rows. The undocumented virtual column %%lockres%% gives us FileId:PageId:SlotId for the selected rows.

SELECT TOP(5) f.*
              ,%%lockres%% as [FileId:PageId:SlotId]
FROM dbo.addANonNullColOnline f 
ORDER BY id

The query selects first few rows from the test table.

id          val  FileId:PageId:SlotId
----------- ---- --------------------------------
1           bitp 1:16576:0
2           NULL 1:16576:1
3           colg 1:16576:2
4           hbco 1:16576:3
5           maxi 1:16576:4

(5 row(s) affected)

With the PageID of the page we want to examine, the next step is to make sql server redirect the page content output to the screen. To achieve this we can use DBCC (Database Console Commands). The change will affect only the current session.

--check the current state of the trace flag
DBCC TRACESTATUS(3604)
GO
DBCC TRACEON(3604) --turns on trace flag 3604 for the current session only
GO
--displays the first page
DBCC PAGE (0,1,16576,3) --WITH TABLERESULTS
GO

The page snapshot below shows the page header (the first 96 bytes of the total page size of 8192 bytes) and the first couple of rows. The relevant information for this exercise is highlighted.

The page is read from the disk and put into the buffer pool. The relevant information on the page:

  1.  File No : Page number (1:16576)
  2.  m_lsn = (525:62471:2) – Log Sequence Number.
  3. The position of the first row.
    Slot(0) – the first row on the page
    Offset 0x60 – the physical position from the beginning of the page (Hex 0x60 = 96bytes)
    Length = 19 – happens to be the maximum record size. All rows except row 2 (id=2) are 19 bytes.
  4. The first row -hexadecimal(byte swapped values)

      30000800 01000000 02000001 00130062 697470

       30 – TagA (Status Bits A) 1 byte
       00 – TagB (Status Bits B) 1 byte
       0008 – Fsize 2 bytes – location of the end of the last fixed data type column.
                       8bytes = 1b(TagA) + 1b(TagB) + 2bytes(Fsize itself) + 4b(INT column)
      00000001 – The fixed data type column values. One int column with value 1.
      0002 – Total number of columns (2 bytes) = decimal 2
      00 – NullBitmap 1 byte = CEILING(TotalNoOfColumns / 8.00bytes) = CEILING(2/8.00bytes)

     TOTAL Fixed part of the row: 1+1+2+4+2+1 = 11bytes

     0001 2 bytes – No. of variable type columns = decimal 1 (If all of the variable data type columns are NULL, this section takes 0 bytes.)
     0013 2 bytes – Every two bytes store the location of the end of a variable, not NULL data type column. In this case there is only one variable column = decimal 19 – the variable column ends on the 19th byte.
(2 bytes * NoOfVariableNonNULLColumns)
     62697470 – 4 bytes – The actual variable column value ‘bitp’

SELECT CONVERT(VARBINARY(4), 'bitp')
Column 2 value
--------------
0x62697470

(1 row(s) affected)

    TOTAL Variable part of the row: 2+2+4 = 8bytes

    Total row1 size = 11bytes(Fixed part) + 8bytes(variable part) = 19bytes

  1. Column “id” value, first row
  2. Column “val” value, first row
  3. The position of the second row.
    Slot(1) – second row on the page
    Offset 0x73 – the physical position from the beginning of the page (Hex 0x73 = 115 bytes). This is the header (96 bytes) + the first row (19 bytes)
    Length = 11 – happens to be the minimum record size. All other rows are 19 bytes.
  4. Second row -hexadecimal(byte swapped values)

      10000800 02000000 020002

      10 – TagA (Status Bits A) 1 byte
      00 – TagB (Status Bits B) 1 byte
      0008 – Fsize 2 bytes – location of the end of the last fixed data type column.
                      8bytes = 1b(TagA) + 1b(TagB) + 2bytes(Fsize itself) + 4b(INT column)
      00000002 – The fixed data type column values. One int column with value 2.
      0002 – Total number of columns (2 bytes) = decimal 2
      02 – NullBitmap 1 byte = CEILING(TotalNoOfColumns / 8.00bytes) = CEILING(2/8.00bytes). 02hex = 00000010bin. This indicates that the second column in the row (the second bit is on) contains a NULL value.

      TOTAL Fixed part of the row: 1+1+2+4+2+1 = 11bytes

      Since the only variable column contains a NULL value, the variable part does not take any space.

      Total row2 size =11bytes(Fixed part) = 11bytes

    1. Column “id” value, second row
    2. Column “val” value, second row
    3. Slot 1 Column 2 Offset 0x0 Length 0 Length (physical) 0
      This shows that the position of column 2 in row 2 is 0 and the physical length of the column is also 0. This means that the value is not physically stored on the page. During the materialisation of the column value, the NULL comes from the NullBitmap byte 02.

Add a new column

Finally, we will add a column to our “hot” table. The column will be a non-null integer with the default value of 46.

ALTER TABLE dbo.addANonNullColOnline
    ADD NewCol INT NOT NULL
        CONSTRAINT DF_NewCol DEFAULT(46)
GO
SELECT *
FROM addANonNullColOnline

The action will “add” a new column and 1M 46s in a few milliseconds. We can confirm that the new column with its values is real (materialised).

id         val   NewCol
----------- ----- -----------
1           bitp 46
2           NULL 46
3           colg 46
4           hbco 46
5           maxi 46
6           null 46
7           offs 46
...
...
1000000     runt 46

How it works

After adding a new integer data type column, we would expect to see an increase in row size (by 4 bytes). We would also expect that the Sql Server storage engine somehow managed to add 1M 4-byte integer values to the table in a fraction of a second.

Let’s check rows metadata again.

SELECT index_level
       ,page_count
       ,record_count
       ,min_record_size_in_bytes
       ,max_record_size_in_bytes
       ,avg_record_size_in_bytes
       ,SizeInMB = ROUND( ((record_count * avg_record_size_in_bytes) /1024.00 )/1024.00,0)
FROM sys.dm_db_index_physical_stats(db_id('test'),object_id('dbo.addANonNullColOnline','U'),null,null,'detailed')

The query result shows that there were no changes in the row sizes.
So, what actually happened? Knowing that table rows are stored on data pages (RowStore – FixedVar storage format), we would expect an increase in the min and max row sizes. Using the formulas above…

min_record_size_in_bytes (row id=2)  = 6 + CEILING(2/8.00) + 4 + 4    +  0 + 2*0  =   6 + 1 + 8  =  15
max_record_size_in_bytes  = 6 + CEILING(2/8.00) + 4 + 4    +  2 + 2*1  + 4  =  6 + 1 + 8  + 4  + 4  =  23

The formulas are correct but the result is not 🙂

If we check the first page again, we’ll notice that:
– the m_lsn (Log Sequence Number) has not changed
– the new column (NewCol) is not stored on the page (the column offset from the beginning of the row is 0 bytes)

The unchanged m_lsn shows us that there were no changes on the page.
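As a side note, on SQL Server 2019 and later the page header, including the LSN, can also be read without trace flags through the documented sys.dm_db_page_info function (an assumption here is that you are on a version that ships this DMF; on older versions DBCC PAGE remains the only option):

```sql
--SQL Server 2019+ (and Azure SQL Database)
SELECT page_id
      ,page_type_desc
      ,page_lsn      --the m_lsn from the page header
      ,slot_count
FROM sys.dm_db_page_info(DB_ID('test'), 1, 16576, 'DETAILED');
```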

More about m_lsn

m_lsn represents the Log Sequence Number of the last transaction that changed the page. The set of log records which describe the changes (old and new versions of the data, the transaction, the affected row etc.) starts with the same lsn. The log information allows SQL Server to recover databases to a transactionally consistent state in case of a system crash or any unexpected shutdown.
SQL Server uses the WAL (Write-Ahead Logging) model, which guarantees that no data modifications are written to disk before the associated log record is written to disk.
Example: a transaction T1 updates a few columns in a row. The high-level sequence of events is:

  1. A data access operator requests, from the buffer pool manager (BPM), the data page that contains the rows to be updated. BPM checks whether the page already exists in the buffer pool. If not, BPM issues the I/O operations needed to retrieve the page from disk.
  2. The update operation (transaction T1) updates the relevant row on the page in the buffer pool. The page header m_lsn is updated. The operation creates a set of new log records (starting with the same lsn) that describes the changes made to the page. The log records are created in the log buffer.
  3. Transaction T1 commits the changes. The log buffer manager flushes the content of the log buffer (~64KB) to the disk (log file).
  4. The client application receives confirmation that transaction T1 has been successfully committed.

New column metadata

After creating a new, non-null column, instead of updating all 1M rows with some value, Sql Server 2012+(Enterprise Ed) stores the information as metadata.

SELECT TableName = object_name(p.object_id)
      ,ColumnType = type_name(pc.system_type_id)
      ,pc.max_length
      ,pc.[precision]
      ,pc.scale
      ,pc.is_nullable
      ,has_default
      ,default_value
FROM sys.system_internals_partitions p
    INNER JOIN sys.system_internals_partition_columns pc
        ON p.partition_id = pc.partition_id
WHERE p.object_id = object_id('dbo.addANonNullColOnline');

The two metadata columns hold the information about the added column. This is where 46 is materialised from, not from the page, but from the metadata.
This new behavior occurs automatically and does not impact the system.

default_value has_default

The default_value / has_default columns, which belong to the sys.system_internals_partition_columns system view, are not related to the DF_NewCol default constraint defined during the table alteration.
The metadata that describes the default constraints can be found using the query below:

SELECT parentTable = object_name(dc.parent_object_id)
      ,dc.name
      ,dc.type_desc
      ,dc.[definition]
FROM  sys.default_constraints dc
WHERE dc.name = 'DF_NewCol'

If we drop the constraint, has_default/ default_value will stay intact.
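We can verify this by dropping the constraint and re-running the internals query from above – the metadata columns are still populated:

```sql
ALTER TABLE dbo.addANonNullColOnline
    DROP CONSTRAINT DF_NewCol;
GO
--has_default is still 1 and default_value is still 46
SELECT pc.is_nullable
      ,pc.has_default
      ,pc.default_value
FROM sys.system_internals_partitions p
    INNER JOIN sys.system_internals_partition_columns pc
        ON p.partition_id = pc.partition_id
WHERE p.object_id = OBJECT_ID('dbo.addANonNullColOnline');
```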

row UPDATE

An interesting thing will happen when we update a row. Let’s update row id=2 and replace the NULL value with something.

UPDATE dbo.addANonNullColOnline
    SET val = 'Hi 5'
WHERE id = 2;
id         val   NewCol
----------- ----- -----------
1           bitp 46
2           Hi 5 46
3           colg 46
4           hbco 46
5           maxi 46
6           null 46
7           offs 46
...
...
1000000     runt 46

…and now the max,min rowsizes are changed.

Whenever we update one or more values in a row, the default value (NewCol = 46) is written to the data page. The rows that have not been updated still materialise the value 46 from the metadata.

If we inspect the first page again, we’ll find that the page has a new m_lsn and that all the column values of the row are now written on the page.
Before the change, the shortest row was row 2 (id=2), size = 11 bytes. Now, after the update, the row 2 size has increased to 23 bytes. The NewCol default value (46) has now become a physical part of the row.
Using the formulas mentioned before, the new min/max row sizes are:

max_record_size_in_bytes (row id=2)  = 6 + CEILING(2/8.00) + 4 + 4  +  2 + 2*1  + 4 =  {6 + 1 + 8 } + {4  + 4 }= 23
**4 bytes for the NewCol value of 46 and 4 bytes for the ‘Hi 5’ value that replaced the NULL value

min_record_size_in_bytes (all rows except row id=2)  = 6 + CEILING(2/8.00) + 4  +  2 + 2*1  + 4  =  6 + 1 + 4  + 4  + 4  =  19

Table row source(s)

The figure below shows that the test table materialises column values from two different sources – data pages and metadata.

We can see that only the updated row (id=2) has all the columns physically stored on the data page.
E.g. row 1 is physically stored in Slot 0, but the new column is not part of the row:
Slot 0 Column 3 Offset 0x00 Length 4 Length (physical) 0
NewCol = 46

During the production life of the table, all the rows may gradually be updated. The has_default/default_value metadata values (1 and 46 respectively) remain, but are no longer in use.
The metadata will be removed if we REBUILD the table:

ALTER TABLE dbo.addANonNullColOnline
    REBUILD;
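After the rebuild, every row physically contains the NewCol value, and re-running the internals query from above should show the metadata default gone:

```sql
--after the REBUILD, the default no longer lives in metadata
SELECT pc.has_default      --0
      ,pc.default_value    --NULL
FROM sys.system_internals_partitions p
    INNER JOIN sys.system_internals_partition_columns pc
        ON p.partition_id = pc.partition_id
WHERE p.object_id = OBJECT_ID('dbo.addANonNullColOnline');
```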

Restrictions

There are a few data types that cannot be added online.

  • Varchar(max)
  • NVarchar(max)
  • Varbinary(max)
  • XML
  • hierarchy_id
  • geometry
  • geography

Default expressions that are not runtime constants (expressions that require a different value for each row) cannot be added online, e.g. NEWID(), NEWSEQUENTIALID()..
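As an illustration of the difference (the column names here are hypothetical, added to the article’s test table), a runtime-constant default such as GETDATE() is evaluated once per statement and can therefore be stored as metadata, while a per-row expression like NEWID() forces the storage engine to physically write a value into every row – a size-of-data operation:

```sql
--runtime constant: evaluated once, can be added as a metadata-only (online) change
ALTER TABLE dbo.addANonNullColOnline
    ADD CreatedAt DATETIME2 NOT NULL
        CONSTRAINT DF_CreatedAt DEFAULT (GETDATE());
GO
--NOT a runtime constant: a different value per row, so every row must be updated
--(the statement still succeeds, but as a size-of-data operation, not online)
ALTER TABLE dbo.addANonNullColOnline
    ADD RowGuid UNIQUEIDENTIFIER NOT NULL
        CONSTRAINT DF_RowGuid DEFAULT (NEWID());
```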

The row-overflow storage format is not supported. New columns must not increase the maximum row size over the 8060-byte limit.

Conclusion

Starting from Sql Server 2012 (Enterprise Ed) it is possible to add non-null columns with values to a busy table ONLINE. The Sql Server storage engine uses metadata to store the default values for the added columns. When requested, the table columns are materialised from data pages and metadata. There are restrictions on the data types that may be used, e.g. columns with BLOB data types cannot be added online, etc. Only runtime-constant expressions can be used as default constraints.

Thank you for reading

Dean Mincic

ORDER BY – Changes everything

SQL programming has a few unique aspects such as three-valued logic (3VL), set-based programming logic and logical processing order. Deviation from the programming patterns based on the unique nature of the language usually leads to poorly performing SQL code.

RDBMS fundamentals

Sql Server as an RDBMS (Relational Database Management System), and its SQL dialect TSQL (Transact-Structured Query Language), have a strong foundation in mathematics. TSQL, as Sql Server’s language for data management and manipulation, is based on two cornerstones of mathematics – set theory and predicate logic.

Edgar F. Codd was a computer scientist who invented the relational model for database management systems. He used mathematical theories (Set theory and predicate logic) to create a general theory of data management.
IBM’s System R is the very first database system built on Codd’s relational model. In order to “force” DB vendors to follow the relational principles when designing db systems, Codd published his 12 rules which describe a relational database.
Various database vendors (Oracle, Sybase, Sql Server..) implemented the relational principles in similar but different ways. This means that the db systems “slightly” deviate from Codd’s relational theory, and the theory itself “slightly” deviates* from the original set theory and predicate logic – but that’s ok 🙂

*e.g. One of the differences between predicate logic and relational algebra is that the former supports two-valued logic (True/False), while the relational model supports three-valued logic (True/False/Unknown).

Although tsql querying design patterns are derived from relational algebra, tsql does not strictly follow the relational rules. One of the differences, which is also relevant to this article, is that a relation is not sorted → there is no relevance to the order of the elements in a set. However, tsql allows us to sort (ORDER BY) the final result-set (the one that is returned to the client). We are allowed to replace the relational result (the “final product” of the relational operations in a relational database based on relational theory) with a non-relational result – a CURSOR. That is why ORDER BY “changes everything” 🙂

Logical query processing sequence

To understand the ORDER BY operation, it is important to understand the logical query processing sequence. The concept is unique to the SQL world and defines the order of execution of different query segments. It’s also known as the theoretical execution order because the sequence may be changed by the SQL Server optimiser for many different reasons, i.e. data access patterns, query simplification etc. The query sequence:

  1. FROM – The FROM phase identifies the query’s source tables. This phase processes CROSS JOIN, ON Filter and in case of OUTER JOINs it adds the non-matched rows back to the result-set.
  2. WHERE Filter – Removes all rows from the previous stage for which the predicate does not evaluate to true.
  3. GROUP BY – Performs grouping and aggregation calculations. This phase generates one row per group.
  4. HAVING – This is the last of the three query filters. It filters the results of aggregate functions.
  5. SELECT – Processes the elements in the Select clause i.e Evaluates expressions within Select statement and/or removes duplicates(DISTINCT) and/or filters the specified percentage/number of rows(TOP) defined by the ORDER BY clause. The phase returns TABLE.
  6. ORDER BY – Sorts the rows according to the list of columns specified and returns CURSOR.
  7. FOR XML/FOR JSON – The phase converts the tabular result from the SELECT statement into XML/JSON output.

For queries that include SET operations, the logical sequence is:

  1. Query 1
  2. SET operation (union, except, intersect) or multi-set Union All
  3. Query 2
  4. Order By

Order by and …

The ORDER BY operation guarantees an ordered result set – a cursor. TSQL deviates from the ANSI standard and allows us to sort results using columns and/or expressions based on columns that are not part of the SELECT statement (this is not true if the query uses the DISTINCT clause). The ANSI standard allows sorting only by the columns mentioned in the select statement. ORDER BY can also be used as a logical sorting operator when operating as part of the TOP/OFFSET functions.
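Both rules can be illustrated with a throwaway table variable:

```sql
DECLARE @t TABLE (id INT, val VARCHAR(10));
INSERT INTO @t VALUES (1,'A'),(2,'B'),(3,'A');

--allowed: sorting by a column that is not in the SELECT list
SELECT val
FROM @t
ORDER BY id DESC;

--fails with Msg 145:
--"ORDER BY items must appear in the select list if SELECT DISTINCT is specified."
SELECT DISTINCT val
FROM @t
ORDER BY id DESC;
```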

Because ORDER BY converts a query result-set into a CURSOR, it cannot be used to define table expressions:

  • Views
  • ITVF(Inline table valued functions aka parameterised views)
  • Derived tables
  • Subqueries
  • CTE(common table expressions)

… unless used as a logical sorting with(top,offset,for xml/json)

The CURSOR must be returned to a client application that is able to “consume” the cursor records, one at a time and in order.

ORDER BY treats NULL values as equal. NULLs are sorted FIRST in ASC order.
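For example (again using a throwaway table variable), the NULL sorts before the non-NULL values in ascending order:

```sql
DECLARE @n TABLE (val INT);
INSERT INTO @n VALUES (3),(NULL),(1);

SELECT val
FROM @n
ORDER BY val ASC;
--val: NULL, 1, 3 – NULLs are treated as equal and sorted first
```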

SET operations (Union, Except, Intersect) and multi-set Union All

When using the SET operations (UNION, EXCEPT, INTERSECT) and/or the multi-set operation UNION ALL, we cannot use the ORDER BY operator in the individual statements. The following query fails at the parsing phase.

DECLARE @t1 TABLE (id INT, val VARCHAR(10));
DECLARE @t2 TABLE (id1 INT, val1 VARCHAR(10));
INSERT INTO @t1
    VALUES (1,'A'),(1,'B'),(2,'S')
INSERT INTO @t2
   VALUES (1,'A'),(1,'B1'),(2,'S'),(3,'M')

SELECT * FROM @t1 ORDER BY val DESC
INTERSECT 
SELECT * FROM @t2

Msg 156, Level 15, State 1, Line 24
Incorrect syntax near the keyword ‘INTERSECT’

Sql Server’s SET operations correspond to operators defined in mathematical set theory. Therefore, the operands must be sets – represented as Tables (Set → Relation → Table)
ORDER BY changes query output to something that ANSI SQL defined as CURSOR. Set operations do not support ordered structures.

We can use the ORDER BY clause to format the FINAL result-set for presentation purposes. The ORDER BY in the example below logically does not belong to the last query. It operates on (Set1 ∩ Set2).

SELECT * FROM @t1 --Set1
INTERSECT --∩
SELECT * FROM @t2 --Set2
ORDER BY val DESC

TOP/OFFSET-FETCH

The TOP option is unique to tsql and does not follow the ANSI guidelines. The directive simply allows us to select a number of rows, or a percentage of rows, to return. The option is supposed to be paired with ORDER BY – always.
This time ORDER BY does not serve a presentation purpose; it serves as a logical “TOP sorting” operator. It is supposed to answer the question “TOP according to what order?”. The result of a TOP/ORDER BY operation is a table – not a cursor.
Unfortunately, tsql allows us to use TOP without ORDER BY, and in that case the query returns a random, non-deterministic TOP(n) rows / TOP(n) PERCENT result set.

It is a good practice to specify ORDER BY (SELECT NULL) if our intention is to select random and/or non-deterministic TOP(n) rows. The Query Optimiser will remove the logical ordering during the simplification phase. The purpose is to let other developers know that this was done intentionally.

Test data:

IF OBJECT_ID('tempdb.dbo.#testTopOrderBy','U') IS NOT NULL
DROP TABLE #testTopOrderBy
GO
CREATE TABLE #testTopOrderBy(CustomerId INT UNIQUE NOT NULL
,OrderId INT
,OrderDate DATE)
GO
INSERT INTO #testTopOrderBy
   SELECT t.*
   FROM (VALUES (100,1001,'20170101')
                ,(106,1005,'20170210')
                ,(105,1007,'20170101')
                 
                ,(103,1008,'20170410')
                ,(104,1011,'20170511')
                ,(102,1022,'20170101')
                ,(101,1223,'20170210') ) t(custId,OrdId,odate)
CustomerId  OrderId     OrderDate
----------- ----------- ----------
100         1001        2017-01-01
106         1005        2017-02-10
105         1007        2017-01-01
103         1008        2017-04-10
104         1011        2017-05-11
102         1022        2017-01-01
101         1223        2017-02-10

(7 rows affected)

Test 1: The query returns two randomly chosen rows(TOP without ORDER BY)

SELECT TOP(2) *
FROM #testTopOrderBy
ORDER BY (SELECT NULL) --include this line to let other developers know about your intentions
CustomerId  OrderId     OrderDate
----------- ----------- ----------
100         1001        2017-01-01
106         1005        2017-02-10

Test 2: The query returns a non-deterministic result – a “non-unique” ORDER BY is used, aka a non-deterministic ORDER BY.

SELECT TOP(2) *
FROM #testTopOrderBy
ORDER BY OrderDate
CustomerId  OrderId     OrderDate
----------- ----------- ----------
100         1001        2017-01-01
105         1007        2017-01-01
(2 rows affected)
--row not included
102         1022        2017-01-01

The query “decided” to select Customers 100 and 105 but not 102. In this case, if we had a clustered index on customerId, the query would probably select customers 100 and 102.

Test 3: Two ways to make the “TOP(n)” query to return a deterministic result.

--Deterministic ORDER BY
SELECT TOP(2) *
FROM #testTopOrderBy
ORDER BY CustomerId,OrderDate
CustomerId  OrderId     OrderDate
----------- ----------- ----------
100         1001        2017-01-01
101         1223        2017-02-10

(1) The deterministic ORDER BY provides more specific information about which TOP(2) rows to select. Now the query knows exactly which TOP(2) rows to select. A combination of ORDER BY columns that provides a deterministic result-set is called a tiebreaker.

--TOP with TIES
SELECT TOP(2) WITH TIES *
FROM #testTopOrderBy
ORDER BY OrderDate
CustomerId  OrderId     OrderDate
----------- ----------- ----------
100         1001        2017-01-01
105         1007        2017-01-01
102         1022        2017-01-01

(2) WITH TIES guarantees a deterministic result based on the ORDER BY column(s). WITH TIES must have ORDER BY defined. In case of a non-deterministic ORDER BY, WITH TIES will add all the relevant rows to the result-set in order to provide determinism. In the example above it added an extra row.

OFFSET-FETCH (tsql 2012+)

OFFSET-FETCH is an ANSI SQL supported feature. Basically, the option is an advanced version of TOP. It allows us to select (fetch) only a window, or a page, of results from the result set.
As opposed to the similar TOP option, OFFSET-FETCH cannot operate without ORDER BY. On the other hand, OFFSET-FETCH does not support WITH TIES, and therefore it cannot guarantee a deterministic result set.
The following query fails because OFFSET-FETCH always operates with ORDER BY:

SELECT *
FROM #testTopOrderBy
OFFSET 0 ROWS FETCH FIRST 2 ROWS ONLY

Msg 102, Level 15, State 1, Line 25
Incorrect syntax near ‘0’.
Msg 153, Level 15, State 2, Line 25
Invalid usage of the option FIRST in the FETCH statement.

If we want to select 2 random rows from a table, we can use:

SELECT *
FROM #testTopOrderBy
ORDER BY (SELECT NULL)
OFFSET 0 ROWS FETCH FIRST 2 ROWS ONLY

The query below will return the same result as the query from the Test3.

SELECT *
FROM #testTopOrderBy
ORDER BY CustomerId,OrderDate
OFFSET 0 ROWS FETCH FIRST 2 ROWS ONLY
CustomerId  OrderId     OrderDate
----------- ----------- ----------
100         1001        2017-01-01
101         1223        2017-02-10
(2 rows affected)

Same as TOP, OFFSET-FETCH returns a table, and therefore it can be used in other table expressions, e.g. (the query does not make much sense – it’s only for testing purposes 🙂):

SELECT a.*
FROM #testTopOrderBy a
INNER JOIN (
            SELECT CustomerId
            FROM #testTopOrderBy
            ORDER BY CustomerId,OrderDate
            OFFSET 0 ROWS FETCH FIRST 2 ROWS ONLY
           ) x --the result is a table expression
ON a.CustomerId = x.CustomerId

..and this will fail. Table operators (JOIN, and the non-ANSI APPLY, PIVOT and UNPIVOT) operate only on tables (table expressions).

SELECT a.*
FROM #testTopOrderBy a
INNER JOIN (
            SELECT CustomerId
            FROM #testTopOrderBy
            ORDER BY CustomerId,OrderDate
           ) x --the result is a cursor
ON a.CustomerId = x.CustomerId

Msg 1033, Level 15, State 1, Line 38
The ORDER BY clause is invalid in views, inline functions, derived tables, subqueries, and common table expressions, unless TOP, OFFSET or FOR XML is also specified.

VIEWS

A view is a named virtual table that is defined by a query and used as a table.
A view represents a table, and a table is a logical entity that has no order to its rows. This is the reason why it is not allowed to use a presentation ORDER BY in a view. VIEWS ARE NOT ORDERED!
The view definition below will fail.

Test 4. Views are not ordered

CREATE VIEW dbo.TestViews
AS
SELECT v.CustId
      ,v.OrderId
FROM (VALUES(1,1000),(2,10001),(2,2003)) v(CustId,OrderId)
ORDER BY CustId

Msg 1033, Level 15, State 1, Procedure TestViews, Line 7 [Batch Start Line 1]
The ORDER BY clause is invalid in views, inline functions, derived tables, subqueries, and common table expressions, unless TOP, OFFSET or FOR XML is also specified.

Now, some programmers might think that they have somehow managed to override the RDBMS fundamentals by defining a view with a TOP clause specifying all 100% of rows, and an ORDER BY.

Test 5, Views with TOP 100% ordered rows

CREATE VIEW testViews
AS
    SELECT TOP(100) PERCENT
            [name]
           ,[object_id]
           ,create_date
    FROM sys.objects
    ORDER BY create_date DESC

This will pass the parsing stage and will be stored as a valid view definition. In reality the view definition does not make much sense (see the TOP/ORDER BY example above). During the simplification phase, the query optimizer will remove the TOP 100 PERCENT/ORDER BY as unnecessary logic.

SELECT * FROM  testViews
name                     object_id   create_date
------------------------ ----------- -----------------------
sysrscols                3           2014-02-20 20:48:35.270
sysrowsets               5           2009-04-13 12:59:11.093
sysclones                6           2014-02-20 20:48:35.477
sysallocunits            7           2009-04-13 12:59:11.077
sysfiles1                8           2003-04-08 09:13:38.093
sysseobjvalues           9           2014-02-20 20:48:35.610
syspriorities            17          2014-02-20 20:48:35.367
sysdbfrag                18          2014-02-20 20:48:35.563
sysfgfrag                19          2014-02-20 20:48:35.243
... etc...

The result is not ordered by create_date DESC.
In some cases, i.e. when we use TOP and ORDER BY on the clustered index key columns, the result may appear to be sorted – the QO may decide to use an ordered table scan. Again, there is no guarantee that the returned rows will be ordered.
E.g. during the final SELECT phase, the QO may decide to use SQL Server's storage engine advanced scan functionality, also known as merry-go-round scan or scan piggybacking, to return the same rows to two or more concurrent sessions. Without diving into the details of how the advanced scan works (I will cover that interesting optimisation mechanism in one of the next posts), the different sessions will get the same result sets, but in a different order.

The presentation ORDER BY will convert the unordered view result into an ordered CURSOR.

SELECT *
FROM testViews
ORDER BY create_date DESC

Window Functions

Window functions were introduced in the ANSI SQL:2003 standard. Microsoft partially implemented the functionality in SQL Server 2005 and expanded it to the full range of window functions in SQL Server 2012.
A window function operates on a number of rows (a window) in a set (table). Window functions use ORDER BY as a logical operator which defines the behaviour of the aggregate operations performed by the function. In this scenario, ORDER BY, as a part of the function, does not perform the final sort.

It is important to understand where window functions sit in the logical query execution sequence.

(5) SELECT
(5.1) Evaluate expressions in the SELECT statement
(5.2) WINDOW functions can be executed from here…
(5.3) DISTINCT
(5.4) TOP
(5.5) ….to here

(6) ORDER BY (presentation)

The example below demonstrates how window functions are positioned in the logical query execution sequence.

Test Data.

IF OBJECT_ID('tempdb.dbo.#testWindowFnLogicalSeq','U') IS NOT NULL
    DROP TABLE #testWindowFnLogicalSeq
GO
CREATE TABLE #testWindowFnLogicalSeq( 
        OrderPackDesc VARCHAR(50) 
        ,OrderValue INT)
GO
INSERT INTO #testWindowFnLogicalSeq
    SELECT *
    FROM (VALUES ('Media Pack',100)
                ,('Media Pack',90)
                ,(NULL,10)
                ,('MS Surfice accessories',200)
                ,('MS Surfice accessories',50)
        ) tab(ordrPck,val)
OrderPackDesc                                      OrderValue
-------------------------------------------------- -----------
Media Pack                                         100
Media Pack                                         90
NULL                                               10
MS Surfice accessories                             200
MS Surfice accessories                             50

(5 rows affected)

The query below sums the OrderValues per OrderPackDescription. The OrderPackDesc = NULL will be summarised as a distinct value, resulting in OrdVal=10. ISNULL(OrderPackDesc,'Media Pack') then evaluates the NULL value as "Media Pack" in the SELECT list only.

SELECT DISTINCT OrderPackDescEXPR = ISNULL(OrderPackDesc,'Media Pack')
               ,OrdVal = SUM(OrderValue) OVER(PARTITION BY OrderPackDesc
                                              ORDER BY( SELECT NULL))
FROM #testWindowFnLogicalSeq
OrderPackDescEXPR                                  OrdVal
-------------------------------------------------- -----------
Media Pack                                         10
Media Pack                                         190
MS Surfice accessories                             250

(3 rows affected)

This could lead to a logical error. We want to find the sums of order values per order pack, and we also want to treat the missing OrderPacks as 'Media Pack'. The previous query sums the "missing order packs" separately. This is a typical logical error. The error would be more difficult to spot if we had OrderValue = 190 for the unknown OrderPackDesc. In that case we would get a single total of 190 instead of 380 for 'Media Pack'.
Taking into consideration the query execution sequence, we can modify our query as:

SELECT DISTINCT  OrderPackDescEXPR = ISNULL(OrderPackDesc,'Media Pack')
                ,OrdVal = SUM(OrderValue) OVER(PARTITION BY ISNULL(OrderPackDesc,'Media Pack') ORDER BY( SELECT NULL))
FROM #testWindowFnLogicalSeq
OrderPackDescEXPR                                  OrdVal
-------------------------------------------------- -----------
Media Pack                                         200
MS Surfice accessories                             250

(2 rows affected)

…OVER(PARTITION BY ISNULL(OrderPackDesc,'Media Pack') … ) is evaluated BEFORE the window function.
Now our window function's partition knows how to treat the missing values before the aggregation part, and the execution sequence works in our favor 🙂
The example also shows that DISTINCT executes after window functions.
NOTE: It is not possible to use the alias OrderPackDescEXPR in other expressions within the same query, like … OVER(PARTITION BY OrderPackDescEXPR …). The limitation is another unique aspect of the SQL language – all-at-once operations.
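To see the limitation in action, here is a quick sketch against the same test table – referencing the SELECT-list alias inside the OVER clause of the same query fails at the binding stage (the exact message text may vary slightly by version, but it is the usual "Invalid column name" error):

```sql
--attempting to reuse the SELECT-list alias within the same query
SELECT DISTINCT OrderPackDescEXPR = ISNULL(OrderPackDesc,'Media Pack')
               ,OrdVal = SUM(OrderValue) OVER(PARTITION BY OrderPackDescEXPR --alias, not a column
                                              ORDER BY (SELECT NULL))
FROM #testWindowFnLogicalSeq
/*
Msg 207, Level 16, State 1
Invalid column name 'OrderPackDescEXPR'.
*/
```

All expressions in the SELECT list are logically evaluated "at once", so no expression can see another expression's alias – hence the repeated ISNULL(…) in the corrected query above.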
E.g. in most programming languages, to swap values between two variables we use a third, temp variable: A→temp, B→A, temp→B. In SQL we can swap table column values without using temp storage:

DECLARE @test TABLE(Age1 tinyint, Age2 tinyint)
    INSERT INTO @test
        VALUES (65,25)
SELECT * FROM @test
    UPDATE @test
        SET Age1 = Age2
           ,Age2 = Age1
SELECT * FROM @test

Window aggregate functions

Starting from SQL 2012, the ORDER BY clause can be used along with the window aggregate functions. ORDER BY dictates how the aggregate functions operate within the window partitions. It is important to understand ORDER BY in this context to be able to avoid logical errors.

Create some test data

IF OBJECT_ID('tempdb.dbo.#testWindowFnLogicalSeq','U') IS NOT NULL
    DROP TABLE #testWindowFnLogicalSeq
GO
CREATE TABLE #testWindowFnLogicalSeq(
     CustomerId INT
    ,OrderId INT
    ,OrderValue INT)
GO
;WITH createNumbers as
(
    SELECT TOP 15 ROW_NUMBER()OVER(ORDER BY(SELECT NULL)) + 1000 rn
    FROM sys.columns c1
    CROSS JOIN sys.columns c2
    ORDER BY (SELECT NULL)
)
INSERT INTO #testWindowFnLogicalSeq
    SELECT CustomerId = (rn % 3)+ 1000 -- 3 customers
          ,OrderId = rn + 1000 --some unique orders
          ,OrderValue = ABS(CAST(CHECKSUM(NewId()) AS BIGINT)) % 1500 -- "random" numbers between 0 - 1499
FROM createNumbers
GO
SELECT * FROM #testWindowFnLogicalSeq ORDER BY CustomerId,OrderId
CustomerId  OrderId     OrderValue
----------- ----------- -----------
1000        2002        870
1000        2005        169
1000        2008        859
1000        2011        516
1000        2014        11
1001        2003        692
1001        2006        683
1001        2009        986
1001        2012        403
1001        2015        71
1002        2001        419
1002        2004        213
1002        2007        609
1002        2010        289
1002        2013        38

(15 row(s) affected)
** The OrderValue column contains random values.

The following query shows how ORDER BY affects the SUM window aggregate function.

SELECT CustomerId
       ,OrderId
       ,OrderValue
       ,SUM(OrderValue) OVER() AS [GrandTotal OrderValue sum]
       ,SUM(OrderValue) OVER(PARTITION BY CustomerId) [Total OrderValue per cust.]
       --cumulative sums
       ,SUM(OrderValue) OVER(ORDER BY CustomerId) [Cumulative OrderValue per cust.]
       ,SUM(OrderValue) OVER(PARTITION BY CustomerId
                             ORDER BY OrderId) [Cumulative OrderValue per cust. & ord.]
FROM #testWindowFnLogicalSeq
ORDER BY CustomerID,OrderId --presentation purposes
CustomerId  OrderId     OrderValue  GrandTotal OrderValue sum Total OrderValue per cust. Cumulative OrderValue per cust. Cumulative OrderValue per cust. & ord.
----------- ----------- ----------- ------------------------- -------------------------- ------------------------------- --------------------------------------
1000        2002        870         6828                      2425                       2425                            870
1000        2005        169         6828                      2425                       2425                            1039
1000        2008        859         6828                      2425                       2425                            1898
1000        2011        516         6828                      2425                       2425                            2414
1000        2014        11          6828                      2425                       2425                            2425
1001        2003        692         6828                      2835                       5260                            692
1001        2006        683         6828                      2835                       5260                            1375
1001        2009        986         6828                      2835                       5260                            2361
1001        2012        403         6828                      2835                       5260                            2764
1001        2015        71          6828                      2835                       5260                            2835
1002        2001        419         6828                      1568                       6828                            419
1002        2004        213         6828                      1568                       6828                            632
1002        2007        609         6828                      1568                       6828                            1241
1002        2010        289         6828                      1568                       6828                            1530
1002        2013        38          6828                      1568                       6828                            1568

ORDER BY activates cumulative aggregations on the ordered column over the defined partition. Note that, with ORDER BY present, the default window frame is RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, which is why rows with the same ORDER BY value (e.g. the same CustomerId in the third SUM) share the same running total. The same applies to the other aggregate functions: MAX, MIN, AVG and COUNT.
Failing to understand this behavior can lead to logical errors 🙂
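As a quick sketch using the same test table, the other aggregates turn into running aggregates in exactly the same way once ORDER BY appears in the OVER clause (no sample output is shown since OrderValue is random):

```sql
--running aggregates per customer, accumulated in OrderId order
SELECT CustomerId
      ,OrderId
      ,OrderValue
      ,MAX(OrderValue) OVER(PARTITION BY CustomerId ORDER BY OrderId) AS RunningMax
      ,MIN(OrderValue) OVER(PARTITION BY CustomerId ORDER BY OrderId) AS RunningMin
      ,AVG(OrderValue) OVER(PARTITION BY CustomerId ORDER BY OrderId) AS RunningAvg
      ,COUNT(*)        OVER(PARTITION BY CustomerId ORDER BY OrderId) AS RunningCount
FROM #testWindowFnLogicalSeq
ORDER BY CustomerId, OrderId --presentation purposes
```

Remove the ORDER BY from any of the OVER clauses and the function falls back to a single aggregate per partition, exactly as in the "Total OrderValue per cust." column above.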

CONCLUSION

The main purpose of ORDER BY is to force a query to return an ordered result – a CURSOR. This changes the relational nature of the SQL language – relational query result sets are NEVER ordered. The presentation ORDER BY is an expensive operation (it needs to collect the entire result set and put it in the requested order). The same ORDER BY syntax is used as a logical operator for TOP/OFFSET-FETCH/window functions/the FOR XML formatter. In those cases, the purpose of the operator is completely different and it DOES NOT guarantee ordered output.

Thank you for reading.