Adding non-NULL value columns with values to a busy table – ONLINE

Adding a non-null column to a big table on a busy database can be tricky business. It might seem an easy task – alter the table and add a column (an integer column, 4 bytes more or less, who cares) – but it’s not. DBAs are always extra careful when performing this sort of action since it can have serious implications for the database applications. For example, adding a non-null column to a busy table that contains 10M rows means that every row must be updated with a value. The table may be accessed by many different applications many times in a short period of time. These requests can be seriously blocked by the alter table/add a not null column/update column with a value sequence, causing app timeouts, deadlocks and all sorts of headaches.

One of the common approaches is to add a NULL column, update the column values across all rows in batches (N rows per transaction), and then alter the column to be non-null. This operation is usually performed “offline” or when the table is “quiet”.
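A minimal sketch of that classic approach (table, column and batch size are hypothetical):

ALTER TABLE dbo.TestTable ADD NewCol INT NULL;   -- 1. add the column as NULL (metadata-only change)
GO
WHILE 1 = 1                                      -- 2. backfill in batches of N rows per transaction
BEGIN
    UPDATE TOP (10000) t
    SET    NewCol = 46
    FROM   dbo.TestTable AS t
    WHERE  t.NewCol IS NULL;

    IF @@ROWCOUNT = 0 BREAK;                     -- no more rows to update
END
GO
ALTER TABLE dbo.TestTable ALTER COLUMN NewCol INT NOT NULL;   -- 3. flip the column to NOT NULL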

Starting with SQL Server 2012 (Enterprise Edition), the operation is not “painful” any more. Adding column(s) to a busy table is now instantaneous and has less impact on the OLTP operations. In fact, the operation can be performed online.
This still requires taking into consideration other factors that can impact the operation, such as the data type of the new column, row length etc.

The task for today will be to add a new, non-null column of integer data type with the default value 46 to a busy table that has 1M rows 🙂

Test table

The table contains 1M rows. One row, id=2, has a NULL value for column “val”. The rest of the column values take the maximum available space of 4 bytes.
NOTE: Varchar stores non-unicode characters, e.g. VARCHAR(4) = ’AAAA’ takes 4 bytes (1 byte per non-unicode character).
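The original table-creation script isn’t shown here; below is a minimal sketch that matches the description (the table/column names and the ‘bitp’ values are assumptions based on the page dump later in the post):

CREATE TABLE dbo.TestTable
(
    id  INT        NOT NULL,
    val VARCHAR(4) NULL
);
GO
-- 1M rows: every row gets a 4-character value except id = 2, which stays NULL
;WITH E1(n) AS (SELECT 1 FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) AS t(n)),  -- 10 rows
      E2(n) AS (SELECT 1 FROM E1 AS a CROSS JOIN E1 AS b),                                -- 100 rows
      E6(n) AS (SELECT 1 FROM E2 AS a CROSS JOIN E2 AS b CROSS JOIN E2 AS c),             -- 1,000,000 rows
      Nums(id) AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E6)
INSERT INTO dbo.TestTable (id, val)
SELECT id,
       CASE WHEN id = 2 THEN NULL ELSE 'bitp' END
FROM Nums;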

The query below provides basic information about the table rows:
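A sketch of such a query, using sys.dm_db_index_physical_stats in DETAILED mode (table name assumed):

SELECT index_type_desc,
       page_count,
       min_record_size_in_bytes,
       max_record_size_in_bytes,
       avg_record_size_in_bytes
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.TestTable'), NULL, NULL, 'DETAILED');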

The metadata information shows that the “shortest” row (id=2) takes 11 bytes of disk space, and the “longest” rows (all other rows) take 19 bytes. The whole table (all rows) takes around 18MB of disk space.
Before we add a new, not null column to the table, let’s show how the rows are physically stored.

Rows physical storage

In this case, rows are stored using the FixedVar storage format. The name comes from the order in which the Sql Server storage engine stores different data types – fixed data types (integer, decimal, char..) first and then variable data types (nvarchar, varchar, varbinary..). The format has been used since Sql Server 7.0 (Sphinx), released in 1998 🙂
The FixedVar format internally adds a number of additional bytes to every row on a page. The extra space per row is used to provide information about the row itself, such as whether the row has been deleted (ghost), the number of columns, columns with NULL values, positions of the variable columns (if any), and the timestamp and pointer to the version store if optimistic locking is enabled.. etc..
For the fixed columns, storage engine will add:

6bytes + CEILING(TotalNoOfColumns / 8.00bytes)

..and for the variable part of the row

2bytes(if at least one of the varColumns has non-null value) + 2bytes * NoOfVariableNonNULLColumns.

In our example, the minimum row size (11 bytes, row id=2) and the maximum row size (all other rows, 19 bytes) work out as:

min = 6 + CEILING(2/8.00) + 4 (int id) = 11 bytes – the NULL varchar adds nothing to the variable part
max = 6 + CEILING(2/8.00) + 4 (int id) + 2 + 2*1 + 4 (‘bitp’) = 19 bytes

The image below shows the structure of a FixedVar row (Pro Sql Server Internals by Dmitri Korotkevitch).

Now, down to the bits and bytes, let’s find the first data page (out of 2598) that is allocated to the table and try to reconcile the min and max rows. The undocumented internal column/function %%lockres%% gives us FileId:PageId:SlotId for the selected rows.

The query selects first few rows from the test table.
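A sketch of that query (table name assumed):

SELECT %%lockres%% AS [FileId:PageId:SlotId],
       id,
       val
FROM dbo.TestTable
WHERE id <= 3;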

With the PageID of the page we want to examine, the next step is to allow Sql Server to redirect the page content output to the screen. To achieve this we can use DBCC (Database Console Commands). The change affects only the current session.
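A sketch, assuming the database name (the page id 1:16576 comes from the %%lockres%% output):

DBCC TRACEON(3604);                  -- redirect DBCC output to the client session
GO
DBCC PAGE('TestDb', 1, 16576, 3);    -- (database, file id, page id, print option 3 = row details)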

The page snapshot below shows the page header (the first 96 bytes of the total page size – 8192 bytes) and the first couple of rows. The relevant information for this exercise is highlighted.

The page is read from the disk and put into the buffer pool. The relevant information on the page:

  1.  File No : Page number (1:16576)
  2.  m_lsn = (525:62471:2) – Log Sequence Number.
  3. The position of the first row.
    Slot(0) – the first row on the page
    Offset 0x60 – the physical position from the beginning of the page (Hex 0x60 = 96bytes)
    Length = 19* This happens to be the maximum record size. All rows except row no. 2 (id=2) are 19 bytes.
  4. The first row – hexadecimal (byte-swapped values)

      30000800 01000000 02000001 00130062 697470

       30 – TagA 1 byte
       00 – TagB 1 byte
       0008 – Fsize 2 bytes – location of the end of the last fixed data type column.
                        8 bytes = 1b(TagA) + 1b(TagB) + 2b(Fsize itself) + 4b(INT column)
      00000001 – The fixed data type column values. One int column with value 1.
      0002 – Total number of columns (2 bytes) = decimal 2
      00 – NullBitmap 1 byte = CEILING(TotalNoOfColumns / 8.00) = CEILING(2/8.00) = 1

     TOTAL Fixed part of the row: 1+1+2+4+2+1 = 11bytes

     0001 – 2 bytes – No. of variable type columns = decimal 1 (If all of the variable data type columns were NULL, this would take 0 bytes.)
     0013 – 2 bytes – Every two bytes store the location of the end of a variable, non-NULL data type column (2 bytes * NoOfVariableNonNULLColumns). In this case there is only one variable column = decimal 19 – the variable column ends on the 19th byte.
     62697470 – 4 bytes – The actual variable column value, ‘bitp’

    TOTAL Variable part of the row: 2 + 2 + 4 = 8 bytes

    Total row1 size = 11bytes(Fixed part) + 8bytes(variable part) = 19bytes

  1. Column “id” value, first row
  2. Column “val” value, first row
  3. The position of the second row.
    Slot(1) – second row on the page
    Offset 0x73 – the physical position from the beginning of the page (Hex 0x73 = 115 bytes). This is the header (96 bytes) + the first row (19 bytes).
    Length = 11* This happens to be the minimum record size. All other rows are 19 bytes.
  4. The second row – hexadecimal (byte-swapped values)

      10000800 02000000 020002

      10 – TagA 1 byte
      00 – TagB 1 byte
      0008 – Fsize 2 bytes – location of the end of the last fixed data type column.
                      8 bytes = 1b(TagA) + 1b(TagB) + 2b(Fsize itself) + 4b(INT column)
      00000002 – The fixed data type column values. One int column with value 2.
      0002 – Total number of columns (2 bytes) = decimal 2
      02 – NullBitmap 1 byte = CEILING(TotalNoOfColumns / 8.00) = CEILING(2/8.00) = 1. 02hex = 00000010bin. This indicates that the second column in the row (the second bit is on) contains a NULL value.

      TOTAL Fixed part of the row: 1+1+2+4+2+1 = 11bytes

      Since the only variable column contains NULL value, the variable part does not take any space.

      Total row2 size =11bytes(Fixed part) = 11bytes

    1. Column “id” value, second row
    2. Column “val” value, second row
    3. Slot 1 Column 2 Offset 0x0 Length 0 Length (physical) 0
      This shows that the position of column 2 in row 2 is 0 and the physical length of the column is also 0. This means that the value is not physically stored on the page. During the materialisation of the column value, the NULL comes from the NullBitmap byte 02.

Add a new column

Finally, we will add a column to our “hot” table. The column will be a non-null integer with the default value of 46.
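A sketch of the ALTER statement, assuming the table name used earlier (the constraint name DF_NewCol is referenced later in the post):

ALTER TABLE dbo.TestTable
    ADD NewCol INT NOT NULL
    CONSTRAINT DF_NewCol DEFAULT (46);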

The action will “add” a new column and 1M values of 46 in a few milliseconds. We can confirm that the new column with the values is real (materialised).

How it works

After adding a new, integer data type column, we expect to see an increase in row size (by 4 bytes). We also expect that the Sql Server storage engine somehow managed to add 1M 4-byte integer values to the table in a fraction of a second.

Let’s check the row metadata again.

The query result shows that there were no changes in the row sizes.
So, what actually happened? Knowing that table rows are stored on data pages (RowStore – the FixedVar storage format), we would expect an increase in the min and max row sizes. Using the formulas above (the table now has 3 columns)…

min_record_size_in_bytes (row id=2) = 6 + CEILING(3/8.00) + 4 + 4 + 0 + 2*0 = 6 + 1 + 8 = 15
max_record_size_in_bytes = 6 + CEILING(3/8.00) + 4 + 4 + 2 + 2*1 + 4 = 6 + 1 + 8 + 4 + 4 = 23

The formulas are correct but the result is not 🙂

If we check the first page again, we’ll notice that
– the m_lsn (log sequence number) has not changed
– the new column (NewCol) is not stored on the page (the offset from the beginning of the row is 0 bytes)

The unchanged m_lsn shows us that there were no changes on the page.

More about m_lsn

m_lsn – represents the last Log Sequence Number related to the last transaction that changed the page. The set of log records which describe the changes (old and new version of the data, transaction, affected row etc.) starts with the same lsn. The log information allows SQL Server to recover databases to a transactionally consistent state in case of a system crash or any unexpected shutdown.
SQL Server uses the WAL (Write-Ahead Logging) model, which guarantees that no data modifications will be written to disk before the associated log record is written to disk.
Example: A transaction T1 updates a few columns in a row. The high-level sequence of events will be:

  1. A data access operator requests the data page that contains the rows to be updated from the buffer pool manager (BPM). BPM checks if the page already exists in the buffer pool. If not, BPM requests I/O operations to retrieve the page from disk.
  2. The update operation (transaction T1) updates the relevant row on the page in the buffer pool. The page header information m_lsn is updated. The operation creates a set of new log records (starting with the same m_lsn) that describe the changes made to the page. The log records are created in the log buffer.
  3. Transaction T1 commits the changes. The log buffer manager flushes the content of the log buffer (~64kb) to disk (the log file).
  4. Client application receives confirmation that the transaction T1 has been successfully committed.

New column metadata

After creating a new, non-null column, instead of updating all 1M rows with some value, Sql Server 2012+ (Enterprise Ed) stores the information as metadata.
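A sketch of a query that exposes this metadata, joining sys.partitions to the undocumented sys.system_internals_partition_columns view (table name assumed):

SELECT pc.partition_column_id,
       pc.has_default,
       pc.default_value
FROM sys.system_internals_partition_columns AS pc
    INNER JOIN sys.partitions AS p
        ON p.partition_id = pc.partition_id
WHERE p.object_id = OBJECT_ID('dbo.TestTable');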

The two metadata columns hold the information about the added column. This is where 46 is materialised from – not from the page, but from the metadata.
This new behavior occurs automatically and does not impact the system.

default_value / has_default

The default_value / has_default columns, which belong to the sys.system_internals_partition_columns system view, are not related to the DF_NewCol default constraint defined during the table alteration.
The metadata that describes the default constraints can be found using the query below:
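A sketch (constraint metadata lives in sys.default_constraints; table name assumed):

SELECT dc.name AS constraint_name,
       dc.definition,
       c.name  AS column_name
FROM sys.default_constraints AS dc
    INNER JOIN sys.columns AS c
        ON  c.object_id = dc.parent_object_id
        AND c.column_id = dc.parent_column_id
WHERE dc.parent_object_id = OBJECT_ID('dbo.TestTable');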

If we drop the constraint, has_default/default_value will stay intact.

row UPDATE

An interesting thing will happen when we update a row. Let’s update row id=2 and replace the NULL value with something.
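Something like this sketch (the ‘Hi 5’ value is referenced in the size calculation below):

UPDATE dbo.TestTable
SET val = 'Hi 5'
WHERE id = 2;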

…and now the max and min row sizes have changed.

Whenever we update one or more values in a row, the default value (NewCol = 46) is written to the data page. The rows that are not updated will still materialise the value 46 from the metadata.

If you inspect the first page, you’ll find that the page has a new m_lsn and that all the column values of the row are now written on the page.
Before the change, the shortest row was row 2 (id=2), size = 11b. Now, after the update, the row 2 size has increased to 23b. The NewCol default value (46) has become a physical part of the row.
Using the formulas mentioned before, the new min/max row sizes are:

max_record_size_in_bytes (row id=2) = 6 + CEILING(3/8.00) + 4 + 4 + 2 + 2*1 + 4 = {6 + 1 + 8} + {4 + 4} = 23
**4 bytes for the NewCol value of 46 and 4 bytes for the ‘Hi 5’ value that replaced the NULL value

min_record_size_in_bytes (all rows except row id=2) = 6 + CEILING(3/8.00) + 4 + 2 + 2*1 + 4 = 6 + 1 + 4 + 4 + 4 = 19

Table row source(s)

The figure below shows that the test table materialises column values from two different sources – data pages and metadata.

We see that only the updated row (id=2) has all the columns physically stored on the data page.
E.g. row 1 is physically stored in Slot 0, but the new column is not part of the row:
Slot 0 Column 3 Offset 0x00 Length 4 Length (physical) 0
NewCol = 46

During the production life of the table, all the rows may be gradually updated. The has_default/default_value metadata values (1 and 46 respectively) remain but will no longer be in use.
The metadata will be removed if we REBUILD the table:
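A sketch of the rebuild – this rewrites all rows, so the default values become part of the rows and the metadata entries are no longer needed:

ALTER TABLE dbo.TestTable REBUILD;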

Restrictions

There are a few data types that cannot be added online.

  • Varchar(max)
  • NVarchar(max)
  • Varbinary(max)
  • XML
  • hierarchyid
  • geometry
  • geography

Default expressions that are not runtime constants (expressions that require a different value for each row) cannot be added online, e.g. NEWID(), NEWSEQUENTIALID()..

The row-overflow storage format is not supported. New columns must not increase the maximum row size over the 8060-byte limit.

Conclusion

Starting from Sql Server 2012 (Enterprise Ed) it is possible to add non-null columns with values to a busy table ONLINE. The Sql Server storage engine uses metadata to store the default values for the added columns. When requested, the table columns are materialised from data pages and metadata. There are restrictions on the data types that may be used, e.g. columns with BLOB data types cannot be added online. Only runtime constant expressions can be used as default constraints.

Thank you for reading

Dean Mincic

ORDER BY – Changes everything

SQL programming has a few unique aspects such as three-valued logic (3VL), set based programming logic and logical processing order. Deviation from the programming patterns based on the unique nature of the language usually leads to poorly performing SQL code.

RDBMS fundamentals

Sql Server as an RDBMS (Relational Database Management System), and its SQL dialect TSQL (Transact-SQL), have a strong foundation in mathematics. TSQL, as Sql Server’s language for data management and manipulation, is based on two cornerstones of mathematics – Set theory and Predicate logic.

Edgar F. Codd was a computer scientist who invented the relational model for database management systems. He used mathematical theories (Set theory and predicate logic) to create a general theory of data management.
IBM’s System R is the very first database system built on Codd’s relational model. In order to “force” DB vendors to follow the relational principles when designing db systems, Codd published his 12 rules which describe a relational database.
Various database vendors (Oracle, Sybase, Sql Server..) implemented the relational principles in similar but different ways. This means that the db systems “slightly” deviate from Codd’s relational theory, and the theory itself “slightly” deviates* from the original set theory and predicate logic – but that’s ok 🙂

*e.g. One of the differences between predicate logic and relational algebra is that the former supports two-valued logic (True/False). The relational model supports three-valued logic (True, False and Unknown).

Although tsql querying design patterns are derived from relational algebra, tsql does not strictly follow the relational rules. One of the differences, which is also related to this article, is that a relation is not sorted → there is no relevance to the order of the elements in a set. However, tsql allows us to sort (ORDER BY) the final result-set (the one that is returned to the client). We are allowed to replace the relational result (the “final product” of the relational operations in a relational database based on relational theory…) with a non-relational result – a CURSOR. That is why ORDER BY “changes everything” 🙂

Logical query processing sequence

To understand the ORDER BY operation, it is important to understand the logical query processing sequence. The concept is unique to the SQL world and defines the order of execution of different query segments. It’s also known as the theoretical execution order because the sequence may be changed by the SQL Server optimiser for many different reasons, e.g. data access patterns, query simplification etc. The query sequence:

  1. FROM – The FROM phase identifies the query’s source tables. This phase processes CROSS JOIN, the ON filter and, in case of OUTER JOINs, it adds the non-matched rows back to the result-set.
  2. WHERE Filter – Removes all those rows from the previous stage for which the predicate evaluates to false.
  3. GROUP BY – Performs grouping and aggregation calculations. This phase generates one row per group.
  4. HAVING – This is the last of the three query filters. It filters the results of aggregate functions.
  5. SELECT – Processes the elements in the SELECT clause, i.e. evaluates expressions within the SELECT statement and/or removes duplicates (DISTINCT) and/or filters the specified percentage/number of rows (TOP) defined by the ORDER BY clause. The phase returns a TABLE.
  6. ORDER BY – Sorts the rows according to the list of columns specified and returns a CURSOR.
  7. FOR XML/FOR JSON – This phase converts the tabular result from the SELECT statement into XML/JSON output.
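A short illustration of the sequence, against a hypothetical dbo.Orders(customerId, orderDate) table: WHERE (step 2) runs before SELECT (step 5), so a column alias defined in SELECT is not yet visible to WHERE.

SELECT customerId * 10 AS custKey,
       orderDate
FROM dbo.Orders
WHERE custKey > 1000;   -- fails: Msg 207, Invalid column name 'custKey'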

For the queries that include SET operations, the logical sequence will be.

  1. Query 1
  2. SET operation (union, except, intersect) or multi-set Union All
  3. Query 2
  4. Order By

Order by and …

The ORDER BY operation guarantees an ordered result set – a cursor. TSQL deviates from the ANSI standard and allows us to sort results using columns and/or expressions based on columns that are not part of the SELECT statement (this is not true if the query uses the DISTINCT clause). The ANSI standard allows sorting only by the columns mentioned in the SELECT statement. ORDER BY can also be used as a logical sorting operator when operating as a part of the TOP/OFFSET-FETCH functions.

Because ORDER BY converts the query result-set into a CURSOR, it cannot be used to define table expressions:

  • Views
  • ITVF(Inline table valued functions aka parameterised views)
  • Derived tables
  • Subqueries
  • CTE(common table expressions)

… unless used as logical sorting with TOP, OFFSET-FETCH, or FOR XML/JSON.

The CURSOR must be returned to the client application that is able to “consume” cursor records, one at a time and in order.

ORDER BY treats NULL values as equal. NULLs are sorted FIRST in ASC order.

SET operations (Union, Except, Intersect) and multi-set Union All

When using the SET based operations (UNION, EXCEPT, INTERSECT) and/or the multi-set operation UNION ALL, we cannot use the ORDER BY operator in the individual statements. The following query will fail the parsing phase:
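A sketch of such a failing query (table and column names assumed):

SELECT customerId FROM dbo.Orders ORDER BY customerId
INTERSECT
SELECT customerId FROM dbo.Orders;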

Msg 156, Level 15, State 1, Line 24
Incorrect syntax near the keyword ‘INTERSECT’

Sql Server’s SET operations correspond to operators defined in mathematical set theory. Therefore, the operands must be sets – represented as tables (Set → Relation → Table).
ORDER BY changes the query output to something that ANSI SQL defines as a CURSOR. Set operations do not support ordered structures.

We can use the ORDER BY clause to format the FINAL result-set for presentation purposes. The ORDER BY in the example below does not logically belong to the last query. It operates on (Set1 ∪ Set2):
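A sketch, continuing with the assumed dbo.Orders table:

SELECT customerId FROM dbo.Orders WHERE customerId < 105    -- Set1
UNION
SELECT customerId FROM dbo.Orders WHERE customerId >= 105   -- Set2
ORDER BY customerId;   -- sorts (Set1 U Set2), not just the last query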

TOP/OFFSET-FETCH

The TOP option is unique to tsql and does not follow ANSI guidelines. The directive simply allows us to select a number of rows or a percentage of rows to return. The option is supposed to be paired up with ORDER BY – always.
This time ORDER BY does not serve a presentation purpose. It serves as a logical “TOP sorting” operator. It is supposed to answer the question “TOP according to what order?”. The result of a TOP/ORDER BY operation is a table – not a cursor.
Unfortunately, tsql allows us to use TOP without ORDER BY, and in that case the query result will be a random, non-deterministic TOP(n) rows / TOP(n) PERCENT result set.

It is a good practice to specify ORDER BY (SELECT NULL) if our intention is to select random and/or non-deterministic TOP(n) rows. The Query Optimiser will remove the logical ordering during the simplification phase. The purpose of this is to let other developers know that it was done intentionally.
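For example:

SELECT TOP (2) customerId, orderDate
FROM dbo.Orders
ORDER BY (SELECT NULL);   -- explicitly arbitrary TOP(2)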

Test data:
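The original test-data script isn’t shown; below is a minimal sketch that reproduces the behaviour described in the tests (names, types and values are assumptions consistent with the results discussed):

CREATE TABLE dbo.Orders
(
    customerId INT  NOT NULL,
    orderDate  DATE NOT NULL
);
INSERT INTO dbo.Orders (customerId, orderDate)
VALUES (100, '20240101'),
       (102, '20240101'),   -- three customers share the same orderDate
       (105, '20240101'),
       (110, '20240215'),
       (120, '20240320');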

Test 1: The query returns two randomly chosen rows (TOP without ORDER BY):
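SELECT TOP (2) customerId, orderDate
FROM dbo.Orders;   -- no ORDER BY: any two rows may come back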

Test 2: The query returns a non-deterministic result – a “non-unique” ORDER BY is used, aka a non-deterministic ORDER BY:
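SELECT TOP (2) customerId, orderDate
FROM dbo.Orders
ORDER BY orderDate;   -- orderDate is not unique → ties are broken arbitrarily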

The query “decided” to select Customers 100 and 105 but not 102. In this case, if we had a clustered index on customerId, the query would probably select customers 100 and 102.

Test 3: Two ways to make the “TOP(n)” query return a deterministic result (sketches below, explained in (1) and (2)).
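-- (1) deterministic ORDER BY – a tiebreaker
SELECT TOP (2) customerId, orderDate
FROM dbo.Orders
ORDER BY orderDate, customerId;

-- (2) WITH TIES
SELECT TOP (2) WITH TIES customerId, orderDate
FROM dbo.Orders
ORDER BY orderDate;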

(1) The deterministic ORDER BY provides more specific information about which TOP(2) order dates to select. Now the query knows exactly which TOP(2) rows to select. A combination of ORDER BY columns that provides a deterministic result-set is called a tiebreaker.

(2) “WITH TIES” guarantees a deterministic result based on the ORDER BY column(s). WITH TIES must have ORDER BY defined. In case of a non-deterministic ORDER BY, WITH TIES will add all relevant rows to the result-set in order to provide determinism. In the example above it added an extra row.

OFFSET-FETCH (tsql 2012+)

OFFSET-FETCH is an ANSI SQL supported feature. Basically, the option is an advanced version of TOP. It allows us to select (fetch) only a window or a page of results from the result set.
As opposed to the similar TOP option, OFFSET-FETCH cannot operate without ORDER BY. On the other hand, OFFSET-FETCH does not support WITH TIES, therefore it cannot guarantee a deterministic result set.
The following query fails because OFFSET-FETCH always operates with ORDER BY:
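SELECT customerId, orderDate
FROM dbo.Orders
OFFSET 0 ROWS FETCH FIRST 2 ROWS ONLY;   -- no ORDER BY → the parse errors below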

Msg 102, Level 15, State 1, Line 25
Incorrect syntax near ‘0’.
Msg 153, Level 15, State 2, Line 25
Invalid usage of the option FIRST in the FETCH statement.

If we want to select two random rows from a table, we can use:
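SELECT customerId, orderDate
FROM dbo.Orders
ORDER BY (SELECT NULL)
OFFSET 0 ROWS FETCH FIRST 2 ROWS ONLY;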

The query below will return the same result as the deterministic query from Test 3:
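SELECT customerId, orderDate
FROM dbo.Orders
ORDER BY orderDate, customerId
OFFSET 0 ROWS FETCH FIRST 2 ROWS ONLY;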

Same as TOP, OFFSET-FETCH returns a table, therefore it can be used in other table expressions, e.g. (the query does not make much sense – it’s only for testing purposes …:)
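A sketch, continuing with the assumed dbo.Orders table:

SELECT o1.customerId, o1.orderDate
FROM (SELECT customerId, orderDate
      FROM dbo.Orders
      ORDER BY orderDate, customerId
      OFFSET 0 ROWS FETCH FIRST 2 ROWS ONLY) AS o1   -- OFFSET-FETCH makes the ORDER BY legal here
    INNER JOIN dbo.Orders AS o2
        ON o2.customerId = o1.customerId;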

..and this will fail. Table operators (JOIN and the non-ANSI APPLY, PIVOT and UNPIVOT) operate only between tables (table expressions).
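The failing counterpart – here the derived table uses ORDER BY without TOP/OFFSET/FOR XML:

SELECT o1.customerId
FROM (SELECT customerId
      FROM dbo.Orders
      ORDER BY customerId) AS o1   -- bare ORDER BY in a derived table → Msg 1033
    INNER JOIN dbo.Orders AS o2
        ON o2.customerId = o1.customerId;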

Msg 1033, Level 15, State 1, Line 38
The ORDER BY clause is invalid in views, inline functions, derived tables, subqueries, and common table expressions, unless TOP, OFFSET or FOR XML is also specified.

VIEWS

A view is a named virtual table that is defined by a query and used as a table.
A view represents a table, and a table is a logical entity that has no order to its rows. This is the reason why it’s not allowed to use presentation ORDER BY in a view. VIEWS ARE NOT ORDERED!
The view definition below will fail:

Test 4. Views are not ordered
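The original definition isn’t shown; below is a sketch that reproduces the error (the create_date column mentioned later suggests a query over sys.objects):

CREATE VIEW dbo.TestViews
AS
SELECT object_id,
       name,
       create_date
FROM sys.objects
ORDER BY create_date DESC;   -- presentation ORDER BY in a view → Msg 1033
GO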

Msg 1033, Level 15, State 1, Procedure TestViews, Line 7 [Batch Start Line 1]
The ORDER BY clause is invalid in views, inline functions, derived tables, subqueries, and common table expressions, unless TOP, OFFSET or FOR XML is also specified.

Now, some programmers might think that they have somehow managed to override the RDBMS fundamentals by defining a view with a TOP clause specifying all 100% of rows along with ORDER BY.

Test 5, Views with TOP 100% ordered rows
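A sketch of that definition:

CREATE VIEW dbo.TestViews
AS
SELECT TOP (100) PERCENT
       object_id,
       name,
       create_date
FROM sys.objects
ORDER BY create_date DESC;   -- parses, but the sort is removed by the QO
GO
SELECT * FROM dbo.TestViews;  -- the output order is NOT guaranteed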

This will pass the parsing stage and will be stored as a valid view definition. In reality the view definition does not make much sense (see the TOP/ORDER BY example above). During the simplification phase, the query optimizer will remove the TOP 100%/ORDER BY as unnecessary logic.

The result is not ordered by create_date DESC.
In some cases, e.g. when we use TOP and ORDER BY on the clustered index key columns, the result may appear to be sorted – the QO may decide to use a sorted table scan. Again, there is no guarantee that the returned rows will be ordered.
E.g. during the final SELECT phase, the QO may decide to use Sql Server’s storage engine advanced scan functionality, also known as merry-go-round scan or scan piggybacking, to return the same rows to two or more concurrent sessions. Without diving into the details of how the advanced scan works (will cover that interesting optimisation mechanism in one of the next posts), the different sessions will have the same result-sets but in a different order.

The presentation ORDER BY will convert the un-ordered view result into an ordered CURSOR.

Window Functions

Window functions were introduced in the ANSI SQL:2003 standard. Microsoft partially implemented the functionality in tSql 2005 and expanded it to a full range of window functions in tSql 2012.
A window function operates on a number of rows (a window) in a set (table). Window functions use ORDER BY as a logical operator which defines the behaviour of the aggregate operations performed by the functions. In this scenario, ORDER BY, as a part of the function, does not perform the final sort.

It is important to understand where Window function sits in the logical query execution sequence.

(5) SELECT
(5.1) Evaluate expressions in the SELECT statement
(5.2) WINDOW functions can be executed from here…
(5.3) DISTINCT
(5.4) TOP
(5.5) …to here

(6) ORDER BY (presentation)

The example below demonstrates how window functions are positioned in the logical query exec. sequence.

Test Data.
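The original script isn’t shown; a minimal sketch (names and values assumed, chosen so the totals match the discussion below):

CREATE TABLE dbo.PackOrders
(
    OrderId       INT         NOT NULL,
    OrderPackDesc VARCHAR(20) NULL,
    OrderValue    INT         NOT NULL
);
INSERT INTO dbo.PackOrders (OrderId, OrderPackDesc, OrderValue)
VALUES (1, 'Media Pack', 100),
       (2, 'Media Pack',  90),   -- 'Media Pack' orders total 190
       (3, NULL,          10),   -- the missing pack description
       (4, 'Full Pack',  250);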

The query below sums the OrderValues per OrderPackDescription. OrderPackDesc = NULL is summarised as a distinct value, resulting in OrderValue = 10. ISNULL(OrderPackDesc,’Media Pack’) then merely displays the NULL value as “Media Pack”:
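A sketch of such a query (using the assumed dbo.PackOrders table):

SELECT DISTINCT
       ISNULL(OrderPackDesc, 'Media Pack') AS OrderPackDescEXPR,
       SUM(OrderValue) OVER (PARTITION BY OrderPackDesc) AS TotalOrderValue
FROM dbo.PackOrders;
-- 'Media Pack' appears twice: 190 (the real 'Media Pack' rows) and 10 (the NULL row)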

This could lead to a logical error. We want to find the sums of order values per order pack. We also want to treat the missing OrderPacks as ‘Media Pack’. The previous query sums the “missing order packs” separately. This is a typical logical error. The error would be more difficult to find if we had OrderValue = 190 for the unknown OrderPackDesc. In that case we would have a total of 190 instead of 380 for ‘Media Pack’.
Taking into consideration the query execution sequence, we can modify our query as:
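A sketch of the corrected query:

SELECT DISTINCT
       ISNULL(OrderPackDesc, 'Media Pack') AS OrderPackDescEXPR,
       SUM(OrderValue) OVER (PARTITION BY ISNULL(OrderPackDesc, 'Media Pack')) AS TotalOrderValue
FROM dbo.PackOrders;
-- 'Media Pack' = 200, 'Full Pack' = 250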

…OVER(PARTITION BY ISNULL(OrderPackDesc,’Media Pack’) … is evaluated BEFORE the WINDOW function.
Now our window function partition knows how to treat the missing values before the aggregation part, and the execution sequence works in our favor 🙂
The example also shows that DISTINCT executes after WINDOW functions.
NOTE: It is not possible to use the alias OrderPackDescEXPR in other expressions within the same query, like …OVER(PARTITION BY OrderPackDescEXPR ..). The limitation is another unique aspect of the SQL language – all-at-once operations.
E.g. in most programming languages, to swap values between variables we use a third, temp variable: A→temp, B→A, temp→B. In SQL we can swap table column values without using temp storage, as the sketch below shows.
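A minimal sketch of the all-at-once swap (hypothetical table variable):

DECLARE @Pairs TABLE (A INT, B INT);
INSERT INTO @Pairs VALUES (1, 2);

UPDATE @Pairs
SET A = B,      -- both assignments read the pre-update values,
    B = A;      -- so the two columns are swapped without a temp variable

SELECT A, B FROM @Pairs;   -- A = 2, B = 1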

Window aggregate functions

Starting from SQL 2012, the ORDER BY clause can be used along with the window aggregate functions. ORDER BY dictates how the aggregate functions operate within the window partitions. It is important to understand ORDER BY in this context to be able to avoid logical errors.

Create some test data
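The original script isn’t shown; a minimal sketch (names and values assumed):

CREATE TABLE dbo.Payments
(
    CustomerId  INT  NOT NULL,
    PaymentDate DATE NOT NULL,
    Amount      INT  NOT NULL
);
INSERT INTO dbo.Payments (CustomerId, PaymentDate, Amount)
VALUES (100, '20240101', 50),
       (100, '20240105', 20),
       (100, '20240110', 30),
       (105, '20240102', 10),
       (105, '20240108', 40);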

The following query shows how ORDER BY affects SUM – a window aggregate function:
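A sketch (column names follow the assumed test data above) – the same SUM, with and without ORDER BY in the OVER clause:

SELECT CustomerId,
       PaymentDate,
       Amount,
       SUM(Amount) OVER (PARTITION BY CustomerId)                      AS TotalPerCustomer,
       SUM(Amount) OVER (PARTITION BY CustomerId ORDER BY PaymentDate) AS RunningTotal
FROM dbo.Payments;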

ORDER BY activates cumulative aggregation on the ordered column over the defined partition (a running total). The same applies to the other aggregate functions: MAX, MIN, AVG and COUNT.
Failing to understand this behavior can lead to logical errors 🙂

CONCLUSION

The main purpose of ORDER BY is to force a query to return an ordered result – a CURSOR. This changes the relational nature of the SQL language – query result-sets are NEVER inherently ordered. The presentation ORDER BY is an expensive operation (it needs to collect the entire result-set and put it in the requested order). The same syntax (ORDER BY) is used as a logical operator for TOP/OFFSET-FETCH/WINDOW functions/the FOR XML formatter. In those cases, the purpose of the operator is completely different and it DOES NOT guarantee ordered output.

Thank you for reading.