Archive for November, 2006

DELETE and TRUNCATE explained.

November 24, 2006 2 comments

DELETE logs each row affected by the statement in the transaction log and physically removes the rows from the data file, one row at a time. Logging every affected row can cause your transaction log to grow massively when you delete huge numbers of rows. However, when you run your databases in the full recovery model, this detailed logging is exactly what allows SQL Server to recover the database to the most recent state, should a problem arise. The fact that each row is logged also explains why DELETE statements can be slow.

TRUNCATE is faster than DELETE because of the way it “removes” rows. TRUNCATE does not actually delete the data; it deallocates whole data pages and removes the pointers to them. The data still exists until it is overwritten or the database is shrunk. This requires far fewer resources and is therefore very fast. It is a common mistake to think that TRUNCATE is not logged. It is: the deallocation of the data pages is recorded in the log file, which is why BOL refers to TRUNCATE as a “minimally logged” operation. You can use TRUNCATE within a transaction, and when that transaction is rolled back, the data pages are reallocated and the database is back in its original, consistent state.
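You can see the rollback behaviour for yourself. A minimal sketch (the temporary table #Demo is made up for this example):

```sql
-- TRUNCATE inside a transaction can be rolled back.
CREATE TABLE #Demo (ID INT);
INSERT INTO #Demo VALUES (1);
INSERT INTO #Demo VALUES (2);

BEGIN TRANSACTION;
    TRUNCATE TABLE #Demo;
    SELECT COUNT(*) FROM #Demo;  -- 0: the pages have been deallocated
ROLLBACK TRANSACTION;

SELECT COUNT(*) FROM #Demo;      -- 2: the deallocation was undone

DROP TABLE #Demo;
```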

Some limitations do exist for using TRUNCATE.

· You need to be the table owner, or a member of the sysadmin, db_owner, or db_ddladmin role, to be able to issue a TRUNCATE TABLE statement.

· TRUNCATE will not work on tables that are referenced by one or more FOREIGN KEY constraints.

So if TRUNCATE is so much faster than DELETE, should one use DELETE at all? Well, TRUNCATE is an all-or-nothing approach. You can’t specify that only the rows matching a certain criterion be truncated. It’s either all rows or none.

You can, however, use a workaround here. Suppose you want to delete more rows from a table than will remain. In that case you can copy the rows you want to keep into a temporary table, run the TRUNCATE statement, and finally reinsert the kept rows from the temporary table. If your table contains a column with the IDENTITY property defined on it, and you want to keep the original IDENTITY values, be sure to enable IDENTITY_INSERT on the table before you reinsert from the temporary table. Chances are good that this workaround is still faster than a DELETE operation.
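A sketch of that workaround (the table and column names Orders, OrderID, and OrderDate are hypothetical):

```sql
-- 1. Copy the rows to keep into a temporary table.
SELECT * INTO #Keep
FROM Orders
WHERE OrderDate >= '20061101';

-- 2. Truncate the original table.
TRUNCATE TABLE Orders;

-- 3. Reinsert the kept rows, preserving the original IDENTITY values.
SET IDENTITY_INSERT Orders ON;
INSERT INTO Orders (OrderID, OrderDate)
    SELECT OrderID, OrderDate FROM #Keep;
SET IDENTITY_INSERT Orders OFF;

DROP TABLE #Keep;
```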

You can also switch the recovery model to “Simple” before you start this workaround, and back to “Full” once it is done. However, keep in mind that in this case you might only be able to recover to the last full backup.

Categories: Database, SQL Server

Understanding NULLS

November 23, 2006 Leave a comment

Some SQL programmers and developers tend to think of NULL as zero or blank. In fact, NULL is neither of these. NULL literally means that the value is unknown or indeterminate.

One side effect of the indeterminate nature of a NULL value is that it cannot be used in a calculation or a comparison.

Listed below are a few important rules to remember about the behaviour of NULL values.

  • A NULL value cannot be inserted into a column defined as NOT NULL.
  • NULL values are not equal to each other. It is a frequent mistake to compare two columns that contain NULL and expect the NULL values to match. (A NULL column can be identified in a WHERE clause or in a Boolean expression using phrases such as ‘value IS NULL’ and ‘value IS NOT NULL’.)
  • A column containing a NULL value is ignored in the calculation of aggregate values such as AVG, SUM or MAX.
  • When a column listed in the GROUP BY clause of a query contains NULL values, the query output contains a row for those NULL values.
  • JOINs between tables, in which one join column contains values and the other contains NULLs, are governed by the rules for “outer joins”.
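The rules above can be demonstrated with a quick sketch (the temporary table #T is made up for this example):

```sql
CREATE TABLE #T (Val INT NULL);
INSERT INTO #T VALUES (10);
INSERT INTO #T VALUES (NULL);
INSERT INTO #T VALUES (20);

SELECT COUNT(*) FROM #T WHERE Val = NULL;   -- 0: NULL never equals anything, not even NULL
SELECT COUNT(*) FROM #T WHERE Val IS NULL;  -- 1: IS NULL is the correct test
SELECT AVG(Val) FROM #T;                    -- 15: the NULL row is ignored by the aggregate

DROP TABLE #T;
```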
Categories: Database, SQL Server

Difference between UNIQUE constraint and PRIMARY key

November 22, 2006 52 comments

A UNIQUE constraint is similar to a PRIMARY KEY, but you can have more than one UNIQUE constraint per table.

When you declare a UNIQUE constraint, SQL Server creates a UNIQUE index to speed up the check for duplicates. In this case the index defaults to a NONCLUSTERED index, because you can have only one CLUSTERED index per table.

* The number of UNIQUE constraints per table is limited by the number of indexes the table can have, i.e. 249 NONCLUSTERED indexes and one possible CLUSTERED index.

Contrary to a PRIMARY KEY, a UNIQUE constraint can accept NULL, but just once. If the constraint is defined on a combination of columns, every column can accept NULL and can have some values in it, as long as the combination of values is unique.
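A quick sketch of the single-NULL behaviour (the temporary table #U is made up for this example):

```sql
CREATE TABLE #U (Code VARCHAR(10) NULL UNIQUE);

INSERT INTO #U VALUES ('A');    -- succeeds
INSERT INTO #U VALUES (NULL);   -- succeeds: the one allowed NULL
INSERT INTO #U VALUES (NULL);   -- fails: duplicate key, NULL is already present

DROP TABLE #U;
```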

Categories: Database, SQL Server

How do you find the Second highest Salary?

November 14, 2006 156 comments

This is one of the most common questions asked in interviews.

The EMPLOYEE table has the fields EMP_ID and SALARY. How do you find the second highest salary?


We can write a sub-query to achieve the result


The sub-query in the WHERE clause returns the MAX SALARY in the table; the main query then selects the MAX SALARY from the rows that don’t have the highest SALARY.
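The original query image is not preserved here, but based on that description it likely resembled the following sketch:

```sql
-- Second highest salary: take the MAX of everything below the overall MAX.
SELECT MAX(SALARY)
FROM EMPLOYEE
WHERE SALARY < (SELECT MAX(SALARY) FROM EMPLOYEE);
```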

Categories: Database, SQL Server

How many rows will this query return?

November 14, 2006 Leave a comment

Tables EMP and MANAGER have 5 records each.

How many rows will the following query return?



25 rows. Each row in EMP is paired with all five rows of MANAGER, which makes 5 × 5 = 25.
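The original query image is missing, but given the answer it was presumably a join with no join condition, i.e. a cross join, along the lines of:

```sql
-- No join condition: the result is the Cartesian product of the two tables.
SELECT *
FROM EMP, MANAGER;   -- 5 rows × 5 rows = 25 rows
```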

Categories: Database, SQL Server

What is a deadlock?

November 10, 2006 4 comments

A deadlock is a situation in which two processes, each holding a lock on one piece of data, attempt to acquire a lock on the other’s piece. Each process would wait indefinitely for the other to release its lock, unless one of the processes is terminated.
SQL Server detects deadlocks and terminates one of the processes.

A livelock occurs when a request for an exclusive lock is repeatedly denied because a series of overlapping shared locks keeps interfering.
SQL Server detects the situation after four denials and refuses further shared locks.
A livelock also occurs when read transactions monopolize a table or page, forcing a write transaction to wait indefinitely.

Categories: Database, SQL Server

What is the maximum size of a row?

November 10, 2006 1 comment

This table specifies the maximum sizes and numbers of various objects defined in Microsoft SQL Server databases, or referenced in Transact-SQL statements. The table does not include Microsoft SQL Server 2000 Windows® CE Edition.

SQL Server Maximum Capacity Specifications

1. Network packet size is the size of the Tabular Data Stream (TDS) packets used to communicate between applications and the relational database engine. The default packet size is 4 KB and is controlled by the network packet size configuration option.
2. The maximum number of bytes in any key cannot exceed 900 in SQL Server 2000. You can define a key using variable-length columns whose maximum sizes add up to more than 900, provided no row is ever inserted with more than 900 bytes of data in those columns. For more information, see Maximum Size of Index Keys.
3. The data portion of a database cannot exceed 2 GB in size when using the SQL Server 2000 Desktop Engine (MSDE 2000) or the Microsoft Data Engine (MSDE) 1.0. The total size of the database, including log files, can exceed 2 GB provided the sum of the sizes of the data files remains 2 GB or lower.
4. Database objects include all tables, views, stored procedures, extended stored procedures, triggers, rules, defaults, and constraints. The sum of the number of all these objects in a database cannot exceed 2,147,483,647.

Categories: Database, SQL Server