Monday 12 January 2015


 SSAS INTERVIEW QUESTIONS

   
  •  What is the difference between SSAS 2005 and SSAS 2008?
  1. In 2005 it is not possible to create an empty cube, but in 2008 we can create an empty cube.
  2. A new feature in Analysis Services 2008 is the Attribute Relationships tab in the Dimension Designer; implementing attribute relationships was complex in SSAS 2005.
  3. In SSAS 2005 we can create only 2000 partitions per measure group; that limit on partitions is removed in SSAS 2008.
You can answer more, but if you end with these the interviewer will feel that you are really experienced.
  •  What is datawarehouse in short DWH?
The datawarehouse is an informational environment that
  • Provides an integrated and total view of the enterprise
  • Makes the enterprise’s current and historical information easily available for decision making
  • Makes decision-support transactions possible without hindering operational systems
  • Renders the organization’s information consistent
  • Presents a flexible and interactive source of strategic information
OR a warehouse is a
  • Subject oriented
  • Integrated
  • Time variant
  • Non volatile
collection of data in support of management’s decision-making process (this is Bill Inmon’s classic definition). The terms in that sentence are defined as follows.
Subject oriented:
It defines a specific business domain, ex: banking, retail, insurance, etc.
Integrated:
It should be in a position to integrate data from various source systems.
Ex: SQL Server, Oracle, DB2, etc.
Time variant:
It should be in a position to maintain the data for various time periods.
Non volatile:
Once data is inserted it can’t be changed.
  • What is data mart?
A data mart is a subset of an organizational data store, usually oriented to a specific purpose or major data subject that may be distributed to support business needs. Data marts are analytical data stores designed to focus on specific business functions for a specific community within an organization.
Data marts are often derived from subsets of data in a data warehouse, though in the bottom-up data warehouse design methodology the data warehouse is created from the union of organizational data marts.
There are 3 types of data marts:
  1. Dependent
  2. Independent
  3. Logical data mart
  •  What are the difference between data mart and data warehouse?
A data warehouse holds the complete data, whereas a data mart is a subset of the same.
Ex:
All of an organisation’s data related to the finance department, HR, banking, etc. is stored in the data warehouse, whereas a data mart stores only the finance data or only the HR department data. So a data warehouse can be seen as a collection of different data marts.
  •  Have you ever worked on performance tuning, if yes what are the steps involved in it?
We need to identify the bottlenecks to tune the performance; to overcome them we can do the following.
  1. Avoid named queries
  2. Avoid unnecessary relationships between tables
  3. Give proper attribute relationships
  4. Design proper aggregations
  5. Partition the data properly
  6. Design dimension usage properly
  7. Avoid unnecessary many-to-many relationships
  8. Avoid unnecessary measures
  9. Set AttributeHierarchyEnabled = FALSE on attributes that are not required
  10. Do not include even a single measure that is not necessary.
  • What are the difficulties faced in cube development?
This question is asked either to test whether you are really experienced or when the interviewer does not have any more questions to ask .. ;)
You can mention any area you find difficult to work on, but the best answers are usually the following.
  1. Giving attribute relationships
  2. Calculations
  3. Giving dimension usage (many to many relationship)
  4. Analyzing the requirements
  •  Explain the flow of creating a cube?
Steps to create a cube in ssas
  1. Create  a data source.
  2. Create a datasource view.
  3. Create Dimensions
  4. Create a cube.
  5. Deploy and Process the cube.
  • What is a datasource or DS?
The data source is the physical connection information that Analysis Services uses to connect to the database that hosts the data. The data source contains the connection string, which specifies the server and the database hosting the data, as well as any necessary authentication credentials.
  • What is datasourceview or DSV?
A data source view is a persistent set of tables from a data source that supply the data for a particular cube. BIDS also includes a wizard for creating data source views, which you can invoke by right-clicking on the Data Source Views folder in Solution Explorer.
  1. Datasource view is the logical view of the data in the data source.
  2. Data source view  is the only thing a cube can see.
  •  What is named calculation?
A named calculation is a SQL expression represented as a calculated column. This expression appears and behaves as a column in the table. A named calculation lets you extend the relational schema of existing tables or views in a data source view without modifying the tables or views in the underlying data source.
Named calculation is used to create a new column in the DSV using hard coded values or by using existing columns or even with both.
  • What is named query?
Named query in DSV is similar to View in Database. This is used to create Virtual table in DSV which will not impact the underlying database. Named query is mainly used to merge the two or more table in the datasource view or to filter columns of a table.
  • Why we need named queries?
A named query is used to join multiple tables or to remove unnecessary columns from a table of a database. You can achieve the same in the database using views, but named queries are the best bet when you don’t have access to create views in the database.
  •  How will you add a new column to an existing table in data source view?
By using named calculations we can add a new column to an existing table in the data source view. Named Calculation is explained above.
  •  What is dimension table?
A dimension table contains hierarchical data by which you’d like to summarize. It holds specific business information, with a row for each member of the dimension; each descriptive column of a dimension member is called an “attribute”.
The key attribute in the dimension must contain a unique value for each member of the dimension. This key attribute’s column is called the “primary key column”.
The primary key column of each dimension table corresponds to one of the key columns in any related fact table.
  • What is fact table?
A fact table contains the basic information that you wish to summarize. The table that stores the detailed values for measures is called the fact table. In the simplest terms, it is “the table which contains the METRICS” used to analyse the business.
It consists of 2 sections:
1) Foreign keys to the dimensions
2) Measures/facts (numerical values used to monitor business activity)
  •  What is Factless fact table?
This is a very important interview question. A “factless fact table” is a table similar to a fact table except that it has no measures; that is, the table just has links to the dimensions. These tables enable you to track events; indeed, they exist for recording events.
Factless fact tables are used for tracking a process or collecting stats. They are called so because the fact table has no aggregatable numeric values or information; it holds mere key values with references to the dimensions, from which the stats can be collected.
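The “key values only” idea can be sketched in a few lines of Python (the attendance table, its column names, and the helper function are hypothetical, purely for illustration):

```python
# A minimal sketch of a factless fact table: event rows that carry only
# dimension keys (student, course, date) and no numeric measures.
attendance_fact = [
    {"student_key": 1, "course_key": 10, "date_key": 20150112},
    {"student_key": 2, "course_key": 10, "date_key": 20150112},
    {"student_key": 1, "course_key": 11, "date_key": 20150113},
]

# With no measure to sum, analysis reduces to counting rows per dimension key.
def attendance_by_course(rows):
    counts = {}
    for row in rows:
        counts[row["course_key"]] = counts.get(row["course_key"], 0) + 1
    return counts

print(attendance_by_course(attendance_fact))  # {10: 2, 11: 1}
```

Counting events per key is exactly the kind of statistic such a table supports.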
  •  What is attribute relationships, why we need it?
Attribute relationships are the way of telling the Analysis Services engine how the attributes are related to each other. They help relate two or more attributes to each other. Processing time will decrease if proper relationships are given; this improves both cube processing performance and MDX query performance.
In Microsoft SQL Server Analysis Services, attributes within a dimension are always related either directly or indirectly to the key attribute. When you define a dimension based on a star schema, which is where all dimension attributes are derived from the same relational table, an attribute relationship is automatically defined between the key attribute and each non-key attribute of the dimension. When you define a dimension based on a snowflake schema, which is where dimension attributes are derived from multiple related tables, an attribute relationship is automatically defined as follows:
  • Between the key attribute and each non-key attribute bound to columns in the main dimension table.
  • Between the key attribute and the attribute bound to the foreign key in the secondary table that links the underlying dimension tables.
  • Between the attribute bound to foreign key in the secondary table and each non-key attribute bound to columns from the secondary table.

  • How many types of attribute relationships are there?
There are 2 types of attribute relationships:
  1. Rigid
  2. Flexible
Rigid: In a rigid relationship the relationship between the attributes is fixed; attributes will not change levels or their respective attribute relationships.
Example: The time dimension. We know that the month “January 2009” will ONLY belong to the year “2009” and it won’t be moved to any other year.
Flexible: In a flexible relationship, the relationship between the attributes can change over time.
Example: An employee and a department. An employee can be in the accounts department today, but it is possible that the employee will be in the marketing department tomorrow.
  •  How many types of dimensions are there and what are they?
There are 3 such types of dimensions:
  1. conformed dimension
  2. junk dimension
  3. degenerate dimension
  • What are conformed dimensions, junk dimensions and degenerate dimensions?
Conformed dimension: A dimension that is shareable across multiple fact tables or data models. (This is distinct from a role-playing dimension, where a single dimension joins the same fact table several times.)
Junk dimension: A number of very small dimensions might be lumped together to form a single dimension, a junk dimension, whose attributes are not closely related. Grouping random flags and text attributes and moving them into a separate small dimension is known as a junk dimension.
Degenerate dimension: A degenerate dimension stores its values in the fact table, and the dimension is not available as a separate dimension table. It is a dimension key without a corresponding dimension table.
Example: In the PointOfSale Transaction Fact table, we have:
Date Key (FK), Product Key (FK), Store Key (FK), Promotion Key (FP), and POS Transaction Number
The Date Dimension corresponds to the Date Key, and the Product Dimension corresponds to the Product Key. In a traditional parent-child database, the POS Transaction Number would be the key to the transaction header record that contains all the info valid for the transaction as a whole, such as the transaction date and store identifier. But in this dimensional model, we have already extracted this info into other dimensions. Therefore, the POS Transaction Number looks like a dimension key in the fact table but has no corresponding dimension table.
  • What are the types of database schema?
There are 3 types of database schemas:
  1. Star
  2. Snowflake
  3. Starflake
  •  What is star, snowflake and star flake schema?
Star schema: In a star schema the fact table is directly linked to all dimension tables. The star schema’s dimensions are denormalized, with each dimension represented by a single table. A central fact table connects a number of individual dimension tables.
Snowflake: The snowflake schema is an extension of the star schema, where each point of the star explodes into more points. In a star schema, each dimension is represented by a single dimension table, whereas in a snowflake schema, that dimension table is normalized into multiple lookup tables, each representing a level in the dimensional hierarchy. In a snowflake schema the fact table is linked directly to some dimension tables, while other dimension tables sit behind intermediate dimension tables.
Star flake: A hybrid structure that contains a mixture of star (denormalized) and snowflake (normalized) schemas.
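The extra joins a snowflake schema introduces can be sketched in Python (the product/subcategory/category tables and their values below are hypothetical, purely for illustration):

```python
# Star: one denormalized dimension table carrying every level.
dim_product_star = {
    1: {"name": "Road Bike", "subcategory": "Bikes", "category": "Cycling"},
}

# Snowflake: the same attributes normalized into related lookup tables.
dim_product = {1: {"name": "Road Bike", "subcategory_key": 100}}
dim_subcategory = {100: {"name": "Bikes", "category_key": 200}}
dim_category = {200: {"name": "Cycling"}}

def category_star(product_key):
    # One lookup: the denormalized row already holds the category.
    return dim_product_star[product_key]["category"]

def category_snowflake(product_key):
    # Two extra joins: product -> subcategory -> category.
    sub_key = dim_product[product_key]["subcategory_key"]
    cat_key = dim_subcategory[sub_key]["category_key"]
    return dim_category[cat_key]["name"]

print(category_star(1), category_snowflake(1))  # Cycling Cycling
```

Both paths return the same answer; the snowflake form simply trades wider dimension rows for more joins.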
  • How will you hide an attribute?
We can hide the attribute by selecting “AttributeHierarchyVisible = False” in properties of the attribute.
  •  How will you make an attribute not process?
By setting “AttributeHierarchyEnabled = False”, we can keep an attribute from being processed.
  •  What is use of IsAggregatable property?
In Analysis Services we generally see that every dimension has an All member. This is because of the IsAggregatable property of the attribute. You can set its value to false so that the All member is not shown; the All member is the default member for that attribute. If you hide this member, you will have to set another attribute value as the default member, or else some value will be picked as default, which creates confusion when browsing data for anyone unaware of the change in default member.
  •  What are key, name and value columns of an attribute?
Key column of any attribute: Contains the column or columns that represent the key for the attribute, which is the column in the underlying relational table in the data source view to which the attribute is bound. The value of this column for each member is displayed to users unless a value is specified for the NameColumn property.
Name  column of an attribute: Identifies the column that provides the name of the attribute that is displayed to users, instead of the value in the key column for the attribute. This column is used when the key column value for an attribute member is cryptic or not otherwise useful to the user, or when the key column is based on a composite key. The NameColumn property is not used in parent-child hierarchies; instead, the NameColumn property for child members is used as the member names in a parent-child hierarchy.
Value columns of an attribute: Identifies the column that provides the value of the attribute. If the NameColumn element of the attribute is specified, the same DataItem values are used as default values for the ValueColumn element. If the NameColumn element of the attribute is not specified and the KeyColumns collection of the attribute contains a single KeyColumn element representing a key column with a string data type, the same DataItem values are used as default values for the ValueColumn element.
  • What is hierarchy, what are its types and difference between them?
A hierarchy is a very important part of any OLAP engine; it allows users to drill down from summary levels. Hierarchies represent the way users expect to explore data at a more detailed level.
A hierarchy is made up of multiple levels, creating a structure based on end-user requirements.
Year -> Quarter -> Month -> Week are the levels of a calendar hierarchy.
There are 2 types of hierarchies:
  1. Natural hierarchy
  2. Unnatural hierarchy
 Natural hierarchy: This means that the attributes are intuitively related to one another. There is a clear relationship from the top of the hierarchy to the bottom.
Example: An example of this would be date: year, quarter and month follow from each other, and in part, define each other.
Unnatural hierarchy: This means that the attributes are not clearly related.
Example: An example of this might be geography; we may have country -> state -> city, but it is not clear where Province might sit.
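A natural hierarchy’s drill-down path can be sketched as nested levels (the calendar data and the helper function below are hypothetical, for illustration only):

```python
# A natural calendar hierarchy: Year -> Quarter -> Month, as nested levels.
calendar = {
    2009: {"Q1": ["Jan", "Feb", "Mar"], "Q2": ["Apr", "May", "Jun"]},
}

def drill_down(year, quarter=None):
    """Return the children of the selected member, one level down."""
    if quarter is None:
        return sorted(calendar[year])   # year -> its quarters
    return calendar[year][quarter]      # quarter -> its months

print(drill_down(2009))        # ['Q1', 'Q2']
print(drill_down(2009, "Q1"))  # ['Jan', 'Feb', 'Mar']
```

Each member has exactly one parent, which is what makes the hierarchy natural; an unnatural hierarchy has no such clean parent-child nesting.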
  •  What is Attribute hierarchy?
An attribute hierarchy is created for every attribute in a dimension, and each hierarchy is available for dimensioning fact data. This hierarchy consists of an “All” level and a detail level containing all members of the hierarchy.
You can organize attributes into user-defined hierarchies to provide navigation paths in a cube. Under certain circumstances, you may want to disable or hide some attributes and their hierarchies.
  •  What is use of AttributeHierarchyDisplayFolder property ?
AttributeHierarchyDisplayFolder: Identifies the folder in which to display the associated attribute hierarchy to end users. For example if I set the property value as “Test” to all the Attributes of a dimension then a folder with the name “Test” will be created and all the Attributes will be placed into the same.
  •  What is use of AttributeHierarchyEnabled?
AttributeHierarchyEnabled: Determines whether an attribute hierarchy is generated by Analysis Services for the attribute. If the attribute hierarchy is not enabled, the attribute cannot be used in a user-defined hierarchy and the attribute hierarchy cannot be referenced in Multidimensional Expressions (MDX) statements.
  •  What is use of AttributeHierarchyOptimizedState?
AttributeHierarchyOptimizedState: Determines the level of optimization applied to the attribute hierarchy. By default, an attribute hierarchy is FullyOptimized, which means that Analysis Services builds indexes for the attribute hierarchy to improve query performance. The other option, NotOptimized, means that no indexes are built for the attribute hierarchy. Using NotOptimized is useful if the attribute hierarchy is used for purposes other than querying, because no additional indexes are built for the attribute. Other uses for an attribute hierarchy can be helping to order another attribute.
  •   What is use of AttributeHierarchyOrdered ?
AttributeHierarchyOrdered: Determines whether the associated attribute hierarchy is ordered. The default value is True. However, if an attribute hierarchy will not be used for querying, you can save processing time by changing the value of this property to False.
  •  What is the use of AttributeHierarchyVisible ?
AttributeHierarchyVisible : Determines whether the attribute hierarchy is visible to client applications. The default value is True. However, if an attribute hierarchy will not be used for querying, you can save processing time by changing the value of this property to False.
  •  What are types of storage modes?
There are three standard storage modes in OLAP applications
  1. MOLAP
  2. ROLAP
  3. HOLAP
  • Compare the Three Storage Modes ?
Summary and comparison:
MOLAP — detail data: multidimensional format; aggregations: multidimensional format; storage space: medium (detail data is stored in compressed format); query response time: fast; processing time: fast; latency: high.
HOLAP — detail data: relational database; aggregations: multidimensional format; storage space: small; query response time: medium; processing time: fast; latency: medium.
ROLAP — detail data: relational database; aggregations: relational database; storage space: large; query response time: slow; processing time: slow; latency: low.

  • What is MOLAP and its advantage?
MOLAP (Multidimensional Online Analytical Processing): MOLAP is the most used storage type. It is designed to offer maximum query performance to the users. The data and aggregations are stored in a multidimensional format, compressed and optimized for performance. This is both good and bad. When a cube with MOLAP storage is processed, the data is pulled from the relational database, the aggregations are performed, and the data is stored in the Analysis Services database. The data inside the cube refreshes only when the cube is processed, so latency is high.
Advantages:
  1. Since the data is stored on the OLAP server in optimized format, queries (even complex calculations) are faster than ROLAP.
  2. The data is compressed so it takes up less space.
  3. And because the data is stored on the OLAP server, you don’t need to keep the connection to the relational database.
  4. Cube browsing is fastest using MOLAP.
  •  What is ROLAP and its advantage?
ROLAP (Relational Online Analytical Processing) : ROLAP does not have the high latency disadvantage of MOLAP. With ROLAP, the data and aggregations are stored in relational format. This means that there will be zero latency between the relational source database and the cube.
Disadvantage of this mode is the performance, this type gives the poorest query performance because no objects benefit from multi dimensional storage.
Advantages:
  1. Since the data is kept in the relational database instead of on the OLAP server, you can view the data in almost real time.
  2. Also, since the data is kept in the relational database, it allows for much larger amounts of data, which can mean better scalability.
  3. Low latency.
  •   What is HOLAP and its advantage?
Hybrid Online Analytical Processing (HOLAP): HOLAP is a combination of MOLAP and ROLAP. HOLAP stores the detail data in the relational database but stores the aggregations in multidimensional format. Because of this, the aggregations need to be processed when changes occur. With HOLAP you get medium query performance: not as slow as ROLAP, but not as fast as MOLAP. If, however, you are only querying aggregated data or using a cached query, query performance is similar to MOLAP. But when you need the detail data, performance is closer to ROLAP.
Advantages:
  1. HOLAP is best used when large amounts of aggregations are queried often with little detail data, offering high performance and lower storage requirements.
  2. Cubes are smaller than MOLAP since the detail data is kept in the relational database.
  3. Processing time is less than MOLAP since only aggregations are stored in multidimensional format.
  4. Low latency since processing takes place when changes occur and detail data is kept in the relational database.
  • What are Translations and its use?
Translation: The translation feature in analysis service allows you to display caption and attributes names that correspond to a specific language. It helps in providing GLOBALIZATION to the Cube.
  • What is Database dimension?
All the dimensions that are created using NEW DIMENSION Wizard are database dimensions. In other words, the dimensions which are at Database level are called Database Dimensions.
  • What is Cube dimension?
A cube dimension is an instance of a database dimension within a cube. A database dimension can be used in multiple cubes, and multiple cube dimensions can be based on a single database dimension.
  • Difference between Database dimension and Cube dimension?
  1. A database dimension has only Name and ID properties, whereas a cube dimension has several more.
  2. A database dimension is created once, whereas a cube dimension is a reference to a database dimension.
  3. A database dimension exists only once, whereas multiple cube dimensions can be created from it using the ROLE PLAYING dimensions concept.
  •  How will you add a dimension to cube?
To add a dimension to a cube, follow these steps.
  1. In Solution Explorer, right-click the cube, and then click View Designer.
  2. In the Design tab for the cube, click the Dimension Usage tab.
  3. Either click the Add Cube Dimension button, or right-click anywhere on the work surface and then click Add Cube Dimension.
  4. In the Add Cube Dimension dialog box, use one of the following steps:
  5. To add an existing dimension, select the dimension, and then click OK.
  6. To create a new dimension to add to the cube, click New dimension, and then follow the steps in the Dimension Wizard.
  • What is SCD (slowly changing dimension)?
Slowly changing dimensions (SCD) determine how historical changes in the dimension tables are handled. Implementing the SCD mechanism enables users to know which category an item belonged to on any given date.
  • What are types of SCD?
It is a concept of STORING historical changes, and whenever someone finds a new way to store them, a new type comes into the picture. Basically there are 3 types of SCD, given below:
  1. SCD type1
  2. SCD type2
  3. SCD type3
  • What  is Type1, Type2, Type3 of SCD?
Type 1: In Type 1 Slowly Changing Dimension, the new information simply overwrites the original information. In other words, no history is kept.
In our example, recall we originally have the following table:
Customer Key  Name       State
1001          Christina  Illinois
After Christina moved from Illinois to California, the new information replaces the original record, and we have the following table:
Customer Key  Name       State
1001          Christina  California
Advantages: This is the easiest way to handle the Slowly Changing Dimension problem, since there is no need to keep track of the old information.
Disadvantages: All history is lost. By applying this methodology, it is not possible to trace back in history.
Usage:  About 50% of the time.
When to use Type 1: Type 1 slowly changing dimension should be used when it is not necessary for the data warehouse to keep track of historical changes.
 Type 2: In Type 2 Slowly Changing Dimension, a new record is added to the table to represent the new information. Therefore, both the original and the new record will be present. The new record gets its own primary key.
In our example, recall we originally have the following table:
Customer Key  Name       State
1001          Christina  Illinois
After Christina moved from Illinois to California, we add the new information as a new row into the table:
Customer Key  Name       State
1001          Christina  Illinois
1005          Christina  California
Advantages: This allows us to accurately keep all historical information.
Disadvantages:
  1. This will cause the size of the table to grow fast. In cases where the number of rows for the table is very high to start with, storage and performance can become a concern.
  2. This necessarily complicates the ETL process.
Usage: About 50% of the time.
Type3 : In Type 3 Slowly Changing Dimension, there will be two columns to indicate the particular attribute of interest, one indicating the original value, and one indicating the current value. There will also be a column that indicates when the current value becomes active.
In our example, recall we originally have the following table:
Customer Key  Name       State
1001          Christina  Illinois
To accommodate Type 3 Slowly Changing Dimension, we will now have the following columns:
Customer Key, Name, Original State, Current State, Effective Date
After Christina moved from Illinois to California, the original information gets updated, and we have the following table (assuming the effective date of change is January 15, 2003):
Customer Key  Name       Original State  Current State  Effective Date
1001          Christina  Illinois        California     15-JAN-2003
Advantages:
  1. This does not increase the size of the table, since new information is updated.
  2. This allows us to keep some part of history.
Disadvantages: Type 3 will not be able to keep all history where an attribute is changed more than once. For example, if Christina later moves to Texas on December 15, 2003, the California information will be lost.
Usage: Type 3 is rarely used in actual practice.
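The three SCD strategies described above can be sketched in Python. The column values follow the Christina tables; the function names and the simplified key handling are my own illustration, not a real ETL API:

```python
def scd_type1(dim, key, new_state):
    # Type 1: overwrite in place; no history is kept.
    dim[key]["state"] = new_state

def scd_type2(dim, key, new_state, next_key):
    # Type 2: add a new row with its own surrogate key; the old row stays.
    dim[next_key] = {"name": dim[key]["name"], "state": new_state}

def scd_type3(dim, key, new_state, effective_date):
    # Type 3: keep original and current values in separate columns.
    row = dim[key]
    if "original_state" not in row:
        row["original_state"] = row.pop("state")
    row["current_state"] = new_state
    row["effective_date"] = effective_date

dim = {1001: {"name": "Christina", "state": "Illinois"}}
scd_type2(dim, 1001, "California", next_key=1005)
print(dim[1005])  # {'name': 'Christina', 'state': 'California'}
```

Note how Type 2 grows the table while Types 1 and 3 update the existing row, which mirrors the advantages and disadvantages listed above.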
  •  What is role playing dimension with two examples?
Role-playing dimensions: A dimension can play different roles in a fact table. You can recognize a role-playing dimension when there are multiple columns in a fact table that each have foreign keys to the same dimension table.
Ex1: There are several date keys in the FactInternetSales and FactResellerSales tables which all refer to the DimTime table; the same time dimension is used to track sales in each of these fact tables, and the corresponding role-playing dimensions are added to the cube.
Ex2: In retail banking, for a checking account cube we could have a transaction date dimension and an effective date dimension. Both dimensions have date, month, quarter and year attributes. The formats of the attributes are the same on both dimensions; for example, the date attribute is in ‘dd-mm-yyyy’ format. Both dimensions have members from 1993 to 2010.
  •  What is measure group, measure?
Measure groups: A cube can contain multiple measure groups. These measure groups can contain different dimensions and be at different granularities, but as long as you model your cube correctly, your users will be able to use measures from each of these measure groups in their queries easily, without worrying about the underlying complexity.
Creating multiple measure groups: To create a new measure group in the Cube Editor, go to the Cube Structure tab, right-click on the cube name in the Measures pane and select ‘New Measure Group’. You’ll then need to select the fact table to create the measure group from, and the new measure group will be created; any columns that aren’t used as foreign key columns in the DSV will automatically be created as measures, and you’ll also get an extra measure of aggregation type Count. It’s a good idea to delete any measures you are not going to use at this stage.
Measures: Measures are the numeric values that our users want to aggregate, slice, dice and otherwise analyze, and as a result, it’s important to make sure they behave the way we want them to. One of the fundamental reasons for using Analysis Services is that, unlike a relational database, it allows us to build business rules about measures into our cube design: how they should be formatted, how they should aggregate up, how they interact with specific dimensions and so on.
  •  What is attribute?
An attribute is a specification that defines a property of an object, element, or file. It may also refer to or set the specific value for a given instance of such.
  •  What is surrogate key?
A surrogate key is a system-generated key which acts like an alternate primary key for the table in the database. Data warehouses commonly use a surrogate key to uniquely identify an entity. A surrogate key is not generated by the user but by the system. A primary difference between a primary key and a surrogate key in some databases is that the primary key uniquely identifies a record while a surrogate key uniquely identifies an entity.
Ex: An employee may be recruited before the year 2000 while another employee with the same name may be recruited after the year 2000. Here, the primary key will uniquely identify the record while the surrogate key will be generated by the system (say a serial number) since the SK is NOT derived from the data.
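The idea that a surrogate key is system-generated rather than derived from the data can be sketched like this (the table layout, column names, and loader function are hypothetical, for illustration only):

```python
import itertools

# A surrogate key is assigned by the system (here, a simple serial counter),
# so two employees with the same natural attributes still get distinct keys.
_next_key = itertools.count(1)

def load_employee(dim, name, hire_year):
    sk = next(_next_key)  # system-assigned serial number, not from the data
    dim[sk] = {"name": name, "hire_year": hire_year}
    return sk

dim = {}
k1 = load_employee(dim, "John Smith", 1998)
k2 = load_employee(dim, "John Smith", 2004)  # same name, different entity
print(k1, k2)  # 1 2
```

Because the key is a serial number rather than something derived from the employee's attributes, the two John Smiths remain distinguishable forever.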
  • How many types of relations are there between dimension and measure group?
There are six types of relationships between a dimension and a measure group:
  1. No Relationship
  2. Regular
  3. Reference
  4. Many to Many
  5. Data Mining
  6. Fact
  •  What is regular type, no relation type, fact type, referenced type, many-to-many type with example?
No relationship: The dimension and measure group are not related.
Regular: The dimension table is joined directly to the fact table.
Referenced: The dimension table is joined to an intermediate table, which, in turn, is joined to the fact table.
Many to many: The dimension table is joined to an intermediate fact table; the intermediate fact table is joined, in turn, to an intermediate dimension table to which the main fact table is joined.
Data mining:The target dimension is based on a mining model built from the source dimension. The source dimension must also be included in the cube.
Fact table: The dimension table is the fact table.
  •  What are calculated members and what is its use?
Calculations are items in the cube that are evaluated at runtime.
Calculated members: You can create customized measures or dimension members, called calculated members, by combining cube data, arithmetic operators, numbers, and/or functions.
Example: You can create a calculated member called Marks that converts dollars to marks by multiplying an existing dollar measure by a conversion rate. Marks can then be displayed to end users in a separate row or column. Calculated member definitions are stored, but their values exist only in memory. In the preceding example, values in marks are displayed to end users but are not stored as cube data.
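The dollars-to-marks example can be sketched in Python to show that a calculated member is evaluated at query time and never stored with the cube data (the conversion rate and measure names below are hypothetical):

```python
# Persisted cube data: only the stored measure exists on disk.
stored_measures = {"sales_dollars": 1000.0}

CONVERSION_RATE = 1.8  # hypothetical dollars-to-marks rate

def marks(measures, rate=CONVERSION_RATE):
    # A calculated member: combines a stored measure with an arithmetic
    # operator on demand; the result lives only in memory.
    return measures["sales_dollars"] * rate

print(marks(stored_measures))  # 1800.0
```

Note that `stored_measures` is unchanged after the call: the converted value is recomputed for each query, just as the definition above describes.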
  •  What are KPIs and what is its use?
In Analysis Services, a KPI is a collection of calculations, associated with a measure group in a cube, that are used to evaluate business success. We use KPIs to see the state of the business at a particular point; they are represented with graphical items such as traffic lights, gauges, etc.
  • What are actions, how many types of actions are there, explain with example?
Actions are a powerful way of extending the value of SSAS cubes for the end user. Users can click on a cube or a portion of a cube to start an application with the selected item as a parameter, or to retrieve information about the selected item.
One of the objects supported by a SQL Server Analysis Services cube is the action. An action is an event that a user can initiate when accessing cube data. The event can take a number of forms. For example, a user might be able to view a Reporting Services report, open a Web page, or drill through to detailed information related to the cube data.
Analysis Services supports three types of actions:
Report action: Returns a Reporting Services report that is associated with the cube data on which the action is based.
Drillthrough: Returns a result set that provides detailed information related to the cube data on which the action is based.
Standard: Has five action subtypes that are based on the specified cube data.
Dataset: Returns a multidimensional dataset.
Proprietary: Returns a string that can be interpreted by a client application.
Rowset: Returns a tabular rowset.
Statement: Returns a command string that can be run by a client application.
URL:  Returns a URL that can be opened by a client application, usually a browser.
  • What is partition, how will you implement it?
You can use the Partition Wizard to define partitions for a measure group in a cube. By default, a single partition is defined for each measure group in a cube. Access and processing performance, however, can degrade for large partitions. By creating multiple partitions, each containing a portion of the data for a measure group, you can improve the access and processing performance for that measure group.
  • What is the minimum and maximum number of partitions required for a measure group?
In 2005 a MAX of 2000 partitions can be created per measure group and that limit is lifted in later versions.
In any version the MINIMUM is ONE Partition per measure group.
  •  What are Aggregations and its use?
Aggregations provide performance improvements by allowing Microsoft SQL Server Analysis Services (SSAS) to retrieve pre-calculated totals directly from cube storage instead of having to recalculate data from an underlying data source for each query. To design these aggregations, you can use the Aggregation Design Wizard. This wizard guides you through the following steps:
  1. Selecting standard or custom settings for the storage and caching options of a partition, measure group, or cube.
  2. Providing estimated or actual counts for objects referenced by the partition, measure group, or cube.
  3. Specifying aggregation options and limits to optimize the storage and query performance delivered by designed aggregations.
  4. Saving and optionally processing the partition, measure group, or cube to generate the defined aggregations.
  5. After you use the Aggregation Design Wizard, you can use the Usage-Based Optimization Wizard to design aggregations based on the usage patterns of the business users and client applications that query the cube.
  •  What is perspective, have you ever created perspective?
Perspectives are a way to reduce the complexity of cubes by hiding elements like measure groups, measures, dimensions, hierarchies, etc. It’s nothing but slicing of a cube; for example, if we have retail and hospital data and the end user is subscribed to see only hospital data, then we can create a perspective accordingly.
  •  What is deploy, process and build?
Build: Verifies the project files and creates several local files.
Deploy: Deploys the structure of the cube (skeleton) to the server.
Process: Reads the data from the source and builds the dimensions and cube structures.
Elaborating the same is given below.
Build: A build is a version of a program. As a rule, a build is a pre-release version, and as such is identified by a build number rather than by a release number. Reiterative (repeated) builds are an important part of the development process. Throughout development, application components are collected and repeatedly compiled for testing purposes, to ensure a reliable final product. Build tools, such as make or Ant, enable developers to automate some programming tasks. As a verb, to build can mean either to write code or to put individual coded components of a program together.
Deployment: During development of an Analysis Services project in Business Intelligence Development Studio, you frequently deploy the project to a development server in order to create the Analysis Services database defined by the project. This is required to test the project; for example, to browse cells in the cube, browse dimension members, or verify key performance indicator (KPI) formulas.
  •  What is the maximum size of a dimension?
The maximum size of a dimension is 4 GB.
  •  What are the types of processing and explain each?
There are seven processing options in SSAS:
  • Process Full
  • Process Data
  • Process Index
  • Process Incremental
  • Process Structure
  • Unprocess
  • Process Default
 Process Full: Processes an Analysis Services object and all the objects that it contains. When Process Full is executed against an object that has already been processed, Analysis Services drops all data in the object, and then processes the object. This kind of processing is required when a structural change has been made to an object, for example, when an attribute hierarchy is added, deleted, or renamed. This processing option is supported for cubes, databases, dimensions, measure groups, mining models, mining structures, and partitions.
Process Data: Processes data only, without building aggregations or indexes. If there is data in the partitions, it will be dropped before re-populating the partition with source data. This processing option is supported for dimensions, cubes, measure groups, and partitions.
Process Index: Creates or rebuilds indexes and aggregations for all processed partitions. This option causes an error on unprocessed objects. This processing option is supported for cubes, dimensions, measure groups, and partitions.
Process Incremental: Adds newly available fact data and processes only the relevant partitions. This processing option is supported for measure groups and partitions.
Process Structure: If the cube is unprocessed, Analysis Services will process, if it is necessary, all the cube’s dimensions. After that, Analysis Services will create only cube definitions. If this option is applied to a mining structure, it populates the mining structure with source data. The difference between this option and the Process Full option is that this option does not iterate the processing down to the mining models themselves. This processing option is supported for cubes and mining structures.
Unprocess: Drops the data in the object specified and any lower-level constituent objects. After the data is dropped, it is not reloaded. This processing option is supported for cubes, databases, dimensions, measure groups, mining models, mining structures, and partitions.
Process Default: Detects the process state of an object, and performs processing necessary to deliver unprocessed or partially processed objects to a fully processed state. This processing option is supported for cubes, databases, dimensions, measure groups, mining models, mining structures, and partitions.
  • What is a cube?
The basic unit of storage and analysis in Analysis Services is the cube. A cube is a collection of data that’s been aggregated to allow queries to return data quickly.
For example, a cube of order data might be aggregated by time period and by title, making the cube fast when you ask questions concerning orders by week or orders by title.
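The speed-up from pre-aggregation can be sketched in a few lines of Python (a hypothetical illustration; SSAS stores its aggregations very differently): totals are computed once per (week, title) combination, so answering a query becomes a single lookup instead of a scan over all order rows.

```python
from collections import defaultdict

# Hypothetical order rows: (week, title, amount). Made-up sample data.
orders = [
    (1, "SQL Basics", 100.0),
    (1, "SQL Basics", 50.0),
    (1, "MDX Guide", 75.0),
    (2, "SQL Basics", 20.0),
]

# Pre-aggregate once, the way a cube stores a total per dimension combination.
cube = defaultdict(float)
for week, title, amount in orders:
    cube[(week, title)] += amount  # one "cell" per (week, title)

# A question like "orders for 'SQL Basics' in week 1" is now a direct lookup.
print(cube[(1, "SQL Basics")])  # 150.0
```

The same idea explains why a cube answers "orders by week" or "orders by title" quickly: the expensive grouping work was done once, at processing time.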
  • What is AMO?
The full form of AMO is Analysis Management Objects. This is used to create or alter cubes from .NET code.
  • After creating the cube, if  we added a new column to the OLTP table then how you add this new attribute to the cube?
Just open the data source view and on right-click we find the option REFRESH. Clicking REFRESH will add the new attributes to the table, which can then be added to the cube.
REAL TIME INTERVIEW QUESTIONS - 
  •  What is the size of the Cube in your last Project?
Answer to this question varies from project to project and mainly depends on how BIG is your database and how COMPLEX the database design is. Generally for the database with a TRANSACTION TABLE of 50 crore records, the cube size will be around 100GB. So, better go with 100GB as answer to this question.
  • What is size of the database in your last Project?
You can expect this question immediately after you answer 100 GB to the last question. The database size will be 600 to 800 GB, for which the cube will come to 100 GB. So go with 800 GB for this question.
  • What is size of the fact(Transaction) table in your last Project?
This will be the next question if you answer 800 GB as your database size. Here the interviewer is not expecting the SIZE in GBs but the NUMBER OF ROWS in the transaction table. Go with 57 crore records for this question.
  • How frequently you process the cube?
You have to be very careful here. The frequency of processing the cube depends on HOW FREQUENTLY YOU ARE GETTING NEW DATA. Once the new data comes, the SSIS team loads it and sends a mail to the SSAS team after the load is completed successfully. Once the SSAS team receives the mail, they will look for the best time to PROCESS.
Typically we get data either Weekly or Monthly. So you can say that the processing of the cube will be done either Weekly or monthly.
  • How frequently you get DATA from clients?
This answer should be based on your last answer. If you answered WEEKLY to the last question then the answer to this question should also be WEEKLY; if MONTHLY, then this answer should also be MONTHLY.
  • What type of Processing Options you used to process the cube in your Project?
This is the toughest question to answer. This depends on DATA you have and CLIENTS requirements. Let me explain here.
  1. If the database is SMALL, let’s say it has only 1 crore records, then people do a FULL PROCESS as it won’t take much time.
  2. If the database is MEDIUM, let’s say it has 15 crore records, then people prefer to do an INCREMENTAL PROCESS unless CLIENTS ask us to do a FULL PROCESS, as it takes a little bit of time.
  3. If the database is HUGE, let’s say it has more than 35 to 40 crore records, then people prefer to do an INCREMENTAL PROCESS unless CLIENTS ask us to do a FULL PROCESS, as it takes a lot of time. In this case we TRY to convince clients to go INCREMENTAL, and if they don’t agree then we don’t have any other option.
  4. Incremental processing comes into the picture ONLY when there are no updates to the OLD data, i.e. no changes to already existing data; otherwise there is NO OTHER OPTION than a FULL PROCESS.
  • How you provide security to cube?
By defining roles we provide security to cubes. Using roles we can restrict users from accessing restricted data. The procedure is as follows - 
  1. Define Role
  2. Set Permission
  3. Add appropriate Users to the role
  • How you move the cube from one server to another?
There are many ways to do the same. Let me explain four here and cleverly you can say “I worked on 4 SSAS projects till date and implemented different types in all the four.”
  1. Backup and restore – This is the simplest way. Take the Backup from development server and copy the backup to FTP folder of clients. After doing this drop a mail to Client’s Admin and he will take care of RESTORE part.
  2. Directly PROCESS the cube in the PRODUCTION environment. For this you need access to Production, which will not be given by clients unless the clients are *********. One of the clients I worked for gave me FULL access .. ;)
  3. Under Start –> All Programs –> Sql Server –> Analysis Services you can see the Deployment Wizard. This is one way of moving the cube. This method has some steps to follow. First deploy your cube, and FOUR files will be created in the BIN folder of the PROJECT folder. Copy those FOUR files and paste them on the Production server in any directory. Then OPEN the DEPLOYMENT Wizard in production, and when it asks for the Database file, point to the location where you copied the files. After that NEXT, NEXT, NEXT … OK .. the cube will be deployed and processed.
  4. This way is the most beautiful one: Synchronization. In this we first deploy and process the cube in the STAGING ENVIRONMENT and then go to the production server. Connect to Analysis Services in SSMS and select Synchronize by right-clicking on the Databases folder in SSMS of Analysis Services. Then select the STAGING SERVER as the source and click OK. The changes in the cube present in the Staging server will be copied to the production server.
  • What is the toughest challenge you face in your Project?
There are a couple of areas where we faced difficulty.
  1. While working on RELATIONSHIPS between Measure Groups and Dimensions.
  2. Working on Complex calculations
  3. Performance tuning
  • How you created Partitions of the cube in your Last Project?
Partitions can be created on different data. A few people do it PRODUCT NAME wise, and many prefer to do it DATE wise. You can go with DATE wise.
In dates, we can create them MONTH wise, WEEK wise, QUARTER wise and sometimes YEAR wise. This all depends on how much data is coming per WEEK or MONTH or QUARTER … If you are getting 50 lakh records per month then say you do it MONTH wise.
  • How many dimensions in your last cube?
47 to 50.
  • How many measure groups in your last cube?
Total 10, and of those 4 are fact tables and the remaining 6 are factless fact tables.
  • What is the Schema of your last cube?
Snowflake
  • Why not STAR Schema ?
My data base design doesn’t support STAR Schema.
  • What are the different relationships that you used in your cube?
  1. Regular
  2. Referenced
  3. Many to Many
  4. Fact
  5. No Relationship
  • Have you created KPIs? If so, explain.
Don’t add much to this as the questions in this will be tricky. Just tell that you worked on couple of KPI and you have basic knowledge on this. (Don’t worry, this is not MANDATORY)
  • How you define Aggregations in your Project?
We defined the aggregations for MOST FREQUENTLY USED data in SSRS reports.
  • Size of SSAS team in your last Project?
Just 2 guys, as we are really in demand and there is a lot of scarcity :)
  • How many Resources worked on same Cube in your Project?
Only 2 and one in morning shift and another in Evening shift.
  • How much time it take to Process the Cube?
This is a very important question. This again depends on the SIZE of the database, the complexity of the database and your server settings. For a database with 50 crore transaction records, it generally takes 3.5 hrs.
  • How many Calculation you done in Your Project?
If you answer more than 5,000 then you are caught, unless you are super good in MDX. The best answer for you is “Worked on 50 calculations”.

Thursday 13 March 2014


SQL Performance Explained

THE INFORMATION IN THIS ARTICLE IS PROVIDED “AS IS”
WITHOUT WARRANTY OF ANY KIND BY ANY PERSON,
INCLUDING WITHOUT LIMITATION, THE AUTHOR AND
THE PUBLISHER.

Performance problems are as old as SQL itself. There are even opinions that say that SQL inherently gives poor performance.
Although it might have been true in the early days of SQL, it is
definitely not true anymore. Nevertheless SQL performance
problems are everywhere, everyday. How does this happen?

The SQL language is perhaps the most successful fourth
generation programming language (4GL). The main goal of SQL is
to separate the “what” from the “how”. An SQL statement is a
straight expression of what is needed without instructions how to
get it. Consider the following example:

SELECT date_of_birth
FROM employees
WHERE last_name = 'WINAND'

Writing this SQL statement doesn't require any knowledge about
the physical properties of the storage (such as disks, files, blocks,
...) or any knowledge of how the database executes that statement.
There are no instructions what to do first, which files to open or
how to find the requested records inside the data files. From a
developer's perspective, the database is a black box.

Although developers know a great deal about how to write SQL, there is no need to know anything about the inner workings of a database to do it. This abstraction greatly improves programmers' productivity and works very well in almost all cases. However,
there is one common problem where the abstraction doesn't work
anymore: performance.

That's where separating “what” and “how” bites back. By definition, the author of the SQL statement should not care how
the statement is executed. Consequently, the author is not
responsible if the execution is slow. However, experience proves
the opposite; the author must know a little bit about the database
to write efficient SQL.

As it turns out, the only thing developers need to know to write
efficient SQL is how indexes work.
That's because missing and inadequate indexes are among the
most common causes of poor SQL performance. Once the
mechanics of indexes are understood, another performance killer
disappears automatically: bad SQL statements.

This book covers everything a developer must know to use
indexes properly—and nothing more. To be more precise, the
book actually covers only the most important type of index in the
SQL databases: the B-Tree index.

The B-Tree index works almost identically in many SQL database implementations. That's why the principles explained in this book
are applicable to many different databases. However, the main
body of this book uses the vocabulary of the Oracle database. Side
notes explain the differences to four more major SQL databases:
IBM DB2, MySQL, PostgreSQL and Microsoft SQL Server.
The structure of the book is tailor-made for developers; most of
the chapters correspond to a specific part of an SQL statement.

CHAPTER 1 - Anatomy of an Index
The first chapter is the only one that doesn't cover SQL; it's
about the fundamental structure of an index. The
understanding of the index structure is essential to follow
the later chapters—don't skip this.
Although the chapter is rather short—about 4 printed pages
—you will already understand the phenomenon of slow
indexes after working through the chapter.

CHAPTER 2 - The Where Clause
This is where we pull out all the stops. This chapter explains
all aspects of the where clause; beginning with very simple
single column lookups down to complex clauses for ranges
and special cases like NULL.
The chapter will finally contain the following sections:
The Equals Operator
Functions
Indexing NULL
Searching for Ranges
Obfuscated Conditions
This chapter makes up the main body of the book. Once you
learn to use these techniques properly, you will already
write much faster SQL.


CHAPTER 3 - Testing and Scalability
This chapter is a little digression about a performance
phenomenon that hits developers very often. It explains the
performance differences between development and
production databases and covers the effects of growing data
volumes.

CHAPTER 4 - Joins (not yet published)
Back to SQL: how to use indexes to perform a fast table
join?

CHAPTER 5 - Fetching Data (not yet published)
Have you ever wondered if there is any difference between
selecting a single column or all columns? Here is the answer
—along with a trick to get even better performance.

CHAPTER 6 - Sorting, Grouping and Partitioning (not
yet published)
ORDER BY, GROUP BY and even PARTITION BY can
benefit from an index.

CHAPTER 7 - Views (not yet published)
There is actually nothing magic about indexes on views;
they just don't exist. However, there are materialized views.

CHAPTER 8 - Advanced Techniques (not yet published)
This chapter explains how to index for some frequently used
structures like Top-N Queries or min()/max() searches.

CHAPTER 9 - Insert, Delete and Update (not yet
published)
How do indexes affect data changes? An index doesn't come
for free—use them wisely!

APPENDIX A - Execution Plans
Asking the database how it executes a statement.

APPENDIX B - Myth Directory
Lists some common myths and explains the truth. Will be
extended as the book grows.

Chapter 1. Anatomy of an Index
“An index makes the query fast” is the most basic explanation of
an index I have ever heard of. Although it describes the most
important aspect of an index very well, it is—unfortunately—not
sufficient for this book. This chapter describes the index structure
on a high level, like an X-Ray, but provides all the required details
to understand the performance aspects discussed throughout the
book.
First of all, an index is a distinct object in the database that
requires space, and is stored at a different place than the table
data. Creating an index does not change the table; it just creates a
new data structure that refers to the table. A certain amount of
table data is copied into the index so that the index has redundant
data. The book index analogy describes the concept very well; it is
stored at a different place (typically at the end of the book), has
some redundancy, and refers to the main part of the book.
Clustered Indexes (SQL Server, MySQL/InnoDB)
There is an important difference how SQL Server and MySQL
(with InnoDB engine) handle tables.
SQL Server and InnoDB always organize tables as indexes that consist of all table columns. That index (that is in fact the
table) is a clustered index. Other indexes on the same table,
secondary indexes or non-clustered indexes, work like
described in this chapter.
Volume 2 explains the corresponding Oracle database feature;
Index Organized Tables. The benefits and drawbacks described
there apply to Clustered Indexes in SQL Server and InnoDB.
A database index supports fast data access in a very similar way
to a book index or a printed phone book. The fundamental concept
is to maintain an ordered representation of the indexed data.
Once the data is available in a sorted manner, localizing an
individual entry becomes a simple task. However, a database
index is more complex than a phone book because it undergoes
constant change. Imagine maintaining a phone book manually by
adding new entries in the correct place. You will quickly find out
that the individual pages don't have enough space to write a new
entry between two existing ones. That's not a problem for printed
phone books because every new print covers all the accumulated
updates. An SQL database cannot work that way. There is
constant need to insert, delete and update entries without
disturbing the index order.

The database uses two distinct data structures to meet this
challenge: the leaf nodes and the tree structure.
The Leaf Nodes
The primary goal of an index is to maintain an ordered
representation of the indexed data. However, the database cannot
store the records sequentially on the disk because an insert
statement would need to move data to make room for the new
entry. Unfortunately, moving data becomes very slow when data
volume grows.

The solution to this problem is a chain of data fragments (nodes)—
each referring to its neighboring nodes. Inserting a new node into
the chain means to update the references in the preceding and
following nodes, so that they refer to the new node. The physical
location of the new node doesn't matter, it is logically linked at the
correct place between the preceding and following nodes.

Because each node has two links with each adjacent node, the data
structure is called a double linked list. The key feature of a double
linked list is to support the insertion and deletion of nodes in
constant time—that is, independent of the list length. Double linked lists are also used for collections (containers) in many
programming languages. Table 1.1, “Various Double-Linked List
Implementations” lists some of them.
Table 1.1. Various Double-Linked List Implementations

Programming Language    Name
Java                    java.util.LinkedList
.NET Framework          System.Collections.Generic.LinkedList
C++                     std::list
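The constant-time insertion described above can be sketched with a minimal Python node class (an illustrative toy, not database code): linking a new node only touches the two neighboring nodes, never the rest of the list.

```python
class Node:
    """Minimal doubly linked list node (illustrative sketch)."""
    def __init__(self, key):
        self.key = key
        self.prev = None
        self.next = None

def insert_after(node, new_node):
    """Link new_node between node and node.next.
    Constant time, regardless of list length or physical location."""
    new_node.prev = node
    new_node.next = node.next
    if node.next is not None:
        node.next.prev = new_node
    node.next = new_node

# Build 10 -> 30, then insert 20 logically between them.
a, b = Node(10), Node(30)
insert_after(a, b)
insert_after(a, Node(20))

keys = []
n = a
while n is not None:
    keys.append(n.key)
    n = n.next
print(keys)  # [10, 20, 30]
```

Note that the node holding 20 can live anywhere in memory; only the links decide its logical position, which is exactly how index leaf nodes avoid moving data on insert.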
Databases use double linked lists to connect the individual index
leaf nodes. The leaf nodes are stored in database blocks or pages;
that is, the smallest storage unit databases use. Every block
within an index has the same size; that is, typically a few
kilobytes. The database stores as many index entries as possible
in each block to use the space optimally. That means that the sort
order is maintained on two different levels; the index entries
within each leaf node, and then the leaf nodes among each other
with a double linked list.
Figure 1.1. Index Leaf Nodes and Corresponding Table
Data
Figure 1.1, “Index Leaf Nodes and Corresponding Table Data”
illustrates the index leaf nodes and their connection to the table
data. Each index entry consists of the indexed data (the key) and
a reference to the corresponding table row (the ROWID—that is, the
physical address of the table row). Unlike an index, the table data
is not sorted at all. There is neither a relationship between the
rows stored in the same block, nor is there any connection
between the blocks.
The index leaf nodes allow the traversal of index entries in both
directions very efficiently. The number of leaf nodes accessed
during a range scan is minimal. For example, a search for the key
range 15 to 35 in Figure 1.1, “Index Leaf Nodes and
Corresponding Table Data” needs to visit only three index nodes
to find all matching entries.
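The figure itself is not reproduced here, but the three-node visit for the range 15 to 35 can be mimicked with a small Python sketch (the leaf contents are made up for illustration): the scan walks the linked leaf nodes left to right and stops at the first key beyond the range.

```python
# Toy leaf nodes: sorted keys per node, linked in order (illustrative only).
leaf_nodes = [[11, 13, 18], [21, 27, 34], [35, 39, 42]]

def range_scan(leaves, low, high):
    """Scan linked leaf nodes in order, collecting keys in [low, high].
    Stops as soon as a key beyond the range is seen."""
    result, visited = [], 0
    for node in leaves:
        visited += 1
        for key in node:
            if key > high:
                return result, visited
            if key >= low:
                result.append(key)
    return result, visited

keys, nodes_read = range_scan(leaf_nodes, 15, 35)
print(keys, nodes_read)  # [18, 21, 27, 34, 35] 3
```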
The Tree
Because the leaf nodes are not physically sorted on the disk—their
logical sequence is established with the double linked list—the
search for a specific entry can't be performed as in a phone book.
A phone book lookup works because the physical order of the
pages (sheets) is correct. If you search for “Smith” but open it
at “Robinson” in the first place, you know that Smith must come
farther back. A search in the index leaf nodes doesn't work that
way because the nodes are not stored sequentially. It's like
searching in a phone book with shuffled pages.
A second data structure is required to support the fast search for
a specific entry. The balanced search tree, or B-Tree in short, is
the perfect choice for this purpose.
Figure 1.2. Tree Structure
Figure 1.2, “Tree Structure” shows an example index with 30
entries in the leaf nodes and the corresponding tree structure.
The tree supports the fast search for any specific index entry—
independent of the physical location on the disk.
The zoom on the left hand side shows the topmost part of the tree
in more detail. The key values in the branch node correspond to
the greatest key values stored in the referenced leaf nodes. For
instance, the highest value in the first leaf node is 18; the
corresponding entry in the branch node is therefore also 18. The
same is true for the other two leaf nodes so that the first branch
node contains the keys 18, 27 and 39. The remaining leaf nodes
are processed in the same way so that the first level of tree nodes is
built. The next level is built on top of the branch nodes. The
procedure repeats until all keys fit into a single node, the root
node.

The entire tree has the same tree depth; all leaf nodes have the
same distance to the root node. That means that each element can
be accessed as fast as all other elements. The index in Figure 1.2,
“Tree Structure” has a tree depth of three because the traversal
from the root node to the leaf node goes along three nodes.
Please note that the database uses special algorithms to keep the
tree balanced at all times. The database performs the required
steps to keep the balance for each insert, delete and update
statement. The index maintenance overhead can become
considerable—it will be discussed in Volume 2.
Figure 1.3. Tree Traversal
The strength of the balanced tree is a very efficient traversal.
Figure 1.3, “Tree Traversal” shows an index fragment to illustrate
the tree traversal to the key "57". The index lookup starts at the
root node on the left hand side. Each entry of the root node is
processed in ascending order until an index entry is bigger or
equal (>=) to the search term (57). The same procedure continues
at the referenced node until the scan reaches a leaf node.
A textual explanation of an algorithm is always difficult. Let's
repeat it with the real numbers from the figure. The process
starts with the entry 39, the first entry of the root node. 39 is not
bigger than or equal to 57 (the search term), so the procedure
repeats for the next entry in the same node. 83 satisfies the
bigger or equal (>=) test, so the traversal follows the reference to
the next level—the branch node. The process skips
the first two entries in the branch node (46 and 53) because they
are smaller than the search term. The next entry is equal to the
search term (57)—the traversal descends along that reference to
the leaf node. The leaf node is also inspected entry-by-entry to
find the search key.
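The traversal rule just described (follow the first entry that is bigger than or equal to the search term) can be sketched in Python; the node layout is a made-up miniature, not an actual storage format.

```python
def find_leaf(node, key):
    """Descend from a branch node to the leaf that may contain key.
    In each branch node, follow the first entry whose separator
    key is >= the search key."""
    # Branch nodes are lists of (separator, child) tuples; leaves are key lists.
    while isinstance(node, list) and node and isinstance(node[0], tuple):
        for sep, child in node:
            if sep >= key:
                node = child
                break
        else:
            node = node[-1][1]  # key beyond all separators: rightmost child
    return node

# Miniature index: each branch entry holds the greatest key of its child.
leaves = [[39, 46, 53], [57, 57, 61], [83, 86, 88]]
branch = [(53, leaves[0]), (61, leaves[1]), (88, leaves[2])]
root = [(88, branch)]

leaf = find_leaf(root, 57)
print([k for k in leaf if k == 57])  # [57, 57]
```

The leaf is then inspected entry by entry, just as the text describes; note that the traversal alone cannot tell how many matching entries follow in the next leaf node.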

The tree traversal is a very efficient operation. The tree traversal
works almost instantly—even upon a huge data volume. The
excellent performance comes from the logarithmic growth of the
tree depth. That means that the depth grows very slowly in
comparison to the number of leaf nodes. The sidebar Logarithmic
Scalability describes this in more detail. Typical real world indexes
with millions of records have a tree depth of four or five. A tree
depth of six is hardly ever seen.
Logarithmic Scalability

In mathematics, the logarithm of a number to a given base is
the power or exponent to which the base must be raised in
order to produce the number (Wikipedia:
http://en.wikipedia.org/wiki/Logarithm).
In databases, I prefer to explain by example. A tree with three
levels and nodes that can hold up to four entries can store up to
64 keys (4³)—just like the example in Figure 1.2, “Tree
Structure”. If this index grows one level, it can already hold 256
entries (4⁴). Each time a level is added, the maximum number
of index entries quadruples. The noteworthy aspect is that an
addition to the tree depth translates to a multiplication of the
maximum number of index entries. Table 1.2, “Logarithmic
Scalability” demonstrates this relation.
Table 1.2. Logarithmic Scalability

Tree Depth    Maximum Number of Entries
3             64
4             256
5             1,024
6             4,096
7             16,384
8             65,536
9             262,144
10            1,048,576
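The relationship shown in the table can be reproduced directly: with up to four entries per node, a tree of depth d holds at most 4**d entries (a simplified model that ignores branch-node details).

```python
ENTRIES_PER_NODE = 4  # toy value; real database nodes hold hundreds of entries

# Each extra level multiplies the maximum entry count by the node size.
for depth in range(3, 11):
    print(depth, ENTRIES_PER_NODE ** depth)
```

Running this reproduces the table: depth 3 gives 64 and depth 10 gives 1,048,576, which is why a million-row index needs only a handful of traversal steps.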

The logarithmic growth enables the example index to search a
million records with only nine tree traversal steps, but a real
world index is even more efficient. The main factor that affects
the tree depth, and therefore the lookup performance, is the
number of entries in each tree node. The number of entries in
each node corresponds to—mathematically speaking—the base
of the logarithm. The higher the base, the shallower and faster
the tree.
The Oracle database exposes this concept to a maximum
extent and puts as many entries as possible into each node,
typically hundreds. That means that every new index level
supports a hundred times more entries in the index.
Slow Indexes
Despite the efficiency of the tree traversal, there are still cases
where an index lookup doesn't work as fast as expected. This
contradiction has fueled the myth of the “degenerated index” for a
long time. The miracle solution is usually to rebuild the index.
Appendix B, Myth Directory covers this myth in detail. For now,
you can take it for granted that rebuilding an index does not
improve performance in the long run. The real reason for trivial
statements becoming slow—even when they use an index—can be
explained on the basis of the previous sections.
The first ingredient for a slow index lookup is scanning a wider
range than intended. As in Figure 1.3, “Tree Traversal”, the
search for an index entry can return many records. In that
particular example, the value 57 occurs twice in the index leaf
node. There could be even more matching entries in the next leaf
node, which is not shown in the figure. The database must read
the next leaf node to see if there are any more matching entries.
The number of scanned leaf nodes can grow large, and the
number of matching index entries can be huge.
The second ingredient for a slow index lookup is the table access.
As in Figure 1.1, “Index Leaf Nodes and Corresponding Table
Data”, the rows are typically distributed across many table blocks.
Each ROWID in a leaf node might refer to a different table block—in
the worst case. On the other hand, many leaf node entries could,
potentially, refer to the same table block so that a single read
operation retrieves many rows in one shot. That means that the
number of required blocks depends on the tree depth; on the
number of rows that match the search criteria; but also on the
row distribution in the table. The Oracle database is aware of this
effect of clustered row data and accounts for it with the so-called
clustering factor.
Clustering Factor
The clustering factor is a benchmark that expresses the
correlation between the row order in the index and the row
order in the table.
For example, an ORDERS table that grows every day might have
an index on the order date and another one on the customer id.
Because orders don't get deleted, there are no holes in the table,
so each new order is added to the end. The table grows
chronologically. An index on the order date has a very low
clustering factor because the index order is essentially the same
as the table order. The index on customer id has a higher
clustering factor because the index order is different from the
table order; the table row is inserted at the end of the
table, but the corresponding index entry goes somewhere into the
middle of the index—according to the customer id.
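On Oracle, the clustering factor computed for each index can be inspected in the data dictionary; a query along these lines (using the ORDERS table from the example) shows it:

```sql
-- CLUSTERING_FACTOR is a regular column of the USER_INDEXES view.
-- A value close to the number of table blocks indicates well-clustered
-- rows; a value close to the number of rows indicates scattered rows.
SELECT index_name, clustering_factor
  FROM user_indexes
 WHERE table_name = 'ORDERS';
```

The statistics must have been gathered for the value to be current.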
The overall number of blocks accessed during an index lookup can
explode when the two ingredients play together. For example, an
index lookup for some hundred records might need to access four
blocks for the tree traversal (tree depth), a few more blocks for
subsequent index leaf nodes, but some hundred blocks to fetch
the table data. It's the table access that becomes the limiting
factor.
The main cause for the “slow indexes” phenomenon is the
misunderstanding of the three most dominant index operations:
INDEX UNIQUE SCAN
The INDEX UNIQUE SCAN performs the tree traversal only. The
database can use this operation if a unique constraint
ensures that the search criteria will match no more than one
entry.
INDEX RANGE SCAN
The INDEX RANGE SCAN performs the tree traversal and walks
through the leaf nodes to find all matching entries. This is
the fallback operation if multiple entries could possibly
match the search criteria.
TABLE ACCESS BY INDEX ROWID
The TABLE ACCESS BY INDEX ROWID operation retrieves the row
from the table. This operation is (often) performed for every
matched record from a preceding index scan operation.
The important point to note is that an INDEX RANGE SCAN can,
potentially, read a large fraction of an index. If a TABLE ACCESS BY
INDEX ROWID follows such an inefficient index scan, the index
operation might appear slow.
Chapter 2. Where Clause
In the previous chapter, we have explored the index structure
and discussed the most dominant cause of poor index
performance. The next step is to put it into context of the SQL
language, beginning with the where clause.
The where clause is the most important component of an SQL
statement because it's used to express a search condition. There is
hardly any useful SQL statement without a where clause.
Commonplace as it is, there are many pitfalls that can
prevent an efficient index lookup if the where clause is not
properly expressed.
This chapter explains how a where clause benefits from an index,
how to define multi-column indexes for maximum usability, and
how to support range searches. The last section of the chapter is
devoted to common anti-patterns that prevent efficient index
usage.
The Equals Operator
The most trivial type of where clause is also the most frequent one:
the primary key lookup. That's a result of highly normalized
schema design as well as the broadening use of Object-relational
mapping (ORM) frameworks.
This section discusses single-column surrogate primary keys,
concatenated keys, and general-purpose multi-column indexes.
You will see how the order of columns affects the usability of an
index and learn how to use a concatenated primary key index for
more than primary key lookups.
Surrogate Keys
Primary keys are often generated from a sequence and stored in
an ID column for the sole purpose of serving as a surrogate key.
Surrogate keys have become the predominant form of primary
keys for many good reasons. They are easy to handle, flexible, and
tolerant to inappropriately chosen natural unique constraints.
Corporate guidelines and Object-relational mapping (ORM)
frameworks often encourage, and sometimes even enforce, the
use of surrogate keys.
As first example, consider an EMPLOYEES table with a unique index
on the EMPLOYEE_ID column:
CREATE TABLE employees (
   employee_id   NUMBER         NOT NULL,
   first_name    VARCHAR2(1000) NOT NULL,
   last_name     VARCHAR2(1000) NOT NULL,
   date_of_birth DATE           NOT NULL,
   phone_number  VARCHAR2(1000) NOT NULL,
   CONSTRAINT employees_pk PRIMARY KEY (employee_id)
);
The Oracle database creates a unique index according to the
definition of the primary key automatically. There is no need for a
separate create index statement in that case.
Tip
http://Use-The-Index-Luke.com/sql/example-schema
contains all the required scripts to create
the demo tables and populate them with sample
data. For these examples, the EMPLOYEES table is filled with
1000 records—one of them is mine. You don't need
to know more than that to follow the examples, but if
you would like to try them out for yourself, you can use the
scripts from the appendix.
For example, the following statement queries an employee's
details by surrogate primary key:
SELECT first_name, last_name
FROM employees
WHERE employee_id = 123;
According to the previous chapter, this statement should use the
most effective index lookup—the INDEX UNIQUE SCAN—because a
unique constraint ensures that no more than one record can
match the where clause.
-----------------------------------------------------------------
| Id | Operation | Name | Rows | Cost |
-----------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 2 |
| 1 | TABLE ACCESS BY INDEX ROWID| EMPLOYEES | 1 | 2 |
|* 2 | INDEX UNIQUE SCAN | EMPLOYEES_PK | 1 | 1 |
-----------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("EMPLOYEE_ID"=123)
As expected, the execution plan shows the very efficient INDEX
UNIQUE SCAN operation. Almost independent of the data volume, the
INDEX UNIQUE SCAN finds the required index entry almost instantly.
Tip
The execution plan (sometimes explain plan or
query plan) shows the steps to execute a statement.
It can be gathered with the explain plan
command. http://Use-The-Index-Luke.com/sql/example-schema
covers that in more detail.
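For reference, a typical way to obtain such a plan on Oracle is the EXPLAIN PLAN command together with the DBMS_XPLAN package—a minimal sketch using the query from above:

```sql
-- Store the plan for the statement in the plan table...
EXPLAIN PLAN FOR
SELECT first_name, last_name
  FROM employees
 WHERE employee_id = 123;

-- ...then format and display the plan just gathered.
SELECT * FROM TABLE(dbms_xplan.display);
```

Note that EXPLAIN PLAN shows the optimizer's intended plan without executing the statement.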
The database must perform an additional step, the TABLE ACCESS BY
INDEX ROWID, to fetch the actual table data (FIRST_NAME, LAST_NAME).
Although this operation can become a performance bottleneck—as
explained in the section called “The Leaf Nodes”—there is no such
risk with an INDEX UNIQUE SCAN. An INDEX UNIQUE SCAN cannot return
more than one ROWID. The subsequent table access is therefore not
at risk of reading many blocks.
The primary key lookup, based on a single-column surrogate key,
is bulletproof with respect to performance.
Primary Keys Supported by Nonunique Indexes
Nonunique indexes can be used to support a primary key or
unique constraint. In that case the lookup requires an INDEX
RANGE SCAN instead of an INDEX UNIQUE SCAN. Because the
constraint still maintains the uniqueness of every key, the
performance impact is often negligible. In case the searched
key is the last in its leaf node, the next leaf node will be read to
see if there are more matching entries. The example in the
section called “The Tree ” explains this phenomenon.
One reason to intentionally use a nonunique index to enforce a
primary key or unique constraint is to make the constraint
deferrable. While regular constraints are validated during
statement execution, the validation of a deferrable constraint is
postponed until the transaction is committed. Deferred
constraints are required to propagate data into tables with
circular foreign key dependencies.
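A deferrable primary key backed by a nonunique index could be set up roughly as follows; the names and the separate-index approach are illustrative, not prescribed by the text:

```sql
-- A nonunique index created explicitly...
CREATE INDEX employees_pk_idx ON employees (employee_id);

-- ...can back a deferrable primary key constraint. A unique index
-- could not, because a unique index enforces uniqueness immediately,
-- which contradicts deferred validation.
ALTER TABLE employees
  ADD CONSTRAINT employees_pk PRIMARY KEY (employee_id)
  DEFERRABLE INITIALLY IMMEDIATE
  USING INDEX employees_pk_idx;
```

With INITIALLY IMMEDIATE the constraint still validates per statement by default; a transaction can switch it with SET CONSTRAINTS ... DEFERRED when needed.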
Concatenated Keys
Although surrogate keys are widely accepted and implemented,
there are cases when a key consists of more than one column. The
indexes used to support the search on multiple columns are called
concatenated or composite indexes.
The order of the individual columns within a concatenated index is
not only a frequent cause of confusion but also the foundation for
an extraordinarily resistant myth: the “most selective first” myth
—Appendix A, Myth Directory has more details. The truth is that
the column order affects the number of statements that can use
the index.
For the sake of demonstration, let's assume the 1000 employees
company from the previous section was bought by a Very Big
Company. Unfortunately the surrogate key values used in our
EMPLOYEES table collide with those used by the Very Big Company.
The EMPLOYEE_ID values could be reassigned—theoretically—because
it's a surrogate key, not a natural one. However, surrogate keys
are often used in interfaces to other systems—like an access control
system—so changing them is not that easy. Adding a new column to
maintain uniqueness is often the path of least resistance.
After all, reality bites, and the SUBSIDIARY_ID column is added to
the table. The primary key is extended accordingly, the
corresponding unique index is replaced by a new one on
EMPLOYEE_ID and SUBSIDIARY_ID:
CREATE UNIQUE INDEX employees_pk
ON employees (employee_id, subsidiary_id);
The new employee table contains all employees from both
companies and has ten times as many records as before. A query
to fetch the name of a particular employee has to state both
columns in the where clause:
SELECT first_name, last_name
FROM employees
WHERE employee_id = 123
AND subsidiary_id = 30;
As intended and expected, the query still uses the INDEX UNIQUE
SCAN operation:
-----------------------------------------------------------------
| Id | Operation | Name | Rows | Cost |
-----------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 2 |
| 1 | TABLE ACCESS BY INDEX ROWID| EMPLOYEES | 1 | 2 |
|* 2 | INDEX UNIQUE SCAN | EMPLOYEES_PK | 1 | 1 |
-----------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("EMPLOYEE_ID"=123 AND "SUBSIDIARY_ID"=30)
This setup becomes more interesting when the where clause
doesn't contain all the indexed columns. For example, a query that
lists all employees of a subsidiary:
SELECT first_name, last_name
FROM employees
WHERE subsidiary_id = 20;
----------------------------------------------------
| Id | Operation | Name | Rows | Cost |
----------------------------------------------------
| 0 | SELECT STATEMENT | | 110 | 477 |
|* 1 | TABLE ACCESS FULL| EMPLOYEES | 110 | 477 |
----------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("SUBSIDIARY_ID"=20)
The database performs a FULL TABLE SCAN. A FULL TABLE SCAN reads
all table blocks and evaluates every row against the where clause.
No index is used. The performance degrades linearly with the data
volume; that is, double the amount of data means twice as long to
wait for the result. The FULL TABLE SCAN is amongst the most expensive
database operations. It almost always causes performance
problems in online systems.
Full Table Scan
There are several cases when the database considers a FULL
TABLE SCAN the most effective way to retrieve the requested
data.
If the number of selected rows is a considerable fraction of the
overall table size, the FULL TABLE SCAN can be more effective than
an index lookup. Although this sounds odd at first, the
FULL TABLE SCAN has an advantage over any index-based access:
there is no need for the additional TABLE ACCESS BY INDEX ROWID
step. The performance impact caused by the additional table
access can be considerable—as explained in the section called
“The Leaf Nodes”. Another aspect is that the Oracle database
can perform the read operations for a FULL TABLE SCAN in a more
efficient way than for an index lookup. The blocks needed for an
index lookup are not known in advance, so the database must
read and process the index nodes in a block-by-block manner.
Because a FULL TABLE SCAN must read the entire table anyway, the
database can use the more efficient multi-block read.
All of that should not hide the fact that a FULL TABLE SCAN is often
caused by a missing or inadequate index.
The database doesn't use the index because it is not suitable for
this query. A closer look into the index leaf nodes makes it more
apparent.
To repeat the most important lesson from the previous chapter:
the index leaf nodes are a sorted representation of the index
columns. In case multiple columns are indexed, the first column is
the most significant sort criterion, followed by the second, the
third, and so on.
As a consequence, the tree structure can be used only if the
where clause includes the leading columns of the index. The
values of the subsequent index columns are not grouped together
within the leaf node structure and cannot be found with a tree
traversal.
Figure 2.1. Concatenated Index
Figure 2.1, “Concatenated Index” shows an index fragment with
three leaf nodes and the corresponding branch node. The index
consists of the EMPLOYEE_ID and SUBSIDIARY_ID columns (in that
order), as in the example above.
The search for SUBSIDIARY_ID = 20 is not supported by the index
because the matching entries are distributed over a wide range of
the index. Although two index entries match the filter, the branch
node doesn't contain the search value at all. The tree cannot be
used to find those entries.
Tip
Visualizing an index like Figure 2.1, “Concatenated
Index” helps to understand which queries can be
supported by an index and which can't. Although
such a figure is very nice, a much simpler picture is
sufficient to get the point. It is usually enough to see
the index order and know that the tree can quickly
localize one particular place within this sequence.
The following SQL template returns the indexed
columns in index order; that is the logical order of
the index entries in the leaf nodes:
SELECT * FROM (
SELECT <INDEX COLUMN LIST>
FROM <TABLE>
ORDER BY <INDEX COLUMN LIST>
)
WHERE ROWNUM < 100;
If you insert the index definition and the
corresponding table name into that statement, you
will get a small excerpt from the index. Ask yourself
where you would start to search for the required
data. If there isn't any particular place where the
searched values appear together, the index tree
can't be used to find them.
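Applied to the primary key index from this example—EMPLOYEE_ID first, then SUBSIDIARY_ID—the template becomes:

```sql
SELECT * FROM (
  SELECT employee_id, subsidiary_id
    FROM employees
   ORDER BY employee_id, subsidiary_id
)
WHERE ROWNUM < 100;
```

The excerpt is ordered by EMPLOYEE_ID first, so rows for SUBSIDIARY_ID = 20 appear wherever their EMPLOYEE_ID happens to fall—there is no single place to start searching for them.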
It seems the primary key index doesn't support the query that
lists all employees of a subsidiary. The easiest way to tune the
query is to create a new index on SUBSIDIARY_ID. This index boosts
the query's performance immediately:
---------------------------------------------------------------
| Id | Operation | Name | Rows | Cost |
---------------------------------------------------------------
| 0 | SELECT STATEMENT | | 110 | 77 |
| 1 | TABLE ACCESS BY INDEX ROWID| EMPLOYEES | 110 | 77 |
|* 2 | INDEX RANGE SCAN | EMP_SUP_ID | 110 | 1 |
---------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("SUBSIDIARY_ID"=20)
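The plan refers to the new index as EMP_SUP_ID; a definition matching that name—which is only known from the plan, so the name here is inferred—might be:

```sql
-- Single-column, nonunique index supporting the subsidiary filter.
CREATE INDEX emp_sup_id ON employees (subsidiary_id);
```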
The execution plan shows an INDEX RANGE SCAN on the new index.
Although the solution seems to be perfectly reasonable, there is an
alternative that should be preferred.
Considering that a search for an EMPLOYEE_ID in any subsidiary is
very unlikely, the existing unique index can be restructured to
support the primary key lookup as well as the lookup with the
SUBSIDIARY_ID only. The trick is to change the column order in the
index so that the new index definition is as follows:
CREATE UNIQUE INDEX EMPLOYEES_PK
ON EMPLOYEES (SUBSIDIARY_ID, EMPLOYEE_ID);
The index is still unique, so the primary key lookup will perform
an INDEX UNIQUE SCAN as before. The reversed column order
changed which statements can be supported by the index. The
original definition served queries for EMPLOYEE_ID only while the
new definition supports queries on SUBSIDIARY_ID only.
Important
When defining an index, the number of statements it
can support is the most important factor to consider.
Although the two-index solution will also yield very good select
performance, the one-index variant will give much better insert,
delete and update performance. The space saved might
even increase the cache-hit rate so that the overall scalability
improves. Volume 2 covers those effects in more detail.
To choose the right index, you must not only know how an index
works—as explained in this book—you must also know the
business domain. The knowledge about dependencies between
various attributes is essential to define an index correctly.
An external performance consultant can have a very hard time
figuring out which columns can go alone into the where clause and
which are always paired with other attributes. As long as you are
not familiar with the business domain, this kind of exercise is
actually reverse engineering. Although I admit that reverse
engineering can be fun if practiced every now and then, I know
that it becomes a very depressing task if practiced on an everyday
basis.
Even though internal database administrators often know their
company's industry better than external consultants do,
the detailed knowledge needed to define the indexes optimally is
hardly accessible to them. The only place where the technical
database knowledge meets the functional knowledge of the
business domain is the development department.
Slow Indexes, Part II
The previous section demonstrated how a changed column
order can gain additional benefits from an existing index.
However, this considered only two statements. An index
change can influence all statements that access the corresponding
table. You probably know from your own experience: never
change a running system, at least not without comprehensive
testing beforehand.
Although the changed EMPLOYEES_PK index improves performance of
all queries that filter on the subsidiary without any other criterion,
the index might be more tempting than it should be. Even if an index
can support a query, it doesn't mean that it will give the best
possible performance. It's the optimizer's job to decide which
index to use—or not to use an index at all. This section drafts a
case that tempts the optimizer to use an inappropriate index.
The Query Optimizer
The query optimizer is the database component that
transforms an SQL statement into an execution plan. This
process is often called parsing.
The so-called Cost Based Optimizer (CBO) generates various
execution plan permutations and assigns a cost value to each
one. The cost value serves as benchmark to compare the
various execution plans; the plan with the lowest cost value is
considered best.
Calculating the cost value is a complex matter that easily fills a
book of its own. From a user's perspective it is sufficient to know
that the optimizer believes a lower cost value results in a better
statement execution.
The so-called Rule Based Optimizer (RBO) was the CBO's
predecessor and is of no practical use nowadays.
The new problem—after the index change—is that the telephone
directory application has become very slow. Although the
switchboard operators enter as much search criteria as possible,
a search for a telephone number takes up to a minute. It
turns out that the following SQL is very slow:
SELECT first_name, last_name, subsidiary_id, phone_number
FROM employees
WHERE last_name = 'WINAND'
AND subsidiary_id = 30;
The execution plan is:
Example 2.1. Execution plan with revised primary key
index
-----------------------------------------------------------------
| Id | Operation | Name | Rows | Cost |
-----------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 30 |
|* 1 | TABLE ACCESS BY INDEX ROWID| EMPLOYEES | 1 | 30 |
|* 2 | INDEX RANGE SCAN | EMPLOYEES_PK | 40 | 2 |
-----------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("LAST_NAME"='WINAND')
2 - access("SUBSIDIARY_ID"=30)
At first sight, the execution plan looks fine. An index is used
and the cost value is rather low. Please note that the query uses
the redefined primary key index. Bearing in mind that the original
index definition—with EMPLOYEE_ID in the first position—didn't
support the statement, chances are good that the index change
causes the bad performance.
The original execution plan can be checked with the use of an
optimizer hint. Hints provide additional information to the
optimizer in form of particularly formatted SQL comments. The
following statement uses a hint that instructs the optimizer not to
use the new index for this query:
SELECT /*+ NO_INDEX(EMPLOYEES EMPLOYEES_PK) */
first_name, last_name, subsidiary_id, phone_number
FROM employees
WHERE last_name = 'WINAND'
AND subsidiary_id = 30;
The original execution plan uses a FULL TABLE SCAN and has a higher
cost value than the INDEX RANGE SCAN:
----------------------------------------------------
| Id | Operation | Name | Rows | Cost |
----------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 477 |
|* 1 | TABLE ACCESS FULL| EMPLOYEES | 1 | 477 |
----------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("LAST_NAME"='WINAND' AND "SUBSIDIARY_ID"=30)
Even though the FULL TABLE SCAN must read all table blocks and
process all table rows, it is—in this particular case—faster than the
INDEX RANGE SCAN. The optimizer is well aware that my name isn't
very common and estimates a total row count of one. An index
lookup for one particular record should outperform the FULL TABLE
SCAN—but it doesn't; the index is slow.
A step-by-step investigation of the execution plan is the best way
to find the problem. The first step is the INDEX RANGE SCAN, which
finds all entries that match the filter SUBSIDIARY_ID = 30. Because
the second filter criterion—on LAST_NAME—is not included in the
index, it can't be considered during the index lookup.
The “Predicate Information” section of the execution plan in
Example 2.1, “Execution plan with revised primary key index”
reveals which filter criteria (predicates) are applied at each
processing step. The INDEX RANGE SCAN operation has the operation
Id 2; the corresponding predicate information is
“access("SUBSIDIARY_ID"=30)”. That means, the index tree structure
is traversed to find the first entry for SUBSIDIARY_ID 30.
Afterwards, the leaf node chain is followed to find all matching
entries. The result of the INDEX RANGE SCAN is a list of matching
ROWIDs that satisfy the filter on SUBSIDIARY_ID. Depending on the
size of the subsidiary, the number of rows matching that criterion
can vary from a few dozen to some thousands.
The next step in the execution plan is the TABLE ACCESS BY INDEX
ROWID that fetches the identified rows from the table. Once the
complete row—with all columns—is available, the outstanding part
of the where clause can be evaluated. All the rows returned from
the INDEX RANGE SCAN are read from the table and filtered by the
predicate related to the TABLE ACCESS BY INDEX ROWID operation:
“filter("LAST_NAME"='WINAND')”. The remaining rows are those that
fulfill the entire where clause.
The performance of this select statement depends heavily on
the number of employees in the particular subsidiary. For a small
subsidiary—e.g., only a few dozen members—the INDEX RANGE SCAN
will result in good performance. On the other hand, a search in a
huge subsidiary can become less efficient than a FULL TABLE SCAN
because it cannot utilize multi-block reads (see Full Table Scan)
and might suffer from a bad clustering factor (see Clustering
Factor).
The phone directory lookup is slow because the INDEX RANGE SCAN
returns a thousand records—all employees from the original
company—and the TABLE ACCESS BY INDEX ROWID must fetch all of
them. Remember the two ingredients for a “Slow Index”
experience: a wider index scan than intended and the subsequent
table access.
Besides the individual steps performed during the query, the
execution plan provides information about the optimizer's
estimates. This information can help to understand why the
optimizer has chosen a particular execution plan. The “Rows”
column is of particular interest for that purpose. It reveals the
optimizer's estimation that the INDEX RANGE SCAN will return 40
rows—Example 2.1, “Execution plan with revised primary key
index”. Under this presumption, the decision to perform an INDEX
RANGE SCAN is perfectly reasonable. Unfortunately, it's off by a
factor of 25.
The optimizer uses the so-called optimizer statistics for its
estimates. They are usually collected and updated on a regular
basis by the administrator or an automated job, and comprise
various information about the tables and indexes in the database.
The most important statistics for an INDEX RANGE SCAN are the size
of the index (number of rows in the index) and the selectivity of
the respective predicate (the fraction that satisfies the filter).
Statistics and Dynamic Sampling
The optimizer can use a variety of statistics on table, index, and
column level. Most statistics are collected per table column: the
number of distinct values, the smallest and biggest value (data
range), the number of NULL occurrences and the column
histogram (data distribution). As of Oracle 11g it is also possible
to collect extended statistics for column concatenations and
expressions.
There are only very few statistics for the table as such: the size
(in rows and blocks) and the average row length. However, the
column statistics belong to the table; that is, they are computed
when the table statistics are gathered.
The most important index statistics are the tree depth, the
number of leaf nodes, the number of distinct keys and the
clustering factor (see Clustering Factor).
The optimizer uses these values to estimate the selectivity of
the predicates in the where clause.
If there are no statistics available, the optimizer can perform
dynamic sampling. That means that it reads a small fraction of
the table during query planning to get a basis for the estimates.
Dynamic sampling has been enabled by default since Oracle release
9.2—although in a restricted manner. Release 10g changed the
default to perform dynamic sampling more aggressively.
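On Oracle, statistics are typically gathered with the DBMS_STATS package; for the example table, a call along these lines refreshes them:

```sql
-- Gather table and column statistics for EMPLOYEES in the
-- current schema; CASCADE => TRUE includes the index statistics.
BEGIN
  dbms_stats.gather_table_stats(
    ownname => USER,
    tabname => 'EMPLOYEES',
    cascade => TRUE
  );
END;
/
```

In practice the automated maintenance job usually takes care of this; manual gathering matters after bulk loads or, as in this chapter's demonstration, after statistics were deleted.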
If there are no statistics available—I deleted them on purpose
to demonstrate this effect—the optimizer falls back to default
values. The default statistics suggest a small index with medium
selectivity and lead to the estimation that the INDEX RANGE SCAN
will return 40 rows.
Correct statistics lead to more realistic estimates in the execution
plan. The estimated row count for the INDEX RANGE SCAN changed
to 1000. The second filter—on LAST_NAME—is expected to reduce
the result set down to a single row. The new estimates are very
close to the actual values:
-----------------------------------------------------------------
| Id | Operation | Name | Rows | Cost |
-----------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 680 |
|* 1 | TABLE ACCESS BY INDEX ROWID| EMPLOYEES | 1 | 680 |
|* 2 | INDEX RANGE SCAN | EMPLOYEES_PK | 1000 | 4 |
-----------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("LAST_NAME"='WINAND')
2 - access("SUBSIDIARY_ID"=30)
Fetching 1000 records individually with the TABLE ACCESS BY INDEX
ROWID is rather expensive. The cost value of the new execution
plan has grown to almost 700. A closer look at the plan reveals
that the INDEX RANGE SCAN is, with a cost value of 4, rather “cheap”.
The expensive operation is the TABLE ACCESS BY INDEX ROWID; the
cost grows to 680 at this step. The optimizer will automatically
prefer the FULL TABLE SCAN because its cost of 477 indicates better
performance.
The discussion about bad index performance and a fast FULL TABLE
SCAN should not hide the fact that a properly defined index is the
best solution. To support a search by last name, an appropriate
index should be added:
CREATE INDEX emp_name ON employees (last_name);
The optimizer calculates a cost value of 3 for the new plan:
Example 2.2. Execution Plan with Dedicated Index
--------------------------------------------------------------
| Id | Operation | Name | Rows | Cost |
--------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 3 |
|* 1 | TABLE ACCESS BY INDEX ROWID| EMPLOYEES | 1 | 3 |
|* 2 | INDEX RANGE SCAN | EMP_NAME | 1 | 1 |
--------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("SUBSIDIARY_ID"=30)
2 - access("LAST_NAME"='WINAND')
Because of the statistics, the optimizer knows that LAST_NAME is
more selective than SUBSIDIARY_ID. It estimates that only one
row will fulfill the predicate of the index lookup—on LAST_NAME—so
that only one row has to be retrieved from the table.
Please note that the difference between the execution plans shown in
Example 2.1, “Execution plan with revised primary key
index” and Example 2.2, “Execution Plan with Dedicated Index” is
minimal. The performed operations are the same and the cost is
low in both cases. Nevertheless the second plan performs much
better than the first. The efficiency of an INDEX RANGE SCAN—
especially when accompanied by a TABLE ACCESS BY INDEX ROWID—
can vary in a wide range. Just because an index is used doesn't
mean the performance is good.
Functions
The index in the previous section has improved the performance
considerably, but you probably noticed that it works only if the
names are stored in all caps. That's obviously not the way we
would like to store our data.
This section describes the solution to this kind of problem as well
as the limitations.
DB2
Function-based indexes are available for DB2 on zOS but not on
other platforms.
The backup solution is to create a real column in the table
that holds the result of the expression. The column must be
maintained by a trigger or by the application layer—
whatever is more appropriate. The new column can be
indexed like any other, SQL statements must query the new
column (without the expression).
MySQL
As of version 5, MySQL supports neither function-based
indexes nor virtual columns. MySQL is case-insensitive by
default, but that can be controlled on column level. Virtual
columns are in the queue for version 6.
The backup solution is to create a real column in the table
that holds the result of the expression. The column must be
maintained by a trigger or by the application layer—
whatever is more appropriate. The new column can be
indexed like any other, SQL statements must query the new
column (without the expression).
Oracle
The Oracle database has supported function-based indexes since
release 8i. Virtual columns were added with 11g.
PostgreSQL
PostgreSQL supports Indexes on Expressions.
SQL Server
SQL Server supports Computed Columns that can be
indexed.
Case-Insensitive Search
The SQL for a case-insensitive search is very simple—just upper
case both sides of the search expression:
SELECT first_name, last_name, subsidiary_id, phone_number
FROM employees
WHERE UPPER(last_name) = UPPER('winand');
The query works by converting both sides of the comparison to
the same notation. No matter how the LAST_NAME is stored, or the
search term is entered, the upper case on both sides will make
them match. From a functional perspective, this is a reasonable
SQL statement. However, let's have a look at the execution plan:
----------------------------------------------------
| Id | Operation | Name | Rows | Cost |
----------------------------------------------------
| 0 | SELECT STATEMENT | | 10 | 477 |
|* 1 | TABLE ACCESS FULL| EMPLOYEES | 10 | 477 |
----------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter(UPPER("LAST_NAME")='WINAND')
It's a comeback of our old friend the full table scan. The index on
LAST_NAME is unusable because the search is not on last name—it's
on UPPER(LAST_NAME). From the database's perspective, that's
something entirely different.
It's a trap we all fall into. We instantly recognize the relation
between LAST_NAME and UPPER(LAST_NAME) and expect the database to
“see” it as well. In fact, the optimizer's picture is more like this:
SELECT first_name, last_name, subsidiary_id, phone_number
FROM employees
WHERE BLACKBOX(...) = 'WINAND';
The UPPER function is just a black box—hence the index on
LAST_NAME cannot be used.
Tip
Thinking of a black box instead of the real function
helps to understand the optimizer's point of view.
Evaluating Literal Expressions
The optimizer is able to evaluate the expression on the right
hand side of the comparison because it doesn't refer to table
data or bind parameters.
That's very similar to a compiler that is able to evaluate
constant expressions at compile time. Analogously, the optimizer
can evaluate literal expressions at parse time.
The predicate information section of the execution plan shows
the evaluated expression.
To support that query, an index on the actual search expression is
required; that is, a so-called function based index. Although the
name function based index suggests a special feature, it is just an
ordinary B-Tree index that is applied upon an expression instead
of a column. The following statement creates an index that
supports the query:
CREATE INDEX emp_up_name
ON employees (UPPER(last_name));
The create statement for a function based index is very similar
to a regular index—there is no special keyword. The difference is
that an expression is used instead of a column name.
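Readers without an Oracle instance at hand can reproduce the effect with SQLite, which also supports indexes on expressions. The following sketch uses Python's built-in sqlite3 module; SQLite is not covered by this chapter's database notes, so treat it as an illustration of the concept only:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (last_name TEXT)")
conn.executemany("INSERT INTO employees VALUES (?)",
                 [("winand",), ("smith",), ("jones",)])

# An index over the expression, not over the bare column:
conn.execute("CREATE INDEX emp_up_name ON employees (UPPER(last_name))")

# As with Oracle, the query must use the exact index expression:
rows = conn.execute(
    "SELECT last_name FROM employees WHERE UPPER(last_name) = 'WINAND'"
).fetchall()

plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT last_name FROM employees WHERE UPPER(last_name) = 'WINAND'"
).fetchall()
print(rows)  # [('winand',)]
print(plan)  # the plan line references emp_up_name
```

The plan output names emp_up_name, confirming that the expression index is used rather than a full table scan.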
The index stores the all-capitalized notation of the LAST_NAME
column. It can be shown as described in the tip on index
visualization:
SELECT * FROM (
SELECT UPPER(last_name)
FROM employees
ORDER BY UPPER(last_name)
)
WHERE ROWNUM < 10;
The statement will return the first 10 index entries in index
order:
UPPER(LAST_NAME)
--------------------
AAACH
AAAXPPKU
AABCZLTSCNM
AAGJX
AAIIARN
AAQVASLR
AASQD
AAUMEJHQUEI
ABAIHSJFYG
ABAS
The Oracle database can use a function based index if the exact
expression of the index definition appears in an SQL statement—
like in the example above—so that the new execution plan uses
the index:
----------------------------------------------------------------
| Id | Operation | Name | Rows | Cost |
----------------------------------------------------------------
| 0 | SELECT STATEMENT | | 100 | 41 |
| 1 | TABLE ACCESS BY INDEX ROWID| EMPLOYEES | 100 | 41 |
|* 2 | INDEX RANGE SCAN | EMP_UP_NAME | 40 | 1 |
----------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access(UPPER("LAST_NAME")='WINAND')
It is a normal INDEX RANGE SCAN, exactly as described in Chapter 1,
Anatomy of an Index; the tree is traversed and the leaf nodes are
followed to see if there is more than one match. There are no
“special” operations for function based indexes.
Warning
UPPER and LOWER are sometimes used without the
developer's knowledge. E.g., some ORM frameworks
support case-insensitive search natively but implement it
by using UPPER or LOWER in the SQL. Hibernate, for
example, injects an implicit LOWER for a case-insensitive
search.
The execution plan has one more issue: the row estimates are way
too high. The number of rows returned by the table access is even
higher than the number of rows expected from the INDEX RANGE
SCAN. How can the table access match 100 records if the preceding
index scan returned only 40 rows? Well, it can't. This kind of
“impossible” estimate indicates a problem with the statistics (see
also Statistics and Dynamic Sampling).
This particular problem has a very common cause; the table
statistics were not updated after creating the index. Although the
index statistics are automatically collected on index creation (since
10g), the table stats are left alone. The box Collecting Statistics
has more information why the table statistics are relevant and
what to take care of when updating statistics.
Collecting Statistics
The column statistics, which include the number of distinct
column values, are part of the table statistics. That means that
even if multiple indexes contain the same column, the
corresponding statistics are kept only once—as part of the
table statistics.
Statistics for a function based index (FBI) are implemented as
virtual columns on table level. The DBMS_STATS package can
collect the statistics on the virtual column after the FBI was
created—when the virtual column exists. The Oracle
documentation says:
After creating a function-based index, collect statistics on
both the index and its base table using the DBMS_STATS
package. Such statistics will enable Oracle Database to
correctly decide when to use the index.
Collecting and updating statistics is a task that should be
coordinated with the DBAs. The optimizer depends heavily on
the statistics—there is a high risk of running into trouble. My
general advice is to always backup statistics before updating
them, always collect the table statistics and all related index
statistics in one shot, and talk to the DBAs.
After updating the table statistics, the execution plan has more
correct estimates:
----------------------------------------------------------------
| Id | Operation | Name | Rows | Cost |
----------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 3 |
| 1 | TABLE ACCESS BY INDEX ROWID| EMPLOYEES | 1 | 3 |
|* 2 | INDEX RANGE SCAN | EMP_UP_NAME | 1 | 1 |
----------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access(UPPER("LAST_NAME")='WINAND')
Note
Statistics for function based indexes and multicolumn
statistics were introduced with Oracle
release 11g. Previous releases might behave
differently.
Although the execution performance is not improved by the
updated statistics—because the index was correctly used anyway
—it is always good to have a look at the optimizer's estimates. The
number of rows processed for each step (cardinality) is a very
important figure for the optimizer—getting them right for simple
queries can easily pay off for complex queries.
User Defined Functions
A case-insensitive search is probably the most common use for a
function based index—but other functions can be "indexed" as
well. In fact, almost every function—even user defined functions—
can be used with an index.
It is, however, not possible to use the SYSDATE function as part of an
index definition. For example, the following function can't be
indexed:
CREATE FUNCTION get_age(date_of_birth DATE)
RETURN NUMBER
AS
BEGIN
RETURN
TRUNC(MONTHS_BETWEEN(SYSDATE, date_of_birth)/12);
END;
/
The function converts the date of birth into an age—according to
the current system time. It can be used in the select list to query
an employee's age, or as part of the where clause:
SELECT first_name, last_name, get_age(date_of_birth)
FROM employees
WHERE get_age(date_of_birth) = 42;
Although it's a very convenient way to search for all employees who
are 42 years old, a function based index cannot tune this
statement because the GET_AGE function is not deterministic. That
means that the result of the function call is not exclusively determined
by its parameters. Only functions that always return the same
result for the same parameters—functions that are deterministic
—can be indexed.
The reason behind this limitation is easily explained. Just
remember that the return value of the function will be physically
stored in the index when the record is inserted. There is no
background job that would update the age on the employee's
birthday—that's just not happening. The only way to update an
individual index entry is to update an indexed column of the
respective record.
Besides being deterministic, a function must be declared
DETERMINISTIC in order to be usable with a function based index.
Caution
The Oracle database trusts the DETERMINISTIC
keyword—that means, it trusts the developer. The
GET_AGE function can be declared DETERMINISTIC so that
the database allows an index on
GET_AGE(date_of_birth).
Regardless of that, it will not work as intended
because the index entries will not increase as the
years pass; the employees will not get older—at
least not in the index.
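Other databases enforce the same rule. SQLite, for instance, refuses to build an index over a user defined function unless the function was registered as deterministic. A sketch using Python's sqlite3 module (the deterministic flag requires Python 3.8 or later; an illustration of the rule, not Oracle behavior):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (date_of_birth TEXT)")

# Registered WITHOUT the deterministic flag: indexing is rejected.
conn.create_function("dob_year", 1, lambda d: int(d[:4]))
try:
    conn.execute(
        "CREATE INDEX emp_year ON employees (dob_year(date_of_birth))")
    rejected = False
except sqlite3.OperationalError:
    # "non-deterministic functions prohibited in index expressions"
    rejected = True

# Registered as deterministic: the index can be created.
conn.create_function("dob_year", 1, lambda d: int(d[:4]),
                     deterministic=True)
conn.execute(
    "CREATE INDEX emp_year ON employees (dob_year(date_of_birth))")
print(rejected)  # True
```

Note that SQLite, like Oracle, trusts the flag: the database cannot verify that the Python function really is deterministic.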
Other examples for functions that cannot be indexed are the
members of the DBMS_RANDOM package and functions that implicitly
depend on the environment—such as NLS (National Language
Support) settings. In particular, the use of TO_CHAR without a
format mask often causes trouble.
Over Indexing
In case the concept of function based indexing is new to you, you
are now probably in the mood to index everything. Unfortunately,
this is the very last thing you should do. Every index has its cost,
but function based indexes are the worst because they make it very
easy to create redundant indexes.
Consider the case-insensitive search again: the UPPER function has
converted the search term and all the employee names to the
same notation, the function based index made it fast. But there
are other ways to implement a case-insensitive search:
SELECT first_name, last_name, subsidiary_id, phone_number
FROM employees
WHERE LOWER(last_name) = LOWER('winand');
That query can't use the EMP_UP_NAME index—it's a different
expression!
An index on LOWER(last_name) would be redundant—obviously. Real
world examples are much more subtle—unfortunately. The better
solution—for this particular query—is to use the same expression
for all case-insensitive searches on LAST_NAME.
Tip
Unify the access path in all statements so that less
indexes can achieve more.
Every CREATE INDEX statement puts a huge burden on the
database: the index needs space—on the disks and in the
memory; the optimizer has to consider the index for each query
on the base table; and, each and every insert, update, delete
and merge statement on the base table must update the index.
All of that, just because of one CREATE INDEX statement. Use it
wisely.
Tip
Always aim to index the original data. That is often
the most useful information you can put into an
index.
Exercise
One problem from the section called “User Defined Functions” is
still unanswered; how to use an index to search for employees
who are 42 years old?
Try to find the solution and share your thoughts on the forum. But
watch out, it's a trap. Open your mind to find the solution.
Another question to think about is when to use function based
indexes? Do you have examples?
Bind Parameter
This section covers a topic that is way too often ignored in
textbooks; the use of bind parameters in SQL statements.
Bind parameters—also called dynamic parameters or bind
variables—are an alternative way to provide data to the database.
Instead of putting the values literally into the SQL statement, a
placeholder (like ?, :name or @name) is used. The actual values for
the placeholder are provided through a separate API call.
Even though literal values are very handy for ad-hoc statements,
programs should almost always use bind variables. That's for two
reasons:
Security
Bind variables are the best way to prevent SQL injection.
Performance
The Oracle optimizer can re-use a cached execution plan if
the very same statement is executed multiple times. As
soon as the SQL statement differs—e.g., because of a
different search term—the optimizer has to redo the
execution plan (see also: CURSOR_SHARING).
The general rule is therefore to use bind variables in programs.
There is, of course, one small exception. Re-using an execution
plan means that the same execution plan will be used for different
search terms. Different search terms can, however, have an
impact on the data volume. That has, in turn, a huge impact on
performance.
For example, the number of rows selected by the following SQL
statement depends heavily on the subsidiary:
99 rows selected.
SELECT first_name, last_name
FROM employees
WHERE subsidiary_id = 20;
----------------------------------------------------------------
| Id | Operation | Name | Rows | Cost |
----------------------------------------------------------------
| 0 | SELECT STATEMENT | | 99 | 70 |
| 1 | TABLE ACCESS BY INDEX ROWID| EMPLOYEES | 99 | 70 |
|* 2 | INDEX RANGE SCAN | EMPLOYEE_PK | 99 | 2 |
----------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("SUBSIDIARY_ID"=20)
While an INDEX RANGE SCAN is best for small and medium
subsidiaries, a FULL TABLE SCAN can outperform the index for large
subsidiaries:
1000 rows selected.
SELECT first_name, last_name
FROM employees
WHERE subsidiary_id = 30;
----------------------------------------------------
| Id | Operation | Name | Rows | Cost |
----------------------------------------------------
| 0 | SELECT STATEMENT | | 1000 | 478 |
|* 1 | TABLE ACCESS FULL| EMPLOYEES | 1000 | 478 |
----------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("SUBSIDIARY_ID"=30)
The statistics—the column histogram—indicate that one query
selects ten times more rows than the other, therefore the optimizer
creates different execution plans so that both statements can
execute with the best possible performance. However, the
optimizer has to know the actual search term to be able to use the
statistics—that is, it must be provided as a literal value. The section
called “Indexing LIKE Filters” covers another case that is very
sensitive to bind parameters: LIKE filters.
Column Histograms
A column histogram is the part of the table statistics that holds
the outline of the data distribution of a column. The histogram
indicates which values appear more often than others.
The Oracle database uses two different types of histograms
that serve the same purpose: frequency histograms and
height-balanced histograms.
In this respect, the optimizer is similar to a compiler; if the literal
values are available during parsing (compiling), they can be taken
into consideration. If bind variables are used, the actual values are
not known during parsing (compiling) and the execution plan can
not be optimized for a nonuniform data distribution.
Tip
Status flags such as "todo" and "done" have a
nonuniform distribution very often. E.g., the "done"
records can typically outnumber the "todo" records
by an order of magnitude.
The use of bind parameters might prevent the best
possible execution plan for each status value. In this
case, the status values should be escaped and
validated against a white list before they are put
literally into the SQL statement.
An execution plan that is tailor-made for a particular search value
doesn't come for free. The optimizer has to re-create the
execution plan every time a new distinct value appears. That
means, executing the same SQL with 1000 different values will
cause the optimizer to compute the execution plan 1000 times. In
the compiler analogy, it's like compiling the source code every
time you run the program.
The database faces a little dilemma when deciding whether to use a
cached version of the execution plan or to parse the statement again.
On the one hand, the column histogram can greatly improve
execution performance for some statements. On the other hand,
parsing is a very expensive task that should be avoided whenever
possible. The dilemma is that the optimizer doesn't know in
advance whether the different values will result in a different
execution plan.
The application developer can help resolve this dilemma.
The rule is to use bind values except for fields where you expect a
benefit from a column histogram—e.g., because a full table scan
makes sense in one case, but an index lookup in another case.
Tip
In case of doubt, use bind parameters.
Please note that the SQL standard defines positional parameters
only—not named ones. Most databases and abstraction layers
support named bind parameters nevertheless—in a nonstandard
way. The following code snippets are examples of how to use bind
parameters.
C#
Instead of
int subsidiary_id;
SqlCommand command = new SqlCommand(
"select first_name, last_name"
+ " from employees"
+ " where subsidiary_id = " + subsidiary_id
, connection);
use the following
int subsidiary_id;
SqlCommand command = new SqlCommand(
"select first_name, last_name"
+ " from employees"
+ " where subsidiary_id = @subsidiary_id"
, connection);
command.Parameters.AddWithValue("@subsidiary_id", subsidiary_id);
Further documentation: SqlParameterCollection.
Java
Instead of
int subsidiary_id;
Statement command = connection.createStatement();
ResultSet result = command.executeQuery(
"select first_name, last_name"
+ " from employees"
+ " where subsidiary_id = " + subsidiary_id
);
use the following
int subsidiary_id;
PreparedStatement command = connection.prepareStatement(
"select first_name, last_name"
+ " from employees"
+ " where subsidiary_id = ?"
);
command.setInt(1, subsidiary_id);
Further documentation: PreparedStatement.
Perl
Instead of
my $subsidiary_id;
my $sth = $dbh->prepare(
"select first_name, last_name"
. " from employees"
. " where subsidiary_id = $subsidiary_id"
);
$sth->execute();
use the following
my $subsidiary_id;
my $sth = $dbh->prepare(
"select first_name, last_name"
. " from employees"
. " where subsidiary_id = ?"
);
$sth->execute($subsidiary_id);
Further documentation: Programming the Perl DBI.
PHP
The following example is for MySQL—probably the most
commonly used database with PHP.
Instead of
$mysqli->query("select first_name, last_name"
. " from employees"
. " where subsidiary_id = " . $subsidiary_id);
use the following
if ($stmt = $mysqli->prepare("select first_name, last_name"
. " from employees"
. " where subsidiary_id = ?"))
{
$stmt->bind_param("i", $subsidiary_id);
$stmt->execute();
} else {
/* handle SQL error */
}
Further documentation: mysqli_stmt::bind_param.
The PDO interface supports prepared statements as well.
Ruby
Instead of
dbh.execute("select first_name, last_name"
+ " from employees"
+ " where subsidiary_id = #{subsidiary_id}");
use the following:
dbh.prepare("select first_name, last_name"
+ " from employees"
+ " where subsidiary_id = ?");
dbh.execute(subsidiary_id);
Further documentation: Ruby DBI Tutorial
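Python
Python is not covered in the list above; the following sketch uses the standard sqlite3 module (most Python DB-API drivers accept the same qmark or a named placeholder style):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees"
             " (subsidiary_id INTEGER, first_name TEXT, last_name TEXT)")
conn.execute("INSERT INTO employees VALUES (30, 'Markus', 'Winand')")
subsidiary_id = 30

# Instead of concatenating the value into the SQL string
# (SQL injection risk, no execution plan re-use) ...
#   conn.execute("select first_name, last_name from employees"
#                " where subsidiary_id = " + str(subsidiary_id))

# ... use a bind parameter and pass the value separately:
rows = conn.execute(
    "select first_name, last_name"
    " from employees"
    " where subsidiary_id = ?",
    (subsidiary_id,)
).fetchall()
print(rows)  # [('Markus', 'Winand')]
```

Further documentation: the Python DB-API (PEP 249) describes the supported placeholder styles per driver.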
Even though the SQL standard (par. 6.2, page 114) defines the
question mark (?) as the placeholder sign, the implementations
vary. The Oracle database does not natively support question
marks but uses the colon syntax for named placeholders (:name).
On top of that, many database abstraction layers (e.g., Java JDBC
and Perl DBI) emulate the question mark support so that it can be
used with the Oracle database when accessed through these
layers.
Note
Bind parameters cannot change the structure of an
SQL statement.
That means that you cannot use bind parameters in the
from clause or change the where clause
dynamically. Neither of the following two bind
parameters works:
String sql = prepare("SELECT * FROM ? WHERE ?");
sql.execute('employees', 'employee_id = 1');
If you need to change the structure of an SQL
statement, use dynamic SQL.
Oracle Cursor Sharing, Bind Peeking and Adaptive
Cursor Sharing
Because bind parameters and histograms are very important,
the Oracle database has undergone various attempts to make
them work together.
By far the most common problem is applications that do not
use bind parameters at all. For that reason, Oracle has
introduced the setting CURSOR_SHARING that allows the database
to re-write the SQL to use bind parameters (typically named
:SYS_Bx). However, that feature is a workaround for applications
that do not use bind parameters and should never be used for
new applications.
With release 9i, Oracle introduced the so-called bind peeking.
Bind peeking enables the optimizer to use the actual bind
values of the first execution during parsing. The problem with
that approach is its nondeterministic behavior: whatever value
was used in the first execution (e.g., since database startup)
affects all executions. The execution plan can change every
time the database is restarted or, more problematic, the cached
plan expires. In other words; the execution plan can change at
any time.
Release 11g introduced adaptive cursor sharing to cope with
the problem. This feature enables the database to have
multiple execution plans for the same SQL statement. The
“adaptive” approach is to run everything as usual, but to take
note of the time each execution takes. In case one execution
runs much slower than the others, the optimizer will create a
tailor-made plan for the specific bind values. However, that
tailor-made plan is created the next time the statement
executes with the problematic bind values. That means that the
first execution must run slow before the second execution can
benefit.
All those features attempt to cope with a problem that can be
handled by the application. If there is a heavy imbalance in
the distribution of search keys, using literal values should be
considered.
NULL And Indexes
SQL's NULL is a constant source of confusion. Although the basic
idea of NULL—to represent missing data—is rather simple, there
are numerous side effects. Some of them need special attention in
SQL—e.g., "IS NULL" as opposed to "= NULL". Other side effects are
much more subtle—but have a huge performance impact.
This section describes the most common performance issues when
handling NULL in the Oracle database—in that respect, the Oracle
database is very special.
DB2
DB2 does not treat the empty string as NULL.
MySQL
MySQL does not treat the empty string as NULL.
mysql> SELECT 0 IS NULL, 0 IS NOT NULL, '' IS NULL, '' IS NOT NULL;
+-----------+---------------+------------+----------------+
| 0 IS NULL | 0 IS NOT NULL | '' IS NULL | '' IS NOT NULL |
+-----------+---------------+------------+----------------+
| 0 | 1 | 0 | 1 |
+-----------+---------------+------------+----------------+
Oracle
The most annoying “special” handling of the Oracle database
is that an empty string is considered NULL:
select * from dual where '' is null;
D
-
X

1 row selected.
PostgreSQL
PostgreSQL does not treat the empty string as NULL.
SQL Server
SQL Server does not treat the empty string as NULL.
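The behavior is easy to verify by running the comparison directly against the database at hand. SQLite (not in the list above; shown here via Python purely as an illustration) likewise does not treat the empty string as NULL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
row = conn.execute(
    "SELECT 0 IS NULL, 0 IS NOT NULL, '' IS NULL, '' IS NOT NULL"
).fetchone()
print(row)  # (0, 1, 0, 1) — same result as the MySQL example above
```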
Indexing NULL
The Oracle database does not include rows into an index if all
indexed columns are NULL.
DB2
DB2 includes NULL into every index.
MySQL
MySQL includes NULL into every index.
Oracle
The Oracle database does not include rows into an index if
all indexed columns are NULL.
PostgreSQL
PostgreSQL has "Index support for IS NULL" since release
8.3.
SQL Server
SQL Server includes NULL into every index.
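The contrast to Oracle is easy to observe in a database that includes NULL in every index. In SQLite, for example, an ordinary single column index can answer an IS NULL predicate directly (a sketch via Python's sqlite3 module, offered only as a contrast to the Oracle behavior described next):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE employees (employee_id INTEGER, date_of_birth TEXT)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [(1, "1970-01-01"), (2, None)])
conn.execute("CREATE INDEX emp_dob ON employees (date_of_birth)")

# The NULL row is stored in the index, so the predicate can use it:
rows = conn.execute("SELECT employee_id FROM employees"
                    " WHERE date_of_birth IS NULL").fetchall()
plan = conn.execute("EXPLAIN QUERY PLAN SELECT employee_id"
                    " FROM employees"
                    " WHERE date_of_birth IS NULL").fetchall()
print(rows)  # [(2,)]
print(plan)  # the plan references the emp_dob index
```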
For example, the EMP_DOB index, which consists of the DATE_OF_BIRTH
column only, will not contain an entry for a record where
DATE_OF_BIRTH is null:
INSERT INTO employees (
subsidiary_id, employee_id,
first_name, last_name, phone_number)
VALUES (?, ?, ?, ?, ?);
The DATE_OF_BIRTH is not specified and defaults to NULL—hence, the
record is not added to the EMP_DOB index. As a consequence, the
index cannot support a query for records where the date of birth
is null:
SELECT first_name, last_name
FROM employees
WHERE date_of_birth IS NULL;
----------------------------------------------------
| Id | Operation | Name | Rows | Cost |
----------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 477 |
|* 1 | TABLE ACCESS FULL| EMPLOYEES | 1 | 477 |
----------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("DATE_OF_BIRTH" IS NULL)
Nevertheless, the record will be inserted into a concatenated
index if at least one column is not NULL:
CREATE INDEX demo_null ON employees (subsidiary_id, date_of_birth);
The new row is included in the demo index because the subsidiary
is not null for that row. As soon as any index column is not null,
the entire row is added to the index. This index can therefore
support a query for all employees of a specific subsidiary where
the date of birth is null:
SELECT first_name, last_name
FROM employees
WHERE subsidiary_id = ?
AND date_of_birth IS NULL;
--------------------------------------------------------------
| Id | Operation | Name | Rows | Cost |
--------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 2 |
| 1 | TABLE ACCESS BY INDEX ROWID| EMPLOYEES | 1 | 2 |
|* 2 | INDEX RANGE SCAN | DEMO_NULL | 1 | 1 |
--------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("SUBSIDIARY_ID"=TO_NUMBER(?)
AND "DATE_OF_BIRTH" IS NULL)
Please note that the index covers the entire where clause; all
predicates are evaluated during the index lookup (Id 2). That
means, the NULL is in the index.
This concept can be extended to find all records where date of
birth is null—regardless of the subsidiary. Like with any other
concatenated index, the DATE_OF_BIRTH column must be the first to
support that query. However, it is required to keep the
SUBSIDIARY_ID in the index, otherwise NULL would not be included in
the index at all. Any column that cannot be NULL—e.g., because of a
not null constraint—is suitable to enforce inclusion of NULL into an
index. Even a faked column can be used, just for the purpose of
including NULL in the index:
DROP INDEX emp_dob;
CREATE INDEX emp_dob ON employees (date_of_birth, '1');
The index is postfixed with a constant string that can never be
NULL. That makes sure that the index has all rows, even if the date
of birth is NULL.
Tip
Putting a column that cannot be NULL into an index
allows indexing NULL like any value.
It is, however, very important that the dummy column can never
become NULL and that the database knows that. Otherwise, the
side effects might surprise you, as the next section explains.
NOT NULL Constraints
Whenever an Oracle index should support "where column IS NULL"
queries, the index must include a column that can never be NULL.
However, it is not sufficient that there are no NULL entries—it must
be enforced by a constraint. Otherwise, there could be a row
where all indexed columns are null; that row would not be
included in the index.
For example, the following index supports the select if the
LAST_NAME column has a NOT NULL constraint:
DROP INDEX emp_dob;
CREATE INDEX emp_dob_name ON employees (date_of_birth, last_name);
SELECT *
FROM employees
WHERE date_of_birth IS NULL;
-----------------------------------------------------------------
| Id | Operation | Name | Rows | Cost |
-----------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 3 |
| 1 | TABLE ACCESS BY INDEX ROWID| EMPLOYEES | 1 | 3 |
|* 2 | INDEX RANGE SCAN | EMP_DOB_NAME | 1 | 2 |
-----------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("DATE_OF_BIRTH" IS NULL)
As soon as the not null constraint is removed, the index can't be
used for the query anymore:
ALTER TABLE employees
MODIFY last_name NULL;
SELECT *
FROM employees
WHERE date_of_birth IS NULL;
----------------------------------------------------
| Id | Operation | Name | Rows | Cost |
----------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 477 |
|* 1 | TABLE ACCESS FULL| EMPLOYEES | 1 | 477 |
----------------------------------------------------
Tip
A missing NOT NULL constraint often causes
count(*) statements to do a full table scan.
However, the database recognizes a constant expression as well—
like the index definition from the previous section.
An index on a user defined function, on the other hand, does not
impose any NOT NULL constraint on the index expression:
CREATE OR REPLACE FUNCTION blackbox(id IN NUMBER) RETURN NUMBER
DETERMINISTIC
IS
BEGIN
RETURN id;
END;
DROP INDEX emp_dob;
CREATE INDEX emp_dob_bb
ON employees (date_of_birth, blackbox(employee_id));
SELECT *
FROM employees
WHERE date_of_birth IS NULL;
----------------------------------------------------
| Id | Operation | Name | Rows | Cost |
----------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 477 |
|* 1 | TABLE ACCESS FULL| EMPLOYEES | 1 | 477 |
----------------------------------------------------
The function name BLACKBOX emphasizes the fact that the optimizer
has no idea what the function is doing (see also the section called
“Case-Insensitive Search”). Although we can see that the function
passes the input value straight through, the database doesn't
recognize that—it's just a user defined function that returns a
numeric value. The NOT NULL property of the EMPLOYEE_ID column is lost. It's
not possible to use the index to find all records. However, there is
still a way to use the index if you know that the expression will
never return NULL:
SELECT *
FROM employees
WHERE date_of_birth IS NULL
AND blackbox(employee_id) IS NOT NULL;
---------------------------------------------------------------
| Id | Operation | Name | Rows | Cost |
---------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 3 |
| 1 | TABLE ACCESS BY INDEX ROWID| EMPLOYEES | 1 | 3 |
|* 2 | INDEX RANGE SCAN | EMP_DOB_BB | 1 | 2 |
---------------------------------------------------------------
The added where clause is an indirect way to put the not null
constraint on the expression. The statement can use the index
because it selects only the records where the expression is not
null; they are in the index—by definition.
There is, unfortunately, no way to declare a function so that NULL is
returned only if NULL is provided as input.
NOT NULL Constraints on Virtual Columns
Oracle introduced virtual columns that can be indexed with
release 11g.
Virtual columns serve a similar purpose as functional indexes,
but they have a distinct column name that can be referenced.
They are, in fact, very similar to a column that is added via a
view. But they can be indexed.
These virtual columns can have a NOT NULL constraint:
DROP INDEX emp_dob_bb;
ALTER TABLE employees
ADD bb_expression
GENERATED ALWAYS AS (blackbox(employee_id)) NOT NULL;
CREATE INDEX emp_dob_bb
ON employees (date_of_birth, bb_expression);
SELECT *
FROM employees
WHERE date_of_birth IS NULL;
---------------------------------------------------------------
| Id | Operation | Name | Rows | Cost |
---------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 3 |
| 1 | TABLE ACCESS BY INDEX ROWID| EMPLOYEES | 1 | 3 |
|* 2 | INDEX RANGE SCAN | EMP_DOB_BB | 1 | 2 |
---------------------------------------------------------------
In fact, virtual columns existed before Oracle 11g as an
implementation vehicle for function-based indexes. Each
indexed expression automatically creates a virtual column
("SYS_....") that is not shown when the table is described.
Altering such system-generated virtual columns to NOT NULL
seems to be possible as well.
However, the Oracle database knows that some functions return
NULL only if NULL is provided as input—but only for built-in
functions:
DROP INDEX emp_dob_bb;
CREATE INDEX emp_dob_upname
ON employees (date_of_birth, upper(last_name));
SELECT *
FROM employees
WHERE date_of_birth IS NULL;
-------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost |
-------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 3 |
| 1 | TABLE ACCESS BY INDEX ROWID| EMPLOYEES | 1 | 3 |
|* 2 | INDEX RANGE SCAN | EMP_DOB_UPNAME | 1 | 2 |
-------------------------------------------------------------------
The UPPER function preserves the NOT NULL property of the indexed
column. Removing the constraint on the base column renders
the index unusable:
ALTER TABLE employees MODIFY last_name NULL;
SELECT *
FROM employees
WHERE date_of_birth IS NULL;
----------------------------------------------------
| Id | Operation | Name | Rows | Cost |
----------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 477 |
|* 1 | TABLE ACCESS FULL| EMPLOYEES | 1 | 477 |
----------------------------------------------------
Partial Indexes, Part I
The Oracle way to handle NULL in indexes opens the door to
implementing a missing feature: partial indexes—sometimes also
called filtered indexes.
DB2
DB2 does not support partial indexes.
MySQL
The MySQL community uses the term "partial index" for a
feature that is different from the one explained here.
There is no MySQL equivalent for the feature described here.
Oracle
The Oracle database has only the indirect support for partial
indexes that is described in this section.
PostgreSQL
PostgreSQL has support for partial indexes since release 7.2.
Documentation is available in the PostgreSQL manual.
SQL Server
SQL Server supports filtered indexes.
A partial index is an index that does not contain all rows from the
table, but only those that match a filter predicate. It is often
implemented as a where clause in the index definition. The
purpose of a partial index is to keep the index small, even if the
base table grows large. However, as of release 11g, the Oracle
database doesn't support partial indexes.
For example, consider a queue table where each item can have
one of three states: todo (T), done (D), and error (E). New items
are constantly put into the queue, picked up by a periodic process,
and updated when processed. If something goes wrong, the state
is set to error; otherwise, to done.
This scenario will lead to a non-uniform data distribution because
the "done" records accumulate over time. Whenever a query
selects the "done" rows, a column histogram will probably tell the
optimizer to perform a full table scan because that's the best
execution plan to retrieve a large fraction of the table. Regardless
of that, the index on state is still required to support queries for
the "todo" records.
An index on state is a good candidate for a partial index that
doesn't contain the "done" records. The index definition, which
works in PostgreSQL and SQL Server only, would be:
CREATE INDEX task_state ON tasks (state) WHERE state IN ('T','E');
Databases with native support for partial indexes can use it to
select all T and E records—but not for any other state. This is,
however, the same effect that a column histogram would cause in
that scenario.
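A database with native partial index support makes the behavior easy to verify. The following sketch uses SQLite (which also supports partial indexes) through Python's sqlite3 module; the tasks table and its contents are made up for illustration:

```python
import sqlite3

# Illustration only: SQLite accepts the same partial index
# definition as shown above for PostgreSQL/SQL Server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, state TEXT)")
# Many 'done' rows, few 'todo'/'error' rows.
conn.executemany("INSERT INTO tasks (state) VALUES (?)",
                 [("D",)] * 1000 + [("T",)] * 5 + [("E",)] * 2)
conn.execute("CREATE INDEX task_state ON tasks (state) "
             "WHERE state IN ('T', 'E')")
conn.execute("ANALYZE")

# The index is usable when the query repeats the index predicate ...
plan_active = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM tasks WHERE state IN ('T', 'E')"
).fetchall()
# ... but not for any other state: 'done' rows are not in the index.
plan_done = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM tasks WHERE state = 'D'"
).fetchall()

uses_index_active = any("task_state" in row[-1] for row in plan_active)
uses_index_done = any("task_state" in row[-1] for row in plan_done)
active = conn.execute(
    "SELECT count(*) FROM tasks WHERE state IN ('T', 'E')").fetchone()[0]
```

EXPLAIN QUERY PLAN confirms that the small partial index serves queries for the active states, while a query for the "done" state has to scan the table.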
The Oracle database doesn't support partial indexes as
demonstrated above but has an implicit where clause in every
index; that is, "column is not null". In other words, every Oracle
index is a partial index that doesn't contain entries where all
columns are null. Mapping any value that shall not be indexed to
NULL simulates a partial index:
CREATE OR REPLACE FUNCTION active_state(state CHAR)
RETURN CHAR
DETERMINISTIC
AS
BEGIN
IF state IN ('T', 'E') THEN
RETURN state;
ELSE
RETURN NULL;
END IF;
END;
/
CREATE INDEX task_state ON tasks (active_state(state));
The oddity is that the SQL statement must use the
function—otherwise the index can't be used:
SELECT *
FROM tasks
WHERE active_state(state) = 'T';
On the other hand, the function must not be used to find the
processed records:
SELECT *
FROM tasks
WHERE state = 'D';
Although this kind of emulated partial index is known to work, it is
rather awkward.
Partial Indexes, Part II
As of Oracle release 11g, there is a second approach to simulate
a partial index.
However, as of this writing I do not have any practical experience
with that approach, so I prefer not to push it right now. It
does not exploit the NULL trick and does therefore not belong
in this section anyway.
In case you would like to try the new approach, just drop me a
note.
Searching For Ranges
SQL inequality operators such as <, > and between can be tuned
just like exact key lookups. The same rules apply—but the column
order becomes even more important than before.
However, there is one special case that is hardly tunable—it is, in
fact, the only case where a B-Tree index can serve only as a
makeshift.
Greater, Less and Between
The most important performance factor of an INDEX RANGE SCAN is
the leaf node traversal—keeping that small is the golden rule of
indexing. The question to answer for every range scan is
therefore: where will the scan start and where will it stop?
Although the question is valid for every index access, it becomes
more important in conjunction with inequality operators.
The question is easy to answer if the SQL statement states the
start and stop conditions explicitly:
SELECT first_name, last_name, date_of_birth
FROM employees
WHERE date_of_birth <= TO_DATE(?, 'YYYY-MM-DD')
AND date_of_birth >= TO_DATE(?, 'YYYY-MM-DD')
An index on DATE_OF_BIRTH will be scanned for the specified date
range only. The scan will start at the first date and end at the
second one. It can't be narrowed any further.
The scenario becomes more complex if a second column gets
involved:
SELECT first_name, last_name, date_of_birth
FROM employees
WHERE date_of_birth <= TO_DATE(?, 'YYYY-MM-DD')
AND date_of_birth >= TO_DATE(?, 'YYYY-MM-DD')
AND subsidiary_id = ?
There are two different ways to define the corresponding index—
either DATE_OF_BIRTH first or SUBSIDIARY_ID first. Although it doesn't
make any difference with the equals operator, it makes a
difference when a range condition is used.
Important
The column order in a concatenated index does not
make any performance difference for statements
that use the equals operators only.
However, the column order can be re-arranged to
support more queries that use only some of these
columns (see the section called “Concatenated
Keys”).
Whenever a range comparison is used, the column
order matters for a single statement. The freedom
to re-arrange it to support other queries is lost.
The following figures demonstrate the effect of the two index
variants with a search for subsidiary 27 and a date range from the
1st till the 9th of January 1971.
Figure 2.2, “Range Scan in DATE_OF_BIRTH, SUBSIDIARY_ID
index” visualizes a detail of the index on DATE_OF_BIRTH and
SUBSIDIARY_ID—in that order. Where will the scan start? Or, to put
it another way: Where would the tree traversal go?
Figure 2.2. Range Scan in DATE_OF_BIRTH,
SUBSIDIARY_ID index
The example is very similar to the one discussed in the section
called “Concatenated Keys”. The condition on the subsidiary is
useless because the index is ordered by the date first. As a result,
the records for subsidiary 27 are distributed over the entire
index. This is also true within the selected date range; at first
sight, there is no order within the subsidiary ids.
However, the subsidiaries are of course ordered—but only within
each day. That's why the range scan must begin at the first entry
that satisfies the condition on the date of birth—disregarding the
subsidiary. In fact, the INDEX RANGE SCAN must go through the
entire date range—13 rows—and check every entry against the
subsidiary.
The picture changes dramatically when the index definition is
turned around: first SUBSIDIARY_ID, then DATE_OF_BIRTH. Figure 2.3,
“Range Scan in SUBSIDIARY_ID, DATE_OF_BIRTH Index”
shows the index.
Figure 2.3. Range Scan in SUBSIDIARY_ID,
DATE_OF_BIRTH Index
The difference is that there is only one distinct value that matches
the leading column—SUBSIDIARY_ID = 27. The next index column is
therefore sorted—at least within that subsidiary. So, there is no
need to visit the first or second leaf node because the branch
nodes indicate that none of the records will match the search
criteria.
Remember that each branch node entry corresponds to the last
record in the corresponding leaf node. That means that the first
branch node entry—SUBSIDIARY_ID 27 and DATE_OF_BIRTH 12th
September 1960—refers to a leaf node where all records are less
than or equal to that entry. Hence, there is no record that satisfies
the original search condition (01-JAN-71 till 10-JAN-71) in that leaf
node. The same is true for the second node.
The third node is visited because the date in the branch node is
bigger than the search key. If there is any matching record it
must be stored in that leaf node. The tree traversal ends in the
third leaf node and the leaf node scan starts. However, that
terminates in the very same leaf node because the last record is
already out of the search range. The entire index lookup visits one
leaf node only.
Tip
Rule of thumb: Index for equality first—then for
ranges.
Please note that the selectivity of the individual columns is the
same if we disregard the other column: there are 13 records for the
date range and 13 records for the subsidiary. Regardless of
that, the second index will do better for this query.
The actual performance difference depends on the data
distribution. It might be negligible if the range predicate alone is
very selective, but it can become huge if the range predicate
matches a large fraction of the table. In that case, the optimizer
might choose to run a full table scan instead—even if the joint
selectivity is good.
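Since an index is essentially an ordered list, the effect of the two column orders can be simulated with a sorted list and binary search. The following Python sketch uses fabricated data; the row counts are illustrative only:

```python
import bisect
import random

random.seed(1)
# Fabricated employee entries: (date_of_birth, subsidiary_id) pairs,
# with birth years standing in for full dates.
rows = [(random.randint(1950, 1990), random.randint(1, 30))
        for _ in range(10_000)]

start, end, subsidiary = 1970, 1975, 27

# Variant 1: index on (date_of_birth, subsidiary_id).
# The scan covers the entire date range; the subsidiary only filters.
idx_dob_first = sorted((dob, sub) for dob, sub in rows)
lo = bisect.bisect_left(idx_dob_first, (start,))
hi = bisect.bisect_right(idx_dob_first, (end, float("inf")))
scanned_dob_first = hi - lo
matches = sum(1 for dob, sub in idx_dob_first[lo:hi]
              if sub == subsidiary)

# Variant 2: index on (subsidiary_id, date_of_birth).
# Within subsidiary 27 the dates are sorted, so both conditions
# narrow the scanned range.
idx_sub_first = sorted((sub, dob) for dob, sub in rows)
lo = bisect.bisect_left(idx_sub_first, (subsidiary, start))
hi = bisect.bisect_right(idx_sub_first, (subsidiary, end))
scanned_sub_first = hi - lo
```

With equality first, the scanned range contains exactly the matching entries; with the range column first, the scan traverses the whole date range and filters most of it away.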
The problem with the two index variants is that the difference is
almost invisible in the execution plan; both variants perform an
INDEX RANGE SCAN. Still, there is a difference shown in the execution
plan:
---------------------------------------------------------------
| Id | Operation | Name | Rows | Cost |
---------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 4 |
|* 1 | FILTER | | | |
| 2 | TABLE ACCESS BY INDEX ROWID| EMPLOYEES | 1 | 4 |
|* 3 | INDEX RANGE SCAN | EMP_TEST | 2 | 2 |
---------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter(TO_DATE(:END_DT,'YYYY-MM-DD')
>=TO_DATE(:START_DT,'YYYY-MM-DD'))
3 - access("DATE_OF_BIRTH">=TO_DATE(:START_DT,'YYYY-MM-DD')
AND "SUBSIDIARY_ID"=TO_NUMBER(:SUBS_ID)
AND "DATE_OF_BIRTH"<=TO_DATE(:END_DT ,'YYYY-MM-DD'))
filter("SUBSIDIARY_ID"=TO_NUMBER(:SUBS_ID))
First of all, the plan looks more complicated than expected—but
that's just a side effect of using bind parameters. In this case,
there is an extra FILTER step to check if the end date is not before
the start date. If the real values contradict each other—e.g., a date
range from 1970 to 1960—the following steps will not be executed
at all.
Apart from that, the execution plan is as expected; a range scan
on the index and the corresponding table access. All the predicates
are applied during the index lookup and there is no filter on table
level.
However, the predicate section reveals the one detail that makes
all the difference; the SUBSIDIARY_ID condition shows up as filter
predicate—that means the EMP_TEST index is on DATE_OF_BIRTH first.
Important
Filter predicates are applied during the leaf node
traversal only.
They don't contribute to the start and stop
conditions and do therefore not narrow the scanned
range.
Filter Predicates that are Access Predicates
An execution plan is sometimes showing a condition as filter
predicate and as access predicate at the same time—like
SUBSIDIARY_ID in the example above. That means that the
condition can be used during the tree traversal but only within
the first distinct value of the preceding columns.
Figure 2.4, “Start Condition in DATE_OF_BIRTH,
SUBSIDIARY_ID Index” shows the relevant part from the
index again:
Figure 2.4. Start Condition in DATE_OF_BIRTH,
SUBSIDIARY_ID Index
The tree traversal uses the predicates DATE_OF_BIRTH >= '01-JAN-
71' AND SUBSIDIARY_ID = 27. When the database inspects the first
entry from the branch node, it must decide if a matching entry
could possibly appear in the corresponding leaf node. However,
the first date in the branch node is equal to the start date. That
means that the corresponding leaf node cannot contain any
record with a later date, nor can it contain any record for 1st
January with a subsidiary bigger than the one stored in that
branch node entry (6). In other words, there can't be any
matching record in the corresponding leaf node. Even if there
were a record for 1st January and subsidiary 27, it would
not be stored in the first leaf node. So, the subsidiary
predicate narrows the scanned range just a little bit; the range
scan must filter 12 rows only—instead of all 13 that match the
date range.
Compare that to the execution plan with the better index, where
no filter is applied during the index scan:
---------------------------------------------------------------
| Id | Operation | Name | Rows | Cost |
---------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 3 |
|* 1 | FILTER | | | |
| 2 | TABLE ACCESS BY INDEX ROWID| EMPLOYEES | 1 | 3 |
|* 3 | INDEX RANGE SCAN | EMP_TEST2 | 1 | 2 |
---------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter(TO_DATE(:END_DT,'YYYY-MM-DD')
>=TO_DATE(:START_DT,'YYYY-MM-DD'))
3 - access("SUBSIDIARY_ID"=TO_NUMBER(:SUBS_ID)
AND "DATE_OF_BIRTH">=TO_DATE(:START_DT,'YYYY-MM-DD')
AND "DATE_OF_BIRTH"<=TO_DATE(:END_DT,'YYYY-MM-DD'))
Another useful detail about the Predicate Information section for
an index access is that it can be read from top to bottom. The
important point is that the access predicates appear in a defined
order; first the conditions for the tree traversal—that is, the start
of the range scan. They show up in index order. Whatever follows
those predicates is the stop condition—the end date in this case.
Important
The access predicates express the start and stop
conditions for the range scan.
The order of the access predicates indicates the
order of use: first the clauses used in the tree
traversal, then the stop condition. However, they
can collapse so that only one remains; e.g., when
there is only one equality clause. The stop condition
will also disappear for an unbounded range scan,
e.g., WHERE DATE_OF_BIRTH >= '01-JAN-71'.
Finally, there is the BETWEEN keyword. It is just a shortcut so
that
DATE_OF_BIRTH >= '01-JAN-71'
AND DATE_OF_BIRTH <= '10-JAN-71'
is equivalent to
DATE_OF_BIRTH BETWEEN '01-JAN-71'
AND '10-JAN-71'
Note that BETWEEN is always inclusive (see page 210, §8.6/6 of
SQL92).
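The equivalence and the inclusiveness of BETWEEN are easy to verify; a quick check using SQLite through Python's sqlite3, with dates stored as ISO strings:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (d TEXT)")
# Days 01..09, plus the 10th and 11th of January 1971.
conn.executemany("INSERT INTO t VALUES (?)",
                 [("1971-01-0%d" % i,) for i in range(1, 10)]
                 + [("1971-01-10",), ("1971-01-11",)])

# BETWEEN includes both endpoints ...
between = conn.execute(
    "SELECT count(*) FROM t "
    "WHERE d BETWEEN '1971-01-01' AND '1971-01-10'"
).fetchone()[0]
# ... exactly like the explicit >= / <= pair.
explicit = conn.execute(
    "SELECT count(*) FROM t "
    "WHERE d >= '1971-01-01' AND d <= '1971-01-10'"
).fetchone()[0]
```

Both queries return 10 rows: the 1st through the 10th, excluding the 11th.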
Indexing LIKE Filters
The SQL LIKE operator is a common cause of performance issues
because its performance depends heavily on the search term.
But it's not the selectivity of the entire search term that
determines the performance—it's the prefix selectivity that
matters. In that context, the prefix is the substring before the
first wildcard.
Consider the following example:
SELECT first_name, last_name, date_of_birth
FROM employees
WHERE UPPER(last_name) LIKE 'WIN%D'
----------------------------------------------------------------
| Id | Operation | Name | Rows | Cost |
----------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 4 |
| 1 | TABLE ACCESS BY INDEX ROWID| EMPLOYEES | 1 | 4 |
|* 2 | INDEX RANGE SCAN | EMP_UP_NAME | 1 | 2 |
----------------------------------------------------------------
The LIKE operator can use an index if the search term has a prefix
before the first wildcard. Only the prefix determines the scanned
range and is therefore an access predicate—that's only the WIN
part of the search term. The longer the prefix, the narrower the
range scan. The worst case—a postfix search LIKE '%AND', or an
anywhere search LIKE '%NAN%'—will scan the entire index.
It's the difference between access and filter predicates again.
Figure 2.5, “Various LIKE searches” illustrates that for three
different search terms—all of them select the same row.
Figure 2.5. Various LIKE searches
The prefix length before the first wildcard affects the scanned
index range—it's highlighted in the figure. The longer the prefix,
the shorter the scanned range. A postfix filter is not able to
narrow the index access; it just removes the records that don't
match.
Important
Only the LIKE prefix can serve as access predicate.
Everything after the first wildcard is a filter
predicate.
The first query in the figure has only two prefix characters. It
needs to read 18 records. The postfix filter removes 17 of them.
The second expression reduces the range scan to two records—
but it still needs to filter one record. The last search doesn't need a
filter at all and reads the selected record only.
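The access/filter split of a LIKE expression can be emulated on a sorted list: the prefix before the first wildcard translates into a range scan (from the prefix up to the prefix with its last character incremented), and the rest of the pattern merely filters. A Python sketch with made-up names:

```python
import bisect
from fnmatch import fnmatchcase

names = sorted(["WINAND", "WILSON", "WINDSOR",
                "WINTER", "WAND", "BROWN"])

def like_scan(sorted_names, pattern):
    """Emulate an index range scan for a LIKE pattern.

    Supports '%' wildcards only; the simplistic upper-bound
    calculation ignores edge cases such as a prefix ending in 'Z'.
    """
    prefix = pattern.split("%", 1)[0]
    if prefix:
        lo = bisect.bisect_left(sorted_names, prefix)
        # Upper bound: increment the last character of the prefix.
        upper = prefix[:-1] + chr(ord(prefix[-1]) + 1)
        hi = bisect.bisect_left(sorted_names, upper)
    else:
        lo, hi = 0, len(sorted_names)   # no prefix: full index scan
    scanned = sorted_names[lo:hi]       # access predicate
    glob = pattern.replace("%", "*")
    matches = [n for n in scanned
               if fnmatchcase(n, glob)]  # filter predicate
    return len(scanned), matches

scanned, matches = like_scan(names, "WIN%D")
scanned_any, matches_any = like_scan(names, "%AND")
</```

The prefixed search scans only the three WIN… entries and filters two of them; the anywhere search must scan all six names.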
Tip
Avoid anywhere LIKE searches (e.g., LIKE '%TERM%')
whenever possible.
In case it is inevitable, try to provide another access
path to the selected rows—using the LIKE clause as
filter predicate on index or table level.
In case there is no other access path you should
consider using a full text index.
The Oracle optimizer recognizes prefix wildcards in LIKE searches
to avoid an ineffective range scan. However, the use of bind
parameters makes that impossible. The optimizer assumes a
usable prefix and uses an INDEX RANGE SCAN for a LIKE ? clause.
Statically Prefixing Anywhere LIKE Expressions
There are, theoretically, three solutions to that problem.
The first is not to use bind parameters for anywhere searches.
However, that might introduce a considerable parsing
overhead.
The second doesn't work in the Oracle database as of release
11gR2: statically prefixing the bind parameter, e.g.,
LIKE '%' || ?. The database evaluates the entire expression only
when the bind value is available, so that doesn't change
anything—but it would be very nice if the database recognized
that case.
The third solution is to use a hint that disallows the respective
index. However, hints should be avoided in production code.
But reality bites—it's often the only possible solution.
The LIKE operator—as well as a regular expression—is not an
adequate way to implement full text searches. Most databases
offer a special purpose index for that. The following overview
gives some examples.
DB2
DB2 supports the CONTAINS keyword:
http://www.ibm.com/developerworks/data/tutorials/dm-0810shettar/index.html
MySQL
MySQL implements the MATCH and AGAINST keywords, but
they work on MyISAM tables only:
http://dev.mysql.com/doc/refman/5.5/en/fulltext-search.html
Oracle
Oracle supports the CONTAINS keyword:
http://download.oracle.com/docs/cd/B19306_01/text.102/b14218/csql.htm#i997503
PostgreSQL
PostgreSQL uses the @@ operator to implement full text
searches:
http://www.postgresql.org/docs/9.0/static/textsearch-intro.html#TEXTSEARCH-MATCHING
Another option is the wildspeed extension. It basically stores
the string in all possible rotations, so that each character of
the string is at the beginning. That means the indexed string is
stored not once, but as many times as there are characters in
the string—so it needs quite some space. However, that makes
it possible to query the index efficiently without a wildcard
prefix.
SQL Server
SQL Server implements the CONTAINS keyword:
http://msdn.microsoft.com/en-us/library/ms142571.aspx
Index Combine
It's actually the most common question about indexing: is it
better to have one index with all columns or one individual index
for every column? The answer is quite simple in almost all cases:
one index with multiple columns is better—that is, a concatenated
or compound index. The section called “Concatenated Keys”
explains that in detail.
However, there is one case where a single index can't do a perfect
job—no matter how the index definition is changed. That's the
case when there are two or more independent range conditions in
the where clause.
Consider the following SQL:
SELECT first_name, last_name, date_of_birth
FROM employees
WHERE UPPER(last_name) < ?
AND date_of_birth < ?
It's impossible to define a B-Tree index so that there isn't a filter
predicate. This limitation is quite simple to understand—bearing
in mind that an index is essentially an ordered list.
For example, when indexing UPPER(LAST_NAME), DATE_OF_BIRTH (in
that order), the list will start with A and end with Z. However, a
record for the employee Brown will be rather close to the start of
the list—regardless of the date of birth. The date of birth
influences the list position only marginally—only when the same
name appears twice. That means that scanning from the beginning of
the list will need to filter for the date of birth. The date of birth
doesn't narrow the scanned range.
The same is true when defining the index the other way around.
The scan will start with the earlier dates but will need a filter to
remove the names that don't match.
After all, the problem is as simple as that: an index is a
one-dimensional chain that supports one range condition as an
access predicate. Supporting two independent range conditions as
access predicates would mean scanning a corner of a chessboard.
However, a B-Tree index is a chain—there is no second axis.
There are two workarounds for this problem. The first is to define
a concatenated index just as described above—accepting the filter
predicate. This is the only case where the most selective first rule
applies. Whatever condition is more selective should go first so
that the scanned range is as small as possible.
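The ordered-list picture also shows why the more selective range condition should lead. In the following Python sketch (fabricated data), whichever column comes first determines the scanned range, and the other condition only filters:

```python
import bisect
import random

random.seed(2)
# Fabricated rows: (upper-case name initial, birth year).
rows = [(chr(random.randint(65, 90)), random.randint(1950, 1990))
        for _ in range(10_000)]

name_bound, year_bound = "C", 1955  # two independent range conditions

def scanned_count(index, bound):
    # Range scan on the leading column only; the trailing condition
    # cannot narrow the range and becomes a filter predicate.
    return bisect.bisect_left(index, bound)

# name first: scan all names < 'C' (about 2/26 of the rows).
scan_name_first = scanned_count(
    sorted((n, y) for n, y in rows), (name_bound,))
# year first: scan all years < 1955 (about 5/41 of the rows).
scan_year_first = scanned_count(
    sorted((y, n) for n, y in rows), (year_bound,))

# Rows satisfying both conditions—what either scan must filter down to.
matches = sum(1 for n, y in rows
              if n < name_bound and y < year_bound)
```

Both variants scan more entries than match; here the name condition is the more selective one, so putting it first keeps the scanned range smaller.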
The second approach is to borrow a feature from the
data-warehousing world.
Data-warehousing is the mother of all ad-hoc queries. A few clicks
allow querying any combination of search filters. It's impossible to
tell which columns will be queried together. That makes indexing
as described so far rather impossible.
There is, of course, a solution to that problem: the bitmap
index—a special purpose index type. However, bitmap indexes
have limitations that make them truly special purpose indexes.
One problem is that bitmap operations are very CPU bound. But
the more limiting problem is their terrible
insert/update/delete scalability. In fact, concurrent write
operations on bitmap indexes are almost impossible, so
bitmap indexes are hardly usable in an OLTP environment.
However, the key benefit of bitmap indexes is that merging of
multiple indexes is rather easy. That is, multiple indexes can be
used and combined for a single table access.
Important
The database can—in principle—use one B-Tree
index only per table access. However, a single SQL
statement can access many tables—by means of
joins and subqueries—hence, a single SQL statement
can use many B-Tree indexes.
Bitmap indexes can be combined on the fly so that a
single table access can use multiple indexes.
There are many databases, including the Oracle database, that
use a hybrid solution between regular B-Tree indexes and bitmap
indexes. In the absence of a better option, they can transform the
results of multiple B-Tree scans into bitmaps and merge them on
the fly. That means the concurrency limitation of bitmap indexes
is bypassed at the price of making the overall operation even
more CPU bound. After all, a B-Tree based bitmap merge is an
option of last resort only.
Important
The limitations of bitmap indexes make them hardly
usable in an OLTP environment.
Some databases use different techniques to combine multiple
B-Tree indexes. The following table gives a short overview of the
different implementations:
DB2
DB2 supports multiple index access on LUW 9.7 (using a
dynamic bitmap) and on zOS v10.
MySQL
MySQL has an index merge optimization starting with
release 5.0.
Oracle
The Oracle database uses BITMAP CONVERSIONs to combine
multiple indexes on the fly (introduced with 9i).
PostgreSQL
PostgreSQL uses bitmaps to combine multiple indexes since
version 8.1.
SQL Server
SQL Server can use multiple indexes ("Index Intersect")
starting with V7.0 using a hash algorithm.
It's often better to use a single B-Tree index instead of bitmap
indexes or multiple B-Tree indexes—even in case of multiple
independent range conditions.
The following execution plan shows the bitmap conversions
caused by two individual B-Tree indexes for two independent
range conditions.
--------------------------------------------------------------
| Id | Operation | Name | Cost (%CPU)|
--------------------------------------------------------------
| 0 | SELECT STATEMENT | | 17 (12)|
| 1 | TABLE ACCESS BY INDEX ROWID | EMPLO | 17 (12)|
| 2 | BITMAP CONVERSION TO ROWIDS | | |
| 3 | BITMAP AND | | |
| 4 | BITMAP CONVERSION FROM ROWIDS| | |
| 5 | SORT ORDER BY | | |
|* 6 | INDEX RANGE SCAN | DOB | 2 (0)|
| 7 | BITMAP CONVERSION FROM ROWIDS| | |
| 8 | SORT ORDER BY | | |
|* 9 | INDEX RANGE SCAN | UNAME | 2 (0)|
--------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
6 - access("DATE_OF_BIRTH"<:DOB)
filter("DATE_OF_BIRTH"<:DOB)
9 - access(UPPER("LAST_NAME")<:NAME)
filter(UPPER("LAST_NAME")<:NAME)
Please note that the Rows column was removed in favor of the CPU
cost information.
The notable operations are the sorts that follow the index range
scans and the bitmap operations. There are two independent
range scans; the result is sorted and converted to an in-memory
bitmap index. Those in-memory indexes are then combined with
the BITMAP AND operation and the result is converted back to get
the ROWIDs. The last step is to fetch the records from the table.
Another, very important, detail in the execution plan is the CPU
cost—12% in that case.
In comparison to the plan above, a single index on UPPER(last_name)
and DATE_OF_BIRTH will perform better and not cause any notable
CPU cost:
-------------------------------------------------------------
| Id | Operation | Name | Cost (%CPU)|
-------------------------------------------------------------
| 0 | SELECT STATEMENT | | 3 (0)|
| 1 | TABLE ACCESS BY INDEX ROWID| EMPLOYEES | 3 (0)|
|* 2 | INDEX RANGE SCAN | UNAME_DOB | 2 (0)|
-------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access(UPPER("LAST_NAME")<:NAME
AND "DATE_OF_BIRTH"<:DOB)
filter("DATE_OF_BIRTH"<:DOB)
Please note that both plans were created with bind parameters.
Hence, the cardinality estimates are ballpark figures and the
overall cost might be very different.
Obfuscated Conditions
The following sections demonstrate some popular (and some
funny) ways to obfuscate conditions. Obfuscated conditions are
where clauses phrased in a way that prevents proper
index usage. This section is therefore a collection of
anti-patterns—explained and tuned.
Dates
By far the most common obfuscation affects DATE columns. The
Oracle database is particularly vulnerable because its DATE
type always includes time. It's common practice to use the TRUNC
function to get rid of the time part:
SELECT ...
FROM ...
WHERE TRUNC(date_column) = TRUNC(sysdate - 1)
It's a perfectly valid and correct statement—except that it can't
use an index on DATE_COLUMN. That's because TRUNC(date_column) is
an entirely different thing than DATE_COLUMN—at least to the
database. the section called “Case-Insensitive Search” explains
that in more detail and suggests thinking of every function as a
BLACKBOX.
There is a rather simple solution to that problem: a function based
index.
CREATE INDEX index_name
ON table_name (TRUNC(date_column))
However, that requires all queries to use TRUNC. If there are
inconsistent queries (some with, others without TRUNC), two
indexes are required.
Even databases that have plain DATE types—that don't need TRUNC
for day-wise filters—can run into similar problems, e.g., with
monthly queries like this MySQL example:
SELECT ...
FROM ...
WHERE DATE_FORMAT(date_column, '%Y-%M')
    = DATE_FORMAT(now(), '%Y-%M')
That is very similar to the Oracle example above. However, the
solution (function based index) doesn't apply—MySQL doesn't
have function based indexes.
There is a generic solution that works in all environments:
rephrasing the condition as an explicit range query:
SELECT ...
FROM ...
WHERE date_column BETWEEN
quarter_begin(?)
AND quarter_end(?)
If you have done your homework, you probably recognize that
pattern from the exercise about all employees aged 42.
The benefit of this technique is that a straight index on DATE_COLUMN
works. The two functions QUARTER_BEGIN and QUARTER_END hide the
complexity of the boundary date calculation. That's especially
handy because the BETWEEN operator is always inclusive—the
QUARTER_END function must return a time just before the first day of
the next quarter.
The following samples implement QUARTER_BEGIN and QUARTER_END for
various databases. That's another benefit of using functions; the
database dependent code is not in the select statement.
DB2
CREATE FUNCTION quarter_begin(dt TIMESTAMP)
RETURNS TIMESTAMP
RETURN TRUNC(dt, 'Q');
CREATE FUNCTION quarter_end(dt TIMESTAMP)
RETURNS TIMESTAMP
RETURN TRUNC(dt, 'Q') + 3 MONTHS - 1 SECOND;
MySQL
CREATE FUNCTION quarter_begin(dt DATETIME)
RETURNS DATETIME DETERMINISTIC
RETURN CONVERT
(
CONCAT
( CONVERT(YEAR(dt),CHAR(4))
, '-'
, CONVERT(QUARTER(dt)*3-2,CHAR(2))
, '-01'
)
, datetime
);
CREATE FUNCTION quarter_end(dt DATETIME)
RETURNS DATETIME DETERMINISTIC
RETURN DATE_ADD
( DATE_ADD ( quarter_begin(dt), INTERVAL 3 MONTH )
, INTERVAL -1 MICROSECOND);
Oracle
CREATE FUNCTION quarter_begin(dt IN DATE)
RETURN DATE
AS
BEGIN
RETURN TRUNC(dt, 'Q');
END;
/
CREATE FUNCTION quarter_end(dt IN DATE)
RETURN DATE
AS
BEGIN
-- the Oracle DATE type has seconds resolution
-- subtract one second from the first
-- day of the following quarter
RETURN TRUNC(ADD_MONTHS(dt, +3), 'Q')
- (1/(24*60*60));
END;
/
PostgreSQL
CREATE FUNCTION quarter_begin(dt timestamp with time zone)
RETURNS timestamp with time zone AS $$
BEGIN
RETURN date_trunc('quarter', dt);
END;
$$ LANGUAGE plpgsql;
CREATE FUNCTION quarter_end(dt timestamp with time zone)
RETURNS timestamp with time zone AS $$
BEGIN
RETURN date_trunc('quarter', dt)
+ interval '3 month'
- interval '1 microsecond';
END;
$$ LANGUAGE plpgsql;
SQL Server
CREATE FUNCTION quarter_begin (@dt DATETIME )
RETURNS DATETIME
BEGIN
RETURN DATEADD (qq, DATEDIFF (qq, 0, @dt), 0)
END
GO
CREATE FUNCTION quarter_end (@dt DATETIME )
RETURNS DATETIME
BEGIN
RETURN DATEADD
( ms
, -3
, DATEADD(mm, 3, dbo.quarter_begin(@dt))
);
END
GO
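For comparison, the same boundary calculation can be sketched outside the database in Python; like the PostgreSQL and MySQL variants it works with microsecond resolution:

```python
import datetime

def quarter_begin(dt):
    """First instant of the quarter containing dt."""
    first_month = (dt.month - 1) // 3 * 3 + 1
    return datetime.datetime(dt.year, first_month, 1)

def quarter_end(dt):
    """Last instant before the next quarter begins.

    Returns an inclusive upper bound, suitable for BETWEEN.
    """
    begin = quarter_begin(dt)
    if begin.month == 10:               # Q4 wraps into the next year
        next_q = datetime.datetime(begin.year + 1, 1, 1)
    else:
        next_q = datetime.datetime(begin.year, begin.month + 3, 1)
    return next_q - datetime.timedelta(microseconds=1)
```

For example, any date in November maps to the range from October 1st to the last microsecond of December 31st.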
Similar auxiliary functions can be used for other periods—they'll
probably be less complex than the examples above.
Tip
Write queries for continuous periods as explicit
range conditions.
That's also true when the period is a single day—e.g.,
in Oracle: DATE_COLUMN >= TRUNC(sysdate) AND
DATE_COLUMN < TRUNC(sysdate+1).
There is another frequent problem with date columns that hits
applications that use string representations for dates. Consider
the following PostgreSQL example:
SELECT ...
FROM ...
WHERE TO_CHAR(date_column, 'YYYY-MM-DD') = '1970-01-01'
The problem is again the conversion of the DATE_COLUMN. The better
approach (regarding indexing) is to convert the string to the
native DATE representation of the database:
SELECT ...
FROM ...
WHERE date_column = TO_DATE('1970-01-01', 'YYYY-MM-DD')
That allows a straight index on DATE_COLUMN. Moreover, it converts
the input string only once. The other statement needs to convert
all the dates from the table before they can be compared against
the string literal.
However, there is a problem with that approach. If the
DATE_COLUMN includes the time of day—as the Oracle DATE type
does—the statement must read like this:
SELECT ...
FROM ...
WHERE date_column >= TO_DATE('1970-01-01', 'YYYY-MM-DD')
AND date_column < TO_DATE('1970-01-01', 'YYYY-MM-DD') + 1
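The effect on index usage can be sketched with SQLite (not one of the databases discussed above, but it behaves the same way here). The table and data are made up; SQLite's EXPLAIN QUERY PLAN reports SCAN for a full table scan and SEARCH when an index range scan is possible:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (sale_date TEXT, amount INTEGER)")
con.execute("CREATE INDEX sales_date ON sales (sale_date)")

def plan(sql):
    # the last column of each EXPLAIN QUERY PLAN row describes the step
    return " ".join(row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

# function applied to the column: the index is unusable, full table scan
p_function = plan("SELECT * FROM sales"
                  " WHERE substr(sale_date, 1, 10) = '1970-01-01'")

# explicit range condition on the bare column: index range scan
p_range = plan("SELECT * FROM sales"
               " WHERE sale_date >= '1970-01-01'"
               "   AND sale_date <  '1970-01-02'")

print(p_function)  # SCAN ...
print(p_range)     # SEARCH ... USING INDEX sales_date ...
```

The same pattern applies to the TO_CHAR and TO_DATE variants above: only the query that leaves the column untouched can use the index.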
Numeric Strings
Numeric strings are numbers that are stored in string fields.
There is no problem with that if all filters use string comparisons:
SELECT ...
FROM ...
WHERE numeric_string = '42'
That statement can, of course, use an index on NUMERIC_STRING.
However, an index range scan is not possible if the condition is
expressed as a numeric comparison:
SELECT ...
FROM ...
WHERE numeric_string = 42
Note the missing quotes. The database transforms that query
before execution. The result is something like this:
SELECT ...
FROM ...
WHERE TO_NUMBER(numeric_string) = 42
It's the same story again. The function prevents an INDEX RANGE
SCAN on the NUMERIC_STRING column (if there is such an index). The
solution is, again, to change the query so that the straight column
is queried—that is, convert the literal search term to a string:
SELECT ...
FROM ...
WHERE numeric_string = TO_CHAR(42)
You might wonder why the database doesn't do it that way. It's
because transforming a string to a number is unambiguous. That's
not the case the other way around. A formatted numeric string
might contain spaces, thousands separators, or leading zeros:
42
042
0042
00042
...
The database doesn't know the format of the numeric strings in
the table. It's therefore done the other way around; the strings
are turned into numbers—that's an unambiguous transformation.
However, an explicit TO_CHAR(42) evaluates to a single string
representation—the query will match only one of the strings
above.
The important point is that there is a semantic difference between
the two variants. The TO_CHAR variant matches only one record
from the list; the TO_NUMBER variant matches all of them.
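The semantic difference can be illustrated with plain Python over a handful of hypothetical numeric strings:

```python
# hypothetical contents of a VARCHAR column holding numbers
numeric_strings = ["42", "042", "0042", " 42"]

# TO_CHAR(42) style: compare against one exact string representation
to_char_matches = [s for s in numeric_strings if s == str(42)]

# TO_NUMBER(numeric_string) style: parse every string into a number
to_number_matches = [s for s in numeric_strings if int(s) == 42]

print(to_char_matches)    # ['42']
print(to_number_matches)  # ['42', '042', '0042', ' 42']
```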
The obfuscation becomes even more subtle with bind parameters
because the Oracle database always assumes matching types for
bind parameters:
SELECT ...
FROM ...
WHERE numeric_string = ?
The Oracle database expects a string value for the placeholder. If
that assumption doesn't hold true, the transformation applies
again. However, that might make the prepared execution plan
useless if an INDEX RANGE SCAN isn't possible anymore.
This is one of the cases where the execution plan made with
EXPLAIN PLAN doesn't tell the truth. The execution plan might differ
if the parameter types don't match.
SQL*Plus Autotrace And SQL Trace Files
The SQL*Plus autotrace functionality is also vulnerable to the
bind parameter type confusion. In fact, autotrace is performing
an explain plan in the background. The autotrace plan is not the
actually executed plan, but just an automatically explained
plan.
The Oracle SQL traces log the actually executed plan.
Numeric strings introduce many problems: the semantic
difference that can lead to unexpected results; the unusable
indexes that lead to bad performance; but there is also the risk of
exceptions. A transparently transformed statement can fail if
TO_NUMBER fails to parse a string. That's why a trivial statement—
without any explicit conversion—can still raise number conversion
errors if there is a type mismatch.
Tip
Use numeric types to store numbers.
The problem doesn't exist the other way around:
SELECT ...
FROM ...
WHERE numeric_number = '42'
The database will consistently transform the string into a
number. It does not wrap the—potentially indexed—column in any
function, so a regular index will work. However, a manual
conversion can still be done the wrong way:
SELECT ...
FROM ...
WHERE TO_CHAR(numeric_number) = '42'
Using the TO_CHAR function on an indexed column renders the
expression unusable as access predicate.
Date/Time Concatenation
This section is about a popular way to obfuscate concatenated
indexes. It is very similar to the "Dates" section but the other way
around. It is about concatenating date and time columns to apply
a range filter.
Consider the following MySQL example. It queries a table that
stores DATE and TIME separately—in two columns:
SELECT ...
FROM ...
WHERE ADDTIME(date_column, time_column)
> DATE_ADD(now(), INTERVAL -1 DAY)
It selects all records from the past 24 hours. However, the query
can't do an index range scan on a concatenated index (DATE_COLUMN,
TIME_COLUMN) because the search expression is, again, completely
unrelated to the indexed columns.
The problem doesn't exist if the time is stored along with the date
in a single column (e.g., MySQL DATETIME). One column can be
queried without any obfuscation:
SELECT ...
FROM ...
WHERE datetime_column
> DATE_ADD(now(), INTERVAL -1 DAY)
However, once the table is designed, it's hard to change. There
might even be a good reason for that design, so it is usually not
possible to change.
The next option is a function based index, if the RDBMS supports
it. That works very well, but there are all the drawbacks
discussed before. However, that example assumes a MySQL
database—function-based indexes are not an option.
Still, there is one more option that improves performance. It
works by adding a redundant where clause on DATE:
SELECT ...
FROM ...
WHERE ADDTIME(date_column, time_column)
> DATE_ADD(now(), INTERVAL -1 DAY)
AND date_column
>= DATE(DATE_ADD(now(), INTERVAL -1 DAY))
The new part of the where clause is absolutely redundant from a
functional point of view—but it's a straight filter on DATE_COLUMN. That
means any index that starts with DATE_COLUMN will improve
performance greatly. Even though this technique is not perfect—
because the time component can't use the index—it's still a major
improvement over a full scan.
Tip
The date part of a date/time concatenation can use
an index with a redundant where clause.
This technique also works if date and time are stored as strings.
However, that requires date and time formats that yield the
correct sort order—e.g., as ISO 8601 suggests (YYYY-MM-DD
HH:MM:SS). It is also very important that the date has a fixed
length—including leading zeros, as ISO suggests.
The following Oracle example demonstrates it.
SELECT ...
FROM ...
WHERE date_string || time_string
> TO_CHAR(sysdate - 1, 'YYYY-MM-DD HH24:MI:SS')
AND date_string
>= TO_CHAR(sysdate - 1, 'YYYY-MM-DD')
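The benefit of the redundant condition can be sketched with SQLite (a different database than those above; table and data are hypothetical). Without it, the concatenation forces a full scan; with it, the concatenated index becomes usable:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE log (date_column TEXT, time_column TEXT, msg TEXT)")
con.execute("CREATE INDEX log_date_time ON log (date_column, time_column)")

def plan(sql):
    return " ".join(row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

# concatenation alone: unrelated to the indexed columns, full scan
p_concat = plan(
    "SELECT * FROM log"
    " WHERE date_column || ' ' || time_column > '2015-01-11 12:00:00'")

# with the redundant condition on the bare date column: index range scan
p_redundant = plan(
    "SELECT * FROM log"
    " WHERE date_column || ' ' || time_column > '2015-01-11 12:00:00'"
    "   AND date_column >= '2015-01-11'")

print(p_concat)     # SCAN ...
print(p_redundant)  # SEARCH ... USING INDEX log_date_time ...
```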
Smart Logic
One of the key features of SQL databases is their support for ad-hoc
queries—that is, dynamic SQL. That's possible because the
query optimizer (query planner) works at runtime; each SQL
statement is analyzed when received to generate a reasonable
execution plan. The overhead introduced by runtime optimization
can be minimized with bind parameters—the section called “Bind
Parameter” covers that in detail.
The gist of that recap is that databases are optimized for dynamic
SQL—so, use it.
There is, however, one widely used practice to avoid dynamic SQL
in favour of static SQL—often because of the "Dynamic SQL is
slow" myth. Unfortunately that practice is doing more harm than
good.
So, let's imagine an application that queries the employees table
(see http://Use-The-Index-Luke.com/sql/example-schema). It
allows searching for subsidiary id, employee id and last name
(case-insensitive) in any combination. The following statement
supports all these variants:
SELECT first_name, last_name, subsidiary_id, employee_id
FROM employees
WHERE ( subsidiary_id = :sub_id OR :sub_id IS NULL )
AND ( employee_id = :emp_id OR :emp_id IS NULL )
AND ( UPPER(last_name) = :name OR :name IS NULL )
The statement uses Oracle-style named placeholders for better
readability. All possible filter expressions are statically coded in
the statement. Whenever a filter isn't needed, the respective
search term is NULL so that the particular filter becomes
ineffective.
It's a perfectly reasonable SQL statement. The use of NULL is even
in line with its definition according to the three-valued logic of
SQL. Regardless of that, it is still amongst the worst things for
performance.
It's impossible to know which filters will be effective and which
will be canceled out. The database has to prepare for all cases—it's
therefore impossible to use an index:
----------------------------------------------------
| Id | Operation | Name | Rows | Cost |
----------------------------------------------------
| 0 | SELECT STATEMENT | | 2 | 478 |
|* 1 | TABLE ACCESS FULL| EMPLOYEES | 2 | 478 |
----------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter(
(:NAME IS NULL OR UPPER("LAST_NAME")=:NAME)
AND (:EMP_ID IS NULL OR "EMPLOYEE_ID"=TO_NUMBER(:EMP_ID))
AND (:SUB_ID IS NULL OR "SUBSIDIARY_ID"=TO_NUMBER(:SUB_ID)))
The database has to run a full table scan even if proper indexes
exist.
Please remember that the purpose of bind parameters is to
decrease parsing overhead—as explained in the section called
“Bind Parameter”. The database will therefore not re-evaluate
the plan when the actual bind parameters are sent—hence, the
prepared query plan has to cover all cases.
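The full-scan effect can be reproduced with SQLite (a different database, but the planner faces the same dilemma; table and data are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employees (employee_id INTEGER, last_name TEXT)")
con.execute("CREATE INDEX emp_name ON employees (last_name)")

def plan(sql, params=()):
    return " ".join(
        row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql, params))

# the "static" variant must cover the NULL case: full table scan
p_static = plan("SELECT * FROM employees"
                " WHERE (last_name = ? OR ? IS NULL)", ("WINAND", "WINAND"))

# the plain filter can use the index
p_plain = plan("SELECT * FROM employees WHERE last_name = ?", ("WINAND",))

print(p_static)  # SCAN ...
print(p_plain)   # SEARCH ... USING INDEX emp_name ...
```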
It's not that the Oracle database is "stupid". The optimizer
resolves the (tautological) expressions when inlined literally:
SELECT first_name, last_name, subsidiary_id, employee_id
FROM employees
WHERE( subsidiary_id = NULL OR NULL IS NULL )
AND( employee_id = NULL OR NULL IS NULL )
AND( UPPER(last_name) = 'WINAND' OR 'WINAND' IS NULL )
The execution plan is now optimized:
----------------------------------------------------------------
| Id | Operation | Name | Rows | Cost |
----------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 2 |
| 1 | TABLE ACCESS BY INDEX ROWID| EMPLOYEES | 1 | 2 |
|* 2 | INDEX RANGE SCAN | EMP_UP_NAME | 1 | 1 |
----------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access(UPPER("LAST_NAME")='WINAND')
Warning
Using literal values in SQL exposes your application
to SQL Injection and causes performance problems
due to the parsing overhead.
However, that is not a good solution—avoid literal values
whenever possible.
The best solution is to build the SQL statement dynamically—only
with the required clauses—with bind parameters.
SELECT first_name, last_name, subsidiary_id, employee_id
FROM employees
WHERE UPPER(last_name) = :name
The bind parameter prevents SQL Injection and improves
execution plan caching. The caching is still effective, even if there
are multiple variants of the statement. Some of them won't be
used anyway; the others will be cached and expired according to
the database's policy.
Tip
Use dynamic SQL if you need dynamic where
clauses.
Still use bind parameters when generating dynamic
SQL—otherwise, the “Dynamic SQL is Slow” myth
might become true.
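A minimal sketch of that approach, in Python for brevity (the query and column names follow the example schema above; the helper name is made up). Only the needed clauses are generated, and the values still go through placeholders:

```python
# build only the required where clauses, still with bind parameters
def build_query(subsidiary_id=None, employee_id=None, last_name=None):
    sql = ("SELECT first_name, last_name, subsidiary_id, employee_id"
           " FROM employees")
    conditions, params = [], []
    if subsidiary_id is not None:
        conditions.append("subsidiary_id = ?")
        params.append(subsidiary_id)
    if employee_id is not None:
        conditions.append("employee_id = ?")
        params.append(employee_id)
    if last_name is not None:
        conditions.append("UPPER(last_name) = ?")
        params.append(last_name.upper())
    if conditions:
        sql += " WHERE " + " AND ".join(conditions)
    return sql, params

sql, params = build_query(last_name="winand")
print(sql)     # ... WHERE UPPER(last_name) = ?
print(params)  # ['WINAND']
```

Each combination of filters yields its own statement text, so the optimizer can prepare the best plan for each variant, and the plans are still cacheable because the search terms stay out of the SQL string.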
Math
There is one more cluster of techniques that's smart, but prevents
index usage in most cases. It's similar to smart logic, but using
mathematics.
Consider the following statement. Can it use an index on
NUMERIC_NUMBER?
SELECT numeric_number
FROM table_name
WHERE numeric_number - 1000 > ?
Or can the following statement use an index on A and B (in any
order)?
SELECT a, b
FROM table_name
WHERE 3 * a + 5 = b
Let's put that question into a different perspective: if you were
developing an SQL database, would you add an equation-solving
engine?
Most database vendors say "no" to that question. So, none of them
uses an index for any of the samples above.
That technique is sometimes even used to intentionally disable a
particular index—e.g., by adding zero:
SELECT numeric_number
FROM table_name
WHERE numeric_number + 0 = ?
This is actually more portable than a hint—however, that's not a
recommendation to do it.
Nevertheless, a function-based index will work for all the samples
above. It's even possible to use a function-based index along with a
mathematical transformation to index a statement that can't use
an index otherwise:
SELECT a, b
FROM table_name
WHERE a = b
There is no index that supports this query with an INDEX RANGE
SCAN. However, the statement can be transformed so that the
following function-based index works:
CREATE INDEX a_minus_b ON table_name (a - b)
How? The where expression must be transformed so that all the
table columns are on the one side, but the constants on the other
side of the equation:
SELECT a, b
FROM table_name
WHERE a - b = 0
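The transformation can be sketched with SQLite, which supports indexes on expressions (similar to function-based indexes); table and data are hypothetical. The original form forces a full scan, the transformed form can use the expression index:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table_name (a INTEGER, b INTEGER)")
# SQLite expression index, analogous to the function-based index above
con.execute("CREATE INDEX a_minus_b ON table_name (a - b)")

def plan(sql):
    return " ".join(row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

# original form: no index supports it, full table scan
p_equal = plan("SELECT a, b FROM table_name WHERE a = b")

# transformed form: columns on one side, constant on the other
p_transformed = plan("SELECT a, b FROM table_name WHERE a - b = 0")

print(p_equal)        # SCAN ...
print(p_transformed)  # SEARCH ... USING INDEX a_minus_b ...
```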
Appendix A. Myth Directory
One of the major issues with “mature” software is that it is often
surrounded by myths. The Oracle database seems to be
particularly vulnerable to this phenomenon, probably because the
name Oracle has been in use since 1979. Some myths were the best
current practice at that time but have been outdated for many
releases.
In an attempt to thin out the dark woods of Oracle database
myths, I list and explain some of the most common myths and
misbeliefs here.
Indexes Can Degenerate
The most prominent myth is that an index can degenerate over
time and must be rebuilt regularly. First of all, the database keeps
the tree in balance—always. It is not possible that a single
fragment of the index grows deeper and deeper until the tree
traversal becomes slow. What can happen is that the index
becomes bigger than needed. If there are many UPDATE or DELETE
statements involved, space utilization can become suboptimal.
However, even if the index is bigger than required, it is very
unlikely that the depth of the index grows because of that.
As explained in the section called “The Tree ”, the number of
entries in the index must typically grow by a factor of hundred to
increase the index depth by one level.
Rebuilding an index might reduce the number of leaf nodes by
about 20%-30%. The most you can possibly expect from this
reduction is a 20%-30% gain for very expensive operations like a FULL
INDEX SCAN. The typical INDEX UNIQUE SCAN gain of an index rebuild is
0%-2% because the depth of the index is not reduced by the
rebuild.
Most Selective First
Every time a compound index is created, the order of the columns
must be chosen wisely; the section called “Concatenated Keys” is
devoted to this question.
However, there is the myth that you should always put the most
selective column in the first position; that is just wrong. Others
have tried to prove that before, but the myth persists.
Important
The most important factor to consider when defining
a concatenated index is the number of statements it
can support.
After that, there are even reasons to put the least selective
column first:
To utilize an INDEX SKIP SCAN
To accomplish a better compression
However, that's advanced stuff. But the most important
factor...uhm, did I say that before?
I'm not the first to fight this myth. Here are some more
references that disprove it:
Tom Kyte - "Expert Oracle Database Architecture" -
Apress
Devotes the section "Myth: Most Discriminating Elements
Should Be First" to this topic.
Tom Kyte also commented on this on AskTom.
Guy Harrison - Oracle Performance Survival Guide -
Prentice Hall
In "Guidelines for Concatenated Indexes":
Don't automatically put the most selective term first in
a concatenated index. [...]
Jonathan Lewis - Author of "Cost-Based Oracle
Fundamentals" - Apress
I didn't find it explicitly in his book, however, Jonathan
Lewis blogged it:
One of the often-quoted fairy-tales about indexes was
the directive to “put the most selective column first”. It
was never a sensible rule of thumb (except, possibly,
prior to version 6.0). But if you have a lot of code that
uses this nvl() construct then there may actually be a
good argument for adopting this approach. (There’s a
better argument for fixing the code, of course).
The core of truth behind this myth is related to indexing
independent range conditions—that is the only case where the
selectivity should influence the index design (see the section called
“Index Combine”).
The Oracle Database Cannot Index
NULL
The source of this myth is rather easy to understand when you
look at the correctly expressed statement:
The Oracle database does not include a row in an index if all
indexed columns are NULL.
The difference between the myth and the reality is small—it
seems that the myth is a sloppy form of the truth.
The truth is that NULL can be indexed by adding another, not
nullable, column to the index:
CREATE INDEX with_null ON table_name (nullable_column, '1');
Read the section called “Indexing NULL” for the full story.
Dynamic SQL is Slow
The core of truth behind the “Dynamic SQL is Slow” myth is
rather simple; dynamic SQL can be slow—when done wrong.
So, what's dynamic SQL? It's the opposite of embedding SQL
directly into the program's source code. PL/SQL and other stored
procedure dialects are good examples of embedded SQL, but it's
also possible to embed SQL into C and other languages. The
benefit of embedded SQL is that it integrates very smoothly with
the respective programming language. However, embedded SQL
is compiled into the program. It cannot change when the program
runs—it's static.
Dynamic SQL, to the contrary, is handled as string within the
application. The program can change it at runtime. However,
that's probably the cause of the myth. Consider the following
example:
String sql = "SELECT first_name, last_name"
+ " FROM employees"
+ " WHERE employee_id = " + employeeId;
ResultSet rs = con.executeQuery(sql);
So, is that dynamic SQL? The actual SQL string is built
dynamically during program runtime. However, that is not what is
meant by dynamic SQL because the structure of the SQL
statement doesn't change. That example should be changed to use
bind parameters—that's all, nothing dynamic here.
Real dynamic SQL changes the structure of the statement during
runtime. That's something that cannot be done with bind
parameters—like a conditional where clause:
String where = "";
if (subsidiaryId != null) {
    where += (where.isEmpty() ? " WHERE " : " AND ")
           + "subsidiary_id = " + subsidiaryId;
}
if (employeeId != null) {
    where += (where.isEmpty() ? " WHERE " : " AND ")
           + "employee_id = " + employeeId;
}
if (lastName != null) {
    where += (where.isEmpty() ? " WHERE " : " AND ")
           + "UPPER(last_name) = '" + lastName.toUpperCase() + "'";
}
String sql = "SELECT employee_id, first_name, last_name"
           + " FROM employees"
           + where;
// execute sql
The code constructs an SQL statement to fetch employees based
on any combination of three filter criteria. Although the code is
rather awkward, the constructed SQL can be executed and the
database will use the best index to retrieve the data. The problem
with that code is not only the obvious SQL injection
vulnerability, but also that it introduces a very high parsing
overhead.
That means that the database has to recreate the execution plan
every time because the inlined search terms—which are different
every time—prevent caching. The section called “Bind
Parameter” explains parsing overhead in more detail.
Implementing the example with plain JDBC and bind parameters
yields even more awkward code—it's omitted here. However,
most ORM frameworks provide a sufficiently convenient way to
dynamically create SQL using bind parameters. The following
overview shows some samples:
Java
The following sample demonstrates Hibernate's Criteria
classes:
Criteria criteria = session.createCriteria(Employees.class);
if (subsidiaryId != null) {
    criteria.add(Restrictions.eq("subsidiaryId", subsidiaryId));
}
if (employeeId != null) {
    criteria.add(Restrictions.eq("employeeId", employeeId));
}
if (lastName != null) {
    criteria.add(
        Restrictions.eq("lastName", lastName).ignoreCase()
    );
}
When providing LAST_NAME only, the following SQL is
generated by Hibernate (Oracle):
select this_.subsidiary_id as subsidiary1_0_0_,
[... other columns ...]
from employees this_
where lower(this_.last_name)=?
Please note that a bind parameter is used and that the LOWER
function implements the ignoreCase() functionality. The same is
true for the ilike restriction. That's a very important fact for
function-based indexing.
The Java Persistence API (JPA) has similar functionality.
However, it's less straight and doesn't support a native
case-insensitive search (that's probably good):
List<Predicate> predicates = new ArrayList<Predicate>();
if (lastName != null) {
    predicates.add(queryBuilder.equal(
        queryBuilder.upper(r.get(Employees_.lastName))
      , lastName.toUpperCase())
    );
}
if (employeeId != null) {
    predicates.add(queryBuilder.equal(
        r.get(Employees_.employeeId)
      , employeeId)
    );
}
if (subsidiaryId != null) {
    predicates.add(queryBuilder.equal(
        r.get(Employees_.subsidiaryId)
      , subsidiaryId)
    );
}
query.where(predicates.toArray(new Predicate[0]));
You see that the example is less straight in favour of compile-time
type safety. Another difference is that JPA doesn't support native
case-insensitive operators—explicit case conversion is needed.
That's probably good, to keep awareness of and control over it.
Just as a side note: the native Hibernate API supports explicit
case conversion as well.
Tip
Download the complete sample code and try it
yourself.
Perl
The following sample demonstrates Perl's DBIx::Class
framework:
my @search = ();
if (defined $employee_id) {
    push @search, {employee_id => $employee_id};
}
if (defined $subsidiary_id) {
    push @search, {subsidiary_id => $subsidiary_id};
}
if (defined $last_name) {
    push @search, {'UPPER(last_name)' => uc($last_name)};
}
my @employees = $schema->resultset('Employees')
                       ->search({-and => \@search});
This results in the following (Oracle) SQL when searching by
LAST_NAME:
SELECT me.employee_id, me.subsidiary_id,
me.last_name, me.first_name,
me.date_of_birth
FROM employees me
WHERE ( UPPER(last_name) = :p1 )
Tip
Download the complete sample code and try it
yourself.
PHP
The following sample demonstrates PHP's Doctrine
framework:
$filter = $qb->expr()->andx();
if (isset($employee_id)) {
    $filter->add(
        $qb->expr()->eq('e.employee_id', ':employee_id'));
    $qb->setParameter('employee_id', $employee_id);
}
if (isset($subsidiary_id)) {
    $filter->add(
        $qb->expr()->eq('e.subsidiary_id', ':subsidiary_id'));
    $qb->setParameter('subsidiary_id', $subsidiary_id);
}
if (isset($last_name)) {
    $filter->add($qb->expr()->eq(
        $qb->expr()->upper('e.last_name'), ':last_name'));
    $qb->setParameter('last_name', strtoupper($last_name));
}
if ($filter->count() > 0) {
    $qb->where($filter);
}
Please note that you have to use bind parameters explicitly.
Doctrine generates the following SQL for a search by last
name (MySQL):
SELECT e0_.employee_id AS employee_id0,
[... other columns ...]
FROM employees e0_
WHERE UPPER(e0_.last_name) = ?
Tip
Download the complete sample code and try it
yourself.
Using dynamic SQL, with bind parameters, allows the optimizer to
choose the best execution plan for the particular combination of
where clauses. That yields better performance than constructions
like the one described in the section called “Smart Logic”:
SELECT first_name, last_name
FROM employees
WHERE ( employee_id = ? OR ? IS NULL)
AND ( subsidiary_id = ? OR ? IS NULL)
AND (UPPER(last_name) = ? OR ? IS NULL)
However, there are some—I'd say rare—cases when dynamic SQL
can be slower than static SQL with switches like above. That's
when very cheap (fast) SQL statements are executed at a very
high frequency. But, first of all, there are two terms to explain:
Hard Parsing
Hard parsing is constructing an execution plan based on the
SQL statement. That's a major effort; inspecting all parts of
the SQL; considering all indexes; considering all join orders
and so on. Hard parsing is very resource intensive.
Soft Parsing
Soft parsing is searching, finding and using a cached
execution plan. There are some minor checks done, e.g.,
access rights, but the execution plan can be re-used as is.
That is a rather fast operation.
The key to the cache is basically the literal SQL string—
usually a hash of it. If there is no exact match, a hard parse
is triggered. That's why inlined literals—as opposed to bind
parameters—trigger a hard parse unless the very same
search terms are used again. But even in that case, chances are
that the previous execution plan has already been expired from
the cache because new ones are coming in over and over again.
However, there is a way to execute a statement without any
parsing at all—not even soft parsing. The trick is to keep the
parsed statement open, e.g., like in the following Java pseudocode:
PreparedStatement pSQL = con.prepareStatement("select ...");
for (String last_name : last_names) {
    pSQL.setString(1, last_name.toUpperCase());
    ResultSet rs = pSQL.executeQuery();
    // process result
}
pSQL.close();
Note that the PreparedStatement is opened and closed only once—
yet it can be executed many times. That means there is only one
parsing operation—during prepare—but none inside the loop.
The pitfall is that converting that to dynamic SQL moves the
prepareStatement call into the loop—causing a soft-parse for each
execution. The parsing overhead, that might also include network
latencies, can exceed the savings of a better execution plan when
the statement is executed often and runs fast anyway. That's
especially true if the actual execution plan doesn't vary for the
different where clauses—e.g., because one well indexed where
clause is always present.
Even though the “prepare before loop” trick is seldom used
explicitly, it is very common in stored procedures—implicitly.
Languages such as PL/SQL—with real static SQL—prepare the
SQL when the procedure is compiled or, at most, once per
execution. Changing that to dynamic SQL can easily kill
performance.