How to Use Indexes for SQL Server Performance Tuning (Practical Guide)

Last updated January 14, 2026

Indexes are one of the most powerful—and most misused—tools in SQL Server performance tuning. When they match real query patterns, indexes reduce logical reads, shorten CPU time, and make response times predictable under load. When they’re guessed, duplicated, or allowed to drift out of alignment with the workload, they add write overhead, bloat storage, and can even make the optimizer choose worse plans.

This article is a practical, operations-minded guide to using indexes for SQL Server performance tuning. The goal is not “add indexes until it’s fast,” but to apply a repeatable process: measure the workload, identify the right access paths, implement the smallest effective index, validate with plans and metrics, and then keep it healthy through maintenance and change control. Along the way, you’ll see how clustered and nonclustered indexes behave, how included columns and filtered indexes change the cost model, and how to avoid common pitfalls like non-SARGable predicates and parameter sensitivity.

Because index work is inseparable from how SQL Server reads data, we’ll start with the fundamentals of access methods and then move into a workflow you can use on any instance, from a single OLTP database to a multi-tenant system with mixed workloads.

Establish a tuning workflow that starts with evidence

Index tuning goes wrong most often when it starts with a hunch. Before you design anything, decide what “better” means, and capture enough evidence to prove an improvement and avoid regressions.

At a minimum, you want to answer three questions for a candidate query or stored procedure: how much CPU does it consume, how much IO does it drive (logical reads are usually the best first indicator), and how often does it run. A query that runs once a day but reads 500 million pages might still be less urgent than a query that runs 1,000 times per minute and reads 2,000 pages each time.

Start by enabling and using built-in instrumentation. Query Store (SQL Server 2016+) is particularly valuable because it captures query text, runtime stats, and plan history over time; it also helps you distinguish “bad indexing” from “plan regression after a deployment.” For ad-hoc triage, dynamic management views (DMVs) such as sys.dm_exec_query_stats and sys.dm_db_index_usage_stats let you see what’s expensive and what indexes are actually being used.

A simple baseline for a specific query is still helpful during design. Capture SET STATISTICS IO, TIME ON output and the actual execution plan before you change anything. For a production system, avoid running heavy “capture everything” techniques during peak hours; rely on Query Store for historical context and sample a few representative queries during a maintenance window.
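
For example, a minimal baseline capture might look like this (the table and parameter are hypothetical placeholders for the query you’re tuning):

sql
-- Run in SSMS with "Include Actual Execution Plan" enabled.
-- Reads and timings appear on the Messages tab.
SET STATISTICS IO, TIME ON;

DECLARE @CustomerId int = 42;   -- representative value, not an edge case

SELECT OrderId, CreatedAt, TotalAmount
FROM dbo.Orders                 -- hypothetical table
WHERE CustomerId = @CustomerId;

SET STATISTICS IO, TIME OFF;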

The following T-SQL is a safe starting point for identifying expensive queries by read volume, with CPU and duration columns alongside for context. It won’t tell you which index to build, but it tells you where index work might matter.

sql
SELECT TOP (25)
    qs.total_worker_time / 1000.0 AS total_cpu_ms,
    qs.total_elapsed_time / 1000.0 AS total_duration_ms,
    qs.total_logical_reads AS total_logical_reads,
    qs.execution_count,
    (qs.total_logical_reads * 1.0) / NULLIF(qs.execution_count, 0) AS avg_logical_reads,
    (qs.total_worker_time * 1.0) / NULLIF(qs.execution_count, 0) AS avg_cpu,
    DB_NAME(st.dbid) AS database_name,
    SUBSTRING(st.text, (qs.statement_start_offset/2) + 1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(st.text)
            ELSE qs.statement_end_offset
          END - qs.statement_start_offset)/2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;

As you move through the rest of this guide, keep this workflow in mind: identify the highest-impact queries, understand how they access data, design an index that matches that access, validate with plans and metrics, and then operationalize the change.

Understand how SQL Server uses indexes to access data

SQL Server stores data in 8 KB pages. An index is a data structure (typically a B-tree) that organizes keys so the engine can locate rows efficiently without scanning an entire table or heap. The performance win comes from reducing page reads and narrowing the search to a small range of pages.

Two access methods show up repeatedly in execution plans and are central to index tuning: seek and scan. An index seek navigates the B-tree to a specific key or range of keys; a scan reads many or all pages. A scan is not automatically “bad”—for small tables or broad queries it can be optimal—but for selective predicates on large tables, a scan is usually the reason you’re doing too much IO.

A second concept you’ll see throughout is the lookup: a key lookup against a clustered index, or a RID lookup against a heap. When a nonclustered index can locate qualifying rows but does not contain all columns the query needs, SQL Server performs lookups back to the base table to retrieve the missing columns. Lookups are often acceptable at low row counts, but they can be disastrous when the predicate returns many rows, turning into thousands of random IO operations.

Finally, the optimizer is cost-based: it chooses the plan that it estimates will be cheapest. Indexes influence those estimates, but so do statistics (histograms and density information). Many “index problems” are actually statistics problems or query shape problems, so it’s important to interpret index recommendations through that lens.

Choose the right table structure: heaps vs clustered indexes

Before you add nonclustered indexes, confirm the base table structure supports the workload. A heap is a table without a clustered index; rows are not stored in key order. Heaps can be valid for staging tables or write-heavy, read-rarely tables, but they complicate point lookups and can amplify fragmentation-like behavior through forwarded records.

A clustered index defines the logical order of rows by the clustered key (and, in practice, largely their physical placement). Every nonclustered index uses the clustered key as the row locator (unless the table is a heap, in which case the locator is a RID). This makes the clustered key a foundational decision: it affects nonclustered index size, lookup cost, and even insert behavior.

For OLTP systems, a clustered index is usually appropriate. A common pattern is a narrow, ever-increasing surrogate key (like an IDENTITY bigint) to reduce page splits and keep inserts mostly sequential. But “narrow and increasing” isn’t a universal rule. If your access pattern is heavily range-based on a timestamp (for example, WHERE CreatedAt >= @Start AND CreatedAt < @End), a clustered index on the timestamp can be advantageous, especially if most reads are recent data and older data is rarely accessed.

When choosing a clustered key, consider uniqueness, width, and volatility. A wide clustered key (for example, a composite of several NVARCHAR columns) increases every nonclustered index size because the clustered key is stored in each nonclustered index row. A volatile key that changes frequently causes expensive updates to every referencing nonclustered index. You typically want a stable key that aligns with range access and is as narrow as practical.
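
As a sketch of those trade-offs (the schema is illustrative), a narrow, stable, ever-increasing clustered key looks like this:

sql
-- OrderId is narrow (8 bytes), unique, stable, and ever-increasing:
-- inserts append to the end of the B-tree, and every nonclustered
-- index stores only this 8-byte locator per row.
CREATE TABLE dbo.Orders
(
    OrderId     bigint IDENTITY(1, 1) NOT NULL,
    CustomerId  int           NOT NULL,
    CreatedAt   datetime2(3)  NOT NULL,
    TotalAmount decimal(19,4) NOT NULL,
    Status      varchar(20)   NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderId)
);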

In the sections that follow, you’ll design nonclustered indexes assuming the clustered key decision is sensible. If you’re working with heaps or questionable clustered keys, you may need to address that foundation first, because nonclustered index tuning can only compensate so much.

Map query patterns to index types and access paths

Index design becomes straightforward when you translate “what the app does” into a few common query patterns.

For point lookups (WHERE Id = @Id), the best index is one where the predicate columns are the leading key columns. For range scans (WHERE CreatedAt BETWEEN ...), you want the range column early in the key so SQL Server can seek to the start and scan to the end. For queries that filter on one column and order by another, you may need a composite index so SQL Server can both seek and return rows in the desired order without an extra sort.

For aggregations and reporting-style queries, you often want an index that supports ordered scans or reduces lookups by “covering” the query. Covering means the index contains all columns needed by the query, either as key columns or included columns. Covering does not always mean “fast”—a very wide covering index can increase storage and write cost significantly—but it can eliminate lookups and reduce CPU.

SQL Server offers several index variants you’ll use in tuning:

  • Nonclustered indexes: the default choice for accelerating predicates and joins.
  • Included columns: non-key columns stored at the leaf level to cover queries without changing key order.
  • Filtered indexes: indexes with a WHERE clause that only cover a subset of rows, useful for skewed data distributions.
  • Unique indexes: enforce uniqueness and can improve cardinality estimation.
  • Columnstore indexes: specialized for analytics/warehouse workloads; useful, but not the focus of this B-tree indexing guide.

The next step is to connect these tools to the optimizer’s decisions by reading execution plans.

Read execution plans with an indexing lens

Execution plans can be intimidating, but for index tuning you can focus on a few repeatable signals.

Start with the highest-cost operators and look for scans on large tables, expensive sorts, hash joins caused by large intermediate sets, and nested loops with high row counts that drive repeated lookups. In SQL Server Management Studio (SSMS), use the Actual Execution Plan, check the warnings (yellow triangles) for spills and implicit conversions, and note any missing-index suggestion SSMS prints above the plan (treat it as a lead, not a design).

When you see an index scan, the question is not “how do I avoid scans?” but “is this scan reading far more than the query needs?” If a scan touches a tiny table or returns a large portion of the table, it might be fine. If a scan touches millions of pages to return a small number of rows, you likely need an index or a query rewrite to make the predicate SARGable.

When you see a seek plus a key lookup, focus on the estimated and actual number of rows. If the query returns a handful of rows and the lookup count is small, this may be optimal and adding a wide covering index might harm write performance. If the query returns thousands of rows and performs thousands of lookups, that’s usually a good candidate for a covering index or for adjusting the predicate to be more selective.

Also pay attention to whether the plan uses a different index than you expect. That’s often a sign the index isn’t selective enough, the stats are stale, or the query’s predicate shape prevents a seek.

Make predicates SARGable so indexes can be used

An index can only help if SQL Server can use it efficiently. This is where SARGability matters. A predicate is SARGable (Search ARGument-able) when SQL Server can use it to seek into an index. If you wrap the indexed column in a function or force an implicit conversion, SQL Server may be unable to seek and will scan instead.

Common non-SARGable patterns include:

  • WHERE CONVERT(date, CreatedAt) = @Date
  • WHERE ISNULL(Status, 'X') = 'A'
  • WHERE LEFT(Email, 3) = 'abc'
  • Comparing different data types, causing implicit conversion on the column side

Instead, rewrite predicates so the column is “naked” on one side of the comparison. For date filtering, use ranges:

sql
-- Non-SARGable
WHERE CONVERT(date, CreatedAt) = @Date

-- SARGable
WHERE CreatedAt >= @Date
  AND CreatedAt < DATEADD(day, 1, @Date);

This matters because an index on CreatedAt can support a range seek, but not a function-wrapped equality. Before you add indexes, check whether the query can be made SARGable. It’s one of the highest ROI changes you can make because it improves plan quality without adding write overhead.
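
The other patterns in the list above can usually be rewritten the same way; a sketch (column names are illustrative):

sql
-- ISNULL wrapper: NULL can never equal 'A', so the wrapper adds nothing.
-- Non-SARGable: WHERE ISNULL(Status, 'X') = 'A'
WHERE Status = 'A'

-- String prefix: a leading-prefix LIKE can seek; LEFT() cannot.
-- Non-SARGable: WHERE LEFT(Email, 3) = 'abc'
WHERE Email LIKE 'abc%'

-- Implicit conversion: match the parameter type to the column type
-- (for example, pass VARCHAR, not NVARCHAR, against a VARCHAR column)
-- so any conversion lands on the parameter side, not the column side.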

Design composite indexes that match equality, range, and ORDER BY

Most performance-critical queries filter on more than one column, so you’ll build composite indexes. The order of key columns is not cosmetic; it defines what predicates can seek.

A practical rule: put equality predicates first, then range predicates, then columns that support ORDER BY or grouping. Equality predicates (like CustomerId = @CustomerId) narrow the search to a specific segment of the index. Range predicates (like CreatedAt >= @Start AND CreatedAt < @End) define a contiguous span. Columns after a range predicate are less useful for seeking (though they can still help ordering in some cases).

Consider a query pattern:

sql
SELECT TOP (50) OrderId, CreatedAt, TotalAmount
FROM dbo.Orders
WHERE CustomerId = @CustomerId
  AND CreatedAt >= @Start
  AND CreatedAt < @End
ORDER BY CreatedAt DESC;

A good index here is typically:

sql
CREATE INDEX IX_Orders_CustomerId_CreatedAt
ON dbo.Orders (CustomerId, CreatedAt DESC)
INCLUDE (TotalAmount);

This allows SQL Server to seek into CustomerId, then range scan CreatedAt in the needed order, returning rows already sorted. Including TotalAmount avoids a lookup if the clustered key doesn’t already cover it. Note that DESC can matter for avoiding an extra sort when you regularly query in descending order.

If you invert the key order (CreatedAt, CustomerId), the query might still use the index, but it may scan a much larger range and filter CustomerId after the fact. That’s an example of an index that “exists” but doesn’t match the query’s selectivity.

This column-order discipline becomes even more important when you have many similar queries. A single well-chosen composite index can support multiple queries; multiple slightly different indexes can create high write overhead with little benefit.

Use included columns to cover without inflating key width

Key columns determine ordering and how seeks work. Included columns exist to cover the SELECT list without changing key order. They are stored only at the leaf level, which usually makes them cheaper than adding those columns to the key.

Included columns are best for columns that are returned frequently but are not used for filtering, joining, or ordering. For example, a query may filter on CustomerId and CreatedAt, but return TotalAmount, TaxAmount, and CurrencyCode. Those three columns are good candidates for INCLUDE.
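
A sketch of that shape (TaxAmount and CurrencyCode are hypothetical columns):

sql
-- Key columns support the seek; INCLUDE covers the SELECT list at the
-- leaf level without widening the key.
CREATE INDEX IX_Orders_CustomerId_CreatedAt_Cover
ON dbo.Orders (CustomerId, CreatedAt)
INCLUDE (TotalAmount, TaxAmount, CurrencyCode);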

However, included columns are not free. A wide include list can make the index large, increase memory pressure, and slow down inserts/updates. As a practical approach, include only what you need to remove expensive lookups on the hot queries you are tuning. If a query returns large text columns, consider whether you actually need to cover them: large object (LOB) types such as NVARCHAR(MAX) and VARBINARY(MAX) can appear as included columns but never as key columns, and they make indexes heavy.

A useful validation step after adding includes is to compare logical reads and plan shape. If the query switches from seek + lookup to a single index seek/scan that returns everything, and logical reads drop meaningfully, you likely made a good trade.

Apply filtered indexes to skewed data distributions

Filtered indexes target a subset of rows using a WHERE clause in the index definition. They are most effective when the workload frequently queries a small subset of a large table.

For example, many systems query “active” rows far more often than historical rows. If a table has 100 million rows but only 2 million are active, an index filtered on active rows can be much smaller and more efficient.

Consider a tickets table where most tickets are closed, but the application dashboard queries open tickets all day:

sql
SELECT TicketId, Priority, AssignedTo, UpdatedAt
FROM dbo.Tickets
WHERE Status = 'Open'
  AND AssignedTo = @UserId
ORDER BY UpdatedAt DESC;

A filtered index can help significantly:

sql
CREATE INDEX IX_Tickets_Open_AssignedTo_UpdatedAt
ON dbo.Tickets (AssignedTo, UpdatedAt DESC)
INCLUDE (Priority)
WHERE Status = 'Open';

This index is smaller and more selective than an unfiltered index on (Status, AssignedTo, UpdatedAt). It can also improve plan stability because the optimizer has a clearer path for the common “open tickets” case.

Filtered indexes require the query predicate to match the filter definition. If your queries use different literal values, variables, or parameterization patterns, validate that the optimizer can still match the filtered index. In some cases, you may need to use OPTION (RECOMPILE) for specific queries to let the optimizer see the parameter values and choose the filtered index appropriately, but treat recompilation as a targeted tool, not a default.
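
For example, a sketch of the parameterized case against the index above (assuming Status arrives as a parameter):

sql
-- A reusable cached plan can't assume @Status = 'Open', so the filtered
-- index may be skipped. OPTION (RECOMPILE) compiles with the actual
-- value, letting the optimizer match the filter when it qualifies.
SELECT TicketId, Priority, AssignedTo, UpdatedAt
FROM dbo.Tickets
WHERE Status = @Status
  AND AssignedTo = @UserId
ORDER BY UpdatedAt DESC
OPTION (RECOMPILE);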

Align indexing with joins and foreign keys

Joins are a common source of expensive scans and hash joins. Indexing the join columns reduces the need to build large hash tables and can enable efficient nested loops joins where appropriate.

A basic but frequently missed practice is indexing foreign key columns. SQL Server does not automatically create indexes for foreign keys. Without them, joining a parent table to a large child table often forces scans on the child.

If you have a common pattern like:

sql
SELECT o.OrderId, o.CreatedAt, c.CustomerName
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerId = o.CustomerId
WHERE o.CreatedAt >= @Start AND o.CreatedAt < @End;

An index on Orders(CustomerId, CreatedAt) can serve both the join and the date filter (or vice versa depending on selectivity). On the Customers side, CustomerId is usually the clustered primary key already, so the join is efficient.
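
A sketch of the two candidate orderings (hypothetical names; pick one based on measured selectivity rather than creating both):

sql
-- Join-first: seek to the customer, then range-filter the dates.
CREATE INDEX IX_Orders_CustomerId_CreatedAt
ON dbo.Orders (CustomerId, CreatedAt);

-- Date-first: seek the date range, then filter the join column.
CREATE INDEX IX_Orders_CreatedAt_CustomerId
ON dbo.Orders (CreatedAt, CustomerId);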

When indexing for joins, keep cardinality in mind. Indexing a column with very low selectivity (like a Status column with three values) won’t help unless paired with a more selective column or used as part of a filtered index. Joins benefit most from indexes on columns that substantially reduce the rowset early.

Use Query Store and DMVs to find index candidates (carefully)

SQL Server can suggest missing indexes via DMVs such as sys.dm_db_missing_index_details and related views. These can be useful, but they are not a design blueprint. Missing index suggestions are per-query, can propose overlapping indexes, and don’t account for write overhead or existing indexes that are “close enough.”

Still, they are good for discovery. Here is a commonly used query to list high-impact missing index groups for a database:

sql
SELECT TOP (20)
    migs.avg_user_impact,
    migs.user_seeks,
    migs.user_scans,
    mid.statement AS table_name,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns
FROM sys.dm_db_missing_index_group_stats AS migs
JOIN sys.dm_db_missing_index_groups AS mig
  ON migs.group_handle = mig.index_group_handle
JOIN sys.dm_db_missing_index_details AS mid
  ON mig.index_handle = mid.index_handle
WHERE mid.database_id = DB_ID()
ORDER BY migs.avg_user_impact DESC, migs.user_seeks DESC;

Treat the output as a set of hypotheses. Before creating anything, check whether:

  • An existing index already covers the same leading keys.
  • The suggested includes would make the index too wide.
  • The query can be rewritten to be SARGable.
  • The workload would suffer from added write cost.

In parallel, use sys.dm_db_index_usage_stats to identify unused or rarely used indexes. An index with high user_updates and near-zero user_seeks/user_scans is a candidate for removal, but validate carefully: usage stats reset on restart and may not reflect infrequent but critical jobs.

sql
SELECT
    OBJECT_NAME(i.object_id) AS table_name,  -- i.object_id: the usage row (s) is NULL for never-used indexes
    i.name AS index_name,
    s.user_seeks, s.user_scans, s.user_lookups,
    s.user_updates
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS s
  ON s.object_id = i.object_id
 AND s.index_id = i.index_id
 AND s.database_id = DB_ID()
WHERE OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
ORDER BY s.user_updates DESC;

This discovery work sets you up to act surgically: add an index to address a specific, measured problem and remove indexes that demonstrably cost more than they deliver.

Real-world example 1: Reducing dashboard latency caused by lookups

Consider an internal operations dashboard that shows the most recent orders per customer. Users report that it “hangs” during peak hours. Query Store shows a single query dominating logical reads.

The query looks like this:

sql
SELECT TOP (100)
    o.OrderId,
    o.CreatedAt,
    o.TotalAmount,
    o.Status
FROM dbo.Orders AS o
WHERE o.CustomerId = @CustomerId
ORDER BY o.CreatedAt DESC;

The table has a clustered index on OrderId (IDENTITY), and a nonclustered index exists on CustomerId. The execution plan shows an index seek on CustomerId, then a key lookup to fetch CreatedAt, TotalAmount, and Status, followed by a sort. Under load, the lookup count is high because active customers have many orders; the nested loops + lookups pattern becomes CPU-heavy and creates latch contention on the clustered index pages.

Index tuning here follows the principles discussed earlier. The query filters by CustomerId and orders by CreatedAt DESC. A composite index on (CustomerId, CreatedAt DESC) can avoid the sort, and includes can eliminate lookups.

sql
CREATE INDEX IX_Orders_CustomerId_CreatedAtDesc
ON dbo.Orders (CustomerId, CreatedAt DESC)
INCLUDE (TotalAmount, Status);

After deployment, SET STATISTICS IO, TIME ON shows a substantial drop in logical reads, and the plan becomes a straightforward index seek returning rows in order. The write overhead is acceptable because the Orders table has far fewer updates than reads and inserts are already sequential due to the clustered key.

What’s notable is that the original CustomerId index was not “wrong,” it was just incomplete for the actual query shape. By aligning key order and covering columns, you reduce both IO and CPU, and you remove the sort.

Balance read performance against write overhead and index bloat

Every additional index has a cost: inserts, updates, and deletes must maintain each index. This increases log generation, can slow down write-heavy workloads, and can lengthen recovery times.

For IT administrators managing production instances, the operational costs matter as much as query latency. If you index every query, you’ll eventually shift the bottleneck from reads to writes, or from CPU to storage and memory.

A practical approach is to treat indexes as part of capacity management:

  • Prefer one well-designed composite index that supports multiple queries over many narrow, overlapping indexes.
  • Avoid wide indexes that include many columns “just in case.” Add includes based on measured lookup pain.
  • Review index usage periodically and remove indexes that are demonstrably unused or redundant.

Redundancy is especially common when different developers add indexes for different incidents. Two indexes that share the same leading keys but differ only slightly in includes often can be consolidated.
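
A sketch for surfacing consolidation candidates (requires SQL Server 2017+ for STRING_AGG; treat matches as leads to review, not automatic drops):

sql
WITH IndexKeys AS
(
    SELECT i.object_id, i.index_id, i.name,
           STRING_AGG(c.name, ',') WITHIN GROUP (ORDER BY ic.key_ordinal) AS key_cols
    FROM sys.indexes AS i
    JOIN sys.index_columns AS ic
      ON ic.object_id = i.object_id AND ic.index_id = i.index_id
    JOIN sys.columns AS c
      ON c.object_id = ic.object_id AND c.column_id = ic.column_id
    WHERE i.index_id > 0
      AND ic.is_included_column = 0
    GROUP BY i.object_id, i.index_id, i.name
)
SELECT OBJECT_NAME(a.object_id) AS table_name,
       a.name AS index_a, a.key_cols AS keys_a,
       b.name AS index_b, b.key_cols AS keys_b
FROM IndexKeys AS a
JOIN IndexKeys AS b
  ON a.object_id = b.object_id
 AND a.index_id <> b.index_id
 AND (b.key_cols = a.key_cols OR b.key_cols LIKE a.key_cols + ',%')  -- a's keys are a leading prefix of b's
ORDER BY table_name, index_a;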

Also consider fill factor (the percentage of space filled on leaf pages) for write-heavy indexes where page splits are frequent. Leaving free space reduces page splits at the cost of a larger index and more pages to read. It’s not a universal tuning knob; use it where you have evidence of heavy fragmentation driven by random inserts.
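
If evidence supports it, the setting is applied per index at rebuild time (the value here is illustrative, not a recommendation):

sql
-- Leave 10% free space on leaf pages to absorb out-of-order inserts.
-- The setting persists and is reapplied on future rebuilds.
ALTER INDEX IX_Orders_CustomerId_CreatedAt ON dbo.Orders
REBUILD WITH (FILLFACTOR = 90);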

Manage fragmentation and choose maintenance methods intentionally

Index fragmentation is often discussed as if it’s always the primary issue. In practice, fragmentation matters most when it increases IO for range scans or causes excessive page splits that harm write performance.

SQL Server exposes fragmentation metrics via sys.dm_db_index_physical_stats. For B-tree indexes, the common maintenance actions are reorganize (a lighter, online defragmentation) and rebuild (a heavier operation that recreates the index and updates statistics with fullscan by default).

Because maintenance has real costs—CPU, IO, log growth, blocking (depending on edition and options)—index maintenance should be driven by thresholds and by workload type. A common operational pattern is:

  • Reorganize moderately fragmented indexes.
  • Rebuild heavily fragmented indexes.
  • Skip very small indexes where fragmentation doesn’t matter.

Rather than provide one-size thresholds, calibrate based on your environment. A large table with heavy range scans may benefit from stricter maintenance than a small OLTP table with mostly point lookups.

Here is a sample query to identify fragmentation for indexes above a page count threshold (to ignore tiny indexes):

sql
SELECT
    OBJECT_NAME(ps.object_id) AS table_name,
    i.name AS index_name,
    ps.index_id,
    ps.avg_fragmentation_in_percent,
    ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ps
JOIN sys.indexes AS i
  ON ps.object_id = i.object_id
 AND ps.index_id = i.index_id
WHERE ps.page_count >= 1000
  AND ps.index_id > 0
ORDER BY ps.avg_fragmentation_in_percent DESC;
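
The corresponding maintenance statements are below; thresholds are deliberately omitted, per the calibration guidance above.

sql
-- Lighter: always an online operation; compacts leaf pages.
-- Note that REORGANIZE does not update statistics.
ALTER INDEX IX_Orders_CustomerId_CreatedAt ON dbo.Orders REORGANIZE;

-- Heavier: recreates the index and refreshes its statistics.
-- ONLINE = ON requires Enterprise Edition (or Azure SQL).
ALTER INDEX IX_Orders_CustomerId_CreatedAt ON dbo.Orders
REBUILD WITH (ONLINE = ON);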

If you implement maintenance, do it with change control and monitoring. Rebuilds can grow the transaction log substantially (especially in full recovery model). If you rely on log shipping, replication, or AGs, coordinate maintenance windows and ensure the log and tempdb have capacity.

Keep statistics current because indexes rely on estimates

Indexes and statistics are tightly coupled. Statistics summarize data distribution; the optimizer uses them to estimate row counts and choose between seeks, scans, and join types. An excellent index can be ignored if the optimizer misestimates selectivity.

SQL Server can auto-create and auto-update statistics, but auto-update is triggered by row-change thresholds and can lag on very large tables. Under the legacy fixed threshold (roughly 20% of rows plus 500 changes), a huge table needs enormous churn before stats refresh; compatibility level 130 and later use a smaller dynamic threshold, but statistics on large, volatile tables can still go stale for long periods.
When you see plans with large differences between estimated and actual rows, consider statistics. A targeted stats update can be cheaper and less risky than adding new indexes.
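
A quick way to check is sys.dm_db_stats_properties (SQL Server 2012 SP1 and later); the table name is illustrative:

sql
SELECT
    OBJECT_NAME(s.object_id) AS table_name,
    s.name AS stats_name,
    sp.last_updated,
    sp.rows,
    sp.rows_sampled,
    sp.modification_counter   -- rows changed since the last update
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID('dbo.Orders')
ORDER BY sp.modification_counter DESC;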

A typical operational approach is to:

  • Ensure AUTO_CREATE_STATISTICS and AUTO_UPDATE_STATISTICS are enabled unless you have a deliberate alternative.
  • Schedule stats updates for large, volatile tables (possibly with sampling decisions that match your needs).
  • Prefer updating stats on specific tables/indexes that drive critical queries rather than blanket fullscan updates that consume a lot of IO.

Here is a simple example for updating stats on a single table:

sql
UPDATE STATISTICS dbo.Orders WITH RESAMPLE;

If you do rebuilds as part of maintenance, remember that index rebuilds update index statistics automatically. Non-index statistics may still need attention depending on your schema.

Real-world example 2: Fixing a “fast in dev, slow in prod” report with filtered indexing

A reporting query works fine in development but times out in production. The table dbo.Events has 300 million rows in prod, and the report filters on a small subset: only events from the last 7 days and only a specific event type.

The query is:

sql
SELECT
    EventType,
    COUNT(*) AS event_count
FROM dbo.Events
WHERE EventType = @EventType
  AND EventTime >= DATEADD(day, -7, SYSUTCDATETIME())
GROUP BY EventType;

An unfiltered index on (EventType, EventTime) exists, but it is huge and heavily fragmented due to inserts, and the optimizer often chooses a scan because it estimates a high row count for popular event types. The report is slow and causes IO spikes.

There are multiple levers here: query shape, partitioning, and indexing. Staying within indexing, a filtered index can target only the event type(s) that the report commonly runs for, but because EventTime uses a moving window, you can’t filter directly on “last 7 days” in a stable index definition.

Instead, the team introduces a regular IsRecent flag column maintained by a nightly job (a filtered index predicate can’t reference a non-deterministic expression like DATEADD(day, -7, SYSUTCDATETIME()), nor a computed column). This is not always feasible, but it is a realistic operational pattern when you have strict reporting needs. A more common and simpler option is to filter by event type if the event type is the primary skew.

Assume the report is almost always run for EventType = 'AUTH_FAILURE', which is rare relative to the total. A filtered index helps:

sql
CREATE INDEX IX_Events_AuthFailure_EventTime
ON dbo.Events (EventTime)
INCLUDE (EventType)
WHERE EventType = 'AUTH_FAILURE';

Now the optimizer can use a narrow index to seek and scan only the qualifying time range for that event type. The result is lower IO, less cache pollution, and reduced impact on OLTP queries.

The key lesson is to look for stable predicates that represent workload skew. Filtered indexes are most effective when the filter predicate is common and stable, and when it significantly reduces index size.

Avoid parameter sensitivity pitfalls when validating index changes

Even when an index is correct, parameterized queries can show inconsistent performance due to parameter sensitivity (often called “parameter sniffing”). SQL Server compiles a plan based on parameter values at compile time; if the first execution uses atypical values, the cached plan may not be optimal for typical executions.

This matters during index tuning because you might test with one parameter set, see improvement, and deploy—only to find that other parameter values now perform worse. The index didn’t fail; the plan selection did.

To evaluate properly, test with representative parameter values that reflect both common and worst-case distributions. In Query Store, compare plans across time and parameter sets if available. If you see plan instability, consider options such as:

  • Adjusting indexes so the optimizer has a better “one plan fits most” choice.
  • Using OPTION (RECOMPILE) for specific problematic statements (trading CPU for consistent plans).
  • Using OPTIMIZE FOR hints cautiously when you know the typical parameter values.

Indexing can sometimes reduce sensitivity by making the cost difference between plans smaller. For example, if a query sometimes seeks and sometimes scans based on parameter values, adding an index that supports the selective case can help, but you may still need to address plan choice.
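
As a sketch of the hint syntax from the list above (values are illustrative; prefer the least invasive option that holds up under testing):

sql
-- Compile for a known-typical value instead of whatever was sniffed first.
SELECT OrderId, CreatedAt, TotalAmount
FROM dbo.Orders
WHERE CustomerId = @CustomerId
OPTION (OPTIMIZE FOR (@CustomerId = 42));

-- Or ignore the sniffed value and compile for the density average:
-- OPTION (OPTIMIZE FOR (@CustomerId UNKNOWN));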

Real-world example 3: Stabilizing an API endpoint by removing redundant indexes

An e-commerce API endpoint that updates inventory starts missing latency SLOs after a new release. The instinct is to “add an index,” but the data shows a different story: the endpoint is write-heavy and slowed by index maintenance.

The table dbo.Inventory has:

  • A clustered primary key on (WarehouseId, Sku).
  • Four nonclustered indexes added over time to support searches and reports.

DMV analysis shows extremely high user_updates on those nonclustered indexes, while two of them have near-zero seeks/scans. The new release increased update frequency (more frequent stock adjustments), and the unused indexes now impose a direct cost.

The team reviews query patterns and consolidates indexes. Two redundant indexes share the same leading keys and differ only in included columns that are no longer used. After verifying via Query Store that the indexes are not used across a full business cycle and confirming with stakeholders that the associated reports were deprecated, they drop the indexes.

sql
DROP INDEX IX_Inventory_OldReport1 ON dbo.Inventory;
DROP INDEX IX_Inventory_OldReport2 ON dbo.Inventory;

The write latency improves immediately, log generation decreases, and buffer pool pressure eases because fewer index structures are being maintained and cached per update.

This scenario is an important counterbalance to “add indexes.” In SQL Server performance tuning, removing indexes can be just as effective—especially for systems that evolved over time without centralized index governance.

Validate index effectiveness with measurable outcomes

After creating or changing indexes, validate using the same evidence-based approach you started with. The objective is not just “the query seems faster,” but measurable reductions in IO and CPU and improved plan shape.

For a specific query, compare:

  • Logical reads (STATISTICS IO)
  • CPU time (STATISTICS TIME)
  • Execution plan operators (scan vs seek, lookup count, sort elimination)
  • Duration and wait profile under representative load

If you use Query Store, compare runtime stats before and after deployment for the same query_id and plan_id. Also check whether the change affects other queries positively or negatively. A new index can change plan choices elsewhere; that’s sometimes beneficial, sometimes not.
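
A sketch for pulling per-interval runtime stats for one query out of Query Store (query_id 1234 is illustrative; times are in microseconds):

sql
SELECT
    p.plan_id,
    rsi.start_time,
    rs.count_executions,
    rs.avg_logical_io_reads,
    rs.avg_cpu_time,
    rs.avg_duration
FROM sys.query_store_plan AS p
JOIN sys.query_store_runtime_stats AS rs
  ON rs.plan_id = p.plan_id
JOIN sys.query_store_runtime_stats_interval AS rsi
  ON rsi.runtime_stats_interval_id = rs.runtime_stats_interval_id
WHERE p.query_id = 1234
ORDER BY rsi.start_time, p.plan_id;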

When validating, also consider concurrency. A plan that is faster in isolation can be worse under load if it increases lock contention or uses a serial plan that blocks parallelism inappropriately. Indexes can change lock acquisition patterns; for example, a seek-based plan may lock fewer rows but may still lead to hot spots if it concentrates activity on a small range of keys.

Operationalize index changes: naming, deployment, and rollback

In production environments, index changes should be treated like code changes: documented, reviewed, deployed predictably, and reversible.

Use consistent naming conventions that encode table and key columns, such as IX_Orders_CustomerId_CreatedAtDesc. Avoid opaque names; when you’re diagnosing a plan at 2 a.m., descriptive index names reduce risk.

For deployment, plan for lock and log behavior. CREATE INDEX and ALTER INDEX REBUILD can be intrusive on large tables. Enterprise Edition supports online index operations for many cases (WITH (ONLINE = ON)), but not all index types and not all schema features are compatible. Standard Edition has more limitations; you may need maintenance windows.
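
A deployment sketch for a large table (options are illustrative; ONLINE requires Enterprise Edition or Azure SQL):

sql
CREATE INDEX IX_Orders_CustomerId_CreatedAtDesc
ON dbo.Orders (CustomerId, CreatedAt DESC)
INCLUDE (TotalAmount, Status)
WITH (ONLINE = ON,          -- table stays readable/writable during the build
      SORT_IN_TEMPDB = ON,  -- shift build sorts to tempdb (check capacity first)
      MAXDOP = 4);          -- cap parallelism to limit CPU impact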

Keep rollback simple. If a new index causes unexpected regressions, dropping it is straightforward, but it may not immediately revert plans if plan cache and Query Store plan forcing are involved. In systems with strict SLAs, consider deploying indexes first, observing plan changes, and only then adjusting queries or dropping old indexes.

If you automate deployments, ensure scripts are idempotent and check for existence before create/drop. Example pattern:

sql
IF NOT EXISTS (
    SELECT 1
    FROM sys.indexes
    WHERE object_id = OBJECT_ID('dbo.Orders')
      AND name = 'IX_Orders_CustomerId_CreatedAtDesc'
)
BEGIN
    CREATE INDEX IX_Orders_CustomerId_CreatedAtDesc
    ON dbo.Orders (CustomerId, CreatedAt DESC)
    INCLUDE (TotalAmount, Status);
END;

This is particularly useful for multi-environment rollouts where drift can occur.

Monitor index health and usage continuously

Index tuning is not a one-time project. As data volume grows and query patterns shift, yesterday’s perfect index can become today’s dead weight. Ongoing monitoring helps you spot when indexes stop being used, when fragmentation and stats drift start affecting plans, and when newly introduced queries need coverage.

Query Store provides the most practical continuous visibility into query performance. Pair it with periodic reports from DMVs for index usage and missing index suggestions (again, treated as hypotheses). For operations teams, the goal is to maintain a stable system with predictable performance rather than chase every micro-optimization.

If you want to collect index usage centrally, you can schedule a job to snapshot sys.dm_db_index_usage_stats into a table daily, because the DMV resets on restart. This turns “maybe it’s unused” into “it hasn’t been used in 45 days across two patch cycles,” which is a much safer basis for removal decisions.

Here is a minimal example of capturing index usage into a table you manage:

sql
-- One-time table
CREATE TABLE dbo.IndexUsageSnapshot
(
    SnapshotTime datetime2 NOT NULL,
    DatabaseName sysname NOT NULL,
    TableName sysname NOT NULL,
    IndexName sysname NULL,
    IndexId int NOT NULL,
    UserSeeks bigint NULL,
    UserScans bigint NULL,
    UserLookups bigint NULL,
    UserUpdates bigint NULL
);
GO

-- Periodic snapshot
INSERT INTO dbo.IndexUsageSnapshot
(
    SnapshotTime, DatabaseName, TableName, IndexName, IndexId,
    UserSeeks, UserScans, UserLookups, UserUpdates
)
SELECT
    SYSUTCDATETIME(),
    DB_NAME(),
    OBJECT_SCHEMA_NAME(i.object_id) + '.' + OBJECT_NAME(i.object_id),
    i.name,
    i.index_id,
    s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS s
  ON s.object_id = i.object_id
 AND s.index_id = i.index_id
 AND s.database_id = DB_ID()
WHERE OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1;

From there, you can trend usage over time and confidently identify candidate indexes for consolidation or removal.

Special considerations for tempdb, memory, and storage

While index tuning focuses on data access, it interacts with broader system resources.

Large indexes consume buffer pool memory (the primary SQL Server data cache). If you add many wide indexes, you may reduce cache efficiency, causing more physical reads. This can show up as increased disk latency even if individual queries become faster in isolation. On busy instances, watch the overall read/write latency and page life expectancy trends rather than only individual query times.
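
If you need evidence, buffer pool contents can be attributed per index; a sketch (scanning this DMV can itself be expensive on servers with large memory, so run it off-peak):

sql
-- Approximate buffer pool usage per index in the current database.
SELECT
    OBJECT_NAME(p.object_id) AS table_name,
    i.name AS index_name,
    COUNT(*) * 8 / 1024 AS buffer_mb   -- 8 KB pages to MB
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units AS au
  ON au.allocation_unit_id = bd.allocation_unit_id
 AND au.type IN (1, 3)                 -- IN_ROW_DATA, ROW_OVERFLOW_DATA
JOIN sys.partitions AS p
  ON p.hobt_id = au.container_id
JOIN sys.indexes AS i
  ON i.object_id = p.object_id
 AND i.index_id = p.index_id
WHERE bd.database_id = DB_ID()
GROUP BY p.object_id, i.name
ORDER BY buffer_mb DESC;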

Indexes also affect tempdb usage. Operations like sorts, hash joins, and index builds can spill to tempdb if memory grants are insufficient. A good index can eliminate a sort, reducing tempdb pressure. Conversely, building or rebuilding large indexes can heavily use tempdb (especially with online operations). Make sure tempdb is sized and configured appropriately (multiple data files, sufficient storage performance) before you embark on aggressive index maintenance.

Finally, storage matters. NVMe-backed systems tolerate random IO better than spinning disks, which changes the urgency of eliminating lookups. Even with fast storage, CPU can become the bottleneck when logical reads are high. Use your monitoring to identify whether your instance is IO-bound or CPU-bound; index choices differ in those environments.

Use indexing to avoid sorts and reduce expensive operators

Sort operators are common hidden costs. They consume CPU and memory and may spill to tempdb if the sort set is large. A properly designed index can provide rows in the desired order, eliminating the sort entirely.

If a query uses ORDER BY on a column that is not supported by the index order, SQL Server may need a sort even if it uses an index seek for filtering. Composite indexes that match both filter and order can turn that into a streaming operation.

The same idea applies to GROUP BY and DISTINCT. If the index provides rows ordered by the grouping columns, SQL Server can sometimes use stream aggregation instead of hash aggregation, reducing memory needs.
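
A sketch using the running orders example: with rows available in grouping-column order, the optimizer can choose a Stream Aggregate instead of building a hash table.

sql
-- The index returns rows ordered by Status, so each group's count can
-- be emitted as soon as the Status value changes (no hash table needed).
CREATE INDEX IX_Orders_Status ON dbo.Orders (Status);

SELECT Status, COUNT(*) AS order_count
FROM dbo.Orders
GROUP BY Status;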

Be careful not to over-index for every order-by variation. Focus on orderings that dominate runtime in your hot queries. If you have many different order-by patterns, consider whether application changes (like removing nonessential sorts) would produce better ROI than building many specialized indexes.

Recognize when an index won’t help and a different approach is needed

Some performance problems are not solvable with more B-tree indexes. It’s operationally important to recognize these cases early so you don’t accumulate indexes that increase costs without fixing the root issue.

If a query returns a large percentage of a table, a scan can be optimal. In that case, focus on reducing the rowset earlier (better predicates), archiving old data, partitioning, or using columnstore for analytics-style queries.

If the predicate is inherently non-SARGable due to requirements (for example, LIKE '%term%' on arbitrary substrings), consider full-text search or a different data model rather than forcing B-tree indexes to do a job they are not designed for.

If concurrency is the issue, not raw speed, indexing may still help (shorter lock durations due to faster queries), but you may also need to examine transaction scope, isolation levels (for example, READ COMMITTED SNAPSHOT), and hot-spot access patterns.

This article stays centered on indexes, but good SQL Server performance tuning treats indexes as one tool in a broader toolkit.

Putting it all together: a repeatable index tuning checklist

By this point, you have the concepts and the operational context to tune indexes reliably. The repeatable process looks like this:

First, identify high-impact queries using Query Store or DMVs, focusing on total reads/CPU and execution frequency. Then, inspect actual execution plans for scans, sorts, and lookup-heavy patterns, and confirm predicates are SARGable.

Next, design the smallest index that matches the query’s filtering and ordering needs. Start with correct key order (equality then range), use included columns to cover only what’s necessary, and use filtered indexes when the workload targets a stable subset of rows.

Finally, validate with before/after metrics and monitor over time. Treat index changes as operational changes with naming conventions, deployment scripts, and rollback plans. Periodically review index usage to remove redundancy and keep write overhead under control.

That disciplined cycle is how you turn indexes from an ad-hoc “maybe this helps” lever into a dependable, measurable practice for SQL Server performance tuning.