Indexes are supposed to make queries fast. Sometimes they make them slower. Here’s what I wish someone had told me before I tanked production performance trying to “optimize” our database.
The query that started it all
Support said the admin dashboard was timing out. Found this in the slow query log:
SELECT *
FROM orders
WHERE customer_id = 12345
AND status IN ('pending', 'processing')
AND created_at > '2025-12-01'
ORDER BY created_at DESC
LIMIT 20;
Took 8 seconds on a table with 4 million rows. Obviously needs an index, right?
My first attempt (the wrong one)
CREATE INDEX idx_customer ON orders(customer_id);
CREATE INDEX idx_status ON orders(status);
CREATE INDEX idx_created ON orders(created_at);
Three separate indexes. Seemed logical - we’re filtering on all three columns.
Query time: 6 seconds.
Wait, what? I added indexes and it only got 20% faster? Checked EXPLAIN:
EXPLAIN SELECT ...
+----+-------------+--------+-------+---------------+-------------+---------+------+--------+-------------+
| id | select_type | table  | type  | possible_keys | key         | key_len | ref  | rows   | Extra       |
+----+-------------+--------+-------+---------------+-------------+---------+------+--------+-------------+
|  1 | SIMPLE      | orders | range | idx_created   | idx_created | 5       | NULL | 430000 | Using where |
+----+-------------+--------+-------+---------------+-------------+---------+------+--------+-------------+
MySQL picked idx_created (the worst choice) and still scanned 430,000 rows. The other two indexes weren’t even considered.
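One quick way to confirm it's the optimizer's choice that's the problem (and not the indexes themselves) is to steer it by hand. A diagnostic sketch, not something to ship:
-- Force MySQL to use the customer_id index just to compare the plan and timing
-- (hints like this rot as data changes, so treat it as a debugging tool only)
SELECT *
FROM orders FORCE INDEX (idx_customer)
WHERE customer_id = 12345
AND status IN ('pending', 'processing')
AND created_at > '2025-12-01'
ORDER BY created_at DESC
LIMIT 20;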
Why separate indexes suck
MySQL will usually use only one index per table for a given query. It picks whichever it estimates is cheapest, and that estimate is often wrong.
Even when it uses multiple indexes (index merge), it has to:
- Scan first index
- Scan second index
- Intersect the row IDs
- Fetch actual rows
All of that work is often slower than one well-designed composite index, and it can even lose to a plain table scan.
The composite index that worked
CREATE INDEX idx_customer_status_created
ON orders(customer_id, status, created_at);
Query time: 80ms.
That’s 100x faster. Same query, one index instead of three.
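Worth re-running EXPLAIN to confirm the optimizer actually picks the new index. The exact output varies, but the key column should now show the composite index and the rows estimate should be a tiny fraction of the 430,000 from before:
EXPLAIN SELECT *
FROM orders
WHERE customer_id = 12345
AND status IN ('pending', 'processing')
AND created_at > '2025-12-01'
ORDER BY created_at DESC
LIMIT 20;
-- Look for key = idx_customer_status_created in the output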
Why does this work? Index column order matters. A lot.
The leftmost prefix rule
Composite indexes work left-to-right. Our index on (customer_id, status, created_at) can help with:
-- ✅ Uses index
WHERE customer_id = 123
-- ✅ Uses index
WHERE customer_id = 123 AND status = 'pending'
-- ✅ Uses index (full)
WHERE customer_id = 123 AND status = 'pending' AND created_at > '2025-12-01'
-- ❌ Can't use index
WHERE status = 'pending'
-- ❌ Can't use index
WHERE created_at > '2025-12-01'
If you don’t filter on customer_id (the leftmost column), the index is useless.
Think of it like a phone book sorted by last name, then first name. You can find “Smith, John” but you can’t quickly find all Johns because first name isn’t the primary sort.
When composite indexes backfire
We got cocky and added more:
CREATE INDEX idx_customer_status_created_total
ON orders(customer_id, status, created_at, total_amount);
CREATE INDEX idx_customer_email_created
ON orders(customer_id, email, created_at);
Our table went from 2GB to 2.8GB. INSERT performance dropped 30%.
Why? Every index has to be updated on every INSERT, UPDATE, and DELETE. More indexes = more write overhead.
Worse, MySQL’s query optimizer got confused. With 8 different indexes, it sometimes picked the wrong one, making queries slower than before.
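A quick way to see where the extra space goes is information_schema's per-table size estimates (the 'production' schema name here is the same one used in the fragmentation check later in this post; swap in your own):
-- Rough data vs. index size per table (values are estimates, not exact)
SELECT
  table_name,
  ROUND(data_length / 1024 / 1024) AS data_mb,
  ROUND(index_length / 1024 / 1024) AS index_mb
FROM information_schema.TABLES
WHERE table_schema = 'production'
ORDER BY index_length DESC;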
The indexes we actually need
After benchmarking real queries, we kept:
CREATE INDEX idx_customer_created ON orders(customer_id, created_at);
CREATE INDEX idx_status_created ON orders(status, created_at);
CREATE INDEX idx_email ON orders(email);
Three indexes, carefully chosen. Covers 95% of our query patterns.
The rule: add indexes for your slowest queries, not every possible query.
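Finding those slowest queries usually starts with the slow query log. A minimal setup sketch (the thresholds are just examples, and you need privileges to change globals):
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;  -- log anything slower than 1 second
SET GLOBAL log_queries_not_using_indexes = 'ON';  -- optional, can get noisy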
Covering indexes (the secret weapon)
Sometimes you can avoid table lookups entirely:
-- Original query
SELECT id, customer_id, status, created_at
FROM orders
WHERE customer_id = 12345
AND status = 'pending';
-- Original index
CREATE INDEX idx_customer_status ON orders(customer_id, status);
-- EXPLAIN does not show "Using index" in the Extra column
-- MySQL read 50 index entries, then fetched 50 rows from the table to get created_at
-- Covering index
CREATE INDEX idx_customer_status_covering
ON orders(customer_id, status, id, created_at);
-- Now EXPLAIN shows "Using index" (no table access needed)
-- MySQL read 50 rows, fetched 0 rows from table
A covering index contains all the columns you SELECT, not just the ones you filter on. MySQL can answer the entire query from the index without touching the table.
This cut query time from 12ms to 2ms for our most frequent query.
The cardinality trap
High cardinality = good for indexes. Low cardinality = bad.
-- Good (high cardinality)
CREATE INDEX idx_email ON users(email);
-- Each email is unique
-- Bad (low cardinality)
CREATE INDEX idx_status ON orders(status);
-- Only 5 possible statuses
-- Terrible (very low cardinality)
CREATE INDEX idx_is_active ON users(is_active);
-- Only 2 values: true/false
MySQL might not even use an index if it thinks a full table scan is faster. With low cardinality columns, it’s usually right.
Exception: combining low-cardinality columns can work:
CREATE INDEX idx_type_status ON orders(order_type, status);
If order_type has 4 values and status has 5, the combination has up to 20 distinct values. Better, but still not great.
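You can sanity-check cardinality before creating anything. SHOW INDEX reports MySQL's estimate for existing indexes, and COUNT(DISTINCT ...) is exact but scans the table:
-- Exact distinct counts (full scan, fine as a one-off)
SELECT COUNT(DISTINCT order_type) AS types,
COUNT(DISTINCT status) AS statuses,
COUNT(*) AS total_rows
FROM orders;
-- Estimated cardinality per existing index, in the Cardinality column
SHOW INDEX FROM orders;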
Index bloat (the thing that bit us)
After 6 months in production, our main index was 400MB but should’ve been 200MB. Queries got slower over time.
The problem: index fragmentation. DELETEs and UPDATEs leave gaps in the index structure. MySQL doesn’t automatically clean this up.
Fix:
-- Check fragmentation (information_schema.TABLES has no per-index breakdown,
-- so this is per table: free space vs. total data + index size)
SELECT
  table_name,
  data_free / (data_length + index_length) AS fragmentation_ratio
FROM information_schema.TABLES
WHERE table_schema = 'production';
-- Rebuild indexes
OPTIMIZE TABLE orders;
OPTIMIZE TABLE rebuilds the table and all its indexes. On a 4M row table, this took 15 minutes and locked the table. We now run it monthly during maintenance windows.
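Before reaching for the full rebuild, it's worth trying a statistics refresh first. ANALYZE TABLE is cheap and doesn't rewrite anything, so if the real problem is bad plan choices rather than actual bloat, it may be enough:
-- Refreshes index statistics only; takes seconds even on large tables
ANALYZE TABLE orders;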
Partial indexes (sort of)
Sometimes you only care about a subset of data:
CREATE INDEX idx_active_orders
ON orders(customer_id, created_at)
WHERE status IN ('pending', 'processing');
Wait, that’s PostgreSQL syntax. MySQL doesn’t support filtered indexes directly.
Workaround: use a generated column:
ALTER TABLE orders
ADD COLUMN is_active BOOLEAN GENERATED ALWAYS AS
(status IN ('pending', 'processing')) STORED;
CREATE INDEX idx_active ON orders(is_active, customer_id, created_at);
Now queries that filter on is_active jump straight to the active slice of the index. It isn't a true partial index (every row still gets an entry), but the part you actually scan stays small, and small scans are fast.
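The catch is that the query has to filter on the generated column for MySQL to consider the index. A minimal sketch of what that looks like:
-- is_active = TRUE gives an equality on the leftmost column,
-- so the (is_active, customer_id, created_at) index can do the work
SELECT id, customer_id, status, created_at
FROM orders
WHERE is_active = TRUE
AND customer_id = 12345
ORDER BY created_at DESC;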
When to skip indexing
Don’t index:
- Small tables (< 1000 rows). Full table scan is fine.
- Columns with very low cardinality (like boolean flags)
- Tables with heavy writes and rare reads
- Columns that are always wrapped in a function (WHERE DATE(created_at) = '2025-01-01' can't use an index on created_at; see the sketch after this list)
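The usual fix for that last case isn't a new index at all, it's rewriting the predicate as a range on the raw column so an existing index becomes usable (MySQL 8.0.13+ also has functional indexes, but the rewrite is simpler):
-- Instead of: WHERE DATE(created_at) = '2025-01-01'  (the function hides the column, index ignored)
-- express the same day as a half-open range on created_at:
SELECT *
FROM orders
WHERE created_at >= '2025-01-01'
AND created_at < '2025-01-02';
-- This filter can use any index that starts with created_at.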
Our audit_logs table has no indexes except the primary key. It’s insert-only and we query it once a month. Indexing would just slow down inserts.
The reality check
Good indexing strategy:
- Find your slowest queries (the slow query log in MySQL; pg_stat_statements if you're on Postgres)
- Check if they’re missing indexes
- Add the minimum indexes needed
- Monitor write performance
- Drop indexes that aren’t being used
-- Find unused indexes (needs performance_schema enabled; it's on by default in MySQL 8)
SELECT
  object_schema,
  object_name,
  index_name
FROM performance_schema.table_io_waits_summary_by_index_usage
WHERE index_name IS NOT NULL
  AND index_name != 'PRIMARY'  -- the primary key is never a drop candidate
  AND count_star = 0
ORDER BY object_schema, object_name;
We had 3 indexes with zero usage. Dropped them, saved 200MB and improved insert speed.
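If you're nervous about dropping an index cold, MySQL 8 lets you hide it from the optimizer first and watch what happens. The index name below is just one from earlier in this post, swap in your own:
-- Make the index invisible to the optimizer without dropping it (instant, reversible)
ALTER TABLE orders ALTER INDEX idx_customer_email_created INVISIBLE;
-- If nothing regresses after a week or two, drop it for real
DROP INDEX idx_customer_email_created ON orders;
-- To undo instead: ALTER TABLE orders ALTER INDEX idx_customer_email_created VISIBLE;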
What actually matters
- Column order in composite indexes is critical
- More indexes != faster queries
- Indexes have write costs
- Test with production data volumes
- Monitor and adjust over time
We went from 12 indexes to 6, and overall performance improved. The key was understanding which queries matter and optimizing for those, not trying to optimize every possible query.
Also turns out some of our slow queries were slow because of missing LIMIT clauses, not missing indexes. But that’s a different story.