So here’s the thing nobody tells you about JSONB in Postgres: it’s fast until it’s not.

We migrated our user preferences table to use JSONB columns three months ago. The pitch was simple: no more ALTER TABLE every time marketing wants to track a new preference. Just stuff it in JSON and call it a day. Seemed smart at the time.

The honeymoon phase

First two weeks were great. Engineers loved it. Product loved it. We shipped features faster because we didn't need schema migrations for every little thing. The settings column on users grew from tracking 5 fields to 23. No problem, right?

-- This worked fine for a while
SELECT * FROM users
WHERE settings->>'newsletter_subscribed' = 'true'
AND settings->>'region' = 'US';

Response times stayed under 50ms even with 2 million users. We were heroes.

When things got weird

Week 3, someone on the analytics team wrote this query:

SELECT
  settings->>'signup_source',
  COUNT(*)
FROM users
WHERE settings->>'created_year' = '2025'
GROUP BY settings->>'signup_source';

It took 14 seconds, on a table where similar aggregations used to come back in about 200ms.

The problem? No indexes. JSONB doesn’t give you indexes for free. You have to explicitly create them, and most devs don’t realize this until production starts crawling.

The index rabbit hole

Here’s what actually works:

-- GIN index for containment queries
CREATE INDEX idx_user_settings_gin ON users USING GIN (settings);

-- Specific path index for exact lookups
CREATE INDEX idx_newsletter_sub ON users
((settings->>'newsletter_subscribed'));

The GIN index helped with containment queries like settings @> '{"premium": true}', but it didn't do jack for our ->> lookups; the default jsonb_ops operator class simply doesn't support them. We needed expression indexes for those.
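If you want to confirm which index a query actually hits, EXPLAIN makes the difference obvious. A quick sketch using the column and keys from the examples above:

-- Containment query: the GIN index on the whole settings column can serve this
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM users WHERE settings @> '{"premium": true}';

-- ->> equality: only the matching expression index helps here
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM users WHERE settings->>'newsletter_subscribed' = 'true';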

After adding 8 different expression indexes, our on-disk footprint grew by about 30%, and every write now had to maintain all of them. Not ideal, but queries were fast again.
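If you're curious how much of that growth is index overhead, Postgres's size functions give a rough breakdown. A sketch against our users table:

-- Compare heap size vs. total index size for the table
SELECT
  pg_size_pretty(pg_table_size('users'))   AS table_size,
  pg_size_pretty(pg_indexes_size('users')) AS index_size;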

What actually bit us

The real issue came when we tried to update JSONB fields. Postgres doesn't modify JSONB in place: jsonb_set builds a whole new value, and MVCC means the whole row gets rewritten. So this:

UPDATE users
SET settings = jsonb_set(settings, '{last_login}', '"2026-01-08"')
WHERE user_id = 12345;

Writes the entire settings blob, not just the one field. With 50KB average JSONB size and 10K updates/minute, our WAL files exploded. We hit disk I/O limits we didn’t know existed.
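One way we could have caught this earlier is by measuring how much WAL an update burst actually generates. A rough sketch using Postgres's LSN functions (the starting LSN below is just a placeholder value):

-- Note the current WAL position before the update burst
SELECT pg_current_wal_lsn();   -- e.g. returns '0/3000060'

-- ...run the updates...

-- Bytes of WAL written since that starting point
SELECT pg_size_pretty(
  pg_wal_lsn_diff(pg_current_wal_lsn(), '0/3000060'::pg_lsn)
) AS wal_written;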

The workaround nobody mentions

Turns out, you should split JSONB data by update frequency:

-- Hot data (updates frequently)
ALTER TABLE users ADD COLUMN session_data JSONB;

-- Cold data (rarely changes)
ALTER TABLE users ADD COLUMN profile_settings JSONB;

This cut our write amplification by 70%. session_data stays small (~2KB), so updates are cheap. profile_settings is larger but rarely changes.
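The split itself was the easy part; the fiddly bit is the backfill. Roughly a one-time migration like this, assuming last_login is one of the hot keys (key name borrowed from the earlier example):

-- Move the hot key into session_data and drop it from settings
UPDATE users
SET session_data = jsonb_build_object('last_login', settings->'last_login'),
    settings     = settings - 'last_login'
WHERE settings ? 'last_login';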

[Figure: database performance graph]

When JSONB makes sense

Despite all this, I’d still use JSONB for:

  • User preferences that genuinely vary per user
  • Audit logs with varying metadata
  • API response caching
  • Features in early exploration phase

Just don’t use it as a lazy way to avoid schema design. We learned that the hard way.

The current state

We ended up with a hybrid approach:

  • Core user fields in regular columns (email, name, created_at)
  • Volatile session state in small JSONB column
  • User preferences in indexed JSONB column
  • Historical data in separate JSONB table, partitioned by month
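For concreteness, here's a rough sketch of what that hybrid layout looks like in DDL. All names are illustrative, not our exact schema:

-- Core fields as regular columns, JSONB only where flexibility pays for itself
CREATE TABLE users (
  user_id          BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  email            TEXT NOT NULL,
  name             TEXT,
  created_at       TIMESTAMPTZ NOT NULL DEFAULT now(),
  session_data     JSONB,        -- small, hot, updated constantly
  profile_settings JSONB         -- larger, cold, expression-indexed
);

-- Historical data in its own table, partitioned by month
CREATE TABLE user_history (
  user_id     BIGINT NOT NULL,
  recorded_at TIMESTAMPTZ NOT NULL,
  payload     JSONB
) PARTITION BY RANGE (recorded_at);

CREATE TABLE user_history_2026_01 PARTITION OF user_history
  FOR VALUES FROM ('2026-01-01') TO ('2026-02-01');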

Query performance is back to normal. WAL growth is manageable. Engineers still get flexibility where it matters.

Would I do it again? Yeah, but I’d start with the hybrid model from day one. And I’d definitely read the Postgres docs on JSONB indexing before committing to production. Those docs are dry as hell, but they would’ve saved us two weeks of firefighting.

The irony? Our original schema migration to add those 5 columns would've taken maybe 30 minutes to write and review, and in Postgres a plain ALTER TABLE ... ADD COLUMN for a nullable column is effectively instant. Sometimes the "old way" is old for a reason.