Complete reference for PostgreSQL JSON and JSONB functions covering construction, querying, modification, and path operations. Every function includes syntax, parameters, real SQL examples, and performance notes on JSONB indexing. Updated for PostgreSQL 16.
PostgreSQL JSON functions are built-in functions for storing, querying, and manipulating semi-structured data using JSON and JSONB column types. JSONB is the binary storage format that supports GIN indexing and is preferred for most use cases. Common PostgreSQL JSON functions include json_build_object(), jsonb_set(), jsonb_agg(), jsonb_each(), and the -> and ->> extraction operators.
to_json
PG 9.2+ → json
Converts any SQL value to JSON. Composite types and arrays become JSON objects and arrays respectively.
Signature
to_json ( anyelement ) → json
Parameters
| Parameter | Type | Description |
| --- | --- | --- |
| value | any | Value to convert to JSON |
Examples
```sql
SELECT to_json(42);
-- → 42

SELECT to_json('hello'::text);
-- → "hello"

SELECT to_json(ARRAY[1,2,3]);
-- → [1,2,3]

SELECT to_json(now());
-- → "2025-03-07T14:23:01.123456+00:00"
```
⚠Anti-Pattern— Using JSON type for queryable data
Storing data as `json` (not `jsonb`) prevents you from using GIN indexes, the `@>` containment operator, and JSONPath queries. Every lookup requires a full sequential scan and text parsing.
✓ Instead: Use `jsonb` for any column you will filter, index, or query. Reserve `json` only when you need to preserve exact whitespace/key order for round-trip fidelity.
Use `to_jsonb` when you need the result stored as JSONB or when you'll query it with JSONB operators. `to_json` produces a JSON string, `to_jsonb` produces the binary JSONB type.
⚠Anti-Pattern— Converting large composite types repeatedly in a loop
Calling `to_jsonb` inside a PL/pgSQL loop for each row of a large result set is slow. The conversion happens row-by-row without the set-based optimizations available in plain SQL.
✓ Instead: Use `SELECT to_jsonb(t.*) FROM my_table t` as a single set-returning query, or use `json_agg(to_jsonb(t.*))` to aggregate in one pass.
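A sketch of the set-based form (`my_table` is a placeholder name):

```sql
-- One pass over the table: each row becomes a JSONB object.
SELECT to_jsonb(t.*) AS doc
FROM my_table t;

-- Aggregate all rows into a single JSON array in the same pass.
SELECT json_agg(to_jsonb(t.*)) AS docs
FROM my_table t;
```

Either query lets the planner process the whole table as one set operation instead of converting rows one at a time inside procedural code.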
Wrap a record in `to_jsonb(row.*) FROM table row` to get a JSONB object per row, suitable for building API JSON responses directly in SQL.
⚠Anti-Pattern— Using row_to_json when column selection or renaming is needed
`row_to_json(t.*)` always includes every column with its schema name. This can accidentally expose internal fields (e.g., `password_hash`, `deleted_at`) and uses database column names as API keys.
✓ Instead: Use `json_build_object('key', col, ...)` to explicitly select and rename fields, or select into a CTE/subquery with only the desired columns before calling `row_to_json`.
Combine `row_to_json(t.*)` with `json_agg` to get an array of objects: `json_agg(row_to_json(t.*))`. This builds nested JSON responses without application-side assembly.
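A sketch of the nested-response pattern, assuming hypothetical `customers` and `orders` tables:

```sql
-- One output row per customer, with their orders nested as a JSON array.
SELECT c.name,
       json_agg(row_to_json(o.*)) AS orders
FROM customers c
JOIN orders o ON o.customer_id = c.id
GROUP BY c.name;
```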
⚠Anti-Pattern— Building JSON in application code from SQL result sets
Fetching raw rows and constructing JSON objects in the application layer adds a round-trip and forces the app to assemble nested structures manually.
✓ Instead: Use `json_build_object` (or `jsonb_build_object`) server-side to shape the response in SQL, reducing data transfer and application complexity.
Use `json_build_object` to include only specific columns with custom key names — unlike `row_to_json` which includes all columns with their schema names.
⚠Anti-Pattern— Storing highly relational data as JSONB objects
Using `jsonb_build_object` to pack many foreign-key relationships (e.g., order + customer + address + line items) into a single JSONB blob trades queryability and referential integrity for perceived convenience.
✓ Instead: Keep relational data in normalized tables. Use `jsonb_build_object` for genuinely semi-structured, variable, or document-oriented data where the schema is not fixed.
If the resulting JSON will be stored in a `jsonb` column or queried with JSONB operators, use `jsonb_build_object` — it produces JSONB directly without an extra cast.
⚠Anti-Pattern— Building arrays of uniformly typed values with json_build_array instead of array_to_json
When all values share the same SQL type, using `json_build_array(v1, v2, ...)` with hard-coded arguments is verbose and inflexible compared to SQL arrays.
✓ Instead: For uniform-type collections, collect into a SQL array first (`ARRAY[v1, v2, ...]` or `array_agg`) then use `array_to_json`. Reserve `json_build_array` for heterogeneous or dynamically shaped outputs.
Unlike SQL arrays (which require uniform types), `json_build_array` accepts mixed types: strings, numbers, booleans, nulls, and even nested JSON objects all in one array.
⚠Anti-Pattern— Using json_object when values must keep their JSON types
`json_object` accepts only `text[]` — all values are stored as JSON strings, even numbers. `{"count":"42"}` is not the same as `{"count":42}` for downstream consumers.
✓ Instead: Use `json_build_object` when values have distinct SQL types (int, boolean, numeric) to preserve the correct JSON type in the output.
When you have two parallel arrays (keys and values), `json_object(keys, values)` is cleaner than calling `json_build_object` with individual arguments. Useful when building JSON from dynamic column lists.
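For example, building an object from two parallel arrays (the keys and values here are illustrative):

```sql
SELECT json_object(
  ARRAY['name', 'city'],
  ARRAY['Ada', 'London']
);
-- → {"name" : "Ada", "city" : "London"}
```

Note that both values come out as JSON strings, per the anti-pattern above; use `json_build_object` if any of them need to stay numeric or boolean.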
⚠Anti-Pattern— Extracting the array into application code to count its length
Fetching a JSONB column into the application and calling `.length` on the parsed array is wasteful — the full array must be serialized and transferred over the network.
✓ Instead: Use `jsonb_array_length(col)` directly in the WHERE or SELECT clause to let the database compute the count without transferring data.
Use `jsonb_array_length(col) > 0` to check for non-empty JSON arrays. This is faster than extracting and counting elements.
⚠Anti-Pattern— Using json_each in a loop to iterate array elements
Calling `json_array_elements` inside a PL/pgSQL `FOR` loop processes one element at a time in procedural fashion, losing the benefits of set-based SQL execution.
✓ Instead: Use `json_array_elements` in a FROM clause (lateral join) so the database can process all elements in a single set-based operation.
Combine with a FROM clause to unnest a JSON array per row: `FROM orders o, jsonb_array_elements(o.items) AS item`. Then query `item ->> 'sku'` or `(item ->> 'price')::numeric`.
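Spelled out as a full query (the `orders` table and its `items` JSONB array column are assumptions):

```sql
-- Each order row fans out into one row per array element.
SELECT o.id,
       item ->> 'sku'              AS sku,
       (item ->> 'price')::numeric AS price
FROM orders o
CROSS JOIN LATERAL jsonb_array_elements(o.items) AS item;
```

`CROSS JOIN LATERAL` is equivalent to the comma-join form above; the explicit keyword makes the lateral dependency obvious.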
⚠Anti-Pattern— Trusting unnested user-supplied JSON strings without sanitisation
Unnesting user-supplied JSON string arrays with `jsonb_array_elements_text` and embedding the results directly into dynamic SQL or shell commands can enable injection.
✓ Instead: Always treat unnested text values as untrusted input. Use parameterised queries and validate values against an allowlist before further processing.
Use `json_array_elements_text` instead of `json_array_elements` when the array contains strings — it returns plain `text` instead of JSON-quoted values, saving you the extra `->>` or `#>>` call.
⚠Anti-Pattern— Using json_each in a PL/pgSQL loop for row-by-row processing
Iterating over `json_each` results inside a procedural loop forces row-at-a-time processing and prevents query parallelism.
✓ Instead: Use `json_each` in a lateral FROM clause to let the planner optimise the full query as a set-based join.
Use `json_each` to iterate over dynamic JSON keys — useful when the set of keys is not known in advance. Combine with `jsonb_each_text` to get plain text values.
⚠Anti-Pattern— Using json_each_text when values are nested objects or arrays
`json_each_text` converts all values to their text representation. Nested objects become strings like `{"x":1}` which you then have to re-parse, losing type safety.
✓ Instead: Use `json_each` (not `_text`) when values may be nested objects or arrays so they remain as `json`/`jsonb` and can be passed to further JSON functions without re-casting.
`json_each_text` returns values as plain `text` (JSON unquoted), while `json_each` returns them as `json`. Use `_text` when you'll do string operations like LIKE or length() on the values.
⚠Anti-Pattern— Checking for a specific key existence using json_object_keys
Running `SELECT key FROM jsonb_object_keys(doc) WHERE key = 'target'` to test whether a key exists is unnecessarily expensive — it expands all keys then filters.
✓ Instead: Use the `?` operator: `doc ? 'target'`. This is direct, O(log n) in JSONB, and can be supported by a GIN index.
Use `json_object_keys` with `array_agg` to collect all unique keys across rows: `SELECT DISTINCT key FROM products, jsonb_object_keys(attributes) AS key`. This reveals the schema of a semi-structured column.
→Upserts stock count from inventory into product JSONB attributes
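A hedged sketch of such an upsert with `jsonb_set` (table, column, and key names are assumptions):

```sql
UPDATE products p
SET attributes = jsonb_set(
      p.attributes,
      '{stock}',
      to_jsonb(i.stock_count),
      true  -- create the key if it is missing
    )
FROM inventory i
WHERE i.product_id = p.id;
```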
⚠Anti-Pattern— Not indexing JSONB columns that are queried with @>
After updating JSONB with `jsonb_set`, queries that filter with `@>` (containment) on the same column will do sequential scans if no GIN index exists, making queries slow at scale.
✓ Instead: Create a GIN index: `CREATE INDEX ON table USING gin(jsonb_col)`. This makes `@>` and `?` operators index-backed and dramatically faster.
To update multiple JSONB fields in one UPDATE, chain `jsonb_set` calls: `jsonb_set(jsonb_set(data, '{a}', val_a), '{b}', val_b)`. Alternatively use `||` (concatenation) for top-level key replacement.
Inserts new_value into target at the specified path. For arrays, inserts before (or after with insert_after=true) the indexed element. For objects, inserts the key only if it does not already exist; attempting to replace an existing key raises an error.
→Prepends a new change entry to the front of the changes array
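A sketch of prepending to an array with `jsonb_insert` (table and key names are illustrative):

```sql
-- Path {changes,0} targets the first element; with the default
-- insert_after = false, the new entry is inserted before it.
UPDATE documents
SET data = jsonb_insert(
      data,
      '{changes,0}',
      '{"who": "alice", "at": "2025-03-07"}'::jsonb
    )
WHERE id = 42;
```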
⚠Anti-Pattern— Using jsonb_insert to replace an existing value
`jsonb_insert` raises an error if the object key already exists (unlike `jsonb_set`). Mistakenly using it for updates instead of inserts causes runtime errors.
✓ Instead: Use `jsonb_set` when you want to update an existing key. Use `jsonb_insert` only when adding a new element to an array or a new key that does not yet exist.
Use path `{items,-1}` with `insert_after=true` to append to a JSONB array directly in an UPDATE statement — no need to read, modify in app code, and write back.
→Removes multiple sensitive keys from the profile JSONB in one statement
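Deleting several top-level keys in one statement uses the `jsonb - text[]` form of the operator (table and key names are assumptions):

```sql
UPDATE users
SET profile = profile - ARRAY['password_hash', 'ssn', 'internal_notes']
WHERE id = 7;
```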
⚠Anti-Pattern— Relying on key deletion as a security boundary without allowlisting
Using `-` to remove a known list of sensitive keys (blocklist approach) is fragile — if a new sensitive key is added to the JSONB document, it will be exposed until the deletion list is updated.
✓ Instead: Use an allowlist approach: construct the response object explicitly with `jsonb_build_object('safe_key', doc->>'safe_key', ...)` to include only what clients should see.
Use the `-` operator to strip sensitive fields like `password_hash` or `internal_notes` from JSONB objects before returning them in API responses.
json_typeof / jsonb_typeof
Returns the type of a JSON value as a text string: 'object', 'array', 'string', 'number', 'boolean', or 'null'.
Signatures
json_typeof ( json ) → text
jsonb_typeof ( jsonb ) → text
Parameters
| Parameter | Type | Description |
| --- | --- | --- |
| value | json or jsonb | JSON value to inspect |
Examples
```sql
SELECT json_typeof('42'::json);
-- → number

SELECT json_typeof('"hello"'::json);
-- → string

SELECT json_typeof('[1,2]'::json);
-- → array

SELECT jsonb_typeof(payload) FROM events WHERE id = 1;
-- → object
```
⚠Anti-Pattern— Skipping type checks before casting JSON values
Casting a JSON value directly (e.g., `(col->>'price')::numeric`) without first checking `jsonb_typeof(col->'price') = 'number'` can produce errors or unexpected NULLs when the value is missing or of the wrong type.
✓ Instead: Guard casts with a type check: `CASE WHEN jsonb_typeof(col->'price') = 'number' THEN (col->>'price')::numeric END`, or use `jsonb_path_query` with a typed filter.
Use `json_typeof` to validate that a stored JSON column contains the expected type before operating on it. Prevents errors when code assumes an array but finds an object.
⚠Anti-Pattern— Assuming null array elements are stripped
`json_strip_nulls` only removes object keys whose value is null. Null values inside arrays (e.g., `[1, null, 3]`) are preserved intentionally. Relying on it to clean arrays will produce unexpected results.
✓ Instead: To remove nulls from JSON arrays, use `jsonb_array_elements` to unnest, filter out nulls with `WHERE value != 'null'::jsonb`, then re-aggregate with `jsonb_agg`.
When performing a PATCH-style update, use `jsonb_strip_nulls(patch) || existing_data` — or vice versa — to merge only non-null fields from the patch into the stored document.
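A sketch of the PATCH merge (the patch literal and table are illustrative):

```sql
-- jsonb_strip_nulls drops "subtitle" from the patch, so the
-- existing subtitle in data is preserved by the merge.
UPDATE documents
SET data = data || jsonb_strip_nulls('{"title": "New", "subtitle": null}'::jsonb)
WHERE id = 1;
```

Note the asymmetry: with `existing || patch` the patch wins on shared keys; with `patch || existing` the stored values win.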
⚠Anti-Pattern— Using ->> and casting instead of jsonb_path_query for complex navigation
Chaining multiple `->>` and `->` operators to navigate deeply nested structures (`col->'a'->'b'->>'c'`) is verbose, error-prone, and returns NULL silently at any missing intermediate level.
✓ Instead: Use `jsonb_path_query(col, '$.a.b.c')` with the `silent` flag set to `true` to safely traverse nested paths and get an empty set (rather than NULL) when the path does not exist.
Use `? (@ condition)` in JSONPath to filter array elements: `$.items[*] ? (@.price > 100)` returns only expensive items without unnesting and re-aggregating.
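A self-contained example of the filter expression:

```sql
SELECT jsonb_path_query(
  '{"items": [{"sku": "A", "price": 50}, {"sku": "B", "price": 150}]}'::jsonb,
  '$.items[*] ? (@.price > 100)'
);
-- → {"sku": "B", "price": 150}
```

`jsonb_path_query` is set-returning, so each matching element comes back as its own row.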
⚠Anti-Pattern— Not using a GIN index when filtering with @? or jsonb_path_exists
Filtering rows with `jsonb_path_exists` or the `@?` operator without a GIN index results in a sequential scan — every row's JSONB document is parsed for every query.
✓ Instead: Create a GIN index (`CREATE INDEX ON table USING gin(jsonb_col)`) so that `@>` and `@?` queries can use the index. For JSONPath, consider expression indexes on specific extracted values.
`jsonb_path_exists(doc, path)` is equivalent to `doc @? path`. The operator form is more concise in WHERE clauses. Use `@@` for `jsonb_path_match`.
⚠Anti-Pattern— Inserting user-supplied JSON directly via json_populate_record without validation
Passing raw user input to `json_populate_record` and inserting the result maps arbitrary JSON keys to table columns. A malicious payload with extra keys (e.g., `is_admin`, `role`) could populate sensitive fields if they exist on the target type.
✓ Instead: Validate the JSON structure against an allowlist of expected keys before calling `json_populate_record`. Alternatively, explicitly extract only the needed fields with `->` operators or use a restricted composite type.
Use `json_populate_record(NULL::table_type, json)` to parse JSON API payloads into typed rows for validation and insertion. Fields missing in the JSON get their column default or NULL.
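A sketch using a dedicated composite type rather than a full table row (the `signup` type is hypothetical, and restricting it to safe fields is one way to apply the allowlist advice above):

```sql
-- Composite type describing only the fields the API may set.
CREATE TYPE signup AS (email text, display_name text);

SELECT *
FROM json_populate_record(
  NULL::signup,
  '{"email": "a@example.com", "display_name": "Ada"}'::json
);
```

Extra keys in the payload are simply ignored because the composite type has no matching fields.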
⚠Anti-Pattern— Using json_to_record for bulk processing without json_to_recordset
Calling `json_to_record` once per element of a JSON array, whether in a procedural loop or via a per-element `LATERAL` call, is much slower than expanding the whole array at once.
✓ Instead: Use `json_to_recordset(json_array) AS t(col1 type, ...)` to expand an entire JSON array of objects into a typed result set in a single operation.
`json_to_record` lets you declare exactly which fields to extract and their types — useful when the JSON contains many fields but you only care about a few.
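For the set-based bulk form, `json_to_recordset` expands an entire array of objects in one call (the column list is illustrative):

```sql
SELECT *
FROM json_to_recordset('[
  {"sku": "A", "qty": 2},
  {"sku": "B", "qty": 5}
]'::json) AS t(sku text, qty int);
-- Two typed rows: ('A', 2) and ('B', 5)
```

Fields present in the JSON but absent from the column definition list are ignored; missing fields come back as NULL.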