Complete reference for MariaDB JSON functions covering JSON_EXTRACT, JSON_SET, JSON_CONTAINS, JSON_TABLE, JSON_MERGE_PATCH, JSON_ARRAYAGG, and 34+ more. Every function includes syntax, path expression examples, and differences from MySQL JSON functions. Updated for MariaDB 11.x.
MariaDB JSON functions provide full support for storing, querying, and modifying JSON data stored in TEXT or JSON column types. JSON_EXTRACT() retrieves values using path expressions (e.g. '$.key' or '$[0]'), JSON_SET() modifies values, JSON_CONTAINS() tests for value membership, and JSON_TABLE() (added in MariaDB 10.6) shreds a JSON array into a relational table. MariaDB JSON functions are broadly compatible with MySQL JSON functions but differ from PostgreSQL's JSONB operators.
JSON_EXTRACT
MariaDB 10.2+ → JSON
Extracts a value from a JSON document at the specified JSONPath location(s).
Omitting the leading $ in the path (e.g. passing 'name' instead of '$.name') returns NULL silently.
✓ Instead: Always start paths with $ and test with JSON_VALID() before querying production data.
In MySQL, the -> operator is shorthand for JSON_EXTRACT and returns a quoted JSON value, while ->> returns the unquoted scalar text. MariaDB does not support these shorthand operators, so call JSON_EXTRACT directly and wrap it in JSON_UNQUOTE when you need the unquoted text.
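A minimal comparison of the quoted and unquoted forms, portable across MariaDB versions:

```sql
-- Quoted JSON value vs unquoted text
SELECT JSON_EXTRACT('{"name": "Alice"}', '$.name');
-- → "Alice"   (still a JSON string, with quotes)

SELECT JSON_UNQUOTE(JSON_EXTRACT('{"name": "Alice"}', '$.name'));
-- → Alice     (plain text, ready for comparisons)
```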
⚠Anti-Pattern— Using JSON_VALUE on non-scalar paths
Pointing JSON_VALUE at an object or array path returns NULL because the function only returns scalar values.
✓ Instead: Use JSON_EXTRACT for objects/arrays and JSON_VALUE only for leaf scalar values.
MariaDB's JSON_VALUE takes just a document and a path and returns its result as text; wrap it in CAST (e.g. CAST(JSON_VALUE(doc, '$.n') AS SIGNED)) when you need typed comparisons or a correct ORDER BY. The RETURNING INT / RETURNING DATE clause is MySQL 8.0 syntax and is not recognized by MariaDB.
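A sketch of the difference, using CAST, which works on MariaDB regardless of RETURNING support:

```sql
SELECT JSON_VALUE('{"qty": "42"}', '$.qty');                  -- returned as text
SELECT CAST(JSON_VALUE('{"qty": "42"}', '$.qty') AS SIGNED);  -- integer, sorts numerically
```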
⚠Anti-Pattern— Chaining JSON_SET calls in application code
Calling JSON_SET separately for each key in application code generates multiple round-trips and intermediate strings.
✓ Instead: Pass all path-value pairs to a single JSON_SET call: JSON_SET(doc, '$.a', 1, '$.b', 2).
JSON_SET inserts a value if the path does not exist and updates it if it does — making it the safest choice for general-purpose JSON mutation. Use JSON_INSERT when you only want to add new keys, or JSON_REPLACE when you only want to update existing ones.
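The upsert behavior described above in one call:

```sql
SELECT JSON_SET('{"a": 1}', '$.a', 10, '$.b', 2);
-- $.a exists, so it is updated to 10; $.b is missing, so it is inserted
```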
⚠Anti-Pattern— Using JSON_INSERT expecting it to update
Developers coming from key-value stores expect JSON_INSERT to overwrite existing keys like a set operation.
✓ Instead: Use JSON_SET when you want upsert (insert-or-replace) semantics.
JSON_INSERT adds a key only if it is absent; it never overwrites an existing value. This makes it safe for initializing default fields on existing JSON documents without clobbering data.
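The insert-only behavior in practice:

```sql
SELECT JSON_INSERT('{"a": 1}', '$.a', 99, '$.b', 2);
-- $.a already exists and keeps its value 1; only $.b is added
```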
⚠Anti-Pattern— Expecting JSON_REPLACE to add missing keys
JSON_REPLACE silently skips paths that do not exist, so missing keys produce no error and no change.
✓ Instead: Use JSON_SET if you want insert-or-update behavior.
Unlike JSON_SET, JSON_REPLACE returns the document unchanged if the specified path does not exist. Use it when you want to guard against accidentally adding unexpected keys.
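The update-only behavior in practice:

```sql
SELECT JSON_REPLACE('{"a": 1}', '$.a', 10, '$.b', 2);
-- $.a is updated to 10; $.b does not exist, so it is silently skipped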
JSON_REMOVE only accepts path expressions (positions), not values — you cannot pass a value to remove.
✓ Instead: Use JSON_SEARCH to find the path of the value first, then pass that path to JSON_REMOVE.
JSON_REMOVE accepts multiple path arguments and removes all matching paths in left-to-right order. Removing a parent path automatically removes all children — you do not need to list nested keys separately.
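Removing a parent path takes its children with it:

```sql
SELECT JSON_REMOVE('{"a": 1, "b": {"c": 2, "d": 3}}', '$.b');
-- removing $.b also removes $.b.c and $.b.d; only "a" remains
```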
⚠Anti-Pattern— Expecting JSON_MERGE to overwrite duplicates
Unlike a typical dictionary merge, JSON_MERGE collects duplicate key values into an array rather than replacing them.
✓ Instead: Use JSON_MERGE_PATCH if you want last-writer-wins semantics on duplicate keys.
JSON_MERGE is an alias for JSON_MERGE_PRESERVE and generates a deprecation warning. JSON_MERGE_PATCH (RFC 7396) replaces duplicate keys with the new value, which is usually what you want for config updates. JSON_MERGE_PRESERVE combines arrays for duplicate keys.
JSON_MERGE_PATCH replaces nested objects wholesale rather than merging them recursively, so sibling keys in a nested object can be lost.
✓ Instead: Use JSON_MERGE_PRESERVE if you need deep array/object merging, or use JSON_SET for targeted key updates.
JSON_MERGE_PATCH follows RFC 7396: top-level keys from the second document overwrite keys in the first. Setting a key to JSON null in the patch removes that key from the result — this is the standard way to delete a field via merge.
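A sketch of both RFC 7396 rules, overwrite and null-deletion, in one call:

```sql
SELECT JSON_MERGE_PATCH('{"a": 1, "b": 2, "c": 3}',
                        '{"b": 20, "c": null}');
-- "b" is overwritten with 20, "c" is deleted by the null, "a" is untouched
```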
⚠Anti-Pattern— Using JSON_MERGE_PRESERVE when you want last-writer-wins
If you expect duplicate keys to be overwritten, JSON_MERGE_PRESERVE will silently wrap them in arrays instead.
✓ Instead: Use JSON_MERGE_PATCH for RFC 7396 last-writer-wins semantics.
When both documents have the same key, JSON_MERGE_PRESERVE wraps the values into an array and concatenates them. This is ideal for accumulating tag lists or event arrays across multiple JSON documents.
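The accumulate-on-duplicate behavior for a tag list:

```sql
SELECT JSON_MERGE_PRESERVE('{"tags": ["a"]}', '{"tags": ["b", "c"]}');
-- duplicate key "tags": the arrays are concatenated into ["a", "b", "c"]
```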
Passing a JSON string literal to JSON_ARRAY treats it as a plain string and double-encodes it.
✓ Instead: Use JSON_EXTRACT or CAST to materialise the JSON value before passing it, or use JSON_MERGE to combine arrays.
Unlike GROUP_CONCAT which drops NULLs, JSON_ARRAY includes NULL arguments as JSON null values. This makes it the right choice when the position of each element in the array is significant.
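NULL handling in JSON_ARRAY keeps element positions stable:

```sql
SELECT JSON_ARRAY(1, 'two', NULL, 3.5);
-- NULL is kept as a JSON null in place: elements 1, "two", null, 3.5
```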
Passing an odd number of arguments causes an error because keys and values must be paired.
✓ Instead: Always supply key-value pairs; use NULL as a placeholder value if needed.
JSON_OBJECT alternates key-value arguments. Keys are automatically converted to strings, but passing a NULL key raises an error. Wrap dynamic key expressions in CAST(... AS CHAR) when in doubt.
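Building an object from alternating key-value arguments:

```sql
SELECT JSON_OBJECT('id', 1, 'name', 'Alice');
-- builds {"id": 1, "name": "Alice"} without manual quoting or escaping
```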
⚠Anti-Pattern— Using JSON_QUOTE to produce a scalar number
JSON_QUOTE always produces a string literal; calling JSON_QUOTE(42) returns '"42"' (a string), not the number 42.
✓ Instead: Pass numeric values directly to JSON_SET or JSON_OBJECT without quoting.
JSON_QUOTE escapes special characters (backslash, double-quote, control characters) and wraps the value in double quotes. This is essential when building JSON strings by concatenation rather than using JSON_OBJECT.
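Escaping in action when building JSON by concatenation:

```sql
SELECT JSON_QUOTE('say "hi"');
-- wraps the value and escapes the inner double-quotes: "say \"hi\""
```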
⚠Anti-Pattern— Applying JSON_UNQUOTE to a non-string JSON value
Calling JSON_UNQUOTE on a JSON number or object just returns the value as-is — it only truly unquotes JSON strings.
✓ Instead: Use CAST or CONVERT when you need to coerce a JSON number or boolean to a SQL type.
JSON_UNQUOTE removes the outer double-quotes from a JSON string value and unescapes backslash sequences. MariaDB has no ->> shorthand, so JSON_UNQUOTE(JSON_EXTRACT(...)) is the standard way to get unquoted text; JSON_VALUE is a more concise alternative for scalar leaf values.
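The string-only behavior called out above:

```sql
SELECT JSON_UNQUOTE('"hello"');   -- → hello (outer quotes removed)
SELECT JSON_UNQUOTE('42');        -- → 42 (not a JSON string; returned as-is)
```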
⚠Anti-Pattern— Passing an unquoted string as the candidate
The candidate argument must be valid JSON — passing a bare string like 'sale' instead of '"sale"' causes an error.
✓ Instead: Always pass properly quoted JSON: '"sale"' for a string, '42' for a number, 'true' for a boolean.
JSON_CONTAINS parses the document text on every row; for frequent membership checks, extract the searched path into a generated column, index it, and filter on that column instead. This turns a LIKE '%value%'-style full scan into a fast index lookup.
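A sketch of the generated-column pattern, assuming a hypothetical orders table with a JSON attrs column:

```sql
-- Extract the hot path into an indexed generated column (hypothetical table)
ALTER TABLE orders
  ADD COLUMN status VARCHAR(20)
    AS (JSON_UNQUOTE(JSON_EXTRACT(attrs, '$.status'))) PERSISTENT,
  ADD INDEX idx_status (status);

-- The filter now uses idx_status instead of scanning JSON text:
SELECT id FROM orders WHERE status = 'shipped';
```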
⚠Anti-Pattern— Using JSON_CONTAINS_PATH to check a value, not a path
JSON_CONTAINS_PATH only tests key/path existence, not whether the value at that path matches anything.
✓ Instead: Use JSON_CONTAINS to test for a specific value at a path.
The second argument controls whether ANY of the listed paths must exist ('one') or ALL of them must exist ('all'). 'one' is equivalent to an OR check; 'all' is an AND check across multiple paths.
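The OR/AND distinction side by side:

```sql
SELECT JSON_CONTAINS_PATH('{"a": 1, "b": 2}', 'one', '$.a', '$.c');
-- → 1  (OR semantics: $.a exists)

SELECT JSON_CONTAINS_PATH('{"a": 1, "b": 2}', 'all', '$.a', '$.c');
-- → 0  (AND semantics: $.c is missing)
```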
JSON_SEARCH only works on JSON string values — it cannot find numbers, booleans, or null values.
✓ Instead: For non-string values, use JSON_CONTAINS or iterate with JSON_EXTRACT on known paths.
JSON_SEARCH with 'one' mode stops at the first match and is faster than 'all' mode when you only need to confirm presence. Use 'all' mode when you need every matching path, such as for audit or transform operations.
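The two modes compared:

```sql
SELECT JSON_SEARCH('["a", "b", "a"]', 'one', 'a');
-- → "$[0]"  (stops at the first match)

SELECT JSON_SEARCH('["a", "b", "a"]', 'all', 'a');
-- → ["$[0]", "$[2]"]  (every matching path)
```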
JSON_KEYS returns NULL for JSON arrays — it only applies to JSON objects.
✓ Instead: Use JSON_LENGTH to count array elements or JSON_EXTRACT with $[*] to enumerate array values.
JSON_KEYS returns the top-level keys of a JSON object as a JSON array. When your schema evolves and columns contain documents with varying keys, JSON_KEYS lets you discover which keys are present without a schema change.
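Key discovery on an object, and the array gotcha noted above:

```sql
SELECT JSON_KEYS('{"id": 1, "name": "Alice"}');
-- → ["id", "name"]

SELECT JSON_KEYS('[1, 2, 3]');
-- → NULL  (arrays have no keys)
```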
Returns the number of elements in a JSON array, keys in a JSON object, or 1 for a scalar.
Signature
JSON_LENGTH(json_doc[, path])
Parameters
Parameter
Type
Description
json_doc
JSON
The JSON document to measure
path
VARCHAR
Optional path to a sub-document to measure instead
Examples
Array length
sql
SELECT JSON_LENGTH('[1,2,3]');
→ 3
Object key count
sql
SELECT JSON_LENGTH('{"a":1,"b":2}');
→ 2
Scalar always returns 1
sql
SELECT JSON_LENGTH('"hello"');
→ 1
Length of nested array
sql
SELECT JSON_LENGTH(data, '$.tags') FROM posts;
→ 4
Filter by array size
sql
SELECT * FROM orders WHERE JSON_LENGTH(items) > 5;
→ large orders
⚠Anti-Pattern— Using JSON_LENGTH to check if a document is empty without guarding NULL
JSON_LENGTH('{}') returns 0 and JSON_LENGTH('[]') returns 0, but JSON_LENGTH(NULL) returns NULL — missing the NULL case leads to unexpected query results.
✓ Instead: Use COALESCE(JSON_LENGTH(col), 0) = 0 to treat NULL documents as empty.
JSON_LENGTH returns the count of elements for JSON arrays and the count of keys for JSON objects. For scalar values it returns 1. Use it to quickly filter documents with more than N items without deserializing the full structure.
example
SELECT id FROM orders WHERE JSON_LENGTH(items) > 5;
→ orders with more than 5 line items
◆MariaDB Note
Available in all MariaDB versions.
⇄ vs MySQL: Identical in MySQL 5.7+.
⇄ vs PostgreSQL: PostgreSQL uses jsonb_array_length() for arrays and jsonb_object_keys() count for objects.
⚠Anti-Pattern— Using JSON_DEPTH to count array elements
JSON_DEPTH measures nesting levels, not element count — a flat array of 1000 elements has depth 2.
✓ Instead: Use JSON_LENGTH to count elements.
JSON_DEPTH returns 1 for scalars, 2 for a flat array or object, and higher for nested structures. Use JSON_DEPTH in a CHECK constraint or validation query to prevent deeply nested documents that degrade parsing performance.
example
SELECT id FROM configs WHERE JSON_DEPTH(settings) > 4;
→ rows with unexpectedly deep nesting — candidates for schema review
◆MariaDB Note
Available in all MariaDB versions.
⇄ vs MySQL: Identical in MySQL 5.7+.
⇄ vs PostgreSQL: PostgreSQL has no direct equivalent; use a recursive function or extension.
⚠Anti-Pattern— Skipping JSON_VALID before parsing untrusted input
Calling JSON_EXTRACT or JSON_SET on invalid JSON causes an error that can crash stored procedures or application queries.
✓ Instead: Validate with JSON_VALID() first, especially for data ingested from external sources.
JSON columns in MariaDB are stored as LONGTEXT with a JSON_VALID constraint applied. Calling JSON_VALID() manually in application code before an INSERT prevents round-trip failures and provides a clear error message.
example
SELECT JSON_VALID(?) AS is_valid; -- bind parameter from application
→ 1 if valid JSON, 0 if not
◆MariaDB Note
Available in all MariaDB versions.
⇄ vs MySQL: Identical in MySQL 5.7+.
⇄ vs PostgreSQL: PostgreSQL validates JSON implicitly on cast to json/jsonb; use a TRY CAST or error-handling block for explicit validation.
⚠Anti-Pattern— Storing JSON_DETAILED output back into the column
The extra whitespace inflates storage size significantly for large documents.
✓ Instead: Use JSON_DETAILED only for display/debugging; store and transmit compact JSON.
JSON_DETAILED reformats a compact JSON document with indentation and newlines, making it easy to inspect in a terminal or log file. It is equivalent to JSON_PRETTY in MySQL and is intended for development/debugging only.
⚠Anti-Pattern— Applying JSON_COMPACT repeatedly on reads in a hot query path
JSON_COMPACT parses and re-serialises the document on every call, adding CPU overhead that is unnecessary if the data is already compact.
✓ Instead: Compact data once at write time and store it; avoid repeated compaction on reads.
JSON_COMPACT removes all unnecessary whitespace from a JSON document, reducing storage size. Run it on data imported from external APIs or user input that may contain inconsistent formatting.
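Whitespace removal in action:

```sql
SELECT JSON_COMPACT('{ "a" : 1 , "b" : [ 1 , 2 ] }');
-- → {"a":1,"b":[1,2]}
```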
Returns a loosely formatted JSON document with spaces after colons and commas for light readability.
Signature
JSON_LOOSE(json_doc)
Parameters
Parameter
Type
Description
json_doc
JSON
The JSON document to format loosely
Examples
Space after colon and comma
sql
SELECT JSON_LOOSE('{"a":1,"b":2}');
→ {"a": 1, "b": 2}
Spaced array
sql
SELECT JSON_LOOSE('[1,2,3]');
→ [1, 2, 3]
From JSON_OBJECT
sql
SELECT JSON_LOOSE(JSON_OBJECT('x', 1, 'y', 2));
→ {"x": 1, "y": 2}
Column formatting
sql
SELECT JSON_LOOSE(data) FROM settings WHERE id = 1;
→ lightly formatted JSON
Nested structure
sql
SELECT JSON_LOOSE('{"arr":[1,2,{"k":"v"}]}');
→ {"arr": [1, 2, {"k": "v"}]}
⚠Anti-Pattern— Confusing JSON_LOOSE with JSON_DETAILED for deep debugging
JSON_LOOSE adds spaces but does not add newlines or indentation — deeply nested structures remain on one long line.
✓ Instead: Use JSON_DETAILED when you need full indented multi-line output for inspection.
JSON_LOOSE adds spaces after colons and commas for readability without full indentation. It produces a single-line readable format suitable for logs where full JSON_DETAILED output would be too verbose.
example
SELECT JSON_LOOSE('{"a":1,"b":2}');
→ {"a": 1, "b": 2}
◆MariaDB Note
MariaDB-specific function with no MySQL equivalent.
⇄ vs MySQL: MySQL has no JSON_LOOSE(); this is a MariaDB-specific function.
⇄ vs PostgreSQL: PostgreSQL has no equivalent; jsonb output is either compact or pretty (jsonb_pretty()).
⚠Anti-Pattern— Storing JSON_PRETTY output in application variables for large result sets
Storing pretty-printed JSON in application memory bloats RAM usage for large result sets.
✓ Instead: Apply JSON_PRETTY only in the final display layer; operate on compact JSON internally.
JSON_PRETTY was added to MariaDB for MySQL compatibility. Both JSON_PRETTY and JSON_DETAILED produce indented output, but JSON_DETAILED is the original MariaDB function. Use JSON_PRETTY in code that must run on both MySQL and MariaDB.
example
SELECT JSON_PRETTY('{"id":1,"name":"Alice"}');
→{
"id": 1,
"name": "Alice"
}
◆MariaDB Note
Functionally equivalent to JSON_DETAILED with default indent; added for MySQL compatibility.
⇄ vs MySQL: Available in MySQL 8.0+ with the same signature — use JSON_PRETTY for cross-compatibility.
⇄ vs PostgreSQL: PostgreSQL uses jsonb_pretty() on jsonb values.
⚠Anti-Pattern— Using JSON_ARRAYAGG without GROUP BY on large tables
Without GROUP BY, JSON_ARRAYAGG collects every row in the table into one array in memory, which can exhaust server RAM.
✓ Instead: Always pair with GROUP BY or use LIMIT / a subquery to cap the input set.
Without ORDER BY, the order of elements in the resulting JSON array is unspecified. Specify ORDER BY inside the aggregate to guarantee a stable output order, which is important for caching and regression testing.
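A sketch with a hypothetical orders table; the ORDER BY inside the aggregate pins element order:

```sql
SELECT customer_id,
       JSON_ARRAYAGG(product ORDER BY ordered_at) AS products
FROM orders
GROUP BY customer_id;
-- one JSON array per customer, elements in chronological order
```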
⚠Anti-Pattern— Assuming duplicate keys are handled deterministically
If multiple rows produce the same key, JSON_OBJECTAGG keeps only one value per key (typically the last one aggregated) and silently discards the others.
✓ Instead: Ensure keys are unique before aggregation, using DISTINCT on the key expression or pre-filtering.
JSON_OBJECTAGG silently uses the last value when duplicate keys appear. If your key column might have duplicates within a group, add a DISTINCT or pre-aggregate to ensure each key appears once.
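A sketch with a hypothetical settings table, where (user_id, name) is assumed unique so keys cannot collide:

```sql
SELECT JSON_OBJECTAGG(name, value) AS config
FROM settings
WHERE user_id = 1;
-- one JSON object mapping each setting name to its value
```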
⚠Anti-Pattern— Using JSON_TABLE on a non-array path
If the path expression does not point to an array, JSON_TABLE returns at most one row, which is often not what is intended.
✓ Instead: Ensure the path ends with [*] or points to a JSON array; use JSON_TYPE to verify beforehand.
JSON_TABLE converts a JSON array into a virtual relational table inline. This allows full SQL operations (JOIN, WHERE, GROUP BY) against JSON data without application-side parsing. It is the most powerful way to consume JSON arrays from an API or log column.
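Shredding an inline array into rows that ordinary SQL can filter:

```sql
SELECT t.name, t.qty
FROM JSON_TABLE(
  '[{"name": "pen", "qty": 2}, {"name": "ink", "qty": 5}]',
  '$[*]' COLUMNS (
    name VARCHAR(20) PATH '$.name',
    qty  INT         PATH '$.qty'
  )
) AS t
WHERE t.qty > 3;
-- → one row: name = 'ink', qty = 5
```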
⚠Anti-Pattern— Using JSON_EXISTS on MariaDB versions before 10.6.1
JSON_EXISTS is only available from MariaDB 10.6.1+; using it on earlier versions causes a function-not-found error.
✓ Instead: Use JSON_CONTAINS_PATH for compatibility with MariaDB 10.2–10.5 installations.
JSON_EXISTS returns TRUE/FALSE and short-circuits on first match. It is slightly more concise than JSON_CONTAINS_PATH(..., 'one', path) and maps directly to the SQL/JSON standard, making code more portable.
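The boolean path check in its simplest form:

```sql
SELECT JSON_EXISTS('{"a": {"b": 1}}', '$.a.b');  -- → 1
SELECT JSON_EXISTS('{"a": {"b": 1}}', '$.a.c');  -- → 0
```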
⚠Anti-Pattern— Using very large or deeply recursive schemas
Highly complex JSON Schema documents with $ref cycles or allOf/anyOf chains can cause significant validation overhead per row.
✓ Instead: Keep schemas focused on essential constraints and validate complex business rules in the application layer.
JSON_SCHEMA_VALID validates a JSON document against a JSON Schema. Adding it as a CHECK constraint on a JSON column enforces structure at the database level, catching malformed documents on INSERT/UPDATE.
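A sketch of the CHECK-constraint pattern on a hypothetical events table (JSON_SCHEMA_VALID requires MariaDB 11.1+; the schema shown is a minimal example):

```sql
CREATE TABLE events (
  id      INT PRIMARY KEY,
  payload LONGTEXT,
  -- reject any payload that is not an object containing a "kind" key
  CHECK (JSON_SCHEMA_VALID(
    '{"type": "object", "required": ["kind"]}',
    payload))
);
```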
⚠Anti-Pattern— Relying on JSON_NORMALIZE to sort array elements
JSON_NORMALIZE sorts object keys but does not sort array elements — two arrays with the same elements in different order will not normalise to the same string.
✓ Instead: Sort array contents in the application layer before storing if order-independent array equality is required.
JSON_NORMALIZE sorts object keys and removes insignificant whitespace. Two JSON documents representing the same data compare equal after normalization, making it safe to use = or checksum comparisons on JSON columns.
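Normalization makes key order and whitespace irrelevant to equality:

```sql
SELECT JSON_NORMALIZE('{"b": 2, "a": 1}') =
       JSON_NORMALIZE('{ "a" : 1, "b" : 2 }') AS same_doc;
-- → 1  (keys sorted and whitespace stripped, so equal documents compare equal)
```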