
Support json_object pushdown #10744

Merged
ti-chi-bot[bot] merged 6 commits into pingcap:master from windtalker:json_object_pushdown_master_pr
Mar 17, 2026

Conversation

@windtalker (Contributor) commented Mar 16, 2026

What problem does this PR solve?

Issue Number: close #10743

Problem Summary:

json_object can already be planned for TiFlash pushdown on the TiDB side, but TiFlash execution on master was still missing the function implementation and DAG mapping. This PR adds the missing TiFlash support so pushed-down json_object expressions can run end-to-end.

What is changed and how it works?

Support json_object pushdown:
  • add TiFlash json_object execution support in the JSON function implementation
  • register json_object in the DAG/function mapping used by pushdown execution
  • add unit coverage for empty objects, mixed value types, duplicate keys, and NULL key errors
  • add fullstack coverage for TiFlash execution; keep the EXPLAIN assertion commented out temporarily until the TiDB-side explain output change is merged

Check List

Tests

  • Unit test
  • Integration test
  • Manual test (add detailed scripts or steps below)
  • No code

Side effects

  • Performance regression: Consumes more CPU
  • Performance regression: Consumes more Memory
  • Breaking backward compatibility

Documentation

  • Affects user behaviors
  • Contains syntax changes
  • Contains variable changes
  • Contains experimental features
  • Changes MySQL compatibility

Release note

Support pushing down `json_object` to TiFlash.

Summary by CodeRabbit

  • New Features

    • Added json_object() to construct JSON objects from alternating key/value inputs; supports nullable values, dynamic keys, empty-object result, duplicate-key semantics, and runtime null-key validation. Available in distributed execution.
  • Serialization

    • Added binary JSON object encoding for efficient object payload construction.
  • Bug Fixes

    • Improved validation and error reporting for oversized keys.
  • Tests

    • Added unit and full-stack tests covering semantics, errors, and execution plans.

@ti-chi-bot ti-chi-bot bot added do-not-merge/needs-linked-issue do-not-merge/release-note-label-needed Indicates that a PR should not merge because it's missing one of the release note labels. labels Mar 16, 2026
@pantheon-ai

pantheon-ai bot commented Mar 16, 2026

Review failed due to infrastructure/execution failure after retries. Please re-trigger review.


@ti-chi-bot ti-chi-bot bot added the size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. label Mar 16, 2026
@coderabbitai

coderabbitai bot commented Mar 16, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Adds TiFlash pushdown and execution support for json_object: registers and implements a variadic FunctionJsonObject, maps TiDB JsonObjectSig to "json_object" for DAG pushdown, implements binary JSON object encoding, and adds unit and fullstack tests covering semantics and error cases.
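The walkthrough's per-row shape (even-argument validation, alternating key/value accumulation into an ordered map, with a later duplicate key overwriting an earlier one, the "last wins" behavior one review comment below describes) can be sketched in isolation. This is a hedged illustration only, not the actual TiFlash FunctionJsonObject; values are simplified to plain strings and no binary encoding is done:

```cpp
#include <cassert>
#include <map>
#include <stdexcept>
#include <string>
#include <vector>

// Illustrative only: pair up alternating key/value arguments, reject odd
// argument counts, and accumulate into std::map so that keys come out in
// ascending order and a later duplicate key overwrites an earlier one.
std::string jsonObjectSketch(const std::vector<std::string> & args)
{
    if (args.size() % 2 != 0)
        throw std::invalid_argument("json_object requires an even number of arguments");
    std::map<std::string, std::string> kv;
    for (size_t i = 0; i < args.size(); i += 2)
        kv[args[i]] = args[i + 1]; // operator[] makes the last occurrence win
    std::string out = "{";
    for (const auto & [key, value] : kv)
    {
        if (out.size() > 1)
            out += ",";
        out += "\"" + key + "\":\"" + value + "\"";
    }
    out += "}";
    return out;
}
```

With input `{"b", "2", "a", "1", "b", "3"}` this yields ascending keys with the last duplicate kept, the same two invariants the binary encoder and its lookup path depend on.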

Changes

Cohort / File(s) Summary
DAG Function Mapping
dbms/src/Flash/Coprocessor/DAGUtils.cpp
Activated mapping of JsonObjectSig to "json_object" in the scalar function map for DAG pushdown.
Function Registration
dbms/src/Functions/FunctionsJson.cpp
Registered FunctionJsonObject so json_object is available at runtime.
Core Function Implementation
dbms/src/Functions/FunctionsJson.h
Added FunctionJsonObject (variadic) implementing key/value JSON object construction: even-arg validation, string key enforcement, nullable values, key-length checks, per-row accumulation and binary serialization. Note: header contains duplicate insertion points of the same class.
JSON Binary Encoding
dbms/src/TiDB/Decode/JsonBinary.h, dbms/src/TiDB/Decode/JsonBinary.cpp
Added buildBinaryJsonObjectInBuffer to encode binary JSON objects from key/value vectors; improved key-length error handling and exposed ARGUMENT_OUT_OF_BOUND error code.
Unit Tests
dbms/src/Functions/tests/gtest_json_object.cpp
Added gtests for basic semantics (empty input, mixed types, NULLs), duplicate-key behavior, and error cases (odd arg count, invalid key types, overly long keys).
Integration Tests
tests/fullstack-test/expr/json_object.test
Added fullstack SQL tests exercising MPP/TiFlash pushdown, empty/mixed inputs, dynamic keys, duplicate keys, NULL semantics, runtime NULL-key error, and plan verification.

Sequence Diagram

sequenceDiagram
    participant Planner as TiDB Planner
    participant DAGUtils as DAG Mapper
    participant TiFlash as TiFlash Executor
    participant Func as FunctionJsonObject
    participant JsonBin as JsonBinary Encoder
    participant Output as Result Buffer

    Planner->>DAGUtils: build DAG with JsonObjectSig
    DAGUtils->>TiFlash: map to "json_object" and send DAG
    TiFlash->>Func: invoke json_object per row
    Func->>Func: validate args (even count, string keys)
    Func->>Func: collect key/value pairs
    Func->>JsonBin: buildBinaryJsonObjectInBuffer(keys, values)
    JsonBin->>JsonBin: write type, counts, offsets, keys, values, total size
    JsonBin->>Output: emit binary JSON per row
    Output-->>Planner: return results
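The diagram's encoding step ("write type, counts, offsets, keys, values, total size") can be sketched with a toy encoder. Every concrete constant here is an assumption for illustration: the 0x01 type code, the 4-byte count, and the 6-byte key entries are stand-ins, not the actual TiDB binary JSON layout. Only the overall shape follows the walkthrough: fixed-size header entries first, variable-size payload after, and the total size back-patched once the buffer length is known.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Simplified stand-in for an object encoder like buildBinaryJsonObjectInBuffer.
// Not the real TiDB layout; values are appended raw for brevity.
std::vector<uint8_t> encodeObjectSketch(
    const std::vector<std::string> & keys,
    const std::vector<std::string> & values)
{
    assert(keys.size() == values.size());
    std::vector<uint8_t> buf;
    auto putU32 = [&](uint32_t v) {
        for (int i = 0; i < 4; ++i)
            buf.push_back(static_cast<uint8_t>(v >> (8 * i))); // little-endian
    };
    buf.push_back(0x01); // assumed type code for "object"
    putU32(static_cast<uint32_t>(keys.size())); // element count
    const size_t total_size_pos = buf.size();
    putU32(0); // total size placeholder, patched below
    // key entries: 4-byte offset + 2-byte length (hence the 65535 key limit)
    uint32_t off = static_cast<uint32_t>(buf.size() + keys.size() * 6);
    for (const auto & k : keys)
    {
        putU32(off);
        const uint16_t len = static_cast<uint16_t>(k.size());
        buf.push_back(static_cast<uint8_t>(len & 0xff));
        buf.push_back(static_cast<uint8_t>(len >> 8));
        off += static_cast<uint32_t>(k.size());
    }
    for (const auto & k : keys)
        buf.insert(buf.end(), k.begin(), k.end());
    for (const auto & v : values)
        buf.insert(buf.end(), v.begin(), v.end());
    const uint32_t total = static_cast<uint32_t>(buf.size());
    std::memcpy(buf.data() + total_size_pos, &total, 4); // patch (assumes LE host)
    return buf;
}
```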

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐰 I hop through keys and pairs with cheer,
Building objects, row by row so clear,
Offsets tucked and depths checked tight,
Duplicates kept by first-seen right,
A tiny rabbit ships JSON tonight.

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: docstring coverage is 16.67%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
✅ Passed checks (4 passed)
  • Title check ✅ Passed: The title 'Support json_object pushdown' clearly and concisely summarizes the main change: enabling TiFlash execution support for the json_object function through pushdown.
  • Description check ✅ Passed: The PR description fully addresses the template requirements: problem statement with issue reference, detailed changes, completed test checklist with unit and integration tests, side effects assessment, and release note provided.
  • Linked Issues check ✅ Passed: All linked issue requirements are met: json_object execution support added to TiFlash [FunctionsJson.h, FunctionsJson.cpp], DAG mapping registered [DAGUtils.cpp], unit tests for NULL handling and duplicate keys [gtest_json_object.cpp], and fullstack integration tests [json_object.test].
  • Out of Scope Changes check ✅ Passed: All changes are directly scoped to json_object pushdown support: function implementation, DAG registration, unit/integration tests, and JsonBinary helper method for object encoding.



@ti-chi-bot ti-chi-bot bot added do-not-merge/needs-triage-completed release-note Denotes a PR that will be considered when it comes time to generate release notes. and removed do-not-merge/needs-linked-issue do-not-merge/release-note-label-needed Indicates that a PR should not merge because it's missing one of the release note labels. labels Mar 16, 2026
@windtalker
Contributor Author

/check-issue-triage-complete

@windtalker
Contributor Author

/run-check-issue-triage-complete


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@dbms/src/Functions/FunctionsJson.h`:
- Around line 1110-1116: The two new throw Exception(...) calls in
FunctionsJson.h omit ErrorCodes; update them to use DB::Exception with an
appropriate ErrorCodes value: for the NULL member name throw use throw
Exception("JSON documents may not contain NULL member names.",
ErrorCodes::BAD_ARGUMENTS) and for the oversized key throw use throw
Exception("TiDB/TiFlash does not yet support JSON objects with the key length >=
65536", ErrorCodes::TOO_LARGE_STRING) so downstream error handling can match on
ErrorCodes.

In `@dbms/src/Functions/tests/gtest_json_object.cpp`:
- Line 15: Remove the unnecessary include of <Columns/ColumnNullable.h> from
gtest_json_object.cpp since the test does not reference ColumnNullable; delete
the `#include` "<Columns/ColumnNullable.h>" directive (or replace it only if a
different, actually-used header is required by tests in this file) and then
rebuild/run tests to verify the unresolved include warning is gone.

In `@dbms/src/TiDB/Decode/JsonBinary.cpp`:
- Around line 1123-1125: The throw for oversized JSON key should include a
concrete ErrorCodes value; update the validation branch that checks key.size
(before UInt16 key_len = key.size) to throw DB::Exception (or Exception) with an
explicit ErrorCodes constant, e.g. replace throw Exception("TiDB/TiFlash does
not yet support JSON objects with the key length >= 65536"); with throw
Exception("TiDB/TiFlash does not yet support JSON objects with the key length >=
65536", ErrorCodes::ARGUMENT_OUT_OF_BOUND); so the Exception construction
includes ErrorCodes::ARGUMENT_OUT_OF_BOUND to classify the error.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 03906700-8e79-4001-be11-30024edbffa7

📥 Commits

Reviewing files that changed from the base of the PR and between b41b3cf and ea1226a.

📒 Files selected for processing (7)
  • dbms/src/Flash/Coprocessor/DAGUtils.cpp
  • dbms/src/Functions/FunctionsJson.cpp
  • dbms/src/Functions/FunctionsJson.h
  • dbms/src/Functions/tests/gtest_json_object.cpp
  • dbms/src/TiDB/Decode/JsonBinary.cpp
  • dbms/src/TiDB/Decode/JsonBinary.h
  • tests/fullstack-test/expr/json_object.test


@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (2)
dbms/src/Functions/tests/gtest_json_object.cpp (1)

15-15: ⚠️ Potential issue | 🔴 Critical

Remove unresolved header include before merge.

Line 15 includes <Columns/ColumnNullable.h>, and static analysis reports it as missing; this blocks compilation.

🛠️ Proposed fix
-#include <Columns/ColumnNullable.h>
 #include <TestUtils/FunctionTestUtils.h>
 #include <TestUtils/TiFlashTestBasic.h>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dbms/src/Functions/tests/gtest_json_object.cpp` at line 15, Remove the
unresolved include <Columns/ColumnNullable.h> from the test file
(gtest_json_object.cpp); locate the include line that references
ColumnNullable.h and delete it (or replace it with the correct existing header
if the test actually needs nullable column types—verify usage of ColumnNullable
or ColumnNullable::Ptr in the file and include the proper header name instead).
Ensure compilation passes by either removing unused dependency or adding the
correct header that exists in the codebase.
dbms/src/Functions/FunctionsJson.h (1)

1110-1116: ⚠️ Potential issue | 🟡 Minor

Add ErrorCodes to new exceptions.

At Line 1110 and Line 1116, new Exception throws omit ErrorCodes, which makes error handling less consistent.

🛠️ Proposed fix
-                        throw Exception("JSON documents may not contain NULL member names.");
+                        throw Exception(
+                            "JSON documents may not contain NULL member names.",
+                            ErrorCodes::ILLEGAL_COLUMN);

-                    throw Exception("TiDB/TiFlash does not yet support JSON objects with the key length >= 65536");
+                    throw Exception(
+                        "TiDB/TiFlash does not yet support JSON objects with the key length >= 65536",
+                        ErrorCodes::ILLEGAL_COLUMN);

As per coding guidelines, use DB::Exception for error handling in C++, following the pattern throw Exception("Message", ErrorCodes::SOME_CODE);.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dbms/src/Functions/FunctionsJson.h` around lines 1110 - 1116, The two new
throws in FunctionsJson.h (the NULL member-name check and the key length >=
65536 check) must use DB::Exception with an ErrorCodes enum value; update both
throw sites to throw Exception("JSON documents may not contain NULL member
names.", ErrorCodes::BAD_ARGUMENTS) and throw Exception("TiDB/TiFlash does not
yet support JSON objects with the key length >= 65536",
ErrorCodes::BAD_ARGUMENTS) (or another appropriate ErrorCodes constant), and
add/include the ErrorCodes declaration if not present so the identifiers
resolve.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: a7c32392-2130-416a-9655-2e3c3a5f255c

📥 Commits

Reviewing files that changed from the base of the PR and between 2ca4e2f and 3c587a6.

📒 Files selected for processing (2)
  • dbms/src/Functions/FunctionsJson.h
  • dbms/src/Functions/tests/gtest_json_object.cpp

{
const auto * key_nullmap = nullmaps[col];
if (!sources[col] || (key_nullmap && (*key_nullmap)[i]))
throw Exception("JSON documents may not contain NULL member names.");

[P2] Missing ErrorCodes on json_object execution-time errors

Why: DB::Exception defaults to code 0 when no error code is provided, which can misclassify user-facing errors and violate project error-handling guidance. The error code 0 is propagated into DAG responses, impacting observability and error classification downstream.

Evidence: throw Exception("JSON documents may not contain NULL member names."); at dbms/src/Functions/FunctionsJson.h:1110

The Exception constructor defaults to code 0 (dbms/src/Common/Exception.h:48), and this code is then observable in tipb::Error responses (dbms/src/Flash/Coprocessor/DAGDriver.cpp:307, 335). Note that other execution-path exceptions in FunctionsJson.h also lack error codes (lines 1116, 1696).
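The default-code-0 behavior this comment relies on is easy to mimic in isolation. The following is a minimal stand-in, not the real DB::Exception, and the BAD_ARGUMENTS value 36 is a made-up number for illustration; it only shows why an omitted code argument silently becomes 0 and defeats downstream classification:

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// Mimic of an exception type that carries an integer error code and
// defaults it to 0 when the caller omits it (the behavior flagged above).
class CodedException : public std::runtime_error
{
public:
    explicit CodedException(const std::string & msg, int code = 0)
        : std::runtime_error(msg)
        , code_(code)
    {}
    int code() const { return code_; }

private:
    int code_;
};

constexpr int BAD_ARGUMENTS = 36; // hypothetical code value, for illustration

int classify()
{
    try
    {
        throw CodedException("JSON documents may not contain NULL member names.", BAD_ARGUMENTS);
    }
    catch (const CodedException & e)
    {
        return e.code(); // an explicit code lets handlers classify the error
    }
}
```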

windtalker (Contributor Author):
Refined


@pantheon-ai pantheon-ai bot Mar 16, 2026


Issue addressed - ErrorCodes added to json_object error handling.


reserve_size += source ? source->getSizeForReserve() : rows;
JsonBinary::JsonBinaryWriteBuffer write_buffer(data_to, reserve_size);

std::map<String, JsonBinary> key_value_map;

[P2] json_object per-row std::map + key copies (allocation-heavy)

Why: Repeated map node allocations + String key copies per output row can add avoidable CPU/allocator overhead in large scans. The function constructs an ordered std::map<String, JsonBinary> for every row and materializes each key into a new std::string, causing repeated node allocations/deallocations in a tight loop.

Evidence: std::map<String, JsonBinary> key_value_map; ... key_value_map.clear(); ... String key(...); at dbms/src/Functions/FunctionsJson.h:1095

The map is required to maintain key ordering (as JsonBinary lookup assumes ascending order per dbms/src/TiDB/Decode/JsonBinary.cpp:924), but the current implementation is allocation-intensive. A lower-allocation approach would need to preserve ordering and "last wins" duplicate key semantics.
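A lower-allocation approach along the lines this comment hints at can be sketched as follows (hypothetical names, not code from the PR): collect the pairs into one flat vector, stable-sort by key, then keep only the last entry per key, preserving both the ascending key order and the "last wins" duplicate semantics described above:

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// Sketch: one contiguous buffer instead of per-row std::map node churn.
std::vector<std::pair<std::string, std::string>>
sortDedupLastWins(std::vector<std::pair<std::string, std::string>> pairs)
{
    // Stable sort keeps insertion order among equal keys, so within a run of
    // duplicates the last-inserted pair stays last.
    std::stable_sort(
        pairs.begin(),
        pairs.end(),
        [](const auto & a, const auto & b) { return a.first < b.first; });
    std::vector<std::pair<std::string, std::string>> out;
    out.reserve(pairs.size());
    for (size_t i = 0; i < pairs.size(); ++i)
    {
        // Skip this entry if the next one has the same key (later wins).
        if (i + 1 < pairs.size() && pairs[i + 1].first == pairs[i].first)
            continue;
        out.push_back(std::move(pairs[i]));
    }
    return out;
}
```

In a real implementation the vectors would be reused across rows (cleared, not reallocated) to keep the per-row cost down; that detail is omitted here.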

windtalker (Contributor Author):

Refined


@pantheon-ai pantheon-ai bot Mar 16, 2026


Issue addressed - allocation optimizations applied to reduce per-row overhead.



@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (3)
dbms/src/Functions/tests/gtest_json_object.cpp (2)

104-122: Cover the max accepted key length too.

The error case proves 65536 is rejected, but the off-by-one risk is 65535. A single success case at std::numeric_limits<UInt16>::max() would guard both the function-level and JsonBinary-level checks against drifting to >=.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dbms/src/Functions/tests/gtest_json_object.cpp` around lines 104 - 122, Add a
positive test that uses a key of length std::numeric_limits<UInt16>::max() to
ensure the boundary is accepted: within the TEST_F(TestJsonObject, TestErrors)
block after the existing too-long-key assertion, create a key string of length
std::numeric_limits<UInt16>::max(), call executeFunction("json_object",
{createColumn<String>({that_key}), value}) and assert it does not throw (or
assert successful result) so the max accepted key length is covered; reference
executeFunction, json_object, createColumn<String>, and
std::numeric_limits<UInt16>::max() when locating where to add this case.

67-100: Exercise a lookup path in the new tests.

These cases only validate the result via cast_json_as_string(), which exercises sequential decoding but not the search path that json_extract() / searchObjectKey() depend on. Adding one assertion against json_extract(json_object(...), '$.a') or json_keys(json_object(...)) would pin down the new binary-object invariants.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dbms/src/Functions/tests/gtest_json_object.cpp` around lines 67 - 100, In
TestJsonObject::TestBasicSemantics, add an assertion that exercises the
lookup/search path (not just cast_json_as_string) by calling json_extract or
json_keys on the json_object result produced by executeFunctionWithCast; e.g.,
after the block that builds inputs with keys "a" and "b" (the one using
executeFunctionWithCast({0,1,2,3}, inputs)), call executeFunctionWithCast or the
test helper to run json_extract(json_object(...), '$.a') (or
json_keys(json_object(...))) and ASSERT_COLUMN_EQ the expected values so that
json_extract/searchObjectKey invariants are validated in addition to sequential
decoding. Ensure you reference the same inputs/res variable and place the
assertion immediately after that block.
dbms/src/TiDB/Decode/JsonBinary.cpp (1)

1104-1174: Make the key-ordering contract explicit in the builder.

searchObjectKey() later assumes strictly ascending keys, but buildBinaryJsonObjectInBuffer() will currently serialize whatever order it gets. Today that only works because FunctionJsonObject sorts and de-dupes first, so the invariant is easy to violate from a future call site.

One defensive option
 void JsonBinary::buildBinaryJsonObjectInBuffer(
     const std::vector<StringRef> & keys,
     const std::vector<JsonBinary> & values,
     JsonBinaryWriteBuffer & write_buffer)
 {
     RUNTIME_CHECK(keys.size() == values.size());
+    for (size_t i = 1; i < keys.size(); ++i)
+        RUNTIME_CHECK(keys[i - 1] < keys[i]);
 
     write_buffer.write(TYPE_CODE_OBJECT);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dbms/src/TiDB/Decode/JsonBinary.cpp` around lines 1104 - 1174,
buildBinaryJsonObjectInBuffer currently writes keys in whatever order it
receives but searchObjectKey expects strictly ascending keys; update
buildBinaryJsonObjectInBuffer to enforce the contract by either (a) sorting the
keys (and reordering values in parallel) and removing duplicates before
encoding, or (b) adding a clear runtime check/assert that validates keys are
strictly increasing and throws a descriptive Exception if not; reference the
functions buildBinaryJsonObjectInBuffer and searchObjectKey and the caller
FunctionJsonObject to locate where to apply the sort+dedupe or the assertion.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 9c386c3b-3e7a-479f-91df-d0ab12a11750

📥 Commits

Reviewing files that changed from the base of the PR and between 3c587a6 and 932e071.

📒 Files selected for processing (3)
  • dbms/src/Functions/FunctionsJson.h
  • dbms/src/Functions/tests/gtest_json_object.cpp
  • dbms/src/TiDB/Decode/JsonBinary.cpp

@ti-chi-bot ti-chi-bot bot added needs-1-more-lgtm Indicates a PR needs 1 more LGTM. approved labels Mar 17, 2026
Co-authored-by: Liqi Geng <gengliqiii@gmail.com>
Contributor

@xzhangxian1008 xzhangxian1008 left a comment


Other lgtm

}
for (const auto arg_idx : ext::range(0, arguments.size()))
{
if (!arguments[arg_idx]->onlyNull())
xzhangxian1008 (Contributor):

When arg_idx % 2 == 0 and arguments[arg_idx]->onlyNull() == true, I think we can throw an exception here, as JSON documents may not contain NULL member names.

@ti-chi-bot ti-chi-bot bot added the lgtm label Mar 17, 2026
@ti-chi-bot
Contributor

ti-chi-bot bot commented Mar 17, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: gengliqi, xzhangxian1008

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Details: Needs approval from an approver in each of these files.

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@ti-chi-bot ti-chi-bot bot removed the needs-1-more-lgtm Indicates a PR needs 1 more LGTM. label Mar 17, 2026
@ti-chi-bot
Contributor

ti-chi-bot bot commented Mar 17, 2026

[LGTM Timeline notifier]

Timeline:

  • 2026-03-17 07:07:18.481301866 +0000 UTC m=+254365.568959393: ☑️ agreed by gengliqi.
  • 2026-03-17 10:56:26.018181049 +0000 UTC m=+268113.105838586: ☑️ agreed by xzhangxian1008.

Signed-off-by: xufei <xufeixw@mail.ustc.edu.cn>
@ti-chi-bot ti-chi-bot bot merged commit 7fe4b3b into pingcap:master Mar 17, 2026
8 checks passed

Labels

approved, lgtm, release-note (Denotes a PR that will be considered when it comes time to generate release notes.), size/XL (Denotes a PR that changes 500-999 lines, ignoring generated files.)
