
feat(ethexe-consensus): Limits for BatchCommitment abi encoding#5214

Open
ecol-master wants to merge 30 commits into master from
5205-ethexe-set-limits-for-batch-commitment-production

Conversation

@ecol-master
Member

@ecol-master ecol-master commented Mar 11, 2026

@ecol-master ecol-master linked an issue Mar 11, 2026 that may be closed by this pull request
@semanticdiff-com

semanticdiff-com bot commented Mar 11, 2026

Review changes with SemanticDiff

Changed Files
File Status
  ethexe/consensus/src/validator/mock.rs  44% smaller
  ethexe/common/src/consensus.rs  25% smaller
  ethexe/consensus/src/validator/participant.rs  23% smaller
  ethexe/consensus/src/validator/mod.rs  13% smaller
  ethexe/consensus/src/validator/core.rs  2% smaller
  ethexe/service/src/tests/mod.rs  1% smaller
  ethexe/service/src/tests/utils/env.rs  1% smaller
  ethexe/cli/src/params/node.rs  1% smaller
  .ethexe.example.local.toml Unsupported file format
  .ethexe.example.toml Unsupported file format
  Cargo.lock Unsupported file format
  ethexe/consensus/Cargo.toml Unsupported file format
  ethexe/consensus/src/utils.rs  0% smaller
  ethexe/consensus/src/validator/batch/filler.rs  0% smaller
  ethexe/consensus/src/validator/batch/manager.rs  0% smaller
  ethexe/consensus/src/validator/batch/mod.rs  0% smaller
  ethexe/consensus/src/validator/batch/tests.rs  0% smaller
  ethexe/consensus/src/validator/batch/types.rs  0% smaller
  ethexe/consensus/src/validator/batch/utils.rs  0% smaller
  ethexe/consensus/src/validator/producer.rs  0% smaller
  ethexe/service/src/config.rs  0% smaller
  ethexe/service/src/lib.rs Unsupported file format

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed.

This pull request introduces a significant refactoring to the ethexe-consensus module by extracting the complex logic responsible for aggregating different types of commitments into a BatchCommitment into a new, dedicated BatchBuilder component. This change enhances modularity, improves code organization, and centralizes batch production configuration through the new BatchLimits struct, making the system more maintainable and easier to understand.

Highlights

  • Refactored Batch Commitment Aggregation: The logic for aggregating various commitments (chain, code, validators, rewards) into a BatchCommitment has been moved from ValidatorCore to a new dedicated BatchBuilder struct.
  • Introduced BatchBuilder and BatchLimits: A new batch.rs module was added, defining BatchBuilder to encapsulate batch commitment creation and BatchLimits to centralize configuration parameters like chain_deepness_threshold and commitment_delay_limit.
  • Delegation in ValidatorCore: ValidatorCore now includes an instance of BatchBuilder and delegates all batch commitment aggregation tasks to it, simplifying ValidatorCore's responsibilities.
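
The new types described above can be sketched as hypothetical Rust shapes. The struct and field names follow the summary (BatchLimits, BatchBuilder, chain_deepness_threshold, commitment_delay_limit, batch_size_limit); the field types and everything else here are assumptions, not the PR's actual code:

```rust
// Hypothetical shape of the configuration extracted by this PR.
// Field names mirror the summary; types are assumed.
#[derive(Debug, Clone)]
pub struct BatchLimits {
    pub chain_deepness_threshold: u32,
    pub commitment_delay_limit: u32,
    pub batch_size_limit: usize,
}

// The builder that ValidatorCore now delegates batch aggregation to.
pub struct BatchBuilder {
    limits: BatchLimits,
}

impl BatchBuilder {
    pub fn new(limits: BatchLimits) -> Self {
        Self { limits }
    }
}

fn main() {
    // Centralizing the limits means every aggregation path reads
    // one config object instead of scattered constants.
    let builder = BatchBuilder::new(BatchLimits {
        chain_deepness_threshold: 32,
        commitment_delay_limit: 8,
        batch_size_limit: 64 * 1024,
    });
    assert_eq!(builder.limits.batch_size_limit, 65536);
}
```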
Changelog
  • ethexe/consensus/src/validator/batch.rs
    • New file added, containing BatchBuilder and BatchLimits structs, and the logic for aggregating various commitments.
  • ethexe/consensus/src/validator/core.rs
    • Refactored ValidatorCore to remove batch commitment aggregation methods and delegate them to BatchBuilder.
    • Updated imports and ElectionRequest fields.
  • ethexe/consensus/src/validator/mock.rs
    • Updated mock context to initialize and use the new BatchBuilder.
  • ethexe/consensus/src/validator/mod.rs
    • Updated module structure and ValidatorService initialization to incorporate BatchBuilder and BatchLimits.
  • ethexe/consensus/src/validator/producer.rs
    • Modified Producer to use the BatchBuilder for aggregating batch commitments.
Activity
  • No specific activity has been recorded for this pull request yet.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request is a solid refactoring that extracts the logic for creating BatchCommitment into a new BatchBuilder. This improves the code's structure and modularity. My review includes a couple of suggestions to further improve the design by removing some redundant configuration fields and improving encapsulation.

@ecol-master ecol-master changed the title feat(ethexe-consensus): Limits for BatchCommitment production feat(ethexe-consensus): Limits for BatchCommitment abi encoding Mar 16, 2026
@ecol-master ecol-master added the D8-ethexe ethexe-related PR label Mar 18, 2026
@ecol-master ecol-master marked this pull request as ready for review March 19, 2026 08:13
@ecol-master ecol-master added the A0-pleasereview PR is ready to be reviewed by the team label Mar 19, 2026
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a new batch commitment manager with improved logic for creating and validating batch commitments, including size limits for ABI encoding. While the changes enhance modularity and maintainability, feedback includes a high-severity concern regarding the accuracy of ABI-encoded size estimation, which could lead to incorrect batch validation. Additionally, there are medium-severity suggestions to improve code readability by clarifying variable names and providing more specific error messages for better debugging.

///
/// Each `charge_*` method subtracts the estimated encoded size of the provided
/// value and returns `false` when adding it would exceed the maximum batch size.
#[derive(Debug, Clone)]

high

The BatchSizeCounter's approach to estimating the ABI-encoded size is described as "intentionally conservative but not exact." This could lead to either rejecting valid batches (inefficiency) or, more critically, accepting batches that exceed the actual on-chain size limit if the approximation is not consistently an overestimate. The current implementation sums individual component sizes but doesn't explicitly account for the overhead of the Gear::BatchCommitment tuple itself (e.g., dynamic offsets). This could lead to discrepancies between the estimated size and the actual on-chain size. Please ensure that the batch_size_limit is correctly configured to absorb this overhead, or consider refining the BatchSizeCounter to more accurately reflect the total ABI-encoded size of the final BatchCommitment structure, including tuple overhead.

For example, if Gear::BatchCommitment is defined as a tuple (ValidatorsCommitment, RewardsCommitment, ChainCommitment, CodeCommitment[]), its abi_encoded_size() would include not just the sum of its members' sizes but also the static part (offsets) for dynamic types within the tuple.
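
The charge-style budget described in the excerpt, combined with a head-word reserve for the tuple overhead the reviewer raises, can be sketched as follows. This is a minimal illustration under stated assumptions — the counter name comes from the excerpt, but the word costs, the reserved head size, and the charge_code_commitment method are made up for the example, not taken from the PR:

```rust
// ABI encoding works in 32-byte words; dynamic members of a tuple
// additionally cost offset words in the tuple's "head".
const WORD: usize = 32;

// A budget counter in the style described by the doc comment above:
// each charge subtracts an estimated encoded size and refuses the
// item when it would exceed the batch size limit.
struct BatchSizeCounter {
    remaining: usize,
}

impl BatchSizeCounter {
    fn new(batch_size_limit: usize) -> Self {
        // Reserve head words up front for the outer tuple's dynamic
        // offsets, so the estimate stays a consistent overestimate
        // (the count of 4 words is an assumption for illustration).
        Self {
            remaining: batch_size_limit.saturating_sub(4 * WORD),
        }
    }

    /// Try to reserve `encoded_size` bytes; returns false (leaving
    /// the budget untouched) if the item would not fit.
    fn charge(&mut self, encoded_size: usize) -> bool {
        if encoded_size > self.remaining {
            return false;
        }
        self.remaining -= encoded_size;
        true
    }

    fn charge_code_commitment(&mut self) -> bool {
        // Assumed cost: 5 words of payload plus 1 array-offset word.
        self.charge(6 * WORD)
    }
}

fn main() {
    // Limit of 16 words, minus 4 reserved head words, leaves 12;
    // each code commitment costs 6 words, so exactly two fit.
    let mut counter = BatchSizeCounter::new(16 * WORD);
    assert!(counter.charge_code_commitment());
    assert!(counter.charge_code_commitment());
    assert!(!counter.charge_code_commitment());
}
```

Reserving the head words at construction time is one way to keep the approximation "consistently an overestimate", which is the property the high-severity comment asks for.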

last_committed_announce,
head_announce_hash,
)?;
// TODO !!!: These variable are dirty, make clearly

medium

The comment TODO !!!: These variable are dirty, make clearly indicates that final_announce and max_deep variable names are not clear. Renaming them to be more descriptive would improve readability and maintainability of the code.

Member

@grishasobol grishasobol left a comment


Intermediate

.batch_size_limit
.unwrap_or(DEFAULT_BATCH_SIZE_LIMIT)
.min(MAX_BATCH_SIZE_LIMIT),
canonical_quarantine: self.canonical_quarantine.unwrap_or(CANONICAL_QUARANTINE),
Member


I think it's better to just panic when the limit is bigger than the max batch size. It's more UX-friendly, because the effective size is always exactly what you set.
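
The two strategies under discussion can be contrasted in a small sketch. The constant names mirror the quoted code; the values and function names are assumptions for illustration:

```rust
// Made-up values; only the constant names come from the quoted code.
const DEFAULT_BATCH_SIZE_LIMIT: usize = 100;
const MAX_BATCH_SIZE_LIMIT: usize = 128;

// Current approach (silent clamp): an out-of-range configured value
// is quietly reduced to the maximum.
fn clamped(limit: Option<usize>) -> usize {
    limit
        .unwrap_or(DEFAULT_BATCH_SIZE_LIMIT)
        .min(MAX_BATCH_SIZE_LIMIT)
}

// Suggested approach (fail fast): an out-of-range value aborts
// startup, so the effective limit is always exactly what the
// operator configured.
fn checked(limit: Option<usize>) -> usize {
    let limit = limit.unwrap_or(DEFAULT_BATCH_SIZE_LIMIT);
    assert!(
        limit <= MAX_BATCH_SIZE_LIMIT,
        "batch_size_limit {limit} exceeds MAX_BATCH_SIZE_LIMIT {MAX_BATCH_SIZE_LIMIT}"
    );
    limit
}

fn main() {
    assert_eq!(clamped(Some(200)), 128); // silently reduced
    assert_eq!(clamped(None), 100);      // default applies
    assert_eq!(checked(Some(120)), 120); // in range: kept as set
}
```

The clamp never fails but can surprise an operator whose configured value is silently ignored; the panic surfaces the misconfiguration at startup instead.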

let limits = BatchLimits {
chain_deepness_threshold: config.chain_deepness_threshold,
commitment_delay_limit: config.commitment_delay_limit,
batch_size_limit: config.batch_size_limit,
Member


Actually, it's not completely correct to use the same batch_size_limit for production and validation.

For example, the following situation can occur:

  1. The producer has a limit of 120.
  2. A participant has a limit of 100.
     The producer sends a batch of size 110, which is valid for the producer but not for the participant.

batch_size_limit is more about production than validation, so I think it's better to always use MAX_BATCH_SIZE_LIMIT for validation.
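
The scenario above can be sketched with made-up numbers (the function names and values are assumptions, not the PR's code): if validation reuses the node-local production limit, a batch one node produced can be rejected by another.

```rust
// Protocol-wide maximum; value made up for illustration.
const MAX_BATCH_SIZE_LIMIT: usize = 128;

// Production checks against the node-local configured limit.
fn fits_local_limit(batch_size: usize, local_limit: usize) -> bool {
    batch_size <= local_limit
}

// Suggested rule: validation checks against the shared protocol
// maximum, not the validator's own production limit.
fn validate(batch_size: usize) -> bool {
    batch_size <= MAX_BATCH_SIZE_LIMIT
}

fn main() {
    let batch = 110;
    // Producer with limit 120 produces the batch...
    assert!(fits_local_limit(batch, 120));
    // ...a participant reusing its local limit 100 would reject it...
    assert!(!fits_local_limit(batch, 100));
    // ...but validating against the shared maximum accepts it.
    assert!(validate(batch));
}
```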

Member Author


I intended it as a protocol constant shared by all validators. I think it's better to use this limit from the config for validation too, so we can track cases where validators' behavior diverges.

@ecol-master ecol-master added A1-inprogress Issue is in progress or PR draft is not ready to be reviewed and removed A0-pleasereview PR is ready to be reviewed by the team labels Mar 20, 2026


Development

Successfully merging this pull request may close these issues.

ethexe: set limits for batch commitment production

2 participants