feat(ethexe-consensus): Limits for BatchCommitment abi encoding #5214

ecol-master wants to merge 30 commits into master from 5205-ethexe-set-limits-for-batch-commitment-production

Conversation
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request introduces a significant refactoring to the …

Highlights

Changelog

Activity
Code Review
This pull request is a solid refactoring that extracts the logic for creating BatchCommitment into a new BatchBuilder. This improves the code's structure and modularity. My review includes a couple of suggestions to further improve the design by removing some redundant configuration fields and improving encapsulation.
Title changed: feat(ethexe-consensus): Limits for BatchCommitment production → feat(ethexe-consensus): Limits for BatchCommitment abi encoding
Code Review
This pull request introduces a new batch commitment manager with improved logic for creating and validating batch commitments, including size limits for ABI encoding. While the changes enhance modularity and maintainability, feedback includes a high-severity concern regarding the accuracy of ABI-encoded size estimation, which could lead to incorrect batch validation. Additionally, there are medium-severity suggestions to improve code readability by clarifying variable names and providing more specific error messages for better debugging.
///
/// Each `charge_*` method subtracts the estimated encoded size of the provided
/// value and returns `false` when adding it would exceed the maximum batch size.
#[derive(Debug, Clone)]
The BatchSizeCounter's approach to estimating the ABI-encoded size is described as "intentionally conservative but not exact." This could lead to either rejecting valid batches (inefficiency) or, more critically, accepting batches that exceed the actual on-chain size limit if the approximation is not consistently an overestimate. The current implementation sums individual component sizes but doesn't explicitly account for the overhead of the Gear::BatchCommitment tuple itself (e.g., dynamic offsets). This could lead to discrepancies between the estimated size and the actual on-chain size. Please ensure that the batch_size_limit is correctly configured to absorb this overhead, or consider refining the BatchSizeCounter to more accurately reflect the total ABI-encoded size of the final BatchCommitment structure, including tuple overhead.
For example, if Gear::BatchCommitment is defined as a tuple (ValidatorsCommitment, RewardsCommitment, ChainCommitment, CodeCommitment[]), its abi_encoded_size() would include not just the sum of its members' sizes but also the static part (offsets) for dynamic types within the tuple.
    last_committed_announce,
    head_announce_hash,
)?;
// TODO: these variables are unclear; rename them for clarity
Merge branch 'master' of https://github.com/gear-tech/gear into 5205-ethexe-set-limits-for-batch-commitment-production
    .batch_size_limit
    .unwrap_or(DEFAULT_BATCH_SIZE_LIMIT)
    .min(MAX_BATCH_SIZE_LIMIT),
canonical_quarantine: self.canonical_quarantine.unwrap_or(CANONICAL_QUARANTINE),
I think it is better to panic when the configured limit exceeds the maximum batch size. That is more user-friendly, because the effective size is then always exactly what you set.
let limits = BatchLimits {
    chain_deepness_threshold: config.chain_deepness_threshold,
    commitment_delay_limit: config.commitment_delay_limit,
    batch_size_limit: config.batch_size_limit,
Actually, it is not entirely correct to use the same batch_size_limit for both production and validation.
For example, consider this situation:
- the producer has a limit of 120,
- a participant has a limit of 100.
The producer then sends a batch of size 110, which is valid for the producer but not for the participant.
batch_size_limit is about production rather than validation, so I think it is better to always use MAX_BATCH_SIZE_LIMIT for validation.
I considered a protocol constant shared by all validators, but I think it is better to use the limit from the config for validation too, so we can detect cases where validators' behavior is not the same.
Resolves #5205, #4791
@grishasobol @breathx