feat: replace toml with cli config #732

Status: Merged

Commits (17):
- f3dd91b: CLI for node (Mirko-von-Leipzig)
- 1094e90: Replace RPC start command (Mirko-von-Leipzig)
- 70f8e5c: Replace block-producer start command (Mirko-von-Leipzig)
- 7b73139: Replace store start command (Mirko-von-Leipzig)
- 07993b6: Refactor and replace node start command (Mirko-von-Leipzig)
- 581febc: Update config with provers (Mirko-von-Leipzig)
- 0a6fd65: Replace init subcommand (Mirko-von-Leipzig)
- da17b8e: Re-add OTel support (Mirko-von-Leipzig)
- 9247b4f: store bootstrap (Mirko-von-Leipzig)
- 47c0291: cleanup (Mirko-von-Leipzig)
- 13a5ae0: lints (Mirko-von-Leipzig)
- 0a2fa25: rip old configs (Mirko-von-Leipzig)
- 3d4ab68: fixup ci issues (Mirko-von-Leipzig)
- ba7c7bf: fix http url/socket (Mirko-von-Leipzig)
- 2a38fa7: Address review comments (Mirko-von-Leipzig)
- db2fdde: Rename node to bundled and add bootstrapping (Mirko-von-Leipzig)
- 384debf: Merge next (Mirko-von-Leipzig)
New file: block-producer CLI command (+75 lines). Note: the original diff reused the store's error-message strings for the block-producer's own listener and serve steps; those contexts are corrected below.

```rust
use anyhow::Context;
use miden_node_block_producer::server::BlockProducer;
use miden_node_utils::grpc::UrlExt;
use url::Url;

use super::{
    ENV_BATCH_PROVER_URL, ENV_BLOCK_PRODUCER_URL, ENV_BLOCK_PROVER_URL, ENV_ENABLE_OTEL,
    ENV_STORE_URL,
};

#[derive(clap::Subcommand)]
pub enum BlockProducerCommand {
    /// Starts the block-producer component.
    Start {
        /// Url at which to serve the gRPC API.
        #[arg(env = ENV_BLOCK_PRODUCER_URL)]
        url: Url,

        /// The store's gRPC url.
        #[arg(long = "store.url", env = ENV_STORE_URL)]
        store_url: Url,

        /// The remote batch prover's gRPC url. If unset, will default to running a prover
        /// in-process which is expensive.
        #[arg(long = "batch-prover.url", env = ENV_BATCH_PROVER_URL)]
        batch_prover_url: Option<Url>,

        /// The remote block prover's gRPC url. If unset, will default to running a prover
        /// in-process which is expensive.
        #[arg(long = "block-prover.url", env = ENV_BLOCK_PROVER_URL)]
        block_prover_url: Option<Url>,

        /// Enables the exporting of traces for OpenTelemetry.
        ///
        /// This can be further configured using environment variables as defined in the official
        /// OpenTelemetry documentation. See our operator manual for further details.
        #[arg(long = "enable-otel", default_value_t = false, env = ENV_ENABLE_OTEL)]
        open_telemetry: bool,
    },
}

impl BlockProducerCommand {
    pub async fn handle(self) -> anyhow::Result<()> {
        let Self::Start {
            url,
            store_url,
            batch_prover_url,
            block_prover_url,
            // Note: open-telemetry is handled in main.
            open_telemetry: _,
        } = self;

        let store_url = store_url
            .to_socket()
            .context("Failed to extract socket address from store URL")?;

        let listener = url
            .to_socket()
            .context("Failed to extract socket address from block-producer URL")?;
        let listener = tokio::net::TcpListener::bind(listener)
            .await
            .context("Failed to bind to block-producer's gRPC URL")?;

        BlockProducer::init(listener, store_url, batch_prover_url, block_prover_url)
            .await
            .context("Loading block-producer")?
            .serve()
            .await
            .context("Serving block-producer")
    }

    pub fn is_open_telemetry_enabled(&self) -> bool {
        let Self::Start { open_telemetry, .. } = self;
        *open_telemetry
    }
}
```
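The `#[arg(long = ..., env = ...)]` attributes above let every option be supplied either as a CLI flag or via an environment variable, with the flag taking precedence. A minimal std-only sketch of that resolution order (the variable name `MIDEN_NODE_STORE_URL` is illustrative, not necessarily the constant the PR defines, and `resolve` is a hypothetical helper, not clap's API):

```rust
use std::env;

/// Hypothetical helper mirroring clap's flag-or-env fallback:
/// an explicit CLI value wins, otherwise the environment variable is consulted.
fn resolve(cli_value: Option<&str>, env_key: &str) -> Option<String> {
    cli_value.map(str::to_owned).or_else(|| env::var(env_key).ok())
}

fn main() {
    env::set_var("MIDEN_NODE_STORE_URL", "http://127.0.0.1:28943");

    // The CLI value takes precedence over the environment.
    assert_eq!(
        resolve(Some("http://localhost:9000"), "MIDEN_NODE_STORE_URL").as_deref(),
        Some("http://localhost:9000"),
    );
    // Without a CLI value, the environment variable is used.
    assert_eq!(
        resolve(None, "MIDEN_NODE_STORE_URL").as_deref(),
        Some("http://127.0.0.1:28943"),
    );
}
```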
New file: bundled node CLI command (+189 lines). Two small fixes are applied below: the `accounts_directory` comment is promoted to a doc comment (`///`) so it appears in the generated help text, and the garbled error context "Failed to to RPC gRPC socket" is corrected.

```rust
use std::{collections::HashMap, path::PathBuf};

use anyhow::Context;
use miden_node_block_producer::server::BlockProducer;
use miden_node_rpc::server::Rpc;
use miden_node_store::server::Store;
use miden_node_utils::grpc::UrlExt;
use tokio::{net::TcpListener, task::JoinSet};
use url::Url;

use super::{
    ENV_BATCH_PROVER_URL, ENV_BLOCK_PROVER_URL, ENV_DATA_DIRECTORY, ENV_ENABLE_OTEL, ENV_RPC_URL,
};

#[derive(clap::Subcommand)]
#[expect(clippy::large_enum_variant, reason = "This is a single use enum")]
pub enum BundledCommand {
    /// Bootstraps the blockchain database with the genesis block.
    ///
    /// This populates the genesis block's data with the accounts and data listed in the
    /// configuration file.
    ///
    /// Each generated genesis account's data is also written to disk. This includes the private
    /// key which can be used to create transactions for these accounts.
    ///
    /// See also: `store dump-genesis`
    Bootstrap {
        /// Genesis configuration file.
        ///
        /// If not provided the default configuration is used.
        #[arg(long, value_name = "FILE")]
        config: Option<PathBuf>,
        /// Directory in which to store the database and raw block data.
        #[arg(long, env = ENV_DATA_DIRECTORY, value_name = "DIR")]
        data_directory: PathBuf,
        /// Directory to write the account data to.
        #[arg(long, value_name = "DIR")]
        accounts_directory: PathBuf,
    },

    /// Runs all three node components in the same process.
    ///
    /// The internal gRPC endpoints for the store and block-producer will each be assigned a random
    /// open port on localhost (127.0.0.1:0).
    Start {
        /// Url at which to serve the RPC component's gRPC API.
        #[arg(long = "rpc.url", env = ENV_RPC_URL, value_name = "URL")]
        rpc_url: Url,

        /// Directory in which the Store component should store the database and raw block data.
        #[arg(long = "data-directory", env = ENV_DATA_DIRECTORY, value_name = "DIR")]
        data_directory: PathBuf,

        /// The remote batch prover's gRPC url. If unset, will default to running a prover
        /// in-process which is expensive.
        #[arg(long = "batch-prover.url", env = ENV_BATCH_PROVER_URL, value_name = "URL")]
        batch_prover_url: Option<Url>,

        /// The remote block prover's gRPC url. If unset, will default to running a prover
        /// in-process which is expensive.
        #[arg(long = "block-prover.url", env = ENV_BLOCK_PROVER_URL, value_name = "URL")]
        block_prover_url: Option<Url>,

        /// Enables the exporting of traces for OpenTelemetry.
        ///
        /// This can be further configured using environment variables as defined in the official
        /// OpenTelemetry documentation. See our operator manual for further details.
        #[arg(long = "enable-otel", default_value_t = false, env = ENV_ENABLE_OTEL, value_name = "bool")]
        open_telemetry: bool,
    },
}

impl BundledCommand {
    pub async fn handle(self) -> anyhow::Result<()> {
        match self {
            BundledCommand::Bootstrap {
                config,
                data_directory,
                accounts_directory,
            } => {
                // Currently the bundled bootstrap is identical to the store's bootstrap.
                crate::commands::store::StoreCommand::Bootstrap {
                    config,
                    data_directory,
                    accounts_directory,
                }
                .handle()
                .await
                .context("failed to bootstrap the store component")
            },
            BundledCommand::Start {
                rpc_url,
                data_directory,
                batch_prover_url,
                block_prover_url,
                // Note: open-telemetry is handled in main.
                open_telemetry: _,
            } => Self::start(rpc_url, data_directory, batch_prover_url, block_prover_url).await,
        }
    }

    async fn start(
        rpc_url: Url,
        data_directory: PathBuf,
        batch_prover_url: Option<Url>,
        block_prover_url: Option<Url>,
    ) -> anyhow::Result<()> {
        // Start listening on all gRPC urls so that inter-component connections can be created
        // before each component is fully started up.
        //
        // This is required because `tonic` does not handle retries nor reconnections and our
        // services expect to be able to connect on startup.
        let grpc_rpc =
            rpc_url.to_socket().context("Failed to extract socket address from RPC URL")?;
        let grpc_rpc = TcpListener::bind(grpc_rpc)
            .await
            .context("Failed to bind to RPC gRPC endpoint")?;
        let grpc_store = TcpListener::bind("127.0.0.1:0")
            .await
            .context("Failed to bind to store gRPC endpoint")?;
        let grpc_block_producer = TcpListener::bind("127.0.0.1:0")
            .await
            .context("Failed to bind to block-producer gRPC endpoint")?;

        let store_address =
            grpc_store.local_addr().context("Failed to retrieve the store's gRPC address")?;
        let block_producer_address = grpc_block_producer
            .local_addr()
            .context("Failed to retrieve the block-producer's gRPC address")?;

        let mut join_set = JoinSet::new();

        // Start store. The store endpoint is available after loading completes.
        let store = Store::init(grpc_store, data_directory).await.context("Loading store")?;
        let store_id =
            join_set.spawn(async move { store.serve().await.context("Serving store") }).id();

        // Start block-producer. The block-producer's endpoint is available after loading completes.
        let block_producer = BlockProducer::init(
            grpc_block_producer,
            store_address,
            batch_prover_url,
            block_prover_url,
        )
        .await
        .context("Loading block-producer")?;
        let block_producer_id = join_set
            .spawn(async move { block_producer.serve().await.context("Serving block-producer") })
            .id();

        // Start RPC component.
        let rpc = Rpc::init(grpc_rpc, store_address, block_producer_address)
            .await
            .context("Loading RPC")?;
        let rpc_id = join_set.spawn(async move { rpc.serve().await.context("Serving RPC") }).id();

        // Lookup table so we can identify the failed component.
        let component_ids = HashMap::from([
            (store_id, "store"),
            (block_producer_id, "block-producer"),
            (rpc_id, "rpc"),
        ]);

        // SAFETY: The joinset is definitely not empty.
        let component_result = join_set.join_next_with_id().await.unwrap();

        // We expect components to run indefinitely, so we treat any return as fatal.
        //
        // Map all outcomes to an error, and provide component context.
        let (id, err) = match component_result {
            Ok((id, Ok(_))) => (id, Err(anyhow::anyhow!("Component completed unexpectedly"))),
            Ok((id, Err(err))) => (id, Err(err)),
            Err(join_err) => (join_err.id(), Err(join_err).context("Joining component task")),
        };
        let component = component_ids.get(&id).unwrap_or(&"unknown");

        // We could abort and gracefully shutdown the other components, but since we're crashing
        // the node there is no point.

        err.context(format!("Component {component} failed"))
    }

    pub fn is_open_telemetry_enabled(&self) -> bool {
        if let Self::Start { open_telemetry, .. } = self {
            *open_telemetry
        } else {
            false
        }
    }
}
```
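The doc comment on `Start` notes that the internal store and block-producer endpoints bind to `127.0.0.1:0`, which asks the OS for any free port; the actual address is then recovered via `local_addr` and handed to the other components. A small std-only sketch of that pattern (outside tokio, so it is not the PR's exact code):

```rust
use std::net::TcpListener;

fn main() -> std::io::Result<()> {
    // Port 0 means "assign me any free port", exactly the trick the bundled
    // node uses for its internal gRPC endpoints.
    let listener = TcpListener::bind("127.0.0.1:0")?;

    // The concrete port is only known after the bind succeeds; `local_addr`
    // reveals it so peers can be told where to connect.
    let addr = listener.local_addr()?;
    assert_eq!(addr.ip().to_string(), "127.0.0.1");
    assert_ne!(addr.port(), 0);
    println!("internal endpoint listening on {addr}");
    Ok(())
}
```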
One file was deleted in this diff; its contents are not rendered.
Review discussion:

> Considering your comment, is there anything to document (here and elsewhere) about the https support?

Reply:

> I think I can solve the issue by making component startup more robust. Right now I wouldn't even know what exactly would work beyond (sometimes) discarding the protocol.
>
> Once startup is robust it should be able to handle https as well, even if we ourselves never use it for performance/simplicity reasons.
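The "more robust startup" idea from this thread amounts to retrying connections instead of assuming the peer is already up (which the code currently requires because `tonic` does not retry). A hypothetical std-only sketch of that approach; `connect_with_retry` and its parameters are invented for illustration and do not exist in the PR:

```rust
use std::net::{TcpListener, TcpStream};
use std::thread;
use std::time::Duration;

/// Hypothetical helper: retry a TCP connection with a fixed backoff.
/// `attempts` must be at least 1.
fn connect_with_retry(addr: &str, attempts: u32, backoff: Duration) -> std::io::Result<TcpStream> {
    let mut last_err = None;
    for _ in 0..attempts {
        match TcpStream::connect(addr) {
            Ok(stream) => return Ok(stream),
            Err(err) => {
                // Remember the failure and wait before the next attempt.
                last_err = Some(err);
                thread::sleep(backoff);
            },
        }
    }
    Err(last_err.unwrap())
}

fn main() -> std::io::Result<()> {
    // Stand-in for a component that is already listening.
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?.to_string();

    let stream = connect_with_retry(&addr, 5, Duration::from_millis(50))?;
    assert_eq!(stream.peer_addr()?.to_string(), addr);
    println!("connected to {addr}");
    Ok(())
}
```

In the real node the same loop would wrap the gRPC client connection, letting components come up in any order.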