---
sidebar_position: 1
description: Errors & Resolutions
---
All operators should first try restarting their nodes and check that they are on the latest stable version before attempting any other configuration change. You can restart and update with the following commands:
```shell
docker compose down
git pull
docker compose up
```

You can check your logs using:

```shell
docker compose logs
```

`cd` to the directory where your private keys are located (ex: `cd /path/to/charon/enr/private/key`), then run:

```shell
docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v1.8.1 enr
```

This prints the ENR on your screen.
For now, ENR rotation/replacement is not supported; it will be supported in a future release. Therefore, it is advised to always keep a backup of your `charon-enr-private-key` in a secure location (ex: cloud storage, USB flash drive, etc.).
The `charon-enr-private-key` is generated inside a hidden folder, `.charon`. To view it, run `ls -al` in your terminal. This step may be a bit different on Windows. If you are on macOS, press `Cmd + Shift + .` to view the `.charon` folder in the Finder application.
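For example, on Linux or macOS (the listing below is illustrative):

```shell
ls -al
# Expect an entry for the hidden folder, e.g.:
# drwxr-xr-x  3 user user 4096 Jan  1 12:00 .charon
```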
This means that Lighthouse is still syncing, which will throw a lot of errors down the line. Wait for the sync to complete before moving further.
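To check sync progress, you can query the standard beacon node API syncing endpoint (port 5052 is Lighthouse's default HTTP API port; adjust for your setup):

```shell
# "is_syncing": false in the response means the node is fully synced.
curl -s http://localhost:5052/eth/v1/node/syncing
```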
This indicates there is something wrong with your Lighthouse beacon node. It might be that the request buffer is full, as your node never starts consensus because it never receives the duties.
This could be linked to an internet connection that is too slow, or to relying on a slow third-party service such as Infura.
This is likely due to Lighthouse not being done syncing; wait and try again once synced. It can also be linked to a Teku keystore issue.
Either your server's clock is off, or you are talking to a remote beacon client that is very slow (this is why we advise against using services like Infura).
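On systemd-based Linux distributions, you can check whether your system clock is synchronized with:

```shell
# Look for "System clock synchronized: yes" in the output.
timedatectl status
```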
A good quality beacon node API is critical to validator performance, so it is always advised to run your own beacon node to ensure low latency. Using third-party services like Infura's beacon node API has significant disadvantages, since the quality is often low: requests often return 500s or time out, which results in lots of warnings, errors, and failed duties. Running a local beacon node is always preferred.
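As a quick latency sanity check, you can time a simple beacon API request (endpoint and port are illustrative):

```shell
# Prints the total request time in seconds; a local node should respond in milliseconds.
curl -s -o /dev/null -w '%{time_total}\n' http://localhost:5052/eth/v1/node/version
```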
The required number of operators defined in your cluster-lock file is probably not online to sign successfully. Make sure all operators are running the latest version of Charon. To check if some peers are not online:

```shell
docker logs charon-distributed-validator-node-charon-1 2>&1 | grep 'absent'
```
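To check which version a Charon container reports (image tag illustrative):

```shell
docker run --rm obolnetwork/charon:v1.8.1 version
```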
Make sure you have successfully run a DKG before running the node. The key shares should be created and placed in the right directory during the ceremony. Also, make sure you are working in the right directory: `charon-distributed-validator-node`.
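You can verify that the key shares exist after the ceremony (the path below follows the default quickstart layout and may vary for your setup):

```shell
# Each validator should have a keystore-*.json plus a matching password file.
ls charon-distributed-validator-node/.charon/validator_keys/
```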
Wait for the Teku and Lighthouse syncs to complete.
`RESERVATION_REFUSED` is returned by the libp2p relay when some maximum limit has been reached, most often the "maximum reservations per IP/peer" limit. This happens when your Charon node is restarting or stuck in an error loop and constantly attempting to create new relay reservations, hitting the maximum.

To fix this error, stop your Charon node for 30 minutes before restarting it. This should give the relay enough time to reset your IP/peer limits and should then allow new reservations. It could also be due to the relay being overloaded in general, i.e. reaching a server-wide "maximum connections" limit. This is an issue with relay scalability, and we are working on a long-term fix for it.
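For example, with the docker compose setup:

```shell
docker compose down
# Wait ~30 minutes for the relay to release your IP/peer reservation limits,
# then start the node again.
docker compose up
```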
`Error opening relay circuit NO_RESERVATION (204)` indicates the peer isn't connected to the relay, so the Charon client cannot connect to the peer via the relay. That might be because the peer is offline or is configured to connect to a different relay.

To fix this error, ensure the peer is online and configured with the exact same `--p2p-relays` flag.
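For example (relay URL illustrative; every peer in the cluster must use the same value):

```shell
charon run --p2p-relays=https://0.relay.obol.tech
```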
`msgFetcher` indicates a duty failed in the fetcher component when it failed to fetch the required data from the beacon node API. This indicates a problem with the upstream beacon node.
`msgFetcherAggregatorNoAttData` indicates an attestation aggregation duty failed in the fetcher component since it couldn't fetch the prerequisite attestation data. This indicates the associated attestation duty failed to obtain a cluster-agreed-upon value.
`msgFetcherAggregatorZeroPrepares` indicates an attestation aggregation duty failed in the fetcher component since it couldn't fetch the prerequisite aggregated v2 committee subscription. This indicates the associated prepare aggregation duty failed because no partial v2 committee subscriptions were submitted by the cluster validator clients.
`msgFetcherAggregatorFailedPrepare` indicates an attestation aggregation duty failed in the fetcher component since it couldn't fetch the prerequisite aggregated v2 committee subscription. This indicates the associated prepare aggregation duty failed.
`msgFetcherProposerFewRandaos` indicates a block proposer duty failed in the fetcher component since it couldn't fetch the prerequisite aggregated RANDAO. This indicates the associated randao duty failed due to insufficient partial randao signatures submitted by the cluster validator clients.

`msgFetcherProposerZeroRandaos` indicates a block proposer duty failed in the fetcher component since it couldn't fetch the prerequisite aggregated RANDAO. This indicates the associated randao duty failed because no partial randao signatures were submitted by the cluster validator clients.
`msgFetcherProposerFailedRandaos` indicates a block proposer duty failed in the fetcher component since it couldn't fetch the prerequisite aggregated RANDAO. This indicates the associated randao duty failed.
`msgConsensus` indicates a duty failed in the consensus component. This could indicate that insufficient honest peers participated in consensus, or that there are p2p network connection problems.
`msgValidatorAPI` indicates that partial signatures were never submitted by the local validator client. This could indicate that the local validator client is offline, has connection problems with Charon, or has some other problem. See the validator client logs for more details.
`msgParSigDBInternal` indicates a bug in the partial signature database, as this is unexpected.
`msgParSigEx` indicates that no partial signature for the duty was received from any peer. This indicates that all peers are offline or that there are p2p network connection problems.

`msgParSigDBThreshold` indicates that insufficient partial signatures for the duty were received from peers. This indicates problems with peers or with the p2p network connection.
`msgSigAgg` indicates that BLS threshold aggregation of sufficient partial signatures failed. This indicates inconsistent signed data, and points to a bug in Charon, as it is unexpected.
When you turn on the `--private-key-file-lock` option in Charon, it checks for a special file called the private key lock file. This file has the same name as the ENR private key file, but with a `.lock` extension.

If the private key lock file exists and is not older than 5 seconds, Charon won't run: it doesn't allow running multiple Charon instances with the same ENR private key. If the private key lock file has a timestamp older than 5 seconds, Charon will replace it and continue with its work. If you're sure that no other Charon instances are running, you can delete the private key lock file.
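For example, with the default file layout (path illustrative):

```shell
# Only do this if you are certain no other Charon instance is using this key.
rm .charon/charon-enr-private-key.lock
```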
`Validator api 5xx response: mismatching validator client key share index, Mth key share submitted to Nth charon peer`

The issue revolves around an invalid setup or deployment, where the validators' private key shares don't match the ENR private key. There may have been a mix-up during deployment, leading to a mismatching validator client key share index.

For example: imagine node N is Alice and node M is Bob. The error would read: `mismatching validator client key share index, Bob's key share submitted to Alice's charon node`.

Bob's private key share(s) are imported to a VC that is connected to Alice's Charon node. This is an invalid setup/deployment: Alice's Charon node should only be connected to Alice's VC.

Check the partial public key shares of each node inside `cluster-lock.json` and verify that each matches the public key inside `node(num)/validator_keys/keystore-0.json`.
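A minimal sketch of this check using `jq`, assuming the lock file exposes the shares under `distributed_validators[].public_shares` (node and validator indices are illustrative; the `0x` prefix may differ between the two files):

```shell
# Public key shares of the first validator, ordered by node index:
jq -r '.distributed_validators[0].public_shares[]' cluster-lock.json

# Public key held by node 0's validator client for that validator:
jq -r '.pubkey' node0/validator_keys/keystore-0.json
```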
Sometimes, the Grafana dashboard doesn't load any data the first time around. You can solve this by following the steps below:

- Click the wheel icon > Datasources.
- Click `prometheus`.
- Change the "Access" field from `Server (default)` to `Browser`. Press "Save & Test". It should fail.
- Change the "Access" field back to `Server (default)` and press "Save & Test". You should be presented with a green success icon saying "Data source is working" and you can return to the dashboard page.
Can be linked to a Teku keystore issue.
You can ignore this error unless you have been contacted by the Obol Team
with monitoring credentials. In that case, follow [Monitoring your Node](../../run/running/monitoring.md) in our guides. It does not affect cluster performance or prevent the cluster from running.
Permission denied errors can come up in a variety of manners, particularly on Linux and WSL for Windows systems. In the interest of security, the Charon docker image runs as a non-root user, and this user often does not have the permissions to write in the directory you have checked out the code to. This can generally be fixed with some of the following:

- Running docker commands with `sudo`, if you haven't set up docker to be run as a non-root user.
- Changing the permissions of the `.charon` folder with the commands: `mkdir .charon` (if it doesn't already exist); `sudo chmod -R 666 .charon`.
It's because both Nethermind and Lighthouse start syncing, so there are connectivity issues among the containers. Simply let the containers run for a while; you won't observe frequent errors once Nethermind finishes syncing. You can also add a second beacon node endpoint, for example from a service like Infura, by adding a comma-separated API URL to the end of `CHARON_BEACON_NODE_ENDPOINTS` in the docker-compose.yml.
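For example (the second URL is an illustrative placeholder):

```shell
CHARON_BEACON_NODE_ENDPOINTS=http://lighthouse:5052,https://your-backup-endpoint.example.com
```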
If you get the following error when calling `docker compose up`:

`Error response from daemon: error looking up logging plugin loki: plugin "loki" not found`

Then it probably means that the Loki docker driver isn't installed. In that case, run the following command to install loki:

```shell
docker plugin install grafana/loki-docker-driver:latest --alias loki --grant-all-permissions
```
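You can verify the plugin is installed and enabled with:

```shell
# The loki plugin should be listed with ENABLED set to true.
docker plugin ls
```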
Replace `replace.with.public.ip.or.hostname` in the `relay/docker-compose.yml` with your real public IP or DNS hostname.
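For example (IP illustrative):

```shell
# Substitute your actual public IP or DNS hostname.
sed -i 's|replace.with.public.ip.or.hostname|203.0.113.10|' relay/docker-compose.yml
```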
The relay you are trying to connect to your peers via is offline or unreachable.