
v2.1.28 #688

Merged

joellidin merged 3 commits into main from dev on Jan 21, 2026

Conversation

@joellidin (Collaborator) commented Jan 21, 2026

  • (neurons) Add burn-only validator for transition
  • Bump run version

Description

Related Issue(s)

  • Closes #[issue number]

Type of Change

  • Feature (adding new functionality)
  • Fix (resolving a bug or issue)
  • Docs (documentation updates)
  • Refactor (code changes that don't affect functionality)
  • Maintenance (dependency updates or other maintenance)
  • Tests (adding or improving tests)
  • Breaking change (fix or feature with incompatible API changes)
  • Other: _____

Branch Naming

  • My branch follows the project's naming convention (e.g., feature/add-new-capability)

Commit Messages

  • My commits are small, atomic, and have proper commit messages
  • Commit messages are in imperative mood with a capitalized summary under 50 chars

Code Quality

  • I've performed a self-review of my code
  • I've added appropriate docstrings following the project's conventions
  • I've added proper logging where necessary (without trailing periods)
  • I've applied linting and formatting with Ruff
  • My code generates no new warnings

Testing

  • I've added tests for new functionality or bug fixes
  • All tests pass locally with my changes
  • Test coverage has not decreased

Documentation

  • I've updated documentation to reflect my changes
  • I've updated comments in hard-to-understand areas

If this is a breaking change

Screenshots/Examples

Additional Notes

Summary by CodeRabbit

New Features

  • Introduced a dedicated burn-validator mode for burn-only validation: it periodically assigns weights to a designated burn address, monitors operation continuously, recovers from errors, and supports graceful shutdown.

Chores

  • Version bumped to 2.1.28.


Implement temporary burn validator for 100% burn period until Crusades
launches. Replaces normal validator behavior during transition phase.

- Add neurons/burn.py to set 100% weight to UID 1 every 360 blocks
- Update entrypoint.sh to run burn.py instead of full validator
- Remove torchrun and wandb dependencies for burn mode
- Add periodic status logging every 2 minutes
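
The weight cadence described above ("100% weight to UID 1 every 360 blocks") can be pictured as pure timing logic. The sketch below is a hypothetical illustration, not the actual neurons/burn.py: the helper names should_set_weights and burn_weights are invented, and only the constants BURN_UID and WEIGHT_INTERVAL come from this PR.

```python
# Hypothetical sketch of the burn-validator cadence; the real
# neurons/burn.py wires this logic into bittensor's subtensor/wallet
# APIs inside a main() loop with ~12-second iterations.
BURN_UID = 1            # UID that receives 100% of the weight
WEIGHT_INTERVAL = 360   # blocks between weight submissions

def should_set_weights(current_block: int, last_set_block: int) -> bool:
    """True once WEIGHT_INTERVAL blocks have elapsed since the last set."""
    return current_block - last_set_block >= WEIGHT_INTERVAL

def burn_weights(num_uids: int) -> list[float]:
    """Weight vector placing all weight on BURN_UID."""
    return [1.0 if uid == BURN_UID else 0.0 for uid in range(num_uids)]
```

In the real loop, each iteration would poll the chain for the current block, call something like should_set_weights, and submit the burn weight vector on-chain when it returns True.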
@coderabbitai

coderabbitai bot commented Jan 21, 2026

Walkthrough

This PR introduces a new burn-only validator module (neurons/burn.py) that periodically assigns 100% weight to a designated UID, modifies the entrypoint script to launch this validator instead of the standard validator, and bumps the package version from 2.1.27 to 2.1.28.

Changes

Cohort / File(s) — Summary

  • New burn validator module — neurons/burn.py: Introduces a new validator script with constants BURN_UID (1) and WEIGHT_INTERVAL (360 blocks), get_config() for CLI parsing, and main() implementing an infinite loop that sets weights to the designated UID every 360 blocks, with error handling and 12-second loop iterations.
  • Entrypoint configuration — scripts/entrypoint.sh: Replaces the validator branch's startup command, torchrun validator.py with its many flags, with python burn.py --subtensor.network; removes the logging, CUDA, wandb, and debug arguments and updates the log message accordingly.
  • Version bump — src/tplr/__init__.py: Updates the __version__ constant from "2.1.27" to "2.1.28".
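
The entrypoint change can be pictured as a NODE_TYPE dispatch like the following. This is a hedged sketch, not the actual scripts/entrypoint.sh: the exec line is commented out so the sketch stays inert, and any flags passed to burn.py beyond --subtensor.network are not shown.

```shell
# Hypothetical sketch of the validator branch in scripts/entrypoint.sh
# after this change; the real script reads NODE_TYPE from the
# environment and execs python, replacing the shell process.
NODE_TYPE="validator"   # in the real script: taken from the environment

if [ "$NODE_TYPE" = "validator" ]; then
    START_MSG="Starting burn validator..."
    echo "$START_MSG"
    # exec python neurons/burn.py --subtensor.network "$SUBTENSOR_NETWORK"
fi
```

Using exec here matters: it makes burn.py PID 1 in the container so it receives signals directly, which is what enables the graceful shutdown mentioned in the summary.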

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~22 minutes

Possibly related PRs

  • feat/burn #687: Introduces identical burn validator module and entrypoint changes in parallel.
  • v2.1.23 #678: Performs related version bump changes to the same __version__ variable.

Poem

🐰 A burn validator hops into place,

Setting weights with UID grace,

Every 360 blocks, steady and bright,

Version 2.1.28—perfectly right! ✨

Loop and sleep, the rhythm's in pace! 💫

🚥 Pre-merge checks | ✅ 1 passed | ❌ 2 failed

❌ Failed checks (1 warning, 1 inconclusive)

  • Description check — ⚠️ Warning: The PR description contains only the template header with two bullet points and lacks substantive content; all template sections remain unchecked and unfilled (no description, no related issues, no change type selected, no quality checks marked). Resolution: fill in the description with what was changed and why, select at least one change type, and complete key sections such as testing and code-quality verification.
  • Title check — ❓ Inconclusive: The title "v2.1.28" accurately reflects the package version bump, but it is overly generic and does not convey the main functional changes (new burn validator and entrypoint modifications). Resolution: consider a more descriptive title, such as "Add burn-only validator and bump version to v2.1.28".

✅ Passed checks (1 passed)

  • Docstring Coverage — ✅ Passed: No functions found in the changed files to evaluate; docstring coverage check skipped.



@codecov

codecov bot commented Jan 21, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.

Impacted file tree graph

@@           Coverage Diff           @@
##             main     #688   +/-   ##
=======================================
  Coverage   57.69%   57.69%           
=======================================
  Files          27       27           
  Lines        4990     4990           
=======================================
  Hits         2879     2879           
  Misses       2111     2111           
Files with missing lines Coverage Δ
src/tplr/__init__.py 100.00% <100.00%> (ø)

@joellidin joellidin merged commit f1c8c15 into main Jan 21, 2026
6 of 8 checks passed

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
scripts/entrypoint.sh (1)

8-31: Gate CUDA/WANDB checks by NODE_TYPE to avoid blocking burn validator.
With burn-only validator, unconditional CUDA and WANDB checks (and WANDB_API_KEY requirement) can prevent validator startup in CPU-only or no-WANDB environments. Move these checks into the miner/evaluator branches (and only require WANDB for miner).

🔧 Suggested refactor
-# Check required environment variables
-for var in WALLET_NAME WALLET_HOTKEY NODE_TYPE WANDB_API_KEY NETUID; do
+# Check required environment variables
+for var in WALLET_NAME WALLET_HOTKEY NODE_TYPE NETUID; do
     if [ -z "${!var}" ]; then
         echo "Error: $var environment variable is required"
         exit 1
     fi
 done
@@
-# Check CUDA availability
-if ! python3 -c "import torch; assert torch.cuda.is_available(), 'CUDA not available'"; then
-    echo "Error: CUDA is not available"
-    echo "Available GPUs:"
-    (set +e; nvidia-smi -L 2>/dev/null) || echo "nvidia-smi not available"
-    exit 1
-fi
-
-# Login to wandb non-interactively
-wandb login ${WANDB_API_KEY} --relogin
+# NOTE: CUDA/WANDB only needed for miner/evaluator
 if [ "$NODE_TYPE" = "miner" ]; then
+    if [ -z "${WANDB_API_KEY}" ]; then
+        echo "Error: WANDB_API_KEY environment variable is required"
+        exit 1
+    fi
+    if ! python3 -c "import torch; assert torch.cuda.is_available(), 'CUDA not available'"; then
+        echo "Error: CUDA is not available"
+        echo "Available GPUs:"
+        (set +e; nvidia-smi -L 2>/dev/null) || echo "nvidia-smi not available"
+        exit 1
+    fi
+    wandb login ${WANDB_API_KEY} --relogin
     echo "Starting miner with torchrun..."
@@
 elif [ "$NODE_TYPE" = "validator" ]; then
     echo "Starting burn validator..."
     exec python neurons/burn.py \
@@
 elif [ "$NODE_TYPE" = "evaluator" ]; then
+    if ! python3 -c "import torch; assert torch.cuda.is_available(), 'CUDA not available'"; then
+        echo "Error: CUDA is not available"
+        echo "Available GPUs:"
+        (set +e; nvidia-smi -L 2>/dev/null) || echo "nvidia-smi not available"
+        exit 1
+    fi
     # Count the number of visible GPUs for evaluator

Also applies to: 22-31, 63-68

🤖 Fix all issues with AI agents
In `@neurons/burn.py`:
- Around line 32-34: metagraph.hotkeys.index(...) can raise ValueError when
wallet.hotkey.ss58_address is not registered; before calling index, check
membership (e.g., if wallet.hotkey.ss58_address in metagraph.hotkeys) and if
missing log a clear error via tplr.logger.error with context (include the hotkey
and config.netuid) and exit/return cleanly; alternatively wrap the
metagraph.hotkeys.index call in a try/except ValueError that logs the same clear
message and exits to avoid an unhandled exception.

Comment on lines +32 to +34
metagraph = subtensor.metagraph(config.netuid)
my_uid = metagraph.hotkeys.index(wallet.hotkey.ss58_address)
tplr.logger.info(f"Running burn on UID {my_uid}, netuid {config.netuid}")

⚠️ Potential issue | 🟡 Minor

Gracefully handle unregistered hotkey before indexing.
metagraph.hotkeys.index(...) throws ValueError if the hotkey isn’t registered; handle this explicitly to give a clear error and exit cleanly.

✅ Suggested guard
 import argparse
 import time
+import sys
@@
-    metagraph = subtensor.metagraph(config.netuid)
-    my_uid = metagraph.hotkeys.index(wallet.hotkey.ss58_address)
+    metagraph = subtensor.metagraph(config.netuid)
+    if wallet.hotkey.ss58_address not in metagraph.hotkeys:
+        tplr.logger.error(
+            f"Hotkey {wallet.hotkey.ss58_address} is not registered on netuid {config.netuid}"
+        )
+        sys.exit(1)
+    my_uid = metagraph.hotkeys.index(wallet.hotkey.ss58_address)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested replacement:

    metagraph = subtensor.metagraph(config.netuid)
    if wallet.hotkey.ss58_address not in metagraph.hotkeys:
        tplr.logger.error(
            f"Hotkey {wallet.hotkey.ss58_address} is not registered on netuid {config.netuid}"
        )
        sys.exit(1)
    my_uid = metagraph.hotkeys.index(wallet.hotkey.ss58_address)
    tplr.logger.info(f"Running burn on UID {my_uid}, netuid {config.netuid}")

