
fix(api): Introduced VPC status property#320

Open
bcavnvidia wants to merge 1 commit into NVIDIA:main from bcavnvidia:resource-pool-alloc-source

Conversation

@bcavnvidia (Contributor) commented Feb 17, 2026

Description

  • Adds a status property to VPC with only an optional vni property for now.
    • status.vni will be populated with the active VNI of the VPC.
      • In the case of an auto-allocated VNI, vpc.vni will be None.
      • In the case of a caller explicitly requesting a VNI during VPC creation, both vpc.vni and vpc.status.vni will match.
  • Removes DPA VNI allocation code and cleans up the DB.
    • DPA VNI will now be populated based on VPC VNI in gRPC responses
  • Moves the original Vpc.vni proto field to a deprecated_vni field that we will continue to populate with the value of vpc.status.vni.
  • Adds a new Vpc.vni proto field that holds the VNI only when it was explicitly requested during VPC creation.
  • Rearranges VPC creation and VNI allocation to eliminate an extra DB call.
    • Also removes the separate test for duplicate VNIs, since that case is now covered by the VPC-creation test.

This lets an API consumer know the source of the active VNI (auto/non-auto) and also gives us a place to handle additional dynamic/asynchronously populated fields in the future.
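The auto vs. explicit semantics described above can be sketched roughly as follows. This is an illustrative model only; the struct and function names (`Vpc`, `VpcStatus`, `create_vpc`, the `alloc` closure) are assumptions, not the actual types in this repository.

```rust
// Hypothetical sketch of the VNI semantics in this PR; names are illustrative.
#[derive(Debug, PartialEq)]
struct VpcStatus {
    vni: Option<u32>, // the active VNI, whether explicit or auto-allocated
}

#[derive(Debug, PartialEq)]
struct Vpc {
    vni: Option<u32>, // set only when the caller explicitly requested a VNI
    status: Option<VpcStatus>,
}

/// Build a Vpc from an optional caller-requested VNI, falling back to an
/// auto-allocated one (`alloc` stands in for the real allocator).
fn create_vpc(requested_vni: Option<u32>, alloc: impl FnOnce() -> u32) -> Vpc {
    let active = requested_vni.unwrap_or_else(alloc);
    Vpc {
        vni: requested_vni, // None for auto-allocated VNIs
        status: Some(VpcStatus { vni: Some(active) }),
    }
}

fn main() {
    // Explicit request: vpc.vni and vpc.status.vni match.
    let explicit = create_vpc(Some(4096), || 1);
    assert_eq!(explicit.vni, Some(4096));
    assert_eq!(explicit.status.as_ref().unwrap().vni, Some(4096));

    // Auto-allocated: vpc.vni is None, status.vni holds the active VNI.
    let auto = create_vpc(None, || 1234);
    assert_eq!(auto.vni, None);
    assert_eq!(auto.status.unwrap().vni, Some(1234));
}
```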

Type of Change

  • Add - New feature or capability
  • Change - Changes in existing functionality
  • Fix - Bug fixes
  • Remove - Removed features or deprecated functionality
  • Internal - Internal changes (refactoring, tests, docs, etc.)

Related Issues (Optional)

Breaking Changes

  • This PR contains breaking changes

Testing

  • Unit tests added/updated
  • Integration tests added/updated
  • Manual testing performed
  • No testing required (docs, internal refactor, etc.)

Additional Notes

@bcavnvidia bcavnvidia requested a review from a team as a code owner February 17, 2026 22:07
@bcavnvidia bcavnvidia self-assigned this Feb 17, 2026
@Matthias247 (Contributor):

@yoks Maybe you can take a look. You also started working on getting the APIs standardized.

```diff
 (
     "DPA_VNI",
-    format!("{}", vpc.dpa_vni.unwrap_or_default()).into(),
+    format!("{}", vpc.status.and_then(|s| s.vni).unwrap_or_default()).into(),
```
@poroh (Contributor) commented Feb 18, 2026

Is this a copy/paste bug, or do we no longer need this column in the output?

@bcavnvidia (Contributor, Author):

It's not copy-pasta, but maybe the DPA field in the proto should be marked deprecated. DPA VNI is going to come from the VPC VNI now.

The question is whether we want to leave it in the proto as a separate field for any reason. @chet

If we do, then this line should pull from dpa_vni. If not, then this line could maybe just go away entirely.

@chet (Contributor) commented Feb 18, 2026

Yeah it can probably just go away entirely. I think there's something to be said re: communicating that the same VNI is used for N/S and E/W, but that doesn't really have to do with how we print this column, lol.

I could say it's temporarily useful just for people to realize, but since the DPA VNI is a new thing that people aren't leveraging yet, they'd be none the wiser anyway.

@bcavnvidia (Contributor, Author):

Perfect, so no need for any deprecation process or anything? Just blow it all away completely?

It seemed like that was the case, but I wanted the review to make sure.

```diff
-/// be used by DPA as pools.dpa-vni
+/// DPA (aka Cluster Interconnect Network) related configuration
+/// In addition to enabling DPA and specifying the mqtt endpoint,
+/// the vni to be used by DPA will come from pools.vpc-vni.
```
Contributor:

Should this actually say "the VNI to be used by DPA will be the same VNI as the VPC?" This just makes it read like we're allocating a separate VNI for E/W, and not re-using the same VNI?
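Following that suggestion, the revised doc comment might read something like the sketch below. This is one possible wording only, not the final text of the PR.

```proto
/// DPA (aka Cluster Interconnect Network) related configuration.
/// In addition to enabling DPA and specifying the mqtt endpoint,
/// the VNI used by DPA will be the same VNI as the VPC; no separate
/// E/W VNI is allocated.
```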

```diff
 uint32 available_31_segments = 2;
 uint64 total_linknet_segments = 3;
-uint64 available_linknet_segments = 4; 
+uint64 available_linknet_segments = 4;
```
Contributor:

extra whitespace 😛


```rust
    Ok(())
}
```

Contributor:

This seems like a basic test we'd want to keep in principle, even if it ends up looking totally different? Or is it so infallible that there's no way to even write it up?

@bcavnvidia (Contributor, Author):

  1. I refactored to drop the set_vni function entirely, which is really what this test was exercising.
  2. The create_vpc test now covers the duplicate-VNI case.

(This is the last bullet point in the PR description, but it might not be clear enough.)


```sql
-- Add a unique "constraint" similar to the one on
-- the original VNI column.
CREATE UNIQUE INDEX ON vpcs(((status->>'vni')::integer));
```
Contributor:

The original index had a WHERE deleted IS NULL clause on it (so that there can be multiple with the same vni so long as they're deleted)... I think we may need to carry that forward here if possible.
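The reviewer's suggestion of carrying forward the partial-index clause might look like the sketch below: a partial unique expression index so that soft-deleted VPCs can share a VNI with a live one. The table and column names are taken from the snippet above; the exact `deleted` column semantics are an assumption here.

```sql
-- Sketch of the suggested fix: enforce uniqueness of the active VNI only
-- among non-deleted VPCs, mirroring the WHERE clause on the original index.
CREATE UNIQUE INDEX ON vpcs (((status->>'vni')::integer))
WHERE deleted IS NULL;
```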


5 participants