test pr 2 #18
Conversation
**Important: Review skipped**

Auto reviews are disabled on base/target branches other than the default branch. Please check the settings in the CodeRabbit UI or the configuration file. You can disable this status message via the corresponding setting.

**Walkthrough**

The changes refactor the handling of random state in acquisition functions by removing its management from constructors and serialization, instead requiring it to be passed explicitly to `suggest()`.
**Sequence Diagram(s)**

```mermaid
sequenceDiagram
    participant User
    participant BayesianOptimization
    participant AcquisitionFunction
    participant GaussianProcessRegressor
    participant TargetSpace
    User->>BayesianOptimization: suggest(random_state)
    BayesianOptimization->>AcquisitionFunction: suggest(gp, target_space, ..., random_state)
    AcquisitionFunction->>GaussianProcessRegressor: (fit if required)
    AcquisitionFunction->>TargetSpace: (access bounds, etc.)
    AcquisitionFunction->>AcquisitionFunction: _acq_min(..., random_state)
    AcquisitionFunction->>AcquisitionFunction: _random_sample_minimize(..., random_state)
    AcquisitionFunction->>AcquisitionFunction: _smart_minimize(..., random_state)
    AcquisitionFunction-->>BayesianOptimization: suggested_point
    BayesianOptimization-->>User: suggested_point
```
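The core change described in the walkthrough — moving the RNG from the constructor into each `suggest()` call — can be sketched as follows. This is a minimal illustration with assumed names, not the library's actual code:

```python
import numpy as np

class MockSuggester:
    """Sketch of the refactor: no RNG is stored on the object;
    every suggest() call receives its own random_state."""

    def suggest(self, random_state=None):
        # Normalize None/int/RandomState into a RandomState per call.
        if not isinstance(random_state, np.random.RandomState):
            random_state = np.random.RandomState(random_state)
        return random_state.rand()

acq = MockSuggester()
# Same seed -> same suggestion, with no hidden state on the object.
assert acq.suggest(random_state=7) == acq.suggest(random_state=7)
```

Because the object carries no RNG, serialization no longer needs to round-trip random state, which is what lets the PR drop it from constructors.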
Pull Request Overview
This PR centralizes the use of random_state in suggest() calls (instead of constructors), renames the n_l_bfgs_b parameter to n_smart, and updates tests and default acquisition-function instantiations accordingly.
- Deprecate `random_state` in acquisition-function constructors and emit warnings
- Rename `n_l_bfgs_b` → `n_smart` across API and tests
- Update defaults in `bayesian_optimization.py` and propagate `random_state` through `suggest()`
Reviewed Changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| tests/test_bayesian_optimization.py | Accidental debug `print(i)` introduced into loop |
| tests/test_acquisition.py | Removed constructor `random_state`, updated `suggest()` calls |
| bayes_opt/bayesian_optimization.py | Drop ctor `random_state`, pass it into `suggest()` |
| bayes_opt/acquisition.py | Deprecate ctor `random_state`, rename `n_l_bfgs_b` → `n_smart`, add `random_state` param and warnings |
@CodeRabbit review

✅ Actions performed: Review triggered.
Actionable comments posted: 2
🔭 Outside diff range comments (2)
bayes_opt/acquisition.py (2)
**309-318: ⚠️ Potential issue — seeds for the smart optimiser are chosen from the worst instead of the best samples**

`np.argsort(ys)[-n_x_seeds:]` returns indices of the largest acquisition values, but we want the smallest (best) values to start L-BFGS-B / DE.

```diff
- idxs = np.argsort(ys)[-n_x_seeds:]
+ idxs = np.argsort(ys)[:n_x_seeds]
```

This simple one-liner noticeably improves convergence quality.
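For reference, a tiny standalone check of the slicing semantics at issue (plain NumPy, independent of the library):

```python
import numpy as np

ys = np.array([3.0, 1.0, 2.0])
order = np.argsort(ys)       # indices sorted by ascending value

best_two = order[:2]         # indices of the two smallest values
worst_two = order[-2:]       # indices of the two largest values

assert list(best_two) == [1, 2]   # ys[1]=1.0, ys[2]=2.0 — the best seeds
assert list(worst_two) == [2, 0]  # ys[2]=2.0, ys[0]=3.0 — the worst
```

Since the minimizer treats smaller acquisition values as better, seeding from `order[:n_x_seeds]` starts the local search at the most promising points.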
**1225-1268: 💡 Verification agent — 🧩 Analysis chain**

`GPHedge.suggest()` may crash when an `int` seed is supplied.

`_sample_idx_from_softmax_gains` expects a `RandomState` and calls `.rand()`. If callers provide an `int` (allowed by the public signature), an `AttributeError` is raised. Add an early conversion:

```diff
 def suggest(
     self,
     gp: GaussianProcessRegressor,
     @@
-    self.i += 1
+    random_state = ensure_rng(random_state)
+    self.i += 1
```

Also adjust the subsequent call:

```diff
- idx = self._sample_idx_from_softmax_gains(random_state=random_state)
+ idx = self._sample_idx_from_softmax_gains(random_state)
```
I’ll inspect the implementation of `GPHedge.suggest` and `_sample_idx_from_softmax_gains`, and check whether `ensure_rng` is available.

🏁 Script executed:

```bash
#!/bin/bash
set -e

# Locate acquisition file(s)
echo "=== Finding acquisition.py files ==="
fd acquisition.py

# Show GPHedge class around suggest() definition
echo "=== GPHedge.suggest signature ==="
rg -A5 -B5 "class GPHedge" -n --color=never bayes_opt/acquisition.py

# Find suggest() method in GPHedge
echo "=== suggest() definition ==="
rg -n "def suggest" --color=never bayes_opt/acquisition.py

# Inspect the body for random_state usage
echo "=== suggest() body context ==="
rg -n "random_state" -A5 -B5 --color=never bayes_opt/acquisition.py

# Locate _sample_idx_from_softmax_gains
echo "=== _sample_idx_from_softmax_gains implementation ==="
rg -n "def _sample_idx_from_softmax_gains" -A20 --color=never bayes_opt/acquisition.py

# Check for ensure_rng helper
echo "=== ensure_rng references ==="
rg -n "ensure_rng" --color=never .
```

Length of output: 22036
**Convert `random_state` to a `RandomState` in `GPHedge.suggest`**

The override in `GPHedge.suggest` never calls `ensure_rng()`, so if you pass an `int` seed, `_sample_idx_from_softmax_gains` will call `random_state.rand()` on an `int` and blow up. Add an early conversion and pass the `RandomState` onward:

```diff
--- a/bayes_opt/acquisition.py
+++ b/bayes_opt/acquisition.py
@@ class GPHedge(AcquisitionFunction):
     def suggest(
         self,
         gp: GaussianProcessRegressor,
         target_space: TargetSpace,
         n_random: int = 10_000,
         n_smart: int = 10,
         fit_gp: bool = True,
         random_state: int | RandomState | None = None,
     ) -> NDArray[Float]:
+        # Ensure we always have a numpy RandomState
+        random_state = ensure_rng(random_state)
         if len(target_space) == 0:
             msg = (
                 "Cannot suggest a point without previous samples. Use "
@@
         self.previous_candidates = np.array(x_max)
-        idx = self._sample_idx_from_softmax_gains(random_state=random_state)
+        idx = self._sample_idx_from_softmax_gains(random_state)
         return x_max[idx]
```

- Adds the missing `ensure_rng(random_state)` call.
- Passes the converted `RandomState` into `_sample_idx_from_softmax_gains`.

This fixes the `AttributeError` when an `int` seed is supplied.
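The fix relies on `ensure_rng` from `bayes_opt/util.py`, which the code-graph analysis in this review locates at lines 8-30. Its behavior is roughly the following — a hedged sketch, not the library's exact source:

```python
import numpy as np

def ensure_rng(random_state=None):
    """Sketch of the helper: normalize None/int/RandomState into a
    numpy RandomState instance (assumed behavior)."""
    if random_state is None:
        return np.random.RandomState()
    if isinstance(random_state, (int, np.integer)):
        return np.random.RandomState(random_state)
    if isinstance(random_state, np.random.RandomState):
        return random_state
    raise TypeError(f"Unsupported random_state: {random_state!r}")

# With the conversion in place, an int seed works anywhere .rand() is
# later called on the result, which is what the fix above restores.
rng = ensure_rng(42)
assert hasattr(rng, "rand")
assert ensure_rng(rng) is rng
```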
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge Base: Disabled due to data retention organization setting
📒 Files selected for processing (4)
- `bayes_opt/acquisition.py` (27 hunks)
- `bayes_opt/bayesian_optimization.py` (2 hunks)
- `tests/test_acquisition.py` (11 hunks)
- `tests/test_bayesian_optimization.py` (1 hunk)
🧰 Additional context used
🧬 Code Graph Analysis (1)
bayes_opt/acquisition.py (4)
- tests/test_acquisition.py (4): `random_state` (45-46), `target_space` (55-56), `fun` (137-141), `gp` (50-51)
- bayes_opt/util.py (1): `ensure_rng` (8-30)
- bayes_opt/target_space.py (5): `max` (617-645), `TargetSpace` (34-707), `random_sample` (558-596), `bounds` (187-194), `res` (647-678)
- bayes_opt/parameter.py (4): `random_sample` (68-87), `random_sample` (262-281), `random_sample` (359-381), `bounds` (59-61)
🔇 Additional comments (2)
bayes_opt/bayesian_optimization.py (1)
**252-255: 👍 Correct propagation of `random_state` to the new API**

Passing `random_state=self._random_state` matches the refactored `suggest()` signature and retains reproducibility. No further action required.

bayes_opt/acquisition.py (1)

**73-76: Deprecation warning handled correctly**

Nice touch issuing a deprecation warning when a user still passes `random_state` to the constructor. The implementation is concise and future-proof.
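The constructor-deprecation pattern praised above can be illustrated like this — a generic sketch, not the library's actual message or class layout:

```python
import warnings

class AcquisitionFunction:
    """Sketch: warn when the deprecated constructor argument is used."""

    def __init__(self, random_state=None):
        if random_state is not None:
            warnings.warn(
                "The random_state constructor argument is deprecated; "
                "pass random_state to suggest() instead.",
                DeprecationWarning,
                stacklevel=2,  # point the warning at the caller's line
            )

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    AcquisitionFunction(random_state=42)

assert len(caught) == 1
assert issubclass(caught[0].category, DeprecationWarning)
```

`stacklevel=2` makes the warning point at the user's call site rather than the library internals, which is the conventional choice for argument deprecations.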
```python
# Test that multiple subsequent suggestions match
for _ in range(5):
    print(i)
    suggestion1 = optimizer.suggest()
    suggestion2 = new_optimizer.suggest()
```
**`print(i)` will raise `NameError` and pollute test output**

Inside the loop declared `for _ in range(5):`, the loop variable is `_`, not `i`, so `print(i)` triggers a `NameError`, breaking the whole test suite.
Even if it were valid, uncontrolled print statements make tests noisy and slow CI pipelines.
```diff
- for _ in range(5):
-     print(i)
+ for _ in range(5):
+     # Optional: keep an index for debugging
+     # print(_)
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
# Test that multiple subsequent suggestions match
for _ in range(5):
    # Optional: keep an index for debugging
    # print(_)
    suggestion1 = optimizer.suggest()
    suggestion2 = new_optimizer.suggest()
```
🤖 Prompt for AI Agents
In tests/test_bayesian_optimization.py around lines 581 to 585, remove the `print(i)` statement inside the loop: the loop variable is `_` and `i` is undefined, so the call raises a `NameError`, and even a valid print would add noisy output and slow CI. Delete or comment out the line to fix the issue.
```diff
 def test_acquisition_optimization(gp, target_space):
-    acq = MockAcquisition(random_state=42)
+    acq = MockAcquisition()
     target_space.register(params={"x": 2.5, "y": 0.5}, target=3.0)
     res = acq.suggest(gp=gp, target_space=target_space)
     assert np.array([3.0, 1.0]) == pytest.approx(res)
@@
     with pytest.raises(ValueError):
-        acq.suggest(gp=gp, target_space=target_space, n_random=0, n_l_bfgs_b=0)
+        acq.suggest(gp=gp, target_space=target_space, n_random=0, n_smart=0)
```
**Flaky tests: deterministic RNG removed**

After the refactor, the acquisition no longer stores an internal RNG, so calls like `res = acq.suggest(gp=gp, target_space=target_space)` now draw from an unseeded generator, potentially giving different minima and sporadic test failures. Pass the shared `random_state` fixture instead:

```diff
- res = acq.suggest(gp=gp, target_space=target_space)
+ res = acq.suggest(gp=gp, target_space=target_space, random_state=random_state)
```

Apply the same fix to the other MockAcquisition tests for full determinism.
🤖 Prompt for AI Agents
In tests/test_acquisition.py around lines 92 to 100, the test calls acq.suggest
without passing a random_state, causing nondeterministic results due to the
removal of internal RNG. Fix this by passing the shared random_state fixture to
acq.suggest as an argument to ensure deterministic behavior. Also, update other
MockAcquisition tests similarly to use the shared random_state for full test
determinism.
Summary by CodeRabbit
- New Features
- Bug Fixes
- Refactor
  - `n_l_bfgs_b` has been renamed to `n_smart` across the user interface for greater clarity.
- Tests
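If backwards compatibility for the `n_l_bfgs_b` → `n_smart` rename were desired, a keyword shim could look like the following. This is hypothetical — the PR performs a hard rename; the shim is only an illustration of the alternative:

```python
import warnings

def suggest(n_random=10_000, n_smart=10, **kwargs):
    """Sketch of a compatibility shim for the renamed parameter."""
    if "n_l_bfgs_b" in kwargs:
        warnings.warn(
            "n_l_bfgs_b is deprecated; use n_smart instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        # Honor the old spelling while steering callers to the new one.
        n_smart = kwargs.pop("n_l_bfgs_b")
    if kwargs:
        raise TypeError(f"Unexpected arguments: {sorted(kwargs)}")
    return n_random, n_smart

assert suggest(n_l_bfgs_b=5) == (10_000, 5)
assert suggest(n_smart=3) == (10_000, 3)
```

A hard rename keeps the API surface smaller; a shim like this trades that simplicity for a gentler migration path.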