Acquisition Functions¶
Acquisition functions guide the selection of the next experiments in Bayesian optimization. They balance exploration (reducing uncertainty) and exploitation (targeting regions predicted to be optimal).
BoTorch Acquisition¶
PyTorch-based acquisition functions with batch support.
alchemist_core.acquisition.botorch_acquisition.BoTorchAcquisition(search_space, model=None, acq_func='ucb', maximize=True, random_state=42, acq_func_kwargs=None, batch_size=1, ref_point=None, directions=None, objective_names=None, outcome_constraints=None)¶
Bases: BaseAcquisition
Acquisition function implementation using BoTorch.
Supported acquisition functions:

- 'ei': Expected Improvement
- 'logei': Log Expected Improvement (numerically stable)
- 'pi': Probability of Improvement
- 'logpi': Log Probability of Improvement (numerically stable)
- 'ucb': Upper Confidence Bound
- 'qei': Batch Expected Improvement (for q>1)
- 'qucb': Batch Upper Confidence Bound (for q>1)
- 'qipv' or 'qnipv': q-Negative Integrated Posterior Variance (exploratory)
Initialize the BoTorch acquisition function.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `search_space` | | The search space (SearchSpace object) | *required* |
| `model` | | A trained model (BoTorchModel) | `None` |
| `acq_func` | | Acquisition function type (see class docstring for options) | `'ucb'` |
| `maximize` | | Whether to maximize (True) or minimize (False) the objective | `True` |
| `random_state` | | Random state for reproducibility | `42` |
| `acq_func_kwargs` | | Dictionary of additional arguments for the acquisition function | `None` |
| `batch_size` | | Number of points to select at once (q) | `1` |
| `ref_point` | | Reference point for MOBO hypervolume (list of floats) | `None` |
| `directions` | | Per-objective direction list ('maximize'/'minimize') | `None` |
| `objective_names` | | List of objective names (for multi-objective) | `None` |
| `outcome_constraints` | | List of outcome constraint callables for MOBO | `None` |
Raises:

| Type | Description |
|---|---|
| `ValueError` | If acq_func is not a valid acquisition function name |
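A minimal construction sketch, not a verbatim excerpt from the library: `space` and `model` stand in for an already-built `SearchSpace` and trained `BoTorchModel` (their construction is not shown here), and the `beta` entry in `acq_func_kwargs` is an assumed pass-through to the UCB exploration weight.

```python
from alchemist_core.acquisition.botorch_acquisition import BoTorchAcquisition

# `space` is assumed to be a SearchSpace and `model` a trained BoTorchModel,
# both constructed elsewhere; only the acquisition setup is shown.
acq = BoTorchAcquisition(
    search_space=space,
    model=model,
    acq_func="ucb",
    maximize=True,
    random_state=42,
    batch_size=1,
    acq_func_kwargs={"beta": 2.0},  # assumed pass-through to UCB's exploration weight
)
```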
select_next(candidate_points=None)¶
Suggest the next experiment point(s) using BoTorch optimization.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `candidate_points` | | Candidate points to evaluate (optional) | `None` |

Returns:

| Type | Description |
|---|---|
| | Dictionary with the selected point or list of points |
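A usage sketch, assuming `acq` is the BoTorchAcquisition instance from above and `candidate_pool` is a hypothetical list of candidate points:

```python
# Optimize the acquisition function over the full search space.
suggestion = acq.select_next()

# Or score only a finite pool of candidate points.
suggestion = acq.select_next(candidate_points=candidate_pool)

print(suggestion)  # dictionary with the selected point, or a list of points for q > 1
```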
find_optimum(model=None, maximize=None, random_state=None)¶
Find the point where the model predicts the optimal value.
This uses the same approach as regret plot predictions: generate a grid in the original variable space, predict using the model's standard pipeline, and find the argmax/argmin. This ensures categorical variables are handled correctly through proper encoding/decoding.
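A hedged call sketch; with all arguments left as `None`, the method is assumed to fall back to the acquisition's own model, direction, and random state:

```python
# Predicted optimum using the acquisition's stored model and maximize setting.
best = acq.find_optimum()

# Or override the direction and seed explicitly.
best_min = acq.find_optimum(maximize=False, random_state=0)
```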
Skopt Acquisition¶
Scikit-optimize based acquisition functions.
alchemist_core.acquisition.skopt_acquisition.SkoptAcquisition(search_space, model=None, acq_func='ei', maximize=True, random_state=42, acq_func_kwargs=None)¶
Bases: BaseAcquisition
Simple acquisition function implementation using scikit-optimize (skopt).
Supported acquisition functions:
- 'ei' or 'EI': Expected Improvement
- 'pi' or 'PI': Probability of Improvement
- 'ucb' or 'UCB': Upper Confidence Bound (mapped to LCB in skopt)
- 'gp_hedge': GP-Hedge (portfolio of acquisition functions)
Initialize the acquisition function.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `search_space` | | The search space (SearchSpace object or list of skopt dimensions) | *required* |
| `model` | | A trained model (SklearnModel or compatible) | `None` |
| `acq_func` | | Acquisition function ('ei', 'pi', 'ucb', or 'gp_hedge') | `'ei'` |
| `maximize` | | Whether to maximize (True) or minimize (False) the objective | `True` |
| `random_state` | | Random state for reproducibility | `42` |
| `acq_func_kwargs` | | Dictionary of additional arguments for the acquisition function | `None` |
Raises:

| Type | Description |
|---|---|
| `ValueError` | If acq_func is not a valid acquisition function name |
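A construction sketch analogous to the BoTorch example above; `space` and `model` are assumed to exist, and the `xi` entry in `acq_func_kwargs` is an assumed pass-through to skopt's Expected Improvement exploration margin.

```python
from alchemist_core.acquisition.skopt_acquisition import SkoptAcquisition

# `space` is assumed to be a SearchSpace (or list of skopt dimensions) and
# `model` a trained SklearnModel-compatible surrogate, built elsewhere.
acq = SkoptAcquisition(
    search_space=space,
    model=model,
    acq_func="ei",
    maximize=True,
    random_state=42,
    acq_func_kwargs={"xi": 0.01},  # assumed pass-through to skopt's EI exploration margin
)
```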
select_next(candidate_points=None, **kwargs)¶
Suggest the next experiment point.
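A usage sketch, assuming `acq` is the SkoptAcquisition instance from above and `candidate_pool` is a hypothetical list of candidate points:

```python
# Suggest a single next experiment point.
next_point = acq.select_next()

# Or restrict the suggestion to a supplied candidate pool.
next_point = acq.select_next(candidate_points=candidate_pool)
```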
find_optimum(model, maximize=True, random_state=42)¶
Find the point where the model predicts the optimal value.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model` | | Trained model with predict method | *required* |
| `maximize` | | Whether to maximize (True) or minimize (False) the objective | `True` |
| `random_state` | | Random seed for reproducibility | `42` |

Returns:

| Name | Type | Description |
|---|---|---|
| | dict | Contains 'x_opt' (optimal point), 'value' (predicted value), 'std' (standard deviation at optimum) |
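Reading the documented return dictionary, a usage sketch might look like the following (assuming the `acq` and `model` objects from above):

```python
result = acq.find_optimum(model, maximize=True, random_state=42)

print(result["x_opt"])  # predicted optimal point
print(result["value"])  # model prediction at that point
print(result["std"])    # predictive standard deviation at the optimum
```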
Available Strategies¶
Single-Point Strategies¶
| Strategy | Code | Description | Best For |
|---|---|---|---|
| Expected Improvement | 'EI' | Balances exploration/exploitation | General use |
| Log EI | 'LogEI' | Numerically stable EI | Noisy data |
| Probability of Improvement | 'PI' | Conservative improvement | Risk-averse |
| Log PI | 'LogPI' | Numerically stable PI | Noisy data |
| Upper Confidence Bound | 'UCB' | More exploratory | Unknown spaces |
Batch Strategies¶
| Strategy | Code | Description | Best For |
|---|---|---|---|
| Batch Expected Improvement | 'qEI' | Parallel experiments | Lab workflows |
| Batch UCB | 'qUCB' | Parallel + exploratory | Exploration |
| Negative Integrated Posterior Variance | 'qNIPV' | Pure exploration | Model improvement |
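As a hedged sketch of a batch workflow, a q-variant strategy can be paired with `batch_size` on the BoTorch backend (same assumed `space` and `model` as in the examples above):

```python
# Request 4 experiments to run in parallel via batch Expected Improvement.
batch_acq = BoTorchAcquisition(
    search_space=space,
    model=model,
    acq_func="qei",
    batch_size=4,
)
batch = batch_acq.select_next()  # list of points, per the select_next description
```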
See Also¶
- OptimizationSession - High-level acquisition interface
- BoTorch Acquisition Guide - Detailed strategy explanations
- Models - Model training for acquisition