
Out-of-design trials may cause repeated trials #1568

Open
ailitw opened this issue Apr 4, 2023 · 8 comments
Labels
bug (Something isn't working), enhancement (New feature or request)

Comments

ailitw commented Apr 4, 2023

I have a setting where parameter constraints and the number of digits make the search space quite narrow, and as a result some trials from ax_client.get_next_trial() end up being out-of-design. Whenever that happens, all subsequent trials end up being the exact same point and I'm unable to get new trials.

I suspect this has something to do with the filtering of out-of-design trials: when those points are filtered out, the acquisition function optimization completes exactly as in the last step that produced the out-of-design trial. I have tried marking the trials as abandoned with ax_client.abandon_trial(trial_index=trial_index), but it seems abandoned trials are also filtered, so the out-of-design points are still excluded.

Below is a minimal example to reproduce the issue:

import numpy as np

from ax.service.ax_client import AxClient, ObjectiveProperties
from ax.utils.measurement.synthetic_functions import hartmann6


def evaluate(parameters):
    x = np.array([parameters.get(f"x{i+1}") for i in range(6)])
    return {
        "hartmann6": (hartmann6(x), 0.0),
    }


ax_client = AxClient()

parameters = [
    {
        "name": f"x{i+1}",
        "type": "range",
        "bounds": [0.0, 1.0],
        "value_type": "float",
        "digits": 1,
    }
    for i in range(6)
]

experiment_kwargs = {
    "name": "hartmann_test_experiment",
    "parameters": parameters,
    "objectives": {"hartmann6": ObjectiveProperties(minimize=True)},
    "parameter_constraints": [
        "x1 + x2 + x3 + x4 + x5 + x6 <= 1.0",
        "x1 + x2 + x3 + x4 + x5 + x6 >= 0.999",
    ],
}

ax_client.create_experiment(**experiment_kwargs)

for i in range(20):
    new_point, trial_index = ax_client.get_next_trial()
    ax_client.complete_trial(trial_index=trial_index, raw_data=evaluate(new_point))
[INFO 04-04 14:10:01] ax.service.ax_client: Starting optimization with verbose logging. To disable logging, set the `verbose_logging` argument to `False`. Note that float values in the logs are rounded to 6 decimal points.
[INFO 04-04 14:10:01] ax.service.utils.instantiation: Created search space: SearchSpace(parameters=[RangeParameter(name='x1', parameter_type=FLOAT, range=[0.0, 1.0], digits=1), RangeParameter(name='x2', parameter_type=FLOAT, range=[0.0, 1.0], digits=1), RangeParameter(name='x3', parameter_type=FLOAT, range=[0.0, 1.0], digits=1), RangeParameter(name='x4', parameter_type=FLOAT, range=[0.0, 1.0], digits=1), RangeParameter(name='x5', parameter_type=FLOAT, range=[0.0, 1.0], digits=1), RangeParameter(name='x6', parameter_type=FLOAT, range=[0.0, 1.0], digits=1)], parameter_constraints=[ParameterConstraint(1.0*x1 + 1.0*x2 + 1.0*x3 + 1.0*x4 + 1.0*x5 + 1.0*x6 <= 1.0), ParameterConstraint(-1.0*x1 + -1.0*x2 + -1.0*x3 + -1.0*x4 + -1.0*x5 + -1.0*x6 <= -0.999)]).
[INFO 04-04 14:10:01] ax.modelbridge.dispatch_utils: Using Models.GPEI since there are more ordered parameters than there are categories for the unordered categorical parameters.
[INFO 04-04 14:10:01] ax.modelbridge.dispatch_utils: Calculating the number of remaining initialization trials based on num_initialization_trials=None max_initialization_trials=None num_tunable_parameters=6 num_trials=None use_batch_trials=False
[INFO 04-04 14:10:01] ax.modelbridge.dispatch_utils: calculated num_initialization_trials=12
[INFO 04-04 14:10:01] ax.modelbridge.dispatch_utils: num_completed_initialization_trials=0 num_remaining_initialization_trials=12
[INFO 04-04 14:10:01] ax.modelbridge.dispatch_utils: Using Bayesian Optimization generation strategy: GenerationStrategy(name='Sobol+GPEI', steps=[Sobol for 12 trials, GPEI for subsequent trials]). Iterations after 12 will take longer to generate due to model-fitting.
[INFO 04-04 14:10:02] ax.service.ax_client: Generated new trial 0 with parameters {'x1': 0.2, 'x2': 0.1, 'x3': 0.3, 'x4': 0.1, 'x5': 0.2, 'x6': 0.1}.
[INFO 04-04 14:10:02] ax.service.ax_client: Completed trial 0 with data: {'hartmann6': (-0.157493, 0.0)}.
[INFO 04-04 14:10:02] ax.service.ax_client: Generated new trial 1 with parameters {'x1': 0.0, 'x2': 0.2, 'x3': 0.4, 'x4': 0.2, 'x5': 0.1, 'x6': 0.1}.
[INFO 04-04 14:10:02] ax.service.ax_client: Completed trial 1 with data: {'hartmann6': (-0.118905, 0.0)}.
[INFO 04-04 14:10:02] ax.service.ax_client: Generated new trial 2 with parameters {'x1': 0.2, 'x2': 0.1, 'x3': 0.1, 'x4': 0.3, 'x5': 0.1, 'x6': 0.2}.
[INFO 04-04 14:10:02] ax.service.ax_client: Completed trial 2 with data: {'hartmann6': (-0.237511, 0.0)}.
[INFO 04-04 14:10:02] ax.service.ax_client: Generated new trial 3 with parameters {'x1': 0.1, 'x2': 0.1, 'x3': 0.7, 'x4': 0.0, 'x5': 0.0, 'x6': 0.1}.
[INFO 04-04 14:10:02] ax.service.ax_client: Completed trial 3 with data: {'hartmann6': (-0.048001, 0.0)}.
[INFO 04-04 14:10:02] ax.service.ax_client: Generated new trial 4 with parameters {'x1': 0.1, 'x2': 0.2, 'x3': 0.0, 'x4': 0.1, 'x5': 0.5, 'x6': 0.1}.
[INFO 04-04 14:10:02] ax.service.ax_client: Completed trial 4 with data: {'hartmann6': (-0.067061, 0.0)}.
[INFO 04-04 14:10:03] ax.service.ax_client: Generated new trial 5 with parameters {'x1': 0.1, 'x2': 0.2, 'x3': 0.2, 'x4': 0.3, 'x5': 0.0, 'x6': 0.2}.
[INFO 04-04 14:10:03] ax.service.ax_client: Completed trial 5 with data: {'hartmann6': (-0.111679, 0.0)}.
[INFO 04-04 14:10:03] ax.service.ax_client: Generated new trial 6 with parameters {'x1': 0.2, 'x2': 0.2, 'x3': 0.3, 'x4': 0.1, 'x5': 0.1, 'x6': 0.1}.
[INFO 04-04 14:10:03] ax.service.ax_client: Completed trial 6 with data: {'hartmann6': (-0.100623, 0.0)}.
[INFO 04-04 14:10:03] ax.service.ax_client: Generated new trial 7 with parameters {'x1': 0.0, 'x2': 0.1, 'x3': 0.4, 'x4': 0.0, 'x5': 0.3, 'x6': 0.2}.
[INFO 04-04 14:10:03] ax.service.ax_client: Completed trial 7 with data: {'hartmann6': (-0.295631, 0.0)}.
[INFO 04-04 14:10:03] ax.service.ax_client: Generated new trial 8 with parameters {'x1': 0.3, 'x2': 0.4, 'x3': 0.0, 'x4': 0.0, 'x5': 0.2, 'x6': 0.1}.
[INFO 04-04 14:10:03] ax.service.ax_client: Completed trial 8 with data: {'hartmann6': (-0.067841, 0.0)}.
[INFO 04-04 14:10:03] ax.service.ax_client: Generated new trial 9 with parameters {'x1': 0.3, 'x2': 0.3, 'x3': 0.3, 'x4': 0.1, 'x5': 0.0, 'x6': 0.0}.
[INFO 04-04 14:10:03] ax.service.ax_client: Completed trial 9 with data: {'hartmann6': (-0.033633, 0.0)}.
[INFO 04-04 14:10:03] ax.service.ax_client: Generated new trial 10 with parameters {'x1': 0.0, 'x2': 0.1, 'x3': 0.6, 'x4': 0.1, 'x5': 0.2, 'x6': 0.0}.
[INFO 04-04 14:10:03] ax.service.ax_client: Completed trial 10 with data: {'hartmann6': (-0.063682, 0.0)}.
[INFO 04-04 14:10:04] ax.service.ax_client: Generated new trial 11 with parameters {'x1': 0.2, 'x2': 0.0, 'x3': 0.2, 'x4': 0.1, 'x5': 0.3, 'x6': 0.2}.
[INFO 04-04 14:10:04] ax.service.ax_client: Completed trial 11 with data: {'hartmann6': (-0.350309, 0.0)}.
[INFO 04-04 14:10:08] ax.service.ax_client: Generated new trial 12 with parameters {'x1': 0.1, 'x2': 0.0, 'x3': 0.2, 'x4': 0.1, 'x5': 0.3, 'x6': 0.3}.
[INFO 04-04 14:10:08] ax.service.ax_client: Completed trial 12 with data: {'hartmann6': (-0.645269, 0.0)}.
[INFO 04-04 14:10:13] ax.service.ax_client: Generated new trial 13 with parameters {'x1': 0.1, 'x2': 0.0, 'x3': 0.2, 'x4': 0.1, 'x5': 0.3, 'x6': 0.4}.
[INFO 04-04 14:10:13] ax.service.ax_client: Completed trial 13 with data: {'hartmann6': (-1.061896, 0.0)}.
[INFO 04-04 14:10:13] ax.modelbridge.base: Leaving out out-of-design observations for arms: 13_0
[INFO 04-04 14:10:18] ax.service.ax_client: Generated new trial 14 with parameters {'x1': 0.1, 'x2': 0.0, 'x3': 0.2, 'x4': 0.1, 'x5': 0.3, 'x6': 0.4}.
[INFO 04-04 14:10:18] ax.service.ax_client: Completed trial 14 with data: {'hartmann6': (-1.061896, 0.0)}.
[INFO 04-04 14:10:18] ax.modelbridge.base: Leaving out out-of-design observations for arms: 13_0
[INFO 04-04 14:10:22] ax.service.ax_client: Generated new trial 15 with parameters {'x1': 0.1, 'x2': 0.0, 'x3': 0.2, 'x4': 0.1, 'x5': 0.3, 'x6': 0.4}.
[INFO 04-04 14:10:22] ax.service.ax_client: Completed trial 15 with data: {'hartmann6': (-1.061896, 0.0)}.
[INFO 04-04 14:10:22] ax.modelbridge.base: Leaving out out-of-design observations for arms: 13_0
[INFO 04-04 14:10:27] ax.service.ax_client: Generated new trial 16 with parameters {'x1': 0.1, 'x2': 0.0, 'x3': 0.2, 'x4': 0.1, 'x5': 0.3, 'x6': 0.4}.
[INFO 04-04 14:10:27] ax.service.ax_client: Completed trial 16 with data: {'hartmann6': (-1.061896, 0.0)}.
[INFO 04-04 14:10:27] ax.modelbridge.base: Leaving out out-of-design observations for arms: 13_0
[INFO 04-04 14:10:31] ax.service.ax_client: Generated new trial 17 with parameters {'x1': 0.1, 'x2': 0.0, 'x3': 0.2, 'x4': 0.1, 'x5': 0.3, 'x6': 0.4}.
[INFO 04-04 14:10:31] ax.service.ax_client: Completed trial 17 with data: {'hartmann6': (-1.061896, 0.0)}.
[INFO 04-04 14:10:31] ax.modelbridge.base: Leaving out out-of-design observations for arms: 13_0
[INFO 04-04 14:10:36] ax.service.ax_client: Generated new trial 18 with parameters {'x1': 0.1, 'x2': 0.0, 'x3': 0.2, 'x4': 0.1, 'x5': 0.3, 'x6': 0.4}.
[INFO 04-04 14:10:36] ax.service.ax_client: Completed trial 18 with data: {'hartmann6': (-1.061896, 0.0)}.
[INFO 04-04 14:10:36] ax.modelbridge.base: Leaving out out-of-design observations for arms: 13_0
[INFO 04-04 14:10:40] ax.service.ax_client: Generated new trial 19 with parameters {'x1': 0.1, 'x2': 0.0, 'x3': 0.2, 'x4': 0.1, 'x5': 0.3, 'x6': 0.4}.
[INFO 04-04 14:10:40] ax.service.ax_client: Completed trial 19 with data: {'hartmann6': (-1.061896, 0.0)}.
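To see why rounding and the tight constraint band interact so badly here, a small standalone sketch (plain Python, not Ax code; the candidate values are made up for illustration): with digits=1 every parameter is rounded to a multiple of 0.1, so the sum is too, and the only multiple of 0.1 inside [0.999, 1.0] is exactly 1.0 — so rounding a continuous candidate can easily push its sum off that value and out of design.

```python
# Standalone sketch (not Ax code): with digits=1 each parameter rounds
# to a multiple of 0.1, so the sum does too. The only multiple of 0.1
# inside [0.999, 1.0] is 1.0 itself, so rounding easily leaves the band.

candidate = [0.24, 0.16, 0.14, 0.16, 0.15, 0.1495]  # continuous, feasible
assert 0.999 <= sum(candidate) <= 1.0

rounded = [round(v, 1) for v in candidate]  # what digits=1 does per coordinate
# The rounded sum is (up to float noise) a multiple of 0.1 and here falls
# below the feasible band, i.e. the point is now out-of-design.
assert not (0.999 <= sum(rounded) <= 1.0)
```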
Balandat (Contributor) commented Apr 5, 2023

> as a result some trials from ax_client.get_next_trial() end up being out-of-design

This shouldn't really be happening in the first place. Seems like the candidate generation produces outputs that slightly violate the rather tight constraints.

From your constraint it looks like what you'd actually want is to just constrain the parameters to live on the simplex? Seems like the ideal solution would be to actually support equality constraints?

cc @mpolson64

ailitw (Author) commented Apr 11, 2023

> as a result some trials from ax_client.get_next_trial() end up being out-of-design
>
> This shouldn't really be happening in the first place. Seems like the candidate generation produces outputs that slightly violate the rather tight constraints.

In principle, getting slightly out-of-bounds outputs doesn't matter for my use case, since I can just try a new trial; I'm willing to wait a little longer to get a valid output with these narrow constraints. However, getting stuck is a problem. I would need some way to nudge the generator to give me a new point.

> From your constraint it looks like what you'd actually want is to just constrain the parameters to live on the simplex? Seems like the ideal solution would be to actually support equality constraints?

Yes, supporting equality constraints would allow more flexible use cases without workarounds.
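In the meantime, one common workaround (a sketch of the general technique, not an Ax feature; the helper name is made up) is to eliminate one variable: for an equality constraint x1 + ... + xk == total, expose only the first k-1 parameters to the optimizer under the inequality constraint sum <= total, and reconstruct the eliminated parameter inside the evaluation function.

```python
# Sketch of reparameterizing an equality constraint by eliminating one
# variable: optimize x1..x5 subject to x1 + ... + x5 <= total, and recover
# x6 = total - sum(x1..x5) inside the evaluation function.

total = 1.0

def reconstruct(free_params, total=total):
    """Given the k-1 free parameters, recover the eliminated one."""
    last = total - sum(free_params)
    assert last >= 0.0, "the inequality constraint should guarantee this"
    return list(free_params) + [last]

full = reconstruct([0.2, 0.1, 0.3, 0.1, 0.2])
assert abs(sum(full) - total) < 1e-9  # equality holds by construction
```

The equality constraint is then satisfied by construction, at the cost of the model seeing only the k-1 free coordinates.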

lauralepomaki commented:
I'm having very similar issues with the optimization getting stuck on an out-of-design sample and generating it over and over again as a new trial.

sgbaird (Contributor) commented May 30, 2023

xref: #510, and equality constraints are on the wishlist #566

lena-kashtelyan (Contributor) commented:
I believe that the Sobol fallback on stuck optimization, which @saitcakmak is planning to work on, will help with this, so assigning this to him.

ga92xug commented Sep 22, 2023

A small update on this: if you set should_deduplicate=True in your generation step, at least the trials won't be repeated.
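For reference, a configuration sketch of where that flag goes (exact module paths and step defaults may differ between Ax versions): build an explicit generation strategy with should_deduplicate=True on the Bayesian optimization step and pass it to AxClient.

```python
# Configuration sketch (Ax API; details may vary across versions):
# an explicit Sobol + GPEI generation strategy with deduplication
# enabled on the model-based step.
from ax.modelbridge.generation_strategy import GenerationStrategy, GenerationStep
from ax.modelbridge.registry import Models
from ax.service.ax_client import AxClient

gs = GenerationStrategy(
    steps=[
        GenerationStep(model=Models.SOBOL, num_trials=12),
        GenerationStep(model=Models.GPEI, num_trials=-1, should_deduplicate=True),
    ]
)
ax_client = AxClient(generation_strategy=gs)
```

Note this prevents repeated candidates but does not by itself stop the underlying acquisition optimization from proposing out-of-design points.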

sgbaird (Contributor) commented Feb 8, 2024

I have another case where this is surfacing, even though the constraints are relatively lax compared to what @ailitw has shown. While the stock branin function is no problem, when I modified it to illustrate the idea behind reparameterizing a linear equality constraint as an inequality constraint per #727, I changed the function to add a term based on a hidden "x3". See the Colab reproducer, with some of it copied below for provenance.

import numpy as np
from ax.service.ax_client import AxClient, ObjectiveProperties

obj1_name = "branin"


def branin(x1, x2):
    y = float(
        (x2 - 5.1 / (4 * np.pi**2) * x1**2 + 5.0 / np.pi * x1 - 6.0) ** 2
        + 10 * (1 - 1.0 / (8 * np.pi)) * np.cos(x1)
        + 10
    )

    # Contrived way to incorporate the hidden x3 into the objective
    y = y * (1 + 0.1 * x1 * x2 * (1 - x1 - x2))

    return y


# Define total for compositional constraint, where x1 + x2 + x3 == total
total = 10.0


ax_client = AxClient(random_seed=42)
# Note: the lower bound of x1 is 0.0 instead of the usual -5.0, for the sake
# of illustrating a composition, where negative values wouldn't make sense
ax_client.create_experiment(
    parameters=[
        {"name": "x1", "type": "range", "bounds": [0.0, total]},
        {"name": "x2", "type": "range", "bounds": [0.0, total]},
    ],
    objectives={
        obj1_name: ObjectiveProperties(minimize=True),
    },
    parameter_constraints=[
        f"x1 + x2 <= {total}",  # reparameterized compositional constraint, which is a type of sum constraint
    ],
)


for _ in range(21):
    parameterization, trial_index = ax_client.get_next_trial()

    # extract parameters
    x1 = parameterization["x1"]
    x2 = parameterization["x2"]

    results = branin(x1, x2)
    ax_client.complete_trial(trial_index=trial_index, raw_data=results)


best_parameters, metrics = ax_client.get_best_parameters()

I end up with the following:

...
[INFO 02-08 21:11:24] ax.service.ax_client: Generated new trial 19 with parameters {'x1': 5.465987, 'x2': 4.534013}.
[INFO 02-08 21:11:24] ax.service.ax_client: Completed trial 19 with data: {'branin': (-595.518689, None)}.
[INFO 02-08 21:11:24] ax.modelbridge.base: Leaving out out-of-design observations for arms: 16_0
[INFO 02-08 21:11:24] ax.modelbridge.torch: The observations are identical to the last set of observations used to fit the model. Skipping model fitting.
[INFO 02-08 21:11:30] ax.service.ax_client: Generated new trial 20 with parameters {'x1': 5.465987, 'x2': 4.534013}.
[INFO 02-08 21:11:30] ax.service.ax_client: Completed trial 20 with data: {'branin': (-595.518689, None)}.
[INFO 02-08 21:11:30] ax.modelbridge.base: Leaving out out-of-design observations for arms: 16_0

In each case, the constraint is only violated slightly.

sgbaird (Contributor) commented Feb 8, 2024

This is the best workaround I could come up with so far, since I'm not sure how to adjust the parameterization associated with the current trial in the Service API, and I wasn't sure whether marking the trial as failed or abandoned and then attaching a point effectively right next to it might throw things off. This isn't ideal, because it means I attach the same trial twice: once with the original invalid parameters and then again with adjusted parameters.

import numpy as np
from ax.service.ax_client import AxClient, ObjectiveProperties

obj1_name = "branin"


def branin3(x1, x2, x3):
    y = float(
        (x2 - 5.1 / (4 * np.pi**2) * x1**2 + 5.0 / np.pi * x1 - 6.0) ** 2
        + 10 * (1 - 1.0 / (8 * np.pi)) * np.cos(x1)
        + 10
    )

    # Contrived way to incorporate x3 into the objective
    y = y * (1 + 0.1 * x1 * x2 * x3)

    return y


total = 10.0

ax_client = AxClient(random_seed=42)
ax_client.create_experiment(
    parameters=[
        {"name": "x1", "type": "range", "bounds": [0.0, total]},
        {"name": "x2", "type": "range", "bounds": [0.0, total]},
    ],
    objectives={
        obj1_name: ObjectiveProperties(minimize=True),
    },
    parameter_constraints=[
        f"x1 + x2 <= {total}",  # compositional constraint
    ],
)

for _ in range(21):
    parameterization, trial_index = ax_client.get_next_trial()

    # calculate x3 based on the compositional constraint, x1 + x2 + x3 == total
    x1 = parameterization["x1"]
    x2 = parameterization["x2"]

    x3 = total - (x1 + x2)

    results = branin3(x1, x2, x3)

    ax_client.complete_trial(trial_index=trial_index, raw_data=results)

    # If x1 + x2 is slightly greater than total, adjust it proportionally
    if x1 + x2 > total:
        excess = (x1 + x2) - total
        # Adjust x1 and x2 proportionally to their current values
        x1 -= excess * (x1 / (x1 + x2))
        x2 -= excess * (x2 / (x1 + x2))

        x3 = total - (x1 + x2)
        results = branin3(x1, x2, x3)

        parameterization, trial_index = ax_client.attach_trial({"x1": x1, "x2": x2})
        ax_client.complete_trial(trial_index=trial_index, raw_data=results)


best_parameters, metrics = ax_client.get_best_parameters()
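The proportional adjustment inside the loop above can be factored into a small standalone helper (a sketch of the same logic, not Ax API; the function name and tolerance are made up). Shrinking both coordinates by the factor total / (x1 + x2) is equivalent to subtracting the excess in proportion to each coordinate's share.

```python
# Sketch: rescale a point that slightly violates sum <= total back onto
# the boundary, preserving the ratio between the components.

def rescale_to_total(x1, x2, total, tol=1e-6):
    """If x1 + x2 exceeds total by at most tol, shrink both proportionally."""
    s = x1 + x2
    if s > total + tol:
        raise ValueError("violation exceeds tolerance; inspect the trial instead")
    if s > total:
        scale = total / s  # same as subtracting the excess proportionally
        x1 *= scale
        x2 *= scale
    return x1, x2

a, b = rescale_to_total(6.0, 4.0000005, 10.0)
assert a + b <= 10.0 + 1e-9           # back on (or inside) the boundary
assert rescale_to_total(1.0, 2.0, 10.0) == (1.0, 2.0)  # feasible points pass through
```

The tolerance guards against silently "repairing" a badly violated point, which would more likely indicate a bug than acquisition-optimizer slack.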

@saitcakmak saitcakmak removed their assignment Jul 10, 2024
@bernardbeckerman bernardbeckerman added the bug Something isn't working label Jul 30, 2024