
# Optimization

### `coordinate_descent(operating_point, fn, best_mse=float('-inf'), granularity=10, percentage=0.05)`

Performs coordinate descent on the operating point.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `operating_point` | `Dict[str, float]` | operating point to be optimised; the order of the dict matters | required |
| `fn` | `Callable` | function to be optimised | required |
| `best_mse` | `float` | best value found so far; the search keeps candidates that exceed it | `float('-inf')` |
| `granularity` | `int` | number of points sampled along each coordinate | `10` |
| `percentage` | `float` | fractional half-width of the search interval around each coordinate value | `0.05` |

Returns:

| Type | Description |
| --- | --- |
| `Tuple[Dict[str, float], float]` | best operating point, best mse |

Source code in cmtj/utils/optimization.py
```python
def coordinate_descent(
    operating_point: Dict[str, float],
    fn: Callable,
    best_mse: float = float("-inf"),
    granularity: int = 10,
    percentage: float = 0.05,
):
    """Performs coordinate descent on the operating point.
    :param operating_point: operating point to be optimised. Order of that dict matters.
    :param fn: function to be optimised
    :param best_mse: best mse so far
    :param granularity: granularity of the search
    :param percentage: percentage of the search
    :returns: best operating point, best mse
    """
    opt_params = operating_point
    for k, org_v in tqdm(operating_point.items(), desc="Coordinate descent"):
        new_params = operating_point.copy()
        for v in tqdm(
                np.linspace((1 - percentage) * org_v, (1 + percentage) * org_v,
                            granularity),
                desc=f"Optimising {k}",
                leave=False,
        ):
            new_params[k] = v
            mse = fn(**new_params)
            if mse > best_mse:
                # copy, so later mutations of new_params do not leak into the result
                opt_params = new_params.copy()
                best_mse = mse
    return opt_params, best_mse
```
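The loop above can be sketched in a self-contained form (progress bars omitted, and with the kept candidate copied so later mutations do not alias it). Note that the `score > best` comparison means `fn` is *maximised*, so to minimise an error you would pass its negative. The objective and starting point below are illustrative, not part of the library:

```python
import numpy as np

def coordinate_descent_sketch(operating_point, fn, best=float("-inf"),
                              granularity=10, percentage=0.05):
    """Sweep each key of operating_point over +/- percentage of its value,
    keeping the single best-scoring candidate seen so far."""
    opt_params = operating_point
    for k, org_v in operating_point.items():
        # each coordinate sweep restarts from the original operating point
        new_params = operating_point.copy()
        for v in np.linspace((1 - percentage) * org_v,
                             (1 + percentage) * org_v, granularity):
            new_params[k] = v
            score = fn(**new_params)
            if score > best:
                opt_params = new_params.copy()  # snapshot the winning candidate
                best = score
    return opt_params, best

# maximise -(x-1)^2 - (y-2)^2, starting near the optimum
fn = lambda x, y: -((x - 1) ** 2) - ((y - 2) ** 2)
point, best = coordinate_descent_sketch({"x": 0.95, "y": 2.1}, fn)
```

Because each sweep restarts from the original point, the result reflects the single best one-coordinate move, not an accumulated update across coordinates.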

### `hebo_optimization_loop(cfg, fn, error_fn, target, fixed_parameters, n_iters=150, n_suggestions=8)`

Optimizes the parameters of a function using HEBO. See HEBO documentation for more details: https://github.com/huawei-noah/HEBO

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `cfg` | `dict` | configuration of the design space | required |
| `fn` | `Callable` | function to be optimised: `fn(**parameters, **fixed_parameters)` | required |
| `error_fn` | `Callable` | function that computes the error: `error_fn(target, result)` | required |
| `target` | `np.ndarray` | target data | required |
| `fixed_parameters` | `dict` | parameters held fixed during optimisation | required |
| `n_iters` | `int` | number of iterations | `150` |
| `n_suggestions` | `int` | number of suggestions per iteration | `8` |
Source code in cmtj/utils/optimization.py
```python
def hebo_optimization_loop(
    cfg: dict,
    fn: Callable,
    error_fn: Callable,
    target: np.ndarray,
    fixed_parameters: dict,
    n_iters: int = 150,
    n_suggestions: int = 8,
):
    """Optimizes the parameters of a function using HEBO.
    See HEBO documentation for more details: https://github.com/huawei-noah/HEBO
    :param cfg: configuration of the design space
    :param fn: function to be optimised fn(**parameters, **fixed_parameters)
    :param error_fn: function to compute the error: error_fn(target, result)
    :param target: target data
    :param fixed_parameters: parameters that are fixed
    :param n_iters: number of iterations
    :param n_suggestions: number of suggestions per iteration
    """
    try:
        from hebo.design_space.design_space import DesignSpace
        from hebo.optimizers.hebo import HEBO
    except ImportError as e:
        raise ImportError(
            "HEBO is not installed. Please install it with `pip install HEBO`"
        ) from e
    space = DesignSpace().parse(cfg)
    opt = HEBO(space)
    best_mse = float("inf")
    for i in tqdm(range(1, n_iters + 1), desc="HEBO optimization loop"):
        rec = opt.suggest(n_suggestions)
        errors = multiprocess_simulate(
            fn=fn,
            error_fn=error_fn,
            suggestions=rec.to_dict(orient="records"),
            target=target,
            fixed_parameters=fixed_parameters,
        )
        opt.observe(rec, errors)
        val = opt.y.min()
        if val < best_mse:
            best_mse = val
            best_params = opt.best_x.iloc[0].to_dict()
            print(f"iteration {i} best mse {best_mse}")
            print(best_params)
    return opt
```
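A hedged sketch of the inputs this loop expects. The parameter names (`Ku`, `Ms`, `frequency`) and bounds are illustrative only, not part of cmtj; the `cfg` shape follows HEBO's `DesignSpace.parse` convention of one dict per tunable parameter, and `error_fn` is a plain mean-squared-error that HEBO will minimise:

```python
import numpy as np

# Hypothetical design space: each entry names one tunable parameter
# with its type and bounds, as consumed by HEBO's DesignSpace.parse.
cfg = [
    {"name": "Ku", "type": "num", "lb": 1e5, "ub": 1e6},
    {"name": "Ms", "type": "num", "lb": 0.5, "ub": 2.0},
]

def error_fn(target, result):
    # scalar error between target data and a simulation result; HEBO minimises this
    return float(np.mean((np.asarray(target) - np.asarray(result)) ** 2))

# parameters passed to every fn(**parameters, **fixed_parameters) call unchanged
fixed_parameters = {"frequency": 1e9}
```

With a simulation function `fn` accepting these keywords, the call would look like `opt = hebo_optimization_loop(cfg, fn, error_fn, target, fixed_parameters)`; the returned HEBO optimizer exposes the best point via `opt.best_x` and its score via `opt.y.min()`.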