Hyperparameter tuning

Basics

The basic steps to run an optimisation process with Tune are the following (an end-to-end sketch follows the workflow diagram below):

  1. Define what to optimise (a trainable function or class; see below)

  2. Define the hyperparameters to optimise

  3. (optional) Define the resources per trial

  4. (optional) Choose a search algorithm and/or a trial scheduler

  5. (optional) Set up early stopping

  6. (optional) Define when to save model checkpoints

  7. Run the optimisation

  8. Analyse the results and fetch the best configuration

Tune workflow diagram: https://docs.ray.io/en/latest/_images/tune-workflow1.png
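
Putting the required steps together, a minimal end-to-end run might look like the sketch below. This is not the documentation's own example; it assumes a Tune version that provides tune.report, tune.uniform and the tune.run/Analysis API, and it skips the optional steps 3-6.

Example:

from ray import tune

def objective(x, a, b):
    return a * (x ** 0.5) + b

def trainable(config):
    # Step 1: what to optimise, written as a function of the config.
    for x in range(20):
        score = objective(x, config["a"], config["b"])
        tune.report(score=score)

# Step 2: the hyperparameters to optimise (the search space).
search_space = {
    "a": tune.uniform(0, 1),
    "b": tune.uniform(0, 20),
}

# Step 7: run the optimisation.
analysis = tune.run(trainable, config=search_space, num_samples=10)

# Step 8: analyse the results and fetch the best configuration.
print(analysis.get_best_config(metric="score", mode="min"))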

Defining the optimisation function

After defining an objective function, such as:

Example:

def objective(x, a, b):
    return a * (x ** 0.5) + b

There are two options to define what to optimise:

Define a function that iteratively calls the objective function and reports the score.

Example:

from ray import tune

def trainable(config):
    # config (dict): A dict of hyperparameters.
    for x in range(20):
        score = objective(x, config["a"], config["b"])
        tune.report(score=score)  # This sends the score to Tune.
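
As a quick smoke test, the trainable above can be launched with fixed hyperparameter values, which runs a single trial with no search. A sketch assuming the tune.run API:

Example:

from ray import tune

# One trial with fixed values for "a" and "b"; no search space involved.
analysis = tune.run(trainable, config={"a": 2, "b": 4})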


Define a tune.Trainable class that calls the objective function (iteration and logging are built in).

Example:

from ray import tune

class Trainable(tune.Trainable):
    def _setup(self, config):
        # config (dict): A dict of hyperparameters
        self.x = 0
        self.a = config["a"]
        self.b = config["b"]

    def _train(self):  # This is called iteratively.
        score = objective(self.x, self.a, self.b)
        self.x += 1
        return {"score": score}

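Because iteration is built in, the number of _train calls is controlled from outside the class, for example with a stop criterion when launching the run. A sketch assuming the tune.run API; training_iteration is a metric Tune records for every trial:

Example:

from ray import tune

# Run the class-based trainable for 20 iterations per trial.
analysis = tune.run(
    Trainable,
    stop={"training_iteration": 20},
    config={"a": 2, "b": 4},
)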

<aside> đŸ’¡ Notice how both approaches take a single dictionary argument, config. This is required: it is where Tune passes in the hyperparameter values chosen for each trial.

</aside>
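
For illustration, the values that appear in config come from a search-space definition passed to tune.run. A sketch using tune.grid_search and tune.uniform; the keys must match what the trainable reads from config:

Example:

from ray import tune

search_space = {
    "a": tune.grid_search([0.1, 0.5, 1.0]),  # try every listed value
    "b": tune.uniform(0, 20),                # sample a float per trial
}

# Each trial receives one concrete assignment, e.g. {"a": 0.5, "b": 7.3}.
analysis = tune.run(trainable, config=search_space)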