The basic steps to run an optimisation process with Tune are:

1. Define what to optimise (via a trainable function or a `tune.Trainable` class)
2. Define the hyperparameters to optimise
3. (optional) Define the resources per trial
4. (optional) Choose a search algorithm and/or a trial scheduler
5. (optional) Set up early stopping
6. (optional) Define when to save model checkpoints
7. Run the optimisation
8. Analyse the results and fetch the best configuration
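To make the shape of the workflow concrete, the non-optional steps can be sketched as a plain-Python toy random search. This is only an analogue of what Tune does for you, not Tune's API; the search space, trial count, and "higher score is better" assumption are all ours:

```python
import random

def objective(x, a, b):
    return a * (x ** 0.5) + b

def trainable(config):
    # Step "define what to optimise": evaluate the objective at a fixed point.
    return objective(19, config["a"], config["b"])

# Step "define the hyperparameters": a toy search space (ranges for a and b).
search_space = {"a": (0.0, 1.0), "b": (0.0, 1.0)}

# Step "run the optimisation": a toy random search over 10 trials.
random.seed(0)
trials = []
for _ in range(10):
    config = {k: random.uniform(lo, hi) for k, (lo, hi) in search_space.items()}
    trials.append((trainable(config), config))

# Step "analyse the results": fetch the best configuration
# (assuming a higher score is better).
best_score, best_config = max(trials, key=lambda t: t[0])
print(best_config)
```

With Tune, the sampling, scheduling, and bookkeeping in the loop above are handled by the library; you only supply the trainable and the search space.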
After defining an objective function, such as:
Example:

```python
def objective(x, a, b):
    return a * (x ** 0.5) + b
```
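To see what this toy objective produces, here it is evaluated at a few points (plain Python, nothing Tune-specific; the sample values of `a` and `b` are arbitrary):

```python
def objective(x, a, b):
    # Scales sqrt(x) by a and shifts it by b.
    return a * (x ** 0.5) + b

# With a=2 and b=1, the score grows with sqrt(x):
print(objective(0, 2, 1))   # 1.0
print(objective(4, 2, 1))   # 5.0
print(objective(16, 2, 1))  # 9.0
```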
There are two options to define what to optimise:
1. Define a function that iteratively calls the objective function and logs the score.
Example:

```python
from ray import tune

def trainable(config):
    # config (dict): A dict of hyperparameters.
    for x in range(20):
        score = objective(x, config["a"], config["b"])
        tune.track.log(score=score)  # This sends the score to Tune.
```
Training (tune.Trainable, tune.report) - Ray 0.9.0.dev0 documentation
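Outside of an actual Tune run, the reporting loop can be traced by replacing the logging call with a plain list (the stub below is ours, not part of Tune's API):

```python
def objective(x, a, b):
    return a * (x ** 0.5) + b

logged = []  # stand-in for what tune.track.log would receive

def trainable(config):
    # config (dict): A dict of hyperparameters.
    for x in range(20):
        score = objective(x, config["a"], config["b"])
        logged.append(score)  # in Tune: tune.track.log(score=score)

trainable({"a": 2, "b": 1})
print(len(logged))  # 20 scores, one per iteration
print(logged[0])    # 1.0 (the score at x=0)
```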
2. Define a `tune.Trainable` subclass that calls the objective function (iteration and logging are built in).
Example:

```python
from ray import tune

class Trainable(tune.Trainable):
    def _setup(self, config):
        # config (dict): A dict of hyperparameters.
        self.x = 0
        self.a = config["a"]
        self.b = config["b"]

    def _train(self):  # This is called iteratively.
        score = objective(self.x, self.a, self.b)
        self.x += 1
        return {"score": score}
```
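The class API's iteration semantics can be mimicked in plain Python (a sketch only: `FakeTrainable` is our stand-in, and in a real run Tune constructs the class and calls `_train` itself rather than you calling it by hand):

```python
def objective(x, a, b):
    return a * (x ** 0.5) + b

class FakeTrainable:
    """Plain-Python stand-in mirroring the tune.Trainable example above."""

    def __init__(self, config):
        self._setup(config)

    def _setup(self, config):
        # config (dict): A dict of hyperparameters.
        self.x = 0
        self.a = config["a"]
        self.b = config["b"]

    def _train(self):  # Tune would call this once per training iteration.
        score = objective(self.x, self.a, self.b)
        self.x += 1
        return {"score": score}

t = FakeTrainable({"a": 2, "b": 1})
results = [t._train() for _ in range(3)]
print(results[0])  # {'score': 1.0}, since x was 0 on the first call
```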
<aside>
💡 Notice how, in both approaches, there is a dictionary input `config`. This is required, as it is where the updated hyperparameter values are retrieved from.
</aside>