amber.architect

Rationale

The amber.architect module can be divided into four major components. The first two are easy to picture: a model space to sample model architectures from, and a training environment that serves as a placeholder where the different components interact. Currently the model space is pre-defined, but it could be made dynamic and evolving in the future.
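
As a rough illustration, a pre-defined model space can be thought of as an ordered collection of candidate operations per layer, from which one architecture is sampled. The sketch below is a minimal, self-contained stand-in; the class and method names (ModelSpace, Operation, add_layer, sample) are illustrative assumptions, not necessarily the actual amber.architect API:

    import random

    # Hypothetical stand-ins for the model-space abstractions; names are
    # illustrative, not the actual amber.architect classes.
    class Operation:
        def __init__(self, name, **params):
            self.name, self.params = name, params

    class ModelSpace:
        def __init__(self):
            self.layers = []          # layers[i] = list of candidate Operations

        def add_layer(self, candidates):
            self.layers.append(candidates)

        def sample(self):
            # One architecture = one candidate operation chosen per layer.
            return [random.choice(candidates) for candidates in self.layers]

    space = ModelSpace()
    space.add_layer([Operation("conv1d", filters=16), Operation("conv1d", filters=32)])
    space.add_layer([Operation("maxpool1d"), Operation("avgpool1d")])
    arch = space.sample()   # e.g. [conv1d(filters=32), maxpool1d]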

The remaining two components, the search algorithms and the manager, do most of the heavy lifting. Search algorithms, such as a controller recurrent neural network versus a genetic algorithm, are separated into sub-packages (a folder with __init__.py), and each variant of a search algorithm is kept in a separate file. This makes it easy to re-use code within a search algorithm, keeps different searchers relatively independent of each other, and lets them share the same configurations of model space and training environment (although only certain combinations of searcher, model space, and train environment may be viable). A hypothetical layout is sketched below.
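
For illustration only, the organization described above might look like the layout below. The file names are assumptions that mirror the sections of this page, not necessarily the actual module names:

    amber/architect/
        model_space.py            # Model Space
        train_env.py              # Train Environment
        controller/               # sub-package: one searcher variant per file
            __init__.py
            general_controller.py
            multiio_controller.py
            operation_controller.py
            zero_shot_controller.py
        manager.py                # Manager
        buffer.py                 # Buffer
        reward.py                 # Reward
        store.py                  # Store
        common_ops.py             # Common Operations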

The manager is perhaps the most tedious component. Its general workflow is:

  1. takes a model architecture arch as input,

  2. passes arch to the modeler model=amber.modeler.model_fn(arch),

  3. trains the model model.fit(train_data),

  4. evaluates the model’s reward reward=reward_fn(model),

  5. stores the model in a buffer buffer_fn.store(model, reward),

  6. returns the reward signal to the search algorithm.

Each of these steps has variations, but the overall layout should almost always stay as described above; a minimal sketch follows.
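
The sketch below is a minimal, hypothetical illustration of this workflow in Python. It assumes callable placeholders model_fn, reward_fn, and buffer_fn named after the steps above; the class and method names are assumptions, not the actual amber.architect API:

    # Minimal sketch of the manager workflow; model_fn, reward_fn and buffer_fn
    # are placeholders named after the steps above, and their exact signatures
    # are assumptions for illustration only.
    class Manager:
        def __init__(self, model_fn, reward_fn, buffer_fn, train_data):
            self.model_fn = model_fn      # builds a trainable model from an architecture
            self.reward_fn = reward_fn    # scores a trained model
            self.buffer_fn = buffer_fn    # stores (model, reward) pairs
            self.train_data = train_data

        def get_reward(self, arch):
            model = self.model_fn(arch)               # 2. architecture -> model
            model.fit(self.train_data)                # 3. train the model
            reward = self.reward_fn(model)            # 4. evaluate the reward
            self.buffer_fn.store(model, reward)       # 5. store in the buffer
            return reward                             # 6. feed back to the searcher

    # A search algorithm would then drive the loop, e.g.:
    #   arch = searcher.sample()                      # 1. propose an architecture
    #   reward = manager.get_reward(arch)
    #   searcher.update(arch, reward)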

Model Space

Train Environment

Controller(s)

General Controller

MultiIO Controller

Operation Controller

Zero-Shot Controller (AMBIENT)

Manager

Buffer

Reward

Store

Common Operations