Automated Machine Learning Project Implementation Complexities



Photo by Soroush Zargar on Unsplash


Automated machine learning (AutoML) spans a fairly wide range of tasks which could reasonably be thought of as belonging to a machine learning pipeline.

An AutoML "solution" could include the tasks of data preprocessing, feature engineering, algorithm selection, algorithm architecture search, and hyperparameter tuning, or some subset or variation of these distinct tasks. Thus, automated machine learning can now be thought of as anything from solely performing a single task, such as automated feature engineering, all the way through to a fully-automated pipeline, from data preprocessing, to feature engineering, to algorithm selection, and so on.

However, another important dimension of practical AutoML is its implementation complexity. This is the dimension governing the amount of configuration and engineering elbow grease needed to implement and configure an AutoML project. There are solutions which integrate easily into existing software APIs; those which are wrappers around existing APIs; and those which telescope out even further from existing APIs, being invoked by a command line or a single line of code.

To demonstrate the implementation complexity differences along the AutoML highway, let's have a look at how 3 specific software projects approach the implementation of just such an AutoML "solution," namely Keras Tuner, AutoKeras, and automl-gs. We will see how these projects are philosophically quite different from one another, and will get an idea of the different roles and levels of machine learning knowledge that may be necessary or appropriate to implement each of these approaches.

Note that the first 2 of these projects are directly tied to Keras and TensorFlow, and so are specific to neural networks. However, there is no reason why other AutoML software at these same relative implementation complexities need be specific to neural networks; these two tools simply provide an easy method of comparison between the implementation complexities.

Also note that the complexity being assessed is that of the practical code implementation of a solution. There are many other complexities of an AutoML undertaking which would contribute to its overall complexity, including the dataset size, dimensionality, and much more.


Keras Tuner

Let's start with Keras Tuner, what I will refer to as a "some assembly required" automated machine learning project. In order to successfully implement a solution using the project, you would need a working understanding of neural networks, their architecture, and writing code using the Keras library. As such, this is much more "in the weeds" than the other libraries treated herein.

Essentially, Keras Tuner provides automated hyperparameter tuning for Keras. You define a Keras model and note which hyperparameters you want to have included in the automated tuning, along with a search space, and Keras Tuner performs the heavy lifting. These hyperparameters can include conditional parameters, and the search space can be as restricted as you like, but essentially this is a hyperparameter tuning application.
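Before looking at the actual Keras Tuner code, it may help to see what a restricted search space with a conditional hyperparameter means in the abstract. The following is a plain-Python sketch (deliberately not Keras Tuner's API; all names here are illustrative) of randomly sampling configurations from such a space, where a dropout rate is only sampled when dropout is enabled:

```python
import random

# Conceptual sketch of hyperparameter sampling -- NOT the Keras Tuner API.
# 'dropout_rate' is a conditional hyperparameter: it only exists in a
# configuration when 'use_dropout' is sampled as True.
def sample_config(rng):
    config = {
        "units": rng.choice([32, 64, 128, 256]),
        "learning_rate": rng.choice([1e-2, 1e-3, 1e-4]),
        "use_dropout": rng.choice([True, False]),
    }
    if config["use_dropout"]:  # conditional branch of the search space
        config["dropout_rate"] = rng.choice([0.1, 0.25, 0.5])
    return config

rng = random.Random(42)
for trial in range(5):
    print(sample_config(rng))
```

A tuner like Keras Tuner automates exactly this kind of sampling (plus smarter search strategies), trains a model for each sampled configuration, and keeps track of which one performs best.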

Recall that the complexity we are referring to in this article is not the number of AutoML tasks that a particular project performs, but that of the code which implements these tasks. In this regard, given that what we can call lower-level base library code must be written and integrated with our AutoML library, Keras Tuner represents the more complex end of the AutoML implementation complexity spectrum.

The most likely user of Keras Tuner would be a machine learning engineer or data scientist. You are not likely to find experts of a particular domain with little to no coding or machine learning expertise jumping straight to Keras Tuner, as opposed to one of the other projects below. To see why, here's a quick overview of how to implement some very basic Keras Tuner code (example from the Keras Tuner documentation website).

First you need a function that returns a compiled Keras model. It takes an argument hp from which hyperparameters are sampled:

from tensorflow import keras
from tensorflow.keras import layers
from kerastuner.tuners import RandomSearch

def build_model(hp):
    model = keras.Sequential()
    # Tunable number of units in the hidden layer
    model.add(layers.Dense(units=hp.Int('units', min_value=32,
                                        max_value=512, step=32),
                           activation='relu'))
    model.add(layers.Dense(10, activation='softmax'))
    # Tunable learning rate for the optimizer
    model.compile(
        optimizer=keras.optimizers.Adam(
            hp.Choice('learning_rate',
                      values=[1e-2, 1e-3, 1e-4])),
        loss='sparse_categorical_crossentropy',
        metrics=['accuracy'])
    return model

Then you need a tuner, which specifies, among other things, the model-building function, the objective to optimize, the number of trials, and more.

tuner = RandomSearch(
    build_model,
    objective='val_accuracy',
    max_trials=5,
    executions_per_trial=3,
    directory='my_dir',
    project_name='helloworld')

Then start the search for the best hyperparameter configuration:

tuner.search(x, y,
             epochs=5,
             validation_data=(val_x, val_y))

Finally, either check for the best model or print results summary:

# Best model(s)
models = tuner.get_best_models(num_models=2)

# Summary of results
tuner.results_summary()
You may hesitate to refer to this implementation's code as terribly complex, but when you compare it to the following projects I hope you change your mind.

To see more details about the above code, the Keras Tuner process more generally, and what more you can do with the project, see its website.



AutoKeras

Next up is AutoKeras, which I will refer to as an "off the shelf" solution, one which is prepackaged and more or less ready to go, using a more restrictive code template. AutoKeras states its goal as follows:

The ultimate goal of AutoML is to provide easily accessible deep learning tools to domain experts with limited data science or machine learning background.

To accomplish this, AutoKeras performs both architecture search and hyperparameter tuning for Keras neural network models.

Here's a basic code footprint for using AutoKeras:

import autokeras as ak

clf = ak.ImageClassifier()
clf.fit(x_train, y_train)
results = clf.predict(x_test)

If you've used Scikit-learn, this should be familiar syntax. The above code uses the task API; there are others, however, which are of higher complexity. You can find further information on these additional APIs, and more fleshed-out tutorials, on the project's documentation website.

It should be obvious that the above AutoKeras code is of substantially reduced complexity when compared to that of Keras Tuner. You do, however, give up some degree of precision when you reduce this complexity, the obvious trade-off. For domain experts with limited machine learning expertise, however, this might be a good balance.



automl-gs

The third of the solutions we will look at is automl-gs, which takes a 30,000-foot view of AutoML implementation. This goes beyond the "off the shelf" implementation complexity, and offers an approach somewhat akin to the Staples Easy Button.

automl-gs offers a "zero code/model definition interface." You simply point it at a CSV file, identify the target field to predict, and let it go. It generates Python code which can be integrated into existing machine learning workflows, similar to what popular AutoML tool TPOT does. automl-gs also boasts that it is no black box, in that you can see how data is processed and models are constructed, allowing for tweaks to be made after-the-fact.

automl-gs performs data preprocessing, and currently builds models using neural networks (via Keras) and XGBoost, and plans to implement CatBoost and LightGBM have been announced.

Here is a comparison of the 2 ways to call automl-gs, via command line and via a single line of code. Note that you can find further information on configuration options, as well as inspecting output, on the project's website.

Command line:

automl_gs titanic.csv Survived

Python code:

from automl_gs import automl_grid_search
automl_grid_search('titanic.csv', 'Survived')

It should now be easy to compare the code complexities of these 3 levels of AutoML project undertakings.

automl-gs can be executed via a single command-line command or a one-line Python API call. As such, this project could potentially be used by anyone at all, from professional data scientists looking for a project baseline, to amateurs with limited coding skills and no statistical knowledge looking to test the waters of data science (insert the standard warning about messing with powers you don't understand here). While an amateur undertaking that results in important decisions being made based on its predictions could be problematic (not a very likely prospect, IMHO), opening up machine learning and AutoML to anyone looking to learn more about it certainly has value.


Sample automl-gs output code (source)


Similar to TPOT, I see the value here being the potential low-bar entry into creating project baselines. It could be useful to point automl-gs at a CSV and tell it to do its thing in parallel with hand-crafting competing solutions, then compare results. This could be done with other AutoML tools as well, but a tool of this low level of complexity requires so little setup and configuration that it gets the ball rolling very quickly. Being able to review models afterwards and make edits is also appealing, and could be added as another layer to this parallel AutoML/manual model building process.
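That parallel-baseline workflow can be sketched in a few lines of plain Python. All names below are hypothetical placeholders, not automl-gs API; in practice the lambdas would wrap the script automl-gs generates and your own hand-crafted training code, each returning a validation score on the same held-out split:

```python
# Hypothetical sketch of the parallel AutoML/manual workflow described above.
def best_model(candidates):
    """Score each candidate and return the (name, score) pair of the best."""
    scores = {name: score_fn() for name, score_fn in candidates.items()}
    return max(scores.items(), key=lambda item: item[1])

candidates = {
    "automl_gs_baseline": lambda: 0.78,  # placeholder validation accuracy
    "hand_crafted_model": lambda: 0.81,  # placeholder validation accuracy
}

name, score = best_model(candidates)
print(f"Best so far: {name} ({score:.2f})")
```

The AutoML baseline here acts as the bar to beat: if the hand-crafted model cannot outperform a near-zero-effort baseline, that is a signal worth investigating.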



Machine learning presents an array of tasks which can be automated to varying degrees to help simplify pipelines and increase success. Automated machine learning projects take different approaches to which tasks they automate, as well as to the precision of control they allow over the configuration, execution, and follow-up of these tasks. Hopefully the 3 projects spotlighted herein provide some concrete examples of the practical code complexity differences between AutoML tools, and of how, and for whom, they are useful.