Deep learning has revolutionized machine learning. But designing the right neural network (layers, neurons, activation functions, optimizers) can feel like an endless experiment. Wouldn't it be nice if someone did the heavy lifting for you?
That's exactly what AutoML for deep learning aims to solve.
In this article, I'll introduce you to two powerful yet approachable tools for automating deep learning: AutoKeras and Keras Tuner. We'll dive into these libraries and do some hands-on model building.
Why automate deep learning?
Deep learning model design and hyperparameter tuning are resource-intensive. It's easy to:
- Overfit by using too many parameters.
- Waste time testing architectures manually.
- Miss better-performing configurations.
AutoML tools remove much of the guesswork by automating architecture search and tuning.
How do these libraries work?
AutoKeras
AutoKeras leverages Neural Architecture Search (NAS) techniques behind the scenes. It uses a trial-and-error approach powered by Keras Tuner under the hood to test different configurations. Once a good candidate is found, it trains it to convergence and evaluates it.
Keras Tuner
Keras Tuner focuses on hyperparameter optimization. You define the search space (e.g., number of layers, number of units, learning rates), and it uses optimization algorithms (random search, Bayesian optimization, Hyperband) to find the best configuration.
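To build intuition for what these algorithms do, here is a minimal, dependency-free sketch of random search: sample configurations from a search space and keep the one with the best score. The `score` function below is a hypothetical stand-in for training a model and measuring validation accuracy; in Keras Tuner, the library runs this loop for you.

```python
import random

random.seed(0)

# A hypothetical search space, similar in spirit to what you declare with Keras Tuner
search_space = {
    "units": list(range(32, 513, 32)),
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def score(config):
    # Stand-in for "train a model with this config and return validation accuracy"
    return 1.0 - abs(config["units"] - 256) / 512 - abs(config["learning_rate"] - 1e-3)

best_config, best_score = None, float("-inf")
for _ in range(10):  # 10 random trials
    config = {name: random.choice(values) for name, values in search_space.items()}
    trial_score = score(config)
    if trial_score > best_score:
        best_config, best_score = config, trial_score

print(best_config)
```

Bayesian optimization and Hyperband refine this basic idea: the former chooses the next configuration based on past results, and the latter allocates more training budget to promising trials.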
Installing required libraries
Installing these libraries is quite easy; we just need to use pip. We can run the commands below in a Jupyter notebook to install both libraries.
pip install autokeras
pip install keras-tuner
AutoKeras: End-to-end automated deep learning
AutoKeras is a high-level library built on top of TensorFlow and Keras. It automates:
- Neural architecture search (NAS)
- Hyperparameter tuning
- Model training
With just a few lines of code, you can train deep learning models for image, text, tabular, and time-series data.
Creating the model
For this article, we'll work on image classification. We'll load the MNIST dataset, which ships with TensorFlow via tf.keras.datasets, and then use AutoKeras's ImageClassifier.
import autokeras as ak
from tensorflow.keras.datasets import mnist

# Load the MNIST digits (60,000 training and 10,000 test images)
(x_train, y_train), (x_test, y_test) = mnist.load_data()

clf = ak.ImageClassifier(max_trials=3)  # Try 3 different models
clf.fit(x_train, y_train, epochs=10)
accuracy = clf.evaluate(x_test, y_test)
print("Test accuracy:", accuracy)
We can see in the output screenshot that trial 2 took 42 minutes; the search continues until all 3 trials finish and then reports the best model and its parameters.
Keras Tuner: Flexible hyperparameter optimization
Keras Tuner, developed by the TensorFlow team, is a hyperparameter optimization library. Unlike AutoKeras, it doesn't design architectures from scratch; instead, it tunes the hyperparameters of the architecture you define.
Creating the model
Unlike AutoKeras, here we must define the full model ourselves. We'll use the same MNIST image dataset and build a simple image classifier model.
import tensorflow as tf
import keras_tuner as kt

# Load MNIST and scale pixel values to [0, 1]
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

def build_model(hp):
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Flatten())
    # Search the hidden layer width: 32 to 512 units, in steps of 32
    model.add(tf.keras.layers.Dense(hp.Int('units', 32, 512, step=32), activation='relu'))
    model.add(tf.keras.layers.Dense(10, activation='softmax'))
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model

tuner = kt.Hyperband(build_model, objective='val_accuracy', max_epochs=10)
tuner.search(x_train, y_train, epochs=10, validation_split=0.2)

So, in the output, we can clearly see that trial 21 completed in 22 seconds with a validation accuracy of ~97.29%, and the tuner also keeps track of the best accuracy so far, which is ~97.93%.
To find the best model, we can run the commands given below.
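Trials this short reflect Hyperband's strategy: start many configurations with a small training budget and promote only the promising ones. Here is a dependency-free toy sketch of successive halving, the core idea inside Hyperband (the `train_briefly` function is a hypothetical stand-in for a few epochs of real training):

```python
import random

random.seed(1)

def train_briefly(config, epochs):
    # Hypothetical stand-in: pretend accuracy improves with more epochs
    # and with wider hidden layers, capped below 1.0
    return min(0.99, 0.5 + 0.01 * epochs + config["units"] / 2048)

# Start with 8 random configurations and a 1-epoch budget
configs = [{"units": random.choice(range(32, 513, 32))} for _ in range(8)]
epochs = 1
while len(configs) > 1:
    ranked = sorted(configs, key=lambda c: train_briefly(c, epochs), reverse=True)
    configs = ranked[: len(ranked) // 2]  # keep the top half
    epochs *= 2                           # give survivors a bigger budget
best = configs[0]
print(best)
```

Keras Tuner's Hyperband runs several such brackets with different trade-offs between the number of configurations and the budget per configuration.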
models = tuner.get_best_models(num_models=2)
best_model = models[0]
best_model.summary()

We can also see the top 10 trials that Keras Tuner performed using the command below.
tuner.results_summary()

Real-life use case
A good example of how we can use both of these libraries is a telecom company that wants to predict customer churn using structured customer data. Its data science team uses AutoKeras to quickly train a model on tabular features without writing complex architecture code. Later, they use Keras Tuner to fine-tune a custom neural network that incorporates domain-knowledge features. This hybrid approach saves weeks of experimentation and improves model performance.
Conclusion
Both AutoKeras and Keras Tuner make deep learning more accessible and efficient.
Use AutoKeras when you want a quick, end-to-end model without worrying about architecture. Use Keras Tuner when you already have a good idea of your architecture but want to squeeze out the best performance through hyperparameter tuning.
Automating parts of deep learning frees you up to focus on understanding your data and interpreting results, which is where the real value lies.