Boost your augmentation with a custom layer

In PerceptiLabs we have the ability to augment our dataset (images) with handy options in the dataset settings like random flip, random crop & random rotation. These are really useful for combating overfitting and help our model generalize better - especially if there is a limited number of samples to train our model on. Random horizontal flip alone effectively doubles the samples in the dataset, and each additional augmentation we enable multiplies the effective variety further. In most cases, this is perfectly enough.

But what if we want even more samples? Or want some type of augmentation that isn’t available as an option in Perceptilabs?

While it would be hard to cover every conceivable sort of augmentation in a nice graphical frontend, PerceptiLabs offers custom components that let us put in code of our own. And we can harness them to do just that: more augmentation!

EDITED: This version no longer relies on TF 2.7, so there is no need to change your TensorFlow installation.

We add a custom component to our model and place it between the input and the rest of the model:

Then, with the LayerCustom_1 component selected, we open the custom component’s code by clicking the “open code </>” button in the settings panel on the right.

In the code editor, we first import the experimental preprocessing module. Then we add a build() method under the class declaration, like so:

import tensorflow.keras.layers.experimental.preprocessing as pp

class LayerCustom_LayerCustom_1Keras(tf.keras.layers.Layer, PerceptiLabsVisualizer):
    def build(self, input_shape):
        # Stack the augmentation layers into a single pipeline
        self.preprocess = tf.keras.Sequential([
            pp.Rescaling(1./127.5, -1),     # [0, 255] -> [-1, 1]
            pp.RandomContrast(0.1),
            pp.RandomZoom(0.1, 0.1),
            pp.RandomTranslation(0.1, 0.1),
            pp.RandomFlip(),
            pp.RandomRotation(0.1)
        ])

Here, we stack a couple of augmentation layers inside a tf.keras.Sequential (there’s a standalone sketch after this list if you want to try it outside PerceptiLabs):

  • pp.Rescaling(1./127.5, -1) scales the input from the range [0, 255] to [-1, 1]. Don’t use it if you’re already doing normalization in the data settings.
  • pp.RandomContrast(0.1) applies a random contrast adjustment by a factor of up to 0.1 (or, 10%)
  • pp.RandomZoom(0.1, 0.1) randomly zooms the image in or out by a factor of up to 0.1
  • pp.RandomTranslation(0.1, 0.1) randomly translates the image in any direction by a factor of up to 0.1
  • pp.RandomFlip() randomly flips the image horizontally and/or vertically
  • pp.RandomRotation(0.1) randomly rotates the image by up to ±0.1 × 2π rad
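
If you want to see what this pipeline does outside PerceptiLabs, here’s a minimal standalone sketch (assuming TF 2.5 with the experimental preprocessing module; the random batch is just dummy data for illustration):

import numpy as np
import tensorflow as tf
import tensorflow.keras.layers.experimental.preprocessing as pp

augment = tf.keras.Sequential([
    pp.Rescaling(1./127.5, -1),     # [0, 255] -> [-1, 1]
    pp.RandomContrast(0.1),
    pp.RandomZoom(0.1, 0.1),
    pp.RandomTranslation(0.1, 0.1),
    pp.RandomFlip(),
    pp.RandomRotation(0.1)
])

batch = np.random.randint(0, 256, size=(4, 32, 32, 3)).astype("float32")
out = augment(batch, training=True)   # the random layers only act when training=True
print(out.shape)                      # (4, 32, 32, 3): same shape, new "samples"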

Finally, we just need to apply the pipeline in the call() section:

        input_ = inputs['input']
        # Pass the training flag through so augmentation only runs during training
        output = self.preprocess(input_, training=training)

And that’s all! Here’s the entire code for our custom component:

import tensorflow.keras.layers.experimental.preprocessing as pp

class LayerCustom_LayerCustom_1Keras(tf.keras.layers.Layer, PerceptiLabsVisualizer):
    def build(self, input_shape):
        # Stack the augmentation layers into a single pipeline
        self.preprocess = tf.keras.Sequential([
            pp.Rescaling(1./127.5, -1),     # [0, 255] -> [-1, 1]
            pp.RandomContrast(0.1),
            pp.RandomZoom(0.1, 0.1),
            pp.RandomTranslation(0.1, 0.1),
            pp.RandomFlip(),
            pp.RandomRotation(0.1)
        ])

    def call(self, inputs, training=True):
        """ Takes an image tensor and applies the augmentation pipeline to it """
        input_ = inputs['input']
        # Pass the training flag through so augmentation only runs during training
        output = self.preprocess(input_, training=training)
        self._outputs = {
            'output': output,
            'preview': output,
        }
        return self._outputs

    def get_config(self):
        """Any variables belonging to this layer that should be rendered in the frontend.

        Returns:
            A dictionary with tensor names for keys and picklable values.
        """
        return {}

    @property
    def visualized_trainables(self):
        """ Returns two tf.Variables (weights, biases) to be visualized in the frontend """
        return tf.constant(0), tf.constant(0)

class LayerCustom_LayerCustom_1(Tf2xLayer):
    def __init__(self):
        super().__init__(
            keras_class=LayerCustom_LayerCustom_1Keras
        )
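
One thing worth knowing: the Random* layers are identity functions at inference time, which is why we pass the training flag through in call(). A quick standalone sanity check (assuming TF 2.5; the 2×2 image is just dummy data):

import numpy as np
import tensorflow as tf
import tensorflow.keras.layers.experimental.preprocessing as pp

flip = pp.RandomFlip()
x = np.arange(4, dtype="float32").reshape(1, 2, 2, 1)
print(flip(x, training=False).numpy().ravel())   # always [0. 1. 2. 3.]: no-op at inference
print(flip(x, training=True).numpy().ravel())    # randomly flipped during training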

By doing it this way, new samples are created on the fly and will be different every epoch. So if we want more samples, we just train for more epochs :slight_smile:


Have you tried whether just disconnecting “input” and “LayerCustom_1” and then reconnecting “input” and “Convolution_1” would keep the model’s training intact? (I’m expecting it not to work)

I am guessing that the direction would be to have a data augmentation step implemented that covers more bases than PerceptiLabs currently allows for.

Personally, I have generated additional data by using TensorFlow separately to create an augmented dataset, which kind of sidesteps the need for the augmentation layer. It would help loads if PerceptiLabs had the functionality to build your own augmentation function for the training process.
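
Something along these lines, for reference (just a minimal sketch rather than my exact code; TF 2.5 assumed, and the paths, image size and sample count are placeholders):

import os
import tensorflow as tf
import tensorflow.keras.layers.experimental.preprocessing as pp

augment = tf.keras.Sequential([
    pp.RandomFlip("horizontal"),
    pp.RandomRotation(0.1),
])

# Load the original images, augment them, and write the copies to disk
ds = tf.keras.preprocessing.image_dataset_from_directory("data/", image_size=(128, 128))
aug_ds = ds.map(lambda x, y: (augment(x, training=True), y))

os.makedirs("augmented", exist_ok=True)
for i, (image, label) in enumerate(aug_ds.unbatch().take(100)):
    tf.keras.preprocessing.image.save_img(f"augmented/img_{i}.png", image)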


Where to start!?!?!? @birdstream

Let’s begin with joy-joy feelings (that’s as close as I could get. No “Demolition Man” emojis??)

If there were upvotes rather than just favouriting, they should be piling up for this! Great stuff! Thank you

Specifics: TF 2.7? That was brave… and it worked with whatever CUDA setup you had for TF2.5? That inspires me to break the explicit dependency.

And I think this is a very helpful addition to the custom code info available. @danaf’s documentation for custom components here is nice and concise, but doesn’t go into all the details of other methods such as build() (or the use of visualized_trainables).

I was about to embark on a custom component, and with this all put together I am almost ready now.

Questions: why did you limit the amount of rotation? With the symmetries of random flipping (default is both horizontal and vertical, it says here), wouldn’t something more like π/2 (or π/4, can’t be bothered to check) be better, i.e. (per the documentation for the RandomRotation factor) RandomRotation(0.25)?
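
(For reference, the RandomRotation docs say a factor f means a random angle in [-f * 2π, f * 2π], so:)

import tensorflow.keras.layers.experimental.preprocessing as pp

pp.RandomRotation(0.1)    # up to ±0.1 * 360° = ±36°
pp.RandomRotation(0.25)   # up to ±90° (π/2), matching the flip symmetries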

Pondering

I’m curious about why the export doesn’t work… if everything else is OK I can’t think what might be blocking it.


That’s a good idea! I will test and see if that works :slight_smile: However, I do want to keep the Rescaling layer in this case. But of course that would be better and save some precious time :sweat_smile:

Hi @JulianSMoore! :slight_smile: I didn’t mess with the CUDA stuff at all, just installed TF 2.7, and have yet to discover any problems with the current CUDA setup. Of course, should I run into problems there (or at least problems that may indicate such), I’ll post that here :slight_smile:

Thanks for the tip about visualised trainables, I’ll definitely check that out!

And for the question: you’re absolutely right, I just put something in there and tested it :sweat_smile: I guess there are situations, though, when the training really doesn’t benefit from stronger augmentation and just takes longer. Like everything else, this is something one would fine-tune for whatever data the model is trained on.

Good news about the CUDA setup… I’ve been updating my links to key info in anticipation of a need/benefit of updating, but it’s nice to know it could be done incrementally.

I don’t even know what “visualised trainables” are :wink: Do let us know.

“I guess there are situations, though, when the training really doesn’t benefit from stronger augmentation and just takes longer.”

This is yet another facet of model training and generalisation: how more data can hurt - at least in the short term.

I’ve been doing a lot of digging into training past the initial validation minimum, through overfitting and into generalisation, and think there are some key ideas many people could benefit from… I might write something up.

And of course it should be even better when PL can add AdamW (with weight decay) - per the feature request we’ve both already upvoted


@birdstream,
This is amazing!!
I agree with @JulianSMoore that it’s strange that some exports (Gradio) work when others don’t; we will be looking into it :slight_smile:

@Taeivas, that’s a great point! We want to spend a lot of time next year focusing on customizability and shareability. We are going to start with letting you customize the loss function, but the pre-processing shouldn’t be far behind.
Here’s a list of everything we do want to open up for customization at some point:

  • Loss function
  • Training loop (for callbacks and other cool things like GANs)
  • Optimizer
  • Pre-processing
  • Data types (if you want to build an Object Detection type, for example)
  • Data loader (if you, for example, want to enable streaming)
  • Visualizations (much later, but likely to happen at some point)

The goal is that all of these should be easy to share with other people.


Oh, I just misread that as meaning the guide contained information about that at the time, which it apparently does not :see_no_evil:

Anyway, @robertl was kind enough to look into the issue, and there seems to be a compatibility problem between 2.7 and how PL rebuilds the model for export. But, luckily, the very same preprocessing layers are available in 2.5 (unbeknownst to me until about half an hour ago) in the tf.keras.layers.experimental.preprocessing (oh, that’s lengthy :sweat_smile:) module. So we can just run with that, and there’s no need to install 2.7.
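
If anyone wants to double-check that in their own environment, a quick sketch like this should run on TF 2.5:

import tensorflow as tf
import tensorflow.keras.layers.experimental.preprocessing as pp

print(tf.__version__)                     # e.g. 2.5.x
print(pp.RandomFlip, pp.RandomRotation)   # the layers exist under the experimental path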

Of course, I will change my guide accordingly.


Thank you @robertl! And also, thank you for looking into the issue.


Good to know the transforms etc. are also in TF2.5 - and thanks for the update: I can skip the dependency breakage :wink:


Sigh… checking in on the thread several days later, I only now realize my mistake in not changing the final code :see_no_evil: NOW it should be right :sweat_smile: