Top Five TensorFlow Issues Solved by PerceptiLabs

TensorFlow is a powerful, open-source API for building ML models, but it does have shortcomings. In our blog "An Overview of TensorFlow and how PerceptiLabs Makes it Easier," we discussed how TensorFlow works and how PerceptiLabs' GUI and visual API make it easier to build TensorFlow models. Here we explore five common challenges that ML practitioners face with TensorFlow and how we've solved them with PerceptiLabs.

The TensorFlow challenges we’ll review are:

  1. Lack of Visibility into the Model Architecture
  2. Extra Code (and time!) Needed to Visualize the Results
  3. Inefficient Tracking and Comparison of Model Performance
  4. Long Iterations to Experiment With Settings
  5. Lack of a Scalable Infrastructure

1. Lack of Visibility into the Model Architecture

As any developer can attest, working in a pure-code environment forces you to mentally visualize what's happening. This is compounded when working with TensorFlow because the purpose of the API is to construct a graph of operations through which tensors flow (hence the name TensorFlow). As a result, it can be difficult to get a clear view of a model's architecture without some sort of diagram, which makes TensorFlow a prime candidate for visualization tools.
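To illustrate, here is a minimal sketch (assuming TensorFlow 2.x; the layer sizes are illustrative) showing that even for a small Keras model, the shape of each tensor flowing through the graph is invisible in the code itself until you explicitly inspect the built model:

```python
# A small CNN defined in pure code: nothing in the layer definitions
# tells you the intermediate tensor shapes -- you must build the model
# and then ask for them.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Only after the model is built can you inspect per-layer output shapes.
for layer in model.layers:
    print(layer.name, layer.output.shape)
```

Tools like `model.summary()` help, but they are a static, text-based view that you must remember to call; they don't show the data itself being transformed.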

Figure 1: PerceptiLabs' modeling tool showing visualizations for a VGG16 CNN that performs image recognition to find tumors using brain scans. Here you can immediately see the output of the CNN along with its dense layers and final prediction.

PerceptiLabs solves this by automatically visualizing the model and its metrics for you. During modeling, training, and testing, the visualizations provide instant feedback, as shown in Figure 1. This means you can always see the model's dimensions and visual transformations on a per-Component basis. PerceptiLabs makes this possible by re-running your model on the first sample of your training data each time you update the model. This functionality separates the modeling process from the training/testing phases, which allows for quick iteration.

2. Extra Code (and time!) Needed to Visualize the Results

With pure-code approaches like TensorFlow, setting up boilerplate visualizations is time-consuming and may require unnecessary iterations. This typically involves embedding code within the model and then trying to decipher the output, which may be limited to text-based debug statements or simple graphs. On top of this, the whole model must be run before you can view the output, or even verify that the output code has been set up correctly.
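The kind of boilerplate involved can be sketched as follows (assuming TensorFlow 2.x and matplotlib; the model and synthetic data are illustrative). Note that the plot only exists after the entire training run completes:

```python
import numpy as np
import tensorflow as tf
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs anywhere
import matplotlib.pyplot as plt

# Illustrative synthetic data standing in for a real dataset.
x = np.random.rand(64, 4).astype("float32")
y = np.random.randint(0, 2, size=(64,))

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# The visualization can only be produced once the full run has finished.
history = model.fit(x, y, epochs=3, verbose=0)
plt.plot(history.history["loss"], label="loss")
plt.plot(history.history["accuracy"], label="accuracy")
plt.legend()
plt.savefig("training_curves.png")
```

If the plotting code contains a typo (a wrong dictionary key, say), you only discover it after training has already finished, forcing another full run.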

PerceptiLabs makes this easier by eliminating the need to embed code for debug statements, graphs, and other output. Instead, PerceptiLabs' components and rich statistics windows provide all the information required to understand each transformation of the model and how the model is performing.

Figure 2: Statistics windows in PerceptiLabs, updated in real-time as the model is training.

3. Inefficient Tracking and Comparison of Model Performance

It's common to develop multiple models to experiment and try different settings, and then compare the results to see which performs the best. However, since TensorFlow's API is focused on model creation, it doesn't provide facilities for keeping track of models or comparing their performance. It’s up to you to provide this through some sort of application, script, etc. And if you want to run multiple models at the same time, you need to set up multiple development environments.
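The tracking layer you end up hand-rolling might look something like the following sketch. None of these names come from the TensorFlow API, and the accuracy figures are purely hypothetical; it's bookkeeping you must write and maintain yourself:

```python
import time

class RunLog:
    """Hand-rolled per-model tracking: status, duration, and metrics."""
    def __init__(self):
        self.runs = {}

    def start(self, name):
        self.runs[name] = {"status": "training",
                           "started": time.time(),
                           "metrics": {}}

    def finish(self, name, **metrics):
        run = self.runs[name]
        run["status"] = "done"
        run["duration"] = time.time() - run["started"]
        run["metrics"].update(metrics)

    def best(self, metric):
        done = {n: r for n, r in self.runs.items()
                if r["status"] == "done"}
        return max(done, key=lambda n: done[n]["metrics"][metric])

log = RunLog()
log.start("vgg16_brain_scans")
log.finish("vgg16_brain_scans", accuracy=0.91)  # hypothetical number
log.start("resnet_variant")
log.finish("resnet_variant", accuracy=0.88)     # hypothetical number
print(log.best("accuracy"))
```

And this still only covers one machine; running the models concurrently means duplicating your development environment as well.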

PerceptiLabs lets you develop, train, and test multiple models at the same time. To help manage this, PerceptiLabs' ModelHub screen lists all models that you're working on in PerceptiLabs along with their current training status, training duration, test history, export status, and last modification date/time, as shown in Figure 3.

Figure 3: The ModelHub in PerceptiLabs lists all of your current models and their training status.

Using the ModelHub, you get a bird's-eye view on the state of your models, all on one screen. And soon, PerceptiLabs will also include version control functionality, so you can more easily track the history and changes to your models.

4. Long Iterations to Experiment With Settings

One of the most common tasks during modeling is to experiment with different settings (hyperparameters). However, it's not always obvious which line of TensorFlow code you need to change to modify a setting, or what effect a change will have on the model. You also need to re-run the whole model before you can determine what effect those changes had, and doing so can take a long time with large datasets.
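This cost can be sketched as follows (assuming TensorFlow 2.x; the model and synthetic data are illustrative). A hyperparameter like the learning rate is buried inside the compile call, and measuring the effect of each value means repeating the entire training run:

```python
import numpy as np
import tensorflow as tf

# Illustrative synthetic regression data.
x = np.random.rand(64, 4).astype("float32")
y = np.random.rand(64, 1).astype("float32")

losses = {}
for lr in (0.01, 0.001):  # each setting forces a full retrain
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(4,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
                  loss="mse")
    history = model.fit(x, y, epochs=2, verbose=0)
    losses[lr] = history.history["loss"][-1]

print(losses)
```

With a toy dataset this loop is instant; with a real dataset, each pass through the loop can mean hours of training just to evaluate one setting.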

PerceptiLabs' Components surface all of these settings in the GUI, so each layer's available settings are clearly laid out. And since PerceptiLabs re-runs your model on the first sample of your training data as you update it, you get instant visual feedback on the effect of your changes.

5. Lack of a Scalable Infrastructure

Finally, training ML models can require a lot of processing power, especially when running multiple models concurrently. Setting up a scalable infrastructure to support this can be tricky.

PerceptiLabs sets up TensorFlow-based ML environments behind the scenes for you. And our enterprise and (upcoming) cloud versions automatically start and scale these instances for you.

When you use PerceptiLabs, these five challenges become a thing of the past. This leaves you free to focus on modeling while PerceptiLabs takes care of the TensorFlow details.

Get Started Today!

Check out our Quickstart Guide for more information on how you can get started with PerceptiLabs.