Today's Top Four Machine Learning Modeling Challenges and How PerceptiLabs Solves Them

Machine learning (ML) modeling is challenging – we know from experience! From wrangling data to choosing an appropriate ML algorithm to debugging and iterating on it, creating or updating a model can be a daunting task.

Solving these types of issues is what fueled us to create PerceptiLabs, a visual API and GUI for TensorFlow, and it’s what continues to drive our innovation and passion for making the ML modeling process easier, quicker, and even more fun.

In this blog, we want to share what we think are the top four ML modeling challenges today and how they can make your life difficult, namely:

  • Visualizing the Model Architecture
  • Debugging Models
  • Working with Hyperparameters and Low-Level Code
  • Working With Large and Sophisticated Models

Then we'll show you how PerceptiLabs solves them.

Figure 1: The PerceptiLabs visual ML modeling tool, in all its glory (Image source: PerceptiLabs).

Challenge #1: Visualizing the Model Architecture

Lots of frameworks and tools have been devised to help people build and train their ML models, but most of these still fall short when it comes to visualizing how the model is architected.

Keras, for example, offers a high-level API built on top of TensorFlow, but despite its abstractions, Keras is still a programmer-oriented framework, which inherently limits your ability to visualize models. Here is what it currently provides to visualize a model:

  • A plot_model() utility to render the model's graph to an image file.
  • Model.summary() to print a text-based summary of the model to the console window.
  • Simple print() calls to output debugging information to the console window.

While all three methods are useful, they generally don’t provide enough information for you to really understand your model's architecture. In addition, all three methods must be invoked at runtime. In other words, you must add code to your script to invoke them, and the whole script must be run, often to completion, before output is generated. And if you are training a large model, then it can be a long wait for that output.
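To make this concrete, here's a minimal sketch of those three approaches (assuming TensorFlow 2.x; plot_model() also requires the pydot and graphviz packages to be installed):

import tensorflow as tf

# A tiny model, just to illustrate the three visualization methods above.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# 1. Render the model's graph to an image file.
tf.keras.utils.plot_model(model, to_file="model.png", show_shapes=True)

# 2. Print a text-based summary of the layers and parameter counts.
model.summary()

# 3. Ad-hoc print() calls, e.g. to inspect a layer's weight shapes.
print(model.layers[0].get_weights()[0].shape)

Note that none of this output appears until the script actually executes these lines.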

Another tool which is gaining popularity is TensorBoard, TensorFlow's visualization toolkit.

Figure 2: A data flow graph view in TensorBoard (Image Source: Google Developers).

TensorBoard provides a browser-based UI for tuning, visualizing, and training models, and includes a rich set of views like histograms, scalars, and distributions. It also has a data flow graph view of the model, though some report that the graph can be difficult to follow, especially for larger models. Like Keras, TensorBoard must be invoked at runtime via the programmer's script and requires that the model be trained before TensorBoard can display output. This output may not provide the necessary level of granularity to identify where specific issues are being introduced in the model.
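For reference, wiring TensorBoard into a Keras training run typically looks something like this minimal sketch (the tiny model and random data are stand-ins, included only so the snippet runs on its own):

import numpy as np
import tensorflow as tf

# A stand-in model and dataset, just to show the TensorBoard wiring.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation="softmax", input_shape=(784,)),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
x_train = np.random.rand(100, 784).astype("float32")
y_train = np.random.randint(0, 10, size=(100,))

# Write logs (scalars, histograms, the graph) for TensorBoard to visualize.
tb_callback = tf.keras.callbacks.TensorBoard(log_dir="logs", histogram_freq=1)
model.fit(x_train, y_train, epochs=5, callbacks=[tb_callback])

TensorBoard itself is then launched separately and pointed at the log directory:

$ tensorboard --logdir logs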

In addition to all of this, thinking of a model in terms of a data flow graph may not be the right level of abstraction for all users. This is why PerceptiLabs introduced the concept of a visual API composed of Components which wrap the underlying TensorFlow code into higher-level abstractions like data sources, ML algorithms, math operations, etc. In doing so, as an ML practitioner, you can focus on how your data will be transformed and trained, without having to worry about micro-level details such as how tensors flow into and out of operations.

This is why model visualization is at the heart of PerceptiLabs' visual modeling workflow. By building the model through PerceptiLabs' drag-and-drop interface, the whole model is always visible, without having to invoke and wait for specialized output calls. An additional benefit is that you can organize the layout of the model's Components in a way that makes sense to you, rather than relying on auto-generated graphs.

PerceptiLabs also provides real-time previews on a per-Component level. When Data Components are added to a model and their underlying data sources are specified, PerceptiLabs automatically runs the first data sample through the model. By doing this, PerceptiLabs is able to provide previews of each Component in near-real time to show how data is transformed by each Component that makes up the model. These previews also help to effectively self-document how the model is architected, such that a wide range of users can quickly and easily understand its various transformations.

Challenge #2: Debugging Models

Related to visualization is the need to debug models. Debugging models ranges from finding simple mechanical issues such as incorrect code syntax and missing or incorrect data dimensions, to logic and design issues like bad hyperparameter values or even using the wrong algorithm for a given use case.

Unfortunately, when working directly in programmatic frameworks, you're often limited to simple print() calls, which must be embedded in the code and whose output won't be visible until the script is run.
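For instance, shape debugging in a plain TensorFlow script often ends up looking like this hypothetical snippet, where every print() requires a full re-run of the script to see its output:

import tensorflow as tf

inputs = tf.keras.Input(shape=(28, 28, 1))
x = tf.keras.layers.Conv2D(32, 3, activation="relu")(inputs)
print("after conv:", x.shape)  # only visible once the script runs
x = tf.keras.layers.MaxPooling2D()(x)
print("after pool:", x.shape)  # ditto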

PerceptiLabs solves this by constantly running the model while it's being built. As Components are added, changed, or removed, PerceptiLabs runs the model and identifies errors and warnings, and catches any code exceptions. This information is immediately displayed for you in PerceptiLabs' Debug window and problematic lines of code are highlighted where possible.

From a visual standpoint, problematic Components are marked with an icon to quickly indicate a problem. Such problems can range from missing data to more complex issues such as incompatible or wrong data dimensions, which may require the insertion of new Components to perform intermediate transformations.
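A classic example of such a dimension issue in plain Keras code is feeding a convolutional layer's 4-D output straight into a Dense layer; the usual fix is an intermediate transformation such as a Flatten layer (a minimal sketch, not PerceptiLabs-generated code):

import tensorflow as tf

inputs = tf.keras.Input(shape=(28, 28, 1))
x = tf.keras.layers.Conv2D(32, 3, activation="relu")(inputs)
# Without this intermediate transformation, Dense(10) would be applied to
# each spatial position independently instead of the whole feature map.
x = tf.keras.layers.Flatten()(x)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)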

In Figure 3 below you can see that both Data Components in the model have some sort of issue (e.g., no data sources have been assigned), and that the Dense layer has a problem (e.g., missing input).

Figure 3: Screenshot showing problematic Components and the error pane in PerceptiLabs (Image source: PerceptiLabs).

PerceptiLabs' live previews, which can be seen in Figure 1 at the start of this article, also play a key role in debugging. By providing a per-Component preview, you can immediately see how each Component is transforming its input. This means you can quickly identify Components which don't seem to be producing the correct output, and traverse the graph to narrow down the specific Component that may be the culprit. This, in turn, can more quickly lead ML practitioners to the faulty hyperparameter values or to the underlying code which the Component wraps. In a programmatic framework, you would have to read through dozens or even hundreds of lines of code before you could identify the problem.

Challenge #3: Working with Hyperparameters and Low-Level Code

Getting your model's performance just right is of critical importance, and this requires that the model's hyperparameters be tweaked and tuned iteratively during the modeling process.

Using a programmatic framework directly to do this not only requires that the whole model be run before results are available, but also forces you to dig through code to find a given hyperparameter setting. This is something that less-technical users, or even advanced users who are new to a team, can find difficult. If this sounds similar to the debugging issues described above, that's because there is a lot of overlap.
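To see why, consider how the hyperparameters in a hypothetical training script typically end up scattered across several different calls:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu"),  # layer width set here...
    tf.keras.layers.Dropout(0.3),                   # ...dropout rate here...
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # ...learning rate here...
    loss="sparse_categorical_crossentropy",
)
# ...and the batch size in yet another call, e.g.:
# model.fit(x_train, y_train, batch_size=32, epochs=10)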

PerceptiLabs solves this by exposing all hyperparameters for a given Component through its Settings pane.

Figure 4: The Settings pane in PerceptiLabs where hyperparameter values are configured for a Component (Image source: PerceptiLabs).

A Component's hyperparameters are exposed to PerceptiLabs' GUI via the Component's init() method. When you tweak a setting, the code in the underlying init() method is automatically updated, the model is re-run, and the Component's live preview refreshes immediately.
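To illustrate the general pattern (a hypothetical sketch only, not PerceptiLabs' actual generated code), a Component might declare its hyperparameters roughly like this:

import tensorflow as tf

class DenseComponent:
    # Hypothetical Component wrapper: hyperparameters live in init(),
    # where a GUI could read them and rewrite them when settings change.
    def init(self, units=128, activation="relu"):
        self.layer = tf.keras.layers.Dense(units, activation=activation)

    def run(self, inputs):
        return self.layer(inputs)

# Editing a value in the Settings pane would correspond to rewriting
# the defaults above, e.g.:
component = DenseComponent()
component.init(units=256, activation="tanh")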

Of course, you need to start with some base set of hyperparameter values to experiment with and improve upon. But when first creating a model, it can be anyone's guess as to what these starting values should be. PerceptiLabs' Model Autoconfig functionality solves this by analyzing the model and identifying and assigning good hyperparameter values. PerceptiLabs also provides a number of model templates for common ML tasks and algorithms (e.g., image classification, linear regression, generative adversarial networks (GANs), etc.), complete with sample data, that can be used as a starting point. In combination with Model Autoconfig, these templates give you a good foundation from which to build more complex models.

At the code level, frameworks like TensorFlow can make it difficult to know where to begin, especially for those new to ML or those trying to learn the framework for the first time. For example, TensorFlow is composed of multiple layers of APIs. This means you need to decide the level of abstraction to work with, which can range from low-level constructs up to Keras' high-level design. In addition, constructs from some of the lower-level APIs can be used with higher-level APIs.

Figure 5: The platform components and various API layers that make up TensorFlow (Image source: Google Developers).
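As a rough illustration of those abstraction levels, here is the same dense transformation written once against the low-level API and once with a Keras layer (a minimal sketch):

import tensorflow as tf

x = tf.random.normal([8, 784])

# Low-level API: manage the weights and the math yourself.
w = tf.Variable(tf.random.normal([784, 64]))
b = tf.Variable(tf.zeros([64]))
y_low = tf.nn.relu(tf.matmul(x, w) + b)

# High-level Keras API: one layer call does the same job.
y_high = tf.keras.layers.Dense(64, activation="relu")(x)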

As mentioned previously, PerceptiLabs' visual API does the hard work by automatically generating TensorFlow code and wrapping it in high-level Components. Updating a Component's hyperparameters via the Settings pane will automatically update the Component's code, while advanced users can still easily access and customize the code directly.

Challenge #4: Working With Large and Sophisticated Models

Large and sophisticated models often mean more code, more hyperparameter values to tweak, and more debugging.

Using pure-code programmatic frameworks can require strong programming skills just to locate a specific aspect of the model, and this task becomes exponentially more complex as the model grows. A prime example is a U-Net model, such as the one used in our U-Net tutorial to enhance dark photos:

Figure 6: Example of a U-Net illustrating a fairly complex ML model (Image source: PerceptiLabs).

You can see here that a U-Net downsamples data via its contracting path and upsamples via its expansive path, while making use of skip connections. Trying to visualize this architecture from code alone can be nearly impossible, while running such a model just to tweak and debug it can be time consuming. Visualizing each level of downsampling and upsampling in such an architecture is of critical importance when trying to ensure the U-Net is working as intended.
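To get a feel for how much there is to track, here is a heavily simplified, hypothetical U-Net-style sketch with a single downsampling level, a single upsampling level, and one skip connection; a real U-Net repeats this pattern several times over:

import tensorflow as tf

inputs = tf.keras.Input(shape=(128, 128, 1))

# Contracting path: downsample.
c1 = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
p1 = tf.keras.layers.MaxPooling2D()(c1)                      # now 64x64

# Bottleneck.
b = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(p1)

# Expansive path: upsample and merge the skip connection.
u1 = tf.keras.layers.UpSampling2D()(b)                       # back to 128x128
m1 = tf.keras.layers.Concatenate()([u1, c1])                 # skip connection
outputs = tf.keras.layers.Conv2D(1, 1, activation="sigmoid")(m1)

model = tf.keras.Model(inputs, outputs)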

This is a case where PerceptiLabs' visualization really shines, as it takes care of all of these issues in one fell swoop:

Figure 7: A U-Net model in PerceptiLabs, complete with live previews for each Component. Imagine how difficult this would be to visualize and tune with pure code alone! (Image source: PerceptiLabs)

First off, the Components can be organized in a U shape, which makes it easy for users of all levels to immediately understand what type of architecture the model is based upon. Second, live previews play a critical role in letting you see the result at each level of downsampling and upsampling.

It's also very easy to add or remove levels by simply dragging in or removing Components and adjusting the connections accordingly, all without having to rewrite code. And in conjunction with live previews, you can quickly see whether adding more levels provides any benefit.

And finally, thanks to PerceptiLabs' rich set of training statistics, you can quickly experiment with model changes and compare the performance to past iterations.

Conclusion

ML modeling is inherently complex, and many of today's tools and frameworks haven't addressed the challenges that ML practitioners face. That's why we strongly believe in our visual approach to ML modeling: it solves so many of these problems while providing an efficient and comprehensive workflow.

If you haven't already checked it out, be sure to grab our free version of PerceptiLabs using:

$ pip install perceptilabs

$ perceptilabs

For additional information, check out our Quickstart Guide.