Low-Code: The Best Approach for TensorFlow
PerceptiLabs adopted a low-code approach right from the start to increase your DL workflow productivity, provide tools for blazingly fast modeling, and get you to a working model sooner. This blog explores why low-code is so effective for working with TensorFlow.

According to this Gartner report, 70% of new applications developed by enterprises will use low-code application platforms (LCAP) by 2025. And for good reason: as the report states, low-code enables businesses to quickly deliver new solutions and modernize business capabilities.
This doesn't come as a surprise to us. Right from the start, we adopted a low-code approach to increase your deep learning (DL) workflow productivity, provide tools for blazingly fast modeling, and get you to a working model sooner. PerceptiLabs' low-code approach also makes DL less intimidating for practitioners of different backgrounds and technical capabilities.
PerceptiLabs' visual API eliminates the complexity of writing raw TensorFlow code by encapsulating pre-generated code in Components that you connect together. You can then adjust settings through the GUI and optionally modify Component code. Let's take a closer look at these low-code features.
Model Generation
One of the most effective ways to create a new DL model is to start with a good baseline model built around an existing dataset. This gets you up and running faster, and can even be done by non-technical DL practitioners. That's why PerceptiLabs generates models for you. You can start with your own dataset or use one of the publicly available datasets in our Dataset Garden, shown in Figure 1:

Alternatively, you can import an existing model from our Model Garden.
When using your own dataset, simply map your data to your labels via a CSV file, and then load it, along with the data samples, into PerceptiLabs. If you use a dataset from the Dataset Garden, PerceptiLabs takes care of this for you.
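For illustration, here's a minimal sketch of what such a CSV mapping might look like and how it could be read; the column names and file layout here are assumptions, not PerceptiLabs' actual format:

```python
import csv
import io

# Hypothetical CSV mapping data samples (image paths) to labels.
csv_text = """image_path,label
images/cat_01.png,cat
images/dog_01.png,dog
images/cat_02.png,cat
"""

def load_mapping(csv_file):
    """Read (path, label) pairs from a CSV file object."""
    reader = csv.DictReader(csv_file)
    return [(row["image_path"], row["label"]) for row in reader]

pairs = load_mapping(io.StringIO(csv_text))
print(pairs[0])  # ('images/cat_01.png', 'cat')
```

With a mapping like this loaded, a tool can locate each sample on disk and pair it with its label automatically.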
Our Data Wizard, shown in Figure 2 below, configures how your data will be used by the model and provides optional pre-processing settings (e.g., to resize images):
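To give a sense of what such pre-processing involves under the hood, here's a hedged sketch of resizing and normalizing a batch of images with TensorFlow; the target size and the [0, 1] scaling are assumptions, not the Data Wizard's actual defaults:

```python
import numpy as np
import tensorflow as tf

def preprocess(images, target_size=(224, 224)):
    """Resize a batch of images and scale pixel values to [0, 1]."""
    images = tf.image.resize(images, target_size)  # bilinear by default
    return tf.cast(images, tf.float32) / 255.0

# A toy batch of four 32x32 RGB images with raw 0-255 pixel values.
batch = np.random.randint(0, 256, size=(4, 32, 32, 3)).astype("float32")
out = preprocess(batch)
print(out.shape)  # (4, 224, 224, 3)
```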

This provides the following benefits:
- A declarative workflow where you focus up front on defining and solving a DL problem, rather than on the underlying code infrastructure. You end up with a fully working model in which all of the code has been written for you.
- The model includes good starting-point settings and can serve as a solid foundation for Transfer Learning.
- Your model's code and project settings are all stored in one place, easily accessible through a single GUI application.
You can rapidly instantiate new models for a given dataset through the GUI (e.g., to compare their performance as you adjust each model), whereas with a traditional pure-code approach, you'd have to copy, modify, and re-run your code to accomplish this.
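To illustrate the Transfer Learning foundation mentioned above, here's a rough sketch of freezing a backbone model and training only a new classification head; MobileNetV2, the input size, and the 10-class head are illustrative assumptions (weights=None is used here only to avoid downloading pretrained weights):

```python
import tensorflow as tf

# Backbone acting as the reusable foundation; normally you would load
# pretrained weights (e.g., weights="imagenet") instead of None.
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights=None)
base.trainable = False  # freeze the backbone; train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),  # 10 example classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
print(model.output_shape)  # (None, 10)
```

Only the small head is trained, which is why starting from a good baseline gets you to a working model so much faster.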
Code Partitioning
Developers divide and conquer problems by partitioning their code into logical groups or modules. PerceptiLabs' Components build on this idea by encapsulating TensorFlow code for common DL modeling constructs like neural networks, layers, and operations. Components are connected together via their Input and Output sockets to visually program the flow and transformations of data end to end. Non-programmers can treat the model as a flow-chart, while programmers can quickly locate code for specific parts of the model, rather than wading through a repository of source code files.
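As a rough analogue of how a Component encapsulates TensorFlow code behind Input and Output sockets, here's a sketch using a custom Keras layer; the ConvBlock name and its internals are illustrative, not PerceptiLabs' actual generated code:

```python
import tensorflow as tf

class ConvBlock(tf.keras.layers.Layer):
    """A self-contained block hiding its TensorFlow code behind a
    simple input/output interface, much like a Component."""
    def __init__(self, filters, **kwargs):
        super().__init__(**kwargs)
        self.conv = tf.keras.layers.Conv2D(filters, 3, padding="same")
        self.pool = tf.keras.layers.MaxPooling2D()

    def call(self, inputs):
        return self.pool(tf.nn.relu(self.conv(inputs)))

# "Connecting" blocks: one block's output feeds the next one's input.
x = tf.keras.Input(shape=(32, 32, 3))
y = ConvBlock(16, name="block_1")(x)
y = ConvBlock(32, name="block_2")(y)
model = tf.keras.Model(x, y)
print(model.output_shape)  # (None, 8, 8, 32)
```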
Each Component provides instant visualization showing how it transforms its input data, so you can see the effect of each change on a granular, per-Component basis. PerceptiLabs facilitates this by constantly re-running the model on your first data sample as you make changes, which avoids having to re-run the whole model before you can see results. You can then decide when to train the model on the whole dataset. By separating out the training process, you can model blazingly fast.
PerceptiLabs' model visualization can unify your DL team, because it provides a common, visual language around which stakeholders of different DL skill levels can collaborate. Moreover, it contributes to the explainability of your model, which is beneficial both during design and post-deployment.
GUI Features and Optional Coding
Small changes, like adjustments to model settings, can be made through the GUI. This means you don't have to write code if you don't want to, and in most cases you won't need to. As a result, non-technical team members can easily experiment and iterate on the model without diving into its code.
For more flexibility you can modify Component code or write your own Custom Component to extend PerceptiLabs. Non-developers can author TensorFlow models visually, while aspiring developers can optionally peek at the TensorFlow code generated by PerceptiLabs to learn its structure. These features can help bridge the various perspectives that different team members bring to the table.
The model export and deployment procedure is also accessible to everyone. Unlike pure-code approaches, where exports are performed programmatically through TensorFlow, PerceptiLabs simplifies this into a few button clicks. You can export to TensorFlow or optimize for TensorFlow Lite. You can also deploy to Gradio and FastAPI targets, which generate both the model and a fully working sample app built around it. Over time, we’ll add new targets, such as our upcoming support for OpenVINO™. You can always let us know what you’d like to see by posting to our Feature Requests forum.
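For context on what a TensorFlow Lite export involves when done programmatically, here's a minimal sketch using TensorFlow's standard converter API; the toy model here is just a stand-in:

```python
import tensorflow as tf

# A tiny stand-in model; in practice this would be your trained model.
inputs = tf.keras.Input(shape=(4,))
outputs = tf.keras.layers.Dense(2, activation="softmax")(inputs)
model = tf.keras.Model(inputs, outputs)

# Convert the Keras model to a TensorFlow Lite FlatBuffer.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_bytes = converter.convert()  # serialized model, ready to write to disk
print(type(tflite_bytes))  # <class 'bytes'>
```

These are the steps a GUI-based export collapses into a few button clicks.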
Debugging Features
We have some cool features for debugging models which align with our low-code approach.
In the Modeling Tool, Components with invalid settings (e.g., incorrect input dimensions) are highlighted to reveal model design problems. Similarly, Component visualizations are a great way to visually isolate transformations that need to be modified. And for more technical DL practitioners, the Code Editor instantly identifies and highlights compile errors.
Take a Low-Code Approach to Your Next DL Project
Ready to increase your DL workflow productivity and get to a working model blazingly fast?
- Follow our Quickstart Guide to get up and running with our free version of PerceptiLabs.
- Then follow our Basic Image Recognition tutorial, which is a great hello-world style model to try out PerceptiLabs' low-code approach.