Evaluate View

After you've trained your model, you'll be given the option to run tests on it in the Evaluate View. Alternatively, you can navigate directly to the Evaluate View at any time to run tests on any trained model.

Running a test performs inference with the model on the data you allocated to the test partition via the Data Settings (i.e., data the model hasn't seen during training and validation).
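
For orientation, here is a minimal, hypothetical sketch of what this amounts to in plain TensorFlow (which PerceptiLabs builds on): a trained model is loaded and inference is run only on the held-out test slice of the data. The file path, array names, and split percentages below are assumptions for illustration, not PerceptiLabs defaults.

```python
import numpy as np
import tensorflow as tf

# Hypothetical dataset standing in for data loaded via the Data Wizard.
images = np.random.rand(1000, 28, 28, 1).astype("float32")
labels = np.random.randint(0, 10, size=1000)

# Illustrative 70/20/10 train/validation/test split; only the last slice is used for testing.
test_start = int(0.9 * len(images))
x_test, y_test = images[test_start:], labels[test_start:]

# Load a trained model (the SavedModel path here is an assumption).
model = tf.keras.models.load_model("my_exported_model")

# Inference on the test partition only, i.e., data the model never saw
# during training or validation.
predictions = model.predict(x_test)
```

Keeping the test partition untouched during training is what makes the resulting metrics an honest estimate of how the model will behave on new data.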

PerceptiLabs' Evaluate View allows you to run tests on one or more trained models. Its main components are the following:

  1. New Test: click to configure and run a new test.

  2. Labels Classification Metrics Table (shown for classification models): displays the following metrics (see the sketch below this list for how they can be computed):

    1. Categorical accuracy: the accuracy for each category, averaged over all categories.

    2. Precision: the accuracy of the positive predictions (i.e., how many predicted positives are truly positive).

    3. Recall: the percentage of actual positives that were found (i.e., not misclassified as negatives).

    4. Top K Categorical Accuracy: frequency of the correct category among the top K predicted categories.

  3. Confusion Matrix (shown for classification models): displays an interactive confusion matrix for the label predictions.

Note: The classification metrics table and confusion matrix are shown only for classification models; other types of models display other statistics. See Types of Tests for more information.
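
For reference, the following is a rough sketch of how these metrics and the confusion matrix can be computed with standard TensorFlow/Keras metric classes. It is not how PerceptiLabs implements them internally, and the label/score tensors, the number of classes, and K = 3 are toy assumptions.

```python
import tensorflow as tf

num_classes = 10
y_true = tf.constant([2, 0, 3, 1])                # toy ground-truth labels for the test partition
y_scores = tf.random.uniform((4, num_classes))    # toy prediction scores (e.g., softmax outputs)
y_true_onehot = tf.one_hot(y_true, num_classes)
y_pred_labels = tf.argmax(y_scores, axis=-1)
y_pred_onehot = tf.one_hot(y_pred_labels, num_classes)

# Categorical accuracy: fraction of samples whose highest-scoring category matches the label.
cat_acc = tf.keras.metrics.CategoricalAccuracy()
cat_acc.update_state(y_true_onehot, y_scores)

# Precision and recall; with one-hot inputs these are micro-averaged over all classes.
precision = tf.keras.metrics.Precision()
recall = tf.keras.metrics.Recall()
precision.update_state(y_true_onehot, y_pred_onehot)
recall.update_state(y_true_onehot, y_pred_onehot)

# Top-K categorical accuracy: how often the true category is among the K highest scores.
top_k = tf.keras.metrics.TopKCategoricalAccuracy(k=3)
top_k.update_state(y_true_onehot, y_scores)

# Confusion matrix of true vs. predicted labels.
conf_matrix = tf.math.confusion_matrix(y_true, y_pred_labels, num_classes=num_classes)

print(cat_acc.result().numpy(), precision.result().numpy(),
      recall.result().numpy(), top_k.result().numpy())
print(conf_matrix.numpy())
```

In practice you would feed `update_state` the ground-truth labels from the test partition and the model's predictions, then read each metric off with `result()`.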
