Use Case: Skin Cancer Classification
To accurately diagnose cancer cases, doctors need every possible tool at their disposal. In the case of skin cancer, the affected areas can often be visually examined without the need for invasive diagnostics typically involved with cancers inside the body.

Initiatives such as ISIC 2018 have challenged researchers to develop image analysis tools to automatically identify and diagnose certain skin cancers from dermoscopic images. Taking inspiration from this, we decided to build a machine learning (ML) model in PerceptiLabs to see what sort of accuracy we could achieve using image recognition techniques.
Dataset
To train our model, we used the Skin Cancer MNIST: HAM10000 dataset on Kaggle, which comprises images representing seven categories of pigmented lesions:
- Actinic keratoses and intraepithelial carcinoma / Bowen's disease (akiec)
- Basal cell carcinoma (bcc)
- Benign keratosis-like lesions: solar lentigines, seborrheic keratoses, and lichen planus-like keratoses (bkl)
- Dermatofibroma (df)
- Melanoma (mel)
- Melanocytic nevi (nv)
- Vascular lesions: angiomas, angiokeratomas, pyogenic granulomas, and hemorrhage (vasc)
Figure 1: Examples of images from the dataset.
We created a .csv file that maps each image to its classification label, which PerceptiLabs' Data Wizard uses to load the data. Below is a partial example of how the .csv file looks:
Example of the .csv file that maps images to the seven pigmented lesion labels for loading into PerceptiLabs.
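The full file ships with our pre-processed dataset on GitHub. If you are starting from the raw Kaggle download, a mapping file like ours can be generated from the dataset's metadata; the column names and output file name below are our own choices for illustration, not PerceptiLabs requirements:

```python
import pandas as pd

# Build an image-to-label .csv from the HAM10000 metadata shipped with the
# Kaggle dataset. HAM10000_metadata.csv provides an image_id column and a dx
# column holding one of: akiec, bcc, bkl, df, mel, nv, vasc.
meta = pd.read_csv("HAM10000_metadata.csv")

mapping = pd.DataFrame({
    "images": meta["image_id"] + ".jpg",  # e.g., ISIC_0027419.jpg
    "labels": meta["dx"],                 # the seven lesion categories
})

# Hypothetical output file name; point PerceptiLabs' Data Wizard at this file.
mapping.to_csv("skin_cancer.csv", index=False)
```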
Model Summary
Our model was built with just three Components:
Component 1: ResNet50 | include_top=No, input_shape=(224,224) |
Component 2: Dense | Activation=ReLU, Neurons=128 |
Component 3: Dense | Activation=Softmax, Neurons=7 |
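PerceptiLabs runs on top of TensorFlow, so the three Components above can be approximated in a few lines of Keras. The following is a minimal sketch under stated assumptions: the ImageNet weights, global average pooling, optimizer, and loss are our choices for illustration, not settings confirmed from the PerceptiLabs model.

```python
import tensorflow as tf

NUM_CLASSES = 7  # akiec, bcc, bkl, df, mel, nv, vasc

# Component 1: ResNet50 feature extractor without its classification head.
base = tf.keras.applications.ResNet50(
    include_top=False,
    weights="imagenet",           # assumption: ImageNet weights for transfer learning
    input_shape=(224, 224, 3),
    pooling="avg",                # assumption: global average pooling before the head
)

# Components 2 and 3: the classification head.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer="adam",                  # assumption
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
```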
Training and Results
With a training time of just under 15 minutes, we achieved a training accuracy of 96.04% and a validation accuracy of 79.79%. In the following screenshot from PerceptiLabs, you can see how the training accuracy ramped up mostly during the first five or so epochs, while validation accuracy remained fairly stable:
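For readers who want to reproduce a comparable run outside PerceptiLabs, the sketch below continues from the Keras model above. The 80/20 split, batch size, epoch count, and directory layout are all assumptions; only the accuracy figures quoted above come from our actual run.

```python
import tensorflow as tf

# Assumption: images sorted into one folder per label (hypothetical path);
# the .csv mapping above can be used to arrange the files this way.
train_ds, val_ds = tf.keras.utils.image_dataset_from_directory(
    "skin_cancer_images/",
    validation_split=0.2,        # assumption: 80/20 train/validation split
    subset="both",
    seed=42,
    image_size=(224, 224),
    label_mode="categorical",
    batch_size=32,               # assumption
)

history = model.fit(train_ds, validation_data=val_ds, epochs=10)
print(f"Final validation accuracy: {history.history['val_accuracy'][-1]:.2%}")
```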
Vertical Applications
In the realm of diagnosing skin cancer, a model like this could help automate the examination of skin conditions across different image sources (e.g., still photos or video streams) and notify doctors as to which cases may require a closer look.
Such a project could also serve medical students or practitioners looking to build next-generation, ML-based medical technology. The model itself could also be used as the basis for transfer learning to create additional models for detecting other types of skin conditions, or even other visible ailments, as sketched below.
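As a rough illustration of that transfer-learning path, the trained model's layers can be frozen and reused as a feature extractor for a new classification head. This is a minimal sketch, assuming the model above was saved to disk; the file name and the four-class example task are hypothetical.

```python
import tensorflow as tf

# Load the trained skin-lesion model (hypothetical file name).
pretrained = tf.keras.models.load_model("skin_lesion_model.h5")

# Reuse everything except the final softmax layer as a frozen feature extractor.
feature_extractor = tf.keras.Sequential(pretrained.layers[:-1])
feature_extractor.trainable = False

# Attach a new head for a different task, e.g., four other skin conditions (assumed).
new_model = tf.keras.Sequential([
    feature_extractor,
    tf.keras.layers.Dense(4, activation="softmax"),
])
new_model.compile(
    optimizer="adam",
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
```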
Summary
This use case is a simple example of how ML can be used to identify ailments using image recognition. If you want to build a deep learning model similar to this, run PerceptiLabs and grab a copy of our pre-processed dataset from GitHub.