FME Hub user mkriger has just uploaded a new transformer to the Hub.

SupervisedImageClassificator

Overview

The SupervisedImageClassificator is an FME Custom Transformer for performing supervised image classification on raster datasets, such as satellite or aerial imagery, using machine learning. It automates the classification of pixels based on input raster bands and training polygons, producing a classified raster as output, and it lets users train their own machine learning models within FME and apply them to their imagery.

Requirements

R installed: Ensure that R is installed on your system.

R Interpreter in FME: R must be integrated into FME by configuring the RCaller to use the installed R executable as its interpreter.

R Packages:

caret: Used for training machine learning models (install with install.packages("caret")).

terra: Used for raster data manipulation and analysis (install with install.packages("terra")). A quick setup check is sketched below.
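
As a rough illustration (not part of the transformer itself), both packages can be installed and loaded from an R console like this:

    # One-time setup: install the two packages if missing, then load them
    pkgs <- c("caret", "terra")
    missing <- pkgs[!pkgs %in% rownames(installed.packages())]
    if (length(missing) > 0) install.packages(missing)
    library(caret)
    library(terra)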

Inputs and Outputs

Inputs

Red: The red raster band from the input imagery. This band is typically part of a multispectral image stack and contributes to the classification process.

TrainingPolygons: Pre-defined vector polygons representing different classes in the dataset. These polygons must have:

A class label attribute: A string attribute describing the category (e.g., "water," "forest").

A class ID attribute: An integer attribute used by the machine learning algorithm to identify the classes.

AOI (Area of Interest): A vector layer that defines the specific area to be classified. This allows you to focus the classification on a particular region of interest within the image.

Infrared: The infrared raster band from the input imagery, which is often used in remote sensing to highlight vegetation or other spectral characteristics.

OtherBands: Any additional bands in the raster dataset that can be used as predictors for the classification (e.g., blue, green, etc.).

Outputs

Classification: The classified raster image in which each pixel is assigned to one of the categories defined by the training polygons. Each pixel carries a class ID that corresponds to one of the categories identified during model training.

Legend: A lookup table that maps class IDs to their corresponding class labels (e.g., "1 = Water," "2 = Forest"). This helps in interpreting the classified raster.

TrainingData: This output contains the training data used during the classification process, including both the training polygons and the pixel values associated with each class. It can be saved and reused for future model training or analysis.

Workflows

Read and Adjust Raster Data:

The input raster bands (e.g., red, infrared, and other bands) are read into the transformer. These bands may be pre-processed or adjusted (e.g., clipped to the AOI) before being stacked for classification.
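
As a rough sketch of what this step does internally (file names and paths are placeholders, not the transformer's actual parameters), reading and clipping the bands with terra might look like:

    library(terra)
    # Read individual bands (paths are placeholders)
    red <- rast("red_band.tif")
    nir <- rast("infrared_band.tif")
    aoi <- vect("aoi.shp")
    # Clip each band to the area of interest
    red <- mask(crop(red, aoi), aoi)
    nir <- mask(crop(nir, aoi), aoi)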

Create Predictors:

Predictors (or features) are derived from the raster bands. These predictors will be used as inputs for the machine learning model. For example, indices like NDVI (Normalized Difference Vegetation Index) can be calculated from the red and infrared bands to improve classification accuracy.
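
For example, continuing the sketch above, NDVI can be derived from the red and near-infrared bands:

    # NDVI = (NIR - Red) / (NIR + Red), ranging from -1 to 1
    ndvi <- (nir - red) / (nir + red)
    names(ndvi) <- "ndvi"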

Build Raster Stack:

The individual raster bands are stacked together to form a multi-band dataset that includes all relevant spectral information. This raster stack serves as the input for the classification model.
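
Continuing the same sketch, stacking the bands and derived predictors into a single multi-layer raster could look like:

    # Combine the bands and the NDVI layer into one multi-layer SpatRaster
    predictors <- c(red, nir, ndvi)
    names(predictors) <- c("red", "nir", "ndvi")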

Process Training Polygons:

The training polygons are processed to extract pixel values from the raster stack. These pixel values, along with the class IDs, form the training dataset, which will be used to train the machine learning model.

Generate Training Data:

Pixel values from each band, corresponding to the training polygons, are compiled into a dataset. This dataset is used to train the machine learning model, which learns to associate pixel values with class labels.
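
A minimal sketch of this extraction step (the path and attribute names such as class_id are assumptions, not fixed parameter names):

    # Read the training polygons
    train_polys <- vect("training_polygons.shp")
    # Extract the stacked pixel values under each polygon; the ID column
    # records which polygon each pixel came from
    samples <- extract(predictors, train_polys)
    samples$class_id <- train_polys$class_id[samples$ID]
    samples$ID <- NULL
    samples <- na.omit(samples)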

Train Model and Predict in RCaller:

The transformer calls an R script through the RCaller to train a supervised machine learning model on the training data (e.g., Random Forest or Support Vector Machine; currently only Random Forest is supported). Once the model is trained, it is used to predict class labels for the entire raster stack within the AOI.
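
In R, the core of such a script might look roughly like this (a sketch only, continuing the snippets above; caret's "rf" method also requires the randomForest package):

    library(caret)
    # Train a Random Forest classifier on the sampled pixels
    samples$class_id <- as.factor(samples$class_id)
    fit <- train(class_id ~ ., data = samples, method = "rf")
    # Apply the trained model to every pixel of the raster stack
    classified <- predict(predictors, fit, na.rm = TRUE)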

Visualize Prediction:

The output raster, containing predicted class labels for each pixel, is visualized. This raster can be further analyzed or exported for use in mapping and reporting. A legend can also be created to help users interpret the classification results.
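
To round out the sketch, the prediction can be plotted and written out, together with a simple lookup table mapping class IDs to labels (the labels here are only examples):

    # Simple legend: class IDs and their labels (example values)
    legend <- data.frame(class_id = c(1, 2), class_label = c("Water", "Forest"))
    plot(classified, main = "Supervised classification")
    writeRaster(classified, "classification.tif", overwrite = TRUE)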

Benefits

Train Your Own Model:

Users can provide their own training data and train a machine learning model directly within FME. This allows for flexibility and customization of the classification process to suit specific datasets and objectives.

Do Your Own Classification:

Once the model is trained, users can apply it to classify their own raster datasets, enabling them to perform supervised image classification without needing external software or tools.

Re-use Your Model:

The training data and the machine learning model can be saved and reused for future classifications. This feature allows users to maintain consistency across multiple projects or datasets by applying the same classification model.



Would you like to know more? Further details are available on the SupervisedImageClassificator page on the FME Hub.