SqueezeBrains SDK 1.18
Guides


The following guides explain some key aspects of how to use the SB library.

System Requirements Specification

SB Library

  • RAM:
    • runtime/inference: minimum 1 GByte (2 GBytes with deep learning modules)
    • training: minimum 8 GBytes, recommended 16 GBytes
  • CPU: X86 architecture, Intel or AMD
    Runtime/inference and training: the processor depends on the analysis speed required by the system. When choosing the processor, and in view of the trade-off between costs and benefits, prefer a higher maximum clock frequency over a higher number of cores.
    From SB SDK version 1.15.0, Intel CPUs can also be used by our OpenVino extension to optimize detection times in Deep Learning projects. Further information about the supported hardware is available at the following link: https://docs.openvino.ai/2024/about-openvino/release-notes-openvino/system-requirements.html .
  • Operating system:
  • Nvidia GPU: recommended for deep learning modules (but not necessary!).
    If you need an Nvidia GPU to reduce the inference or training time, it is mandatory to use an Nvidia GPU with CUDA support, e.g. the GeForce family.
    • runtime / inference: the best choice depends on the vision task. Even low-end Nvidia GPUs (e.g. GeForce GTX 980/1050) are enough if there are no strict time requirements. The inference time decreases with higher-performing GPUs. Minimum VRAM: 1 GByte
    • training: minimum GPUs with at least 4 GBytes of VRAM, recommended GPUs with at least 12 GBytes of VRAM, e.g. GeForce RTX 3080, GeForce RTX 3090.
  • Nvidia Video Driver: necessary to use GPU acceleration for deep learning modules. The minimum required driver versions are:
    • Windows: >= 452.39 (Windows 10: recommended < 5xx.xx to avoid possible slowdown at detection time)
      To get the driver version you can run the program nvidia-smi.exe from a console.
      If you installed the Nvidia driver in the default folder, you should find nvidia-smi.exe in the folder C:\Program Files\NVIDIA Corporation\NVSMI\
    • Linux: >= 450.80.02
      To get the version you can run the command nvidia-smi from terminal.
    For more information see https://docs.nvidia.com/deploy/cuda-compatibility/index.html
  • Intel GPU: used for detection only in deep learning modules (optional).
    runtime / inference: from SB SDK version 1.15.0 it is also possible to use Intel GPUs for detection. According to the OpenVino toolkit, both integrated and discrete GPUs are supported. Further information about the supported hardware is available at the following link: https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/system-requirements.html .
  • Intel Video Driver: required only to use OpenVino extension.
    Machines with integrated Intel GPUs usually already have the driver necessary to use the device installed. This is not the case for discrete GPUs, which sometimes require additional drivers.

SB GUI

  • RAM: minimum 4 GBytes, recommended 16 GBytes
  • CPU: X86 architecture, recommended CPUs: Intel Core i7 8700K, i7 8086K, i7-9700K
  • Operating system:
    • Windows 7 SP1, Windows 8.1, Windows 10, both 32 and 64 bit
    • Linux:
      • Debian >= 11
      • Fedora >= 32
      • Ubuntu >= 20.04
      • Mint >= 20
  • Browser: Chrome, Firefox, Opera, Internet Explorer. SB_GUI doesn't work well with Edge.

Installation

The installation of the SB Library and of the SB GUI is performed with the self-installing package downloaded from the FaberVision website (https://www.fabervision.com/). The installation and update process is managed by the SB Maintenance Tool application which supports the following operations:

  • Add / remove components: add or remove specific components or versions of the SqueezeBrains suite.
  • Update components: update installed components.
  • Remove all components: remove all the installed components.
SB Maintenance Tool

After the installation you will find the SB Maintenance Tool application in the installation folder; the default locations are:

  • Windows: C:\Program Files\SqueezeBrains
  • Linux: /opt/SqueezeBrains

In case of a new SB SDK release or update, a dialog informing that an update is available will appear when the SB GUI is opened. Select Update Manager to open a dialog that shows the updated packages highlighted. Select Update Now and then Yes in the following popup dialog. The SB Maintenance Tool will open, showing the three installation options described before.

SB update

Choose the second option, select the packages to update in the next window and then confirm. At the end of the update you will be asked to Restart the Maintenance Tool.
If a new SB SDK version is released and the update dialog does not pop up when opening the SB GUI, or if the installed version is older than 1.6, it is necessary to launch SB_Maintenance_Tool.exe manually.

The following image shows how the SB components are organized in the repository installed by the SB Maintenance Tool.

Installation folders structure
See also
System Requirements Specification
Warning
When installing the SB SDK on Linux Debian, make sure that the installing user is in the sudoers list. To add a user to the sudoers list run the following command, where username is the user to be added to sudoers group:
  • "/usr/sbin/adduser username sudo"
    Then logout and login with that user to make the command effective. If it still does not work, reboot Linux.

SB GUI

The SB GUI is released for the following operating systems:

  • Windows: 32 bit and 64 bit on X86 architecture
  • Linux 64 bit on X86 architecture
    • Debian 11
    • Fedora 32
    • Ubuntu 20.04
    • Mint 20

The SB GUI is installed with the SB Maintenance Tool with all the required dependencies.

  • Windows
    The SB GUI is compiled with the Microsoft Visual Studio compiler v143.
    The dependencies are the following:
    • Qt5xxx.dll (Qt 5 Runtime Libraries)
    • msvcp140.dll (Microsoft(R) C Runtime Library)
    • vcruntime140.dll (Microsoft(R) C Runtime Library)
    • vcomp140.dll (Microsoft(R) Visual C++ Redistributable Package)
    • msvcp140_1.dll (Microsoft(R) C Runtime Library)
    • vcruntime140_1.dll (Microsoft(R) C Runtime Library)
    • zlib.dll (Zlib Library)
    • quazip.dll (Quazip Library)
    • sb.dll (SB Library)
    • SB Deep Learning Framework, only if you need to use the sb_project_detection or sb_svl_run functions with the Deep Cortex or Deep Surface modules.
  • Linux
    The SB GUI is compiled with glibc 2.31 using Qt 5.15.0, downloaded through the Qt Online Installer.
    The dependencies are the following:
    • libQt5xxx.so (Qt 5 Runtime Libraries)
    • libxcb-xinerama.so.0 (Requested by Qt 5 Runtime)
    • libicudata.so.56 (Requested by Qt 5 Runtime)
    • libicui18n.so.56 (Requested by Qt 5 Runtime)
    • libicuuc.so.56 (Requested by Qt 5 Runtime)
    • libzlib.so (Zlib Library)
    • libquazip.so (Quazip Library)
    • libsb.so (SB Library)
    • SB Deep Learning Framework, only if you need to use the sb_project_detection or sb_svl_run functions with the Deep Cortex or Deep Surface modules.
See also
System Requirements Specification

SB Library

The SB Library includes:

SB dynamic library

The SB dynamic library is:

  • the interface library between the outside world and the SB Library
  • released for the Windows and Linux operating systems on X86 architecture, and for Android on ARM architecture
  • a Dynamic Link Library for Windows, or a Shared Object for Linux and Android
  • written in C language

The compilers used and the dependencies are the following:

  • Windows
    • X86 architecture
    • Compiled with Microsoft Visual Studio compiler v143.
    • Compiled for both 32 and 64 bit operating systems.
    • Files for C/C++ interface:
      • sb.dll
      • sb.lib
      • sb.h
    • Files for C# interface (see C# Wrapper):
      • sb.dll
      • sb_cs.dll
    • It also requires the following library dependencies:
      • msvcp140.dll (Microsoft(R) C Runtime Library)
      • vcruntime140.dll (Microsoft(R) C Runtime Library)
      • vcomp140.dll (Microsoft(R) Visual C++ Redistributable Package)
      • msvcp140_1.dll (Microsoft(R) C Runtime Library)
      • vcruntime140_1.dll (Microsoft(R) C Runtime Library)
      • SB Deep Learning Framework, only if you need to use the sb_project_detection or sb_svl_run functions with the Deep Cortex or Deep Surface modules.
  • Linux
  • Android
    • ARM architecture
    • Compiler armeabi-v7a (Qt 5.13.1 for Android ARMv7)
    • Files:
Attention
  1. All the enums are 32 bits in both the 32 and 64 bit library versions.
  2. All the string variables are multi byte, not UNICODE.
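
Since all the enums are guaranteed to be 32 bits, the guarantee can even be checked at compile time. The following is a minimal illustration, assuming a C11 compiler (_Static_assert) and using the sb_t_err enum as an example:

/* Compile-time check of the note above: SB enums are 32 bits wide. */
_Static_assert(sizeof(sb_t_err) == 4, "SB enums are expected to be 32 bits");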

SB Deep Learning Framework

The SB Deep Learning Framework is released for the Windows and Linux 64 bit operating systems.
During the installation procedure, if the detected operating system is compatible, the package is checked by default and its content is installed in the folder "dl_frameworks" located inside the installation folder of the specific SB SDK version. If it is not installed, you can install it with the SB_Maintenance_Tool .
The SB Deep Learning Framework is a module containing all the dependencies necessary to run training and detection with the SB deep learning modules, i.e. Deep Cortex and Deep Surface.
All the deep learning algorithms and functions developed by the FaberVision team are contained in the following extension libraries:

If the SB Deep Learning Framework is not installed, it is still possible to manage and perform basic operations on Deep Cortex and Deep Surface projects, e.g. creating a project, setting parameters, doing the labeling and so on.
The library dependencies included in the SB Deep Learning Framework are the following:

  • Windows (total size: 3.27 GBytes)
    • SB Deep Learning Extension v1.16.2: sb_dl.dll
    • Libtorch v1.11.0: asmjit.dll, c10.dll, c10_cuda.dll, fbgemm.dll, libiomp5md.dll, torch_cpu.dll, torch_cuda_cpp.dll, torch_cuda_cu.dll, uv.dll
    • Nvidia CUDA v11.3: cublas64_11.dll, cublasLt64_11.dll, cufft64_10.dll, curand64_10.dll, cusolver64_11.dll, cusparse64_11.dll, nvToolsExt64_1.dll
    • Nvidia cuDNN: v8.x: cudnn_cnn_infer64_8.dll, cudnn_cnn_train64_8.dll, cudnn_ops_infer64_8.dll, cudnn_ops_train64_8.dll, cudnn64_8.dll
    • Nvidia Management Library: nvml.dll (used to get information about GPUs)
    • Intel OpenVino C++ v2024.0: openvino.dll, openvino_intel_cpu_plugin.dll, openvino_intel_gpu_plugin.dll, openvino_onnx_frontend.dll, tbb12.dll
    • SB OpenVino converter: pytorch_to_onnx_converter.exe v1.0.0.2
  • Linux (total size: 3.39 GBytes)
    • SB Deep Learning Extension v1.16.2: libsb_dl.so (glibc ver. 2.28)
    • Libtorch v1.11.0: libc10.so, libc10_cuda.so, libcudart-a7b20f20.so.11.0, libgomp-52f2fd74.so.1, libnvToolsExt-24de1d56.so.1, libtorch_cpu.so, libtorch_cuda_cpp.so, libtorch_cuda_cu.so.
    • Intel OpenVino C++ v2024.0: libopenvino.so.2400, libopenvino_intel_cpu_plugin.so, libopenvino_intel_gpu_plugin.so, libopenvino_onnx_frontend.so.2400, libtbb.so.2, libpugixml.so.1 .
    • SB OpenVino converter: pytorch_to_onnx_converter v1.0.0.2
Note
The Libtorch Linux libraries are self-contained, bundling their CUDA and cuDNN dependencies.

The SB Deep Learning Framework also includes a folder "dl_frameworks\pre-training" with a set of pre-training parameter configurations used to facilitate the training.

C# Wrapper

The SB Library is released together with a C# Wrapper that can be useful when the library needs to be integrated inside a C# .NET application. The Wrapper is contained in sb_cs.dll, located in the same folder as sb.dll.

Attention
At the moment the C# Wrapper supports only the Minimal Integration of the SB Library. This means that it is only possible to perform the detection and not the training.

The following list shows the main classes of the C# Wrapper:

For a quickstart guide see the C# Tutorials section.

Solution structure

A SB solution is based on two main file formats:

So the files of a project are:

  • solution file
  • image information file
  • images

Regarding the location of these files you have full freedom, apart from a single constraint: all the images used by sb_svl_run for training must be located in the same folder.
This folder is set by the parameter sb_t_svl_par::project_path and must also contain the rtn files of the images.
You also have to specify the extensions of the images to be used for the training with the parameter sb_t_svl_par::image_ext.
The same folder may also contain images used for testing or not used at all.
You can specify which images are used for training, which for testing and which are not used by calling the function sb_image_info_set_type.
An image without the rtn file is considered not used by default.
You can put the rprj solution file in the same folder or anywhere you want.
You specify the folder of the rprj file when you call the function sb_project_load.
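
As an illustration, the fragment below sets the training folder and the image extensions and marks one image as a training image. It is only a sketch under stated assumptions: the location of the svl parameters inside sb_t_par, the fixed-size char buffer types of project_path and image_ext, the ";"-separated extension list format, and the signature and type enumerator of sb_image_info_set_type are not reported in this guide and are assumed here.

// Minimal sketch (assumptions: the svl parameters are reachable as the field "svl" of
// sb_t_par, project_path/image_ext are char buffers, and the SB_IMAGE_INFO_TYPE_TRAIN
// enumerator name and the sb_image_info_set_type signature are illustrative).
strcpy(par->svl.project_path, "c:/dataset/solution"); // folder with the images and their rtn files
strcpy(par->svl.image_ext, ".png;.jpg");              // extensions of the images used for training
SB_HANDLE image_info = NULL;
sb_t_err err = sb_image_info_load(&image_info, "c:/dataset/solution/001.png", sb_handle);
if(err == SB_ERR_NONE)
{
    // Mark the image as a training image (enumerator name assumed).
    err = sb_image_info_set_type(image_info, SB_IMAGE_INFO_TYPE_TRAIN);
    sb_image_info_destroy(&image_info);
}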

The SB GUI, for simplicity, puts the rprj and rtn files and the images in a folder named "solution" located at:

  • Windows: "C:/Users/USERNAME/AppData/Roaming/Squeezebrains/SB_GUI_1.18"
  • Linux: "/home/USERNAME/.local/share/Squeezebrains/SB_GUI_1.18"

This working directory is fixed and cannot be changed.
However, the user is free to save the current solution to any location on the disk as a ".zip" file. Note that each time a ".zip" solution is loaded from the SB_GUI, all its contents are copied into the working directory and all the changes will affect only the files in the latter folder. In order to make the changes effective on the zip file, the user needs to manually press Save Project / Save Project As.
The following image shows the contents of the SB GUI working directory for the example solution "surface_wood.rprj".

Working directory contents of the SB GUI project
See also
sb_project_create
sb_project_load
sb_image_info_load

Functionalities

The following is a brief description of all the possible functionalities and applications using the modules of the SB SDK.

Object Detection

Object Detection is the computer vision technique used to locate instances of objects in digital images.
Instances can belong to different models, have different sizes and scales, and are not limited in number. Instances that are not entirely visible, because covered, overlapped with other instances or partially outside the image, are also detectable.
In general, a few dozen training images are enough to reach optimal results.
For this functionality it is possible to use the following SB modules:

  • Retina: it is the built-in tool to solve this kind of problem. It returns object localizations as rectangular bounding-boxed samples. It is able to distinguish overlapped object instances.
  • Surface or Deep Surface: they are able to detect objects by performing pixel segmentation over the image. Contiguous pixels that share some characteristics or properties are merged together in blobs. The blob position and orientation are fully described by a set of properties available to the user. Multiple segmented objects that "touch" each other are merged into a single object instance.

Image classification

Image Classification is the computer vision technique used to assign a model (or a class) to an entire image depending on its content. The content to classify may be the whole information contained in the image, but also a part of it (such as an image of an industrial component). The only SB module working for this functionality is Deep Cortex. It uses a deep learning algorithm to classify an image, returning a weight (i.e. a confidence score) for each possible model associable to it.

Anomaly detection

Anomaly detection is the computer vision technique used to examine an image and to detect those occurrences (the "anomalies") that differ from the established pattern learned by the algorithm (the "good" part). It usually involves unsupervised or semi-supervised training. The SB SDK does not yet include a module specifically designed for this purpose, but similar results are obtained with the following modules:

  • Retina: this module can be used in a semi-supervised way. It is possible to train the algorithm to localize only the good object in the image and, at detection time, to consider a negative localization as an anomalous object. During the training procedure it may be necessary to add some defective images to the SVL dataset to make the algorithm more robust. Defective images must not be labeled.
    At detection time, both for positive and negative samples, Retina also returns a weight map that shows the score distribution in the image.
    This approach has proved to perform well when the anomalies regard geometry or surface defects and the object is characterized by circular symmetry.
  • Surface: this module also allows training without defects, but its application is limited to simple anomaly detection tasks. It has proved to reach good results especially in cases where the background texture is well defined. If the results are not satisfactory, it is possible to add and label defects at any time. In this case the task shall be considered as a Defect segmentation problem.

OCR

OCR (Optical Character Recognition) is the computer vision technique used to recognize letters or text inside digital images. This technology is already widely applied to automatically read paper or digital documents, but it also finds application in other fields such as industrial manufacturing, where reading characters or codes printed in different ways on various objects and surfaces is often necessary. However, traditional OCRs are not able to work in these situations.
The SB SDK provides the Retina module to solve this problem. Given that Retina is an object detector, it is possible to associate each character to be found with a model and treat the task as an object detection problem. Detected characters are grouped into strings according to their relative spatial location. The absence of any a priori knowledge about the character features is therefore an advantage: the algorithm specializes itself only on the characters provided by the user and is able to guarantee robustness to all the phenomena that characterize non-digitally printed characters, such as distortions, discontinuity, blurring and so on.
Another advantage is that it can be trained to recognize special characters, such as symbols and logos, that are not contained in any pre-trained OCR algorithm.

Defect segmentation

Defect segmentation is the computer vision technique used to locate, classify and segment (detecting the boundaries of) different models of defects on surfaces in digital images. It can be considered a particular application field of the more generic family of Semantic segmentation algorithms. The segmentation of the defects is usually done at pixel level. The contributions of all the pixels are combined in order to form a vote plane. Only later are all the contiguous pixels classified with the same "defect" merged together in a blob representation.
SB SDK includes the following built-in modules to solve defect segmentation tasks:

  • Surface: based on shallow learning algorithms.
  • Deep Surface: based on deep learning algorithms. It is able to solve more complex tasks than Surface due to its higher generalization capability.

Instance segmentation

Instance Segmentation is the computer vision technique that involves identifying, segmenting and separating individual objects within an image. It is similar to Defect segmentation, with the only difference that contiguous pixels that belong to different object instances are not merged together. It therefore allows segmenting and separating partially overlapped objects.
The SB SDK does not yet include a module specifically designed for this purpose, but in the case of separated objects the Surface and Deep Surface modules can work as instance segmenters.

Keypoint location

Keypoint location is the computer vision technique used to locate peculiar points or spatial features of an object inside the image. These points are usually invariant to image rotation, shrinkage, translation, distortion, and so on.
The SB SDK does not yet include a module specifically designed for this purpose, but it is possible to train the Retina module to detect characteristic parts of an object. To reach this objective it may be useful to run different Retina tools in cascade, where the first is used to locate the object and the later ones to find the keypoints on it.

How to use Retina to sort good and bad samples

Attention
Retina projects only

In the majority of the applications Retina is applied to solve object detection problems using a supervised approach. Supervised learning consists in training a classifier using a dataset in which the target of the detection task is "well" labeled, and the algorithm learns to generalize about the object from the object instances provided by the user.
However, Retina has demonstrated good results even when used in a semi-supervised manner. Let's suppose that our objective is to distinguish between good and bad pieces at the output of a stage in a production line. We have no idea what kind of defects may appear on the object: the only information that we can provide to the classifier is whether a given piece is good or not. Such an approach is defined as semi-supervised learning, because the system is able to identify a reject using only good pieces, marking as "bad" whatever piece does not produce a detection.
Here follow some guidelines for a semi-supervised use of Retina to sort good and bad samples:

  • The user has to collect a dataset containing both good and bad pieces, but only the good ones have to be labeled. If some instances are critical, because it is not clear whether they have a defect or not, mark them as Optional.
  • If the object can assume various orientations, use perturbation to increase its variability.
  • For training use an incremental approach. It is advisable to iteratively select those test images which contain misclassified object instances (FALSE POSITIVE or FALSE NEGATIVE) to enrich the dataset.
  • Set the sb_t_par_model.num_occurrences parameter to 1 to extract the first instance even when it has a negative weight. In this way, even if the test results have no errors, it is possible to reinforce the training by moving into the training set the test images which contain the TRUE NEGATIVE instances with the highest weights, in addition to the TRUE POSITIVE ones with the lowest weights. Sometimes it may be useful to define an interval of weights centered around 0 (e.g. [-0.2, 0.2]) to identify a region of uncertainty about the prediction. All the pieces whose score falls in this interval require particular attention and it may be necessary to perform a further manual control (by an operator) on them; see the sketch after this list.
  • Verify the quality of the training by visualizing the weight map sb_t_sample_weights_image::img for each occurrence. If the training is correct, all the area highlighted in green has to correspond to the good part of the object, while the area in red has to correspond to the defect.
  • Use the parameters sb_t_par_model::defect_area_percentage and sb_t_par_model::defect_area_threshold to filter out and classify as "bad" the occurrences that contain a certain percentage of bad area, that is, an area where the weight is negative in the sample weight map. The first parameter, sb_t_par_model::defect_area_percentage, sets the minimum contiguous bad area that classifies an occurrence as "bad" even if the total occurrence weight is positive; it is expressed as a percentage [0, 1.0] of the total sample area. The second parameter, sb_t_par_model::defect_area_threshold, sets the weight threshold below which a pixel is considered "bad". The default value is 0 and the range is [-1.0, 1.0].
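
The guidelines above can be sketched in code. The following fragment is only an illustration: the parameter fields come from sb_t_par_model as described above, while the weight value and the boundaries of the uncertainty interval are placeholders chosen for the example.

// Minimal sketch of the semi-supervised configuration and triage described above.
// "par" is a sb_t_par obtained with sb_project_get_par (see "Modify parameters in a project").
par->models.model[0].num_occurrences = 1;           // always extract the best instance, even with negative weight
par->models.model[0].defect_area_percentage = 0.05; // 5% of contiguous bad area marks the occurrence as "bad"
par->models.model[0].defect_area_threshold = 0.0;   // pixels with weight below 0 count as bad area
// Triage of a detected occurrence by its weight (illustrative uncertainty interval [-0.2, 0.2]).
float weight = -0.1f; // weight of the returned occurrence (placeholder value)
if(weight > 0.2f)
    printf("good piece\n");
else if(weight < -0.2f)
    printf("bad piece\n");
else
    printf("uncertain piece: manual inspection needed\n");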

The following images show an example of the semi-supervised use of Retina for sorting good and bad samples. In this case, the objective is to identify defects on the orange socket of the metal valve and in the central hole. The model "good" is associated only with those samples with no defects. The training results visible from the weight maps are quite satisfactory: the classifier has learned to distinguish well between good and bad pieces, providing also useful information about the defect position (marked in red).

Target object
Labeling for a semi-supervised approach in Retina
Weight maps
See also
sb_t_sample_weights_image
sb_t_par_model

How SB works

The SB Library includes four modules divided into two categories:

  • Shallow Learning modules. These are algorithms that learn the parameters of the statistical model by extracting a set of pre-defined features directly from the images in the training dataset. The SqueezeBrains shallow learning modules are:
    • Retina
      It has the ability to learn and recognize objects in an image.
    • Surface
      It has the ability to learn and detect defects in an image.
  • Deep Learning modules. These are modules based on the popular Convolutional Neural Network (CNN) architectures. CNNs are algorithms that learn the parameters of the statistical model through an iterative and hierarchical processing which takes as input the elaborated outputs of the previous blocks. In this case, the features used to discriminate between classes are ideally infinite and automatically adjusted during training. The SqueezeBrains deep learning modules are:
    • Deep Cortex
      It has the ability to learn and classify an image.
    • Deep Surface
      It has the ability to learn and detect defects in an image.

The figure below graphically shows the relation between Shallow Learning and Deep Learning algorithms in relation to Artificial Intelligence.

Shallow Learning

The main characteristics of SqueezeBrains modules are summarized in the following table:

Characteristic | Supporting modules (Retina, Surface, Deep Cortex, Deep Surface) | Description
It simulates the human vision perception system
  • When our brain sees an image, it always tries to simplify it as much as possible
  • Our perception is always willing to organize what it sees in the most logical and comprehensible set
  • So you can say that:
    perception is the expectation of finding a model
Generic analysis not dedicated to any specific task
Many artificial intelligence algorithms are designed to work only on specific vision tasks. Think, for example, of the "face recognition" or "vehicle detection" systems used in security or in the automotive industry: they represent the state of the art of the available technology in those sectors, but their applications are extremely limited. All SqueezeBrains modules overcome these limitations because they are able to extract generic information from images which may fit different vision applications.
Reduced number of configuration parameters
SqueezeBrains modules are designed to be user-friendly. This implies a reduced number of configuration parameters that the user has to set. In more detail:
It learns through the training
With traditional analysis, for each vision problem to be solved, a sequence of fixed operations must be defined. Instead, machine learning techniques allow to have a library that automatically learns what to do. In the case of Retina or Deep Cortex, they learn the characteristics of the object/objects in the image; in the case of Surface or Deep Surface, they learn the defects to be found.
Reduced number of images
For most applications, a few tens of training images are enough to reach optimal results. The only module that requires a higher number of images is Deep Cortex.
Supervised learning (SVL) with human-machine interaction
Learning begins with a small set of images. Then the labeling assistant function of the SB GUI helps the operator to do the labeling: the system proposes and the operator confirms, and if there is an error the operator corrects it and adds the image to the training dataset. In this way the system quickly reaches a stable learning, ready to be used in the machine.
See also
SVL - training
How to do labeling
Multi-models management
It is possible to create multiple models to associate with different objects/defects in the same project. Each model can be enabled/disabled independently during the detection phase.
User scale management
  • Retina: scaling is possible by changing the sb_t_sample::scale parameter. The model must be created by taking as a reference the smallest dimension that guarantees sufficient information for a correct detection, i.e. the worst case. The scale then allows you to manage the occurrences that appear with larger dimensions. When adding the sample, the scale allows you to resize the rectangle of the model in order to contain the object entirely. The scale can be set between SB_SAMPLE_SCALE_MIN and SB_SAMPLE_SCALE_MAX with a step of SB_SAMPLE_SCALE_GRANULARITY.
  • Surface: scaling is possible by changing sb_t_par_model::levels . You can enable the search for defects on one or more scales at the same time. Using different scales is useful when the defect can have variable dimensions.
  • Deep Cortex, Deep Surface: scaling cannot be directly managed by the user, but is automatically inferred from the training images.
Optional samples and defects
Being able to declare a sample or a defect as optional allows you to handle those cases in which it is difficult, even for the user, to understand whether a sample is good or bad, whether a defect is actually a defect or a good part. Optional samples/defects are not used for training and do not affect statistical indices such as accuracy. For optional samples/defects two special classes have been defined, called Optional Positive and Optional Negative, depending on whether the predicted weight, or confidence, is greater than or equal to 0 or less than 0.
See also
sb_t_sample::classify_mode
Collaborating models management
Collaboration between models allows you to correctly manage cases in which you have models of objects that, because of their variability, can become very similar to each other. In this case, it is advisable to set these models as collaborating to improve the detection results. In Retina and Surface, collaborating models are also used in the training phase.
Support for multi core processing
It is possible to set the maximum number of CPU cores that the library can use for training and image processing.
See also
Parallel computing
GPU is not mandatory
Contrary to the majority of existing machine learning software, the use of an NVIDIA GPU is not mandatory. The Retina and Surface tools work only on CPU, while Deep Cortex and Deep Surface can work with both CPU and NVIDIA GPU. In the latter case, an NVIDIA GPU is strongly recommended only for the training phase.
See also
Device management

Shallow Learning modules

This section briefly describes how Retina and Surface work.
Both modules are based on the same algorithms, but they differ in the format of the output results: Retina returns the coordinates of the occurrences of the objects found, while Surface returns a voting plane.
The approach used is that of the Sliding Window. The window always has the size of the model defined with the sb_t_par_model::obj_size parameter. To reduce calculation times, the model window is scrolled on the image with a scan step, which is predefined and equal to 4 pixels in Surface projects, while in Retina projects it is defined by the variable sb_t_par_model::obj_stride_coarse. In Retina projects, if at a point of the image the probability that there is an object increases then, in the neighborhood of that point, a finer scanning step, defined by the variable sb_t_par_model::obj_stride_fine, is used.

A Sliding Window approach

The scale is managed as follows: the image is scaled while the model remains of the original size.
The model window is moved on the image and in each position the features are extracted and assembled in a data vector.

Features extraction

The sb_svl_run training function automatically chooses the best features, but it is still possible: 1) to set the features from which it chooses automatically, or 2) to manually choose the features that the training must use. See the guide features for more information about the features and their selection. In each position of the scan grid of the window on the image, the vector of the features is calculated and sent to a classifier to predict the presence of the object or defect. The prediction generates a confidence, or weight, value between -1 and 1. A conceptual sketch of this scan is shown below.
The minimum dimension of an object or a defect is 8x8 pixels.

Features classification
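
The following fragment illustrates the coarse-to-fine sliding window scan described above. It is a conceptual sketch, not SB API code: classify_window() and report_occurrence() are hypothetical helpers, and the two strides play the role of sb_t_par_model::obj_stride_coarse and sb_t_par_model::obj_stride_fine.

// Conceptual sketch of the coarse-to-fine sliding window scan (not SB API code).
// classify_window() is a hypothetical classifier returning a weight in [-1, 1];
// report_occurrence() is a hypothetical callback collecting candidate occurrences.
for(int y = 0; y + obj_h <= img_h; y += stride_coarse)
{
    for(int x = 0; x + obj_w <= img_w; x += stride_coarse)
    {
        double w = classify_window(img, x, y); // feature extraction + classification
        if(w > 0) // the probability that there is an object increased here
        {
            // Refine the search in the neighborhood of (x, y) with the finer step.
            for(int yf = y - stride_coarse; yf <= y + stride_coarse; yf += stride_fine)
                for(int xf = x - stride_coarse; xf <= x + stride_coarse; xf += stride_fine)
                    report_occurrence(xf, yf, classify_window(img, xf, yf));
        }
    }
}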

Deep Learning modules

This section briefly describes how Deep Cortex and Deep Surface work.
The deep learning modules developed by FaberVision are all based on Convolutional Neural Network (CNN) architectures. The core of these algorithms is the Feature Extracting Block (also called Backbone or Encoder), which is responsible for the following operations:

  • features designing: the descriptors used to extract features from the image are not defined a priori as in the shallow learning modules, but are continuously updated during training by a backpropagation process, in order to get the best configuration that minimizes the training loss, i.e. the error between predicted and expected values on the training images.
  • feature extraction: each layer of the network is responsible for extracting features from the output of the previous layer, in a hierarchical process of feature extraction. The features extracted by the last layers are in general more complex and take into account a greater receptive field of the image, providing generalization capability to the algorithm. Each layer extracts multiple features which are grouped in feature maps.

The feature maps at the output of the backbone are used to feed a processing block which differs according to the project type:

  • Classification Block (Deep Cortex): it is responsible for combining the information extracted by the backbone in order to perform a classification of the image, i.e. it determines which model is associated with the image.
  • Decoder Block (Deep Surface): it is responsible for spatially increasing the resolution of the feature maps and combining the information at the backbone output. It returns a vote plane in which each value is the classification result of the associated region in the input image. The spatial merge of the values higher than a certain score and belonging to the same model produces the segmentation result of the image.

The following image shows a simplified block diagram of the SB deep learning algorithms.

SB deep learning algorithms workflow
See also
sb_t_svl_dl_par_network.type

Retina results

The result of the analysis of a Retina project is a list of samples with the following properties:

  1. truth (TP, FP, TN, FN, OP, ON)
  2. centre
  3. model
  4. confidence or weight, [-1,1]
  5. IoU (Intersection over Union), [0,1]
  6. scale, always >= 1
  7. weight map
    Result of detection with RETINA

Deep Cortex results

The result of the analysis of a Deep Cortex project is a list of samples with size equal to sb_t_par_models::size. Each sample contains the classification result of the image for a specific model of the project. The ordering policy of the samples returned by the sb_project_detection function may differ according to the following conditions:

Ground truth provided | At least one sample with weight > 0 | Ordering criterion
no                    | yes                                 | descending order of weight
yes                   | yes                                 | descending order of weight
yes                   | no                                  | first the sample with the same model as the ground truth (FALSE NEGATIVE), then the others in descending order of weight

Each sample is completely described by the following properties:

  1. truth (TP, FP, TN, FN)
  2. model
  3. confidence or weight, [-1,1]

To facilitate a graphical representation, the centre and the vertices of the sample are set in such a way that the sample has the same size as the image to classify.

Surface/Deep Surface results

The main result of the analysis of a Surface or Deep Surface project is the segmentation.
The function sb_project_detection calculates:

  • voting plane: image of the "weight or confidence".
    Each pixel has a value between [-1,1], positive values for defects and negative values for background.
  • model plane: the image of the gray levels of the models / background.
    Each pixel contains the gray level of the model that got the greatest weight, or 0 if it is background.
    You can use the sb_surface_model_to_gl function to get the gray level corresponding to a model.

Moreover, the function sb_project_detection performs a blob analysis on the voting plane in order to group defect areas with the same model and to allow you to filter out the defects based on various criteria of form and proximity (see blob analysis parameters). You will find the list of the blobs with their properties in the struct sb_t_surface_res . See sb_project_detection to learn how to enable the blob analysis.

Result of detection of SURFACE/DEEP SURFACE

The function sb_project_detection calculates more properties if the ground truth, or labeling, has been passed to the function, in particular:

  • truth plane: the image of the truth values ( see sb_t_truth )
    The truth plane is based on the blob analysis of the labeling and the vote plane.
  • statistics: stored in the sb_t_res_model structure, they refer to the truth values of the defects computed by the blob analysis.

The following table summarizes the truth value (TP, FP, FN, TN, OP, ON) expected as output, given the ground truth/labeling and the defect occurrence found.
In the Ground truth / Labeling images of the tables below, the required defects are shown in red and the optional defects in brown.

Ground truth defect area       | Optional? | Occurrence defect area         | Truth value
area >= sb_t_blob_par.area_min | NO        | area >= sb_t_blob_par.area_min | TP
area >= sb_t_blob_par.area_min | NO        | area < sb_t_blob_par.area_min  | FN
area < sb_t_blob_par.area_min  | NO        | area < sb_t_blob_par.area_min  | TN
area < sb_t_blob_par.area_min  | NO        | area >= sb_t_blob_par.area_min | TP
area >= sb_t_blob_par.area_min | YES       | area >= sb_t_blob_par.area_min | OP
area >= sb_t_blob_par.area_min | YES       | area < sb_t_blob_par.area_min  | ON
area < sb_t_blob_par.area_min  | YES       | area < sb_t_blob_par.area_min  | ON
area < sb_t_blob_par.area_min  | YES       | area >= sb_t_blob_par.area_min | OP
NO defect                      | NA        | area >= sb_t_blob_par.area_min | FP
NO defect                      | NA        | area < sb_t_blob_par.area_min  | TN
(Each row of the original table also shows a Ground truth/Labeling image and a Result image.)
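
The decision table above can be compacted into a small function. The following sketch is only illustrative: the function, the argument names and the truth_t enum are not part of the SB API (the library encodes the truth values in sb_t_truth).

/* Illustrative encoding of the decision table above (not SB API code). */
typedef enum { TP, FN, TN, FP, OP, ON } truth_t;

truth_t truth_value(int gt_defect,      /* 1 if a ground truth defect exists               */
                    int gt_above_min,   /* ground truth defect area >= area_min            */
                    int gt_optional,    /* 1 if the ground truth defect is optional        */
                    int occ_above_min)  /* occurrence defect area >= area_min              */
{
    if(!gt_defect)                      /* no ground truth defect                          */
        return occ_above_min ? FP : TN;
    if(gt_optional)                     /* optional defect: OP/ON instead of TP/FN/TN      */
        return occ_above_min ? OP : ON;
    if(occ_above_min)                   /* required defect with a large enough occurrence  */
        return TP;
    return gt_above_min ? FN : TN;      /* required defect missed                          */
}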

Regarding the required and optional ground truth areas, there are two different situations that may happen and lead to an additional OP/ON result or not.
See the following table:

Situation                                                                                                                     | Truth
The occurrence blob overlaps both the required and the optional ground truth blobs                                            | 1 TP
The occurrence blob overlaps only the required ground truth blob                                                              | 1 TP and 2 ON
The occurrence blob overlaps only the required ground truth blob and the merge distance is large enough to merge the ON blobs | 1 TP and 2 ON
(Each row of the original table also shows a Ground truth/Labeling image and a Result image.)

Special case: Surface/Deep Surface TN counting

Image results
In the results of image detection the TN blobs are always counted both in the global results and in those per model.
Furthermore, the TN counter in the global results is set to 1 if there are no TP, FP, FN blobs in the image.

Statistics / metrics
In the statistics, the counting of the TN instances follows a different logic, in order to have metrics that are more meaningful.
The TN counter of the models is always set to 0, while the global TN counter increases by 1 for each image without any TP, FP and FN blobs. The same convention is also used in Retina projects.
Example
Let's consider the case of a project with two models, 'a' and 'b', and 100 test images, only one of which has a defect of model 'a'; furthermore, we assume that we have no FP blobs.
The statistics of model 'a' differ depending on whether:

  • The defect is detected:
    • TP = 1
    • TN = 0
    • FP = 0
    • FN = 0
    • Accuracy = (TP + TN)/(TP + TN + FP + FN) = 1/1 = 100%
  • The defect is not detected:
    • TP = 0
    • TN = 0
    • FP = 0
    • FN = 1
    • Accuracy = (TP + TN)/(TP + TN + FP + FN) = 0/1 = 0%
  • If, instead, the sb_t_stat_model::tn counter referring to model 'a' were also increased for the 99 images that do not have the 'a' defect:
    • TP = 0
    • TN = 99
    • FP = 0
    • FN = 1
    • Accuracy = (TP + TN)/(TP + TN + FP + FN) = 99/100 = 99%

Therefore, when the defect is missed, the accuracy of model 'a' is 0% with the SB convention, while it would be 99% if the TN counter were increased for every defect-free image, with the consequence that the FN would be "hidden" by the TN counters.
This way of counting the TN in the statistics allows the metrics to "focus" on the images in which the defect is present while neglecting the others.
The image below shows the case of two images: the sb_t_stat_model::tn counter of the global result is 1, because there is only 1 image that has no TP, FP and FN blobs, while the sb_t_res_model::tn counter is equal to 2 for the first image and equal to 3 for the second one.

TN counter

Naming convention

This section explains the naming conventions of the SB library.

Functions

The naming convention of the functions of the library is the following:

For example: sb_project_get_par, where "sb_project" is the acronym, "get" is the action and "par" is the object.
When defining a function, the order of the parameters is usually: input first, then output. See for example the function sb_project_get_stat .
But if the function returns an SB_HANDLE, then the handle is always the first parameter of the function, for example sb_project_load .

acronym

The acronym is formed as follows: sb_ followed by the name of the class.
The acronyms of the SB library are the following:

action

Some typical actions are in the following table:

action example
create sb_lut_create
destroy sb_image_destroy
format sb_license_format_info
get sb_project_get_par
set sb_project_set_par
load sb_project_load
clone sb_project_clone
save sb_project_save
add sb_par_add_model
remove sb_par_remove_model

object

Some typical objects of the action are in the following table:

object example
sb_t_version sb_solution_get_version
sb_t_info sb_project_get_info
sb_t_res sb_project_get_res
sb_t_stat sb_project_get_stat
sb_t_par sb_project_get_par
sb_t_par sb_project_set_par

Types

Types like structures or enumerators are always in the form: sb_t_ followed by the name of the class, for example sb_t_par or sb_t_project_type.
If the structure has a sub type the name has the following format: sb_t_ type _ subtype , for example sb_t_par_models.

Handle management

The SB Library is based on "handles" which are objects with type SB_HANDLE.
An "handle" is a "black box" object and can only be managed with functions.
The handles of the library are the following:

The handles project and image information are threads save.
The SB Library has also many other objects that are structures. The main objects are the following:

Solution and projects management

In this section the main operations on a solution and its projects are described.
With solution we mean a file, created by the library, which contains one or more projects of different types. The solution files created with the SB GUI have the extension rprj, but the user can use any extension he prefers. The basic operations for managing the projects in a solution are the following:

  • sb_project_create : it creates a project handle and loads it in memory
  • sb_project_clone : it creates a project handle by cloning a project already loaded in memory. The new project is loaded in memory too.
  • sb_solution_remove_project : it removes a project module from a solution file
  • sb_project_save : it saves a project handle already loaded in memory into a solution file. It has to be called every time the user wants to write the current project to disk. If the solution file does not already exist, the function acts as a New solution operation, otherwise as an Add project to solution.
  • sb_project_load : it loads in memory a project handle from a solution file.
  • sb_project_change_type : it changes the project type of a project in a solution file.

At any time it is possible to access general information about a solution with the sb_solution_get_version and sb_solution_get_info functions. The latter is especially useful because, among other information, it returns the current active project of the solution. The user can modify this value by calling the sb_solution_set_current_project function. For more details see Solution structure.
All these operations are fully integrated in the SB GUI in order to make it easier for the user to manage a solution and its projects. The figure below shows the basic graphic utilities to perform all the operations. Note that in the SB GUI every change to the solution structure is automatically saved in the corresponding rprj file.

Basic SB GUI operation for projects management in a solution

The following chapters show how to manage a solution file and projects with SB Library.

Create a project

To create a project you should call the function sb_project_create.
In the example below you can see the creation of a new Retina project named "retina_project".

SB_HANDLE sb_handle = NULL;
sb_t_err err = sb_project_create(&sb_handle, "retina_project", SB_PROJECT_TYPE_RETINA);
// Do something
sb_project_destroy(&sb_handle);

Instead of SB_PROJECT_TYPE_RETINA you can use SB_PROJECT_TYPE_SURFACE, SB_PROJECT_TYPE_DEEP_SURFACE or SB_PROJECT_TYPE_DEEP_CORTEX to create a project of a different type.

Save a project in a solution file

To save a project in a solution you should call the function sb_project_save.
In the example below you can see the saving of a project handle in a solution named "example_solution.rprj".

// 1) Save project handle in the solution file
// sb_handle is the project handle of the previous section
sb_t_err err = sb_project_save(sb_handle, "example_solution.rprj", SB_PROJECT_MODE_DETECTION_AND_SVL);

Set a project as the current project of its own solution

To set a project as the current project of its own solution you should call the function sb_solution_set_current_project.
In the example below you can see how to set a project as the current project of its own solution.
sb_handle is a project handle previously loaded with the function sb_project_load or created with the function sb_project_create.

sb_t_err err = SB_ERR_NONE;
sb_t_project_info prj_info = { 0 };
// 1) Get the project information
err = sb_project_get_info(sb_handle, &prj_info);
// 2) Set the project as the current project of its own solution
err = sb_solution_set_current_project("example_solution.rprj", prj_info.uuid);

Load a project from solution file

To load a project from a solution you should do the following operations:
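
The list of the single operations is not detailed here; the following fragment is a minimal sketch under stated assumptions: the exact signatures of sb_solution_get_info and sb_project_load are not reported in this guide, so the argument order used below (output handle first, following the naming conventions of the library) is an assumption.

// Minimal sketch of loading a project from a solution file (the signatures of
// sb_solution_get_info and sb_project_load are assumed, see the note above).
sb_t_solution_info* solution = NULL;
SB_HANDLE sb_handle = NULL;
sb_t_err err = SB_ERR_NONE;
// 1) Get the solution information with the list of the projects contained in the file.
err = sb_solution_get_info("example_solution.rprj", &solution);
if(err != SB_ERR_NONE) goto FnExit;
// 2) Load the j-th project of the solution.
err = sb_project_load(&sb_handle, "example_solution.rprj", solution->projects[j].uuid, SB_PROJECT_MODE_DETECTION_AND_SVL);
if(err != SB_ERR_NONE) goto FnExit;
FnExit:
// 3) Destroy the solution info structure
sb_solution_destroy_info(&solution);
return err;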

Clone a project

To clone a project you should call the function sb_project_clone.
In the example below you can see the cloning of a project handle already loaded in memory. A new project uuid is assigned to the cloned project.
sb_handle is a project handle previously loaded with the function sb_project_load or created with the function sb_project_create.

SB_HANDLE sb_handle_clone = NULL;
sb_t_err err = sb_project_clone(sb_handle, &sb_handle_clone, SB_PROJECT_MODE_DETECTION_AND_SVL, 1);

Remove a project from solution file

To remove a project from a solution you should call the function sb_solution_remove_project.
In the example below you can see the removal of the j-th project module from the "example_solution.rprj" file.

// solution is the solution info obtained with sb_solution_get_info (see "Load a project from solution file")
sb_t_err err = sb_solution_remove_project("example_solution.rprj", solution->projects[j].uuid);

Destroy the solution info structure

To destroy the solution info structure you should call the function sb_solution_destroy_info.

See also
Full integration

Modify parameters in a project

To set a parameter in a project you should do the following operations,
where sb_handle is a project handle previously loaded with the function sb_project_load or created with the function sb_project_create:

sb_t_par* par = NULL;
sb_t_err err = SB_ERR_NONE;
// 1) Get the current parameters from the project handle.
err = sb_project_get_par(sb_handle, &par);
if(err != SB_ERR_NONE) goto FnExit;
// 2) Change the parameter fields as needed.
par->num_threads = 4;
// 3) Set the new parameters into the project handle.
err = sb_project_set_par(sb_handle, par);
if(err != SB_ERR_NONE) goto FnExit;
FnExit:
// 4) Destroy the project parameters structure
sb_par_destroy(&par);
return err;

Enable/disable a model in a project

To enable/disable a model of the project you should do the following operations,
where sb_handle is a project handle previously loaded with the function sb_project_load or created with the function sb_project_create:

sb_t_par* par = NULL;
sb_t_err err = SB_ERR_NONE;
// 1) Get the current parameters from the project handle.
err = sb_project_get_par(sb_handle, &par);
if(err != SB_ERR_NONE) goto FnExit;
// 2) Enable the first model ("enabled" is assumed here to be the name of the model enable flag).
par->models.model[0].enabled = 1;
// 3) Disable the second model.
par->models.model[1].enabled = 0;
// 4) Set the new parameters into the project handle.
err = sb_project_set_par(sb_handle, par);
if(err != SB_ERR_NONE) goto FnExit;
FnExit:
// 5) Destroy the project parameters structure
sb_par_destroy(&par);
return err;

Add model to a project

To add a model to a project you should do the following operations,
where sb_handle is a project handle previously loaded with the function sb_project_load or created with the function sb_project_create:

sb_t_par* par = NULL;
sb_t_err err = SB_ERR_NONE;
// 1) Get the current parameters from the project handle.
err = sb_project_get_par(sb_handle, &par);
if(err != SB_ERR_NONE) goto FnExit;
// 2) Add the model named "model1".
err = sb_par_add_model(par, "model1");
if(err != SB_ERR_NONE) goto FnExit;
// 3) Set the model parameters.
par->models.model[par->models.size-1].obj_size = sb_size(128, 128);
par->models.model[par->models.size-1].obj_min_distance = sb_size(128, 128);
// 4) Set the new parameters into the project handle.
err = sb_project_set_par(sb_handle, par);
if(err != SB_ERR_NONE) goto FnExit;
FnExit:
// 5) Destroy the project parameters structure
sb_par_destroy(&par);
return err;

Remove a model from a project

To remove a model from a project you should do the following operations,
where sb_handle is a project handle previously loaded with the function sb_project_load or created with the function sb_project_create:

sb_t_par* par = NULL;
sb_t_par_changes_info* info = NULL;
sb_t_err err = SB_ERR_NONE;
// 1) Get the current parameters from the project handle.
err = sb_project_get_par(sb_handle, &par);
if(err != SB_ERR_NONE) goto FnExit;
// 2) Remove the model named "model1".
err = sb_par_remove_model(par, "model1");
if(err != SB_ERR_NONE) goto FnExit;
// 3) Get the information on the parameters changes to know what sb_project_set_par and
// sb_image_info_apply_par_changes do when they apply the new parameters.
err = sb_project_get_par_changes_info(sb_handle, par, &info);
if(err != SB_ERR_NONE) goto FnExit;
// 4) Set the new parameters into the project handle.
err = sb_project_set_par(sb_handle, par);
if(err != SB_ERR_NONE) goto FnExit;
// 5) Then it is necessary to apply the changes to all the SB_IMAGE_INFO_EXT files associated
// to the images of the project. For all the images (image_name is the list of the image
// files of the project, num_images its size):
for(int i = 0; i < num_images; i++)
{
    SB_HANDLE image_info = NULL;
    err = sb_image_info_load(&image_info, image_name[i], sb_handle);
    if(err != SB_ERR_NONE) goto FnExit;
    // Apply the parameters changes to the i-th image
    err = sb_image_info_apply_par_changes(image_info, info);
    if(err != SB_ERR_NONE) goto FnExit;
    err = sb_image_info_destroy(&image_info);
    if(err != SB_ERR_NONE) goto FnExit;
}
FnExit:
// 6) Destroy the par changes structure
sb_project_destroy_par_changes_info(&info);
// 7) Destroy the project parameters structure
sb_par_destroy(&par);
return err;

Elaborate an image

To elaborate an image with a trained project you should do the following operations (see also tutorial_3_retina_detect),
where sb_handle is a project handle previously loaded with the function sb_project_load or created with the function sb_project_create:

sb_t_image* image = NULL;
sb_t_roi* roi = NULL;
sb_t_res* res = NULL;
sb_t_err err = SB_ERR_NONE;
// 1) Load the image from file
err = sb_image_load(&image, "002.jpg");
if(err != SB_ERR_NONE) goto FnExit;
// 2) Create the detection roi
err = sb_roi_create(&roi, image->width, image->height);
if(err != SB_ERR_NONE) goto FnExit;
// 3) Set the detection roi
err = sb_roi_set_rect(roi, 255, sb_rect(0, 0, image->width, image->height), 0);
if(err != SB_ERR_NONE) goto FnExit;
// 4) Perform the detection
err = sb_project_detection(sb_handle, image, roi, NULL, NULL);
if(err != SB_ERR_NONE) goto FnExit;
// 5) Get the results
err = sb_project_get_res(sb_handle, &res, 0);
if(err != SB_ERR_NONE) goto FnExit;
FnExit:
// 6) Destroy the image
sb_image_destroy(&image);
// 7) Destroy the roi
sb_roi_destroy(&roi);
// 8) Destroy the detection results
sb_res_destroy(&res);
return err;

Get the statistics

To get the statistics related to the last processed images you should do the following operations,
where sb_handle is a project handle previously loaded with the function sb_project_load or created with the function sb_project_create :

sb_t_image* image = NULL;
sb_t_roi* roi = NULL;
sb_t_stat* stat = NULL;
// 1) Reset the current statistics
// To be called before the elaboration of the images
err = sb_project_reset_stat(sb_handle);
if(err != SB_ERR_NONE) goto FnExit;
// 2) Load the image from file
err = sb_image_load(&image, "002.jpg");
if(err != SB_ERR_NONE) goto FnExit;
// 3) Create the detection roi
err = sb_roi_create(&roi, image->width, image->height);
if(err != SB_ERR_NONE) goto FnExit;
// 4) Set the detection roi
err = sb_roi_set_rect(roi, 255, sb_rect(0, 0, image->width, image->height), 0);
if(err != SB_ERR_NONE) goto FnExit;
// 5) Perform the detection
err = sb_project_detection(sb_handle, image, roi, NULL, NULL);
if(err != SB_ERR_NONE) goto FnExit;
// ... Other images detection ...
// 6) Get the statistics
err = sb_project_get_stat(sb_handle, &stat);
if(err != SB_ERR_NONE) goto FnExit;
FnExit:
// 7) Destroy the image
// 8) Destroy the roi
// 9) Destroy statistics structure
return err;
sb_t_err sb_project_reset_stat(SB_HANDLE handle)
Resets the internal statistics of the elaborations.
sb_t_err sb_project_get_stat(SB_HANDLE handle, sb_t_stat **const stat)
Gets the statistics from the handle.
sb_t_err sb_stat_destroy(sb_t_stat **const stat)
Destroys the sb_t_stat structure.
Statistics of the elaborations done with the function sb_project_detection .
Definition: sb.h:12932

Manage custom parameters

You can store your own custom parameters both in a project and in a SB_IMAGE_INFO_EXT file. The information is saved in XML format.

Load custom parameters

To load the custom parameters use the function sb_project_get_custom_par_root.
sb_handle is a project handle previously loaded with the function sb_project_load or created with the function sb_project_create.

// Get the custom parameters xml node root from a project.
SB_HANDLE node_root = NULL;
sb_t_err err = sb_project_get_custom_par_root(sb_handle, &node_root);
if(err != SB_ERR_NONE) return err;
sb_t_err sb_project_get_custom_par_root(SB_HANDLE handle, SB_HANDLE *node_root)
Returns the xml root node of your own custom parameters.

Add a parameter

To add a new parameter use the following function
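
  • sb_xml_node_add
    A minimal sketch (node_root is obtained as shown in Load custom parameters; sb_xml_node_add is the same function used in the Example section below):
    // Add a parameter called "par1" with value "value1"
    SB_HANDLE node = sb_xml_node_add(node_root, "par1", "value1");
    if(node == NULL) return SB_ERR_INTERNAL;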

Add a structured parameter

To add a new structured parameter with sub-parameters use the following function
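
  • sb_xml_node_add
    A minimal sketch: passing NULL as content creates a container node to which the sub-parameters can then be added (the same pattern used in the Example section below):
    // Add a structured parameter "group1" with a sub-parameter "par1"
    SB_HANDLE node_group = sb_xml_node_add(node_root, "group1", NULL);
    if(node_group == NULL) return SB_ERR_INTERNAL;
    SB_HANDLE node_sub = sb_xml_node_add(node_group, "par1", "value1");
    if(node_sub == NULL) return SB_ERR_INTERNAL;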

Remove a parameter

To remove a parameter use the following function

Set a parameter

To set the value of a parameter use the following function

  • sb_xml_node_set
    // Set the parameter called "par1" with value "value1"
    sb_t_err err = sb_xml_node_set(node_root, "par1", "value1");
    if(err != SB_ERR_NONE) return err;
    sb_t_err sb_xml_node_set(SB_HANDLE parent, const char *const name, const char *const content)
    Sets the content of the node with the specified name. The function checks if the node exists and othe...

Get a parameter

To get the value of a parameter use the following function

Get a parameter node

To get the xml node of a parameter use the following function

Example

Suppose you want to add the following custom parameters XML structure:

<custom_parameters>
<model>
<name>"Model1"</name>
<color>"Red"</color>
</model>
</custom_parameters>

Use the following code to build it,
where sb_handle is a project handle previously loaded with the function sb_project_load or created with the function sb_project_create :

SB_HANDLE node_root = NULL;
// 1) Get the custom parameters xml node root from a project
sb_t_err err = sb_project_get_custom_par_root(sb_handle, &node_root);
if(err != SB_ERR_NONE) return err;
// 2) Add the "model" node
SB_HANDLE node_model = sb_xml_node_add(node_root, "model", NULL);
if(node_model == NULL) return SB_ERR_INTERNAL;
// 3) Add the "name" sub node
SB_HANDLE node_sub = sb_xml_node_add(node_model, "name", "Model1");
if(node_sub == NULL) return SB_ERR_INTERNAL;
// 4) Add the "color" sub node
node_sub = sb_xml_node_add(node_model, "color", "Red");
if(node_sub == NULL) return SB_ERR_INTERNAL;
See also
sb_solution_get_info
sb_solution_get_version
sb_solution_set_current_project
sb_solution_remove_project
sb_par_destroy
sb_par_add_model
sb_par_remove_model
sb_project_get_info
sb_project_destroy
sb_project_load
sb_project_save
sb_project_clone
sb_project_invalidate
sb_project_check_trained
sb_project_set_sensitivity
sb_project_get_sensitivity
sb_project_get_notes
sb_project_set_notes
sb_project_get_name
sb_project_set_name
sb_project_get_custom_par_root
sb_project_get_svl_version
sb_project_detection
sb_project_get_par_changes_info
sb_project_destroy_par_changes_info
sb_project_get_par
sb_project_set_par
sb_project_get_res
sb_project_reset_stat
sb_project_get_stat

Management of information relative to an image

Each image of the solution is associated with a file, called image information file, that contains all the image information necessary for SVL and detection. The file has the same name as the corresponding image but extension SB_IMAGE_INFO_EXT.
The image information file contains, for each project: the type of the image (learning/test), the ground truth samples, the ROI, the ROI defects, the notes, the results and the custom parameters.

To manage the image information file it is necessary to call the functions with the prefix sb_image_info (e.g. sb_image_info_load, sb_image_info_save).
All these functions work separately on an image information handle, which is associated with a single project contained in the image information file.

Add the "image information" handle related to a project image to an "image information file"

To add the image information related to a project image to an image information file you should do the following operations,
where sb_handle is a project handle previously loaded with the function sb_project_load or created with the function sb_project_create :

SB_HANDLE image_info = NULL;
// 1) Creates an "image information" handle that connect the image ("001.jpg") to a project
err = sb_image_info_load(&image_info, "001.jpg", sb_handle);
if(err != SB_ERR_NONE) goto FnExit;
// Performs some operations on the image information handle, like adding samples as explained later
// 2) Save the "image information handle" into the "image information file".
// The file will have the same name as the image but extension SB_IMAGE_INFO_EXT
err = sb_image_info_save(image_info);
if(err != SB_ERR_NONE) goto FnExit;
FnExit:
// 3) Destroy the "image information handle"
err = sb_image_info_destroy(&image_info);
sb_t_err sb_image_info_save(SB_HANDLE image_info)
Saves the SqueezeBrains image info handle into the image file.

Add a labeling sample to "image information handle" of an image

To add a sample to an existing image information handle you should do the following operations,
where image_info is an "image information handle" previously loaded or created with the function sb_image_info_load :

sb_t_sample sample;
// 1) Set the sample parameters
memset(&sample, 0, sizeof(sample));
sample.type = SB_OBJ_TEST;
sample.classify_mode = SB_SAMPLE_REQUIRED;
sample.scale = 1.0f;
strcpy(sample.model_name, "model1");
sample.centre = sb_point(78, 141);
err = sb_get_uuid(sample.uuid, sizeof(sample.uuid));
if(err != SB_ERR_NONE) return err;
// 2) Add the sample to the "image information handle"
err = sb_image_info_add_sample(image_info, &sample);
if(err != SB_ERR_NONE) return err;
sb_t_err sb_get_uuid(char *const str, size_t size)
Creates a new uuid.
SB_INLINE sb_t_point sb_point(int x, int y)
Inline constructor of structure sb_t_point.
Definition: sb.h:6546
sb_t_err sb_image_info_add_sample(SB_HANDLE image_info, const sb_t_sample *const sample)
Adds a sample into the SqueezeBrains image info handle.
@ SB_OBJ_TEST
Object is not used for learning.
Definition: sb.h:8762
@ SB_SAMPLE_REQUIRED
Definition: sb.h:8845
Sample of an image.
Definition: sb.h:8908
sb_t_point centre
Coordinates of the centre of the sample.
Definition: sb.h:8916
char model_name[SB_PAR_STRING_LEN]
Name of the model.
Definition: sb.h:8981
char uuid[36]
uuid of the sample.
Definition: sb.h:8909
float scale
Scale factor.
Definition: sb.h:8994
sb_t_sample_classify_mode classify_mode
Classification mode.
Definition: sb.h:9063
sb_t_obj_type type
Sample type.
Definition: sb.h:8999

Set the roi for the "image information handle"

To set the roi to an existing image information handle you should do the following operations,
where image_info is an "image information handle" previously loaded or created with the function sb_image_info_load :

sb_t_roi *roi = NULL;
// 1) Get the roi from the "image information handle"
// set the parameters width and height to -1 to obtain a roi with a resolution equal to that of the image
err = sb_image_info_get_roi(image_info, &roi, -1, -1, 0);
if(err != SB_ERR_NONE) goto FnExit;
// Edit the roi as needed
// 2) Set the roi into the "image information handle"
err = sb_image_info_set_roi(image_info, roi);
if(err != SB_ERR_NONE) goto FnExit;
FnExit:
// 3) Destroy the roi structure
sb_t_err sb_image_info_set_roi(SB_HANDLE image_info, const sb_t_roi *const roi)
Sets the ROI in a SqueezeBrains image info handle.
sb_t_err sb_image_info_get_roi(SB_HANDLE image_info, sb_t_roi **const roi, int width, int height, int compressed)
Gets the ROI from a SqueezeBrains image info handle.

Reset an "image information" related to a project

To reset image information related to a project you should do the following operations,
where image_info is an "image information handle" previously loaded or created with the function sb_image_info_load :

// 1) Reset the "image information" (erases the ground truth samples, the roi and the ground truth roi defects)
sb_t_err err = sb_image_info_reset(image_info);
if(err != SB_ERR_NONE) return err;
sb_t_err sb_image_info_reset(SB_HANDLE image_info)
Erases all the data from the SqueezeBrains image info handle.

Clone an "image information handle" and write it to the "image information file"

To clone an image information handle and write it to the "image information file" you should do the following operations,
where image_info is an "image information handle" previously loaded or created with the function sb_image_info_load :

SB_HANDLE image_info_dst = NULL;
// 1) Clone "image information handle" and assign it to another project of the solution
err = sb_image_info_clone(image_info, &image_info_dst, sb_handle_dest);
// 2) Save cloned "image information handle" to the "image information file" associated to the image
err = sb_image_info_save(image_info_dst);
// 3) Destroy the image info handle
sb_image_info_destroy(&image_info_dst);
sb_t_err sb_image_info_clone(SB_HANDLE src, SB_HANDLE *pdst, SB_HANDLE module_handle)
Clones the SqueezeBrains image info.

Remove "image information" related to a project from "image information file"

To remove the image information related to a project from an image information file you should do the following operations,
where sb_handle is a project handle previously loaded with the function sb_project_load or created with the function sb_project_create :

// 1) Get the project information associated to the "image information" to remove
// (info is a project information structure, declared by the caller and filled by sb_project_get_info)
err = sb_project_get_info(sb_handle, &info);
// 2) Remove "image information" of the specified project uuid from "image information file"
err = sb_image_info_remove_project("001.jpg", info.uuid);
sb_t_err sb_image_info_remove_project(const char *const image_file, const char *const project_uuid)
Remove the project with the specified UUID from the data associated to the image.
See also
Solution structure
sb_image_info_load
sb_image_info_clone
sb_image_info_save
sb_image_info_destroy
sb_image_info_remove_project
sb_image_info_add_sample
sb_image_info_set_samples
sb_image_info_get_samples
sb_image_info_reset
sb_image_info_set_type
sb_image_info_get_type
sb_image_info_get_notes
sb_image_info_set_notes
sb_image_info_get_name
sb_image_info_set_results
sb_image_info_get_results
sb_image_info_change_project
sb_image_info_get_details
sb_image_info_destroy_details
sb_image_info_set_roi
sb_image_info_get_roi
sb_image_info_set_roi_defects
sb_image_info_get_roi_defects
sb_image_info_get_custom_par_root

How to do labeling

In supervised machine learning applications, image labeling is a fundamental operation with a strong influence on the goodness of the final results. Labeling is the procedure of locating every sample or defect instance in the correct position over the image. It is (almost always) done manually by a user and for this reason it may be a time-consuming operation, especially when it involves a great number of images. Its importance is quite intuitive: once the learning process is started, the elaboration criteria are directly inferred from the image data, without any user interaction. Thus, before starting any SVL it is necessary to have a training dataset labelled as accurately as possible.
You can do the labeling using the SB GUI.
To facilitate labeling, the Labeling Assistant has been developed in the SB GUI, see Labeling assistant with SB GUI for more information.
Below the labeling procedure for SqueezeBrains tools is described.

Labeling with Retina

Labeling procedure for Retina projects is not particularly complex. After defining a new model, the user has to add a sample for each model occurrence in the image using the sb_image_info_add_sample or sb_image_info_set_samples function. Each sample is a bounding box item and it is fully described by the information of the structure sb_t_sample . In the following some hints are reported in order to do a correct labeling:

  • the object has to be centered in the sample instance with an extra margin. This margin is equal to 8 pixels * the sb_t_sample::scale value and it has to be considered along every direction.
  • set an appropriate trade-off between the model dimension and the sb_t_sample::scale of the samples. Note that model instances with a small object size and samples with high scale values are faster to train, at the cost of a loss of resolution information.
  • sometimes, if the entire object is particularly complex to learn, it may be useful to label only a portion or a detail of the object, which can be easier to locate
  • if a sample is "border-line", i.e. the user is not sure about its classification, or he is sure but its appearance is strongly different from the other instances of the same model, set the sb_t_sample::classify_mode property of the sample equal to optional. In this way, the sample is not considered for the training of the model and does not influence the convergence of the inner classifier.

In the figure below some of the previous points are graphically represented in SB GUI.

Retina labeling. Detail of a labeled sample with its 'extra margin' (left) and an example of Required and Optional samples (right).

Labeling with Deep Cortex

Labeling procedure for Deep Cortex is very simple. After defining a new model, the user has to set at most one sample on the image using the sb_image_info_set_samples function. The sample describes the model associated to the image. If the image is not associated to any model, no sample has to be set.
Note that in a Deep Cortex project, the information about the sample position in the image is not important for training and inference. However, to facilitate the graphical representation of the sample over the image, it is advisable to set sb_t_sample::centre equal to the coordinates of the image centre and sb_t_sample::scale = 1.0, as in the sketch after the figure below.
In the SB GUI it is also possible to select a group of images and set the model for all of them at once.
The figure below shows how to do labeling in SB GUI.

Deep Cortex labeling in SB GUI.
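
A minimal sketch of this labeling, reusing sb_image_info_add_sample from the Retina example above (image is assumed to be the corresponding sb_t_image already loaded, and the image information handle is assumed to contain no other samples):

sb_t_sample sample;
memset(&sample, 0, sizeof(sample));
sample.type = SB_OBJ_TEST;
sample.scale = 1.0f;
strcpy(sample.model_name, "model1");
// Set the centre to the image centre to facilitate the graphical representation
sample.centre = sb_point(image->width / 2, image->height / 2);
err = sb_get_uuid(sample.uuid, sizeof(sample.uuid));
if(err != SB_ERR_NONE) return err;
err = sb_image_info_add_sample(image_info, &sample);
if(err != SB_ERR_NONE) return err;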

Labeling with Surface/Deep Surface

Labeling procedure for Surface and Deep Surface projects is more complex than the Retina and Deep Cortex ones. This is because surface defects cannot be described and labelled as rectangular bounding boxes, but require a labeling that can perfectly fit their shape and orientation at pixel level. Thus, defects are labelled over a ROI defects, which allows a higher degree of flexibility. The ROI defects is set over the image using the sb_image_info_set_roi_defects function (a minimal sketch is reported after the following hints). In the following some hints about how to mark the ROI defects are reported:

  • the ROI defects has to overlap the defect instance as much as possible, taking care to completely cover its entire area
  • in case of filiform defects, the ROI defects area has to be centered along the defect axis. This condition is strongly recommended for Surface projects when the defect dimension is lower than 16 pixels, which is the minimum block size for the inner elaboration. A detailed example from SB GUI is reported in the figure below.
    Surface labeling. Original defect image (left), correctly labeled defect (upper right) and not centered labeled defects (bottom right).
  • in case of shaded surface defects (defects that exhibit a transition area between regions with an undefined degree of defectiveness and regions with sure defectiveness) it is recommended to first label the entire defect area as optional and only afterwards label the "true" defect more accurately as mandatory. The general rule is that optional defect regions are not used by the training. The only exception to this rule is in Surface, if the optional defect regions are contained in a minimum processing block including the mandatory defects.
    A detailed example from SB GUI is shown in the image below.
    Surface labeling. Left: original image with shaded defect, Right: correct labeling (mandatory defects in red, optional defects in light green).
  • optional defect areas which cannot be assigned to a specific model have to be labelled as optional for all models. ROI defects pixels assigned to this class are set to the SB_SURFACE_OPTIONAL_GRAY_LEVEL value.
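
As a minimal sketch, labeling a rectangular required defect of the model with index 0 could look like the following. The sb_image_info_set_roi_defects arguments are assumed to be analogous to those of sb_image_info_set_roi, and gray level 255 marks a required defect of model 0 (see the Defects ROI table in the ROI management section):

sb_t_roi* roi_defects = NULL;
// Create a ROI defects with the same resolution of the image
err = sb_roi_create(&roi_defects, image->width, image->height);
if(err != SB_ERR_NONE) return err;
// Mark a required defect of model 0 with a rectangle; a pixel-level shape can be set with sb_roi_set_data
err = sb_roi_set_rect(roi_defects, 255, sb_rect(120, 80, 40, 16), 0);
if(err != SB_ERR_NONE) return err;
// Store the ROI defects into the "image information handle" (assumed signature, analogous to sb_image_info_set_roi)
err = sb_image_info_set_roi_defects(image_info, roi_defects);
if(err != SB_ERR_NONE) return err;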

Labeling assistant with SB GUI

In order to facilitate and speed up the data labeling procedure, the SB GUI offers a Labeling assistant tool. This tool allows the user to use the SVL training currently saved in memory for the data labeling of unlabeled (or partially labeled) images. The Labeling assistant graphically suggests for each image a list of sample/defect instances, which can be freely modified and then accepted by the user. However, a careful supervision of the automatic procedure is always necessary, because the classifier may make mistakes, especially when the current training is not robust.
The image below shows the overview of the labeling assistant tool in Retina, with its main operations.

Labeling assistant overview for a Retina project in SB GUI

Features

Attention
Retina and Surface projects only

To recognize objects and defects with Retina and Surface projects, a set of features is extracted from the image and classified. The available features are fixed and designed by SqueezeBrains. Generally the optimal feature set is automatically selected by the SVL, i.e. by the sb_svl_run function. The set of features is configured with the parameter sb_t_svl_sl_par::features and is initialized with a predefined set of features that differs depending on the project type, Retina or Surface. See the table below.

Project Predefined set of features
Retina 0A, 0B, 2A, 2B, 2C
Surface 2A, 2B, 2C, 2A.R, 2AA.R, 2B.R, 2C.R, 2A.G, 2AA.G, 2B.G, 2C.G, 2A.B, 2AA.B, 2B.B, 2C.B

Obviously it is possible to create your own set of features so as to condition the choice of the SVL.
In the project settings menu of the SB GUI, it is possible to select the features in two different ways:

  • Simple Mode: for non-expert users. This mode leads the user to a more targeted and conscious choice of the feature set through a series of questions about the dataset. Depending on the user answers, the optimal feature set is automatically selected.
  • Advanced Mode: for expert users. The user has to manually select the features that best fit the target detection task. A brief description of the features and their application is available in the section Features description.
    Features selection in SB GUI. Overview of Simple Mode (left) and Advanced Mode (right).

However, the set of features effectively used for the training may vary according to the sb_t_svl_sl_par::optimization_mode parameter and to the type of images. For example, some features can only be used with BW images, others only with color images (RGB or BGR), and others with both BW and color images. Use the function sb_feature_description to get this information from the library.

Note
Deep Cortex and Deep Surface projects do not use features from a predefined set of available features. Convolutional Neural Network algorithms are able, during SVL, to dynamically compute the best set of features to solve the current vision task, without any action requested from the user. Thus, if you are interested in Deep Cortex or Deep Surface you can skip the following section. See Deep Learning modules for more information.

Features names

The name of the features has the following format: 2A[A][_RI][.R].
The name is composed of the following elements (an example follows the list):

  • Number: the first number represents the category of the feature. Allowed values are 0, 2, 3, 4.
  • Letter: the second letter specifies the behaviour of the feature. Allowed values are A, B, C, D.
  • Letter: the third letter is optional, it is an "A" and it is used only for the 2A features. It represents a variant of the 2A feature.
  • _RI: this suffix identifies the rotation invariant features, that is features insensitive to rotation. These are used for Surface projects in which the defect is present with different rotations. This variant applies only to the feature categories 0 and 2.
  • .R: the last optional suffix identifies the specific chromatic channel (R, G or B) processed by the feature. If specified, the feature considers only that channel. This variant applies only to the category 2 features.
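
For example, 2AA_RI.R is a category "2" feature with behaviour "A", in its "A" variant, invariant to rotation, that processes only the R chromatic channel.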

Features description

The following table lists the features and their characteristics.

Name Image format Description
Category "0"
0A
0B
BW
RGB
BGR
The feature is sensitive to the transitions between light and dark and viceversa.
It is insensitive to the level of brightness.
It is a feature very effective in detecting the shape of the objects.
It is used both for BW and for colour images. In the case of colour images, for each pixel the R, G or B component that maximizes the gradient is chosen.
  • 0A: A dark object on a bright background or a bright object on a dark background make no difference.
  • 0B: A dark object on a bright background or a bright object on a dark background are different.
0A_RI
0B_RI
BW
RGB
BGR

Similar to the 0A and 0B but invariant to rotation.
Useful for Surface projects when the defect or the background can have different rotations.

Category "2"
2A
2B
2C
2D
BW
RGB
BGR
They are features designed to see the texture of surfaces but can also make a significant contribution to the analysis of the shape of objects.
They are insensitive to the level of brightness.
As the letter (A, B, etc.) increases, the size of the area analyzed punctually by the feature itself increases.
Up to version 1.4.0 they could only be used with BW images.
From version 1.5.0 onwards they can also be used on color images. In this case the luminance is used.
2A_RI
2B_RI
2C_RI
BW
RGB
BGR
Variants of the features 2A, 2B, 2C invariant to rotation, indicated for Surface projects when the defect or the background can assume different rotations.
An example is the detection of defects on circular pieces however rotated.
2A.R
2A.G
2A.B
2AA.R
2AA.G
2AA.B
2B.R
2B.G
2B.B
2C.R
2C.G
2C.B
RGB
BGR
They are variants of features 2 that apply only to color images and work only on the specified color channel (R, G or B).
These features are particularly suitable for Surface projects where the chromatic component of the defect and / or background is important for the characterization of the defects.
For the other aspects they are equivalent to the original features "2".
2A_RI.R
2A_RI.G
2A_RI.B
2AA_RI.R
2AA_RI.G
2AA_RI.B
2B_RI.R
2B_RI.G
2B_RI.B
2C_RI.R
2C_RI.G
2C_RI.B
RGB
BGR

Variations of chromatic features invariant to rotation, indicated for Surface projects when the defect or background can take on different rotations.
An example is the detection of defects on colored circular pieces.

Category "3"
3A
3B
3C
3D
3E
RGB
BGR
They analyze color hue and saturation without considering luminance.
They are used to distinguish between objects with different colors.
The spectrum of visible light, from blue to red, is divided into a histogram of 16, 32, 64, 128, 256 bins respectively for features 3A, 3B, 3C, 3D, 3E, so the features manage to separate quite different colors, not shades of a color.
The image below shows the case of feature 3A with 16 bins.
BW

From version 1.8.1 onwards the "3" features can also be used on BW images. In this case the luminance is used.
They are used to distinguish between objects with different luminance.
The luminance, from black to white, is divided into a histogram of 16, 32, 64, 128, 256 bins respectively for features 3A, 3B, 3C, 3D, 3E.

Category "4"
4A BW
RGB
BGR
It analyzes the local contrast level based on the luminance in the case of grayscale images and based on the chromatic components in the case of color images.
It can be used together with the "2" features in Surface projects, but it is not advisable to use it alone as it is not sufficiently characterizing.

How to choose the features?

This section gives some guidelines for choosing the features for the Retina and Surface projects.
Retina project
The guidelines for choosing features for Retina can be the following:

  1. The most important features for Retina are all those of category "0", which are the most suitable for modeling the shape of an object.
  2. The features of category "2" are also important, but only in the "variant to rotation" version.
  3. The "invariant to rotation" features rarely make a contribution.
  4. Features "3" and "4" can very rarely be used alone; usually they are placed alongside features "2" or "0".
  5. Usually Retina needs few features, and excellent results can often be obtained with just 1 feature.

A flow chart for Retina training could be the following:

  1. Optimization mode
    Start training using optimization mode equal to SB_SVL_PAR_OPTIMIZATION_TIME_FAST and only if you do not obtain good results change it to SB_SVL_PAR_OPTIMIZATION_TIME_MEDIUM and SB_SVL_PAR_OPTIMIZATION_TIME_SLOW.
    1. If the analysis time is too long
      You can delete some features by deselecting them from the set of features.
      Keep in mind that the features that take the most time are the "2C".

Surface projects
The guidelines for choosing features for Surface can be the following:

  1. The most important features for Surface are all those of category "2", which are the most suitable for modeling the texture of the surface. In fact, the default features for Surface are the "2" features in the "variant to rotation" version:

    Features 2A, 2B and 2C use luminance while all the others are the respective chromatic versions that work on a single color channel R, G, B.

  2. All the "2" features also exist in the "invariant to rotation" version:
  3. To choose between the variant and the invariant features, the following scheme may be useful:

    #   Significant dataset   Defects of any orientation   Background of any orientation   Features
    1   YES                   YES                          YES                             Variants/Invariants
    2   YES                   NO                           YES                             Variants/Invariants
    3   YES                   YES                          NO                              Variants/Invariants
    4   YES                   NO                           NO                              Variants
    5   NO                    YES                          YES                             Invariants
    6   NO                    YES                          NO                              Invariants
    7   NO                    NO                           YES                             Invariants
    8   NO                    NO                           NO                              Variants/Invariants

    "Defects of any orientation" means a defect with a main axis, for example a scratch, that does not have a specific direction but can appear at any angle.
    "Background of any orientation" means the presence of a texture that has a directionality and that can appear at any angle.
    The criterion could be the following: start with the default "variant to rotation" features, unless you are in cases 5, 6 or 7, where it is preferable to start with the "invariant to rotation" features.
    In the other cases, start with the "variants" and test the "invariants" only when the results are poor.

  4. The "0" features rarely make a contribution; they are good for Retina because they are more suitable for modeling shapes. In fact, these features have not been put into the default set for Surface.
  5. Features "3" and "4" can very rarely be used alone; usually they are placed alongside features "2" or "0".
  6. Surface usually needs a lot of features: to have good results, generally, at least 3 features are needed.

A flow chart for Surface training could be the following:

  1. Choice of scale
    The bigger the defect, the bigger the scale must be.
    If the defects have very different dimensions, set multiple scales.
  2. Choice of the variant or invariant rotation features
    Try training with all the "2" variant or invariant features and let the SVL automatically choose the features.
  3. Optimization mode
    Start training using optimization mode equal to SB_SVL_PAR_OPTIMIZATION_TIME_FAST and only if you do not obtain good results change it to SB_SVL_PAR_OPTIMIZATION_TIME_MEDIUM and SB_SVL_PAR_OPTIMIZATION_TIME_SLOW.
  4. If the analysis time is too long
    Automatic feature selection may choose many features.
    You can delete some features by deselecting them from the set of features.
    Keep in mind that the features that take the most time are the "2C".

Features speed

Another criterion for choosing the features to be used in the project is the processing speed. When the machine's cycle time is critical and you need to reduce the execution time of the sb_project_detection function as much as possible, you can set the sb_t_svl_sl_par::optimization_mode parameter to the SB_SVL_PAR_OPTIMIZATION_USE_SELECTED value, so that you can choose exactly the fastest features that suit your application (see the sketch below). As explained in the How SB works chapter, the analysis basically consists of two parts: the extraction of the features and their classification. The extraction and classification times of the features do not follow the same trend, so there are features that require less time for extraction but more time for classification and vice versa. Furthermore, as the implementation of the features is different for BW and color images, their times are also different for BW and color images.
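
A minimal sketch of this configuration, following the parameter get/set pattern used in the Levels section (the manual selection of the features themselves is assumed to have been done beforehand, e.g. in the SB GUI):

sb_t_par* par = NULL;
// 1) Get the actual parameters from the project handle
err = sb_project_get_par(sb_handle, &par);
if(err != SB_ERR_NONE) goto FnExit;
// 2) Use exactly the features selected by the user, skipping the automatic selection
par->svl.sl.optimization_mode = SB_SVL_PAR_OPTIMIZATION_USE_SELECTED;
// 3) Set the project parameters
err = sb_project_set_par(sb_handle, par);
if(err != SB_ERR_NONE) goto FnExit;
FnExit:
// 4) Destroy the project parameter structure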
Below are two tables, one for BW images and the other for color images, with the intent of giving a useful indication for the choice of features based on execution times. Since it is not possible to give absolute execution times, the times are indicated as an increase relative to the fastest feature, which is taken as a reference. These data were obtained by averaging the results of many different projects. It is emphasized that these data are only indicative: your project may yield different results, so it is always necessary to carry out verification tests.
Features are sorted by increasing runtimes.

The following table is for gray level images:

Feature     Speed    Feature         Speed
extraction  factor   classification  factor
2A          x 1.0    2B_RI           x 1.0
2B                   2A_RI
2A_RI       x 1.5    2C_RI
2B_RI                0A              x 1.5
0A                   0A_RI
0B                   2A
0B_RI                2B
0A_RI                0B
2D          x 2.5    0B_RI
2C                   2D              x 2.5
2C_RI       x 3.0    2C

The following table is for color images:

Feature     Speed    Feature         Speed
extraction  factor   classification  factor
3A          x 1.0    2B_RI.B         x 1.0
2A.R        x 2.5    2B_RI.G
2B.B                 2A_RI.R
2AA.R                2B_RI
2AA.B                2B_RI.R
2AA.G                2A_RI.B
2A.B                 2A_RI.G
2B.R                 2AA_RI.B
2B                   2A_RI
2A.G                 2AA_RI.G
2B.G                 2AA_RI.R
2A                   2C_RI.B
2AA_RI.B    x 3.5    2C_RI.G
2AA_RI.R             2C_RI
2AA_RI.G             2C_RI.R
2A_RI.B     x 4.0    0A              x 1.5
2A_RI                0A_RI
2A_RI.R              3A
2A_RI.G              2B
2B_RI                2B.B
2B_RI.G              2AA.R
2B_RI.B              2A.G
2B_RI.R              2A.R
2D.G        x 6.0    2A
2D.R                 2AA.B
2D                   2A.B
2D.B                 2AA.G
2C.B                 2B.R
2C.R                 2B.G
2C                   0B
2C.G                 0B_RI
0A_RI                2D              x 2.5
0B                   2D.B
0A                   2D.R
0B_RI                2D.G
2C_RI.B     x 9.0    2C.B
2C_RI                2C
2C_RI.R              2C.R
2C_RI.G              2C.G

Here are some considerations on how to use the previous tables.
Even if there are multiple models, the sb_project_detection function extracts the features only once for each scale level and then performs the classification for each model. So if there are multiple models it is better to prefer a feature that is faster in classification rather than one that is faster in extraction.

See also
sb_t_svl_par
sb_project_set_par
sb_project_get_par
sb_par_destroy

Levels

Attention
Surface projects only

To recognize defects with Surface, the image can be processed on different levels for each model, which means that each model can work on scales different from the others.
In this section the levels configuration is described.
Generally the image is processed at its original scale and resolution without any resize.
There are some situations when it is convenient and more effective to process the images at different scales:

  • the image resolution is very high and the defects are very big: in this case it is advisable to use a scale greater than 1.0 in order to both reduce the processing time and to improve the detection accuracy.
  • the defects have very different sizes: in this case using different levels (scales) may help the defects detection because different defects can be found on different scales.

From version 1.10.0 a new functionality has been added to the SVL of Surface projects, which automatically chooses the scale levels starting from the size of the defects.
To enable or disable this functionality use the parameter sb_t_svl_sl_par::auto_levels .
If enabled you should not modify the sb_t_par_model::levels list of the models.
The image below shows the parameter selection in the Settings menu of the SB GUI.

Automatic Levels training

The set of levels to be processed is configured with the sb_t_par_model::levels parameter and is initialized in a different way depending on the project type. See the table below.
Retina project has only 1 level because the scale is managed in another way by using the scale of the sample, see sb_t_sample::scale .
The parameter is not used by Deep Cortex and Deep Surface projects.

Project Predefined set of levels Number of levels
Retina 1 level at 1.0 scale 1 fixed level at 1.0 scale
Surface 0 levels Up to 32 levels with a scale ranging from SB_PAR_LEVEL_SCALE_MIN to SB_PAR_LEVEL_SCALE_MAX
Deep Cortex 0 levels (not used) the parameter is not used and the scale is automatically managed
Deep Surface 0 levels (not used) the parameter is not used and the scale is automatically managed

Level scales must be multiple of SB_PAR_LEVEL_SCALE_GRANULARITY and must be stored in the sb_t_par_model::levels array in ascending order. Use the functions sb_par_add_level and sb_par_remove_level to respectively add or remove a level to/from the sb_t_par_model::levels list.
These functions ensure that the list is always sorted in ascending order.

Levels configuration

Levels configuration is only possible with Surface projects.
A level can be added with the function sb_par_add_level and removed with the function sb_par_remove_level.
To enable the automatic levels scale training do the following operations,
where sb_handle is a project handle previously loaded with the function sb_project_load or created with the function sb_project_create :

sb_t_par* par = NULL;
// 1) Get the actual parameters from the project handle previously loaded or created
err = sb_project_get_par(sb_handle, &par);
if(err != SB_ERR_NONE) goto FnExit;
// 2) Enable
par->svl.sl.auto_levels = 1;
// 3) Set the project parameters
err = sb_project_set_par(sb_handle, par);
if(err != SB_ERR_NONE) goto FnExit;
FnExit:
// 4) Destroy project parameter structure
sb_t_svl_par svl
SVL parameters.
Definition: sb.h:11913
sb_t_svl_sl_par sl
Shallow Learning SVL parameters.
Definition: sb.h:11342
int auto_levels
Enable the automatic Surface levels training.
Definition: sb.h:11219

To add a level you should do the following operations, where sb_handle is a project handle previously loaded with the function sb_project_load or created with the function sb_project_create :

sb_t_par* par = NULL;
// 1) Get the actual parameters from the project handle previously loaded or created
err = sb_project_get_par(sb_handle, &par);
if(err != SB_ERR_NONE) goto FnExit;
// 2) Add level 1.0 to the model with index 0
int idx = 0;
err = sb_par_add_level(par, par->models.model[idx].name, 1.0f);
if(err != SB_ERR_NONE) goto FnExit;
// Repeat step 2 to add other levels to the models
// 3) Set the project parameters
err = sb_project_set_par(sb_handle, par);
if(err != SB_ERR_NONE) goto FnExit;
FnExit:
// 4) Destroy project parameter structure
sb_t_err sb_par_add_level(sb_t_par *const par, const char *const model_name, float scale)
Adds the level to the parameter structure.
char name[SB_PAR_STRING_LEN]
Model name.
Definition: sb.h:11500

To remove a level you should do the following operations,
where sb_handle is a project handle previously loaded with the function sb_project_load or created with the function sb_project_create :

sb_t_par* par = NULL;
// 1) Get the actual parameters from the project handle previously loaded or created
err = sb_project_get_par(sb_handle, &par);
if(err != SB_ERR_NONE) goto FnExit;
// 2) Remove the level 1.0 from model with index 0
int idx = 0;
if(par->models.model[idx].levels.size > 0)
{
err = sb_par_remove_level(par, par->models.model[idx].name, 1.0f);
if(err != SB_ERR_NONE) goto FnExit;
}
// Repeat step 2 to remove other levels from the models
// 3) Set the project parameters
err = sb_project_set_par(sb_handle, par);
if(err != SB_ERR_NONE) goto FnExit;
FnExit:
// 4) Destroy project parameter structure
sb_t_err sb_par_remove_level(sb_t_par *const par, const char *const model_name, float scale)
Removes the l-th level from the structure of the project parameters.
int size
Number of levels that is number of elements of the array level.
Definition: sb.h:11487
sb_t_par_levels levels
Array of levels parameters for this model.
Definition: sb.h:11604

To disable a level of a model you should do the following operations,
where sb_handle is a project handle previously loaded with the function sb_project_load or created with the function sb_project_create :

sb_t_par* par = NULL;
// 1) Get the actual parameters from the project handle previously loaded or created
err = sb_project_get_par(sb_handle, &par);
if(err != SB_ERR_NONE) goto FnExit;
// 2) Disable the first level of the model with index 0
int idx = 0;
if(par->models.model[idx].levels.size > 0)
{
par->models.model[idx].levels.level[0].enabled = 0;
}
// Repeat step 2 to disable other levels
// 3) Set the project parameters
err = sb_project_set_par(sb_handle, par);
if(err != SB_ERR_NONE) goto FnExit;
FnExit:
// 4) Destroy project parameter structure
int enabled
Enabling flag of the level.
Definition: sb.h:11463
sb_t_par_level level[SB_PAR_LEVELS_NUM]
Array of levels.
Definition: sb.h:11482

Levels configuration with the SB GUI

In the SB GUI, levels for each model can be configured for a Surface project in the section Settings->Model.

Levels selection
See also
sb_t_par_model::levels
sb_project_set_par
sb_project_get_par
sb_par_destroy
sb_par_add_level
sb_par_remove_level

SVL - training

SVL is the acronym for Super-Vised Learning and is the supervised procedure that carries out the training of a SqueezeBrains project using a set of labeled images.
SVL processing is composed of the following steps:

The field sb_t_svl_res::running_step is filled with the description of the current step. SVL elaborates every learning image present in the folder path specified by the parameter sb_t_svl_par::project_path and whose extension is compatible with the file extensions specified by the parameter sb_t_svl_par::image_ext . An image is marked to be used for learning with the function sb_image_info_set_type, as in the sketch below.
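
A minimal sketch of marking an image for learning; the sb_image_info_set_type arguments (the handle and the type value) are an assumption suggested by its name, so check its reference for the exact prototype:

SB_HANDLE image_info = NULL;
// Connect the image to the project
err = sb_image_info_load(&image_info, "001.jpg", sb_handle);
if(err != SB_ERR_NONE) return err;
// Mark the image as a learning image (assumed signature)
err = sb_image_info_set_type(image_info, SB_IMAGE_INFO_TYPE_SVL);
if(err != SB_ERR_NONE) return err;
// Save the "image information file" and destroy the handle
err = sb_image_info_save(image_info);
if(err != SB_ERR_NONE) return err;
err = sb_image_info_destroy(&image_info);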

SVL control

There are 3 ways to control the training process:

  • Start: the start action launches the SVL training, loading the last saved SVL status if present. In this case an incremental training is performed, thus reducing the required training time compared to a full training session.
  • Break: the break action stops the SVL training, asking the user whether to save (if possible) or not the current status. Saving the current training status is possible only if the SVL has already reached the "training recipe" step, which occurs after the "initialization" and "choose recipe" steps.
  • Reset: the reset action clears the SVL status and the current progress. The next start action will perform a complete training process from scratch (see the sketch after the figure below). A change in the training dataset may automatically force a reset of the SVL on the next Start action, see sb_t_svl_stop_reason.
    In other cases the user can choose whether to reset or not to reset the training status in order to perform an incremental training or a complete training. It is advisable to reset the SVL in some specific conditions like the ones described below:
    • The training dataset is significantly changed from the previous saved training.
    • There were some labeling errors in the previous training.
    • New images with different format have been added to the training set.
      Svl start/reset
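
A minimal sketch of a complete (reset) training, assuming sb_svl_reset and sb_svl_run take just the project handle (check their reference pages for the exact prototypes):

// Reset: clear the saved SVL status so that the next run trains from scratch (assumed signature)
err = sb_svl_reset(sb_handle);
if(err != SB_ERR_NONE) return err;
// Start: run the training; with no saved status a complete training is performed (assumed signature)
err = sb_svl_run(sb_handle);
if(err != SB_ERR_NONE) return err;
// After a successful run, the training results can be saved with sb_project_save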

SVL goodness

Attention
Retina and Surface projects only

The goodness is a measure used to evaluate the training quality of Retina and Surface projects. It is an estimation of the separation between the weight or confidence of the TRUE POSITIVE and TRUE NEGATIVE samples, i.e. between the foreground and the background or, in the case of Surface projects, between the good and the defective surface. Goodness is evaluated with the equation shown below:

Equation of goodness

The following image shows a graphic representation of the goodness. On the ordinate axis are the sample weights. If the training ends without errors, the zero is set exactly halfway between the set of TRUE POSITIVE samples and that of the TRUE NEGATIVE ones, and the goodness will be greater than or equal to 0. Otherwise, if the training ends with errors (SB_TRUTH_FALSE_POSITIVE FP or SB_TRUTH_FALSE_NEGATIVE FN), the goodness will be less than 0.

Goodness

The image below shows the SVL page of the SB GUI at the end of the training. The samples are shown in descending order of weight, starting at the top left with the TRUE POSITIVE sample with the greatest weight down to the TRUE NEGATIVE sample at the bottom with the lowest weight. Only the first 100 TRUE NEGATIVE samples per model are displayed. In practice you can see an imaginary weight axis that connects all the samples and that runs from left to right and from top to bottom. If the SVL ends without errors, the TRUE POSITIVE sample with the lowest weight and the TRUE NEGATIVE sample with the greatest weight are equal in absolute value.

Goodness
See also
sb_t_svl_sl_par::goodness_target

SVL history

When the SVL starts, it loads the history from a previously saved training and uses it to proceed from that status with an incremental training. In case no SVL history is present or the function sb_svl_reset has been called, the processing starts from the beginning. The following table illustrates some specific cases and how they affect the previously saved SVL history.

Case condition                                                 Effect (Retina/Surface)                    Effect (Deep Cortex/Deep Surface)
The models order is changed                                    SVL history is maintained                  SVL history for all models is lost
A model has been disabled                                      SVL history for that model is maintained   SVL history for all models is lost
A model has been invalidated due to model parameters changes   SVL history for that model is lost         SVL history for all models is lost
All models have been invalidated due to parameters changes     SVL history for all models is lost         SVL history for all models is lost

In order to check the current status of the SVL or to customize some behaviors and make some choices a set of callbacks is provided.

Enable/Disable SVL for a model

Attention
Retina and Surface projects only

Internally, the training works on each model independently from the others. This means that the user is free to enable/disable some models before running an SVL without affecting the training results. This may be useful when it is necessary to effectively train only a subset of the models, for example to speed up the training time. The user can enable/disable a specific model by setting the flag sb_t_par_model::enabled to 1/0, see the example Enable/disable a model in a project and the sketch below.
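
A minimal sketch of the flag usage, following the parameter get/set pattern used in the Levels section:

sb_t_par* par = NULL;
err = sb_project_get_par(sb_handle, &par);
if(err != SB_ERR_NONE) goto FnExit;
// Disable the training for the model with index 1; the enabled models are trained as usual
par->models.model[1].enabled = 0;
err = sb_project_set_par(sb_handle, par);
if(err != SB_ERR_NONE) goto FnExit;
FnExit:
// Destroy the project parameter structure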

Note
In Deep Cortex and Deep Surface projects it is not possible to run an SVL if some models are disabled.
See also
sb_t_par_model::enabled
Enable/disable a model in a project

See sb_project_set_par for changes to other parameters that invalidate the training.

See also
sb_t_svl_par
sb_project_set_par

SVL Callbacks

In the parameters structure sb_t_svl_par there are 3 callback pointers for a better integration of the SVL module in a custom software.
Of course it is possible to leave the callback pointers set to NULL; in this case the SVL will work in the predefined mode.
The parameter sb_t_svl_par::user_data is passed to each callback so that the user can find his data inside the callback.
The callbacks are described in the following.

  1. sb_fp_svl_progress

    The callback sb_t_svl_par::fp_progress is continually called to inform the caller about the progress of the SVL, for example the current accuracy.
    Most of the time the callback is called only to signal to the user that the SVL is running and isn't blocked, that is, that sb_t_svl_res::time_ms is increasing.
    When something has changed and the user should refresh the information, the callback is called with the flag force set to 1.
    The field sb_t_svl_res::running_step informs about the current step of the training.

    See also
    sb_t_svl_res
  2. sb_fp_svl_pre_elaboration

    The function sb_svl_run calls the callback sb_t_svl_par::fp_pre_elaboration for each image with type SB_IMAGE_INFO_TYPE_SVL in the folder specified with the parameter sb_t_svl_par::project_path. In the callback the user fills the sb_t_svl_pre_elaboration structure and in particular:

  3. sb_fp_svl_command

The SVL calls the callback sb_t_svl_par::fp_command in several different conditions:

The user decides the action the SVL will do with the parameter command passed to the callback. The parameter command can have the following values:

The table below reports what sb_svl_run does for each combination of the stop_reason and command parameters.

stop_reason \ command SB_SVL_COMMAND_STOP SB_SVL_COMMAND_ABORT SB_SVL_COMMAND_CONTINUE SB_SVL_COMMAND_CONTINUE_NO_RESET
SB_SVL_STOP_CONFLICT stop abort continue continue
SB_SVL_STOP_USER_REQUEST stop abort continue continue
SB_SVL_STOP_MEMORY stop stop stop stop
SB_SVL_STOP_RESET_MANDATORY stop abort reset reset
SB_SVL_STOP_RESET_OPTIONAL stop abort reset continue without reset
SB_SVL_STOP_WARNING stop abort continue continue

Stop: after sb_svl_run has finished, you can call sb_project_save to save the training results.
Abort: after sb_svl_run has finished, you cannot call sb_project_save to save the training results.

Perturbations

Perturbation is a procedure widely used in Machine Learning. It consists in generating synthetic data in order to increase the variability of the instances processed by the training algorithm. Its main objective is to make the algorithm more robust on new unseen data, i.e. to improve its generalization capability on test images.
In the SB Library perturbations are implemented differently depending on whether the project is of the Shallow Learning or the Deep Learning type.

Shallow Learning Perturbations

Shallow learning perturbations are used only by Retina projects.
In this case, the perturbation is applied per model to the samples of the image. Artificial samples obtained from the perturbation are added to the original ones of the image. Depending on the parameters, a SVL perturbation generates one or more synthetic samples.
You can see the perturbations as a sequence of operations on the image of the sample.
The operations are executed in this specific order:

  1. rotation (it is pseudo-random by default; disable sb_t_svl_par::reproducibility for a full randomness)
  2. flip around horizontal axis
  3. flip around vertical axis
  4. num_synthetic_samples rotations of the sample with a random angle within angle_range

If you need to do only a flip you should set num_synthetic_samples equal to 1 and set angle_range equal to 0.
If you need both the sample flipped around the y-axis and the one flipped around the x-axis, you should configure two perturbations, one for each flip. This solution is strongly suggested especially when the model has a symmetry, for example horizontal, vertical or circular. If the model you want to detect has a rotation variability in a specific interval of degrees, we suggest using perturbations to make the training more robust.
In the image below you can see the panel in the settings menu of the SB GUI. In the example below two perturbations have been added: a vertical flip and a horizontal flip.

Settings of the perturbation
Attention
Adding too many synthetic versions of the same sample could generate overfitting.
See also
sb_t_par_model::perturbations

Deep Learning Perturbations

The Deep Learning perturbations are used for Deep Surface and Deep Cortex projects.
In this case a perturbation is applied during SVL to the entire training image and not to a part of it (e.g. to a sample) as in the shallow learning case. In addition, a deep learning perturbation does not contribute to numerically increase the amount of images processed by an epoch (i.e. a loop over the entire training set), but only forces the use of a different perturbed version of the same image each time. When multiple perturbations are enabled, their order does not affect the global perturbed image. The operations executed by the perturbation may be of two types:

All deep learning perturbations are applied by default in a pseudo-random way depending on the current epoch. This allows, given the same training images and parameters, to be sure that each image is always perturbed in the same way at the same epoch. For a full randomness disable sb_t_svl_par::reproducibility.
In the image below you can see the Perturbations panel, where the deep learning perturbations are chosen, in the Settings menu of the SB GUI.

Settings of the Deep Learning perturbations
Attention
Enabling perturbations in Deep Surface and Deep Cortex projects is in general strongly recommended. In fact, a good practice for training Deep Convolutional Neural Networks is to avoid having the network process the same image multiple times, in order to avoid overfitting. However, perturbations always have to be set with attention to the target task: avoid applying perturbations that may change the distinctive features of the model (e.g. applying a flip to images where the object/defect model depends on the orientation).
See also
sb_t_svl_dl_par::perturbations

Lut

Sometimes the user doesn't want to elaborate the acquired image but a warped version of it. However, he wants to show the operator the acquired image, and he wants the operator to set the samples on this image. It so happens that the coordinates of the samples in the RTN files refer to the acquired image, while the SVL needs the samples referred to the warped image.
To manage this situation correctly, the SVL needs to warp the point coordinates of the samples using the lut functions.
Using these functions it is possible to create luts that map the coordinates of a point referred to the acquired image into the coordinates referred to the warped one and vice versa. The SVL will use these luts to warp the coordinates of the samples.
To use the lut the user should do the following steps.

  1. Create the lut with the function sb_lut_create , to map the coordinates of a point from warped image to the original one.
  2. Get the pointer to lut array with the function sb_lut_get_ptr .
  3. Fill the lut for each pixel of the image.
  4. Assign the function callback pointer sb_t_svl_par::fp_pre_elaboration .
  5. Set the parameters sb_t_svl_pre_elaboration::warped2ori in the callback function sb_fp_svl_pre_elaboration .
    See also
    SVL Callbacks
    sb_t_sample::centre_warp
    sb_t_lut_point
    sb_lut_create
    sb_lut_destroy
    sb_lut_get_ptr
    sb_lut_get_size
    sb_lut_save
    sb_lut_load
    sb_lut_warp_point

ROI management

Attention
Retina, Surface and Deep Surface projects only.

It is possible to create and manage a ROI (Region Of Interest) with a generic shape.
The ROI is an image with the same size as the source image.
You can set each pixel of the ROI as belonging to the ROI or not. Only the selected pixels will be elaborated.
Each pixel of the ROI has 8 bit depth.
There are two different types of ROI:

  1. Analysis ROI: this ROI defines the region of the image to be analyzed by Retina, Surface or Deep Surface. A maximum of 255 non-overlapping ROIs can be represented; the value 0 means the absence of ROI. The actual ROI levels to be processed by sb_project_detection are defined in the lut field of the sb_t_roi structure.
    Note
    Deep Cortex projects do not use the Analysis ROI because the classification always takes into account the entire image.
  2. Defects ROI: this ROI is used only by Surface or Deep Surface and defines the regions of the defects for each model. The value 0 means the absence of defect. It is possible to manage a maximum number of 127 defect models. Each model has two values: one to set the required defects and the other to set the optional defects. Levels from 255 to 129 are assigned to required defects and levels from 127 to 1 to optional defects, as shown in the table below:
    Model Required Optional
    0 255 127
    1 254 126
    . . . . . . . . .
    125 130 2
    126 129 1
    The value SB_SURFACE_OPTIONAL_GRAY_LEVEL is used for the defects that are optional for all the models.
    Note
    Optional defects are not taken into account by the training phase.
    To set a ROI you can use the following functions:
      • sb_roi_set_rect
      • sb_roi_set_circle
      • sb_roi_set_ellipse
      • sb_roi_set_data

To check if a pixel of the image will be elaborated, the functions, for instance sb_project_detection , use the following formula:

  • if (roi->lut[roi->img->data[x]] != 0) then elaborate the pixel
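
For example, if an Analysis ROI region was drawn with gray level 3, its pixels are elaborated only when roi->lut[3] != 0; setting roi->lut[3] = 0 excludes that ROI level from the elaboration without redrawing the ROI.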

In order to reduce the memory usage you can call sb_roi_compress to encode the ROI with RLE (Run Length Encoding). You can use sb_roi_decompress to decode the ROI.

Warning
The function sb_roi_evaluate_bounding_box must be called before sb_project_detection every time the ROI is edited.
In particular it is necessary to call this function when a ROI operation that deletes a portion of the ROI causes a potential change of the bounding box. In case no such operation is performed it is not necessary to call this function.

Create and set the ROI

To create a ROI for an existing image use the following procedure

// Image is an existing image
sb_t_image *img = NULL;
sb_t_roi *roi = NULL;
// ...
// Create the ROI with the size of the image
sb_t_err err = sb_roi_create(&roi, img->width, img->height);
if(err != SB_ERR_NONE) return err;
// Set the ROI as a circle of radius 200 located in the center of the image
err = sb_roi_set_circle(roi, 255, img->width/2, img->height / 2, 200, 1);
if(err != SB_ERR_NONE) return err;
sb_t_err sb_roi_set_circle(sb_t_roi *const roi, unsigned char gl, int center_x, int center_y, int radius, int reset_roi)
Sets a circular ROI.
See also
sb_roi_create
sb_roi_create_header
sb_roi_destroy
sb_roi_reset
sb_roi_evaluate_bounding_box
sb_roi_set_data
sb_roi_set_rect
sb_roi_set_circle
sb_roi_set_circular_crown
sb_roi_set_ellipse
sb_roi_set_blob
sb_roi_clone
sb_roi_compress
sb_roi_decompress
sb_roi_plot
sb_roi_apply_lut
sb_image_info_set_roi
sb_image_info_get_roi
sb_image_info_set_roi_defects
sb_image_info_get_roi_defects

Image management

The SB Library manages BW and color images with the following formats:

The Image is described by the structure sb_t_image which can be created in two modes:

  • Loading an image from file
  • Using an existing image in memory
Note
In deep learning projects (Deep Cortex, Deep Surface), the image is always automatically converted to SB_IMAGE_FORMAT_RGB888 by the sb_svl_run and sb_project_detection functions.

Loading an image from file

The function sb_image_load loads an image from file. The supported file types are:

  • .bmp
  • .pgm
  • .ppm
  • .png
  • .tif or .tiff. The following compression methods are managed: none, LZW and pack bits.
  • .jpg or .jpeg

To load an image from file use the following function:

sb_t_image* img = NULL;
sb_t_err err = sb_image_load(&img, "image.png");
if(err != SB_ERR_NONE) return err;

Using an existing image in memory

In case the image is already in memory, it is possible to create a sb_t_image object and then connect the image data pointer or copy the image data.

  • Connect the data
    sb_t_image* img = NULL;
    // Create an image header with the same format as your own image.
    // In the example we suppose a 1022 x 768 BW image with each row aligned to 1024 bytes.
    sb_t_err err = sb_image_create_header(&img, 1022, 768, 1024, SB_IMAGE_FORMAT_BW8);
    if(err != SB_ERR_NONE) return err;
    // Connect your own image data to the sb_t_image.
    // "image_data" is a buffer that contains the image data loaded in memory.
    err = sb_image_set_data(img, image_data);
    if(err != SB_ERR_NONE) return err;
  • Copy the data
    sb_t_image* img = NULL;
    // Create an image with the same format as your own image.
    // In the example we suppose a 1022 x 768 BW image. Each row is automatically aligned to 1024 bytes.
    sb_t_err err = sb_image_create(&img, 1022, 768, SB_IMAGE_FORMAT_BW8);
    if(err != SB_ERR_NONE) return err;
    // Copy the data.
    // "image_data" is a buffer that contains your own image data loaded in memory.
    memcpy(img->data, image_data, img->size);
See also
sb_image_create
sb_image_create_header
sb_image_destroy
sb_image_set_clip_blend
sb_image_get_pixel
sb_image_set_data
sb_image_clone
sb_image_copy
sb_image_copy_rect
sb_image_clean
sb_image_save
sb_image_load
sb_image_convert
sb_image_flip
sb_image_rotate
sb_image_resize
sb_image_get_sample

Device management

Attention
Deep Cortex and Deep Surface projects only

A fundamental step in configuring Deep Cortex and Deep Surface projects is the selection of the computational devices that run the inner operations of the sb_svl_run and sb_project_detection functions. Deep learning algorithms generally need a computational effort significantly higher than Retina or Surface, and this may lead to long training and detection times.
For deep learning users who want to run training, it is strongly recommended to have an NVIDIA GPU (Graphics Processing Unit) with CUDA support installed on the machine. CUDA (Compute Unified Device Architecture) is a parallel computing platform and API that allows NVIDIA GPUs to be used to accelerate processing. The official list of CUDA-enabled NVIDIA GPUs is available here: https://developer.nvidia.com/cuda-gpus .
As far as detection is concerned, alongside SB_DEVICE_CPU and SB_DEVICE_GPU_NVIDIA, SqueezeBrains also allows the user to use OpenVino compatible devices. Thus, from version 1.15.0, the following are also available: SB_DEVICE_CPU, SB_DEVICE_IGPU_INTEL and SB_DEVICE_DGPU_INTEL. They run an optimized version of the algorithm, and detection time is comparable to that obtained on NVIDIA GPUs. Note that elaboration time also depends on the Deep Learning parameters set by the user.
The user can retrieve the info about the devices currently available on the machine at any time with the function sb_get_info.

Get available devices and set a device to the project parameters

To set the computational device for Deep Cortex or Deep Surface projects, perform the following operations,
where sb_handle is a project handle previously loaded with the function sb_project_load or created with the function sb_project_create:

sb_t_err err = SB_ERR_NONE;
sb_t_par* par = NULL;
sb_t_info* info = NULL;
int str_info_size = 4096;
char* str_info = NULL;
// 1) Get the system info with the computational devices list
err = sb_get_info(&info, 1);
if(err != SB_ERR_NONE) goto FnExit;
// 2) Print the info and choose which device to use among the available ones
str_info = (char*)malloc(str_info_size);
err = sb_format_info(info, str_info, str_info_size);
if(err != SB_ERR_NONE) goto FnExit;
printf("%s", str_info);
// [...]
// devices = 4
// device 0:
// available = "yes"
// type = "CPU"
// framework = "SB", "PT", "OV"
// name = "Intel(R) Core(TM) i9-10900X CPU @ 3.70GHz"
// id = 0
// [...]
// device 1:
// available = "yes"
// type = "GPU"
// framework = "PT"
// name = "NVIDIA GeForce RTX 3090"
// id = 0
// compute_capability = "8.6"
// [...]
// device 2: // We want to set this device!!
// available = "yes"
// type = "GPU"
// framework = "PT"
// name = "NVIDIA GeForce RTX 3060""
// id = 1
// compute_capability = "8.6"
// [...]
// 3) Get the actual parameters from the project handle previously loaded or created
err = sb_project_get_par(sb_handle, &par);
if(err != SB_ERR_NONE) goto FnExit;
// 4) Set the type and id of the device you want to use
par->devices.type = SB_DEVICE_GPU_NVIDIA;
par->devices.id[0] = 1;
// 5) Set the project parameters
err = sb_project_set_par(sb_handle, par);
if(err != SB_ERR_NONE) goto FnExit;
FnExit:
if (str_info) free(str_info);
// 6) Destroy the sb library information structure
sb_destroy_info(&info);
// 7) Destroy the project parameters structure

If no device is set by the user, a default rule is applied.

It may happen that the selected device is not available on the machine. This can happen for the following reasons:

  • the machine does not have a device of that type installed and the default device has not been changed;
  • the user has set a wrong sb_t_devices_par::id, greater than the number of devices of that type available on the machine;
  • the project has been moved between machines with a different number of devices of the same type installed.

In all these cases no error is notified. Depending on the case, the following rule is applied:

  • if devices of the same type but with a lower id are installed on the machine, the device of the same type with the highest available id is automatically set;
  • if no devices of the same type are installed on the machine, the device SB_DEVICE_CPU will be selected.

In order to check which device is actually used by the sb_svl_run and sb_project_detection functions, the user can look at the value of sb_t_svl_res.devices or sb_t_res.devices respectively, which contain the devices used to compute the current results.
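
A minimal sketch of such a check (here res stands for the result structure of the last run and par for the parameters set on the project; how they are obtained is not shown):

// Hedged sketch: "res" is the result structure of the last run and "par"
// the project parameters; obtaining them is not shown here.
if(res->devices.type != par->devices.type ||
   res->devices.id[0] != par->devices.id[0])
{
    // The requested device was not available: the library silently
    // selected another device according to the rules above.
}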
Warning
Distributed training on multiple GPUs is not yet implemented in the SB Library. For this reason all elements of sb_t_devices_par::id at index > 0 will be ignored.
See also
sb_get_info
sb_device_type_format

How to choose the GPU

You can find useful advice for choosing the GPU for your project in the following document: Guida_scelta_GPU.pdf.

Parallel computing

The SB library has been developed to parallelize operations on multiple CPU threads. Today almost all processors have multiple cores, and many also have hyper-threading technology, which makes one physical core appear as two logical cores.
Parallelizing the operations reduces the calculation times.
It is difficult to predict the number of physical/logical processors that gives the lowest calculation time, because the architecture of a PC is complex and includes many other parts that are involved in the calculation and can become bottlenecks. Image processing involves a large transfer of data between memory and CPU, all the greater the larger the images, so the bandwidth of the communication buses between RAM and CPU is another particularly important parameter. Linked to this parameter, the sizes of the L1, L2 and L3 caches also matter. Of course, the CPU frequency is a determining factor too.
Given these considerations, it is clear that it is not possible to predict a priori the number of threads that minimizes the calculation time, and therefore all that remains is to run tests.
A good compromise is a number of threads equal to the number of physical cores of the CPU, but this is not always the best value. Usually the greatest reduction occurs when going from 1 to 2 threads; then, adding more threads, the time reduction gets smaller and smaller, up to the point where, instead of decreasing, the time increases again. The increase in time is mainly due to the congestion of the communication buses and of the L1, L2 and L3 caches.
The procedure we recommend to optimize the number of threads is to start from 1 thread and increase the value by 1 thread at a time: for each value you should run a test on a set of images whose total processing time is greater than 10 s (this because processors go into power-saving mode and take a while to return to the nominal speed). The optimal value is the one that minimizes the computation time.
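
A minimal sketch of such a sweep, assuming a loaded project handle sb_handle and a hypothetical helper run_test_set() that runs sb_project_detection on the whole test set and returns the elapsed seconds:

// Hedged sketch: run_test_set() is a hypothetical helper that calls
// sb_project_detection on every test image and returns the elapsed time.
sb_t_par* par = NULL;
sb_t_err err = sb_project_get_par(sb_handle, &par);
if(err != SB_ERR_NONE) return err;
double best_time = 1e30;
int best_threads = 1;
int max_threads = 8; // e.g. the number of logical cores
for(int n = 1; n <= max_threads; n++)
{
    par->num_threads = n;
    err = sb_project_set_par(sb_handle, par);
    if(err != SB_ERR_NONE) return err;
    double t = run_test_set(sb_handle); // total time should exceed 10 s
    if(t < best_time) { best_time = t; best_threads = n; }
}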
As shown in the image below, the SB GUI, in the test section, shows both the total detection time and the average detection time.

Test with SB GUI

The following image shows two examples. The test was done on an i7-4710HQ processor, which has 4 physical hyper-threading cores, hence 8 logical cores. In the first example the number of threads that minimizes the detection time is 7, while in the second it is 2.

Detection time vs number of threads: 7 is the optimal number of threads. 640x222 pixel, RGB888, 1 model, scale 1-1.9
Detection time vs number of threads: 2 is the optimal number of threads. 680x270 pixel, BW8, 20 models, scale 1
See also
sb_t_par::num_threads
sb_t_svl_par::num_threads

License

For the library to be used in all its functions it is necessary to have an active license.

License types

There are three different license types:

  • Software Demo License
    • Software license
      • An xml file with extension ".lic".
      • The function sb_init needs this file in order to verify the license.
    • 30 days expiration
      • Extendable on request by SqueezeBrains.
      • The timer starts from the first execution.
    • It requires an internet connection.
      • The license is divided into two parts, one is in the "lic" file and the other resides on the SqueezeBrains license server.
      • The logical address of the license server is licensing.fabervision.com and the TCP/IP port is 64083.
    • You can use the same file on different PCs. On each PC on which it is used, the license server issues a different license.
  • Software Master License
    • Software license
      • An xml file with extension ".lic".
      • The function sb_init needs this file in order to verify the license.
    • no expiration
    • For now this type of license is used only for internal uses.
    • The license is linked to the PC.
  • Hardware Master License

License configurations

In order to modulate the price of the license, 3 configurations have been created: Basic, Standard and Premium. The Basic configuration is the cheapest but the most limited; the Premium configuration is the most expensive but has no limitations. The configurations are based on the following parameters:

Configuration parameters
  • Training: indicates whether training is enabled or not. If it is not enabled, the function sb_svl_run will return the error SB_ERR_LICENSE_TRAINING.
  • Number of models: the maximum number of models that can be set in the sb_t_par structure. The functions sb_svl_run and sb_project_detection will return the error SB_ERR_LICENSE_MODELS_NUMBER if the project has more models than the maximum allowed by the license configuration.
  • Number of features: only for Retina and Surface modules. It is the maximum number of features that the SVL can choose automatically. See Features to know how to configure the set of features. The function sb_svl_run will return an error if the mode SB_SVL_PAR_OPTIMIZATION_USE_SELECTED is set in the structure sb_t_svl_par and the set of features contains more features than those allowed by the license configuration. The function sb_project_detection will return the error SB_ERR_LICENSE_FEATURES_NUMBER if the project has more features than those allowed by the license configuration.
  • Speed: it has 3 levels: slow, medium, fast. Each level depends on 2 parameters, the number of CPU threads (num_threads) and the speed boost (speed_boost); both parameters are in the structure sb_t_par.

The speed configurations are shown in the following table:

Speed    Number of CPU cores                                 Speed Boost
slow     1                                                   0%
medium   4                                                   0%
fast     up to the maximum CPU cores/threads of the device   100%

The functions sb_svl_run and sb_project_detection do not return an error if you set speed_boost and/or num_threads to a value greater than the maximum allowed by the license configuration: they limit the parameters to the maximum allowed value.
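
A minimal sketch (par is a sb_t_par obtained with sb_project_get_par, as in the device example above; the speed_boost field name follows the parameter named above):

// Minimal sketch: request maximum speed; if the license configuration
// allows less, the values are silently clamped, no error is returned.
par->num_threads = 16;  // e.g. all the logical cores
par->speed_boost = 100; // percent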

Note
The speed limitation has a very low effect on SVL or detection time with Deep Cortex and Deep Surface modules if the device (see sb_t_par::devices and sb_t_svl_par::devices) is a GPU_NVIDIA .


The following table shows all the properties of the configurations.

Configuration  License module        Training  Models  Features   Speed   Description
Basic          Retina runtime        no        1       1          slow    Basic Retina runtime
Basic          Retina                yes       1       1          slow    Basic Retina
Basic          Surface runtime       no        1       3          slow    Basic Surface runtime
Basic          Surface               yes       1       3          slow    Basic Surface
Basic          Deep Cortex runtime   no        1       not used   slow    Basic Deep Cortex runtime
Basic          Deep Cortex           yes       1       not used   slow    Basic Deep Cortex
Basic          Deep Surface runtime  no        1       not used   slow    Basic Deep Surface runtime
Basic          Deep Surface          yes       1       not used   slow    Basic Deep Surface
Basic          Deep Retina runtime   no        1       not used   slow    Basic Deep Retina runtime
Basic          Deep Retina           yes       1       not used   slow    Basic Deep Retina
Standard       Retina runtime        no        5       3          medium  Standard Retina runtime
Standard       Retina                yes       5       3          medium  Standard Retina
Standard       Surface runtime       no        3       6          medium  Standard Surface runtime
Standard       Surface               yes       3       6          medium  Standard Surface
Standard       Deep Cortex runtime   no        5       not used   medium  Standard Deep Cortex runtime
Standard       Deep Cortex           yes       5       not used   medium  Standard Deep Cortex
Standard       Deep Surface runtime  no        3       not used   medium  Standard Deep Surface runtime
Standard       Deep Surface          yes       3       not used   medium  Standard Deep Surface
Standard       Deep Retina runtime   no        5       not used   medium  Standard Deep Retina runtime
Standard       Deep Retina           yes       5       not used   medium  Standard Deep Retina
Premium        Retina runtime        no        64      unlimited  fast    Premium Retina runtime
Premium        Retina                yes       64      unlimited  fast    Premium Retina
Premium        Surface runtime       no        64      unlimited  fast    Premium Surface runtime
Premium        Surface               yes       64      unlimited  fast    Premium Surface
Premium        Deep Cortex runtime   no        64      not used   fast    Premium Deep Cortex runtime
Premium        Deep Cortex           yes       64      not used   fast    Premium Deep Cortex
Premium        Deep Surface runtime  no        64      not used   fast    Premium Deep Surface runtime
Premium        Deep Surface          yes       64      not used   fast    Premium Deep Surface
Premium        Deep Retina runtime   no        64      not used   fast    Premium Deep Retina runtime
Premium        Deep Retina           yes       64      not used   fast    Premium Deep Retina

A license includes up to four modules, possibly with different configurations.
You can use the function sb_license_get_info to get information about your current license configuration.

License initialization

The license is initialized by the function sb_init.
Once sb_init has finished, the license may not yet be initialized, so if the sb_svl_run or sb_project_detection function is called immediately it would return the error SB_ERR_LICENSE. So, after the sb_init function, you need to wait for the license to be validated by calling the function sb_license_get_info in a loop. See the tutorial init_library.c as an example of initialization; in particular, see the function wait_license, which waits for the license to become active.
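
A hedged sketch of such a wait loop, where license_is_active() is a hypothetical helper that calls sb_license_get_info and checks the returned status (see wait_license in init_library.c for the real implementation):

// Hedged sketch: license_is_active() is a hypothetical helper built on
// sb_license_get_info; see wait_license in init_library.c for the real code.
time_t start = time(NULL);
while(!license_is_active())
{
    // Optionally sleep briefly between polls.
    if(time(NULL) - start > 30) // arbitrary 30 s timeout
        return SB_ERR_LICENSE;
}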

Attention
When the license is invalid, it is possible to call all the library functions except for the sb_svl_run and sb_project_detection functions.

USB dongle key

When you receive the USB dongle key it will be empty, with no license enabled.
Together with the dongle you will also receive a file with v2c extension (Vendor to Customer) that will be used to enable the licenses you have purchased.
You can use the function sb_license_apply_v2c to apply the v2c file to the dongle, or you can also use the SB GUI as shown in the image below.

project property pages
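
From code, a minimal sketch (the signature of sb_license_apply_v2c, a single v2c file path, is an assumption here):

// Hedged sketch: the signature of sb_license_apply_v2c is assumed to take
// the path of the v2c file received from SqueezeBrains.
sb_t_err err = sb_license_apply_v2c("my_license.v2c");
if(err != SB_ERR_NONE) return err;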

License configuration compatibility

When you create a project you can also find out which license configuration you need.
The function sb_license_configuration_check checks if your project is compatible with a certain configuration.
The compatibility only affects the functions sb_svl_run and sb_project_detection.
There are two levels of incompatibility: an error, when the project cannot be used with the configuration, and a warning, when the project can be used but with some limitations.
See also the tutorial check_license_configuration.c for more information.

The example below shows how to check the license configuration compatibility of a project:

SB_HANDLE project = NULL;
char* msg = NULL;
sb_t_err err = SB_ERR_NONE;
// 1) Load the project with sb_project_load
. . .
// 2) Check the license configuration, e.g. against the Standard one
err = sb_license_configuration_check(project, SB_LICENSE_STANDARD, &msg);
if (err != SB_ERR_NONE)
{
// ERROR: you cannot use the project
// In the string msg you can see the reasons for the error
}
else if (msg)
{
// WARNING: you can use the project but with some limitations
// In the string msg you can see the reasons for the warning
}
else
{
// OK: you can use the project without any limitations
}
// 3) free the message string
if(msg)
sb_free(msg);
See also
sb_init
sb_license_get_hw_info
sb_license_check
sb_license_configuration_check
sb_license_get_info
sb_license_format_info
sb_license_format_module_id
sb_license_format_module_status
sb_license_format_configuration
sb_license_apply_v2c