SqueezeBrains SDK 1.13
sb_t_svl_dl_par Struct Reference

SVL parameters to configure the Deep Learning training. More...

#include <sb.h>


Data Fields

char network_path [512]
 Network weights file path with extension SB_DL_WEIGHTS_EXT.

sb_t_svl_dl_par_network network
 Network parameters.

int pre_trained
 The network is loaded as pre-trained, i.e. network parameters are not randomly initialized before training but start from a pre-existing configuration.

sb_t_svl_dl_par_perturbation perturbations
 Perturbations for deep learning training.

float learning_rate
 Learning rate.

int num_epochs
 Number of epochs.

int batch_size
 Size of the batch used during SVL.

float validation_percentage
 Validation percentage.

int save_best
 At the end of the training, the best internal parameter configuration is restored.

sb_t_size tile_factor
 Number of horizontal and vertical tiles used to process the image.

int auto_tiling
 Enable automatic tiling for image processing.

sb_t_size_flt scale
 Scale applied to the image before processing.

sb_t_loss_fn_type loss_fn
 Loss function.
 

Detailed Description

SVL parameters to configure the Deep Learning training.

Used only by Deep Cortex and Deep Surface projects.

Definition at line 9865 of file sb.h.
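
As an informal sketch of how these fields fit together, the fragment below fills a few of the training parameters. How the sb_t_svl_dl_par instance is obtained from the project parameters and written back (e.g. with sb_project_set_par) is not shown, and the numeric values are placeholders that must respect the documented ranges.

#include <sb.h>

/* Sketch only: placeholder values, to be kept within the documented ranges
 * (SB_SVL_DL_BATCH_SIZE_MIN/MAX, SB_SVL_DL_LEARNING_RATE_MIN/MAX, ...). */
void configure_dl_training(sb_t_svl_dl_par* dl)
{
    dl->num_epochs = 50;   /* complete SVL passes over the training dataset */
    dl->batch_size = 16;   /* power of 2; larger values need more GPU memory */
    dl->save_best  = 1;    /* restore the weights of the best epoch at the end */
}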

Field Documentation

◆ auto_tiling

int sb_t_svl_dl_par::auto_tiling

Enable automatic tiling for image processing.

This feature makes it easier to use the library when you need to analyze images of variable resolution while keeping constant the pixel/mm ratio used by the deep learning network to process the images. If enabled, tiling is applied automatically according to the sb_t_svl_dl_par::scale parameter.
The image below shows the case with 3x2 tiles. The tiles are placed so that the image is centered.
Tiles that contain no analysis ROI (orange) are not processed, while tiles that do (green) are.

Automatic tiling grid

Images with different resolutions may be subdivided into a different number of tiles.
Note that the analysis time increases with the image resolution, because the automatic tiling procedure has to use more tiles to cover the whole image. However, as shown in the image below, the number of tiles does not vary over a wide range of image sizes.
The sb_par_auto_tiling_get_img_range function returns the range of image resolutions covered by a given scale factor and number of tiles.

Horizontal image resolution range with 3 tiles

Used only by Deep Surface projects.

Default
Default value is 0, i.e. disabled.
Attention
Calling sb_project_set_par after changing this parameter invalidates the training of all the models.
See also
sb_par_auto_tiling_get_img_range
sb_t_svl_dl_par::scale
sb_t_svl_dl_par::tile_factor
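
A minimal sketch of enabling automatic tiling, assuming sb_t_size_flt exposes width and height members (an assumption here) and that the structure is later applied with sb_project_set_par:

/* Sketch only: the 0.5 scale is a placeholder and must lie within
 * SB_SVL_DL_SCALE_MIN..SB_SVL_DL_SCALE_MAX; the width/height member names of
 * sb_t_size_flt are assumed. */
void enable_auto_tiling(sb_t_svl_dl_par* dl)
{
    dl->auto_tiling  = 1;     /* the tiling grid is derived automatically */
    dl->scale.width  = 0.5f;  /* pixel scale used by the network, horizontally */
    dl->scale.height = 0.5f;  /* and vertically */
    /* tile_factor is ignored while auto_tiling is enabled. */
}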

Definition at line 9998 of file sb.h.

◆ batch_size

int sb_t_svl_dl_par::batch_size

Size of the batch used during SVL.

The number of samples processed before each update of the network weights.
The batch size must be a power of 2, greater than or equal to SB_SVL_DL_BATCH_SIZE_MIN and less than or equal to SB_SVL_DL_BATCH_SIZE_MAX.
Higher batch size values on a computing device with limited memory may cause SB_ERR_DL_CUDA_OUT_OF_MEMORY.
There is no general rule for determining the optimal batch size; typical values range from 4 to 256, depending on the number of images in the training dataset.
Deep Cortex projects usually require a larger batch size than Deep Surface projects.
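
As a sketch of the constraints above, a small helper (hypothetical, not part of the SDK) can verify that a candidate batch size is a power of 2 inside the documented bounds:

#include <stdbool.h>
#include <sb.h>

/* Returns true when n is a power of 2 within
 * [SB_SVL_DL_BATCH_SIZE_MIN, SB_SVL_DL_BATCH_SIZE_MAX]. */
static bool is_valid_batch_size(int n)
{
    bool power_of_two = (n > 0) && ((n & (n - 1)) == 0);
    return power_of_two &&
           n >= SB_SVL_DL_BATCH_SIZE_MIN &&
           n <= SB_SVL_DL_BATCH_SIZE_MAX;
}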

Definition at line 9922 of file sb.h.

◆ learning_rate

float sb_t_svl_dl_par::learning_rate

Learning rate.

It represents the step size at each iteration while moving toward a minimum of the loss function. When setting the learning rate there is a trade-off between the rate of convergence and overfitting: setting too small a learning rate may lead to overfitting.
Values range from SB_SVL_DL_LEARNING_RATE_MIN to SB_SVL_DL_LEARNING_RATE_MAX.

Default
The default value is SB_SVL_DL_LEARNING_RATE_DEFAULT.
Warning
Read-only.

Definition at line 9906 of file sb.h.

◆ loss_fn

sb_t_loss_fn_type sb_t_svl_dl_par::loss_fn

Loss function.

Used only by Deep Surface projects.
It is the function used to compute the loss/error to be minimized at each training step, i.e. at the end of every batch.

Warning
Read-only.

Definition at line 10021 of file sb.h.

◆ network

sb_t_svl_dl_par_network sb_t_svl_dl_par::network

Network parameters.

Set of parameters defining the structure of the network.

Definition at line 9880 of file sb.h.

◆ network_path

char sb_t_svl_dl_par::network_path[512]

Network weights file path with extension SB_DL_WEIGHTS_EXT.

Path to the file containing a pre-trained SVL parameter configuration, used if sb_t_svl_dl_par::pre_trained is enabled.
Currently, pre-training exists only for the following network types: SB_NETWORK_TYPE_EFFICIENTNET_B0, SB_NETWORK_TYPE_EFFICIENTNET_B1 and SB_NETWORK_TYPE_EFFICIENTNET_B2. The pre-training is the official one released by Libtorch, computed on the ImageNet dataset (official website: https://www.image-net.org).
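
A minimal sketch of filling the fixed-size path buffer; the file name argument is a placeholder and snprintf is used only to guarantee null termination within the 512-byte buffer:

#include <stdio.h>
#include <sb.h>

/* Sketch only: weights_file is a placeholder path ending with SB_DL_WEIGHTS_EXT. */
void set_pretrained_weights_path(sb_t_svl_dl_par* dl, const char* weights_file)
{
    snprintf(dl->network_path, sizeof(dl->network_path), "%s", weights_file);
}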

Definition at line 9874 of file sb.h.

◆ num_epochs

int sb_t_svl_dl_par::num_epochs

Number of epochs.

The number of epochs is the number of complete SVL passes through the entire training dataset.

Definition at line 9912 of file sb.h.

◆ perturbations

sb_t_svl_dl_par_perturbation sb_t_svl_dl_par::perturbations

Perturbations for deep learning training.

See also
Deep Learning Perturbations

Definition at line 9895 of file sb.h.

◆ pre_trained

int sb_t_svl_dl_par::pre_trained

The network is loaded as pre-trained, i.e. network parameters are not randomly initialized before training but start from a pre-existing configuration.

Using a pre-trained network has great advantages and usually leads to better results and faster training than training from scratch, provided that the pre-trained network has been properly trained and its learned parameters fit the current vision task well.
Pre-trained weights are not reset after an SVL reset.

Warning
Read-only.

Definition at line 9889 of file sb.h.

◆ save_best

int sb_t_svl_dl_par::save_best

At the end of the training, the best internal parameter configuration is restored.

The best internal parameter configuration is the value of the weights at the epoch with the lowest validation loss. If validation is disabled, the epoch with the lowest training loss is selected.
0 means disabled.

Definition at line 9949 of file sb.h.

◆ scale

sb_t_size_flt sb_t_svl_dl_par::scale

Scale applied to the image before processing.

Used only by Deep Surface projects and if sb_t_svl_dl_par::auto_tiling is enabled.
Pair of values that affects processing in both the training and the detection routines, determining the number of tiles actually used. Numerically, it defines the horizontal and vertical scale at which each pixel of the original image is processed by the algorithm.
Possible values range from SB_SVL_DL_SCALE_MIN to SB_SVL_DL_SCALE_MAX with granularity SB_SVL_DL_SCALE_GRANULARITY.

Default
Default value is (1.0, 1.0).
Attention
Calling sb_project_set_par after changing the scale invalidates the training of all the models.
See also
sb_par_auto_tiling_get_img_range
sb_t_svl_dl_par::auto_tiling
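
A small sketch of clamping a requested scale to the allowed range and snapping it to the documented granularity; whether the granularity grid is anchored at zero or at SB_SVL_DL_SCALE_MIN is an assumption made here:

#include <math.h>
#include <sb.h>

/* Hypothetical helper: clamp the value and snap it to the nearest multiple of
 * SB_SVL_DL_SCALE_GRANULARITY (grid assumed to be anchored at zero). */
static float snap_scale(float requested)
{
    float s = roundf(requested / SB_SVL_DL_SCALE_GRANULARITY)
              * SB_SVL_DL_SCALE_GRANULARITY;
    if (s < SB_SVL_DL_SCALE_MIN) s = SB_SVL_DL_SCALE_MIN;
    if (s > SB_SVL_DL_SCALE_MAX) s = SB_SVL_DL_SCALE_MAX;
    return s;
}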

Definition at line 10012 of file sb.h.

◆ tile_factor

sb_t_size sb_t_svl_dl_par::tile_factor

Number of horizontal and vertical tiles used to process the image.

Used only by Deep Surface projects and if sb_t_svl_dl_par::auto_tiling is disabled.
Pair of values that defines the grid used to subdivide the original image, both horizontally and vertically, into tile_factor.width * tile_factor.height tiled images. Each tile is processed by the sb_svl_run and sb_project_detection functions as a single image.
Applying a tile factor > {1, 1} is useful to increase the image resolution at the input of the processing algorithm, especially when the sb_t_svl_dl_par_network::input_size of the network is significantly lower than the resolution of the image. This may help to detect small defect instances and to obtain a more accurate pixel-level segmentation.
On the other hand, the higher the number of tiles in the grid, the higher the training and detection time and the GPU memory usage during detection.
Values range from SB_SVL_DL_TILE_FACTOR_MIN to SB_SVL_DL_TILE_FACTOR_MAX.
If the image must be processed with a defined scale factor in order to guarantee the minimum defect size, automatic tiling can be enabled to set the required scale.

Default
Default value is {1, 1}, which means disabled.
Tiling grid at different tile factor values
Note
To set the optimal tile factor, the user must take into account the minimum defect size and defect granularity required for the current vision task along each direction. A basic guideline is to use a tile factor that satisfies the following inequalities:

\[ \mathrm{tile\ factor}_{i} \geq \left\lceil \frac{\mathrm{image\ resolution}_{i}}{(\mathrm{network\ input\ size} - 32) \cdot \mathrm{min\ defect\ size}_{i}} \right\rceil \quad \textrm{with} \quad i = x, y \]
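
As a sketch of this guideline, the helper below evaluates the inequality along one axis; the argument names are placeholders and the minimum defect size is assumed to be expressed in pixels of the original image:

#include <math.h>

/* Hypothetical helper implementing the guideline above for one axis (x or y). */
static int suggested_tile_factor(int image_resolution,    /* image size along the axis, in pixels  */
                                 int network_input_size,  /* sb_t_svl_dl_par_network::input_size   */
                                 double min_defect_size)  /* minimum defect size along the axis, px */
{
    double tiles = (double)image_resolution /
                   ((network_input_size - 32) * min_defect_size);
    return (int)ceil(tiles);
}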


Attention
Calling sb_project_set_par after changing the tile factor invalidates the project training.
See also
sb_t_svl_dl_par::auto_tiling

Definition at line 9973 of file sb.h.

◆ validation_percentage

float sb_t_svl_dl_par::validation_percentage

Validation percentage.

Percentage of the training images to be used to validate the training.
The resulting number of validation images is rounded down to the nearest integer.
With incremental SVL:

  • If the number of validation images increases, because a higher validation percentage is set or images are added to the training, the user is asked whether to exit and reset the SVL or to continue. In the latter case, the current validation images are kept and the new ones are randomly chosen from the training set.
  • If the number of validation images decreases, because a lower validation percentage is set or images are removed from the training, some images are randomly removed from the current validation set.

The use of validation increases the amount of memory required.
The value ranges from SB_SVL_DL_VALIDATION_PERCENTAGE_MIN to SB_SVL_DL_VALIDATION_PERCENTAGE_MAX.

Default
The default value is 0.
Attention
With Deep Cortex projects, it is advisable to use validation only on datasets with many SVL images.

Definition at line 9941 of file sb.h.


The documentation for this struct was generated from the following file: