SB SDK 1.11
sb_t_svl_dl_par Struct Reference

SVL parameters to configure the Deep Learning training. More...

#include <sb.h>

Collaboration diagram for sb_t_svl_dl_par:

Data Fields

char network_path [512]
 Network weights file path with extension SB_DL_WEIGHTS_EXT. More...
 
sb_t_svl_dl_par_network network
 Network parameters. More...
 
int pre_trained
 The network is loaded as pre-trained, i.e. network parameters are not randomly initialized before training but start from a pre-existing configuration. More...
 
sb_t_svl_dl_par_perturbation perturbations
 Perturbations for deep learning training. More...
 
float learning_rate
 Learning rate. More...
 
int num_epochs
 Number of epochs. More...
 
int batch_size
 Size of the batch used during SVL. More...
 
float validation_percentage
 Validation percentage. More...
 
int save_best
 At the end of the training, the best internal parameters configuration is recovered. More...
 
sb_t_size tile_factor
 Number of horizontal and vertical tiles used to process the image. More...
 

Detailed Description

SVL parameters to configure the Deep Learning training.

Used only by Deep Cortex and Deep Surface projects.

Definition at line 8986 of file sb.h.

Field Documentation

◆ batch_size

int sb_t_svl_dl_par::batch_size

Size of the batch used during SVL.

The number of data items to be processed before each update of the network weights.
The batch size must be a power of 2, greater than or equal to SB_SVL_DL_BATCH_SIZE_MIN and less than or equal to SB_SVL_DL_BATCH_SIZE_MAX .
Higher batch size values on computational devices with limited memory resources may cause SB_ERR_DL_CUDA_OUT_OF_MEMORY .
There is no general rule to determine the optimal batch size; however, usual values range from 4 to 256, depending on the number of images in the training dataset.
Deep Cortex projects usually require a larger batch size than Deep Surface projects.
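The constraints above (power of 2, within the allowed range) can be checked before starting a training. A minimal sketch, assuming the concrete MIN/MAX values are passed in (in real code use SB_SVL_DL_BATCH_SIZE_MIN and SB_SVL_DL_BATCH_SIZE_MAX from sb.h); the helper name is illustrative and not part of the SDK:

```c
/* Returns 1 if b is a valid batch size: within [min, max]
   and a power of 2 (b & (b - 1) clears the lowest set bit,
   so the result is 0 exactly for powers of 2 when b > 0). */
int is_valid_batch_size(int b, int min, int max)
{
    if (b < min || b > max)
        return 0;
    return (b & (b - 1)) == 0;
}
```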

Definition at line 9043 of file sb.h.

◆ learning_rate

float sb_t_svl_dl_par::learning_rate

Learning rate.

It represents the step size at each iteration while moving toward a minimum of the loss function. When setting a learning rate, there is a trade-off between the rate of convergence and overfitting: setting too small a learning rate may lead to overfitting.
The default value is SB_SVL_DL_LEARNING_RATE_DEFAULT . Values range from SB_SVL_DL_LEARNING_RATE_MIN to SB_SVL_DL_LEARNING_RATE_MAX .

Warning
Readable only.

Definition at line 9027 of file sb.h.

◆ network

sb_t_svl_dl_par_network sb_t_svl_dl_par::network

Network parameters.

Set of parameters defining the structure of the network.

Definition at line 9001 of file sb.h.

◆ network_path

char sb_t_svl_dl_par::network_path[512]

Network weights file path with extension SB_DL_WEIGHTS_EXT.

Path to the file containing a pre-trained SVL parameter configuration, used if sb_t_svl_dl_par::pre_trained is enabled.
Currently pre-training exists only for the following network types: SB_NETWORK_TYPE_EFFICIENTNET_B0, SB_NETWORK_TYPE_EFFICIENTNET_B1 and SB_NETWORK_TYPE_EFFICIENTNET_B2. The pre-training is the official one released by Libtorch, computed on the ImageNet dataset (official website: https://www.image-net.org).
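A configuration sketch, assuming a populated sb_t_svl_dl_par named `par`; the field names are from this page, while the weight-file name is illustrative and SB_DL_WEIGHTS_EXT is assumed to expand to a string literal:

```c
/* Enable pre-training and point network_path at the weights file.
   Hypothetical path: substitute the actual location of the
   EfficientNet weights released with the SDK. */
par.pre_trained = 1;
snprintf(par.network_path, sizeof(par.network_path),
         "%s", "weights/efficientnet_b0" SB_DL_WEIGHTS_EXT);
```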

Definition at line 8995 of file sb.h.

◆ num_epochs

int sb_t_svl_dl_par::num_epochs

Number of epochs.

The number of epochs is the number of complete SVL passes through the entire training dataset.
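From the definitions of epoch and batch, the total number of weight updates over a training can be estimated. A sketch with an illustrative helper (not part of the SDK), assuming a final partial batch also triggers an update:

```c
/* Total weight updates = batches per epoch * epochs,
   where batches per epoch is ceil(n_images / batch_size). */
int total_weight_updates(int n_images, int batch_size, int num_epochs)
{
    int batches_per_epoch = (n_images + batch_size - 1) / batch_size;
    return batches_per_epoch * num_epochs;
}
```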

Definition at line 9033 of file sb.h.

◆ perturbations

sb_t_svl_dl_par_perturbation sb_t_svl_dl_par::perturbations

Perturbations for deep learning training.

See also
Deep Learning Perturbations

Definition at line 9016 of file sb.h.

◆ pre_trained

int sb_t_svl_dl_par::pre_trained

The network is loaded as pre-trained, i.e. network parameters are not randomly initialized before training but they start from a pre-existing configuration.

Using a pre-trained network has great advantages and usually leads to better results and faster training than training from scratch, provided that the pre-trained network has been properly trained and its learned parameters fit the current vision task well.
Pre-trained weights are not reset after an SVL reset.

Warning
Readable only.

Definition at line 9010 of file sb.h.

◆ save_best

int sb_t_svl_dl_par::save_best

At the end of the training, the best internal parameters configuration is recovered.

The best internal parameters configuration is the value of the weights at the epoch with the lowest validation loss. If training validation is disabled, the epoch with the lowest training loss is selected instead.
Set to 0 to disable.

Definition at line 9070 of file sb.h.

◆ tile_factor

sb_t_size sb_t_svl_dl_par::tile_factor

Number of horizontal and vertical tiles used to process the image.

Used only by Deep Surface projects.
Pair of values that defines a grid scheme used to subdivide the original image, both horizontally and vertically, into tile_factor.width * tile_factor.height tiled images. Each tile is processed by the sb_svl_run and sb_project_detection functions as a single image.
Applying a tile factor > {1, 1} is useful to increase the image resolution at the input of the elaboration algorithm, especially when the sb_t_svl_dl_par_network::input_size of the network is significantly lower than the resolution of the image. This may help to detect small defect instances and to obtain a more accurate segmentation at the pixel level.
On the other hand, the higher the number of tiles in the grid, the higher the training and detection time, and the higher the GPU usage during detection.
Values range from SB_SVL_DL_TILE_FACTOR_MIN to SB_SVL_DL_TILE_FACTOR_MAX .
The default value is {1, 1}, which means disabled.

Tiling grid at different tile factor values
Note
To set the optimal tile factor, the user must take into account the minimum defect size and defect granularity required for the current vision task along each direction. A basic guideline is to use a tile factor that satisfies the following inequalities:

\[ tile\, factor_{i} \ge \left\lceil \frac{image\, resolution_{i}}{(network\, input\, size \cdot 0.9) \cdot min\, defect\, size_{i}} \right\rceil \quad \textrm{with} \quad i=x,y \]


Attention
Calling the function sb_project_set_par after a change of the tile factor invalidates the training of all the models.

Definition at line 9091 of file sb.h.

◆ validation_percentage

float sb_t_svl_dl_par::validation_percentage

Validation percentage.

Percentage of the training images used to validate the training.
The resulting number of images is rounded down to the nearest integer.
With incremental SVL:

  • If the number of validation images increases, because a higher validation percentage is set or some images are added to the training set, the user is asked whether to exit and reset the SVL or to continue. If they continue, the current validation images are kept and the new ones are randomly chosen from the training set.
  • If the number of validation images decreases, because a lower validation percentage is set or some images are removed from the training set, some images are randomly removed from the current validation set.

The use of validation increases the amount of memory required.
The value ranges from SB_SVL_DL_VALIDATION_PERCENTAGE_MIN to SB_SVL_DL_VALIDATION_PERCENTAGE_MAX .
The default value is 0.
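The rounding rule above can be made concrete. A sketch with an illustrative helper (not part of the SDK), assuming "rounded down to the nearest integer" means truncation of the fractional part:

```c
/* Number of validation images implied by a percentage of the
   training set; the cast truncates toward zero, i.e. rounds down
   for non-negative values. */
int validation_image_count(int n_training_images, float percentage)
{
    return (int)(n_training_images * (percentage / 100.0f));
}
```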

Attention
With Deep Cortex projects it is advisable to use validation only with datasets containing many SVL images.

Definition at line 9062 of file sb.h.


The documentation for this struct was generated from the following file: sb.h