SB SDK 1.11
SVL parameters to configure the Deep Learning training.
#include <sb.h>
Data Fields

char network_path[512]
    Network weights file path with extension SB_DL_WEIGHTS_EXT.

sb_t_svl_dl_par_network network
    Network parameters.

int pre_trained
    The network is loaded as pre-trained, i.e. network parameters are not randomly initialized before training but start from a pre-existing configuration.

sb_t_svl_dl_par_perturbation perturbations
    Perturbations for deep learning training.

float learning_rate
    Learning rate.

int num_epochs
    Number of epochs.

int batch_size
    Size of the batch used during SVL.

float validation_percentage
    Validation percentage.

int save_best
    At the end of the training, the best internal parameters configuration is recovered.

sb_t_size tile_factor
    Number of horizontal and vertical tiles used to process the image.
SVL parameters to configure the Deep Learning training.
Used only by Deep Cortex and Deep Surface projects.
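The following minimal sketch shows how these fields might be filled in. The field names and the SB_SVL_DL_* constants are those documented on this page; the numeric values are illustrative only, and the way the structure is read from and written back to the project through the SDK's parameter functions is not shown here.

#include <sb.h>

/* Minimal sketch: fill in the Deep Learning SVL parameters.
   Field names and constants are those documented on this page;
   the numeric values are illustrative only. */
static void configure_dl_svl(sb_t_svl_dl_par* dl)
{
    dl->learning_rate         = SB_SVL_DL_LEARNING_RATE_DEFAULT;
    dl->num_epochs            = 50;    /* example value, task dependent */
    dl->batch_size            = 16;    /* power of 2 within the allowed range */
    dl->validation_percentage = 20.0f; /* 20% of the training images */
    dl->save_best             = 1;     /* recover the best epoch at the end */
    dl->pre_trained           = 0;     /* start from randomly initialized weights */
    dl->tile_factor.width     = 1;     /* {1, 1} means tiling disabled */
    dl->tile_factor.height    = 1;
}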
int sb_t_svl_dl_par::batch_size
Size of the batch used during SVL.
The number of samples processed before each update of the network weights.
The batch size must be a power of 2, greater than or equal to SB_SVL_DL_BATCH_SIZE_MIN and less than or equal to SB_SVL_DL_BATCH_SIZE_MAX.
Higher batch size values on computational devices with limited memory resources may cause SB_ERR_DL_CUDA_OUT_OF_MEMORY.
There is no general rule to determine the optimal batch size. However, typical values range from 4 to 256, depending on the number of images in the training dataset.
Deep Cortex projects usually require a larger batch size than Deep Surface projects.
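A small helper along these lines can be used to check a candidate batch size against the constraints above; it is only a sketch, not a function provided by the SDK.

/* Sketch: check a candidate batch size against the documented constraints
   (power of 2, within SB_SVL_DL_BATCH_SIZE_MIN..SB_SVL_DL_BATCH_SIZE_MAX). */
static int is_valid_batch_size(int batch_size)
{
    int power_of_two = (batch_size > 0) && ((batch_size & (batch_size - 1)) == 0);
    return power_of_two &&
           batch_size >= SB_SVL_DL_BATCH_SIZE_MIN &&
           batch_size <= SB_SVL_DL_BATCH_SIZE_MAX;
}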
float sb_t_svl_dl_par::learning_rate
Learning rate.
It represents the step size at each iteration while moving toward a minimum of the loss function. When setting the learning rate, there is a trade-off between the rate of convergence and overfitting: too small a learning rate value may lead to overfitting.
The default value is SB_SVL_DL_LEARNING_RATE_DEFAULT. Values range from SB_SVL_DL_LEARNING_RATE_MIN to SB_SVL_DL_LEARNING_RATE_MAX.
sb_t_svl_dl_par_network sb_t_svl_dl_par::network
Network parameters.
char sb_t_svl_dl_par::network_path[512]
Network weights file path with extension SB_DL_WEIGHTS_EXT.
Path to the file containing a pre-trained SVL parameter configuration, used if sb_t_svl_dl_par::pre_trained is enabled.
Currently, pre-training is available only for the following network types: SB_NETWORK_TYPE_EFFICIENTNET_B0, SB_NETWORK_TYPE_EFFICIENTNET_B1 and SB_NETWORK_TYPE_EFFICIENTNET_B2. The pre-trained weights are the official ones released by Libtorch, trained on the ImageNet dataset (official website: https://www.image-net.org).
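For example, to start from one of the pre-trained EfficientNet configurations, the two fields can be set together as in the sketch below; the weights file path is purely illustrative and must point to a file with the SB_DL_WEIGHTS_EXT extension matching the selected network type.

#include <stdio.h>
#include <sb.h>

/* Sketch: enable pre-training and point network_path to the weights file.
   The weights_file argument is illustrative; the file must have the
   SB_DL_WEIGHTS_EXT extension and match the selected network type. */
static void use_pretrained_weights(sb_t_svl_dl_par* dl, const char* weights_file)
{
    dl->pre_trained = 1;
    snprintf(dl->network_path, sizeof(dl->network_path), "%s", weights_file);
}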
int sb_t_svl_dl_par::num_epochs
Number of epochs.
sb_t_svl_dl_par_perturbation sb_t_svl_dl_par::perturbations
Perturbations for deep learning training.
int sb_t_svl_dl_par::pre_trained
The network is loaded as pre-trained, i.e. network parameters are not randomly initialized before training but they start from a pre-existing configuration.
Using a pre-trained network has great advantages and usually leads to better results and faster training than training from scratch, provided that the pre-trained network has been properly trained and its learned parameters fit the current vision task well.
Pre-trained weights are not reset after an SVL reset.
int sb_t_svl_dl_par::save_best
At the end of the training, the best internal parameters configuration is recovered.
The best internal parameters configuration is the value of the weights at the epoch with the lowest validation loss. If training validation is disabled, the epoch with the lowest training loss is selected.
0 means disabled.
sb_t_size sb_t_svl_dl_par::tile_factor
Number of horizontal and vertical tiles used to process the image.
Used only by Deep Surface projects.
Pair of values that defines a grid scheme used to subdivide the original image, both horizontally and vertically, into tile_factor.width * tile_factor.height tiles. Each tile is processed by the sb_svl_run and sb_project_detection functions as a single image.
Applying a tile factor > {1, 1} is useful to increase the effective image resolution at the input of the processing algorithm, especially when the sb_t_svl_dl_par_network::input_size of the network is significantly lower than the resolution of the image. This may help detect small defect instances and produce a more accurate segmentation at pixel level.
On the other hand, the higher the number of tiles in the grid, the higher the training and detection time and the GPU usage during detection.
Values range from SB_SVL_DL_TILE_FACTOR_MIN to SB_SVL_DL_TILE_FACTOR_MAX.
The default value is {1, 1}, which means disabled.
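As a rough illustration of the trade-off above, the sketch below chooses a tile factor so that each tile roughly matches the network input size. The image size and input size arguments are placeholders, and SB_SVL_DL_TILE_FACTOR_MIN / SB_SVL_DL_TILE_FACTOR_MAX are assumed to be per-dimension integer bounds.

/* Sketch: choose a tile factor so that each tile roughly matches the
   network input size. image_w/image_h are the image resolution,
   input_w/input_h the network input size (placeholders). */
static void choose_tile_factor(sb_t_svl_dl_par* dl,
                               int image_w, int image_h,
                               int input_w, int input_h)
{
    int fx = (image_w + input_w - 1) / input_w; /* ceil(image_w / input_w) */
    int fy = (image_h + input_h - 1) / input_h; /* ceil(image_h / input_h) */
    if (fx < SB_SVL_DL_TILE_FACTOR_MIN) fx = SB_SVL_DL_TILE_FACTOR_MIN;
    if (fx > SB_SVL_DL_TILE_FACTOR_MAX) fx = SB_SVL_DL_TILE_FACTOR_MAX;
    if (fy < SB_SVL_DL_TILE_FACTOR_MIN) fy = SB_SVL_DL_TILE_FACTOR_MIN;
    if (fy > SB_SVL_DL_TILE_FACTOR_MAX) fy = SB_SVL_DL_TILE_FACTOR_MAX;
    dl->tile_factor.width  = fx;
    dl->tile_factor.height = fy;
}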
float sb_t_svl_dl_par::validation_percentage
Validation percentage.
Percentage of the training images to be used to validate the training.
The resulting number of validation images is rounded down to the nearest integer.
With incremental SVL:
The use of validation increases the amount of memory required.
The value ranges from SB_SVL_DL_VALIDATION_PERCENTAGE_MIN to SB_SVL_DL_VALIDATION_PERCENTAGE_MAX.
The default value is 0.
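For example, with 47 training images and a validation percentage of 20, floor(47 * 20 / 100) = 9 images are reserved for validation. A sketch of the computation, assuming the percentage is expressed on a 0-100 scale:

/* Sketch: number of training images reserved for validation,
   rounded down to the nearest integer as described above. */
static int validation_image_count(int num_training_images, float validation_percentage)
{
    return (int)(num_training_images * validation_percentage / 100.0f);
}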