Abstract
To ensure that ML and AI models maintain solid, consistent performance even under
variable conditions, noisy data, or disturbances in the environment, they must be
evaluated under different operating conditions. VRAIN researchers have extensive
experience in analyzing the robustness of models, that is, their ability to
generalize well across different data sets and to handle unexpected situations
effectively. At VRAIN we have developed specific methodologies for assessing the
robustness of models by measuring the impact that data complexity and noise level
have on the models.
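As a minimal illustration of this kind of noise-sensitivity evaluation (a hypothetical sketch, not VRAIN's actual methodology), one can train a model on clean data and measure how its test accuracy degrades as Gaussian noise of increasing magnitude is injected into the test inputs. The dataset, model, and noise levels below are illustrative choices:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Illustrative robustness check: train on clean data, then evaluate on
# test inputs perturbed with Gaussian noise of increasing standard
# deviation, recording the accuracy at each noise level.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

rng = np.random.default_rng(0)
accuracies = {}
for sigma in [0.0, 0.1, 0.5, 1.0]:
    noisy_X = X_test + rng.normal(0.0, sigma, size=X_test.shape)
    accuracies[sigma] = accuracy_score(y_test, model.predict(noisy_X))

for sigma, acc in accuracies.items():
    print(f"noise sigma={sigma}: accuracy={acc:.3f}")
```

Plotting accuracy against noise magnitude in this way gives a simple robustness profile; a model whose curve decays slowly is more tolerant of input perturbations than one whose accuracy collapses at low noise levels.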