In recent years, artificial intelligence has experienced explosive growth thanks to neural networks, which
have enabled significant advances in computer vision and natural
language processing, including the use of large language models
(LLMs) and chatbots developed by companies such as OpenAI
and Google. This widespread adoption has brought artificial
intelligence to millions of people.
However, a major challenge with these models is their lack
of explainability, as they are often seen as black boxes with
unclear decision-making processes. To address this issue, the field
of Explainable AI (XAI) has emerged.
This paper investigates the confidence levels generated by LIME (Local Interpretable Model-Agnostic Explanations) explanations of predictions made by convolutional neural networks. The
study involved a questionnaire administered to a sample of 43
participants, who were asked to discriminate between predictions
made by a trained and an untrained network. The results showed
that participants correctly identified the predictions
for 82.91% of the 20 images included in the survey.
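For context, the sketch below illustrates how a LIME explanation of this kind might be produced for a single image prediction with the open-source lime package. The paper does not specify the network or the data, so the classifier function (predict_fn) and the input image here are placeholders; only the standard package calls (LimeImageExplainer, explain_instance, get_image_and_mask) are assumed.

    # Minimal sketch, assuming the `lime` and `numpy` packages are installed.
    # `predict_fn` and `image` are placeholders, not the paper's actual CNN or data.
    import numpy as np
    from lime import lime_image

    def predict_fn(images: np.ndarray) -> np.ndarray:
        """Stand-in for the CNN's batch prediction function.

        LIME expects a callable mapping a batch of images (N, H, W, 3)
        to class probabilities (N, num_classes).
        """
        n = images.shape[0]
        scores = np.random.rand(n, 2)                  # dummy two-class scores
        return scores / scores.sum(axis=1, keepdims=True)

    image = np.random.rand(64, 64, 3)                  # dummy RGB image in [0, 1]

    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(
        image,
        predict_fn,        # queried on perturbed copies of the image
        top_labels=1,      # explain only the top predicted class
        hide_color=0,      # value used to "switch off" superpixels
        num_samples=200,   # perturbed samples used to fit the local surrogate
    )

    # Superpixel mask highlighting the regions that most support the prediction,
    # i.e. the visual evidence shown to participants in such a study.
    img_with_mask, mask = explanation.get_image_and_mask(
        explanation.top_labels[0],
        positive_only=True,
        num_features=5,
        hide_rest=False,
    )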