Guidelines for Risk Evaluation in Artificial Intelligence Applications

Luca Lezzerini (first author)
Contribution: Writing – Original Draft Preparation
2023

Abstract

Artificial intelligence has become a common element of our times, pervading ever more aspects of our lives. Its mass application began with video games, but it is now available to everyone and can help with many tasks that, until a few years ago, only humans could perform. Discussions about artificial intelligence began long before it existed: much of science-fiction literature imagined many forms of AI and the consequences, both good and evil, of its use. Now, however, artificial intelligence is a real, concrete thing, and its mass use must be subject to a risk evaluation and mitigation process that makes it safe. This paper introduces such a risk assessment and defines the main guidelines for it. These guidelines can be used by researchers, designers, developers and even users to validate an AI-based application before delivering it to people. The paper reviews the basic concepts of risk and tailors them to provide effective support for risk analysis in the specific area of artificial intelligence. A set of typical risks is then defined, together with methods to detect and minimize them. In conclusion, a call for stricter regulation of AI and high-performance processing is issued.
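As a minimal illustration of the basic risk concepts the abstract refers to (not taken from the paper itself), the Python sketch below scores hypothetical AI-application risks as the product of an estimated likelihood and severity, the standard risk-matrix formulation; all risk names, scales and thresholds are assumptions made for the example.

    # Minimal sketch of the standard likelihood-by-severity risk scoring idea.
    # All risk names, scales and thresholds are illustrative assumptions,
    # not values taken from the paper.
    from dataclasses import dataclass

    @dataclass
    class Risk:
        name: str
        likelihood: int  # assumed ordinal scale 1 (rare) .. 5 (almost certain)
        severity: int    # assumed ordinal scale 1 (negligible) .. 5 (critical)

        def score(self) -> int:
            # Classic risk-matrix score: likelihood multiplied by severity.
            return self.likelihood * self.severity

        def acceptable(self, threshold: int = 8) -> bool:
            # A risk is tolerated only if its score stays below the (assumed)
            # threshold; otherwise it must be mitigated before delivery.
            return self.score() < threshold

    # Hypothetical risks for an AI-based application.
    risks = [
        Risk("biased training data", likelihood=4, severity=4),
        Risk("hallucinated output shown to users", likelihood=3, severity=5),
        Risk("service outage of the model provider", likelihood=2, severity=2),
    ]

    for r in sorted(risks, key=Risk.score, reverse=True):
        status = "acceptable" if r.acceptable() else "needs mitigation"
        print(f"{r.name}: score {r.score()} -> {status}")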


Use this identifier to cite or link to this document: https://hdl.handle.net/11392/2569657