Safety Analysis of Deep Neural Networks

Guidotti D.
2021-01-01

Abstract

Deep Neural Networks (DNNs) are popular machine learning models that have been applied successfully across many domains of computer science. Nevertheless, providing formal guarantees on the behavior of neural networks is hard, so their reliability in safety-critical domains remains a concern. Verification and repair have emerged as promising solutions to this issue. In the following, I present some of my recent efforts in this area.
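As an illustration of what DNN verification computes (not the method of this paper), many tools bound a network's outputs over a whole input region; a minimal interval bound propagation sketch in Python, using small made-up weights:

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    # Propagate an input box [lo, hi] through y = W x + b.
    # Positive weights map lower bounds to lower bounds; negative weights swap them.
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

# Tiny 2-layer ReLU network with illustrative (hypothetical) weights.
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.0, -1.0])
W2, b2 = np.array([[1.0, 1.0]]), np.array([0.0])

lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])  # input region [-1, 1]^2
lo, hi = interval_affine(lo, hi, W1, b1)
lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)          # ReLU is monotone
lo, hi = interval_affine(lo, hi, W2, b2)
print(lo, hi)  # → [0.] [3.5]
```

For every input in the region, the network's output is guaranteed to lie in [lo, hi], so a safety property such as "the output never exceeds 4" can be certified without enumerating inputs.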
2021
Safety Analysis of Deep Neural Networks / Guidotti, D. - In: IJCAI. - ISSN 1045-0823. - (2021), pp. 4887-4888. (Paper presented at the 30th International Joint Conference on Artificial Intelligence, IJCAI 2021, held online in 2021.)
Files in this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11388/348762
Citations
  • Scopus 6