Features Disentanglement For Explainable Convolutional Neural Networks

Conference proceedings contribution
Publication date:
2024
Citation:
Features Disentanglement For Explainable Convolutional Neural Networks / P. Coscia, A. Genovese, F. Scotti, V. Piuri (PROCEEDINGS - INTERNATIONAL CONFERENCE ON IMAGE PROCESSING). - In: 2024 IEEE International Conference on Image Processing (ICIP). - [s.l.] : IEEE, 2024 Sep 27. - ISBN 979-8-3503-4939-9. - pp. 514-520 ((ICIP conference held in Abu Dhabi in 2024) [10.1109/icip51287.2024.10647568].
Abstract:
Explainable methods for understanding deep neural networks are currently being employed for many visual tasks and provide valuable insights about their decisions. While post-hoc visual explanations offer easily understandable human cues behind neural networks' decision-making processes, comparing their outcomes still remains challenging. Furthermore, balancing the performance-explainability trade-off can be a time-consuming process and require deep domain knowledge. In this regard, we propose a novel auxiliary module, built upon convolutional-based encoders, which acts on the final layers of convolutional neural networks (CNNs) to learn orthogonal feature maps with more discriminative and explainable power. This module is trained via a disentangle loss that specifically aims to decouple the object from the background in the input image. To quantitatively assess its impact on standard CNNs, and compare the quality of the resulting visual explanations, we employ metrics specifically designed for semantic segmentation tasks. These metrics rely on bounding-box annotations that may accompany image classification (or recognition) datasets, allowing us to compare both ground-truth and predicted regions. Finally, we explore the impact of various self-supervised pre-training strategies, due to their positive influence on vision tasks, and assess their effectiveness on the considered metrics.
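The abstract describes learning orthogonal feature maps in the final CNN layers. As a minimal illustrative sketch only (not the paper's actual disentangle loss, which additionally uses object/background separation), an orthogonality penalty over flattened feature maps could be expressed via the off-diagonal mass of their Gram matrix; the function name and formulation below are assumptions for illustration:

```python
import numpy as np

def orthogonality_loss(features):
    """Penalty that is zero iff the feature maps are mutually orthogonal.

    features: array of shape (C, H, W), the C feature maps of one image
    from the final convolutional layer. Each map is flattened and
    L2-normalized, and the loss is the sum of squared off-diagonal
    entries of the resulting Gram matrix (pairwise cosine similarities).
    """
    c = features.shape[0]
    flat = features.reshape(c, -1)
    # Normalize each map so the Gram matrix holds cosine similarities.
    norms = np.linalg.norm(flat, axis=1, keepdims=True)
    flat = flat / np.maximum(norms, 1e-12)
    gram = flat @ flat.T
    # Off-diagonal entries measure correlation between distinct maps.
    off_diag = gram - np.eye(c)
    return float(np.sum(off_diag ** 2))
```

For example, two feature maps with disjoint support yield a loss of 0, while two identical maps yield the maximum pairwise penalty; in practice such a term would be added to the classification loss during training.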
IRIS type:
03 - Contribution in volume
Keywords:
Explainable AI (XAI); ResNet; self-supervised learning (SSL); disentanglement
Author list:
P. Coscia, A. Genovese, F. Scotti, V. Piuri
University authors:
COSCIA PASQUALE (author)
GENOVESE ANGELO (author)
PIURI VINCENZO (author)
SCOTTI FABIO (author)
Link to the full record:
https://air.unimi.it/handle/2434/1104968
Link to full text:
https://air.unimi.it/retrieve/handle/2434/1104968/2549196/icip24.pdf
Book title:
2024 IEEE International Conference on Image Processing (ICIP)
Project:
Edge AI Technologies for Optimised Performance Embedded Processing (EdgeAI)
Research areas

Fields (2)

Field IINF-05/A - Information processing systems
Field INFO-01/A - Computer science