A Theory of Interpretable Approximations

Conference proceedings contribution
Publication Date:
2024
Citation:
A Theory of Interpretable Approximations / M. Bressan, N. Cesa Bianchi, E. Esposito, Y. Mansour, S. Moran, M. Thiessen (PROCEEDINGS OF MACHINE LEARNING RESEARCH). - In: The Thirty Seventh Annual Conference on Learning Theory / edited by S. Agrawal, A. Roth. - [s.l.]: PMLR, 2024. - pp. 648-668. (Paper presented at the 37th Conference on Learning Theory, held in Edmonton in 2024.)
Abstract:
Can a deep neural network be approximated by a small decision tree based on simple features? This question and its variants are behind the growing demand for machine learning models that are interpretable by humans. In this work we study such questions by introducing interpretable approximations, a notion that captures the idea of approximating a target concept c by a small aggregation of concepts from some base class H. In particular, we consider the approximation of a binary concept c by decision trees based on a simple class H (e.g., of bounded VC dimension), and use the tree depth as a measure of complexity. Our primary contribution is the following remarkable trichotomy. For any given pair of H and c, exactly one of these cases holds: (i) c cannot be approximated by H with arbitrary accuracy; (ii) c can be approximated by H with arbitrary accuracy, but there exists no universal rate that bounds the complexity of the approximations as a function of the accuracy; or (iii) there exists a constant κ that depends only on H and c such that, for any data distribution and any desired accuracy level, c can be approximated by H with a complexity not exceeding κ. This taxonomy stands in stark contrast to the landscape of supervised classification, which offers a complex array of distribution-free and universally learnable scenarios. We show that, in the case of interpretable approximations, even a slightly nontrivial a-priori guarantee on the complexity of approximations implies approximations with constant (distribution-free and accuracy-free) complexity. We extend our trichotomy to classes H of unbounded VC dimension and give characterizations of interpretability based on the algebra generated by H.
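
As a reading aid, the trichotomy described in the abstract can be written more explicitly. The LaTeX sketch below uses notation that is not part of this record and is only assumed for illustration: D denotes a data distribution, ε > 0 an accuracy level, and depth_H(c, D, ε) the smallest depth of a decision tree over concepts from H that approximates c within error ε under D (taken to be infinite if no such tree exists).

% Sketch only: depth_H(c, D, eps) is an assumed shorthand, not the paper's notation.
For every pair $(\mathcal{H}, c)$, exactly one of the following holds:
\begin{enumerate}
  \item[(i)] there exist a distribution $D$ and an accuracy $\epsilon > 0$ such that
    $\mathrm{depth}_{\mathcal{H}}(c, D, \epsilon) = \infty$;
  \item[(ii)] $\mathrm{depth}_{\mathcal{H}}(c, D, \epsilon) < \infty$ for every $D$ and every $\epsilon > 0$,
    yet no function $f$ satisfies $\mathrm{depth}_{\mathcal{H}}(c, D, \epsilon) \le f(\epsilon)$
    for all $D$ and $\epsilon$ (no universal rate in the accuracy alone);
  \item[(iii)] there is a constant $\kappa = \kappa(\mathcal{H}, c)$ such that
    $\mathrm{depth}_{\mathcal{H}}(c, D, \epsilon) \le \kappa$ for every $D$ and every $\epsilon > 0$.
\end{enumerate}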
IRIS Type:
03 - Contribution in volume
Keywords:
interpretability; learning theory; boosting
Author list:
M. Bressan, N. Cesa Bianchi, E. Esposito, Y. Mansour, S. Moran, M. Thiessen
University authors:
BRESSAN MARCO (author)
CESA BIANCHI NICOLO' ANTONIO (author)
ESPOSITO EMMANUEL (author)
Link to the full record:
https://air.unimi.it/handle/2434/1087069
Link to the full text:
https://air.unimi.it/retrieve/handle/2434/1087069/2504159/bressan24a.pdf
Book title:
The Thirty Seventh Annual Conference on Learning Theory
Project:
European Lighthouse of AI for Sustainability (ELIAS)
Research Areas

Sectors (2)

Sector INF/01 - Informatica (Computer Science)

Sector INFO-01/A - Informatica (Computer Science)