Expertise & Skills
BIAS, RISK, OPACITY in AI: design, verification and development of Trustworthy AI

Project
Safety-critical systems increasingly incorporate autonomous decision-making, bringing Artificial Intelligence (AI) techniques into real-life applications with a very concrete impact on people's lives. With safety a major concern, problems of opacity, bias and risk are pressing, and creating Trustworthy AI (TAI) is thus of paramount importance. Advances in AI design still struggle to offer technical implementations driven by conceptual knowledge and qualitative approaches. This project aims to address these limitations by developing design criteria for TAI based on philosophical analyses of transparency, bias and risk, combined with their formalization and technical implementation for a range of platforms, including both supervised and unsupervised learning. We argue that this can be achieved through the explicit formulation of epistemic and normative principles for TAI, their development into formal design procedures and their translation into computational implementations.

A first objective of this project is to formulate an epistemological and normative analysis of TAI systems as undermined by bias and risk, with respect not only to their reliability but also to their social acceptance. Accordingly, we will analyse the Meaningful Human Control (MHC) requirement for more transparent AI systems operating in safety-critical and ethically sensitive domains.

A second objective is to define a comprehensive formal ontology, including a taxonomy of biases and risks and their mutual relations, for autonomous decision systems. Our task is to offer a systematic characterization of bias types, to make them amenable to formal and automatic identification, and to define the risks involved in the construction and use of possibly biased complex AI systems.

A third objective is to design (sub-)symbolic formal models for reasoning about safe TAI, and to produce associated verification tools. We will articulate principles of opacity, bias and risk in terms of cognitive representation, via extensions of Description Logics, and of inferential uncertainty modelling, via proof-theoretical and semantic approaches to trust that are feasible for formal verification.

Finally, a fourth objective consists in developing a novel computational framework for the explanation capabilities of TAI systems, aimed at mitigating the opacity of Machine Learning (ML) models in terms of the hierarchical structure and compositional properties of middle-level features.

Overall, this project will advance the state of the art on TAI by: developing an epistemically and ethically guided analysis of opacity, bias and risk; investigating the integration of logical symbolic systems with currently applied statistical techniques; and supporting the verification and implementation of less opaque and more trustworthy systems.
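To give a concrete flavour of what "formal and automatic identification" of bias can amount to in practice, the sketch below computes a standard demographic-parity gap for a binary classifier. This is a generic, hypothetical illustration, not the project's BRIO method: the function name, the 0/1 output encoding and the two-group restriction are assumptions made for the example.

```python
# Illustrative only: a minimal bias check of the kind a TAI verification
# pipeline might automate. Hypothetical example, not the BRIO method.

def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rates between two groups.

    predictions: list of 0/1 model outputs
    groups: parallel list of group labels (exactly two distinct values)
    """
    labels = sorted(set(groups))
    assert len(labels) == 2, "expects exactly two groups"
    rates = []
    for g in labels:
        outs = [p for p, grp in zip(predictions, groups) if grp == g]
        rates.append(sum(outs) / len(outs))
    return abs(rates[0] - rates[1])

# Example: a credit-scoring model approving group "a" more often than "b".
preds  = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(preds, groups)  # 0.75 vs 0.25 -> gap 0.5
```

A tolerance threshold on such a gap is one simple, checkable fairness property; richer taxonomies of bias, as targeted by the project's second objective, would correspond to families of such properties over the system's inputs, outputs and training data.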

General Information

Participants

PRIMIERO GIUSEPPE   Principal Investigator

Departments involved

Dipartimento di Filosofia Piero Martinetti   Lead

Type

PRIN2020 - PRIN 2020 call

Funder

MINISTERO DELL'ISTRUZIONE E DEL MERITO (external funding organization)

Lead institution

UNIVERSITA' DEGLI STUDI DI MILANO

Activity period

June 1, 2022 - May 31, 2025

Project duration

36 months

Research Areas

Fields

Settore M-FIL/02 - Logica e Filosofia della Scienza (Logic and Philosophy of Science)

Publications

Publications (20)

  • Checking trustworthiness of probabilistic computations in a typed natural deduction system. JOURNAL OF LOGIC AND COMPUTATION, Oxford University Press, 2025. Article (Open Access).
  • A 2-categorical analysis of context comprehension. THEORY AND APPLICATIONS OF CATEGORIES, 2024. Article (Open Access).
  • A Logic of Knowledge and Justifications, with an Application to Computational Trust. STUDIA LOGICA, Springer, 2024. Article (Partially Open Access).
  • A Pragmatic Theory of Computational Artefacts. MINDS AND MACHINES, Springer, 2024. Article (Open Access).
  • A possible worlds semantics for trustworthy non-deterministic computations. INTERNATIONAL JOURNAL OF APPROXIMATE REASONING, Elsevier, 2024. Article (Open Access).
  • Grounding operators: transitivity and trees, logicality and balance. JOURNAL OF APPLIED NON-CLASSICAL LOGICS, Routledge Taylor & Francis Group, 2024. Article (Reserved Access).
  • BRIO: From topology to a logic of uncertainty. THE REASONER, 2023. Article.
  • Low-Level Analysis of Trust in Probabilistic and Opaque Programs. THE REASONER, 2023. Article.
  • Matematica da costruire. ARCHIMEDE, 2023. Article.
  • Copying safety and liveness properties of computational artefacts. JOURNAL OF LOGIC AND COMPUTATION, Oxford University Press, 2022. Article (Partially Open Access).
  • Transparent assessment of information quality of online reviews using formal argumentation theory. INFORMATION SYSTEMS, Elsevier, 2022. Article (Open Access).
  • Rilevazione e mitigazione dei bias negli algoritmi di classificazione con il metodo BRIO: il caso del credit scoring. COLLANA DI STUDI SCIENTIFICI / UNIVERSITÀ DEGLI STUDI DI MILANO, DIPARTIMENTO DI STUDI INTERNAZIONALI, GIURIDICI E STORICO-POLITICI, G. Giappichelli, 2025. Book chapter.
  • Causality Problems in Machine Learning Systems. Routledge, 2024. Book chapter (Reserved Access).
  • Handling Mobility Failures by Modal Types. LOGIC, EPISTEMOLOGY, AND THE UNITY OF SCIENCE, Springer Nature, 2024. Book chapter (Reserved Access).
  • Hyperintensions for Probabilistic Computations. TRIBUTES, College Publications, 2022. Book chapter (Open Access).
  • BRIOxAlkemy: A Bias detecting tool. CEUR WORKSHOP PROCEEDINGS, 2024. Conference proceedings (Open Access).
  • Bias Amplification Chains in ML-based Systems with an Application to Credit Scoring. CEUR WORKSHOP PROCEEDINGS, CEUR-WS, 2024. Conference proceedings (Open Access).
  • Categorical Models of Subtyping. Dagstuhl Publishing, 2024. Conference proceedings (Open Access).
  • Data Quality Dimensions for Fair AI. CEUR WORKSHOP PROCEEDINGS, CEUR, 2024. Conference proceedings (Open Access).
  • Proceedings of the 3rd Workshop on Bias, Ethical AI, Explainability and the role of Logic and Logic Programming co-located with the 23rd International Conference of the Italian Association for Artificial Intelligence (AIxIA 2024). CEUR WORKSHOP PROCEEDINGS, 2024. Edited volume.

Contacts

Website

https://sites.unimi.it/brio/
Built with VIVO | Designed by Cineca | 25.11.5.0