Expertise & Skills
unimi.it
  • Home
  • People
  • Projects
  • Fields
  • Units
  • Outputs
  • Third Mission

Projects
BIAS, RISK, OPACITY in AI: design, verification and development of Trustworthy AI

Project
Safety-critical systems increasingly incorporate autonomous decision-making, bringing Artificial Intelligence (AI) techniques into real-life applications with a very concrete impact on people’s lives. With safety a major concern, problems of opacity, bias and risk are pressing, and creating Trustworthy AI (TAI) is of paramount importance. Advances in AI design still struggle to offer technical implementations driven by conceptual knowledge and qualitative approaches. This project aims to address these limitations by developing design criteria for TAI based on philosophical analyses of transparency, bias and risk, combined with their formalization and technical implementation for a range of platforms, including both supervised and unsupervised learning. We argue that this can be achieved through the explicit formulation of epistemic and normative principles for TAI, their development into formal design procedures, and their translation into computational implementations.

The first objective of this project is to formulate an epistemological and normative analysis of TAI systems as undermined by bias and risk, with respect not only to their reliability but also to their social acceptance. Accordingly, we will analyse the Meaningful Human Control (MHC) requirement for more transparent AI systems operating in safety-critical and ethically sensitive domains.

The second objective is to define a comprehensive formal ontology, including a taxonomy of biases and risks and their mutual relations, for autonomous decision systems. Our task is to characterize bias types systematically, making them amenable to formal and automatic identification, and to define the risks involved in constructing and using possibly biased complex AI systems.

The third objective is to design (sub-)symbolic formal models for reasoning about safe TAI and to produce associated verification tools. We will articulate principles of opacity, bias and risk in terms of cognitive representation, via extensions of Description Logics, and of inferential uncertainty modelling, via proof-theoretical and semantic approaches to trust amenable to formal verification.

The fourth objective is to develop a novel computational framework for the explanation capabilities of TAI systems, aimed at mitigating the opacity of Machine Learning (ML) models in terms of the hierarchical structure and compositional properties of middle-level features.

Overall, this project will advance the state of the art on TAI by: developing an epistemically and ethically guided analysis of opacity, bias and risk; investigating the integration of logical symbolic systems with currently applied statistical techniques; and supporting the verification and implementation of less opaque and more trustworthy systems.
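To make the "formal and automatic identification" of bias concrete, the following is a minimal, hypothetical sketch of one common form of bias check on a classifier's outputs: comparing positive-prediction rates across subgroups of a sensitive attribute (a demographic-parity gap). All function names, the threshold value, and the toy data are illustrative assumptions, not the project's actual formalism or tooling.

```python
# Illustrative sketch (assumed, not the project's implementation): flag a
# classifier as potentially biased when its positive-prediction rate
# diverges across sensitive groups beyond a chosen threshold.

def positive_rate(predictions, groups, group):
    """Fraction of positive (1) predictions within one sensitive group."""
    members = [p for p, g in zip(predictions, groups) if g == group]
    return sum(members) / len(members) if members else 0.0

def demographic_parity_gap(predictions, groups):
    """Largest pairwise difference in positive-prediction rates."""
    rates = [positive_rate(predictions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

def flag_bias(predictions, groups, threshold=0.1):
    """Hypothetical check: True if the parity gap exceeds the threshold."""
    return demographic_parity_gap(predictions, groups) > threshold

# Toy example: group "a" receives positive outcomes far more often than "b".
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))  # 0.75 - 0.25 = 0.5
print(flag_bias(preds, groups))               # True
```

A metric like this captures only one bias type; the project's taxonomy and ontology aim precisely at relating many such types, and at connecting their detection to the risks of deploying the system.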
  • Academic Signature
  • Overview
  • Research Areas
  • Publications
  • Contacts

Academic Signature

The ACADEMIC SIGNATURE classification service is IN BETA TESTING and its results may not be correct.

Academic Signature (2)

  • proof theory (academic discipline)
  • proof theory (mathematical logic)

Overview

Contributors

PRIMIERO GIUSEPPE   Scientific Manager  

Departments involved

Dipartimento di Filosofia Piero Martinetti   Principal

Type

PRIN2020 - PRIN 2020 call

Funder

MINISTERO DELL'ISTRUZIONE E DEL MERITO

Date/time interval

June 1, 2022 - May 31, 2025

Project duration

36 months

Research Areas

Concepts


Settore M-FIL/02 - Logica e Filosofia della Scienza (Logic and Philosophy of Science)

Publications

Outputs (20)

Checking trustworthiness of probabilistic computations in a typed natural deduction system 
JOURNAL OF LOGIC AND COMPUTATION
OXFORD UNIVERSITY PRESS
2025
Academic Article
Open Access
A 2-categorical analysis of context comprehension 
THEORY AND APPLICATIONS OF CATEGORIES
2024
Academic Article
Open Access
A Logic of Knowledge and Justifications, with an Application to Computational Trust 
STUDIA LOGICA
SPRINGER
2024
Academic Article
Partially Open Access
A Pragmatic Theory of Computational Artefacts 
MINDS AND MACHINES
SPRINGER
2024
Academic Article
Open Access
A possible worlds semantics for trustworthy non-deterministic computations 
INTERNATIONAL JOURNAL OF APPROXIMATE REASONING
ELSEVIER
2024
Academic Article
Open Access
Grounding operators: transitivity and trees, logicality and balance 
JOURNAL OF APPLIED NON-CLASSICAL LOGICS
ROUTLEDGE TAYLOR & FRANCIS GROUP
2024
Academic Article
Reserved Access
BRIO: From topology to a logic of uncertainty 
THE REASONER
2023
Academic Article
Low-Level Analysis of Trust in Probabilistic and Opaque Programs 
THE REASONER
2023
Academic Article
Matematica da costruire 
ARCHIMEDE
2023
Academic Article
Copying safety and liveness properties of computational artefacts 
JOURNAL OF LOGIC AND COMPUTATION
OXFORD UNIVERSITY PRESS
2022
Academic Article
Partially Open Access
Transparent assessment of information quality of online reviews using formal argumentation theory 
INFORMATION SYSTEMS
ELSEVIER
2022
Academic Article
Open Access
Rilevazione e mitigazione dei bias negli algoritmi di classificazione con il metodo BRIO: il caso del credit scoring 
COLLANA DI STUDI SCIENTIFICI / UNIVERSITÀ DEGLI STUDI DI MILANO, DIPARTIMENTO DI STUDI INTERNAZIONALI, GIURIDICI E STORICO-POLITICI
G. GIAPPICHELLI
2025
Chapter
Causality Problems in Machine Learning Systems 
ROUTLEDGE
2024
Chapter
Reserved Access
Handling Mobility Failures by Modal Types 
LOGIC, EPISTEMOLOGY, AND THE UNITY OF SCIENCE
SPRINGER NATURE
2024
Chapter
Reserved Access
Hyperintensions for Probabilistic Computations 
TRIBUTES
COLLEGE PUBLICATIONS
2022
Chapter
Open Access
BRIOxAlkemy: A Bias detecting tool 
CEUR WORKSHOP PROCEEDINGS
2024
Conference Paper
Open Access
Bias Amplification Chains in ML-based Systems with an Application to Credit Scoring 
CEUR WORKSHOP PROCEEDINGS
CEUR-WS
2024
Conference Paper
Open Access
Categorical Models of Subtyping 
DAGSTUHL PUBLISHING
2024
Conference Paper
Open Access
Data Quality Dimensions for Fair AI 
CEUR WORKSHOP PROCEEDINGS
CEUR
2024
Conference Paper
Open Access
Proceedings of the 3rd Workshop on Bias, Ethical AI, Explainability and the role of Logic and Logic Programming co-located with the 23rd International Conference of the Italian Association for Artificial Intelligence (AIxIA 2024) 
CEUR WORKSHOP PROCEEDINGS
2024
Edited Book

Contacts

Web site

https://sites.unimi.it/brio/

Powered by VIVO | Designed by Cineca | 26.5.0.0