Our Approach
We aim to provide a normative analysis of trust, focusing on bias and risk as its determinants. Bias and risk can be analyzed in purely epistemic terms, e.g. reliability, but also in light of the actual behavior of AI technologies in real contexts, where non-epistemic dimensions such as benevolence and integrity are essential. We plan to investigate the role of trust both at the epistemic level (as efficacy) and at the non-epistemic, pragmatic level (as effectiveness).
Basic research lines
(i) Reliability connected to epistemic risks: risks arising from false-positive and false-negative errors and from the proper amalgamation of different data sources (see the sketch after this list);
(ii) Systematic epistemic errors: design-related biases and epistemic confounding factors;
(iii) Pragmatic elements connected to the non-epistemic risks associated with effectiveness;
(iv) Biases exhibited by AI technologies in real contexts and the forms of discrimination they can produce.
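To make line (i) concrete, below is a minimal sketch, in Python with purely hypothetical numbers, of how false-positive and false-negative error rates, weighted by (possibly asymmetric) costs, jointly determine the expected epistemic risk of a binary AI classifier, and how the risk-minimizing decision threshold shifts as those weights change. The score distributions, costs, and helper names are illustrative assumptions, not an implementation of any particular system.

```python
# Hedged sketch: false-positive / false-negative rates and their
# (hypothetical) costs determine the expected epistemic risk of a
# binary classifier; asymmetric costs shift the optimal threshold.
# All numbers below are illustrative assumptions, not empirical data.

import random

random.seed(0)

# Hypothetical classifier scores: negatives cluster low, positives high.
negatives = [random.gauss(0.35, 0.12) for _ in range(1000)]
positives = [random.gauss(0.65, 0.12) for _ in range(1000)]

def error_rates(threshold):
    """False-positive and false-negative rates at a given threshold."""
    fpr = sum(s >= threshold for s in negatives) / len(negatives)
    fnr = sum(s < threshold for s in positives) / len(positives)
    return fpr, fnr

def expected_risk(threshold, cost_fp=1.0, cost_fn=1.0, prevalence=0.5):
    """Expected cost per case, mixing the two error types."""
    fpr, fnr = error_rates(threshold)
    return (1 - prevalence) * cost_fp * fpr + prevalence * cost_fn * fnr

def best_threshold(**kwargs):
    """Grid-search the threshold that minimizes expected risk."""
    grid = [i / 200 for i in range(1, 200)]
    return min(grid, key=lambda t: expected_risk(t, **kwargs))

# Symmetric costs: both error types weigh equally.
print("symmetric:", best_threshold(cost_fp=1.0, cost_fn=1.0))
# Asymmetric costs (e.g. a missed case is far worse than a false alarm):
# the optimal threshold moves down, trading more false positives for
# fewer false negatives.
print("asymmetric:", best_threshold(cost_fp=1.0, cost_fn=5.0))
```

In this toy setting the symmetric-cost threshold sits near 0.5, while weighting false negatives five times more heavily pushes it down, accepting more false alarms to avoid misses; the broader point is that judgments of reliability, and hence of trustworthiness, depend on how these epistemic risks are weighed, not only on raw accuracy.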
Specific application contexts
AI-enhanced bioengineering technologies (e.g. wearable devices); autonomous robotics.
Our methodology
We adopt an epistemologically grounded ethical approach to the analysis of trust, focusing on bias and risk as its determinants.
Our projects
At the moment we do not have any funded projects on this topic.