Our Approach
The LUCI research group investigates, among other topics, trust and distrust in information transmission models and in AI systems.
Basic research lines
1. How do realistic agents trust and distrust information?
2. What does it mean for an AI system to be trustworthy?
3. What does it mean for an online information source to qualify as trustworthy?
Specific application contexts
1. What are the formal characteristics of trustworthy machine learning systems?
2. How can information in social networks be checked for trustworthiness?
3. How does the trustworthiness of information affect pandemic scenarios?
Our methodology
We use a combination of formal models and implementations. The former include proof-theoretic and semantic modelling, especially for multi-agent non-monotonic systems; the latter include agent-based simulations, logic programming, and model- and proof-checking systems.
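To give a flavour of the implementation side, here is a minimal agent-based simulation sketch in Python: agents forward a piece of information, and receivers accept it only from sufficiently trusted senders. The model, agent names, thresholds, and update rule are illustrative assumptions for this sketch, not one of the group's actual systems.

```python
import random

class Agent:
    """A toy agent that accepts a message only if its trust in the
    sender exceeds a personal threshold (illustrative assumption)."""
    def __init__(self, name, threshold):
        self.name = name
        self.threshold = threshold  # minimum trust needed to accept
        self.trust = {}             # trust scores toward other agents
        self.informed = False

    def receive(self, sender):
        # Accept the message only from sufficiently trusted senders.
        if self.trust.get(sender.name, 0.0) >= self.threshold:
            self.informed = True

def simulate(n_agents=20, rounds=10, seed=0):
    rng = random.Random(seed)
    agents = [Agent(f"a{i}", threshold=rng.uniform(0.3, 0.7))
              for i in range(n_agents)]
    # Assign random pairwise trust scores in [0, 1].
    for a in agents:
        a.trust = {b.name: rng.random() for b in agents if b is not a}
    agents[0].informed = True  # seed the information at one agent
    for _ in range(rounds):
        # Each informed agent forwards the message to a random peer.
        for sender in [a for a in agents if a.informed]:
            rng.choice(agents).receive(sender)
    return sum(a.informed for a in agents)

if __name__ == "__main__":
    print(f"informed agents: {simulate()} / 20")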
Our projects
1. Foundations of Fair and Trustworthy AI
2. Information Quality and Argumentation in Online Information Diffusion
3. Abstraction & Classification: Computation & Types