Our Approach:
The research group “Trust in Information” investigates the trust (and distrust) placed in computer models – as well as the modeling of trust itself. The models in question are simulation and AI models. In this context, trust in models is often hard to separate from trust in the people who develop or use them. Our group focuses on trust in information in six sub-projects:
Basic research lines
- How can appropriate trust in computer-intensive research be developed?
- How can the trustworthiness of assertions or people be tested by AI methods?
- How can deception be countered by AI methods?
Specific application contexts
- What are the characteristics of trustworthy medical systems?
- How can computer-intensive methods in criminological work be developed so that they are trustworthy?
- How can conflict-intensive urban planning processes be mediated by trustworthy design tools?
Our Methodology:
Computer models are currently viewed primarily in terms of their reliability. However, this is trust only in a limited sense. Moreover, reliability can hardly be assessed by laypeople, who experience the results of computer models at best indirectly and only partially. To compensate for these two deficits, we work with a virtue-ethics perspective on trust and distrust, without excluding other approaches.
Our Projects:
- Trust in Computational Models
- Trust, AI and Deception
- Trust in Medical Systems
- Trust in Crime Reconstruction
- Trust in Conflict-Intensive Urban Planning