P331 - BTW 2023 - Datenbanksysteme für Business, Technologie und Web
Permanent URI for this collection: https://dl.gi.de/handle/20.500.12116/40312
Search Results
3 results
Conference Paper
Reliable Rules for Relation Extraction in a Multimodal Setting
(Gesellschaft für Informatik e.V., 2023) Engelmann, Björn; Schaer, Philipp; König-Ries, Birgitta; Scherzinger, Stefanie; Lehner, Wolfgang; Vossen, Gottfried
We present an approach to extract relations from multimodal documents using little training data. Furthermore, we derive explanations in the form of extraction rules from the underlying model to ensure the reliability of the extraction. Finally, we evaluate how reliable (high model fidelity) the extracted rules are and which type of classifier is suitable in terms of F1 score and explainability. Our code and data are available at https://osf.io/dn9hm/?view_only=7e65fd1d4aae44e1802bb5ddd3465e08.

Conference Paper
VERIFAI - A Step Towards Evaluating the Responsibility of AI-Systems
(Gesellschaft für Informatik e.V., 2023) Göllner, Sabrina; Tropmann-Frick, Marina; König-Ries, Birgitta; Scherzinger, Stefanie; Lehner, Wolfgang; Vossen, Gottfried
This work represents the first step towards a unified framework for evaluating an AI system's responsibility by building a prototype application. The Python-based web application uses several libraries to test the fairness, robustness, privacy, and explainability of a machine-learning model, as well as of the dataset used to train it. The workflow of the prototype is tested and described using images from a healthcare dataset, since healthcare is an area where automatic decisions affect human lives, and building responsible AI there is therefore indispensable.

Conference Paper
Enhancing Explainability and Scrutability of Recommender Systems
(Gesellschaft für Informatik e.V., 2023) Ghazimatin, Azin; König-Ries, Birgitta; Scherzinger, Stefanie; Lehner, Wolfgang; Vossen, Gottfried
Our increasing reliance on complex algorithms for recommendations calls for models and methods for explainable, scrutable, and trustworthy AI. While explainability is required for understanding the relationships between model inputs and outputs, a scrutable system allows us to modify its behavior as desired. These properties help bridge the gap between our expectations as end users and the algorithm's behavior, and accordingly boost our trust in AI. Aiming to cope with information overload, recommender systems play a crucial role in filtering content (such as products, news, songs, and movies) and shaping a personalized experience for their users. Consequently, there has been a growing demand from information consumers to receive proper explanations for their personalized recommendations. To this end, we put forward proposals for explaining recommendations to end users. These explanations aim to help users understand why certain items are recommended to them and how their previous inputs to the system relate to the generation of such recommendations. Such explanations usually contain valuable clues as to how a system perceives user preferences and, more importantly, how its behavior can be modified. Therefore, as a natural next step, we develop a framework for leveraging user feedback on explanations to improve future recommendations. We evaluate all the proposed models and methods with real user studies and demonstrate their benefits in achieving explainability and scrutability in recommender systems.
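The first paper's notion of "model fidelity" measures how often an extracted rule reproduces the predictions of the underlying model. A minimal sketch of that measurement follows; the `rule` and `model` callables are hypothetical stand-ins, since the abstract does not specify the actual interface:

```python
# Model fidelity: the fraction of instances on which an extracted rule
# agrees with the black-box model it was derived from.
# `rule` and `model` are hypothetical stand-ins, not the paper's API.

def fidelity(rule, model, instances):
    """Fraction of instances where the rule's prediction matches the model's."""
    agreements = sum(1 for x in instances if rule(x) == model(x))
    return agreements / len(instances)

# Toy relation-extraction example: both predict a relation from a
# token-distance feature; a faithful rule closely mimics the model.
model = lambda x: "author-of" if x["dist"] < 5 else "none"
rule = lambda x: "author-of" if x["dist"] <= 4 else "none"
instances = [{"dist": d} for d in range(10)]
print(fidelity(rule, model, instances))  # 1.0: perfect fidelity on this data
```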
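The VERIFAI abstract describes library-based checks of fairness, robustness, privacy, and explainability, but does not name the libraries used. A minimal sketch of one such check, assuming fairlearn for the fairness dimension:

```python
# One plausible "responsibility" check: a group-fairness metric on a
# trained classifier. fairlearn is an assumption here; the abstract
# does not say which libraries the prototype actually uses.
import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.metrics import demographic_parity_difference

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))               # toy features
sensitive = rng.integers(0, 2, size=200)    # toy protected attribute
y = (X[:, 0] > 0).astype(int)               # toy labels

clf = LogisticRegression().fit(X, y)
y_pred = clf.predict(X)

# 0.0 means selection rates are equal across the two groups; larger
# values would be flagged as a potential fairness problem.
print(demographic_parity_difference(y, y_pred, sensitive_features=sensitive))
```

Analogous checks for robustness, privacy, and explainability would plug into the same workflow as further test stages.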