Doctoral project: Visual analytics for explainable and trustworthy machine learning
This doctoral project aims to develop foundational principles, techniques, and tools for visual analytics to analyze data and machine learning models, with diverse applications in the context of data-intensive sciences. The overall goal is to make complex machine learning models understandable and explainable, and to establish reliable trust in the models and their results.
Doctoral student: Angelos Chatzimparmpas
Supervisor: Andreas Kerren
Assistant supervisors: Rafael Messias Martins, Ilir Jusufi
Financier: Linnaeus University Centre for Data Intensive Sciences and Applications (DISA)
Timetable: 5 Feb 2018–5 Feb 2023
Subject: Computer and information science (Department of Computer Science and Media Technology, Faculty of Technology)
More about the project
Research in machine learning (ML) and artificial intelligence (AI) has become very popular in recent years, with many types of models proposed to comprehend and predict patterns and trends in data originating from different domains. As these models grow more complex, it becomes harder for users to assess and trust their results, since their internal operations are mostly hidden in black boxes. The explanation of ML/AI models is currently a hot topic in the information visualization (InfoVis) community, with results showing that providing insights into ML models can lead to better predictions and improve the trustworthiness of the results.
Visual analytics (VA) enables us to analyze large and complex information spaces that are collected from complex systems, experiments, or other data sources. Various types of models (e.g., statistical models or machine learning algorithms) are used to examine the collected data sets, to classify them, or to predict future trends. These models also depend on suitable parameter settings and may be integrated into a larger context. All these aspects need to be clearly understood by human experts. To this end, and to establish reliable trust in the models, we develop foundational VA principles, techniques, and tools for analyzing both data and models.
Our current focus in this research area is (1) methodological, providing surveys and guidance through qualitative and quantitative analyses of the field's literature and research community, and (2) technical, developing VA methods to open the black boxes of various ML/AI models. In the latter case, our research encompasses both unsupervised dimensionality reduction (DR) models and supervised learning models, such as single classifiers or multiple classifier systems (i.e., ensemble learning methods).
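To make the two model families mentioned above concrete, the following minimal sketch (not part of the project's own tooling; it assumes scikit-learn and uses its standard Iris sample data) combines them in a way typical of VA pipelines: an ensemble classifier's per-class probability outputs are projected to two dimensions with a DR model, yielding coordinates that a visualization could then display.

```python
# Minimal sketch (illustrative only): feed the probability outputs of an
# ensemble classifier into a dimensionality reduction model, a common
# starting point for VA tools that inspect black-box model behavior.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.manifold import TSNE

X, y = load_iris(return_X_y=True)

# Supervised side: an ensemble of decision trees (a multiple classifier system).
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
probabilities = forest.predict_proba(X)  # shape: (n_samples, n_classes)

# Unsupervised side: reduce the class-probability space to 2-D coordinates
# that an interactive scatterplot could render for inspection.
embedding = TSNE(n_components=2, random_state=0).fit_transform(probabilities)
print(embedding.shape)  # one (x, y) point per sample
```

Points that land between class clusters in such an embedding often correspond to samples the ensemble is uncertain about, which is exactly the kind of pattern interactive VA tools help experts explore.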
To conduct this research, we combine the expertise of people from several fields, ranging from interactive visualization and VA to machine learning and specific domain knowledge. The doctoral project is carried out within the ISOVIS research group.