
Visualisation important to understand machine learning

Researchers from the VAESS and ISOVIS research groups in computer science have conducted a meta-analysis of the visual interpretation of machine learning models. Their survey of surveys, published by Sage Journals, confirms the increasing trend of interpreting machine learning with visualisations and identifies research opportunities for visualisation researchers.

In all, the “survey of surveys” by researchers from the Visual Analytics for Engineering Smarter Systems (VAESS) and Information and Software Visualization (ISOVIS) research groups covered 18 surveys discussing 520 publications.

Teach a computer to predict

Machine learning is about teaching a computer system to make predictions that are as accurate as possible when supplied with sample data. A machine learning algorithm is trained on existing samples with known answers, and then asked to answer the same questions for new, previously unseen data – that is, to make a prediction. The benefit of using machines instead of humans to analyse data is that automatic techniques can dramatically improve efficiency and scalability for large data sets.
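
To make the train-then-predict workflow concrete, here is a minimal sketch in Python using scikit-learn and one of its bundled toy datasets; the library, dataset and model are our own illustrative assumptions, as the article names no specific tools.

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    # Samples with known answers: flower measurements labelled with the species
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # "Teach" the model the answers for the existing samples ...
    model = KNeighborsClassifier().fit(X_train, y_train)

    # ... then let it answer the same question for data it has never seen
    print(model.predict(X_test[:5]))    # predicted species for five unseen flowers
    print(model.score(X_test, y_test))  # fraction of correct predictions on held-out data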

Machine learning is nowadays used in many disciplines, such as medicine, bioinformatics and construction sciences, in both academia and industry. An example is the diagnosis of a disease by applying machine learning techniques to X-ray and/or MRI images from patients in a hospital. The techniques can indicate whether someone is healthy or not, based on data previously collected from other patients and on training guided by medical doctors.

A crucial point is that the humans involved – doctors, nurses and patients in our example – must be able to trust the results of machine learning. They should also understand why a prediction has been made. Here, visualisation can help provide answers to these questions; hence, visualisation is a key aspect of the emerging field of explainable artificial intelligence (XAI).
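
As a very simple illustration of the kind of help visualisation can give, the sketch below charts which input features a trained model relies on most. The dataset, model and libraries (scikit-learn and matplotlib) are again our own assumptions; the surveyed literature covers far richer visual analytics techniques than this.

    import matplotlib.pyplot as plt
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    # Train a model on a medical toy dataset (tumour measurements, benign/malignant labels)
    data = load_breast_cancer()
    model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

    # Rank the ten features the model weighted most heavily
    top = sorted(zip(model.feature_importances_, data.feature_names), reverse=True)[:10]
    values, names = zip(*top)

    plt.barh(names, values)  # horizontal bar chart of the top ten features
    plt.xlabel("Feature importance")
    plt.title("Which measurements drive the model's predictions?")
    plt.tight_layout()
    plt.show()

Even a chart as basic as this gives a clinician a first answer to the question of why the model made its prediction.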

Identified research opportunities

“The most important findings of our survey of surveys are the research opportunities we have identified, which are helpful to visualisation researchers, real-world practitioners from various disciplines, and machine learning experts. In more detail, we can confirm the growing trend in recent years of interpreting machine learning with visualisations, and that visualisation can assist in, for example, the online training of deep learning models (a subset of machine learning techniques) and in enhancing trust in machine learning models.

“But still, the question of exactly how this assistance should take place remains an open challenge for the visualisation community. Our survey of surveys brings these issues to light and proposes promising directions for future research”, says Angelos Chatzimparmpas, doctoral student in computer science and main author of the article.

The article “A survey of surveys on the use of visualization for interpreting machine learning models” has been published in the Sage journal Information Visualization.