Interpretability of Riemannian tools used in Brain Computer Interfaces
TdS, Tristan Venot, Marie-Constance Corsi, Florian Yger
Published in IEEE Machine Learning for Signal Processing (MLSP), 2025
Download paper | View on HAL

Riemannian methods have established themselves as state-of-the-art approaches in Brain-Computer Interfaces (BCI) in terms of performance. However, their adoption by experimenters is often hindered by a lack of interpretability. In this work, we propose a set of tools designed to enhance practitioners’ understanding of the decisions made by Riemannian methods. Specifically, we develop techniques to quantify and visualize the influence of the different sensors on classification outcomes. Our approach includes a visualization tool for high-dimensional covariance matrices, a classifier-agnostic tool that focuses on the classification process, and methods that leverage the data’s topology to better characterize the role of each sensor. We demonstrate these tools on a specific dataset and provide Python code to facilitate their use by practitioners, thereby promoting the adoption of Riemannian methods in BCI.
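
To make the idea of a classifier-agnostic, per-sensor influence probe concrete, the snippet below is a minimal sketch, not the code released with the paper: it classifies spatial covariance matrices with pyRiemann's MDM on synthetic two-class data, then scores each channel by the accuracy drop observed when that channel is decorrelated from the others in the test covariances. The ablation scheme, the synthetic data, and all names here are illustrative assumptions rather than the authors' actual tools.

```python
# Minimal sketch (assumed setup, not the paper's released code): covariance
# classification with pyRiemann's MDM plus a crude per-sensor influence probe.
import numpy as np
from sklearn.model_selection import train_test_split
from pyriemann.estimation import Covariances
from pyriemann.classification import MDM

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 8, 256

# Synthetic two-class "EEG": class 1 has extra variance on channels 2 and 5.
X = rng.standard_normal((n_trials, n_channels, n_times))
y = np.repeat([0, 1], n_trials // 2)
X[y == 1, 2, :] *= 2.0
X[y == 1, 5, :] *= 2.0

# Spatial covariance matrices (Ledoit-Wolf shrinkage keeps them well conditioned).
covs = Covariances(estimator="lwf").fit_transform(X)

covs_train, covs_test, y_train, y_test = train_test_split(
    covs, y, test_size=0.3, stratify=y, random_state=0
)

# Minimum Distance to Mean classifier with the affine-invariant Riemannian metric.
clf = MDM(metric="riemann")
clf.fit(covs_train, y_train)
baseline = clf.score(covs_test, y_test)

# Classifier-agnostic probe: for each sensor, zero its cross-covariances and
# flatten its variance to the grand mean, so the channel no longer carries
# class-discriminative information, then measure the accuracy drop. Zeroing a
# row/column off the diagonal keeps the matrices symmetric positive definite.
influence = np.zeros(n_channels)
for ch in range(n_channels):
    covs_abl = covs_test.copy()
    covs_abl[:, ch, :] = 0.0
    covs_abl[:, :, ch] = 0.0
    covs_abl[:, ch, ch] = covs_test[:, ch, ch].mean()
    influence[ch] = baseline - clf.score(covs_abl, y_test)

print(f"baseline accuracy: {baseline:.2f}")
for ch, drop in enumerate(influence):
    print(f"channel {ch}: accuracy drop {drop:+.2f}")
```

With this toy data, the two channels whose variance differs between classes should show the largest accuracy drops; on real EEG, such per-sensor scores can be mapped onto the electrode layout for visualization.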