Our mission is to model brain structure and function, as well as their associations with the underlying genetic determinants, relying on statistical, machine, and deep learning methods. The main current research activities focus on modeling structural/functional/effective connectivity, brain microstructure, imaging genetics, and brain computer interfaces (BCI). We participate in the activities of international standardization initiatives, namely ISO/IEC JTC1 within the JPEG DNA IG and DICOM WG32.
The activities of the BraiNavLab are organized around several themes, including Imaging Genetics, Brain connectivity and dynamics, Brain microstructure modeling, and Brain Computer Interfaces, exploiting explainable AI as an enabling technology.
Imaging Genetics (IG)
Imaging genetics targets the investigation and modelling of the link between image-derived endophenotypes and genetic determinants at different scales. Information regarding genetic variants, either at the SNP or aggregated level, and imaging descriptors derived from different modalities is jointly investigated relying on statistical/machine/deep learning models equipped with explanation and interpretation tools, opening the way to holistic modeling of the brain and beyond.
Brain microstructure modeling
Diffusion Magnetic Resonance Imaging (dMRI) is sensitive to the random movements of nuclear spins carried by particles such as water molecules. The diffusion process in brain tissue is hindered by its complex architecture, owing to the broad variety of cellular structures. Microstructure properties can be inferred by fitting the dMRI signal with mathematical models involving different sets of parameters. In-vivo microstructure imaging as enabled by dMRI is a considerable, indispensable, and exciting challenge for shedding light on the biological structures and processes underlying the brain in health and disease.
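As a minimal illustration of such model fitting, the sketch below estimates an apparent diffusion coefficient (ADC) from synthetic signals using the mono-exponential model S(b) = S0 · exp(−b·D); the b-values, noise level, and tissue parameters are hypothetical, and real microstructure models are far richer than this single-compartment case.

```python
import numpy as np

# Hypothetical example: estimate an apparent diffusion coefficient (ADC)
# from synthetic dMRI signals via the mono-exponential model
#   S(b) = S0 * exp(-b * D)
# Taking the log linearizes the model, so a least-squares line fit recovers D.

rng = np.random.default_rng(0)

b_values = np.array([0.0, 500.0, 1000.0, 2000.0])  # s/mm^2 (assumed protocol)
true_S0, true_D = 1.0, 1.0e-3                      # D in mm^2/s, typical for tissue water

signal = true_S0 * np.exp(-b_values * true_D)
signal *= 1.0 + 0.01 * rng.standard_normal(signal.shape)  # noise, simplified as Gaussian

# Linear fit in log-space: ln S = ln S0 - b * D
slope, intercept = np.polyfit(b_values, np.log(signal), deg=1)
adc_estimate = -slope

print(f"estimated ADC: {adc_estimate:.2e} mm^2/s")
```

In practice, richer models (e.g. multi-compartment ones) are fitted with nonlinear optimization, but the same principle of matching a parametric signal model to the measured decay applies.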
Brain dynamics and connectivity
Modelling brain connectivity allows characterizing the interplay among different regions, either in the form of functional dependencies (functional and effective connectivity) or of a backbone of connections (structural connectivity). Beyond this, understanding the relationship between these different forms of connectivity and disentangling the link between structural (physical) connections and function is one of the current hot topics in the field. Functional measures are usually derived from functional MRI (BOLD and ASL) or electroencephalographic (EEG) recordings during task activities or at rest, enabling the capture of patterns of statistical dependence among neural elements. Moreover, focusing on the temporal dynamics of functional connectivity allows uncovering how brain activity evolves across brain regions and over time. These approaches result in spatial dynamic states and associated summary measures, in particular persistence, transition, frequency, and dwell time, which can represent novel indices to discriminate across pathological conditions. For effective connectivity, approaches such as dynamic causal modeling, Granger causality, and the adaptive directed transfer function can be applied to further inform on directed causal effects.
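A minimal sketch of the dynamic functional connectivity idea, under simplifying assumptions (two synthetic signals, sliding-window Pearson correlation, a threshold-based two-state assignment in place of the clustering used in real pipelines):

```python
import numpy as np

# Hypothetical sketch: sliding-window functional connectivity and dwell time.
# Two synthetic signals are positively coupled in the first half of the scan
# and decoupled in the second; the windowed correlation tracks that change,
# and dwell time counts how long each connectivity "state" persists.

rng = np.random.default_rng(1)
n_t = 400
shared = rng.standard_normal(n_t)
x = shared + 0.3 * rng.standard_normal(n_t)
y = np.where(np.arange(n_t) < n_t // 2,
             shared + 0.3 * rng.standard_normal(n_t),   # coupled half
             rng.standard_normal(n_t))                  # decoupled half

win = 60  # window length in samples (assumed)
corr = np.array([np.corrcoef(x[t:t + win], y[t:t + win])[0, 1]
                 for t in range(n_t - win)])

# Binarize into two states (high/low coupling) and measure dwell times.
states = (corr > 0.5).astype(int)
change_points = np.flatnonzero(np.diff(states)) + 1
runs = np.diff(np.concatenate(([0], change_points, [len(states)])))
print("mean dwell time (windows):", runs.mean())
```

Real analyses replace the threshold with k-means or hidden Markov models over whole connectivity matrices, but the summary measures (dwell time, transition frequency) are computed from the resulting state sequence in exactly this way.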
From a structural point of view, the connectivity is represented by the white matter streamlines linking the different brain regions. Such a link can be quantified by the normalized number of fibers or by summary statistics of microstructural indices collected along the fiber bundles (microstructure-informed connectome).
Brain structure and function can finally be represented in the form of brain networks, comprising a set of nodes (neural elements) and edges (their mutual connections). These networks can be further examined with network science methods, extracting measures of segregation, integration, or influence.
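Two classic examples of such network measures can be sketched on a toy adjacency matrix (the six-node network below is invented for illustration): global efficiency as an integration measure and the mean clustering coefficient as a segregation measure.

```python
import numpy as np

# Hypothetical sketch: integration and segregation measures on a small
# binary, undirected "brain network" (two triangles joined by one bridge).
A = np.zeros((6, 6), dtype=float)
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

n = A.shape[0]

# Shortest path lengths via Floyd-Warshall on the binary graph.
D = np.where(A > 0, 1.0, np.inf)
np.fill_diagonal(D, 0.0)
for k in range(n):
    D = np.minimum(D, D[:, [k]] + D[[k], :])

# Global efficiency (integration): mean inverse shortest path length.
global_efficiency = (1.0 / D[~np.eye(n, dtype=bool)]).mean()

# Clustering coefficient (segregation): fraction of each node's
# neighbor pairs that are themselves connected.
clustering = []
for i in range(n):
    nbrs = np.flatnonzero(A[i])
    k_i = len(nbrs)
    if k_i < 2:
        clustering.append(0.0)
        continue
    links = A[np.ix_(nbrs, nbrs)].sum() / 2.0
    clustering.append(2.0 * links / (k_i * (k_i - 1)))
mean_clustering = float(np.mean(clustering))

print(f"global efficiency: {global_efficiency:.3f}")
print(f"mean clustering:   {mean_clustering:.3f}")
```

On this toy graph the two triangles yield high clustering, while the single bridge limits integration; on real connectomes the same quantities are typically computed with dedicated toolboxes on weighted matrices.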
Brain Computer Interfaces
Active BCI
This project intends to fill the gap between the human brain and external devices in the framework of motor-imagery (MI)-based brain computer interface (BCI) technologies. A BCI is a system able to read voluntary changes in brain activity and translate the neural signals into a message or a computational command in real-time. Among the various types of systems, a non-invasive BCI based on electroencephalography (EEG) and MI transforms neuroelectric signals derived from motor regions into command outputs for external effectors. The success of translating the signal into messages depends on several methodological factors, in particular decoding algorithms calibrated on the motor regions using a suitable representation of the data that simplifies the classification or detection of specific brain patterns. State-of-the-art devices provide good, albeit limited, performance when few EEG electrodes are used, calling for novel approaches. In answer to this call, we propose a new method for extracting discriminative features that distinguish different imagined movements within a multivariate brain connectivity and deep learning framework. The system is trained to decode brain activity during MI tasks using the coupling information or causal influence between signals, and produces corresponding signals for an interface that controls an external device.
Passive BCI
Passive BCI allows monitoring a subject's mental state without the need for any active input. It has applications in fields where operator mistakes may cause severe accidents, such as air traffic control, plant surveillance, or driving. It can also be applied in the manufacturing context, specifically by monitoring an operator's level of vigilance, defined as the capacity to sustain one's attention during the realization of a task, and their mental workload, that is, the effort required to respond to the task's demands. Fluctuations in both vigilance and mental workload can be observed over time by analyzing the EEG, since they are associated with specific EEG patterns. The goal of the project is to create a deep learning pipeline capable of monitoring an operator's mental state in real-time during the execution of prolonged activities. The model needs to alert operators through audio or video cues, should they become too tired to work without endangering themselves or the assembly procedures.
EXplainable Artificial Intelligence (XAI)
While machine and deep learning approaches have undeniable advantages, the more complex the model, the more essential explanations become to build trust in the outcomes and interpret the results, especially in medicine, healthcare, and neuroscience. Such complexity raises questions of trust, bias, and interpretability, as machine/deep learning methods often act as a "black box". XAI was born to make model behaviour comprehensible to humans, aiming at explaining how the model reached a specific outcome, how the features contributed, and to what extent the model is confident about the decision (uncertainty).
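One of the simplest model-agnostic tools for the "how did the features contribute" question is permutation feature importance, sketched below on synthetic data with an ordinary least-squares model standing in for any trained predictor (the data, model, and feature count are all hypothetical):

```python
import numpy as np

# Hypothetical sketch of permutation feature importance: shuffle one feature
# at a time and measure how much the model's error grows. Features the model
# relies on show a large increase; irrelevant ones barely change it.

rng = np.random.default_rng(2)
n, p = 500, 3
X = rng.standard_normal((n, p))
# Only the first two features matter; the third is pure noise.
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.standard_normal(n)

# "Model": ordinary least-squares fit (stand-in for any trained predictor).
w, *_ = np.linalg.lstsq(X, y, rcond=None)
base_mse = np.mean((X @ w - y) ** 2)

importance = []
for j in range(p):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # break the feature-target link
    importance.append(np.mean((Xp @ w - y) ** 2) - base_mse)

for j, imp in enumerate(importance):
    print(f"feature {j}: MSE increase {imp:.3f}")
```

The same recipe applies unchanged to deep models; more refined attribution methods (e.g. gradient-based or Shapley-value approaches) differ in how the contribution is allocated, not in the underlying question.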