A Brain-Computer Interface (BCI) decodes brain activity with the aim of establishing a communication pathway that bypasses speech and other forms of muscular activity. This has led to alternative solutions for patients who have lost these abilities, as in late-stage amyotrophic lateral sclerosis, severe cerebral palsy, head trauma, and spinal cord injury. In recent years, BCIs have kindled interest not only from the research community and the assistive technology industry but also from trend watchers and captains of industry. The quest for performant and affordable solutions is most apparent in EEG-based visual BCIs, but two issues that have only been scantily addressed hinder further progress. First, visual BCIs struggle to operate efficiently when the user attends to a selectable target in the visual periphery (covert attention) rather than gazing at the desired target directly (overt attention). Gaze-free visual BCIs would offer great benefits for patients with limited control of eye movements, for whom eye-tracking-based solutions fall short. The second issue concerns the decoding of rapid deflections in EEG activity elicited in synchrony with external stimuli, called event-related potentials (ERPs). BCIs that rely on these potentials traditionally focus on detecting a single component, usually the largest, whereas a stimulus in fact elicits multiple components. Our hypothesis is that the speed, accuracy, and usability of visual BCIs can be significantly improved by accounting for multiple ERP components and their overt/covert attention dependency. We propose a new multicomponent decoder and visual interface capable of handling both overt and covert attention scenarios.
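To make the single- vs. multi-component contrast concrete, the following is a minimal illustrative sketch, not the proposed decoder: it simulates an epoch containing three idealized ERP components (latencies, widths, and amplitudes are assumed for illustration) and compares a correlation score against only the largest component with a score against the full multi-component template.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 64                        # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)  # one-second post-stimulus epoch

def gauss(center, width):
    """Idealized ERP component as a Gaussian bump at a fixed latency."""
    return np.exp(-0.5 * ((t - center) / width) ** 2)

# Three hypothetical components; latencies/amplitudes are illustrative only.
components = [
    0.5 * gauss(0.10, 0.02),   # early visual component
    -0.6 * gauss(0.17, 0.03),  # N1-like negative deflection
    1.0 * gauss(0.30, 0.06),   # late positive (largest) component
]
template = sum(components)     # full multi-component response template

# Simulated single-trial epoch: the full response buried in noise.
epoch = template + 0.5 * rng.standard_normal(t.size)

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

# Traditional approach: score against the largest component alone.
score_single = corr(epoch, components[2])
# Multi-component approach: score against the full template.
score_multi = corr(epoch, template)
```

Because the epoch contains all three components, correlating with the full template captures more of the signal's structure than correlating with the largest component alone, which is the intuition behind a multicomponent decoder.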
arne [dot] vandenkerchove [at] univ-lille.fr