Virtual reality and brain-computer interface engineer

RITMEA Project

Development and deployment of a BCI+VR platform

One of the BCI team’s research projects is part of Axis 5 of the CPER RITMEA, entitled “Silver economy – smart cities”. More specifically, it falls within work package 1 of this axis, which involves specifying, designing, developing and testing an instrumented wheelchair intended for use in uncontrolled environments by people with severe motor disabilities.

One of the challenges of our research will be to assess the user’s motor skills in real time and, if these no longer enable them to control the wheelchair effectively, to propose an alternative interaction and control technique. Both the analysis of the user’s abilities and the alternative channel for piloting the wheelchair will rely on a brain-computer interface.

To validate this approach of online analysis of the user’s motor skills and, if necessary, to offer them an alternative channel for controlling the wheelchair, we will start by developing a simulator. A first version of this simulator will use a virtual reality headset to visualize the user’s environment. We will then integrate an electroencephalography system into the simulator and develop an interface that supports this additional sensor.

The simulator will enable us to carry out an experimental study involving a large number of disabled people. The precise objectives of this study and the methodology chosen to carry it out will be validated by a research ethics committee.

For more information, download the detailed job description.

PhD position, 2024 (2)

Inner speech detection

Detection of neuromarkers of an inner speech task by electroencephalography: application to the detection of auditory verbal hallucinations

Download in PDF


The brain regions involved in speech production were identified in the middle of the 19th century by Paul Broca, a French physician and researcher, who also gave his name to the brain area mainly concerned. Numerous markers of brain activity (neuromarkers) correlated with the performance of this task can also be observed in other regions of the brain, notably in Wernicke’s area, which is mainly associated with language comprehension.

“Inner speech” is considered a task very similar to speech production, during which speech is simulated without going as far as articulation (Rapin 2011). Functional magnetic resonance imaging (fMRI) makes it possible to locate with good precision the brain areas activated during an inner speech production task, although it cannot yet clearly explain the different mental correlates of this task (Loevenbruck et al. 2019).

From a brain activity perspective, some studies suggest that auditory verbal hallucinations (AVHs) are very similar to inner speech (Chung 2023). AVHs are among the phasic symptoms that most handicap patients suffering from severe psychiatric disorders, such as schizophrenia. It is estimated that approximately 25% of affected patients see no improvement in their symptoms with pharmacological or psychological treatment.

Recently, new therapeutic approaches have been considered to treat AVHs, including fMRI-based neurofeedback (Fovet et al. 2016). The patient’s brain activity is recorded continuously and analyzed to extract neuromarkers, certain characteristics of which are then transformed into simple sensory information fed back to the patient. This allows patients to regain a degree of control over their brain activity, and therefore potentially to reduce certain symptoms.
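The closed loop just described — record brain activity, extract a neuromarker, return a simple sensory signal — can be sketched in a few lines. This is only a minimal illustration, not the clinical pipeline: the signal is synthetic, and the choice of alpha-band (8–12 Hz) power as the neuromarker is an assumption made for the example.

```python
import numpy as np

FS = 250  # sampling rate in Hz (assumed for this sketch)

def bandpower(chunk, fs, low, high):
    """Power of `chunk` in the [low, high] Hz band, via the periodogram."""
    freqs = np.fft.rfftfreq(len(chunk), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(chunk)) ** 2 / len(chunk)
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].sum()

def feedback_value(chunk, fs=FS, baseline=1.0):
    """Map a neuromarker (here, illustrative alpha-band power) to a
    bounded feedback intensity in [0, 1] to be shown to the patient."""
    marker = bandpower(chunk, fs, 8.0, 12.0)
    return float(np.clip(marker / (marker + baseline), 0.0, 1.0))

# One iteration of the loop on a synthetic 1-second "EEG" chunk
# containing a strong 10 Hz component:
rng = np.random.default_rng(0)
t = np.arange(FS) / FS
chunk = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(FS)
fb = feedback_value(chunk)
```

In a real system, this computation would run continuously on each new window of data, with the feedback rendered as a visual or auditory stimulus.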

State of research in the CRIStAL and LILNCOG laboratories

For the moment, the neuromarkers potentially useful for implementing a neurofeedback loop are obtained by analyzing fMRI signals. Even though clinical studies are still underway to prove its effectiveness, we already know that the use of this new therapeutic approach will be severely limited by the difficulty of accessing MR imaging facilities. It is therefore essential to start considering another functional imaging modality now, notably electroencephalography (EEG).

The “Lille Neuroscience and Cognition” laboratory, more precisely its Plasticity and Subjectivity team, is a pioneer in the specification and implementation of fMRI-based neurofeedback therapies. Recent work carried out in this team, notably in the ANR INTRUDE project, has shown that it is possible to decode in real time the brain activity measured by fMRI. This decoding highlights AVH crises and thus paves the way to a potential neurofeedback therapy.

The BCI team at the CRIStAL laboratory specializes in the specification, design and validation of brain–computer interfaces (BCIs). These devices allow people with severe motor disabilities to maintain a channel of communication with those around them when they have lost the ability to control their muscular activity. In the BCIs developed at CRIStAL, brain activity is measured with simple EEG devices.

The researchers from the two teams know each other well and have already established collaborative links, notably through the co-supervision of Candela Donantueno’s thesis, funded by the PEARL program (Donantueno et al. 2023). This second doctoral thesis, co-supervised by researchers from both teams, will allow them to pursue this collaboration concretely, strengthening the research component associated with the NeurotechEU alliance, of which our university is a member.

Objectives and progress of the thesis

The primary objective of the thesis will be to show that functional brain imaging by electroencephalography allows the detection of neuromarkers of an inner speech task. If this first objective is fully achieved, a secondary objective will consist of testing whether the same, or similar, neuromarkers allow the detection of auditory verbal hallucination crises in patients with schizophrenia.

The first step will be to study the state of the art in EEG-based analysis of inner speech. EEG-based analysis of speech production has been explored intensively in recent years and the bibliography is rich. In contrast, inner speech has been analyzed mainly by functional MRI, and only a few EEG studies exist, so the natural starting point will be to identify all the existing methods. From this review, it will be necessary to select approaches that can precisely analyze brain activity in predefined areas, in this case those which could contribute to the occurrence of AVH episodes. A first performance comparison will be carried out on pre-existing public data (Nieto et al. 2022).
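As a rough illustration of such a performance comparison, the sketch below classifies two classes of synthetic single-channel “trials” using log band-power features and a nearest-class-mean rule. The data, features and classifier are all assumptions chosen for simplicity; an actual comparison would use the public inner speech dataset and the methods identified in the state of the art.

```python
import numpy as np

FS = 256         # sampling rate (assumed)
N_SAMPLES = 256  # one-second trials

def bandpower_features(trial, fs=FS):
    """Log band powers in the mu (8-12 Hz) and beta (18-26 Hz) bands —
    a deliberately simple EEG feature set used only for this sketch."""
    freqs = np.fft.rfftfreq(len(trial), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(trial)) ** 2 / len(trial)
    mu = psd[(freqs >= 8) & (freqs <= 12)].sum()
    beta = psd[(freqs >= 18) & (freqs <= 26)].sum()
    return np.log([mu + 1e-12, beta + 1e-12])

def make_trials(n, freq, rng):
    """Synthetic trials: a sinusoid at `freq` Hz buried in noise."""
    t = np.arange(N_SAMPLES) / FS
    return [np.sin(2 * np.pi * freq * t) + 0.5 * rng.standard_normal(N_SAMPLES)
            for _ in range(n)]

rng = np.random.default_rng(42)
X = np.array([bandpower_features(tr) for tr in make_trials(40, 10, rng)]
             + [bandpower_features(tr) for tr in make_trials(40, 22, rng)])
y = np.array([0] * 40 + [1] * 40)

# Nearest-class-mean classifier, trained on the first 30 trials per class
# and evaluated on the remaining 10 per class.
train = np.r_[0:30, 40:70]
test = np.r_[30:40, 70:80]
means = np.array([X[train][y[train] == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((X[test][:, None, :] - means[None]) ** 2).sum(-1), axis=1)
accuracy = (pred == y[test]).mean()
```

Comparing candidate methods would then amount to swapping the feature extraction and classification steps while keeping the same train/test protocol and reporting accuracy for each combination.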

It will then be necessary to perform a first experimental study, on control subjects, to verify that the selected EEG analysis method(s) correctly detect the phases of an inner speech task. Depending on the needs identified during the state-of-the-art review, the tasks carried out by the subjects may or may not involve verbal interaction with another person and/or an avatar (Loevenbruck et al. 2019). The PhD candidate will write the experimental protocol and have it validated by the research ethics committee of the University of Lille. The data recorded during this experiment should, if possible, be made public so as to serve as a basis for comparing different EEG processing methods.

If this first study makes it possible to identify reliable neuromarkers, resulting from EEG signal processing algorithms that can be executed in real time, a second experimental study could be considered. This second experimental study would include both control subjects and patients.


The person recruited to prepare this doctoral thesis must hold a Master’s degree, or an equivalent graduate degree, authorizing registration at the MADIS Doctoral School. This degree must be in a field such as automation, industrial computing, signal and image processing, computational neuroscience, computer science or artificial intelligence.

The selected candidate will have solid programming skills, particularly but not exclusively in Python, enabling them to quickly exploit the signal analysis and classification libraries already available.

An ability to conduct experimental research, acquired during a project or internship and attested by at least one letter of recommendation, will be an asset. Obviously, creativity, autonomy, team spirit and communication skills are also valuable assets.

Concerning the question of secularism, the University of Lille’s legal counsel points out that a contractual doctoral student, whether or not they hold a teaching position, is considered a public agent and therefore may not demonstrate their religious affiliation, in particular by displaying a sign or clothing intended to mark that affiliation.

Finally, the research work will be carried out in a restricted access zone (ZRR) within the meaning of Article R413-5-1 of the Penal Code, or in a sensitive unit. Your appointment and/or assignment can only take place, depending on your situation, after an opinion from the Senior Defense and Security Official (HFDS).


  1. Chung, K. H. L. 2023. “Who Is Talking Inside My Head? Establishing the Neurophysiological Basis of Inner Speech and Its Relation to Auditory Verbal Hallucinations.” PhD thesis, University of New South Wales, Sydney.

  2. Donantueno, C., P. Yger, F. Cabestaing, and R. Jardri. 2023. “fMRI-Based Neurofeedback Strategies and the Way Forward to Treating Phasic Psychiatric Symptoms.” Frontiers in Neuroscience 17.

  3. Fovet, T., N. Orlov, M. Dyck, P. Allen, K. Mathiak, and R. Jardri. 2016. “Translating Neurocognitive Models of Auditory-Verbal Hallucinations into Therapy: Using Real-Time fMRI-Neurofeedback to Treat Voices.” Frontiers in Psychiatry 7.

  4. Loevenbruck, H., R. Grandchamp, L. Rapin, M. Perrone‐Bertolotti, C. Pichat, C. Haldin, E. Cousin, et al. 2019. “Neural Correlates of Inner Speaking, Imitating and Hearing: An fMRI Study.” In Proceedings of the 19th International Congress of Phonetic Sciences.

  5. Nieto, N., V. Peterson, H.-L. Rufiner, J.-E. Kamienkowski, and R. Spies. 2022. “Thinking Out Loud, an Open-Access EEG-Based BCI Dataset for Inner Speech Recognition.” Nature Scientific Data 9.

  6. Rapin, L. 2011. “Hallucinations Auditives Verbales et Langage Intérieur Dans La Schizophrénie: Traces Physiologiques et Bases Cérébrales.” PhD thesis, Université de Grenoble.