PhD position, 2024 (2)

inner speech detection

Detection of neuromarkers of an inner speech task by electroencephalography: application to the detection of auditory verbal hallucinations



The brain regions involved in speech production were first identified in the mid-19th century by Paul Broca, a French physician and researcher, after whom the main area involved is named. Numerous markers of brain activity (neuromarkers) correlated with the performance of this task can also be highlighted in other regions of the brain, notably in Wernicke’s area, which is mainly associated with language comprehension.

“Inner speech” is considered a task very similar to speech production, during which speech is simulated without going as far as articulation (Rapin 2011). Functional magnetic resonance imaging (fMRI) makes it possible to locate with good precision the brain areas activated during an inner speech task, without, however, being able to clearly disentangle the different mental correlates of this task (Loevenbruck et al. 2019).

From a brain activity perspective, some studies suggest that auditory verbal hallucinations (AVHs) are very similar to inner speech (Chung 2023). AVHs are among the phasic symptoms that most handicap patients suffering from severe psychiatric disorders, such as schizophrenia. It is estimated that approximately 25% of affected patients see no improvement in their symptoms with pharmacological or psychological treatment.

Recently, new therapeutic approaches have been considered to treat AVHs, including fMRI-based neurofeedback (Fovet et al. 2016). The patient’s brain activity is recorded continuously and analyzed to extract neuromarkers, certain characteristics of which are then transformed into simple sensory information fed back to the patient. This allows patients to regain some control over their brain activity, and therefore potentially reduce certain symptoms.
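The closed loop described above (acquire a window of signal, extract a scalar neuromarker, map it to a bounded feedback value) can be sketched as follows. This is a minimal illustration on synthetic data, not the protocol used in the cited studies: the band choice, window length and mapping are all assumptions.

```python
import numpy as np

def extract_neuromarker(window, fs=250.0, band=(8.0, 12.0)):
    """Toy neuromarker: mean spectral power in a frequency band (here alpha)."""
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    power = np.abs(np.fft.rfft(window)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return power[mask].mean()

def to_feedback(marker, baseline):
    """Map the neuromarker to a simple sensory feedback value in [0, 1]."""
    return float(np.clip(marker / (2.0 * baseline), 0.0, 1.0))

rng = np.random.default_rng(0)
baseline = extract_neuromarker(rng.standard_normal(500))  # calibration window
window = rng.standard_normal(500)       # one new 2-second window at 250 Hz
feedback = to_feedback(extract_neuromarker(window), baseline)
assert 0.0 <= feedback <= 1.0           # feedback stays in the displayable range
```

In a real system the window would come from the acquisition device in real time, and the feedback value would drive a visual or auditory display.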

State of research in the CRIStAL and LILNCOG laboratories

For the moment, neuromarkers potentially useful for implementing a neurofeedback loop are obtained by analyzing fMRI signals. Even though clinical studies proving the effectiveness of this new therapeutic approach are still underway, we already know that its use will be very limited by the difficulty of accessing MR imaging facilities. It is therefore essential to start considering another functional imaging modality now, notably electroencephalography (EEG).

The “Lille Neuroscience and Cognition” laboratory, more precisely the Plasticity and Subjectivity team, is a pioneer in the specification and implementation of fMRI-based neurofeedback therapies. Recent work carried out in this team, notably in the ANR INTRUDE project, has shown that it is possible to decode brain activity measured by fMRI in real time. This decoding detects AVH episodes and thus paves the way for a potential neurofeedback therapy.

The BCI team at the CRIStAL laboratory specializes in the specification, design and validation of brain–computer interfaces (BCIs). These devices allow people with severe motor disabilities to maintain a channel of communication with those around them when they have lost all ability to control their muscular activity. In the BCIs developed at CRIStAL, brain activity is measured with simple EEG devices.

The researchers from the two teams know each other well and have already established collaborative links, notably through the co-supervision of Candela Donantueno’s thesis, funded by the PEARL program (Donantueno et al. 2023). The present doctoral thesis, also co-supervised by researchers from both teams, will allow them to continue this collaboration concretely, strengthening the research component of the NeurotechEU alliance, of which our university is a member.

Objectives and progress of the thesis

The primary objective of the thesis will be to show that functional brain imaging by electroencephalography allows the detection of neuromarkers of an inner speech task. If this first objective is fully achieved, a secondary objective will consist of testing whether the same, or similar, neuromarkers allow the detection of auditory verbal hallucination episodes in schizophrenia patients.

The first step will be to study the state of the art in EEG-based analysis of inner speech. EEG analysis of overt speech production has been explored extensively in recent years, and the bibliography is rich. In contrast, inner speech has been analyzed mainly with functional MRI, and only a few EEG studies exist; the natural starting point is therefore to identify all existing methods. To conclude this state-of-the-art analysis, it will be necessary to select approaches that can precisely analyze brain activity in predefined areas, in this case those that could contribute to the occurrence of AVH episodes. A first performance comparison will be carried out on pre-existing public data (Nieto et al. 2022).
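In practice, such a performance comparison amounts to benchmarking classification pipelines on labelled EEG epochs. The sketch below is a minimal baseline under stated assumptions: it uses scikit-learn, synthetic epochs standing in for the public dataset (all shapes and the injected variance difference are illustrative), log-variance features per channel, and a cross-validated linear discriminant classifier.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Synthetic stand-in for epoched EEG: 120 trials, 8 channels, 256 samples each.
n_trials, n_channels, n_samples = 120, 8, 256
X_epochs = rng.standard_normal((n_trials, n_channels, n_samples))
y = rng.integers(0, 2, n_trials)          # two inner-speech classes
# Inject a class-dependent variance difference on one channel so the
# toy problem is learnable.
X_epochs[y == 1, 0, :] *= 1.5

# Feature extraction: log-variance per channel, a common EEG baseline.
features = np.log(X_epochs.var(axis=2))

clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, features, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

Comparing candidate methods then reduces to swapping the feature extraction and classifier steps while keeping the same cross-validation protocol.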

It will then be necessary to perform a first experimental study, on control subjects, to verify that the chosen EEG analysis method(s) correctly detect the phases of an inner speech task. Depending on the needs identified in the state of the art, the tasks performed by the subjects may or may not involve a verbal interaction with another person and/or an avatar (Loevenbruck et al. 2019). The PhD candidate will write the experimental protocol and have it validated by the research ethics committee of the University of Lille. The data recorded during this experiment should, if possible, be made public to serve as a basis for comparing different EEG processing methods.

If this first study makes it possible to identify reliable neuromarkers, resulting from EEG signal processing algorithms that can be executed in real time, a second experimental study could be considered. This second experimental study would include both control subjects and patients.


The person recruited to prepare this doctoral thesis must hold a Master’s degree, or an equivalent graduate degree, authorizing registration at the MADIS Doctoral School, in a specialty within the scope of automation, industrial computing, signal and image processing, computational neuroscience, computer science or artificial intelligence.

The selected candidate will have solid programming skills, particularly, but not exclusively, in Python, allowing them to quickly exploit existing signal analysis and classification libraries.

An ability to conduct experimental research, acquired during a project or internship and attested by at least one letter of recommendation, will be an asset. Obviously, creativity, autonomy, team spirit and communication skills are also valuable assets.

Concerning the question of secularism, the lawyer of the University of Lille recalls that a contractual doctoral student, whether or not in a teaching position, is considered a public agent and therefore cannot display their religious affiliation, in particular by wearing a sign or clothing intended to mark it.

Finally, the research work will be carried out in a restricted access zone (ZRR) within the meaning of article R413-5-1 of the penal code or a sensitive unit. Your appointment and/or assignment can only take place, depending on your situation, after advice from the Senior Defense and Security Officer (HFDS).


  1. Chung, K. H. L. 2023. “Who Is Talking Inside My Head? Establishing the Neurophysiological Basis of Inner Speech and Its Relation to Auditory Verbal Hallucinations.” PhD thesis, University of New South Wales, Sydney.

  2. Donantueno, C., P. Yger, F. Cabestaing, and R. Jardri. 2023. “fMRI-Based Neurofeedback Strategies and the Way Forward to Treating Phasic Psychiatric Symptoms.” Frontiers in Neuroscience 17.

  3. Fovet, T., N. Orlov, M. Dyck, P. Allen, K. Mathiak, and R. Jardri. 2016. “Translating Neurocognitive Models of Auditory-Verbal Hallucinations into Therapy: Using Real-Time fMRI-Neurofeedback to Treat Voices.” Frontiers in Psychiatry 7.

  4. Loevenbruck, H., R. Grandchamp, L. Rapin, M. Perrone‐Bertolotti, C. Pichat, C. Haldin, E. Cousin, et al. 2019. “Neural Correlates of Inner Speaking, Imitating and Hearing: An fMRI Study.” In Proceedings of the 19th International Congress of Phonetic Sciences.

  5. Nieto, N., V. Peterson, H.-L. Rufiner, J.-E. Kamienkowski, and R. Spies. 2022. “Thinking Out Loud, an Open-Access EEG-Based BCI Dataset for Inner Speech Recognition.” Scientific Data 9.

  6. Rapin, L. 2011. “Hallucinations Auditives Verbales et Langage Intérieur Dans La Schizophrénie: Traces Physiologiques et Bases Cérébrales.” PhD thesis, Université de Grenoble.



PhD position, 2024 (1)

Multimodal human-machine interactions for bedridden people

Thesis subject for doctorate in computer science, BCI team, CRIStAL laboratory, University of Lille, campaign 2024

  • PhD thesis 2024-2027 in computer science
  • Location: Lille (CRIStAL, CNRS, University of Lille, in Villeneuve d’Ascq), BCI team
  • Thesis supervisor: José Rouillard (MdC HDR, CRIStAL, BCI, University of Lille)
  • Funding: Scholarship from the University of Lille (application in progress)

Summary: This thesis proposes to study human–machine interactions for bedridden people whose abilities have been reduced by an accident or illness. Such patients are often bedridden and dependent on other people, or on robots, to carry out tasks and communicate with those around them. Under these conditions, traditional means of interaction (keyboard/mouse) can no longer be used easily, and new ways must be implemented to offer adaptive, personalized communication between the patient and the machine. Oral and visual interfaces (gaze tracking; detection of postures, gestures or micro-gestures), or even brain–computer interfaces, can allow a bedridden person to communicate (see locked-in syndrome for the most extreme cases, in which the patient can no longer move their muscles or communicate orally at all). Each case requires lengthy preparation and adjustment of the computer system. This thesis seeks automatic adaptation of the system, allowing personalization not only to the patient, but also to their entourage and to the evolution of their pathology.

Keywords: multimodality, natural user interface, health, disability, bedridden

Scientific and economic context

Despite the progress made in the field of human–machine interfaces (HMIs), implementing effective solutions usable on a daily basis is a real scientific, technological and societal challenge. Dependent people (at home, in hospital, in nursing homes, etc.) and/or disabled people have a crucial need for communication, but no simple, effective solution easily adaptable to their physical and cognitive abilities is yet truly available. When these users are bedridden, certain tools and technologies are only partially usable. For example, the occipital part of the skull often rests on a pillow, and placing electrodes over this region of the scalp is very difficult if not impossible. It is therefore necessary to study other, complementary means of establishing effective communication with the patient, and to propose original ideas to explore, such as the design of an adaptable pillow incorporating electrodes.

In this doctoral thesis in computer science, we propose to study different interaction modalities that can be used successively or jointly by a patient lying in bed. A multimodal approach (Coutaz et al. 1995) could probably improve communication performance by coupling endogenous and exogenous solutions that are already known but rarely used simultaneously: monitoring gaze and speech when possible, detecting gestures and micro-gestures, detecting movement intention by electroencephalography (EEG) or magnetoencephalography (MEG), etc. (Corsi et al. 2018).
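One simple way to couple such modalities is decision-level fusion: each modality classifier outputs class probabilities, which are then combined, for example by weighted averaging. The sketch below is purely illustrative; the modality names, weights and probability values are hypothetical, not part of any system described here.

```python
import numpy as np

def fuse_probabilities(modality_probs, weights=None):
    """Weighted average of per-modality class probability vectors.

    modality_probs: dict mapping modality name -> probability vector
    weights: dict mapping modality name -> reliability weight (default: equal)
    """
    names = list(modality_probs)
    if weights is None:
        weights = {name: 1.0 for name in names}
    w = np.array([weights[name] for name in names], dtype=float)
    P = np.stack([np.asarray(modality_probs[name], dtype=float) for name in names])
    fused = (w[:, None] * P).sum(axis=0) / w.sum()
    return fused / fused.sum()  # renormalize for safety

# Hypothetical outputs for a 3-command task ("yes", "no", "help"):
probs = {
    "gaze":    [0.6, 0.3, 0.1],
    "eeg":     [0.4, 0.4, 0.2],
    "gesture": [0.7, 0.2, 0.1],
}
fused = fuse_probabilities(probs, weights={"gaze": 2.0, "eeg": 1.0, "gesture": 1.0})
print(fused.argmax())  # index of the selected command -> 0 ("yes")
```

The per-modality weights are where adaptation to the user and context could plug in: a modality that becomes unreliable (e.g. speech after a relapse) can simply be down-weighted.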

The state of the subject in the host laboratory

The BCI (Brain-Computer Interface) team at the CRIStAL laboratory is particularly interested in brain-computer interfaces for patients suffering from severe disabilities and/or illnesses preventing them from using traditional HMIs to communicate and act on the world. We have been collaborating for many years with various organizations in the health field (Lille University Hospital, INSERM, SCALAB laboratory, etc.) to study solutions based on non-invasive BCIs. The theses of Alban Duprès (2013-2016) (Duprès 2016) and Jimmy Petit (2019-2022) (Petit 2022) advanced the study of hybrid multimodal BCIs and the exploitation of somesthetic evoked potentials, in particular by studying cerebral responses to vibrations applied to the wrists of patients. This thesis subject aims to explore other avenues that are still little exploited, such as the synergistic use of different interaction modalities (visual, auditory, kinesthetic) to allow the patient to regain autonomy by dialoguing with their entourage or with an assistant robot. This could be done with partners from the European Brain and Technology Alliance, in which our research team and the University of Lille are involved.

Objectives and expected results

This involves modeling and designing a truly usable human–machine dialogue solution for people who can no longer use a single mode of interaction. BCI will be one of the potential solutions, to be coupled with other means of interacting, adaptable according to the user, the session, the context, etc. A demonstrator will be developed, initially within the interaction rooms of the CRIStAL laboratory, then tested outside the laboratory (depending on the availability of our partners, such as the Lille University Hospital or the Hopale foundation in Berck, for example).

Forecast work program

First year:

  • State of the art: A bibliographic study will be conducted by the candidate to identify the state of the art in the field, for these bedridden users, both in terms of signal processing and of the ergonomics of the solutions currently proposed around natural user interfaces (Han et al. 2023), (Spandana et al. 2021). Typically, it has been shown that SSVEPs are relatively easy to detect over the occipital part of the skull, but that for bedridden patients this solution is difficult or even impossible to implement.
  • Usage study: The candidate will then have to find out from our partners (Hub Santé, health foundations and organizations, medical homes, etc.) about the uses and technologies currently used to help patients with reduced mobility and/or so-called impeded users (following a stroke, an accident, a progressive illness, etc.) to perform various tasks, with and without the help of a computerized system (communicate, ask for one-off help, emergency help, etc.).

Second year:

  • System modeling; Study protocol; Preliminary laboratory study and development

Third year:

A laboratory experiment will collect data and check whether the scientific hypotheses put forward are validated or rejected. Ideally, an experimental campaign outside the laboratory will be carried out (medical homes, patients’ homes, etc.) to test the usability of the proposed solutions in situ. The third year also includes publication of the scientific results obtained, writing of the thesis, and preparation of the student’s professional project.

Application and skills sought

The successful candidate must hold a Master M2 degree or equivalent in computer science and must show a strong interest in carrying out high-quality research. The candidate must have experience or a strong interest in software development (Python, C#, JS, Firebase, MQTT, Node-RED, Unity, etc.) and in health-oriented human-machine interactions. Skills in signal processing (EEG, EMG, etc.), data fusion/fission and multimodality will be a plus.

Creativity, autonomy, team spirit and sense of communication are valuable assets. A good level of technical and scientific English will also be appreciated.

Finally, concerning the question of secularism, the lawyer of the University of Lille recalls that a contractual doctoral student, whether or not in a teaching position, is considered a public agent and therefore cannot display their religious affiliation, in particular by wearing a sign or clothing intended to mark it.

If this thesis subject interests you, please send an application email as soon as possible, with your CV, cover letter, transcripts, and any element allowing us to assess your application.


  1. Corsi, M.-C., M. Chavez, D. Schwartz, L. Hugueville, A. N. Khambhati, D. S. Bassett, and F. De Vico Fallani. 2019. “Integrating EEG and MEG Signals to Improve Motor Imagery Classification in Brain-Computer Interface.” International Journal of Neural Systems 29 (1): 1850014. Epub 2018.

  2. Coutaz, J., L. Nigay, D. Salber, A. Blandford, J. May, and R. M. Young. 1995. “Four Easy Pieces for Assessing the Usability of Multimodal Interaction: The CARE Properties.” In Proceedings of INTERACT.

  3. Duprès, A. 2016. “Interface cerveau-machine hybride pour pallier le handicap causé par la myopathie de Duchenne.” PhD thesis, Université Lille 1 – Sciences et Technologies.

  4. Duprès, A., F. Cabestaing, J. Rouillard, V. Tiffreau, and C. Pradeau. 2019. “Toward a Hybrid Brain-Machine Interface for Palliating Motor Handicap with Duchenne Muscular Dystrophy: A Case Report.” Annals of Physical and Rehabilitation Medicine.

  5. Han, Y., X. Zhang, N. Zhang, S. Meng, T. Liu, S. Wang, M. Pan, X. Zhang, and J. Yi. 2023. “Hybrid Target Selections by ‘Hand Gestures + Facial Expression’ for a Rehabilitation Robot.” Sensors 23 (1): 237.

  6. Petit, J. 2022. “Filtrage somesthésique pour des interfaces cerveau-ordinateur utilisant des stimulations vibro-tactiles.” PhD thesis (in English), Université de Lille.

  7. Petit, J., J. Rouillard, and F. Cabestaing. 2021. “EEG-Based Brain-Computer Interfaces Exploiting Steady-State Somatosensory-Evoked Potentials: A Literature Review.” Journal of Neural Engineering 18 (5): 051003.

  8. Spandana, E., et al. 2021. “Care-Giver Alerting for Bedridden Patients Using Hand Gesture Recognition System.” Journal of Physics: Conference Series 1921: 012077.