Next Webinar Registration

José Andrés González-López (Universidad de Granada)

Title: From Neural Signals to Fluent Speech: Recent Advances in Neural Speech Interfaces

Summary: Neural speech interfaces aim to restore natural communication in individuals who have lost the ability to speak while preserving cognitive function. Over the past decade, this field has undergone a remarkable transformation, moving from slow and cognitively demanding spelling-based brain–computer interfaces (BCIs) to systems capable of decoding continuous speech directly from neural activity. These advances have been driven by the convergence of high-resolution invasive neural recording technologies, improved experimental paradigms for speech production and perception, and powerful deep learning models inspired by modern automatic speech recognition systems. In this talk, I will review the state of the art in neural speech prostheses, with a particular focus on next-generation BCIs that translate cortical activity into text or synthetic speech. I will discuss key design choices, including neural recording techniques (such as ECoG, sEEG, and intracortical microelectrodes), target brain areas, decoding architectures, and evaluation metrics. I will also highlight recent clinical results demonstrating unprecedented levels of accuracy, fluency, and long-term stability in continuous speech decoding. Finally, I will outline current challenges and future directions, including scalability across users, real-time bidirectional feedback, and the path towards clinical and real-world deployment, illustrated with ongoing work from our research group.

Bio: Jose A. Gonzalez-Lopez is an Associate Professor at the University of Granada whose research sits at the frontier of artificial intelligence, computational neuroscience, and neural speech prostheses. His work addresses the core challenge of how to translate high-dimensional neural activity into fluent, natural speech, bridging invasive neural recordings with modern deep learning and speech–language models. He leads multiple competitive R&D projects on AI-driven speech restoration for individuals with severe neurological and phonatory impairments, with a strong emphasis on long-term robustness, scalability across users, and real-world clinical deployment. He has published over 100 papers in leading international journals and conferences. His contributions have been recognized with several awards for scientific excellence and technological innovation, and his research is embedded in a strong international collaboration network built through extended research visits to institutions such as the University of Sheffield, the University of Bremen, and Maastricht University.

If you would like to attend the next webinar, please fill out the form below.
