Title: BrainBraille: Towards a Non-invasive, High-throughput, Endogenous Brain-Computer Interface
Date: September 11th, 2024
Time: 3:00 PM - 5:00 PM EDT
Location: Technology Square Research Building (85 5th St NW), Room 217a
Virtual meeting: https://gatech.zoom.us/j/5586021238?pwd=aHBiODBjOTEvM0IxR3hvV3VYc3Rqdz09
Yuhui Zhao
School of Interactive Computing
College of Computing
Georgia Institute of Technology
Committee
Dr. Thad Starner (co-advisor) - School of Interactive Computing, Georgia Institute of Technology
Dr. Melody Jackson (co-advisor) - School of Interactive Computing, Georgia Institute of Technology
Dr. Thomas Ploetz - School of Interactive Computing, Georgia Institute of Technology
Dr. Alexander T Adams - School of Interactive Computing, Georgia Institute of Technology
Dr. Vince D Calhoun - Center for Translational Research in Neuroimaging and Data Science, Georgia State University / Georgia Institute of Technology / Emory University
Abstract
Brain-computer interfaces (BCIs) detect and decode neurophysiological signals originating in the brain to communicate directly with external devices, bypassing the peripheral nervous system. They are core technologies for augmentative and alternative communication devices for people with severe motor disabilities, e.g., people with locked-in syndrome (LIS). Non-invasive BCI systems suitable for people in the locked-in state have traditionally been very slow for spelling, achieving only a few characters per minute or less. People with LIS have limited energy, so minimizing the effort of communication is a priority for improving their quality of life.
I sought to create the fastest non-invasive, endogenous BCI speller to date, with the goal of allowing people with LIS to communicate more efficiently. My dissertation work focused on developing such a system, BrainBraille, which enables users to chord-type Braille-like characters by tensing and relaxing combinations of six body parts. As the user attempts the movements, functional magnetic resonance imaging detects the corresponding brain activity in the motor cortex, which is then decoded into characters. Compared to existing systems, BrainBraille demonstrates two novel design paradigms. First, the binary encoding of simple motor tasks (chord-typing) vastly expands the size of the selection space per input---from generally ≥ 4 to 27. Second, BrainBraille's data processing pipeline explicitly models the staggered brain activity patterns of short adjacent tasks, shortening the input time per selection from typically ≥ 15 s to 1.5 s. Additionally, simple language models are used to further improve decoding results based on the application scenario. An initial evaluation on two neurotypical participants using subject-dependent models in an offline setting shows that BrainBraille achieves high accuracy (>90%) while typing at 40 characters per minute. The resulting information transfer rate represents more than an order of magnitude improvement over that of existing comparable systems. My proposed work below addresses the limitations of the current system.
- Investigate whether the current BrainBraille system can be adapted to decode in real time.
- Evaluate to what extent the BrainBraille approach generalizes to more participants.
- Explore to what extent a subject-independent BrainBraille decoding model requiring minimal calibration data can be developed.
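For context on the throughput claims in the abstract, the sketch below computes the selection-space size of six-element binary chords and the standard Wolpaw information transfer rate (bits per selection times selection rate). The specific numbers (selection set of 27, 90% accuracy, 40 selections per minute) are taken from the abstract; the resulting bits-per-minute figure is an illustrative back-of-the-envelope estimate, not a reported result.

```python
import math

def wolpaw_itr_bits_per_min(n_choices: int, accuracy: float,
                            selections_per_min: float) -> float:
    """Wolpaw ITR: bits per selection multiplied by the selection rate."""
    p = accuracy
    bits = math.log2(n_choices)
    if 0 < p < 1:
        # Penalize imperfect accuracy, with errors spread uniformly
        # over the remaining n_choices - 1 alternatives.
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n_choices - 1))
    return bits * selections_per_min

# Six body parts, each tensed or relaxed, yield 2**6 = 64 raw binary chords;
# of these, 27 are used as the selection set per the abstract.
n_chords = 2 ** 6
n_used = 27
assert n_used <= n_chords

# Abstract's reported operating point: >90% accuracy at 40 characters/min.
itr = wolpaw_itr_bits_per_min(n_used, 0.90, 40)
print(f"{itr:.1f} bits/min")  # roughly 150 bits/min under these assumptions
```

A few bits per minute is typical of traditional non-invasive spellers, so an estimate in this range is consistent with the order-of-magnitude improvement described above.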