VOCL began with Ye-Ye.
Over the summer, we visited James's house and met his grandfather—we affectionately called him Ye-Ye. He was 82 and battling ALS. Every visit that summer, we'd tell him about our day. He'd listen, silently tracking us with his eyes.
He faced the same problem as millions of others: his brain worked, but his muscles didn't.
When Ye-Ye passed away late that summer, we were devastated. That fall, we read about Stanford researchers decoding inner speech with 74% accuracy—but it required invasive implants costing tens of thousands of dollars.
We realized: if they could decode thoughts invasively, why couldn't we decode attempted speech non-invasively?
That question became VOCL. Our goal isn't just to restore speech—it's to restore people's fundamental right to be heard.
What started as a personal project has grown into a serious research effort. We've built 7 prototypes, conducted 35 testing sessions, and achieved 82% phoneme classification accuracy—all while receiving guidance from leading neuroscientists at the University of Chicago, UC Davis, and Northwestern.
Our Mission
VOCL is more than a device. It's a grandfather saying "I love you" to his grandson again. A teacher returning to the classroom. A parent reading bedtime stories. It's millions of people reclaiming their voices.