Ann, a research participant in a study of speech neuroprostheses led by UCSF's Edward Chang, is connected to computers that translate her brain signals into the speech and facial movements of an avatar. At left is UCSF clinical research coordinator Max Dougherty.

Put into words

November 6, 2023 | By Marni Ellery | Photo by Noah Berger
This article appeared in Berkeley Engineer magazine, Fall 2023

Speech neuroprostheses may offer a way for people who are unable to speak due to paralysis or disease to communicate, but fast, high-performance decoding has not yet been demonstrated. Now, transformative work by researchers at UCSF and Berkeley Engineering shows that more natural speech decoding is possible using the latest advances in artificial intelligence.

Led by UCSF neurosurgeon Edward Chang, the researchers developed an implantable AI-powered device that, for the first time, translates brain signals into synthesized speech and facial expressions. As a result, a woman who lost the ability to speak due to a stroke was able to speak in her own voice and convey emotion using a digital avatar.

Video: https://www.youtube.com/watch?v=iTZ2N-HJbwA

Berkeley Engineering graduate students Kaylo Littlejohn, Sean Metzger and Alex Silva were co-lead authors of the study, and Gopala Anumanchipalli, assistant professor of electrical engineering and computer sciences, was a co-author.

“Because people with paralysis can’t speak, we don’t have what they’re trying to say as a ground truth to map to. So we incorporated a machine-learning optimization technique called CTC loss, which allowed us to map brain signals to discrete units, without the need for ‘ground truth’ audio,” said Littlejohn.
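To make the idea concrete, here is a minimal sketch of what training with CTC (connectionist temporal classification) loss looks like: a decoder emits per-frame probabilities over discrete speech units plus a blank symbol, and CTC marginalizes over all alignments between the neural-signal frames and the shorter target unit sequence, so no frame-level ground truth is needed. The model, feature dimension, unit vocabulary and simulated data below are illustrative assumptions, not the study's actual architecture.

```python
# Hypothetical sketch: aligning neural feature sequences to discrete speech
# units with CTC loss. All shapes and the model itself are stand-ins.
import torch
import torch.nn as nn

NUM_UNITS = 100   # assumed size of the discrete speech-unit vocabulary
BLANK = NUM_UNITS # CTC blank symbol gets its own class index

class NeuralDecoder(nn.Module):
    """Maps a neural-signal sequence to per-frame unit log-probabilities."""
    def __init__(self, in_dim=253, hidden=256):  # in_dim: assumed electrode-feature count
        super().__init__()
        self.rnn = nn.GRU(in_dim, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, NUM_UNITS + 1)  # +1 for blank

    def forward(self, x):                  # x: (batch, time, in_dim)
        h, _ = self.rnn(x)
        return self.proj(h).log_softmax(-1)

decoder = NeuralDecoder()
ctc = nn.CTCLoss(blank=BLANK, zero_infinity=True)

neural = torch.randn(8, 400, 253)              # simulated neural features
units = torch.randint(0, NUM_UNITS, (8, 50))   # target unit sequences (no alignment given)
in_lens = torch.full((8,), 400)
tgt_lens = torch.full((8,), 50)

log_probs = decoder(neural).transpose(0, 1)    # CTCLoss expects (time, batch, classes)
loss = ctc(log_probs, units, in_lens, tgt_lens)
loss.backward()                                # gradients flow without frame labels
```

The key property, matching the quote, is that the targets are just an ordered sequence of discrete units: CTC supplies the alignment, so no "ground truth" audio synchronized to the brain signals is required.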

“We also were able to personalize the participant’s voice by using a video recording of her making a speech at her wedding from about 20 years ago. We kind of fine-tuned the discrete codes to her voice,” said Anumanchipalli. “Once we had this paired alignment that we had simulated, we used the sequence alignment method, the CTC loss.”
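As a rough illustration of that personalization step, the sketch below fine-tunes a pretrained discrete-unit-to-speech synthesizer on (unit, audio-feature) pairs extracted from a reference recording. The toy model, the L1 mel-spectrogram loss, and the stand-in data are all assumptions for illustration, not the study's actual pipeline.

```python
# Hypothetical sketch of voice personalization: briefly fine-tune a
# unit-to-speech synthesizer toward one speaker's reference recording.
import torch
import torch.nn as nn

NUM_UNITS = 100  # same assumed discrete-unit vocabulary as above

class UnitToSpeech(nn.Module):
    """Toy stand-in for a pretrained discrete-unit-to-mel synthesizer."""
    def __init__(self, n_mels=80, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(NUM_UNITS, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_mels)

    def forward(self, units):              # units: (batch, time)
        h, _ = self.rnn(self.embed(units))
        return self.out(h)                 # predicted mel frames: (batch, time, n_mels)

synth = UnitToSpeech()                     # in practice: load pretrained weights here
opt = torch.optim.AdamW(synth.parameters(), lr=1e-5)

# Stand-ins for unit/mel pairs extracted from the reference recording.
units = torch.randint(0, NUM_UNITS, (4, 200))
target_mels = torch.randn(4, 200, 80)

for step in range(100):                    # short fine-tuning pass
    pred = synth(units)
    loss = nn.functional.l1_loss(pred, target_mels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

A low learning rate and a short schedule are typical for this kind of speaker adaptation, since the goal is to nudge the synthesizer's output toward one voice without degrading its general speech quality.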

Learn more:
  • Novel brain implant helps paralyzed woman speak using a digital avatar
  • How artificial intelligence gave a paralyzed woman her voice back (UCSF)
  • A high-performance neuroprosthesis for speech decoding and avatar control (Nature)

Topics: AI & robotics, Devices & inventions, Electrical engineering, Health