Polygence Scholar 2022

Avi Lekkelapudi

Bellarmine College Prep, Class of 2025, Sunnyvale, California

About

Projects

  • "What are inner speech's viability and relevant characteristics for Brain-computer interface use?" with mentor Andrew (Sept. 16, 2022)

Project Portfolio

What are inner speech's viability and relevant characteristics for Brain-computer interface use?

Started Mar. 19, 2022

Abstract or project description

This work centers on two paradigms with potential for use in brain-computer interfaces (BCIs): inner speech and imagined speech. Inner speech refers to the semantic meaning of words and speech, often likened to an "inner voice." Imagined speech focuses on the articulation of speech and its literal sounds.

All EEG data used in this work was obtained from researchers at CONICET Santa Fe, an Argentinian research institute. The researchers collected data from 10 healthy participants, all of whom were native Spanish speakers. Trials were performed under one of three conditions: pronounced speech, inner speech, and visualized. In pronounced speech trials, the participant verbally stated "arriba," "abajo," "derecha," or "izquierda" ("up," "down," "right," or "left" in Spanish) based on the orientation of a stimulus; this condition tests imagined speech, measuring both neural and motor activity. In inner speech trials, participants imagined themselves stating the answer to the computer in their own voice, with more focus on semantic meaning. In visualized trials, participants mentally moved a circle in the direction the stimulus was oriented.

A machine learning classification model was then trained separately on each condition's EEG dataset. Finally, once high accuracy was achieved for each model, the weights were examined to determine the viability of different characteristics of inner and imagined speech for BCI use. Attributes of inner speech that the algorithm will likely weight heavily include frequency and the location of activity. Frequency may also be a good predictor of which paradigm is being studied in a given trial, inner speech or imagined speech. The viability of EEG data for inner speech could lead to advancements in BCI technology, particularly devices used for producing language in paralyzed patients. Currently widespread BCI paradigms such as P300 may be too slow or require greater effort from patients; inner speech could provide a more natural way to control BCIs.
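To make the per-condition classification step concrete, below is a minimal sketch (not the project's actual pipeline) of one common way to approach it: extract band-power features from EEG epochs, train a linear classifier for each condition, and then inspect the learned weights, mirroring the weight-examination step described above. The sampling rate, frequency bands, array shapes, and file names are illustrative assumptions, not details from the project.

```python
# Sketch only: per-condition EEG classification with band-power features
# and a linear model whose coefficients are inspected afterwards.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

FS = 256  # assumed sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_power_features(epochs):
    """epochs: (n_trials, n_channels, n_samples) -> (n_trials, n_channels * n_bands)."""
    freqs, psd = welch(epochs, fs=FS, nperseg=FS, axis=-1)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[..., mask].mean(axis=-1))  # mean power per channel and band
    return np.concatenate(feats, axis=-1)

# Hypothetical per-condition arrays: X is (trials, channels, samples),
# y holds the four direction labels (up / down / left / right).
for condition in ["pronounced", "inner", "visualized"]:
    X = np.load(f"{condition}_epochs.npy")   # placeholder file names
    y = np.load(f"{condition}_labels.npy")

    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    scores = cross_val_score(clf, band_power_features(X), y, cv=5)
    print(f"{condition}: mean CV accuracy = {scores.mean():.2f}")

    # Fit on all trials and see which channel/band features carry the most weight.
    clf.fit(band_power_features(X), y)
    coef = np.abs(clf.named_steps["logisticregression"].coef_).mean(axis=0)
    top = np.argsort(coef)[::-1][:10]
    print(f"{condition}: top feature indices = {top}")
```

In a sketch like this, large coefficients on particular frequency bands or channel locations would correspond to the frequency and location-of-activity attributes the abstract expects the model to weight heavily.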