Maryam Shanechi, Dean’s Professor of Electrical and Computer Engineering and founding director of the USC Center for Neurotechnology, and her team have developed a new machine learning method that reveals surprisingly consistent intrinsic brain patterns across different subjects by disentangling these patterns from the effect of visual inputs. The work has been published in the Proceedings of the National Academy of Sciences (PNAS).
When performing everyday movement behaviors, such as reaching for a book, our brain has to take in information, often in the form of visual input — for example, seeing where the book is. Our brain then has to process this information internally to coordinate the activity of our muscles and perform the movement. But how do millions of neurons in our brain perform such a task? Answering this question requires studying the neurons’ collective activity patterns while disentangling the effect of external input from the neurons’ intrinsic, or internal, processes, whether those processes are movement-relevant or not.
That’s what Shanechi, her PhD student Parsa Vahidi, and Omid Sani, a research associate in her lab, did by developing a new machine-learning method that models neural activity while considering both movement behavior and sensory input.
“Prior methods for analyzing brain data have either considered neural activity and input but not behavior, or considered neural activity and behavior but not input,” Shanechi said. “We developed a method that can consider all three signals — neural activity, behavior, and input — when extracting hidden brain patterns. This allowed us to not only disentangle input-related and intrinsic neural patterns, but also separate out which intrinsic patterns were related to movement behavior and which were not.”
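The article describes the method only at a high level. For readers who want a concrete picture, the sketch below shows, in Python with made-up dimensions and parameters, the general kind of latent state-space structure being described: hidden brain states driven both by intrinsic dynamics and by a measured input, and read out into both neural activity and behavior. It is a hypothetical illustration under an assumed linear model, not the authors’ published algorithm, which instead learns such a model directly from the recorded neural activity, behavior, and input.

```python
# Hypothetical sketch (not the authors' published algorithm) of a latent
# state-space model in which hidden brain states evolve through intrinsic
# dynamics (A) and measured input (B), and are read out into both neural
# activity (C_y) and movement behavior (C_z).
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions: latent states, inputs, neurons, behavior variables, time steps.
n_x, n_u, n_y, n_z, T = 4, 2, 30, 2, 1000

# Assumed model parameters.
A = 0.95 * np.linalg.qr(rng.standard_normal((n_x, n_x)))[0]   # stable intrinsic dynamics
B = 0.5 * rng.standard_normal((n_x, n_u))                     # how sensory input drives the states
C_y = rng.standard_normal((n_y, n_x))                         # latent states -> neural activity
C_z = np.zeros((n_z, n_x))
C_z[:, :2] = rng.standard_normal((n_z, 2))                    # only the first two latent dimensions
                                                              # are read out into behavior, i.e. are
                                                              # "behavior-relevant"

u = rng.standard_normal((T, n_u))        # measured input (e.g., visual target position)
x = np.zeros((T, n_x))                   # hidden brain states
for t in range(1, T):
    x[t] = A @ x[t - 1] + B @ u[t - 1] + 0.1 * rng.standard_normal(n_x)

y = x @ C_y.T + 0.2 * rng.standard_normal((T, n_y))   # recorded neural activity
z = x @ C_z.T + 0.2 * rng.standard_normal((T, n_z))   # recorded movement behavior

# A method in the spirit described in the article would fit (A, B, C_y, C_z)
# from the recorded (y, z, u) alone, separating input-driven from intrinsic
# state components and, among the intrinsic ones, the behavior-relevant from
# the behavior-irrelevant dimensions.
print(y.shape, z.shape, u.shape)
```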
Shanechi and her team used this method to study three publicly available datasets in which three different subjects performed one of two distinct movement tasks: moving a cursor on a computer screen over a grid, or moving it sequentially to random locations.
“When using methods that did not consider all three signals, the patterns found in the neural activity of these three subjects looked different,” Vahidi said. But when the team used the new method to consider all three signals, a remarkably consistent hidden pattern relevant to movement emerged from the neural activity of all three subjects. This similarity held even though the tasks performed by the three subjects were different.
“In addition to revealing this new consistent pattern, the method also improved the prediction of neural activity and behavior compared to when all three signals were not considered during machine learning, as in prior work,” Sani said. “The new method enables researchers to more accurately model neural and behavioral data by accounting for various measured inputs to the brain, such as sensory inputs as in this work, electrical or optogenetic stimulation, or even input from different brain areas.”
This method and the discovered pattern can help researchers understand how our brains perform movements guided by the information we receive from the external world. Further, by modeling the effect of input and separating out the intrinsic patterns that are behavior-relevant, the method can help in developing future brain-computer interfaces that regulate abnormal brain patterns in disorders such as major depression by optimizing external inputs, such as deep brain stimulation therapy.
“We are excited about how this algorithm could facilitate both scientific discoveries and the development of future neurotechnologies for millions of patients with neurological or neuropsychiatric disorders,” Shanechi said.