Research
I am a postdoctoral fellow in the Neurobiology of Language group at the Basque Center on Cognition, Brain and Language (BCBL), where I am advised by Manuel Carreiras. I combine insights from magnetoencephalography (MEG), functional magnetic resonance imaging (fMRI), and behavioral tasks to better understand the neural basis of language comprehension. Specifically, I am interested in the neural correlates of extracting structured knowledge from speech. This is especially relevant in educational settings, where speech often aims to transmit knowledge about a specific (often new) subject. Using computational methods grounded in discourse linguistics, I model the structure of knowledge as a continuously growing graph of interconnected nodes. I am also working on a project that aims to develop biomarkers for post-surgical recovery and reorganization of cognitive (especially language) functions in glioma patients.
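To give a flavor of this graph-based view, here is a toy sketch in Python (a minimal illustration, not the actual analysis pipeline; the concept mentions below are made up for the example). Each sentence of a lecture adds concept nodes and links between concepts mentioned together, so the graph grows as the discourse unfolds:

```python
import networkx as nx
from itertools import combinations

# Hypothetical concept mentions extracted from successive sentences of a lecture.
sentences = [
    {"neuron", "axon"},
    {"axon", "myelin"},
    {"myelin", "conduction speed"},
]

G = nx.Graph()
for i, concepts in enumerate(sentences, start=1):
    # Link every pair of concepts co-mentioned in the same sentence;
    # nodes are created automatically as edges are added.
    G.add_edges_from(combinations(sorted(concepts), 2))
    print(f"after sentence {i}: {G.number_of_nodes()} nodes, {G.number_of_edges()} edges")
```

Properties of the growing graph (its size, density, or how much a new sentence reshapes it) can then be related to brain activity over the course of listening.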
Previously, I completed my PhD in the Neuroscience of Language Lab (NeLLab) at New York University (NYU), where I mainly worked with PIs Alec Marantz and Liina Pylkkänen. There, I conducted research on different aspects of the neural basis of language using magnetoencephalography (MEG)—a passive, non-invasive technique that measures the natural magnetic signals surrounding the head, which originate in the brain. We chose this technique (i) because linguistic input in general unfolds at fast rates, and MEG (a homologue of EEG) acquires signals at a high temporal resolution, and (ii) because MEG allows us to estimate the original cortical signals from the measured data, such that we can make an educated guess about where in the brain different processes take place.
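For a concrete sense of those two properties, here is a minimal sketch using MNE-Python and its built-in sample dataset (standard tutorial files, not data from my own studies): it prints the millisecond-scale sampling rate and projects sensor-level evoked responses back onto the cortical surface.

```python
import mne
from mne.minimum_norm import read_inverse_operator, apply_inverse

# MNE-Python's built-in sample dataset (downloads on first use).
data_path = mne.datasets.sample.data_path()
meg_dir = data_path / "MEG" / "sample"

# (i) Temporal resolution: the raw recording is sampled hundreds of times per second.
raw = mne.io.read_raw_fif(meg_dir / "sample_audvis_raw.fif")
print(f"sampling rate: {raw.info['sfreq']:.0f} Hz")

# (ii) Source estimation: project sensor-level evoked responses onto the cortex
# using a precomputed minimum-norm inverse operator.
evoked = mne.read_evokeds(meg_dir / "sample_audvis-ave.fif", condition="Left Auditory")
inv = read_inverse_operator(meg_dir / "sample_audvis-meg-oct-6-meg-inv.fif")
stc = apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method="dSPM")
print(stc)  # a SourceEstimate: activity per cortical location, per time sample
```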
The thread linking the bulk of my work tackles the neural basis of language comprehension. Specifically, I am interested in two kinds of processes the brain uses in comprehension to access latent information — information that is not directly available in the language we perceive:
(i) inferential processes, whereby the brain gleans information that is simply never present in a language token. One example is syntactic structure. I like to think of syntactic structure as a collection of invisible strings that link different words in a sentence together, and which help the brain make structural sense of what it hears or sees. For example, when you read the sentence ‘The cat chasing the mouse is chubby,’ you know that ‘chubby’ here is describing ‘the cat’, even though the link between the two is latent (not visible), and even though ‘the mouse is chubby’ is literally a contiguous sequence within the full sentence! Because syntax is often a bunch of invisible strings, the brain must infer those strings and how they interact in order to fully capture the meaning of a sentence.
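These invisible strings can be made explicit with an off-the-shelf dependency parser. A minimal sketch using spaCy (assuming its small English model is installed; exact relation labels vary by parser and model version):

```python
import spacy

# Requires: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")
doc = nlp("The cat chasing the mouse is chubby.")

# Print each word's "invisible string": its syntactic head and relation label.
for token in doc:
    print(f"{token.text:>8} --{token.dep_}--> {token.head.text}")
```

In the resulting parse, ‘chubby’ attaches to the verb ‘is’, whose subject is ‘cat’; no string ties ‘chubby’ to ‘mouse’, even though ‘the mouse is chubby’ is a contiguous string on the surface.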
(ii) predictive processes, whereby the brain predicts upcoming information before it can be perceived. For example, as you read the sentence ‘The chubby cat chased a gray —’, the brain can generate a prediction that the next word is probably ‘mouse’. But the brain does not predict only upcoming words. There is evidence that it can predict upcoming individual speech sounds, and even upcoming abstract linguistic information. For example, in a recent paper, we found that the brain’s activity is sensitive to the degree to which the next word’s category or class (say, noun or verb) is predictable.
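Word predictability of this kind can be quantified with a language model. Here is a minimal sketch using Hugging Face’s GPT-2 (illustrative only: this is not the specific model or measure from the paper above, and it assumes each candidate word is a single token in GPT-2’s vocabulary):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

context = "The chubby cat chased a gray"
inputs = tokenizer(context, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Probability distribution over the vocabulary for the word following the context.
next_probs = torch.softmax(logits[0, -1], dim=-1)
for word in [" mouse", " dog", " idea"]:  # leading space matters for GPT-2's BPE
    token_id = tokenizer.encode(word)[0]  # assumes each candidate is one token
    print(f"P({word.strip()!r} | context) = {next_probs[token_id]:.4f}")
```

The negative log of such a probability (the word’s surprisal) is a common per-word predictor in neural and behavioral studies of prediction.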
A major problem with addressing questions about latent information processing is that latent information is very tricky to dissociate from other variables in an experiment. Because this information is hidden under the surface, we can only manipulate it by manipulating the surface itself; but this means we are introducing confounds — we can no longer tell with certainty whether our findings are really tapping into inferential/predictive processes, or are simply a result of having changed the actual perceptible part of language. In my work, I focus on developing experimental designs that cleanly dissociate between latent and explicit information, across different levels of language representation — from single sounds to entire texts. For that, I often rely on the grammatical properties of different languages (such as Arabic).
Ultimately, comprehending language involves building a model of what is being communicated, and updating this model continuously. This is especially true of expository language — that is, the language we typically encounter in educational settings, where the goal is to transmit knowledge about a specific (often new) topic. My colleague Maxime Tulling and I are currently investigating the neural basis of how the brain builds and updates these models, using MEG data recorded while adults and children listened to natural expository speech. Do we find evidence in our data for model building and updating? Are the left and right hemispheres equally involved? Do children and adults process discourse differently? Stay tuned for more!
One other area I am interested in is the neural basis of temporal displacement in language: How does the brain handle tense? How does it extract and process information about the past, the present, or the future from language input? I am currently working on developing experimental designs that tap into these questions.
Previously, I completed both my Master’s and Bachelor’s degrees in the Department of Biomedical Engineering at the Technion. During my Master’s, I worked in Shy Shoham’s Neural Interface Engineering Lab, where we developed novel optical methods to stimulate neurons. I used an optical device called a Spatial Light Modulator (SLM), which can sculpt incoming laser light into dynamic 3D holographic shapes. By using the SLM to target microscopic heat absorbers dispersed in the vicinity of neurons in rat brain slices, I showed that we can control the firing of neurons in a network in both time and space. This work was part of a broader effort to develop novel optical interfaces that control brain activity and could eventually restore vision in patients with, for instance, age-related macular degeneration.