I am currently pursuing my doctoral studies in the Neuroscience of Language Lab (NeLLab) at New York University, where I mainly work with PIs Alec Marantz and Liina Pylkkänen. I conduct research on different aspects of the neural basis of language using magnetoencephalography (MEG), a passive, non-invasive technique that records the natural magnetic signals generated by the brain and measurable around the head. We choose this technique (i) because language unfolds at fast rates, and MEG (a close relative of EEG) acquires signals at high temporal resolution, and (ii) because MEG allows us to estimate the original cortical sources of the measured signals, so that we can make an educated guess about where in the brain different processes take place.
The thread linking the bulk of my doctoral work tackles the fundamentals of the neural basis of syntactic processing during comprehension. I like to think of syntactic information as a collection of invisible strings that link different parts of a language token together in non-linear ways, helping the brain make structural sense of what it hears or sees. For example, the syntactic information in the sentence ‘The waiter with the clean apron is missing.’ helps the brain link each adjective to its corresponding noun; crucially, the adjective ‘missing’ is associated with ‘waiter’, even though it is linearly closer to ‘apron’.
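The non-linear attachment in that example can be pictured as a small dependency graph. The toy Python sketch below (purely illustrative, not part of any actual analysis pipeline; the head assignments are a hypothetical simplified parse) encodes each word's syntactic head and shows that ‘missing’ links to ‘waiter’ even though ‘apron’ is linearly much closer.

```python
# Toy dependency graph for: "The waiter with the clean apron is missing."
# Hypothetical, simplified head assignments (word -> its syntactic head);
# a full linguistic parse would be more fine-grained.
tokens = ["The", "waiter", "with", "the", "clean", "apron", "is", "missing"]

heads = {
    "The":    "waiter",   # determiner -> its noun
    "waiter": "missing",  # subject noun -> predicate adjective
    "with":   "waiter",   # prepositional phrase attaches to 'waiter'
    "the":    "apron",
    "clean":  "apron",    # attributive adjective -> its noun
    "apron":  "with",
    "is":     "missing",  # copula -> predicate
}

def linear_distance(w1: str, w2: str) -> int:
    """Distance between two words in the linear word string."""
    return abs(tokens.index(w1) - tokens.index(w2))

# 'missing' is syntactically tied to 'waiter' (6 positions away),
# not to the linearly closer 'apron' (2 positions away).
print(heads["waiter"])                       # -> missing
print(linear_distance("missing", "waiter"))  # -> 6
print(linear_distance("missing", "apron"))   # -> 2
```

The point of the sketch is simply that syntactic distance and linear distance come apart, which is exactly what makes syntax hard to isolate experimentally.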
A major challenge in addressing questions about syntax is that it is very tricky to dissociate from other variables in an experimental paradigm: syntactic information is often tightly intertwined with other types of linguistic information, such as semantics. In my work, I focus on developing experimental designs that cleanly dissociate syntactic from non-syntactic processes, and on using MEG to elucidate the neural basis of these syntactic processes at the sentence, phrase, and word levels. To that end, I often rely on the grammatical properties of different languages (such as Arabic).
Beyond syntactic information, comprehending language involves building a discourse model of what is being communicated, and updating this model continuously. How does the brain manage that? My colleague Maxime Tulling and I are currently pursuing this question using MEG data recorded while adults and children listened to natural expository texts, that is, the kind of texts you might hear in a classroom setting. Do we find evidence in our data for discourse building and updating? Are the left and right hemispheres equally involved? Do children and adults process discourse differently? Stay tuned for more!
Another area I am interested in is the neural basis of temporal displacement in language: How does the brain handle tense? How does it extract and process information about the past, the present, or the future from language input? I am currently working on developing experimental designs that tap into these questions.
Previously, I completed both my Master’s and Bachelor’s degrees in the Department of Biomedical Engineering at the Technion. During my Master’s, I worked in Shy Shoham’s Neural Interface Engineering Lab, where we developed novel optical methods to stimulate neurons. I used an optical device called a Spatial Light Modulator (SLM), which can sculpt incoming laser light into dynamic 3D holographic shapes. By using the SLM to target microscopic heat absorbers dispersed in the vicinity of neurons in rat brain slices, I showed that we can control the firing of neurons in the network in both time and space. This work was part of a broader effort to develop novel optical interfaces that control brain activity and could eventually restore vision in patients with, for instance, age-related macular degeneration.