Research

Comprehending language is like receiving spools of thread and seamlessly weaving them into an intricately structured tapestry. Much like these spools, a stream of language symbols in itself contains little of the eventual structure; it is up to the human brain to weave into the perceived stream the structure needed to complete the tapestry. For example, reading ‘The cat chasing the mouse is chubby.’ involves overriding the linear sequence ‘the mouse is chubby’ to extract a structural link between chubby and cat, despite the absence of any visible link between the two. Yet despite recent progress in understanding how brains process and encode meaning, the neural and cognitive mechanisms that weave structure into perceived language remain elusive. Until we find these missing pieces, our understanding of how the thread so rapidly becomes the tapestry will remain fundamentally incomplete.

I am an incoming Assistant Professor of Neurolinguistics at the University of Cambridge’s Department of Theoretical and Applied Linguistics. I combine insights from electroencephalography (EEG), magnetoencephalography (MEG), functional magnetic resonance imaging (fMRI), and behavioral tasks to better understand the neural basis of language comprehension. Specifically, I am interested in the neural correlates of extracting information structures from speech. This is especially relevant in educational settings, where speech often aims to transmit knowledge about a specific (often new) subject. Using computational methods grounded in discourse linguistics, I model the structure of knowledge as a continuously growing graph of inter-connected nodes. I am also working on a project that aims to develop biomarkers for post-surgical recovery and reorganization of cognitive (especially language) functions in glioma patients.
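To make the graph idea concrete, here is a minimal sketch in Python (illustrative only; the KnowledgeGraph class and its methods are hypothetical names, not my actual framework):

```python
from collections import defaultdict

class KnowledgeGraph:
    """Toy model: knowledge as a continuously growing graph of
    inter-connected nodes. Hypothetical sketch, not the actual framework."""

    def __init__(self):
        self.nodes = set()
        self.edges = defaultdict(list)  # node -> [(relation, node), ...]

    def add_fact(self, subject, relation, obj):
        """Weave a new piece of information into the graph."""
        self.nodes.update([subject, obj])
        self.edges[subject].append((relation, obj))

    def neighbors(self, node):
        return self.edges.get(node, [])

# The graph grows as a discourse unfolds, one utterance at a time:
kg = KnowledgeGraph()
kg.add_fact("polar bear", "has", "fur")
kg.add_fact("fur", "is", "translucent")
print(kg.neighbors("polar bear"))  # [('has', 'fur')]
```

Each new utterance adds nodes and edges, so the listener’s knowledge graph grows incrementally as comprehension unfolds.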

Currently, I am finishing a postdoctoral fellowship in the Neurobiology of Language group at the Basque Center on Cognition, Brain, and Language (BCBL), where I am advised by Manuel Carreiras. Previously, I completed my PhD in the Neuroscience of Language Lab (NeLLab) at New York University (NYU), where I mainly worked with PIs Alec Marantz and Liina Pylkkänen. During my PhD, I conducted research on different aspects of the neural basis of language using magnetoencephalography (MEG), a passive, non-invasive technique for recording the natural magnetic signals that the brain generates around the head. We chose this technique (i) because language tokens generally arrive at fast rates, and MEG (a magnetic homologue of EEG) acquires signals at high temporal resolution, and (ii) because MEG allows us to estimate the original cortical sources from the measured signals, so we can make an educated guess about where in the brain different processes take place.

The common thread linking the bulk of my work is the neural basis of language comprehension. Specifically, I am interested in how the brain processes structural information during comprehension, at several levels of language representation:

(i) Syntactic processes—hierarchical structures that govern how words combine to create sentences, linking cat to chubby in the example ‘The cat chasing the mouse is chubby.’ Because syntactic structures are often imperceptible, research on syntax typically relies on artificial stimuli, such as pseudowords or word lists, which introduce confounds. Consequently, despite extensive work, the hypothesis space regarding the neural and cognitive bases of syntax remains vast, and syntactic processing remains a central and elusive conundrum in the cognitive neuroscience of language. Thus, I set out to elucidate the neural correlates of syntactic processing using fully grammatical experimental designs.
In my work, I leveraged Arabic’s grammatical properties to build simple designs that effectively isolate syntactic processes. In one experiment, participants read grammatical phrases that decoupled syntax from perceptual, lexical, and conceptual factors (Matar et al., 2021, Scientific Reports). In another, a minimal manipulation varied the predictability of upcoming syntactic information without varying word-level predictions (Matar et al., 2019, Neuropsychologia). This work delineated the timing dynamics of elusive syntactic processes, implicating the left posterior temporal cortex in both bottom-up and top-down syntactic structure building. Additionally, my findings highlight the predictive nature of syntactic structure processing, showing how abstract syntactic predictions generated in frontal cortical regions may be relayed to early visual cortex to pre-empt processing. My syntax work challenges several accounts of language processing, including models positing Broca’s area as the sole syntactic hub and models conflating syntactic and lexicosemantic processing.

(ii) Another goal of my program is to understand how we process word-internal structures during comprehension. Currently, prominent neural, cognitive, and computational models of speech comprehension account for how speech is segmented into words. But words often have internal structure, as in ‘un-[bear-able]’. Moreover, what constitutes a word varies considerably across languages, making purely word-level processing computationally insufficient. For instance, the Arabic equivalent of the sentence ‘She believed him.’ is the single word ‘ṣaddaqathu’, with a complex word-internal structure. In my recent work (Matar & Marantz, accepted, Journal of Neuroscience), I used Arabic’s grammatical properties to show neural responses to word-internal structure in speech comprehension, above and beyond word-, syllable-, and sound-level processing. Moreover, I demonstrated that word-internal structures are processed predictively in the bilateral superior temporal cortex. These findings challenge leading models of speech comprehension that assume words are atomic units of meaning, and offer a detailed computational and neural account of processing word-internal structures. They also open the door for future work on the interplay between inter-word and intra-word processes.

(iii) Often, the main goal in language comprehension is to process and link information across an entire article or story—i.e., an entire discourse. Consider this short discourse: ‘A polar bear’s fur is translucent. This allows it to change colors depending on its environment.’ Leading models propose that comprehension involves establishing discourse referents: elementary units (e.g., ‘polar bear’) that can be referred to (‘This allows it’, i.e., the polar bear). But whether and how referents are woven into coherent representations is unclear. I have developed a computational framework that models comprehension as a dynamic process that links referents into hierarchical discourse structures.
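To give a feel for what such a framework computes, here is a minimal sketch (hypothetical code; the attach rule below is deliberately simplistic and merely stands in for the actual linking mechanism):

```python
from dataclasses import dataclass, field

@dataclass
class DiscourseUnit:
    """One elementary discourse unit (roughly, a clause)."""
    text: str
    referents: set  # referents this unit introduces or mentions
    children: list = field(default_factory=list)

def flatten(node):
    """List every unit in the tree (preorder)."""
    units = [node]
    for child in node.children:
        units.extend(flatten(child))
    return units

def attach(root, unit):
    """Toy linking rule: attach the new unit under the latest unit that
    shares a referent with it; otherwise, attach it at the root."""
    for node in reversed(flatten(root)):
        if node.referents & unit.referents:
            node.children.append(unit)
            return
    root.children.append(unit)

root = DiscourseUnit("<discourse>", set())
attach(root, DiscourseUnit("A polar bear's fur is translucent.",
                           {"polar bear", "fur"}))
attach(root, DiscourseUnit("This allows it to change colors.",
                           {"polar bear", "fur"}))  # 'This'/'it' resolve back
```

Here the second sentence attaches under the first because its anaphors (‘This’, ‘it’) resolve to referents the first sentence introduced, yielding a small hierarchy rather than a flat list of sentences.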

A major challenge in addressing questions about structural information is that it is very tricky to dissociate from other variables in an experiment. Because this information is hidden beneath the surface, we can only manipulate it by manipulating the surface itself; but this introduces confounds: we can no longer tell with certainty whether our findings really tap into inferential/predictive processes, or simply reflect having changed the perceptible part of language. In my work, I focus on developing experimental designs that cleanly dissociate latent from explicit information across different levels of language representation, from single sounds to entire texts. To do so, I often rely on the grammatical properties of different languages (such as Arabic).

Ultimately, comprehending language involves building a model of what is being communicated, and updating this model continuously. This is especially true of expository language: the kind of language we typically encounter in educational settings, where the goal is to transmit knowledge about a specific (often new) topic. My colleague Maxime Tulling and I are currently investigating the neural basis of how the brain builds and updates these models, using MEG data recorded while adults and children listened to natural expository speech. Do we find evidence in our data for model building and updating? Are the left and right hemispheres equally involved? Do children and adults process discourse differently? Stay tuned for more!

Another area I am interested in is the neural basis of temporal displacement in language: How does the brain handle tense? How does it extract and process information about the past, the present, or the future from language input? I am currently developing experimental designs that tap into these questions.

Previously, I completed both my Master’s and Bachelor’s degrees in the Technion’s Department of Biomedical Engineering. During my Master’s, I worked in Shy Shoham’s Neural Interface Engineering Lab, where we developed novel optical methods to stimulate neurons. I used an optical device called a spatial light modulator (SLM), which can sculpt incoming laser light into dynamic 3D holographic shapes. Using the SLM to target microscopic heat absorbers dispersed near neurons in rat brain slices, I showed that we can control the firing of neurons in a network with both temporal and spatial precision. This work was part of a broader effort to develop novel optical interfaces that control brain activity and could eventually restore vision in patients with, for instance, age-related macular degeneration.
