In my research, I focus on the roles of hierarchical structure and lexical probability in the neural representations of language. How does the brain infer hierarchical structure from input that acoustically has only sequential properties? What role does lexical probability play in this process? And how does (acquired) knowledge of this hierarchical structure affect representations at lower levels?

I am currently addressing some of these questions by analyzing magnetoencephalography (MEG) data with time-resolved multiple regression, or ‘temporal response functions’, as part of my doctoral dissertation in Dr. Andrea E. Martin's Language and Computation in Neural Systems (LaCNS) group at the Max Planck Institute for Psycholinguistics (NL).

Project 1: Words in and out of sentences

In this project, we investigated whether responses to individual words are affected by sentence context. We did this by extracting low-frequency responses to word frequency with temporal response functions (TRFs) and comparing them between word lists (which lack structure and combinatorial meaning) and sentences. We found that the differences were most pronounced in time (see below), but also present in space. Download a poster here or read our paper (in JNeurosci).
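The core idea behind a TRF is time-lagged regression: the neural response is modeled as a weighted sum of lagged copies of a stimulus predictor (here, word frequency at word onsets), and the fitted weights trace out the response function over time. As a rough illustration (not our actual pipeline, which uses MEG data and cross-validated regularization), here is a minimal ridge-regression TRF sketch on simulated data; all variable names and parameter values are illustrative assumptions:

```python
import numpy as np

def lagged_design(stimulus, lags):
    """Build a design matrix with one column per time lag of the stimulus."""
    n = len(stimulus)
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        X[lag:, j] = stimulus[:n - lag]
    return X

def estimate_trf(stimulus, response, lags, alpha=0.1):
    """Ridge-regression estimate of the temporal response function."""
    X = lagged_design(stimulus, lags)
    XtX = X.T @ X + alpha * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ response)

# Simulate an impulse predictor: word-frequency values at word onsets
rng = np.random.default_rng(0)
n_samples = 2000                               # 20 s at a 100 Hz sampling rate
stim = np.zeros(n_samples)
onsets = rng.choice(n_samples - 100, size=40, replace=False)
stim[onsets] = rng.uniform(0.5, 1.5, size=40)  # per-word predictor values

# Simulated "brain" response: stimulus convolved with a kernel, plus noise
true_trf = np.hanning(30)
response = np.convolve(stim, true_trf)[:n_samples]
response += 0.1 * rng.standard_normal(n_samples)

lags = range(50)                               # 0-500 ms at 100 Hz
trf = estimate_trf(stim, response, lags)
```

The estimated `trf` should closely track `true_trf` over the first 30 lags; in real analyses, toolboxes such as MNE-Python or Eelbrain handle the estimation and regularization.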

Project 2: Building structure from probability (ongoing)

In this project, we investigated whether neural signatures of structure building change as a function of lexical contextual probability. We extracted low-frequency responses to a metric of syntactic depth (bottom-up node count) and compared these between high- and low-surprisal contexts. We observed that low lexical probability from a short context delayed the response to structure building by 150 milliseconds. Using long-context probability extracted from GPT-2 revealed that the response could be delayed by as much as 190 milliseconds! These findings were independent of word duration. Download a poster here (for SNL 2023).
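Surprisal here is the negative log probability of a word given its context: rare continuations carry high surprisal, expected ones low. Our long-context estimates came from GPT-2, but the metric itself is model-agnostic. As a self-contained illustration (a toy add-one-smoothed bigram model standing in for a short-context estimator; the corpus and names are made up for the example):

```python
import math
from collections import Counter

corpus = "the dog chased the cat the cat saw the dog".split()

# Toy short-context model: bigram counts with add-one smoothing
bigrams = Counter(zip(corpus, corpus[1:]))
contexts = Counter(corpus[:-1])
vocab_size = len(set(corpus))

def surprisal(prev, word):
    """Surprisal in bits: -log2 P(word | prev)."""
    p = (bigrams[(prev, word)] + 1) / (contexts[prev] + vocab_size)
    return -math.log2(p)

# A frequent continuation is less surprising than an unattested one
print(surprisal("the", "dog") < surprisal("the", "saw"))  # True
```

With a large language model, the same quantity is obtained from the model's per-token log probabilities instead of counts; longer context windows generally sharpen the estimates.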

Project 3: Subject-verb agreement & surprisal (ongoing)

More on this soon…