Research

Check out Timaeus's research for more work from my team.

Studying Small Language Models with Susceptibilities

2025-04-25
Garrett Baker=, George Wang=, Jesse Hoogland, Daniel Murfet

We develop a linear response framework for interpretability that treats a neural network as a Bayesian statistical mechanical system. A small, controlled perturbation of the data distribution, for example shifting the Pile toward GitHub or legal text, induces a first-order change in the posterior expectation of an observable localized on a chosen component of the network. The resulting susceptibility can be estimated efficiently with local SGLD samples and factorizes into signed, per-token contributions that serve as attribution scores. Building a set of perturbations (probes) yields a response matrix whose low-rank structure separates functional modules such as multigram and induction heads in a 3M-parameter transformer. Susceptibilities link local learning coefficients from singular learning theory with linear-response theory, and quantify how local loss landscape geometry deforms under shifts in the data distribution.
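
As a rough illustration of how such an estimate might look in practice, here is a minimal sketch assuming the first-order susceptibility reduces to a scaled posterior covariance, χ ≈ -nβ·Cov(O, ΔL), between a component observable O and the loss shift ΔL under the perturbed data distribution, with the localized posterior approximated by SGLD samples around the trained checkpoint. The names (`sgld_step`, `estimate_susceptibility`, `observable`) and hyperparameters are illustrative, not the paper's implementation.

```python
# Minimal sketch (illustrative names, not the paper's implementation):
# estimate a susceptibility as a scaled posterior covariance,
#   chi ≈ -n * beta * Cov(O(w), L_shifted(w) - L_base(w)),
# using SGLD samples drawn from a localized, tempered posterior around w*.

import torch
import torch.nn.functional as F

def sgld_step(params, grads, anchor, lr, n, beta, gamma):
    """One SGLD update on the localized tempered posterior (assumed form)."""
    with torch.no_grad():
        for p, g, p0 in zip(params, grads, anchor):
            drift = n * beta * g + gamma * (p - p0)      # pull back toward w*
            p.add_(-0.5 * lr * drift + torch.randn_like(p) * lr ** 0.5)

def estimate_susceptibility(model, observable, base_batches, shifted_batches,
                            n=10_000, beta=1.0, lr=1e-6, gamma=100.0,
                            steps=500, burn_in=100):
    """observable: callable mapping the model to a scalar tensor localized on one
    component (e.g. an attention-head weight norm); batches are (inputs, targets)."""
    params = list(model.parameters())
    anchor = [p.detach().clone() for p in params]         # the trained checkpoint w*
    obs_vals, dloss_vals = [], []
    for t in range(steps):
        xb, yb = base_batches[t % len(base_batches)]
        base_loss = F.cross_entropy(model(xb), yb)
        if t >= burn_in:                                   # record at the current sample w
            with torch.no_grad():
                xs, ys = shifted_batches[t % len(shifted_batches)]
                shifted_loss = F.cross_entropy(model(xs), ys)
                obs_vals.append(observable(model).item())
                dloss_vals.append((shifted_loss - base_loss).item())
        grads = torch.autograd.grad(base_loss, params)
        sgld_step(params, grads, anchor, lr, n, beta, gamma)
    O, dL = torch.tensor(obs_vals), torch.tensor(dloss_vals)
    cov = ((O - O.mean()) * (dL - dL.mean())).mean()
    return (-n * beta * cov).item()
```

The per-token attribution scores and the response matrix over many probes would be built from finer-grained versions of the same covariance; this sketch covers only a single scalar probe.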

You Are What You Eat – AI Alignment Requires Understanding How Data Shapes Structure and Generalisation

2025-02-08
Simon Pepin Lehalleur=, Jesse Hoogland=, Matthew Farrugia-Roberts=, Susan Wei, Alexander Gietelink Oldenziel, Stan van Wingerden, George Wang, Zach Furman, Liam Carroll, Daniel Murfet

In this position paper, we argue that understanding the relation between structure in the data distribution and structure in trained models is central to AI alignment. First, we discuss how two neural networks can have equivalent performance on the training set but compute their outputs in essentially different ways and thus generalise differently. For this reason, standard testing and evaluation are insufficient for obtaining assurances of safety for widely deployed generally intelligent systems. We argue that to progress beyond evaluation to a robust mathematical science of AI alignment, we need to develop statistical foundations for an understanding of the relation between structure in the data distribution, internal structure in models, and how these structures underlie generalisation.
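
To make the first point concrete, here is a toy numerical illustration (not from the paper): two predictors that fit the same training points exactly yet disagree away from them, so training performance alone cannot distinguish how they generalise.

```python
# Toy illustration (not from the paper): two models with identical training
# performance but different behaviour off the training points.
import numpy as np

x_train = np.linspace(-1.0, 1.0, 5)
y_train = x_train ** 2

# Model A: degree-4 polynomial interpolant (recovers x^2 almost exactly).
coeffs = np.polyfit(x_train, y_train, deg=4)
model_a = lambda x: np.polyval(coeffs, x)

# Model B: nearest-neighbour lookup over the training set.
model_b = lambda x: y_train[np.abs(x[:, None] - x_train[None, :]).argmin(axis=1)]

print(np.allclose(model_a(x_train), y_train), np.allclose(model_b(x_train), y_train))
# True True  -> both achieve (numerically) zero training error
x_test = np.array([-2.0, 1.5, 2.0])
print(model_a(x_test))   # ~[4.0, 2.25, 4.0]
print(model_b(x_test))   # [1.0, 1.0, 1.0]  -> very different generalisation
```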

Dynamics of Transient Structure in In-Context Linear Regression Transformers

2025-01-29
Liam Carroll=, Jesse Hoogland, Matthew Farrugia-Roberts, Daniel Murfet

Modern deep neural networks display striking examples of rich internal computational structure. Uncovering principles governing the development of such structure is a priority for the science of deep learning. In this paper, we explore the transient ridge phenomenon: when transformers are trained on in-context linear regression tasks with intermediate task diversity, they initially behave like ridge regression before specializing to the tasks in their training distribution. This transition from a general solution to a specialized solution is revealed by joint trajectory principal component analysis. Further, we draw on the theory of Bayesian internal model selection to suggest a general explanation for the phenomena of transient structure in transformers, based on an evolving tradeoff between loss and complexity. We empirically validate this explanation by measuring the model complexity of our transformers as defined by the local learning coefficient.
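
As a rough sketch of the kind of analysis involved, the snippet below implements a generic joint trajectory PCA under the assumption that each checkpoint is embedded in function space by its flattened predictions on a fixed probe batch; the names (`function_space_embedding`, `joint_trajectory_pca`, `probe_inputs`) are illustrative rather than the paper's code.

```python
# Minimal sketch (illustrative names): joint trajectory PCA in function space.
# Each checkpoint is embedded as its flattened predictions on a fixed probe
# batch; trajectories from several runs are stacked and projected onto the
# leading principal components to visualize development.

import numpy as np
import torch

@torch.no_grad()
def function_space_embedding(model, probe_inputs):
    """Flatten the model's outputs on a fixed probe batch into one vector."""
    return model(probe_inputs).reshape(-1).cpu().numpy()

def joint_trajectory_pca(checkpoint_lists, probe_inputs, k=2):
    """checkpoint_lists: list of runs, each a list of models saved during training."""
    rows, run_ids = [], []
    for run_id, checkpoints in enumerate(checkpoint_lists):
        for model in checkpoints:
            rows.append(function_space_embedding(model, probe_inputs))
            run_ids.append(run_id)
    X = np.stack(rows)                       # (total checkpoints, embedding dim)
    X = X - X.mean(axis=0, keepdims=True)    # center jointly across all runs
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    coords = U[:, :k] * S[:k]                # PC coordinates for every checkpoint
    return coords, np.array(run_ids)
```

Plotting the returned coordinates per run then traces each training trajectory through a shared low-dimensional function space, which is how transient detours like the ridge-regression phase become visible.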

Differentiation and Specialization of Attention Heads via the Refined Local Learning Coefficient

2024-10-04 • ICLR • Spotlight
George Wang=, Jesse Hoogland=, Stan van Wingerden=, Zach Furman, Daniel Murfet

We introduce refined variants of the Local Learning Coefficient (LLC), a measure of model complexity grounded in singular learning theory, to study the development of internal structure in transformer language models during training. By applying these refined LLCs (rLLCs) to individual components of a two-layer attention-only transformer, we gain novel insights into the progressive differentiation and specialization of attention heads. Our methodology reveals how attention heads differentiate into distinct functional roles over the course of training, analyzes the types of data these heads specialize to process, and discovers a previously unidentified multigram circuit. These findings demonstrate that rLLCs provide a principled, quantitative toolkit for developmental interpretability, which aims to understand models through their evolution across the learning process. More broadly, this work takes a step towards establishing the correspondence between data distributional structure, geometric properties of the loss landscape, learning dynamics, and emergent computational structures in neural networks.
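
For a sense of what a refined LLC might look like operationally, here is a minimal sketch of a weight-restricted variant, assuming the standard SGLD-based estimator λ̂ = nβ(E_w[L_n(w)] − L_n(w*)) with sampling confined to one component's parameters (e.g. a single attention head) while the rest of the network stays frozen at the checkpoint; the names and hyperparameters are hypothetical.

```python
# Minimal sketch (hypothetical names): a weight-restricted LLC estimate,
#   lambda_hat = n * beta * (E_posterior[L_n(w)] - L_n(w*)),
# where SGLD only updates the parameters of one component and the rest of
# the network stays frozen at the trained checkpoint w*.

import torch

def restricted_llc(model, component_params, batches, loss_fn,
                   n=10_000, beta=1.0, lr=1e-6, gamma=100.0,
                   steps=500, burn_in=100):
    params = list(component_params)                        # only these parameters move
    anchor = [p.detach().clone() for p in params]
    with torch.no_grad():                                  # L_n(w*) on the same batches
        init_loss = sum(loss_fn(model(x), y).item() for x, y in batches) / len(batches)
    chain_losses = []
    for t in range(steps):
        x, y = batches[t % len(batches)]
        loss = loss_fn(model(x), y)
        grads = torch.autograd.grad(loss, params)
        with torch.no_grad():
            for p, g, p0 in zip(params, grads, anchor):
                drift = n * beta * g + gamma * (p - p0)    # localized tempered posterior
                p.add_(-0.5 * lr * drift + torch.randn_like(p) * lr ** 0.5)
        if t >= burn_in:
            chain_losses.append(loss.item())
    return n * beta * (sum(chain_losses) / len(chain_losses) - init_loss)
```

A data-refined variant would, roughly, swap `batches` for samples drawn from a particular subset of the training distribution while keeping the same estimator.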

Loss Landscape Degeneracy Drives Stagewise Development in Transformers

2024-02-04 • TMLR 2025 • Best Paper @ HiLD Workshop
Jesse Hoogland=, George Wang=, Matthew Farrugia-Roberts, Liam Carroll, Susan Wei, Daniel Murfet

We show that in-context learning emerges in transformers in discrete developmental stages when they are trained on either language modeling or linear regression tasks. We introduce two methods for detecting the milestones that separate these stages by probing the geometry of the population loss in both parameter space and function space. We study the stages revealed by these new methods using a range of behavioral and structural metrics to establish their validity.
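
As a schematic of how a parameter-space probe can be turned into stage boundaries, the sketch below takes an LLC estimate at each saved checkpoint, smooths the curve, and marks checkpoints where it switches between plateau and growth; this is an illustration of the idea, not the paper's exact procedure.

```python
# Minimal sketch (illustrative, not the paper's exact procedure): given an
# LLC estimate at each saved checkpoint, mark candidate stage boundaries
# where the smoothed curve switches between "plateau" and "changing".

import numpy as np

def stage_boundaries(steps, llc_values, window=5, slope_threshold=1e-3):
    """steps: checkpoint step numbers; llc_values: LLC estimate at each checkpoint."""
    steps = np.asarray(steps, dtype=float)
    llc = np.asarray(llc_values, dtype=float)
    kernel = np.ones(window) / window
    smooth = np.convolve(llc, kernel, mode="same")       # suppress estimator noise
    slope = np.gradient(smooth, steps)
    changing = np.abs(slope) > slope_threshold           # True while the LLC is moving
    flips = np.flatnonzero(np.diff(changing.astype(int))) + 1
    return steps[flips]                                   # candidate stage boundaries
```

In practice the threshold and smoothing window would need to be tuned to the scale and noise of the LLC estimates, and the resulting boundaries cross-checked against behavioral metrics.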