
The DPhil in Computational Discovery is a multidisciplinary programme spanning projects in Advanced Molecular Simulations, Machine Learning and Quantum Computing to develop new tools and methodologies for life sciences discovery.


This innovative course has been developed in close partnership between Oxford University and IBM Research. Each research project has been co-developed by Oxford academics working with IBM scientists. Each student will have one or more named IBM supervisors and many opportunities for collaboration with IBM throughout the studentship.

The scientific focus of the programme is at the interface between the Physical and Life Sciences. By bringing together advances in data and computing science with large, complex sets of experimental data, more realistic and predictive computational models can be developed. These new tools and methodologies for computational discovery can drive advances in our understanding of fundamental cellular biology and in drug discovery. Projects will span the emerging fields of Advanced Molecular Simulations, Machine Learning and Quantum Computing, addressing fundamental questions both within each of these fields and at their interfaces.

Students will benefit from the interdisciplinary nature of the course cohort as well as the close interactions with IBM Scientists.

Applicants who are offered places will receive a funding package that includes fees at the Home rate and a stipend at the standard Research Council rate plus £2,400 (currently £16,062 p.a.) for four years.

Project 1

Title: Defining computation and connectivity in neuronal population activity underlying motor learning
PI: Andrew Sharott 
Summary: Neural network structure constrains the activity dynamics of the brain. Specifically, learning movements guided by the outcome of previous actions leads to adaptations in the motor cortical network and its activity. Understanding these mechanisms at the cellular level requires simultaneous recordings from hundreds of local neurons at millisecond timescales in vivo during learning of a skilled movement. We have established an approach to simultaneously record thousands of neurons across motor regions in mice, using recently developed high-density silicon-probe electrodes in combination with machine-learning-based kinematic analysis and cell-type-specific optogenetic modulation.

Motivated by recent work linking the structure of population activity to the underlying synaptic connectivity (Dahmen et al., 2022) and by our experience with cortical microcircuits (Peng et al., 2019, 2021), we aim to identify core changes in neuronal microcircuits that underlie motor learning and execution. We will develop novel approaches to extract activity signatures reflecting plastic changes at the local synaptic level and model how these constrain the overall dimensionality of neuronal population activity. The results will provide a microcircuit-level understanding of learning in motor circuits and lay the groundwork for studying neural network architecture in high-density electrophysiological recordings.

Dahmen, D. et al. Strong and localized recurrence controls dimensionality of neural activity across brain areas. bioRxiv 2020.11.02.365072 (2022).
Peng, Y. et al. High-throughput microcircuit analysis of individual human brains through next-generation multineuron patch-clamp. eLife (2019).
Peng, Y. et al. Spatially structured inhibition defined by polarized parvalbumin interneuron axons promotes head direction tuning. Science Advances (2021).
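As a rough illustration of the kind of analysis the project describes (not the project's own method), the "dimensionality of neuronal population activity" studied in Dahmen et al. is often summarised by the participation ratio of the covariance eigenvalues. A minimal NumPy sketch, with entirely synthetic data and a hypothetical `participation_ratio` helper:

```python
import numpy as np

def participation_ratio(activity):
    """Effective dimensionality of population activity.

    activity: array of shape (n_timepoints, n_neurons).
    Returns (sum lambda_i)^2 / sum(lambda_i^2) over covariance
    eigenvalues, ranging from 1 (all variance in one mode) to
    n_neurons (variance spread evenly across neurons).
    """
    cov = np.cov(activity, rowvar=False)
    eigvals = np.linalg.eigvalsh(cov)
    eigvals = np.clip(eigvals, 0.0, None)  # guard tiny negative rounding errors
    return eigvals.sum() ** 2 / (eigvals ** 2).sum()

rng = np.random.default_rng(0)
n_t, n_neurons = 2000, 50

# Low-dimensional activity: 50 neurons driven by 3 shared latent factors.
latents = rng.normal(size=(n_t, 3))
mixing = rng.normal(size=(3, n_neurons))
low_dim = latents @ mixing + 0.1 * rng.normal(size=(n_t, n_neurons))

# High-dimensional activity: independent noise in every neuron.
high_dim = rng.normal(size=(n_t, n_neurons))

print(participation_ratio(low_dim))   # close to 3
print(participation_ratio(high_dim))  # close to n_neurons
```

In the project itself this statistic would be computed on real high-density recordings, where Dahmen et al. show how its value constrains estimates of the underlying recurrent connectivity.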

Project 2

Title: Optimising therapy for brain disorders through AI-refined deep brain stimulation
PI: Hayriye Cagnan 
Summary: Brain stimulation is extensively used to modulate neural activity in order to alter behaviour. In recent years, closed-loop stimulation techniques, which sense a biomarker such as elevated neural activity patterns and deliver stimulation in time with such events, have gained increasing traction. Closed-loop stimulation is used both to establish a causal link between behaviour and neural activity and to treat various neurological and psychiatric conditions. Building on our recent work (West et al., 2022; Cagnan et al., 2017), this PhD project aims to formalise stimulation parametrisation using theoretical models of brain circuits in combination with state-of-the-art machine learning approaches. Specifically, we will train artificial neural networks to classify discrete brain states of interest and optimise stimulation parameters to achieve precise manipulation of activity propagating across brain circuits. The successful development of such an approach would provide a powerful framework to guide next-generation stimulation strategies for use in both basic science and clinical applications.
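To make the closed-loop idea concrete, here is a deliberately simple toy sketch (not the project's actual controller): stimulation is triggered whenever the short-time power of a simulated neural signal exceeds a threshold, mimicking detection of an elevated-activity biomarker. The function name, threshold, and signal parameters are all illustrative assumptions.

```python
import numpy as np

def closed_loop_trigger(signal, threshold, window=32):
    """Toy closed-loop controller: fire stimulation whenever the
    short-time power of the recorded signal crosses a threshold.

    signal: 1-D array of samples. Returns a boolean array marking
    the samples at which stimulation is delivered.
    """
    power = np.convolve(signal ** 2, np.ones(window) / window, mode="same")
    return power > threshold

rng = np.random.default_rng(1)
fs = 1000  # Hz, illustrative sampling rate
t = np.arange(0, 2.0, 1 / fs)

# Background noise plus a burst of 20 Hz oscillation between 0.8 s and
# 1.2 s, standing in for a pathological biomarker (e.g. elevated beta).
signal = 0.2 * rng.normal(size=t.size)
burst = (t > 0.8) & (t < 1.2)
signal[burst] += np.sin(2 * np.pi * 20 * t[burst])

stim = closed_loop_trigger(signal, threshold=0.2)
# Stimulation should fire almost exclusively inside the burst window.
print(stim[burst].mean(), stim[~burst].mean())
```

The project replaces this hand-set threshold with artificial neural networks that classify brain states and with optimised, model-based stimulation parameters, but the sense-then-stimulate loop has the same shape.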

Project 3

Title: Foundations of Stochastic Gradient Descent (and Generalization)
PI: Patrick Rebeschini
Summary: Stochastic gradient descent is one of the most widely used algorithmic paradigms in modern machine learning. Despite its popularity, many open questions remain about its generalization capabilities. For instance, while there is preliminary evidence that early-stopped gradient descent applied to over-parameterized models is robust with respect to label misspecification, a complete theory accounting for this phenomenon is currently lacking. The goal of this project is to rigorously investigate the robustness properties of early-stopped gradient descent from a theoretical point of view in simplified settings involving linear models, and to establish novel connections between this methodology and the field of distributionally robust optimization. The project will combine tools from the study of random structures in high-dimensional probability (e.g., concentration inequalities, the theory of optimal transport) with the general framework of gradient and mirror descent methods from optimization and online learning (e.g., regularization).

Application Deadline

This course is open for applications to the three projects shown until 3 June 2022. One funded place is available for a student eligible for Home fees.

Programme Director

Professor Phil Biggin

Supported By

IBM, EPSRC, Oxford University

Further Information

Project Booklet