me

I'm Robert Kirk, a PhD student at the UCL DARK Lab in the UCL Centre for Artificial Intelligence, supervised by Tim Rocktäschel and Edward Grefenstette. I'm an aspiring effective altruist and rationalist. I'm interested in generalisation in reinforcement learning, out-of-distribution robustness, and AI safety and alignment.

Publications

A Survey of Generalisation in Deep Reinforcement Learning
Robert Kirk, Amy Zhang, Edward Grefenstette, Tim Rocktäschel
arXiv preprint, 2021-11-18
https://arxiv.org/abs/2111.09794

Graph Backup: Data Efficient Backup Exploiting Markovian Data
Zhengyao Jiang, Tianjun Zhang, Robert Kirk, Tim Rocktäschel, Edward Grefenstette
Deep RL Workshop NeurIPS 2021
https://openreview.net/forum?id=0UQqmPGuL4n

MiniHack the Planet: A Sandbox for Open-Ended Reinforcement Learning Research
Mikayel Samvelyan, Robert Kirk, Vitaly Kurin, Jack Parker-Holder, Minqi Jiang, Eric Hambro, Fabio Petroni, Heinrich Küttler, Edward Grefenstette, Tim Rocktäschel
NeurIPS 2021 Datasets and Benchmarks Track
https://arxiv.org/abs/2109.13202

More about me

Before starting my PhD, I worked for two years at Smarkets, a sports and politics betting exchange, as an infrastructure and software engineer. For the first year and a half I was a software engineer in a team developing Python microservices, and then I joined the infrastructure team. Before that I studied for four years at Somerville College, University of Oxford, for an integrated master's in mathematics and computer science. Up until the fourth year I maintained a 50/50 split between the two subjects, and in the fourth year I focused more on abstract computer science. My master's thesis, supervised by Samson Abramsky, was on implementing the comonadic formulation of pebble games in Idris.

I write for the Alignment Newsletter, mostly on the topics of interpretability and reinforcement learning. I've also collaborated with a small group of fellow independent researchers (originally formed as part of the AI Safety Research Program) on interpretability, both doing technical research and thinking about how interpretability helps AI alignment; the results can be seen here.

Outside of technical things, I'm interested in rationality, neuroscience, cooking, video games, music (basically any genre), and reading. I sing in a choir and previously played the French horn.