Research
I am broadly interested in artificial intelligence.
I am particularly interested in making learning more autonomous, whether by enabling agents to learn in
reset-free, nonstationary, or unsupervised settings, or by incorporating additional sources of supervision,
such as offline or multimodal data.
Pretrained Transformers as Universal Computation Engines
Kevin Lu,
Aditya Grover,
Pieter Abbeel,
Igor Mordatch
arXiv preprint, 2021
arXiv /
blog /
code
unofficial:
press /
video (by Yannic Kilcher)
We show that a transformer pretrained on natural language can, without finetuning the self-attention and feedforward layers, match the performance of a transformer fully trained on a downstream non-language modality.
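To make "without finetuning the self-attention and feedforward layers" concrete, here is a minimal sketch of the idea (my illustration, not the released code linked above), assuming the HuggingFace GPT2Model API; the input/output dimensions are illustrative:

```python
import torch
from transformers import GPT2Model

# Load a language-pretrained transformer and freeze its self-attention and
# feedforward weights, leaving only the layer norms and positional
# embeddings trainable (a sketch of the frozen-transformer setup).
model = GPT2Model.from_pretrained("gpt2")
for name, param in model.named_parameters():
    # "ln" matches GPT-2's layer-norm parameters, "wpe" its position embeddings.
    param.requires_grad = ("ln" in name) or ("wpe" in name)

# New, trainable input/output layers for the downstream non-language
# modality; the dimensions below are hypothetical.
embed_in = torch.nn.Linear(16, model.config.n_embd)   # e.g. flattened patch -> hidden
readout = torch.nn.Linear(model.config.n_embd, 10)    # hidden -> class logits

# Forward pass: embed the new modality, run the frozen transformer, read out.
x = torch.randn(8, 64, 16)  # (batch, sequence of patches, patch dim), illustrative
hidden = model(inputs_embeds=embed_in(x)).last_hidden_state
logits = readout(hidden[:, -1])  # classify from the final token's state
```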
Reset-Free Lifelong Learning with Skill-Space Planning
Kevin Lu,
Aditya Grover,
Pieter Abbeel,
Igor Mordatch
International Conference on Learning Representations, 2021
NeurIPS Deep RL Workshop, 2020   (Contributed Talk)
arXiv /
website /
oral /
poster /
code
We show that planning over a space of skills is a key component of successful reset-free lifelong learning: it avoids sink states, improves stability, and increases learning signal.
Efficient Empowerment Estimation for Unsupervised Stabilization
Ruihan Zhao,
Kevin Lu,
Pieter Abbeel,
Stas Tiomkin
International Conference on Learning Representations, 2021
paper
We design a new unbiased empowerment estimator and show that it represents empowerment more faithfully than traditional variational mutual-information algorithms.
Adaptive Online Planning for Continual Lifelong Learning
Kevin Lu,
Igor Mordatch,
Pieter Abbeel
NeurIPS Deep RL Workshop, 2019   (Contributed Talk)
arXiv /
website /
oral /
poster /
code
We tackle reset-free learning in dynamically changing worlds by combining model-based planning with model-free learning, invoking an expensive planner only when necessary.