Avnish Narayan

I am an MS student in Computer Science at the Robotics Embedded Systems Laboratory, part of the Department of Computer Science at the University of Southern California. I work on problems at the intersection of machine learning and robotics. My advisor is Gaurav Sukhatme.

I have a BS in Computer Engineering and Computer Science from USC. I transferred to USC after spending the first two years of my undergraduate career at the University of Washington, Seattle, where I was a research assistant in Emo Todorov's lab during my second year.

Email  /  GitHub  /  LinkedIn



I'm interested in robotics, deep reinforcement learning, Meta-RL, and Multitask-RL, as well as reproducibility in these fields.



Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning
Tianhe Yu, Deirdre Quillen, Zhanpeng He, Avnish Narayan, Hayden Shively, Adithya Bellathur, Ryan Julian, Karol Hausman, Chelsea Finn, Sergey Levine
arxiv / code / site

Metaworld is an open-source simulated benchmark for meta-reinforcement learning and multi-task learning consisting of 50 distinct robotic manipulation tasks. The Meta-RL benchmarks are designed to enable algorithms that generalize, so that entirely new, held-out tasks can be acquired quickly. The Multitask-RL benchmarks are designed to enable algorithms that learn a variety of tasks that share common structure.
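The Meta-RL setup above hinges on a strict split between training tasks and entirely held-out test tasks. The sketch below illustrates that structure in plain Python; the function and task names are illustrative only, not the actual Metaworld API.

```python
import random

def split_tasks(task_names, n_test, seed=0):
    """Split a task pool into train tasks and entirely held-out test tasks.

    A meta-learner trains only on the train split and is evaluated on how
    quickly it acquires the held-out split. (Illustrative sketch, not the
    Metaworld API.)
    """
    rng = random.Random(seed)
    shuffled = task_names[:]
    rng.shuffle(shuffled)
    return shuffled[n_test:], shuffled[:n_test]

# 50 manipulation tasks, as in Metaworld; names here are hypothetical.
tasks = [f"manipulation-task-{i}" for i in range(50)]
train_tasks, test_tasks = split_tasks(tasks, n_test=5)
```

The key invariant is that the two splits are disjoint: test tasks are never seen during meta-training, so success on them measures generalization rather than memorization.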

I am the lead maintainer of Metaworld and am actively working on Metaworld-V2. For Metaworld-V2, I have improved the solvability of the 50 Metaworld environments by rewriting their reward functions and modifying their observation spaces to match those of real-world robotic environments. I have also built a regression testing framework for the environments, so that changes in their performance can be easily detected as the benchmark and its dependencies evolve. Lastly, I am working on a visual extension of Metaworld for Meta-RL and Multitask-RL from image-based observations. I am advised by Dr. Karol Hausman, Prof. Chelsea Finn, and Prof. Sergey Levine.



Avnish Narayan, rlworkgroup contributors
code / site

garage is a toolkit for developing and evaluating reinforcement learning algorithms, and an accompanying library of state-of-the-art implementations built using that toolkit. garage includes not only the standard single-task RL algorithms such as SAC and PPO, but also implementations of popular Multitask-RL and Meta-RL algorithms such as PEARL, MAML, MTSAC, and RL^2. We're expanding support for new algorithms and primitives every day!

I’ve worked on everything from reproducing algorithms and creating reusable components that allow us to produce accurate baselines, to high performance samplers that allow experiments to take advantage of the resources on HPCs.

Design and source code from Jon Barron's website and Leonid Keselman's Jekyll fork.