Daniel Geng

I am a senior at UC Berkeley studying EECS and Math. I do research on deep reinforcement learning and representation learning in the Berkeley Artificial Intelligence Research (BAIR) lab, where I'm advised by Coline Devin and Professor Sergey Levine.

Github  /  Google Scholar  /  LinkedIn  /  Blog

Research

My research lies at the intersection of deep learning, computer vision, and robotics. I am interested in learned representations as a way to aid the generalization of robotic policies and improve our understanding of learning algorithms.

SMiRL: Surprise Minimizing RL in Dynamic Environments
Daniel Geng, Glen Berseth, Coline Devin, Dinesh Jayaraman, Chelsea Finn, Sergey Levine
"Deep Reinforcement Learning" and "Biological and Artificial RL" Workshops at NeurIPS 2019, 2019  

Life seeks order. If we reward an agent for stability, do we also get interesting emergent behavior?

Plan Arithmetic: Compositional Plan Vectors for Multi-task Control
Coline Devin, Daniel Geng, Trevor Darrell, Pieter Abbeel, Sergey Levine
NeurIPS, 2019  

Learning a composable representation of tasks aids in long-horizon generalization of a goal-conditioned policy.

arXiv  /  webpage  /  short video  /  code
Bayesian Confidence Prediction for Deep Neural Networks
Sayna Ebrahimi, Daniel Geng, Trevor Darrell
Unpublished  

Given any classification architecture, we can augment it with a confidence network that outputs calibrated class probabilities.


Website template from Jon Barron.
Last updated December 2019.