Daniel Geng

I am a fifth-year PhD student in computer science at the University of Michigan, advised by Professor Andrew Owens and graciously supported by an NSF GRFP fellowship. I study computer vision, and have previously done research in deep reinforcement learning and representation learning.

I am also currently a student researcher with Charles Herrmann and Deqing Sun at Google DeepMind. In the past I was lucky enough to work as an undergrad at UC Berkeley under Sergey Levine and Coline Devin, as well as Alyosha Efros and Taesung Park, and as an intern at FAIR under Lorenzo Torresani and Huiyu Wang.

GitHub  /  Google Scholar  /  Twitter  /  Email

Research

I am currently interested in generative models, how we can control them, and novel ways to use them. I've also worked on leveraging differentiable models of motion for image and video synthesis and understanding, and in the past I've worked on representation learning, multimodal learning, and reinforcement learning.

Images that Sound: Composing Images and Sounds on a Single Canvas
Ziyang Chen, Daniel Geng, Andrew Owens
NeurIPS, 2024  

We use diffusion models to make spectrograms that look like natural images, but also sound like natural sounds.

arXiv  /  webpage  /  code
Factorized Diffusion: Perceptual Illusions by Noise Decomposition
Daniel Geng*, Inbum Park*, Andrew Owens
ECCV, 2024  

Sister project of "Visual Anagrams." Another zero-shot method for making more types of optical illusions with diffusion models, with connections to spatial and compositional control of diffusion models, as well as inverse problems.

arXiv  /  webpage  /  code
Visual Anagrams: Synthesizing Multi-View Optical Illusions with Diffusion Models
Daniel Geng, Inbum Park, Andrew Owens
CVPR, 2024  (Oral)  

A simple, zero-shot method to synthesize optical illusions from diffusion models. We introduce Visual Anagrams—images that change appearance under a permutation of pixels.

arXiv  /  webpage  /  code  /  colab
Motion Guidance: Diffusion-Based Image Editing with Differentiable Motion Estimators
Daniel Geng, Andrew Owens
ICLR, 2024  

We achieve diffusion guidance through off-the-shelf optical flow networks. This enables zero-shot, motion-based image editing.

arXiv  /  webpage  /  code
Self-Supervised Motion Magnification by Backpropagating Through Optical Flow
Daniel Geng*, Zhaoying Pan*, Andrew Owens
NeurIPS, 2023  

By differentiating through off-the-shelf optical flow networks we can train motion magnification models in a fully self-supervised manner.

arXiv  /  webpage  /  code
Comparing Correspondences: Video Prediction with Correspondence-wise Losses
Daniel Geng, Max Hamilton, Andrew Owens
CVPR, 2022  

Pixelwise losses compare pixels at the same absolute location. Comparing pixels to their semantic correspondences instead yields surprisingly better results.

arXiv  /  webpage  /  code
SMiRL: Surprise Minimizing RL in Dynamic Environments
Glen Berseth, Daniel Geng, Coline Devin, Nicholas Rhinehart, Chelsea Finn, Dinesh Jayaraman, Sergey Levine
ICLR, 2021  (Oral)  

Life seeks order. If we reward an agent for stability, do we also get interesting emergent behavior?

arXiv  /  webpage  /  oral
Plan Arithmetic: Compositional Plan Vectors for Multi-task Control
Coline Devin, Daniel Geng, Trevor Darrell, Pieter Abbeel, Sergey Levine
NeurIPS, 2019  

Learning a composable representation of tasks aids in long-horizon generalization of a goal-conditioned policy.

arXiv  /  webpage  /  short video  /  code
Bayesian Confidence Prediction for Deep Neural Networks
Sayna Ebrahimi, Daniel Geng, Trevor Darrell

Given any classification architecture, we can augment it with a confidence network that outputs calibrated class probabilities.


Website template from Jon Barron.
Last updated Feb 2024.