I am a fifth-year PhD student in CS at the University of Michigan, advised by Professor Andrew Owens. I study computer vision and have previously done research in deep reinforcement learning and representation learning.
I am currently interested in generative models, how we can control them, and novel ways to use them. I have also worked on leveraging differentiable models of motion for image and video synthesis and understanding; in the past, I worked on representation learning, multimodal learning, and reinforcement learning.
Motion Prompting: Controlling Video Generation with Motion Trajectories
Daniel Geng, Charles Herrmann, Junhwa Hur, Forrester Cole, Serena Zhang, Tobias Pfaff, Tatiana Lopez-Guevara, Carl Doersch, Yusuf Aytar, Michael Rubinstein, Chen Sun, Oliver Wang, Andrew Owens, Deqing Sun
arXiv  
We train a video generation model to be conditioned on motion trajectories, and then prompt it with motion prompts, eliciting a wide range of behaviors.
A sister project of "Visual Anagrams": another zero-shot method for making more types of optical illusions with diffusion models, with connections to spatial and compositional control of diffusion models, and to inverse problems.
A simple, zero-shot method to synthesize optical illusions with diffusion models. We introduce Visual Anagrams: images that change appearance under a permutation of pixels.
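For intuition, here is a minimal sketch of the zero-shot idea: at each denoising step, a noise estimate is computed for every view of the image, mapped back to the canonical orientation, and averaged. The `eps_model` stub, the specific views and prompts, and the schedule constants below are illustrative placeholders, not the actual model or paper code.

```python
import numpy as np

# Placeholder for a pretrained text-conditioned noise predictor;
# in practice this would be a real diffusion model's denoiser.
def eps_model(x_t, prompt, t):
    rng = np.random.default_rng(hash((prompt, t)) % 2**32)
    return rng.standard_normal(x_t.shape)

# Each view is a pixel permutation paired with its exact inverse
# and the prompt describing what that view should depict.
views = [
    (lambda x: x,              lambda x: x,               "an oil painting of a bear"),
    (lambda x: np.rot90(x, 2), lambda x: np.rot90(x, -2), "an oil painting of a waterfall"),
]

def combined_noise_estimate(x_t, t):
    """Average noise predictions across all views, undoing each
    view's transformation before averaging."""
    eps = [inv(eps_model(view(x_t), prompt, t)) for view, inv, prompt in views]
    return np.mean(eps, axis=0)

# One illustrative DDIM-style update using the combined estimate
# (the alpha values are arbitrary stand-ins for a real schedule).
x_t = np.random.standard_normal((64, 64, 3))
eps = combined_noise_estimate(x_t, t=500)
alpha_t, alpha_prev = 0.5, 0.6
x0_pred = (x_t - np.sqrt(1 - alpha_t) * eps) / np.sqrt(alpha_t)
x_prev = np.sqrt(alpha_prev) * x0_pred + np.sqrt(1 - alpha_prev) * eps
```

Because each view is a pixel permutation, the inverse transform maps every noise prediction back to a common frame, so averaging keeps the estimate statistically valid while steering all views toward their respective prompts.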