I am a fourth-year PhD student in CS at the University of Michigan, advised by Professor Andrew Owens and supported by an NSF GRFP fellowship. I currently study computer vision, and I have previously done research in deep reinforcement learning and representation learning. I was lucky enough to have worked as an undergrad at UC Berkeley under Sergey Levine and Coline Devin, as well as Alyosha Efros and Taesung Park, and as an intern at FAIR under Lorenzo Torresani and Huiyu Wang.
I am currently interested in generative image models, how one can control them, and possible ways to use them. I've also worked on leveraging differentiable models of motion for image/video synthesis and understanding, as well as on representation learning and multimodal learning.
A sister project of "Visual Anagrams": another zero-shot method for making more types of optical illusions with diffusion models, with connections to spatial and compositional control of diffusion models, and to inverse problems.
A simple, zero-shot method for synthesizing optical illusions with diffusion models. We introduce visual anagrams: images that change appearance under a permutation of their pixels.