I am a fifth-year PhD student in CS at the University of Michigan, advised by Professor Andrew Owens and graciously supported by an NSF GRFP fellowship. I study computer vision, and have previously done research in deep reinforcement learning and representation learning.
I am currently interested in generative models, how we can control them, and novel ways to use them. I've also worked on leveraging differentiable models of motion for image and video synthesis and understanding, and in the past I've done research on representation learning, multimodal learning, and reinforcement learning.
A sister project of "Visual Anagrams": another zero-shot method for making more types of optical illusions with diffusion models, with connections to spatial and compositional control of diffusion models and to inverse problems.
A simple, zero-shot method for synthesizing optical illusions with diffusion models. We introduce Visual Anagrams: images that change appearance under a permutation of their pixels, such as a rotation or flip.
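Roughly, the idea is to denoise the image under every view at once: at each diffusion step, the noise is estimated in each view (each view being an invertible pixel permutation, e.g. a rotation) under that view's prompt, the estimates are mapped back to the original orientation, and the results are averaged. The sketch below illustrates this idea in PyTorch; `denoise`, `anagram_noise_estimate`, and the two-view setup are illustrative assumptions, not the released code.

```python
import torch

# `denoise` is a hypothetical stand-in for a pretrained diffusion model's
# noise predictor, epsilon_theta(x_t, t, prompt).

def rotate_90(x: torch.Tensor) -> torch.Tensor:
    # A view that is a pure pixel permutation: a 90-degree rotation.
    return torch.rot90(x, k=1, dims=(-2, -1))

def rotate_90_inv(x: torch.Tensor) -> torch.Tensor:
    # Inverse of the view above.
    return torch.rot90(x, k=-1, dims=(-2, -1))

def anagram_noise_estimate(x_t, t, prompts, denoise):
    # Noise estimate for a two-view anagram: each view sees the same noisy
    # image under a different pixel permutation and a different prompt;
    # per-view estimates are mapped back to the identity view and averaged
    # before taking the usual reverse-diffusion step.
    views = [
        (lambda z: z, lambda z: z, prompts[0]),  # identity view
        (rotate_90, rotate_90_inv, prompts[1]),  # rotated view
    ]
    estimates = []
    for apply_view, invert_view, prompt in views:
        eps = denoise(apply_view(x_t), t, prompt)  # estimate in the view
        estimates.append(invert_view(eps))         # map back to identity view
    return torch.stack(estimates).mean(dim=0)
```

Restricting views to pixel permutations matters here: a permutation of i.i.d. Gaussian noise is still i.i.d. Gaussian noise, so each transformed estimate remains statistically valid for the diffusion update.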