Research & Publications
Research Overview
My research is in machine learning theory, with two primary directions. First, I study learning in adaptive environments, including online, multi-agent, and other interactive settings. Second, I study the theoretical foundations of modern machine learning, especially questions arising from deep learning and generative models.
More broadly, I am also interested in questions in sampling, statistical estimation, and classical learning theory. A recurring theme in my research is identifying fundamental principles and structural phenomena that explain modern learning problems and guide the design of provably effective algorithms.
You can also find my publications on my Google Scholar profile.
If you are interested in collaborating, feel free to reach out at viverson@uwaterloo.ca.
Research Directions & Selected Works
Learning in Adaptive Environments
I work on the theoretical foundations of learning in dynamic and interactive settings, including online learning, reinforcement learning, and learning in games.
Theoretical Foundations of Deep Learning & Generative Models
I am interested in developing a deeper mathematical understanding of the empirical success of modern machine learning, with the broader goal of informing the design of better algorithms. My work has included approximation-theoretic aspects of deep learning, such as the expressivity of neural networks and continuous-state sequence models. I am also interested in other aspects of deep learning theory, including weak-to-strong generalization and theoretical questions surrounding diffusion models.
Sampling & Statistical Estimation
I work on the algorithmic and information-theoretic foundations of statistical estimation and sampling, especially in high-dimensional settings. I have also worked on related questions in privacy, robustness, and stability.
Classical Learning Theory
I am also interested in classical learning-theoretic frameworks that help formalize modern learning problems. I have worked on how additional feedback structures, such as discriminative feature feedback or contrastive information, can enrich classical learning setups and improve learnability.
Miscellaneous Projects
Beyond machine learning theory, I also work on mathematical problems of independent interest, especially in number theory, convex and discrete geometry, and combinatorics, including graph theory and extremal or additive combinatorics.
Acknowledgements
I am deeply grateful to my mentors, collaborators, and friends who have profoundly shaped my path as a researcher. I extend particular gratitude to Xiaoheng Wang for introducing me to mathematical research; to Stephen Vavasis and Gautam Kamath for being among the first to mentor me and to involve me in machine learning theory research; and to Argyris Mouzakis for his invaluable mentorship and many helpful discussions. I am also indebted to Shai Ben-David, Aukosh Jagannath, and Jeffrey Negrea for inspiring my interest in this field.
