Machine Learning Virtual Seminar: Towards a Theory of Representation Learning for Reinforcement Learning
Abstract: Provably sample-efficient reinforcement learning from rich observational inputs remains a key open challenge. While impressive recent advances enable sample-efficient exploration and learning with linear models, the handling of more general non-linear models remains limited. In this talk, we study reinforcement learning with linear models where the features underlying the linear model are learned rather than specified a priori. While the broader question of representation learning for useful embeddings of complex data has seen tremendous progress, doing so in reinforcement learning presents additional challenges: good representations cannot be discovered without adequate exploration, but effective exploration is challenging in the absence of good representations. Concretely, we study this question in the context of low-rank MDPs [Jiang et al., 2017; Jin et al., 2019; Yang and Wang, 2019], where the features underlying a state-action pair are not assumed to be known, unlike in most prior work. We develop two styles of methods: model-based and model-free. In the model-based method, we learn an approximate factorization of the transition model, plan within the learned model to obtain a fresh exploratory policy, and then update the factorization with additional data. In the model-free method, we learn features so that quantities such as value functions at subsequent states can be predicted linearly in those features. In both approaches, we address the intricate coupling between exploration and representation learning and provide sample complexity guarantees. More details can be found at https://arxiv.org/abs/2006.10814 and https://arxiv.org/abs/2102.07035.
[Based on joint work with Jinglin Chen, Nan Jiang, Sham Kakade, Akshay Krishnamurthy, Aditya Modi, and Wen Sun]
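To make the setup concrete, in a low-rank MDP the transition kernel factorizes as P(s' | s, a) = <phi(s, a), mu(s')>, where phi and mu are unknown. The sketch below is a minimal, illustrative toy of the model-based loop described in the abstract (fit a factorization, plan in the learned model for exploration, collect fresh data, repeat). It is not the speaker's algorithm: the tabular environment, the NMF-style fitting routine standing in for the maximum-likelihood oracle, the visitation-count bonus, and all parameter choices are assumptions made here for illustration; see https://arxiv.org/abs/2006.10814 for the actual method and its guarantees.

```python
# Toy sketch of a low-rank MDP and a model-based explore/fit loop.
# Everything here is illustrative; it is not the algorithm from the talk.
import numpy as np

rng = np.random.default_rng(0)
S, A, d, H = 15, 3, 2, 5          # states, actions, latent rank, horizon

# Ground-truth low-rank transition kernel: P(s' | s, a) = <phi(s, a), mu(s')>.
phi_true = rng.dirichlet(np.ones(d), size=(S, A))       # (S, A, d)
mu_true = rng.dirichlet(np.ones(S), size=d)             # (d, S)
P_true = np.einsum("sak,kt->sat", phi_true, mu_true)    # (S, A, S)

def rollout(policy, episodes, counts):
    """Follow a deterministic (H, S) policy in the true MDP, updating visit counts."""
    for _ in range(episodes):
        s = 0
        for h in range(H):
            a = policy[h, s]
            s_next = rng.choice(S, p=P_true[s, a])
            counts[s, a, s_next] += 1.0
            s = s_next

def fit_factorization(counts, iters=300):
    """Rank-d factorization of the smoothed empirical kernel via multiplicative
    (NMF-style) updates -- a simple stand-in for the MLE oracle."""
    emp = counts + 1e-2
    emp = emp / emp.sum(axis=2, keepdims=True)           # empirical P(s' | s, a)
    M = emp.reshape(S * A, S)
    W = rng.random((S * A, d)) + 0.1
    Hf = rng.random((d, S)) + 0.1
    for _ in range(iters):
        Hf *= (W.T @ M) / (W.T @ W @ Hf + 1e-9)
        W *= (M @ Hf.T) / (W @ Hf @ Hf.T + 1e-9)
    P_hat = (W @ Hf).reshape(S, A, S)
    return P_hat / P_hat.sum(axis=2, keepdims=True)

def plan_exploratory_policy(P_hat, counts):
    """Finite-horizon value iteration in the learned model with a bonus that
    rewards rarely visited (s, a) pairs, producing a fresh exploratory policy."""
    bonus = 1.0 / np.sqrt(1.0 + counts.sum(axis=2))      # (S, A)
    policy = np.zeros((H, S), dtype=int)
    V = np.zeros(S)
    for h in reversed(range(H)):
        Q = bonus + np.einsum("sat,t->sa", P_hat, V)     # (S, A)
        policy[h] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return policy

# Alternate between fitting the factorization, planning for exploration,
# and collecting more data with the resulting policy.
counts = np.zeros((S, A, S))
rollout(np.zeros((H, S), dtype=int), episodes=20, counts=counts)   # warm start
for it in range(10):
    P_hat = fit_factorization(counts)
    policy = plan_exploratory_policy(P_hat, counts)
    rollout(policy, episodes=20, counts=counts)
    err = np.abs(P_hat - P_true).sum(axis=2).max()       # worst-case L1 model error
    print(f"iteration {it}: max L1 model error = {err:.3f}")
```

The model-free approach from the abstract replaces the explicit factorization with learned features against which next-state value functions are linearly predictable; a comparable sketch would swap the fitting step for that regression objective (see https://arxiv.org/abs/2102.07035).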
Bio: Alekh Agarwal is a researcher working on the theoretical foundations of machine learning, spanning areas including large-scale and distributed optimization, high-dimensional statistics, online learning, and, most recently, reinforcement learning. He focuses on designing theoretically sound methods that lend themselves to practice, and his work at Microsoft has led to the creation of a new Azure service (https://aka.ms/personalizer) that operationalizes some of his reinforcement learning research. His research has been recognized with a NeurIPS best paper award and an ACM SIGAI Industry Impact award for the Azure Personalization Service.
Register and Attend: https://primetime.bluejeans.com/a2m/register/qxugrckz
Media Contact
Kyla Hanson