Labs & Groups
AI Virtual Assistant (AVA) Lab
Faculty: Larry Heck
Georgia Tech's AI Virtual Assistant (AVA) lab focuses on the research behind next-generation virtual assistants. We revisit assumptions about every aspect of modern AVAs: human-computer interaction design; single-modal vs. multimodal interactions; situated interactions over screens and mixed reality (AR/VR); conversations ranging from task-oriented dialogue to open-domain chit-chat, and blends of both; explicit and implicit (commonsense) knowledge-driven conversations; and higher-level inference and reasoning.
Animal-Computer Interaction Lab
Faculty: Melody Jackson, Thad Starner
We explore the emerging area of animal-computer interaction, focusing on interfaces for interspecies communication and on the design and evaluation of interactive technology for users of multiple species.
BrainLab
Faculty: Melody Jackson
The BrainLab explores innovative ways of accomplishing human-computer interaction through biometric inputs. Biometric interfaces identify and measure small changes in a person's behavior or physiological responses to certain stimuli. The work has potential in many areas, especially for providing individuals with disabilities a means of personal “hands-off” control of computers and other devices.
BorgLab
Faculty: Frank Dellaert
Our research is in the overlap between robotics and computer vision, and we are particularly interested in graphical-model techniques to solve large-scale problems in mapping, 3D reconstruction, and, increasingly, model-predictive control. The GTSAM toolbox embodies many of the ideas this research group has worked on over the past few years and is available at [gtsam.org](https://gtsam.org) and [the GTSAM GitHub repo](https://github.com/borglab/gtsam).
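As a flavor of the factor-graph approach that GTSAM implements, here is a minimal sketch using its Python bindings: a three-pose 2D odometry chain, anchored by a prior factor and optimized with Levenberg-Marquardt. The keys, poses, and noise values are illustrative placeholders, not taken from the lab's projects.

```python
# Minimal 2D pose-graph sketch with GTSAM's Python bindings (pip install gtsam).
# Keys, poses, and noise sigmas below are illustrative placeholders.
import numpy as np
import gtsam

# Factor graph holding the prior and odometry constraints.
graph = gtsam.NonlinearFactorGraph()

# Anchor the first pose at the origin with a prior factor.
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))
graph.add(gtsam.PriorFactorPose2(1, gtsam.Pose2(0.0, 0.0, 0.0), prior_noise))

# Odometry measurements between consecutive poses (2 m forward each step).
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.1]))
graph.add(gtsam.BetweenFactorPose2(1, 2, gtsam.Pose2(2.0, 0.0, 0.0), odom_noise))
graph.add(gtsam.BetweenFactorPose2(2, 3, gtsam.Pose2(2.0, 0.0, 0.0), odom_noise))

# Deliberately noisy initial estimates for the three poses.
initial = gtsam.Values()
initial.insert(1, gtsam.Pose2(0.5, 0.1, 0.1))
initial.insert(2, gtsam.Pose2(2.3, 0.2, -0.1))
initial.insert(3, gtsam.Pose2(4.1, 0.1, 0.1))

# Nonlinear least-squares optimization over the whole graph.
result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result)  # optimized poses land near (0,0,0), (2,0,0), (4,0,0)
```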
CoDe Craft Group
Faculty: Hyun Joo Oh
The Computational Design and Craft Group develops computational design tools and methods that integrate everyday craft materials with computing. We explore how computing technologies, both as tools and as materials, can extend and transform familiar and accessible craft materials, and we investigate how these combinations can broaden creative possibilities for designers.
Computational Behavior Analysis Lab
Faculty: Thomas Ploetz
Our research agenda focuses on applied machine learning: developing systems and innovative sensor-data analysis methods for real-world applications. The primary application domain for our work is computational behavior analysis, where we develop methods for automated and objective behavior assessment in naturalistic environments. The main drivers of our work are “in the wild” deployments and the development of systems and methods that have a real impact on people's lives.
Computational Perception Lab
Faculty: Aaron Bobick, Tucker Balch, Henrik Christensen, Frank Dellaert, Irfan Essa, Jim Rehg, Thad Starner
The Computational Perception Laboratory (CPL) was established to explore and develop the next generation of intelligent machines, interfaces, and environments for modeling, perceiving, recognizing, and interacting with humans and for all forms of behavior analysis from data.
Computer Vision Lab
Faculty and Affiliates: Devi Parikh, Dhruv Batra, Stefan Lee
The Computer Vision Lab (CVL) works on a range of problems in visual intelligence. These include, but are not limited to, building agents that can understand visual content, make decisions and act based on that understanding, and communicate with humans in natural language about visual content; such agents should be interpretable, demonstrate common sense, and work effectively with humans to accomplish shared goals.
Contextual Computing Group
Faculty: Thad Starner
The Contextual Computing Group develops applications and interfaces that make the computer aware of what the user is doing so it can assist as appropriate. Several current projects at the research stage are envisioned to work together to assist a user in routine tasks, such as automatically scheduling an appointment, redirecting an urgent phone call appropriately based on the user's schedule and current activity, and recognizing that the user is engaged in conversation and would prefer to take the phone call later.
CORE Robotics Lab
Faculty: Matthew Gombolay
The Cognitive Optimization and Relational (CORE) Robotics Laboratory develops advanced algorithmic techniques that enable robots to collaborate with human teammates. The lab's work is founded on a vision of moving beyond robots as tools that must be manually controlled, toward a paradigm in which robots learn, through interaction and experience, how to be effective peers for human professionals in healthcare, manufacturing, and search & rescue. Algorithmic techniques include deep reinforcement learning, mathematical programming, and distributed control theory. Additionally, we conduct human-subject experiments to understand how to design effective algorithms and to evaluate the contribution of our computational techniques.
Critical Technocultures Lab
Faculty: Cindy Lin
Our lab explores how people develop and use data and technology as cultural and historical practices. We draw together scholarship in science and technology studies with critical orientations in human-computer interaction to examine, and to design, how technology can aid in building equitable and just futures.
Cultural Research in Technology (CRIT) Lab
Faculty: Shaowen Bardzell
The Cultural Research in Technology (CRIT) Lab, directed by Shaowen Bardzell, a Professor at Georgia Tech's School of Interactive Computing, is a Human-Computer Interaction (HCI) and interaction design research lab. The lab brings humanistic thinking to the design of interactive technologies, critiques interactive technologies with regard to their sociocultural and political impacts, and investigates users, use situations, and technologies where culture is strongly implicated in the success of the technology.
Culture and Technology (CAT) Lab
Faculty: Betsy DiSalvo
The CAT Lab studies how culture impacts the use and production of technology, focusing on learning applications, computer science education, and the design of new technologies, with culture as a point of convergence.
Design & Intelligence Lab
Faculty: Ashok Goel, David Joyner, Keith McGreggor, Spencer Rugaber
Research Faculty: Dalton Bassett, Andrew Hornback, Sandeep Kakar, Helen Lu, Vrinda Nandan, Harshvardhan Sikka
The Design & Intelligence Laboratory conducts research into human-centered artificial intelligence and computational cognitive science. Historically, the lab focused on computational design and creativity; over the last decade, its focus has increasingly shifted to AI in education and education in AI. The lab is part of the National AI Institute for Adult Learning and Online Education ([aialoe.org](https://aialoe.org/)).
ELC Lab
Faculty: Amy Bruckman
Research in the ELC Lab studies how to design online communities that bring out the best in individuals and groups. Work focuses on understanding across differences, content moderation, social movements, and online collaboration.
Entertainment Intelligence and Human-Centered AI Labs
Faculty: Mark Riedl
The Entertainment Intelligence and Human-Centered AI Labs focus on computational approaches to creating engaging and entertaining experiences. Some of the problem domains they work on include computer games, storytelling, interactive digital worlds, adaptive media, and procedural content generation. They expressly focus on computationally "hard" problems that require automation, just-in-time generation, and scalability of personalized experiences.
Friendly Cities Lab
Faculty: Clio Andris
We are a research group within the School of City & Regional Planning and the School of Interactive Computing at Georgia Tech. We work on a new field of study, interpersonal relationships and social networks in geographic space, drawing on geographic information systems (GIS), social network analysis, urban planning, information visualization, and complex systems.
Graphics Lab
Faculty: Greg Turk, Irfan Essa, Bo Zhu
The Graphics Lab is dedicated to research in all aspects of computer graphics, including animation, modeling, rendering, image and video manipulation, and augmented reality.
Hays Lab
Faculty: James Hays
We research computer vision, machine learning, and robotics. We like collecting new datasets that help reveal things about people or the world around us. Our interests are broad, from autonomous vehicle perception to image synthesis to human grasp understanding.
Hoffman Lab
Faculty: Judy Hoffman
Our research lies at the intersection of computer vision and machine learning and focuses on tackling real-world variation and scale while minimizing human supervision. We develop learning algorithms that facilitate transfer of information through unsupervised and semi-supervised model adaptation and generalization.
Information Interfaces Lab
Faculty: John Stasko
The Information Interfaces Lab's mission is to help people take advantage of information to enrich their lives. While the amount of data available to people and organizations has skyrocketed over the past two decades, largely fueled by the growth of the internet, methods for people to benefit from this flood of data have not kept pace. A central focus of many of the group's projects is the creation of data visualization and visual analytics tools that help people explore, analyze, and understand large data sets.
Immersive Visualization & Interaction Lab
Faculty: Yalong Yang
At IVI Lab, we are passionate about the ever-evolving landscape of display and interaction technologies. With VR/AR becoming increasingly popular, we envision a future where these technologies seamlessly integrate into both personal and business domains. Our research is centered around two core themes: designing and building novel visualization and interaction techniques in VR/AR, and investigating human factors in using interactive systems in VR/AR.
Ka Moamoa Lab
Faculty: Josiah Hester
At Ka Moamoa, we design, build, and deploy sustainable computational devices that last decades, supporting applications in healthcare, sustainability, and interactivity. We build from scratch: wearables, implantables, interactive devices, and sensors that harvest energy from the ambient environment, unlock new capabilities, and serve forgotten communities. We work toward a sustainable future for computing informed by Indigenous and Native Hawaiian (Kanaka maoli) culture and traditions.
Machine Learning and Perception Lab
Faculty and Affiliates: Dhruv Batra, Devi Parikh, Stefan Lee
We work at the intersection of machine learning, computer vision, natural language processing, and AI, with a focus on developing intelligent systems that are able to concisely summarize their beliefs across different sub-components or 'modules' of AI (vision, language, reasoning, planning, dialog, navigation), and interpretable AI systems that provide explanations and justifications for why they believe what they believe.
Natural Language Processing Lab
Faculty: Alan Ritter, Wei Xu
This lab works on machine learning approaches to understanding and generating human language. Research topics include neural text generation and dialogue, information extraction, social media analysis, robust NLP, interactive learning, and minimally supervised learning algorithms.
Play & Learn Lab
Faculty: Judith Uchidiuno
We believe that all students, regardless of their socioeconomic status, should have access to high-quality Computer Science education. We also believe that students’ culture and lived experiences are assets in the design process and increase the efficacy of education technologies.
Through our research, we partner with different communities, students, and educators to design learning technologies that maximize learning gains, the enjoyment of learning, and students’ sense of belonging in learning environments.
People, AI, & Robots (PAIR)
Faculty: Animesh Garg
Our research vision is to build the Algorithmic Foundations for Generalizable Autonomy, enabling robots to acquire skills at both the cognitive and dexterous levels and to seamlessly interact and collaborate with humans in novel environments. We focus on understanding structured inductive biases and causality in a quest for general-purpose embodied intelligence that learns from imprecise information and achieves the flexibility and efficiency of human reasoning.
PIXI Lab
Faculty: Keith Edwards
The PIXI Lab is a group of researchers at the GVU Center at Georgia Tech exploring the boundaries between interaction and infrastructure. We take a human-centered approach to our research: understanding the needs and practices of people through empirical methods, designing compelling user experiences that fit that context, and then building the underlying systems and networking infrastructure necessary to realize those experiences. We are dedicated to creating technology that is not simply usable but also useful.
Robot Autonomy and Interactive Learning (RAIL) Lab
Faculty: Sonia Chernova
The RAIL research lab focuses on the development of robotic systems that operate effectively in complex human environments, adapt to user preferences, and learn from user input. Directed by Sonia Chernova, the lab's research spans adjustable autonomy, semantic reasoning, human-robot interaction, and cloud robotics. Explore the lab's site for projects and publications to get an in-depth view of recent work.
Robot Learning and Reasoning (RL2) Lab
Faculty: Danfei Xu
Our research is at the intersection of robotics and machine learning. Our mission is to advance the science and systems of intelligent robots so they can assist with everyday tasks in human environments with minimal expert intervention. We are particularly interested in integrating data-driven and model-based decision-making approaches to solve long-horizon manipulation tasks. Our current research focuses on visuomotor skill learning, representation learning for planning, and data-driven approaches to human-robot collaboration.
Robotics Perception and Learning (RIPL) Lab
Faculty: Zsolt Kira
Our research focuses on the intersection of learning methods for sensor processing and robotics, developing novel machine learning algorithms and formulations to solve some of the more difficult perception problems in these areas. We are interested in moving beyond supervised learning (un-, semi-, and self-supervised learning and continual/lifelong learning), as well as in distributed perception (multi-modal fusion, learning to incorporate information across a group of robots, etc.).
SHI Labs
Faculty: Humphrey Shi
SHI stands for our core values: Service, Humanity, Innovation. SHI Labs is committed to developing leaders who advance technology and improve the human condition, and we embrace progress and service. We are interested in basic research motivated by important applications, and we welcome interdisciplinary collaborations. Our recent research focus is on building the next generation of multimodal AI that understands, emulates, and interacts with the world we live in, in a creative, efficient, and responsible way.
Social Dynamics and Wellbeing Lab
Faculty: Munmun De Choudhury
The Social Dynamics and Wellbeing Lab studies, mines, and analyzes social media to derive insights into improving our health and well-being.
Sonification Lab
Faculty: Bruce Walker
The Georgia Tech Sonification Lab is an interdisciplinary research group based in the School of Psychology and the School of Interactive Computing at Georgia Tech. Under the direction of Prof. Bruce Walker, the Sonification Lab focuses on the development and evaluation of auditory and multimodal interfaces, and the cognitive, psychophysical and practical aspects of auditory displays, paying particular attention to sonification.
Structured Techniques for Algorithmic Robotics (STAR) Lab
Faculty: Harish Ravichandar
We spend most of our time trying to trick people (and sometimes ourselves!) into thinking robots are smart and collaborative. To aid this illusion, the STAR Lab develops structured algorithms that help robots learn to reliably operate and collaborate in complex human environments. We primarily focus on three distinct, yet connected, areas: robot learning, human-robot interaction, and multi-agent coordination. We inject structure into our approaches by combining domain knowledge with tools from dynamical systems, machine learning, and probabilistic inference.
Teachable AI Lab (TAIL)
Faculty: Christopher MacLellan
The Teachable AI Lab (or TAIL for short) is an interdisciplinary research group in the Georgia Institute of Technology's School of Interactive Computing. Our mission is to better understand how people teach and learn, and to build machines that can teach and learn like people do. We engage in both use-inspired and fundamental research to achieve this mission. Our research focuses primarily on three thrust areas: (1) Teachable Systems, (2) Human-Like AI/ML Models, and (3) Computational Models of Human Learning and Decision Making. These thrust areas are synergistic and support one another.
Technology and Design towards Empowerment (TanDEm) Lab
Faculty: Neha Kumar
The TanDEm Lab comprises students at Georgia Tech and beyond who are keen to work in tandem with individuals and communities to interrogate the value that technologies can bring, and are bringing, into the world. The lab's primary commitment is to the ICTD research community, which has been exploring the ties between technology design and global development for about 15 years. Our contributions sit within human-computer interaction and human-centered computing, where we aim to co-develop an understanding of the impact that computing can have, and is having, globally.
Technologies and International Development Lab
Faculty: Michael Best
We research the practice, the promise, and the peril of information and communication technologies (ICTs) in social, economic, and political development. We study the risks and rewards of ICT systems for people and communities, particularly within Africa and Asia. We explore issues of rights and justice in a digital age. And we examine new forms for inclusive innovation and social entrepreneurship enhanced through digital systems. The T+ID Lab is an interdisciplinary community bringing together computer and social scientists with design and policy specialists. We collaborate directly with stakeholders outside of the Lab to critique technologies, invent new ones, and research how and why (or why not) ICTs can serve as a tool to empower, enrich, and interconnect.
Technology-Integrated Learning Environments (TILEs) Lab
Faculty: Jessica Roberts
Research in this group focuses on the design of learning environments in a variety of contexts and content areas employing technology to mediate social and collaborative learning. We draw on theories and methods from the Learning Sciences to investigate how people learn and how to design effective learning interactions.
Ubicomp Health and Wellness Lab
Faculty: Rosa Arriaga
We are conducting research at the intersection of health and wellness. Contact Dr. Arriaga (arriaga at cc.gatech.edu) if you are an undergrad or MS student interested in conducting research in our lab. Research opportunities are for credit only.
Ubiquitous Computing Lab
Faculty: Rosa Arriaga, Thomas Ploetz, Thad Starner, Hyun Joo Oh, Josiah Hester, Alexander Adams
We are interested in ubiquitous computing (ubicomp) and the research issues involved in building and evaluating ubicomp applications and services that impact our lives. Much of our work is situated in settings of everyday activity, such as the classroom, the office, and the home. Our research focuses on several topics, including automated capture of and access to live experiences, context-aware computing, applications and services in the home, natural interaction, software architecture, technology policy, security and privacy issues, and technology for individuals with special needs.
Uncommon Sense Lab
Faculty: Alexander Adams
At the Uncommon Sense Lab, we explore how technology can improve healthcare and health equity. We design, fabricate, and validate novel sensing systems that enable us to sense our bodies and the world around us. These sensing systems allow us to engage users' biological senses through novel feedback systems that subtly inform users about what is happening in their bodies and the environment, which can change their behavior.
Visual Analytics Lab
Faculty: Alex Endert
The goal of our research is to develop interactive visual analytics applications that help people make sense of data. We approach this challenge by combining techniques from information visualization, machine learning, data mining, and human-computer interaction to produce usable and powerful visual analytics applications.
VisualizaXiong Lab
Faculty: Cindy Xiong
We conduct experiments to understand how humans interpret data and make decisions using visualizations, generating guidelines for visualization tools that help people more effectively explore and communicate data to make decisions. Some of the questions we are interested in answering include:
- How do people make comparisons in data? How can we design natural language visualization tools to support comparisons in visual analysis?
- How are people biased when interpreting data? Why do these biases happen? How can we design information systems that mitigate them?
- How do people synthesize information across multiple sources? How can we design tools to help people more effectively seek and synthesize information?
- How do we design trustworthy visualizations? What are the ethical and practical implications of studying trust and data storytelling in human-data interaction?
Wellness Technology Lab
Faculty: Andrea Grimes Parker
The Wellness Technology Lab examines how interactive and social computing technologies can be used to address issues of social justice and health equity.
Work 2 Play Lab
Faculty: Beki Grinter
In the last decade, computing has left the office and entered people's domestic and recreational lives. Consequently, computing affects our lives, shaping not just how we work but also how we play. Moreover, computing potentially allows individuals to blur these boundaries, letting us conduct domestic routines while in the office or work from a cafe in an urban center. Researchers in the Work 2 Play Lab use a variety of empirical techniques to advance the state of knowledge about how computing affects our lives, from work to play.