
Researchers Examine How AI Can Be Trusted in High-Stakes Scenarios
Imagine an oncologist reading a report from a newly implemented artificial intelligence (AI) platform. The report recommends immediate surgery to remove a tumor that the doctor’s own examination didn’t find.
With the clock ticking and the patient’s life at risk, what should the doctor do? Has the AI misread something? Or, has it caught something the doctor missed?
High-stakes scenarios where psychology intersects with computing are the focus of Georgia Tech researchers Ryan Shandler and Cindy Xiong in their latest research project.
The pair proposed building an AI system that enables users to engage thoughtfully with AI recommendations, rather than relying on them unquestioningly.
Google Research recognized the potential value of their proposal and selected Shandler and Xiong as 2025 Google Research Scholars.