[Photo: Three students kneel next to a robot dog]

Ph.D. Student Wins Best Paper at Prestigious Robotics Conference

Ask a person to find a frying pan, and they will most likely go to the kitchen. Ask a robot to do the same, and you may get any number of responses, depending on how the robot is trained.

Since humans often associate objects in a home with the room they are in, Naoki Yokoyama thinks robots that navigate human environments to perform assistive tasks should mimic that reasoning.

Over the past few years, roboticists have employed natural language models to help robots mimic human reasoning. However, Yokoyama, a Ph.D. student in robotics, said these models create a “bottleneck” that prevents agents from picking up on visual cues such as room type, size, décor, and lighting.

Yokoyama presented a new framework for semantic reasoning at the Institute of Electrical and Electronics Engineers (IEEE) International Conference on Robotics and Automation (ICRA) last month in Yokohama, Japan. ICRA is the world’s largest robotics conference.