Picture taken by Tina Lin

Zeyi Liu 刘泽怡
liuzeyi at stanford dot edu

Hi, I'm a second-year PhD student at Stanford University, advised by Professor Shuran Song and a member of the REAL lab. Previously, I was an undergraduate at Columbia University studying Computer Science and Applied Math.

My work focuses on robot perception and manipulation. More specifically, I'm interested in developing methods that enable embodied agents to better perceive and understand their environment through multimodal data (e.g., vision, language, audio), facilitating the learning of robust and generalizable policies.

Google Scholar / LinkedIn / Twitter / Github



REFLECT: Summarizing Robot Experiences for Failure Explanation and Correction

Zeyi Liu*, Arpit Bahety*, Shuran Song
Conference on Robot Learning (CoRL), November 2023
CoRL Workshop on Language and Robot Learning (Oral presentation)
Website  •   ArXiv  •   Video  •   Code

TL;DR: REFLECT is a framework that leverages Large Language Models (LLMs) for robot failure explanation and correction, based on a hierarchical summary of the robot's past experiences generated from multisensory data.

BusyBot: Learning to Interact, Reason, and Plan in a BusyBoard Environment

Zeyi Liu, Zhenjia Xu, Shuran Song
Conference on Robot Learning (CoRL), December 2022
Website  •   ArXiv  •   Video  •   Code

TL;DR: A toy-inspired simulated learning environment for embodied agents to acquire object manipulation, inter-object relation reasoning, and goal-conditioned planning skills.

* indicates equal contribution

Teaching & Outreach

I am passionate about teaching and about empowering underrepresented minorities in academia and the tech industry.