Photo by Tina Lin

Zeyi Liu 刘泽怡
liuzeyi at stanford dot edu

Hi, I'm a second-year PhD student at Stanford University, advised by Professor Shuran Song and a member of the REAL lab. Previously, I was an undergraduate at Columbia University, where I studied Computer Science and Applied Math.

My work focuses on robot perception and manipulation. More specifically, I'm interested in developing methods for embodied agents to better perceive and understand their environment through multimodal data (e.g., vision, language, audio), facilitating the learning of robust and generalizable policies.

Google Scholar / LinkedIn / Twitter / Github

Updates

Research

ManiWAV: Learning Robot Manipulation from In-the-Wild Audio-Visual Data

Zeyi Liu, Cheng Chi, Eric Cousineau, Naveen Kuppuswamy, Benjamin Burchfiel, Shuran Song
Website  •   ArXiv  •   Video  •   Code  •   MIT Tech Review


TL;DR: A data collection and policy learning framework that learns contact-rich robot manipulation skills from in-the-wild audio-visual data.

ContactHandover: Contact-Guided Robot-to-Human Object Handover

Zixi Wang, Zeyi Liu, Nicolas Ouporov, Shuran Song
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2024
Website  •   ArXiv  •   Code (coming soon)


TL;DR: A robot-to-human handover system that leverages human contact points to inform the robot's grasp and object delivery pose.

REFLECT: Summarizing Robot Experiences for Failure Explanation and Correction

Zeyi Liu*, Arpit Bahety*, Shuran Song
Conference on Robot Learning (CoRL), November 2023
CoRL Workshop on Language and Robot Learning (Oral presentation)
Website  •   ArXiv  •   Video  •   Code


TL;DR: A framework that leverages LLMs for robot failure explanation and correction, based on a hierarchical summary of the robot's past experiences generated from multisensory data.

BusyBot: Learning to Interact, Reason, and Plan in a BusyBoard Environment

Zeyi Liu, Zhenjia Xu, Shuran Song
Conference on Robot Learning (CoRL), December 2022
Website  •   ArXiv  •   Video  •   Code


TL;DR: A toy-inspired simulated learning environment for embodied agents to acquire object manipulation, inter-object relation reasoning, and goal-conditioned planning skills.

* indicates equal contribution

Teaching & Outreach

I am passionate about teaching and about empowering underrepresented groups in academia and the tech industry.