PLAN-Lab/PALM

Project Page · arXiv · Hugging Face

🌟 Installation

Environment Setup

Create and activate conda environment:

conda create -n palm python=3.10 -y
conda activate palm

Install PyTorch:

pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 torchaudio==0.13.1+cu117 --extra-index-url https://download.pytorch.org/whl/cu117

Install dependencies:

pip install git+https://github.com/openai/CLIP.git
pip install -r requirements.txt
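
After the steps above, a quick import check can confirm the pinned dependencies landed in the active environment. This is a minimal sketch, not part of the repository; the module names (`torch`, `clip`) are simply the packages installed above:

```python
import importlib.util

def check_env(modules=("torch", "clip")):
    """Return {module_name: True/False} indicating whether each pinned
    dependency is importable in the current environment."""
    return {name: importlib.util.find_spec(name) is not None for name in modules}

if __name__ == "__main__":
    # Prints one line per dependency, e.g. "torch: ok" or "clip: MISSING"
    for name, ok in check_env().items():
        print(f"{name}: {'ok' if ok else 'MISSING'}")
```

On a CUDA machine you may also want to verify `torch.cuda.is_available()` returns True, since the wheels above are built against CUDA 11.7.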

🌟 Getting Started

We provide step-by-step guidance for running PALM in simulation and in real-world experiments. Follow the instructions for your target setting below.

Simulation

CALVIN ABC-D

LIBERO

Real-World

Real-World (Quick Training with & without Pre-training)

For users aiming to train PALM from scratch or fine-tune it, we provide comprehensive instructions for environment setup, downstream task data preparation, training, and deployment.

Real-World (Pre-training)

This section details the pre-training process of PALM in real-world experiments, including environment setup, dataset preparation, and training procedures. Downstream task processing and fine-tuning are covered in Real-World (Quick Training with & without Pre-training).

🌟 Checkpoints

Relevant checkpoints are available on Google Drive.

🌟 Citation

If you find the project helpful for your research, please consider citing our paper:

@article{liu2026palm,
  title={PALM: Progress-Aware Policy Learning via Affordance Reasoning for Long-Horizon Robotic Manipulation},
  author={Liu, Yuanzhe and Zhu, Jingyuan and Mo, Yuchen and Li, Gen and Cao, Xu and Jin, Jin and Shen, Yifan and Li, Zhengyuan and Yu, Tianjiao and Yuan, Wenzhen and others},
  journal={arXiv preprint arXiv:2601.07060},
  year={2026}
}

About

[CVPR 2026] Progress-Aware Policy Learning via Affordance Reasoning for Long-Horizon Robotic Manipulation
