FP3: A 3D Foundation Policy for Robotic Manipulation

Rujia Yang*1, Geng Chen*2,4, Chuan Wen‡1,2,3, Yang Gao‡1,2,3
1IIIS, Tsinghua University, 2Shanghai AI Laboratory, 3Shanghai Qi Zhi Institute, 4UC San Diego. *Equal Contribution, ‡Equal Advising
FP3 Concept

Abstract

Following their success in natural language processing and computer vision, foundation models pre-trained on large-scale multi-task datasets have also shown great potential in robotics. However, most existing robot foundation models rely solely on 2D image observations and ignore 3D geometric information, which is essential for robots to perceive and reason about the 3D world. In this paper, we introduce FP3, the first large-scale 3D foundation policy model for robotic manipulation. FP3 builds on a scalable diffusion transformer architecture and is pre-trained on 60k trajectories with point cloud observations. Thanks to this model design and diverse pre-training data, FP3 can be efficiently fine-tuned for downstream tasks while exhibiting strong generalization capabilities. Experiments on real robots demonstrate that with only 80 demonstrations, FP3 can learn a new task with over a 90% success rate in novel environments with unseen objects, significantly surpassing existing robot foundation models.
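To make the described architecture concrete, below is a minimal PyTorch sketch of a point-cloud-conditioned diffusion transformer policy. Everything here is an illustrative assumption rather than the released implementation: the PointNet-style encoder, module names, token layout, dimensions, and the toy cosine noise schedule are placeholders chosen only to show the overall shape of such a model.

import torch
import torch.nn as nn

class PointCloudEncoder(nn.Module):
    """PointNet-style encoder (an assumption): per-point MLP followed by max pooling."""
    def __init__(self, dim=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                 nn.Linear(64, 128), nn.ReLU(),
                                 nn.Linear(128, dim))

    def forward(self, pts):                     # pts: (B, N, 3)
        return self.mlp(pts).max(dim=1).values  # (B, dim)

class DiffusionTransformerPolicy(nn.Module):
    """Denoising transformer that predicts the noise on an action chunk,
    conditioned on a point cloud feature and a diffusion timestep."""
    def __init__(self, action_dim=7, horizon=16, dim=256, depth=6, heads=8):
        super().__init__()
        self.obs_enc = PointCloudEncoder(dim)
        self.act_in = nn.Linear(action_dim, dim)
        self.t_emb = nn.Embedding(1000, dim)    # discrete DDPM timesteps
        self.pos = nn.Parameter(torch.zeros(1, horizon + 2, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, depth)
        self.act_out = nn.Linear(dim, action_dim)

    def forward(self, noisy_actions, pts, t):
        # Token sequence: [observation, timestep, a_1 ... a_H]
        obs = self.obs_enc(pts).unsqueeze(1)    # (B, 1, dim)
        te = self.t_emb(t).unsqueeze(1)         # (B, 1, dim)
        x = torch.cat([obs, te, self.act_in(noisy_actions)], dim=1) + self.pos
        x = self.backbone(x)
        return self.act_out(x[:, 2:])           # predicted noise: (B, H, action_dim)

# One DDPM-style training step on dummy data (illustrative):
policy = DiffusionTransformerPolicy()
pts = torch.randn(4, 1024, 3)                  # batch of point clouds
actions = torch.randn(4, 16, 7)                # ground-truth action chunks
t = torch.randint(0, 1000, (4,))
noise = torch.randn_like(actions)
alpha_bar = torch.cos(t.float() / 1000 * torch.pi / 2) ** 2   # toy noise schedule
noisy = alpha_bar.sqrt().view(-1, 1, 1) * actions \
      + (1 - alpha_bar).sqrt().view(-1, 1, 1) * noise
loss = nn.functional.mse_loss(policy(noisy, pts, t), noise)
loss.backward()

At inference time, a policy of this form would start from Gaussian noise and iteratively denoise an action chunk conditioned on the current point cloud observation.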

In-the-Wild Test

Our method, FP3, can be fine-tuned with a small number of target-task demonstrations and directly generalizes to unseen scenes and objects. It outperforms both small models trained from scratch and other large robot foundation models. A hypothetical fine-tuning sketch follows below.
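For intuition, here is a hypothetical fine-tuning loop in the same spirit. It reuses the DiffusionTransformerPolicy sketch above; the checkpoint path, optimizer, learning rate, epoch count, and dummy data shapes are all assumptions, not the paper's settings.

import torch

# Hypothetical fine-tuning on a small set of target-task demonstrations.
# DiffusionTransformerPolicy and the data shapes follow the sketch above.
policy = DiffusionTransformerPolicy()
# policy.load_state_dict(torch.load("fp3_pretrained.pt"))  # path is illustrative

optim = torch.optim.AdamW(policy.parameters(), lr=1e-4)    # assumed hyperparameters
demos = [(torch.randn(1024, 3), torch.randn(16, 7)) for _ in range(80)]  # 80 dummy demos

for epoch in range(50):
    for pts, actions in demos:
        pts, actions = pts.unsqueeze(0), actions.unsqueeze(0)
        t = torch.randint(0, 1000, (1,))
        noise = torch.randn_like(actions)
        alpha_bar = torch.cos(t.float() / 1000 * torch.pi / 2) ** 2
        noisy = alpha_bar.sqrt().view(-1, 1, 1) * actions \
              + (1 - alpha_bar).sqrt().view(-1, 1, 1) * noise
        loss = torch.nn.functional.mse_loss(policy(noisy, pts, t), noise)
        optim.zero_grad()
        loss.backward()
        optim.step()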

Generalization

We conduct more comprehensive experiments on FP3's generalization to different environments and robot setups using the Clean Table task.

BibTeX

@misc{yang2025fp33dfoundationpolicy,
      title={FP3: A 3D Foundation Policy for Robotic Manipulation}, 
      author={Rujia Yang and Geng Chen and Chuan Wen and Yang Gao},
      year={2025},
      eprint={2503.08950},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2503.08950}, 
}

The website template is sourced from https://github.com/nerfies/nerfies.github.io