
Rofunc: The Full Process Python Package for Robot Learning from Demonstration and Robot Manipulation


Repository address: https://github.com/Skylark0924/Rofunc

The Rofunc package focuses on robotic Imitation Learning (IL) and Learning from Demonstration (LfD), and provides convenient Python functions for robotics, covering demonstration collection, data pre-processing, LfD algorithms, planning, and control methods. We also provide an Isaac Gym-based robot simulator for evaluation. The package aims to advance the field by building a full-process toolkit and validation platform that simplifies and standardizes demonstration data collection, processing, learning, and deployment on robots.
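To make the record / preprocess / learn / deploy pipeline above concrete, here is a toy, self-contained sketch of the same flow. Every function name in it is a hypothetical placeholder for illustration only, not the rofunc API:

```python
# Toy end-to-end LfD pipeline: record -> preprocess -> learn -> reproduce.
# All names below are hypothetical placeholders, NOT the rofunc API.
import random

random.seed(0)  # reproducible "demonstrations"

def record_demonstrations(n_demos=3, n_steps=50):
    """Simulate recording noisy 1-D demonstrations of a reach from 0 to 1."""
    return [[t / (n_steps - 1) + random.uniform(-0.02, 0.02) for t in range(n_steps)]
            for _ in range(n_demos)]

def preprocess(demos):
    """Clip each trajectory into the valid workspace [0, 1]."""
    return [[min(max(x, 0.0), 1.0) for x in traj] for traj in demos]

def learn_model(demos):
    """'Learn' a reference trajectory as the per-timestep mean over demos."""
    n_steps = len(demos[0])
    return [sum(traj[t] for traj in demos) / len(demos) for t in range(n_steps)]

def reproduce(model, gain=0.5, x0=0.0):
    """Track the learned reference with a simple proportional controller."""
    x, rollout = x0, []
    for ref in model:
        x += gain * (ref - x)
        rollout.append(x)
    return rollout

demos = preprocess(record_demonstrations())
model = learn_model(demos)
rollout = reproduce(model)
print(f"final state: {rollout[-1]:.3f}")  # settles near the target 1.0
```

In rofunc the per-stage stand-ins above correspond to the real modules listed later (e.g. recording devices, learning algorithms, and planners), each far richer than this sketch.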

Installation

Install from PyPI (stable version)

The installation is very easy,

pip install rofunc

and as you'll find later, it's easy to use as well!

import rofunc as rf

Have fun in the robotics world!

Note Several requirements need to be installed before using the package. Please refer to the installation guide for more details.

Install from Source (nightly version, recommended)

git clone https://github.com/Skylark0924/Rofunc.git
cd Rofunc

# Create a conda environment
# Python 3.8 is strongly recommended
conda create -n rofunc python=3.8

# For Linux users
sh ./scripts/install.sh
# For macOS users (brew is required; the Isaac Gym-based simulator is not supported on macOS)
sh ./scripts/mac_install.sh

Note: If you want to use the ZED-camera-related functions, you need to install the ZED SDK manually. (We tried to package it as a .whl file and add it to requirements.txt, but unfortunately the ZED SDK does not support direct installation this way.)

Documentation

Documentation Example Gallery

Note: Currently we provide a brief document; please refer to here. A comprehensive version, in both English and Chinese, is being built on Read the Docs.

To give you a quick overview of the rofunc pipeline, we provide an interesting example of learning to play Taichi from human demonstration. You can find it in the Quick start section of the documentation.

The available functions and plans can be found as follows.

Note: βœ… Achieved Β· πŸ”ƒ Reformatting Β· β›” TODO

| Data | Learning | P&C | Tools | Simulator |
| --- | --- | --- | --- | --- |
| xsens.record βœ… | DMP β›” | LQT βœ… | Config βœ… | Franka βœ… |
| xsens.export βœ… | GMR βœ… | LQTBi βœ… | robolab.coord βœ… | CURI βœ… |
| xsens.visual βœ… | TPGMM βœ… | LQTFb βœ… | robolab.fk βœ… | CURIMini πŸ”ƒ |
| opti.record βœ… | TPGMMBi βœ… | LQTCP βœ… | robolab.ik βœ… | CURISoftHand βœ… |
| opti.export βœ… | TPGMM_RPCtl βœ… | LQTCPDMP βœ… | robolab.fd β›” | Walker βœ… |
| opti.visual βœ… | TPGMM_RPRepr βœ… | LQR βœ… | robolab.id β›” | Gluon πŸ”ƒ |
| zed.record βœ… | TPGMR βœ… | PoGLQRBi βœ… | visualab.dist βœ… | Baxter πŸ”ƒ |
| zed.export βœ… | TPGMRBi βœ… | iLQR πŸ”ƒ | visualab.ellip βœ… | Sawyer πŸ”ƒ |
| zed.visual βœ… | TPHSMM βœ… | iLQRBi πŸ”ƒ | visualab.traj βœ… | Multi-Robot βœ… |
| emg.record βœ… | BCO πŸ”ƒ | iLQRFb πŸ”ƒ | | |
| emg.export βœ… | STrans β›” | iLQRCP πŸ”ƒ | | |
| emg.visual βœ… | PPO(SKRL) βœ… | iLQRDyna πŸ”ƒ | | |
| mmodal.record β›” | SAC(SKRL) βœ… | iLQRObs πŸ”ƒ | | |
| mmodal.export βœ… | TD3(SKRL) βœ… | MPC β›” | | |
| | PPO(SB3) β›” | RMP β›” | | |
| | SAC(SB3) β›” | | | |
| | TD3(SB3) β›” | | | |
| | PPO(RLlib) βœ… | | | |
| | SAC(RLlib) βœ… | | | |
| | TD3(RLlib) βœ… | | | |
| | PPO(ElegRL) βœ… | | | |
| | SAC(ElegRL) βœ… | | | |
| | TD3(ElegRL) βœ… | | | |
| | PPO(RofuncRL) πŸ”ƒ | | | |
| | SAC(RofuncRL) πŸ”ƒ | | | |
| | TD3(RofuncRL) πŸ”ƒ | | | |
| | CQL(RofuncRL) β›” | | | |
| | DTrans βœ… | | | |
| | ODTrans β›” | | | |
| | RT-1 β›” | | | |
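For a flavor of the planning-and-control entries (e.g. LQT/LQR), here is a minimal finite-horizon LQR for a 1-D double integrator. This is an illustrative sketch only, not rofunc's implementation or API; all weights and dimensions are made up for the example:

```python
# Minimal finite-horizon discrete LQR on a 1-D double integrator (illustrative only).
import numpy as np

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])  # state: [position, velocity]
B = np.array([[0.0], [dt]])
Q = np.diag([10.0, 1.0])               # state-error cost
R = np.array([[0.01]])                 # control-effort cost
T = 100                                # horizon length

# Backward Riccati recursion producing time-varying feedback gains K_t.
P = Q.copy()
gains = []
for _ in range(T):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)
    gains.append(K)
gains.reverse()

# Forward rollout: regulate the state from x0 toward the origin.
x = np.array([[1.0], [0.0]])
for K in gains:
    u = -K @ x
    x = A @ x + B @ u

print("final state:", x.ravel())  # converges close to [0, 0]
```

rofunc's LQT variants additionally track a reference trajectory (and bimanual versions couple two arms), but the Riccati backward pass / forward rollout structure is the same idea.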

Star History

Star History Chart

Citation

If you use rofunc in a scientific publication, we would appreciate citations to the following paper:

@misc{Rofunc2022,
      author = {Liu, Junjia and Li, Chenzui and Delehelle, Donatien and Li, Zhihao and Chen, Fei},
      title = {Rofunc: The full process python package for robot learning from demonstration and robot manipulation},
      year = {2022},
      publisher = {GitHub},
      journal = {GitHub repository},
      howpublished = {\url{https://github.com/Skylark0924/Rofunc}},
}

Related Papers

  1. Robot cooking with stir-fry: Bimanual non-prehensile manipulation of semi-fluid objects (IEEE RA-L 2022 | Code)
@article{liu2022robot,
         title={Robot cooking with stir-fry: Bimanual non-prehensile manipulation of semi-fluid objects},
         author={Liu, Junjia and Chen, Yiting and Dong, Zhipeng and Wang, Shixiong and Calinon, Sylvain and Li, Miao and Chen, Fei},
         journal={IEEE Robotics and Automation Letters},
         volume={7},
         number={2},
         pages={5159--5166},
         year={2022},
         publisher={IEEE}
}
  2. SoftGPT: Learn Goal-oriented Soft Object Manipulation Skills by Generative Pre-trained Heterogeneous Graph Transformer (IROS 2023)
  3. Learning Robot Generalized Bimanual Coordination using Relative Parameterization Method on Human Demonstration (IEEE CDC 2023 | Code)

The Team

Rofunc is developed and maintained by the CLOVER Lab (Collaborative and Versatile Robots Laboratory), CUHK.

Acknowledgements

We would like to acknowledge the following projects:

Learning from Demonstration

  1. pbdlib
  2. Ray RLlib
  3. ElegantRL
  4. SKRL

Planning and Control

  1. Robotics codes from scratch (RCFS)
