The RoboTurk Simulation Dataset

We collected a large-scale simulation dataset on the SawyerPickPlace and SawyerNutAssembly tasks from the Surreal Robotics Suite using the RoboTurk platform. Crowdsourced workers collected these task demonstrations remotely. The dataset consists of 1070 successful SawyerPickPlace demonstrations and 1147 successful SawyerNutAssembly demonstrations.

We are providing this dataset in the hope that it will benefit researchers working on imitation learning. Large-scale imitation learning has received relatively little attention in the community, and we are excited to see how this data is used.

We will describe the structure of the dataset in the sections below.



After You Download

After unzipping the dataset, you will find the following subdirectories within the RoboTurkPilot directory. Each subdirectory has the same internal structure, described in the Structure of Demonstrations section below (a quick way to sanity-check the layout is sketched after this list).

bins-full: The set of complete demonstrations on the SawyerPickPlace task. Every demonstration consists of the Sawyer arm placing one of each object into its corresponding bin.

bins-Milk: A postprocessed, segmented set of demonstrations that corresponds to the SawyerPickPlaceMilk task. Every demonstration consists of the Sawyer arm placing a carton of milk into its corresponding bin.

bins-Bread: A postprocessed, segmented set of demonstrations that corresponds to the SawyerPickPlaceBread task. Every demonstration consists of the Sawyer arm placing a loaf of bread into its corresponding bin.

bins-Cereal: A postprocessed, segmented set of demonstrations that corresponds to the SawyerPickPlaceCereal task. Every demonstration consists of the Sawyer arm placing a cereal box into its corresponding bin.

bins-Can: A postprocessed, segmented set of demonstrations that corresponds to the SawyerPickPlaceCan task. Every demonstration consists of the Sawyer arm placing a can into its corresponding bin.

pegs-full: The set of complete demonstrations on the full SawyerNutAssembly task. Every demonstration consists of the Sawyer arm fitting a square nut and a round nut onto their corresponding pegs.

pegs-SquareNut: A postprocessed, segmented set of demonstrations that corresponds to the SawyerNutAssemblySquare task. Every demonstration consists of the Sawyer arm fitting a square nut onto its corresponding peg.

pegs-RoundNut: A postprocessed, segmented set of demonstrations that corresponds to the SawyerNutAssemblyRound task. Every demonstration consists of the Sawyer arm fitting a round nut onto its corresponding peg.
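
The snippet below is an illustrative sanity check of the unzipped layout, not part of the release; it assumes the RoboTurkPilot directory sits in the current working directory.

import os

root = "RoboTurkPilot"  # assumed location of the unzipped dataset

# Each subset listed above is a subdirectory with the same internal layout:
# a models subdirectory and a demo.hdf5 file (described in the next section).
for subset in sorted(os.listdir(root)):
    subset_dir = os.path.join(root, subset)
    if os.path.isdir(subset_dir):
        print(subset_dir, "->", sorted(os.listdir(subset_dir)))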

Structure of Demonstrations

Every set of demonstrations is provided as a directory containing a models subdirectory and a demo.hdf5 file. The models subdirectory contains one XML file per demonstration; each XML file is the MuJoCo simulation model that was used during that demonstration.

The demo.hdf5 file is structured as follows (a short reading example follows the listing):

data (group)
    date (attribute) - date of collection
    time (attribute) - time of collection
    repository_version (attribute) - repository version used during collection
    env (attribute) - environment name on which the demonstrations were collected
    demo_1 (group) - group for the first demonstration (every demonstration has its own group)
        model_file (attribute) - name of the corresponding model XML file in the models directory
        states (dataset) - flattened MuJoCo states, ordered by time
        joint_velocities (dataset) - joint velocities applied during the demonstration
        gripper_actuations (dataset) - gripper controls applied during the demonstration
        right_dpos (dataset) - end effector delta position commands for a single-arm robot or the right arm
        right_dquat (dataset) - end effector delta rotation commands for a single-arm robot or the right arm
        left_dpos (dataset) - end effector delta position commands for the left arm (bimanual robots only)
        left_dquat (dataset) - end effector delta rotation commands for the left arm (bimanual robots only)
    demo_2 (group) - group for the second demonstration
    ...
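
As a concrete illustration, here is a minimal sketch of reading the metadata and the first demonstration with h5py; the choice of the bins-Can subset and of demo_1 is an assumption made for the example.

import h5py

hdf5_path = "RoboTurkPilot/bins-Can/demo.hdf5"  # assumed path; any subset works

with h5py.File(hdf5_path, "r") as f:
    data = f["data"]

    # Collection metadata is stored as attributes on the data group.
    print("env:", data.attrs["env"])
    print("date:", data.attrs["date"])
    print("repository version:", data.attrs["repository_version"])

    # Each demonstration lives in its own group: demo_1, demo_2, ...
    demo = data["demo_1"]
    print("model file:", demo.attrs["model_file"])  # XML file in the models directory

    states = demo["states"][()]                     # flattened MuJoCo states, ordered by time
    joint_velocities = demo["joint_velocities"][()]
    gripper_actuations = demo["gripper_actuations"][()]
    print("trajectory length:", states.shape[0])

Since each row of states is a flattened MuJoCo state, a demonstration can be replayed by loading its model XML into a simulation and restoring the states one at a time (for example, with mujoco_py's sim.set_state_from_flattened).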

BibTeX

@inproceedings{mandlekar2018roboturk,
  title={{RoboTurk}: A Crowdsourcing Platform for Robotic Skill Learning through Imitation},
  author={Mandlekar, Ajay and Zhu, Yuke and Garg, Animesh and Booher, Jonathan and Spero, Max and Tung, Albert and Gao, Julian and Emmons, John and Gupta, Anchit and Orbay, Emre and others},
  booktitle={Conference on Robot Learning},
  pages={879--893},
  year={2018},
  organization={PMLR}
}