Set parameters

As myGym is a modular toolbox, you can easily change your training setup.

Train a different robot:

python train.py --robot panda

Change the workspace within the gym:

python train.py --workspace collabtable

Set a different task:

python train.py --task_type push

Choose the task objects:

python train.py --task_objects wrench
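
These options can be combined in a single command. For example, the following sketch combines the values shown above (using the reach task, which takes a single task object):

python train.py --robot panda --workspace collabtable --task_type reach --task_objects wrench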

Training

It is possible to pass the following parameters to the train.py and test.py scripts directly using command-line arguments; a combined example is shown after the argument list below. Alternatively, you can use a config file, see Edit config file.
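
For reference, a config file is a JSON file such as the default configs/trainTD3sde.json. A minimal sketch follows, assuming the keys mirror the long argument names listed below; my_config.json is a hypothetical file name:

{
    "robot": "panda",
    "workspace": "collabtable",
    "task_type": "reach",
    "task_objects": ["wrench"],
    "algo": "ppo2",
    "steps": 100000
}

python train.py --config my_config.json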

usage: train [-h] [-cfg CONFIG] [-n ENV_NAME] [-ws WORKSPACE] [-p ENGINE]
             [-d RENDER] [-c CAMERA] [-vi VISUALIZE] [-vg VISGYM] [-g GUI]
             [-b ROBOT] [-bi [ROBOT_INIT [ROBOT_INIT ...]]] [-ba ROBOT_ACTION]
             [-tt TASK_TYPE] [-ns NUM_SUBGOALS]
             [-to [TASK_OBJECTS [TASK_OBJECTS ...]]]
             [-u [USED_OBJECTS [USED_OBJECTS ...]]]
             [-oa [OBJECT_SAMPLING_AREA [OBJECT_SAMPLING_AREA ...]]]
             [-rt REWARD_TYPE] [-re REWARD] [-dt DISTANCE_TYPE]
             [-w TRAIN_FRAMEWORK] [-a ALGO] [-s STEPS] [-ms MAX_EPISODE_STEPS]
             [-ma ALGO_STEPS] [-ef EVAL_FREQ] [-e EVAL_EPISODES] [-l LOGDIR]
             [-r RECORD] [-i MULTIPROCESSING] [-v VECTORIZED_ENVS]
             [-m MODEL_PATH] [-vp VAE_PATH] [-yp YOLACT_PATH]
             [-yc YOLACT_CONFIG] [-ptm PRETRAINED_MODEL]

Named Arguments

-cfg, --config

Can be passed instead of all arguments

Default: “configs/trainTD3sde.json”

-n, --env_name

The name of the environment

-ws, --workspace

The name of the workspace

-p, --engine

Name of the simulation engine you want to use

-d, --render

Type of rendering: opengl, opencv

-c, --camera

The number of the camera used to render and record

-vi, --visualize

Whether to visualize the camera render and vision input/output: 1 or 0

-vg, --visgym

Whether to visualize the gym background: 1 or 0

-g, --gui

Whether the GUI of the simulation should be used: 1 or 0

-b, --robot

Robot to train: kuka, panda, jaco …

-bi, --robot_init

Initial position of the robot’s end-effector

-ba, --robot_action

Robot’s action control: step - end-effector relative position, absolute - end-effector absolute position, joints - joints’ coordinates

-tt, --task_type

Type of task to learn: reach, push, throw, pick_and_place

-ns, --num_subgoals

Number of subgoals in the task

-to, --task_objects

Object (for reach) or a pair of objects (for other tasks) to manipulate

-u, --used_objects

List of extra objects to randomly appear in the scene

-oa, --object_sampling_area

Area in the scene where objects can appear

-rt, --reward_type

Type of reward: gt (ground truth), 3dvs (3D vision supervised), 2dvu (2D vision unsupervised), 6dvs (6D vision supervised)

-re, --reward

Defines how to compute the reward

-dt, --distance_type

Type of distance metric: euclidean, manhattan

-w, --train_framework

Name of the training framework you want to use: {tensorflow, pytorch}

-a, --algo

The learning algorithm to be used (ppo2 or her)

-s, --steps

The number of steps to train

-ms, --max_episode_steps

The maximum number of steps per episode

-ma, --algo_steps

The number of steps per algorithm training iteration (PPO2, A2C)

-ef, --eval_freq

Evaluate the agent every eval_freq steps

-e, --eval_episodes

Number of episodes to evaluate performance of the robot

-l, --logdir

Where to save results of training and trained models

-r, --record

1: make a gif of model performance, 2: make a video of model performance, 0: don’t record

-i, --multiprocessing

True: multiprocessing on (specify also the number of vectorized environments), False: multiprocessing off

-v, --vectorized_envs

The number of vectorized environments to run at once (mujoco multiprocessing only)

-m, --model_path

Path to the trained model to test

-vp, --vae_path

Path to a trained VAE in 2dvu reward type

-yp, --yolact_path

Path to a trained Yolact in 3dvs reward type

-yc, --yolact_config

Path to a saved config object, or the name of an existing one in the data/Config script (e.g. ‘yolact_base_config’), or None for autodetection

-ptm, --pretrained_model

Path to a model that you want to continue training
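
Putting the arguments together, a fuller training run might look like the following sketch; the initial end-effector position, step counts, evaluation settings and the trained_models log directory are illustrative assumptions, not defaults:

python train.py --robot kuka --robot_init 0.5 0.5 0.5 --robot_action joints --task_type reach --task_objects wrench --algo ppo2 --steps 500000 --max_episode_steps 1024 --eval_freq 10000 --eval_episodes 50 --logdir trained_models

The same arguments apply to test.py; to evaluate a trained model, pass its location via --model_path (placeholder path shown):

python test.py --robot kuka --task_type reach --task_objects wrench --model_path <path_to_trained_model>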