OffWorldMonolithContinousReal-v0

class offworld_gym.envs.real.offworld_monolith_env.OffWorldMonolithContinousEnv(experiment_name, resume_experiment, learning_type, algorithm_mode=<AlgorithmMode.TRAIN: 'train'>, channel_type=<Channels.DEPTH_ONLY: 1>)[source]

Bases: offworld_gym.envs.real.offworld_monolith_env.OffWorldMonolithEnv

Real Gym environment with a ROSbot and a monolith on uneven terrain, with a continuous action space.

An RL agent learns to reach the goal (the monolith) in the shortest time.

env = gym.make('OffWorldMonolithContinousReal-v0', experiment_name='first_experiment', resume_experiment=False, learning_type=LearningType.END_TO_END, algorithm_mode=AlgorithmMode.TRAIN, channel_type=Channels.DEPTH_ONLY)
env = gym.make('OffWorldMonolithContinousReal-v0', experiment_name='first_experiment', resume_experiment=False, learning_type=LearningType.SIM_2_REAL, algorithm_mode=AlgorithmMode.TRAIN, channel_type=Channels.RGB_ONLY)
env = gym.make('OffWorldMonolithContinousReal-v0', experiment_name='first_experiment', resume_experiment=False, learning_type=LearningType.HUMAN_DEMOS, algorithm_mode=AlgorithmMode.TRAIN, channel_type=Channels.RGBD)
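The three calls above differ only in learning_type and channel_type. As a minimal sketch of the argument set, the helper below assembles the keyword arguments gym.make expects; the enum classes are local stand-ins so this runs without the library installed (AlgorithmMode.TRAIN == 'train' and Channels.DEPTH_ONLY == 1 come from the signature above, the remaining enum values are placeholders), and make_env_kwargs is an illustrative name, not part of offworld_gym:

```python
from enum import Enum

# Local stand-ins for the offworld_gym enums referenced above.
class LearningType(Enum):
    END_TO_END = "end_to_end"      # placeholder values
    SIM_2_REAL = "sim_2_real"
    HUMAN_DEMOS = "human_demos"

class AlgorithmMode(Enum):
    TRAIN = "train"                # value from the signature above

class Channels(Enum):
    DEPTH_ONLY = 1                 # value from the signature above
    RGB_ONLY = 2                   # placeholder values
    RGBD = 3

def make_env_kwargs(experiment_name, learning_type,
                    resume_experiment=False,
                    algorithm_mode=AlgorithmMode.TRAIN,
                    channel_type=Channels.DEPTH_ONLY):
    """Assemble the keyword arguments passed to gym.make for this env,
    with the same defaults as the constructor signature above."""
    return dict(experiment_name=experiment_name,
                resume_experiment=resume_experiment,
                learning_type=learning_type,
                algorithm_mode=algorithm_mode,
                channel_type=channel_type)
```
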
observation_space

Gym data structure that encapsulates an observation.

action_space

Gym Box space representing the environment's continuous actions.

step_count

An integer count of steps taken during the current episode.

__init__(experiment_name, resume_experiment, learning_type, algorithm_mode=<AlgorithmMode.TRAIN: 'train'>, channel_type=<Channels.DEPTH_ONLY: 1>)[source]

Initialize self. See help(type(self)) for accurate signature.

__module__ = 'offworld_gym.envs.real.offworld_monolith_env'
reset()[source]

Resets the state of the environment and returns an observation.

Returns

A NumPy array with the RGB, depth, or RGB-D encoding of the state observation, depending on channel_type.

step(action)[source]

Take a continuous action in the environment.

Parameters

action – A continuous action to execute in the environment.

Returns

A NumPy array with the RGB, depth, or RGB-D encoding of the state observation, an integer reward from the environment, and a boolean flag that is true when the episode is complete. No info is returned, to keep learning fair.
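The reset()/step() contract above can be exercised with a standard episode loop. A minimal sketch, assuming step() returns (observation, reward, done) as described, with no info dict; run_episode and DummyEnv are illustrative stand-ins, not part of the library:

```python
def run_episode(env, policy, max_steps=100):
    """Drive one episode through the reset()/step() interface described
    above and return the total reward collected."""
    observation = env.reset()
    total_reward = 0
    for _ in range(max_steps):
        action = policy(observation)
        # Per the docs above, step() yields observation, reward, and a
        # done flag; no info dict is provided ("for fair learning").
        observation, reward, done = env.step(action)
        total_reward += reward
        if done:
            break
    return total_reward

# Tiny stand-in environment so the loop can be exercised off-hardware.
class DummyEnv:
    def __init__(self, episode_length=3):
        self._length = episode_length
        self._steps = 0
    def reset(self):
        self._steps = 0
        return [0.0]                       # placeholder observation
    def step(self, action):
        self._steps += 1
        return [0.0], 1, self._steps >= self._length
```

With the real environment, policy would map an observation to a point in action_space (e.g. env.action_space.sample() for random exploration).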
