Module pearl.user_envs.envs.bandit

from typing import Any, Dict, Optional, Tuple, Union

import numpy as np

try:
    import gymnasium as gym
except ModuleNotFoundError:
    print("gymnasium module is not found")


class MeanVarBanditEnv(gym.Env[np.ndarray, Union[int, np.ndarray]]):
    """environment to test if safe RL algorithms
    prefer a policy that achieves lower variance return"""

    def __init__(
        self,
        seed: Optional[int] = None,
    ) -> None:
        super().__init__()
        self._size = 2
        self._rng = np.random.RandomState(seed)
        high = np.array([1.0] * self._size, dtype=np.float32)
        self.action_space = gym.spaces.Discrete(2)
        self.observation_space = gym.spaces.Box(-high, high, dtype=np.float32)
        self.idx: Optional[int] = None

    def get_observation(self) -> np.ndarray:
        obs = np.zeros(self._size, dtype=np.float32)
        obs[self.idx] = 1.0
        return obs

    def reset(
        self, *, seed: Optional[int] = None, options: Optional[Dict[str, float]] = None
    ) -> Tuple[np.ndarray, Dict[str, float]]:
        super().reset(seed=seed)
        self.idx = 0
        return self.get_observation(), {}

    def step(
        self, action: Union[int, np.ndarray]
    ) -> Tuple[np.ndarray, float, bool, bool, Dict[str, Any]]:
        reward = 0.0
        if action == 0:
            reward = self.np_random.normal(loc=6.0, scale=1)
        else:
            reward = self.np_random.normal(loc=10.0, scale=3)
        done = True
        observation = self.get_observation()
        return observation, reward, done, False, {"risky_sa": int(action == 1)}
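
A minimal usage sketch (not part of the module source; it assumes pearl and gymnasium are installed and the class is importable from this module as shown): each episode is a single pull of one of the two arms, and the returned info dict flags whether the risky arm was chosen.

from pearl.user_envs.envs.bandit import MeanVarBanditEnv  # assumed import path

env = MeanVarBanditEnv()
observation, info = env.reset(seed=0)  # one-hot observation: [1., 0.]

# Arm 0 draws from Normal(6, 1) (safe), arm 1 from Normal(10, 3) (risky);
# every episode terminates after a single step.
observation, reward, terminated, truncated, info = env.step(1)
print(reward, terminated, info)  # e.g. 10.4 True {'risky_sa': 1}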

Classes

class MeanVarBanditEnv (seed: Optional[int] = None)

Environment to test whether safe RL algorithms prefer a policy that achieves a lower-variance return.

class MeanVarBanditEnv(gym.Env[np.ndarray, Union[int, np.ndarray]]):
    """environment to test if safe RL algorithms
    prefer a policy that achieves lower variance return"""

    def __init__(
        self,
        seed: Optional[int] = None,
    ) -> None:
        super().__init__()
        self._size = 2
        self._rng = np.random.RandomState(seed)
        high = np.array([1.0] * self._size, dtype=np.float32)
        self.action_space = gym.spaces.Discrete(2)
        self.observation_space = gym.spaces.Box(-high, high, dtype=np.float32)
        self.idx: Optional[int] = None

    def get_observation(self) -> np.ndarray:
        obs = np.zeros(self._size, dtype=np.float32)
        obs[self.idx] = 1.0
        return obs

    def reset(
        self, *, seed: Optional[int] = None, options: Optional[Dict[str, float]] = None
    ) -> Tuple[np.ndarray, Dict[str, float]]:
        super().reset(seed=seed)
        self.idx = 0
        return self.get_observation(), {}

    def step(
        self, action: Union[int, np.ndarray]
    ) -> Tuple[np.ndarray, float, bool, bool, Dict[str, Any]]:
        reward = 0.0
        if action == 0:
            reward = self.np_random.normal(loc=6.0, scale=1)
        else:
            reward = self.np_random.normal(loc=10.0, scale=3)
        done = True
        observation = self.get_observation()
        return observation, reward, done, False, {"risky_sa": int(action == 1)}

Ancestors

  • gymnasium.core.Env
  • typing.Generic

Methods

def get_observation(self) -> numpy.ndarray
def get_observation(self) -> np.ndarray:
    obs = np.zeros(self._size, dtype=np.float32)
    obs[self.idx] = 1.0
    return obs
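
For reference, a short sketch (assumed import path, not from the source) of the one-hot observation this method returns after a reset:

from pearl.user_envs.envs.bandit import MeanVarBanditEnv  # assumed import path

env = MeanVarBanditEnv()
obs, _ = env.reset(seed=0)  # reset() sets self.idx = 0
print(obs)  # one-hot float32 vector: [1. 0.]
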
def reset(self, *, seed: Optional[int] = None, options: Optional[Dict[str, float]] = None) -> Tuple[numpy.ndarray, Dict[str, float]]

Resets the environment to an initial internal state, returning an initial observation and info.

This method generates a new starting state, often with some randomness, to ensure that the agent explores the state space and learns a generalised policy about the environment. This randomness can be controlled with the seed parameter; if the environment already has a random number generator and reset() is called with seed=None, the RNG is not reset.

Therefore, reset() should (in the typical use case) be called with a seed right after initialization and then never again.

For custom environments, the first line of reset() should be super().reset(seed=seed), which implements the seeding correctly.

Changed in version: v0.25

The return_info parameter was removed and now info is expected to be returned.

Args

seed : optional int
The seed that is used to initialize the environment's PRNG (np_random). If the environment does not already have a PRNG and seed=None (the default option) is passed, a seed will be chosen from some source of entropy (e.g. timestamp or /dev/urandom). However, if the environment already has a PRNG and seed=None is passed, the PRNG will not be reset. If you pass an integer, the PRNG will be reset even if it already exists. Usually, you want to pass an integer right after the environment has been initialized and then never again.
options : optional dict
Additional information to specify how the environment is reset (optional, depending on the specific environment)

Returns

observation (ObsType): Observation of the initial state. This will be an element of observation_space (typically a numpy array) and is analogous to the observation returned by step().

info (dictionary): This dictionary contains auxiliary information complementing observation. It should be analogous to the info returned by step().

def reset(
    self, *, seed: Optional[int] = None, options: Optional[Dict[str, float]] = None
) -> Tuple[np.ndarray, Dict[str, float]]:
    super().reset(seed=seed)
    self.idx = 0
    return self.get_observation(), {}
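
As a hedged illustration of the seeding paradigm described above (a sketch assuming the import path below, not part of the module source): two instances reset with the same seed produce identical reward streams, and later calls to reset() without a seed leave the RNG untouched.

from pearl.user_envs.envs.bandit import MeanVarBanditEnv  # assumed import path

env_a = MeanVarBanditEnv()
env_b = MeanVarBanditEnv()

env_a.reset(seed=123)  # super().reset(seed=seed) seeds self.np_random
env_b.reset(seed=123)

# With identical seeds, the stochastic rewards match exactly.
_, reward_a, _, _, _ = env_a.step(1)
_, reward_b, _, _, _ = env_b.step(1)
assert reward_a == reward_b

# Resetting again without a seed keeps the existing RNG stream,
# so subsequent draws stay reproducible relative to the initial seed.
env_a.reset()
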
def step(self, action: Union[int, numpy.ndarray]) -> Tuple[numpy.ndarray, float, bool, bool, Dict[str, Any]]

Run one timestep of the environment's dynamics using the agent actions.

When the end of an episode is reached (terminated or truncated), it is necessary to call reset() to reset this environment's state for the next episode.

Changed in version: 0.26

The Step API was changed, removing done in favor of terminated and truncated, to make it clearer to users when the environment had terminated or truncated, which is critical for reinforcement learning bootstrapping algorithms.

Args

action : ActType
an action provided by the agent to update the environment state.

Returns

observation (ObsType): An element of the environment's observation_space as the next observation due to the agent actions. An example is a numpy array containing the positions and velocities of the pole in CartPole.

reward (SupportsFloat): The reward as a result of taking the action.

terminated (bool): Whether the agent reaches the terminal state (as defined under the MDP of the task), which can be positive or negative. An example is reaching the goal state or moving into the lava in the Sutton and Barto Gridworld. If true, the user needs to call reset().

truncated (bool): Whether the truncation condition outside the scope of the MDP is satisfied. Typically this is a time limit, but it could also be used to indicate an agent physically going out of bounds. Can be used to end the episode prematurely before a terminal state is reached. If true, the user needs to call reset().

info (dict): Contains auxiliary diagnostic information (helpful for debugging, learning, and logging). This might, for instance, contain metrics that describe the agent's performance state, variables that are hidden from observations, or individual reward terms that are combined to produce the total reward. In OpenAI Gym < v26, it contains "TimeLimit.truncated" to distinguish truncation and termination; however, this is deprecated in favour of returning terminated and truncated variables.

done (bool): (Deprecated) A boolean value for whether the episode has ended, in which case further step() calls will return undefined results. This was removed in OpenAI Gym v26 in favor of the terminated and truncated attributes. A done signal may be emitted for different reasons: maybe the task underlying the environment was solved successfully, a certain time limit was exceeded, or the physics simulation has entered an invalid state.
def step(
    self, action: Union[int, np.ndarray]
) -> Tuple[np.ndarray, float, bool, bool, Dict[str, Any]]:
    reward = 0.0
    if action == 0:
        reward = self.np_random.normal(loc=6.0, scale=1)
    else:
        reward = self.np_random.normal(loc=10.0, scale=3)
    done = True
    observation = self.get_observation()
    return observation, reward, done, False, {"risky_sa": int(action == 1)}
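
To make the mean-variance trade-off concrete, here is a hedged sketch (assumed import path, not part of the source) that runs many single-step episodes per arm and compares the empirical mean and standard deviation of the rewards; the safe arm (action 0) should show a lower mean and a lower variance than the risky arm (action 1).

import numpy as np

from pearl.user_envs.envs.bandit import MeanVarBanditEnv  # assumed import path

env = MeanVarBanditEnv()
env.reset(seed=0)  # seed once; later resets reuse the same RNG stream

rewards = {0: [], 1: []}
for action in (0, 1):
    for _ in range(10_000):
        env.reset()
        _, reward, terminated, truncated, info = env.step(action)
        assert terminated and not truncated
        rewards[action].append(reward)

for action in (0, 1):
    r = np.asarray(rewards[action])
    print(f"arm {action}: mean={r.mean():.2f}, std={r.std():.2f}")
# Expected roughly: arm 0 -> mean 6, std 1 (safe); arm 1 -> mean 10, std 3 (risky).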