Transformers documentation
Decision Transformer
This model was released on 2021-06-02 and added to Hugging Face Transformers on 2022-03-23.
Overview
The Decision Transformer model was proposed in Decision Transformer: Reinforcement Learning via Sequence Modeling
by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch.
The abstract from the paper is the following:
We introduce a framework that abstracts Reinforcement Learning (RL) as a sequence modeling problem. This allows us to draw upon the simplicity and scalability of the Transformer architecture, and associated advances in language modeling such as GPT-x and BERT. In particular, we present Decision Transformer, an architecture that casts the problem of RL as conditional sequence modeling. Unlike prior approaches to RL that fit value functions or compute policy gradients, Decision Transformer simply outputs the optimal actions by leveraging a causally masked Transformer. By conditioning an autoregressive model on the desired return (reward), past states, and actions, our Decision Transformer model can generate future actions that achieve the desired return. Despite its simplicity, Decision Transformer matches or exceeds the performance of state-of-the-art model-free offline RL baselines on Atari, OpenAI Gym, and Key-to-Door tasks.
This version of the model is for tasks where the state is a vector.
This model was contributed by edbeeching. The original code can be found here.
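The return-to-go conditioning described in the abstract is simply the suffix sum of the remaining rewards in an episode. A minimal sketch in plain Python (the function name is illustrative, not part of the library):

```python
def returns_to_go(rewards):
    # rtg[t] = sum of rewards from step t to the end of the episode.
    rtg = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running += rewards[t]
        rtg[t] = running
    return rtg

print(returns_to_go([1.0, 0.0, 2.0]))  # [3.0, 2.0, 2.0]
```

At inference time the model is instead conditioned on a *desired* return, which is decremented by each observed reward as the episode unfolds.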
DecisionTransformerConfig
class transformers.DecisionTransformerConfig
< source >( output_hidden_states: bool | None = False return_dict: bool | None = True dtype: typing.Union[str, ForwardRef('torch.dtype'), NoneType] = None chunk_size_feed_forward: int = 0 is_encoder_decoder: bool = False id2label: dict[int, str] | dict[str, str] | None = None label2id: dict[str, int] | dict[str, str] | None = None problem_type: typing.Optional[typing.Literal['regression', 'single_label_classification', 'multi_label_classification']] = None tokenizer_class: str | None = None state_dim: int = 17 act_dim: int = 4 hidden_size: int = 128 max_ep_len: int = 4096 action_tanh: bool = True vocab_size: int = 1 n_positions: int = 1024 n_layer: int = 3 n_head: int = 1 n_inner: int | None = None activation_function: str = 'relu' resid_pdrop: float = 0.1 embd_pdrop: float = 0.1 attn_pdrop: float = 0.1 layer_norm_epsilon: float = 1e-05 initializer_range: float = 0.02 scale_attn_weights: bool = True use_cache: bool = True bos_token_id: int | None = 50256 eos_token_id: int | list[int] | None = 50256 scale_attn_by_inverse_layer_idx: bool = False reorder_and_upcast_attn: bool = False add_cross_attention: bool = False )
Parameters
- output_hidden_states (bool, optional, defaults to False) — Whether or not the model should return all hidden-states.
- return_dict (bool, optional, defaults to True) — Whether to return a ModelOutput (dataclass) instead of a plain tuple.
- dtype (Union[str, torch.dtype], optional) — The dtype of the weights. This attribute can be used to initialize the model to a non-default dtype (which is normally float32) and thus allow for optimal storage allocation. For example, if the saved model is float16, ideally we want to load it back using the minimal amount of memory needed to load float16 weights.
- chunk_size_feed_forward (int, optional, defaults to 0) — The chunk size of all feed forward layers in the residual attention blocks. A chunk size of 0 means that the feed forward layer is not chunked. A chunk size of n means that the feed forward layer processes n < sequence_length embeddings at a time. For more information on feed forward chunking, see How does Feed Forward Chunking work?.
- is_encoder_decoder (bool, optional, defaults to False) — Whether the model is used as an encoder/decoder or not.
- id2label (Union[dict[int, str], dict[str, str]], optional) — A map from index (for instance prediction index, or target index) to label.
- label2id (Union[dict[str, int], dict[str, str]], optional) — A map from label to index for the model.
- problem_type (Literal["regression", "single_label_classification", "multi_label_classification"], optional) — Problem type for XxxForSequenceClassification models. Can be one of "regression", "single_label_classification" or "multi_label_classification".
- tokenizer_class (str, optional) — The class name of the model's tokenizer.
- state_dim (int, optional, defaults to 17) — The state size for the RL environment.
- act_dim (int, optional, defaults to 4) — The size of the output action space.
- hidden_size (int, optional, defaults to 128) — Dimension of the hidden representations.
- max_ep_len (int, optional, defaults to 4096) — The maximum length of an episode in the environment.
- action_tanh (bool, optional, defaults to True) — Whether to use a tanh activation on the action prediction.
- vocab_size (int, optional, defaults to 1) — Vocabulary size of the model. Defines the number of different tokens that can be represented by the input_ids.
- n_positions (int, optional, defaults to 1024) — The maximum sequence length that this model might ever be used with.
- n_layer (int, optional, defaults to 3) — Number of hidden layers in the Transformer decoder.
- n_head (int, optional, defaults to 1) — Number of attention heads for each attention layer in the Transformer decoder.
- n_inner (int, optional) — Dimension of the MLP representations.
- activation_function (str, optional, defaults to "relu") — The non-linear activation function (function or string) in the decoder. For example, "gelu", "relu", "silu", etc.
- resid_pdrop (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
- embd_pdrop (float, optional, defaults to 0.1) — The dropout ratio for the embeddings.
- attn_pdrop (float, optional, defaults to 0.1) — The dropout ratio for the attention probabilities.
- layer_norm_epsilon (float, optional, defaults to 1e-05) — The epsilon used by the layer normalization layers.
- initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- scale_attn_weights (bool, optional, defaults to True) — Scale attention weights by dividing by sqrt(hidden_size).
- use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if config.is_decoder=True or when the model is a decoder-only generative model.
- bos_token_id (int, optional, defaults to 50256) — Token id used for beginning-of-stream in the vocabulary.
- eos_token_id (Union[int, list[int]], optional, defaults to 50256) — Token id used for end-of-stream in the vocabulary.
- scale_attn_by_inverse_layer_idx (bool, optional, defaults to False) — Whether to additionally scale attention weights by 1 / (layer_idx + 1).
- reorder_and_upcast_attn (bool, optional, defaults to False) — Whether to scale keys (K) prior to computing attention (dot-product) and upcast attention dot-product/softmax to float() when training with mixed precision.
- add_cross_attention (bool, optional, defaults to False) — Whether cross-attention layers should be added to the model.
This is the configuration class to store the configuration of a DecisionTransformerModel. It is used to instantiate a Decision Transformer model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the standard DecisionTransformer architecture. Many of the config options are used to instantiate the GPT2 model that is used as part of the architecture.
Configuration objects inherit from PreTrainedConfig and can be used to control the model outputs. Read the documentation from PreTrainedConfig for more information.
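The state_dim and act_dim defaults (17 and 4) will not match every environment; Hopper, for instance, has an 11-dimensional observation space and a 3-dimensional action space. A hedged sketch of constructing a configuration for such an environment (the specific values here are illustrative only):

```python
from transformers import DecisionTransformerConfig, DecisionTransformerModel

# Illustrative dimensions for a Hopper-like environment:
# 11-dim observations, 3-dim actions, episodes capped at 1000 steps.
config = DecisionTransformerConfig(
    state_dim=11,
    act_dim=3,
    max_ep_len=1000,
)

# Randomly initialized model with the custom configuration.
model = DecisionTransformerModel(config)
```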
Example:
>>> from transformers import DecisionTransformerConfig, DecisionTransformerModel
>>> # Initializing a DecisionTransformer configuration
>>> configuration = DecisionTransformerConfig()
>>> # Initializing a model (with random weights) from the configuration
>>> model = DecisionTransformerModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
DecisionTransformerGPT2Model
forward
< source >( input_ids: torch.LongTensor | None = None past_key_values: transformers.cache_utils.Cache | None = None attention_mask: torch.FloatTensor | None = None token_type_ids: torch.LongTensor | None = None position_ids: torch.LongTensor | None = None inputs_embeds: torch.FloatTensor | None = None encoder_hidden_states: torch.Tensor | None = None encoder_attention_mask: torch.FloatTensor | None = None use_cache: bool | None = None **kwargs: typing_extensions.Unpack[transformers.utils.generic.TransformersKwargs] )
DecisionTransformerModel
class transformers.DecisionTransformerModel
< source >( config )
Parameters
- config (DecisionTransformerConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The Decision Transformer Model
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >( states: torch.FloatTensor | None = None actions: torch.FloatTensor | None = None rewards: torch.FloatTensor | None = None returns_to_go: torch.FloatTensor | None = None timesteps: torch.LongTensor | None = None attention_mask: torch.FloatTensor | None = None output_hidden_states: bool | None = None output_attentions: bool | None = None return_dict: bool | None = None **kwargs ) → DecisionTransformerOutput or tuple(torch.FloatTensor)
Parameters
- states (torch.FloatTensor of shape (batch_size, episode_length, state_dim)) — The states for each step in the trajectory.
- actions (torch.FloatTensor of shape (batch_size, episode_length, act_dim)) — The actions taken by the "expert" policy for the current state; these are masked for autoregressive prediction.
- rewards (torch.FloatTensor of shape (batch_size, episode_length, 1)) — The reward for each state-action pair.
- returns_to_go (torch.FloatTensor of shape (batch_size, episode_length, 1)) — The returns for each state in the trajectory.
- timesteps (torch.LongTensor of shape (batch_size, episode_length)) — The timestep for each step in the trajectory.
- attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
  - 1 for tokens that are not masked,
  - 0 for tokens that are masked.
- output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
- output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
- return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
Returns
DecisionTransformerOutput or tuple(torch.FloatTensor)
A DecisionTransformerOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DecisionTransformerConfig) and inputs.

- state_preds (torch.FloatTensor of shape (batch_size, sequence_length, state_dim)) — Environment state predictions.
- action_preds (torch.FloatTensor of shape (batch_size, sequence_length, act_dim)) — Model action predictions.
- return_preds (torch.FloatTensor of shape (batch_size, sequence_length, 1)) — Predicted returns for each state.
- hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
- last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the model.

The DecisionTransformerModel forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Examples:
>>> import gym
>>> import torch

>>> from transformers import DecisionTransformerModel

>>> device = "cpu"  # or "cuda" if available
>>> TARGET_RETURN = 3.6  # illustrative target return to condition on

>>> model = DecisionTransformerModel.from_pretrained("edbeeching/decision-transformer-gym-hopper-medium")
>>> # evaluation
>>> model = model.to(device)
>>> model.eval()

>>> env = gym.make("Hopper-v3")
>>> state_dim = env.observation_space.shape[0]
>>> act_dim = env.action_space.shape[0]

>>> state = env.reset()
>>> states = torch.from_numpy(state).reshape(1, 1, state_dim).to(device=device, dtype=torch.float32)
>>> actions = torch.zeros((1, 1, act_dim), device=device, dtype=torch.float32)
>>> rewards = torch.zeros(1, 1, device=device, dtype=torch.float32)
>>> target_return = torch.tensor(TARGET_RETURN, device=device, dtype=torch.float32).reshape(1, 1)
>>> timesteps = torch.tensor(0, device=device, dtype=torch.long).reshape(1, 1)
>>> attention_mask = torch.zeros(1, 1, device=device, dtype=torch.float32)
>>> # forward pass
>>> with torch.no_grad():
... state_preds, action_preds, return_preds = model(
... states=states,
... actions=actions,
... rewards=rewards,
... returns_to_go=target_return,
... timesteps=timesteps,
... attention_mask=attention_mask,
... return_dict=False,
... )
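The single forward pass above can be extended into an autoregressive rollout by taking the action prediction for the latest timestep, stepping the environment, appending the new step to the running trajectory tensors, and decrementing the return-to-go by the observed reward. A minimal sketch of that tensor bookkeeping, using placeholder values (in a real rollout, action_preds comes from the model and next_state/reward from env.step):

```python
import torch

# Illustrative dimensions (Hopper-like); in practice taken from the env.
state_dim, act_dim = 11, 3

# Running trajectory tensors after the first step, shaped as in the example above.
states = torch.zeros(1, 1, state_dim)
actions = torch.zeros(1, 1, act_dim)
returns_to_go = torch.tensor([[[3.6]]])
timesteps = torch.zeros(1, 1, dtype=torch.long)

# Placeholder standing in for the model output; in practice this is the
# action_preds tensor returned by DecisionTransformerModel.forward.
action_preds = torch.zeros(1, 1, act_dim)
action = action_preds[0, -1]  # prediction for the latest timestep

# Placeholders standing in for env.step(action.numpy()).
next_state = torch.zeros(state_dim)
reward = 1.0

# Append the new step and decrement the return-to-go by the observed reward.
states = torch.cat([states, next_state.reshape(1, 1, state_dim)], dim=1)
actions = torch.cat([actions, torch.zeros(1, 1, act_dim)], dim=1)
returns_to_go = torch.cat(
    [returns_to_go, (returns_to_go[0, -1] - reward).reshape(1, 1, 1)], dim=1
)
timesteps = torch.cat([timesteps, torch.full((1, 1), 1, dtype=torch.long)], dim=1)
```

The extended tensors are then passed back into the model for the next forward pass, with the new action slot filled in once predicted.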