Allen's REINFORCE notes

=== Links ===
* [http://www.incompleteideas.net/book/RLbook2020.pdf RLbook2020]
* [https://samuelebolotta.medium.com/2-deep-reinforcement-learning-policy-gradients-5a416a99700a Deep RL: Policy Gradients]
[[Category:Reinforcement Learning]]
=== Motivation ===
Recall that the objective of Reinforcement Learning is to find an optimal policy <math> \pi^* </math>, which we encode in a neural network with parameters <math>\theta^*</math>. <math> \pi_\theta </math> is a mapping from observations to actions. These optimal parameters are defined as <math>\theta^* = \text{argmax}_\theta E_{\tau \sim p_\theta(\tau)} \left[ \sum_t r(s_t, a_t) \right] </math>. Let's unpack what this means. To phrase it in English, the optimal policy is one such that the expected value of the total reward over a trajectory (<math> \tau </math>) determined by the policy is the highest over all policies.
The motivation behind reinforcement learning is to repeatedly take observations, then sample the effects of actions on those observations (reward and new observation/state). Ultimately, we hope to create a policy <math>\pi</math> that maps states or observations to optimal actions.

=== Overview ===
<syntaxhighlight lang="text">
Initialize neural network with input dimensions = observation dimensions and output dimensions = action dimensions
For each episode:
    While not terminated:
        Get observation from environment
        Use policy network to map observation to action distribution
        Randomly sample one action from action distribution
        Compute logarithmic probability of that action occurring
        Step environment using action and store reward
    Calculate loss over entire trajectory as a function of the stored probabilities and rewards
    Recall that the loss is differentiable with respect to each parameter - thus, compute how changes in the parameters affect the loss
    Based on the loss, use gradient descent to update the policy weights
</syntaxhighlight>
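As a concrete illustration of this loop, here is a minimal sketch in Python. It assumes PyTorch for the policy network and optimizer and a Gymnasium environment (<code>CartPole-v1</code> is used purely as an example); the network size, learning rate, and episode count are arbitrary choices rather than values from these notes. The loss uses the <math> \gamma^{T - t} r_t </math> weighting derived below.
<syntaxhighlight lang="python">
# Minimal REINFORCE sketch (assumes PyTorch and Gymnasium; hyperparameters are illustrative).
import torch
import torch.nn as nn
import gymnasium as gym

env = gym.make("CartPole-v1")
obs_dim = env.observation_space.shape[0]           # input dims = observation dims
act_dim = env.action_space.n                       # output dims = action dims

# Policy network: maps observations to action logits (an action distribution).
policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)
gamma = 0.99                                       # discount factor

for _ in range(500):
    log_probs, rewards = [], []
    obs, _ = env.reset()
    terminated = truncated = False
    while not (terminated or truncated):
        # Map observation to an action distribution and sample one action.
        dist = torch.distributions.Categorical(
            logits=policy(torch.as_tensor(obs, dtype=torch.float32)))
        action = dist.sample()
        log_probs.append(dist.log_prob(action))    # log probability of the sampled action
        obs, reward, terminated, truncated, _ = env.step(action.item())
        rewards.append(float(reward))

    # Loss over the whole trajectory, following the notes' gamma^(T - t) * r_t weighting.
    # The negative sign turns reward maximization into a minimization for the optimizer.
    T = len(rewards) - 1
    weights = [gamma ** (T - t) * r for t, r in enumerate(rewards)]
    loss = -torch.stack([lp * w for lp, w in zip(log_probs, weights)]).sum()

    optimizer.zero_grad()
    loss.backward()                                # gradients of loss w.r.t. policy parameters
    optimizer.step()                               # gradient descent update
</syntaxhighlight>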
=== Objective Function ===
The goal of reinforcement learning is to maximize the expected reward over the entire episode. We use <math>R(\tau)</math> to denote the total reward over some trajectory <math>\tau</math> defined by our policy. Thus we want to maximize <math>E_{\tau \sim \pi_\theta}[R(\tau)]</math>. We can use the definition of expected value to expand this as <math>\sum_\tau P(\tau | \theta) R(\tau)</math>, where the probability of a given trajectory occurring can further be expressed as <math> P(\tau | \theta) = P(s_0) \prod^T_{t=0} \pi_\theta(a_t | s_t) P(s_{t + 1} | s_t, a_t) </math>.
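To make this factorization concrete, here is a small numerical sketch (the two-state MDP, its transition table, and the sample trajectory are invented purely for illustration):
<syntaxhighlight lang="python">
# Toy illustration of P(tau | theta) = P(s0) * prod_t pi(a_t | s_t) * P(s_{t+1} | s_t, a_t).
# The two-state, two-action MDP below is made up purely for this example.
import numpy as np

P_s0 = np.array([0.9, 0.1])                    # initial state distribution
pi = np.array([[0.7, 0.3],                     # pi[s, a]: policy's action probabilities
               [0.2, 0.8]])
P_next = np.array([[[0.8, 0.2], [0.1, 0.9]],   # P_next[s, a, s']: environment dynamics
                   [[0.5, 0.5], [0.3, 0.7]]])

# A short trajectory tau = (s0, a0, s1, a1, s2).
states, actions = [0, 1, 1], [0, 1]

prob = P_s0[states[0]]
for t, a in enumerate(actions):
    s, s_next = states[t], states[t + 1]
    prob *= pi[s, a] * P_next[s, a, s_next]    # policy term * dynamics term
print(prob)                                    # probability of this trajectory under pi
</syntaxhighlight>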
Now we want to find the gradient of our objective <math> J(\theta) = E_{\tau \sim \pi_\theta}[R(\tau)] </math>, namely <math>\nabla_\theta \sum_\tau P(\tau | \theta) R(\tau) </math>. Since the reward function doesn't depend on the parameters, we can rearrange this as <math> \sum_\tau R(\tau) \nabla_\theta P(\tau | \theta) </math>. The next step is what's called the Log Derivative Trick.
Suppose we'd like to find <math>\nabla_{x_1}\log(f(x_1, x_2, x_3, ...))</math>. By the chain rule, this is equal to <math>\frac{\nabla_{x_1}f(x_1, x_2, x_3, ...)}{f(x_1, x_2, x_3, ...)}</math>. Thus, by rearranging, we can write the gradient of any function with respect to some variable as <math>\nabla_{x_1}f(x_1, x_2, x_3, ...) = f(x_1, x_2, x_3, ...)\nabla_{x_1}\log(f(x_1, x_2, x_3, ...))</math>.
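As a quick sanity check (not part of the derivation itself), the identity can be verified numerically with PyTorch's autograd on an arbitrary positive toy function:
<syntaxhighlight lang="python">
# Numerically verify grad_x f(x, y) == f(x, y) * grad_x log f(x, y) for a positive toy function.
import torch

x = torch.tensor(1.3, requires_grad=True)
y = torch.tensor(0.7)
f = lambda x, y: x ** 2 * torch.exp(y) + 1.0                    # arbitrary positive function

grad_f = torch.autograd.grad(f(x, y), x)[0]                     # direct gradient
grad_log_f = torch.autograd.grad(torch.log(f(x, y)), x)[0]      # gradient of log f
print(grad_f, f(x, y).detach() * grad_log_f)                    # the two values should match
</syntaxhighlight>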
Thus, using this idea, we can rewrite our gradient as <math> \sum_\tau R(\tau) P(\tau | \theta) \nabla_\theta \log P(\tau | \theta) </math>.
=== Loss Computation ===
It is tricky for us to give our policy the notion of "total" reward and "total" probability. Thus, we want to change these values parameterized by <math> \tau </math> to instead be parameterized by <math> t </math>. That is, instead of examining the behavior of the entire episode, we want to create a summation over timesteps. We know that <math> R(\tau) </math> is the total reward over all timesteps, so we can rewrite the <math> R(\tau) </math> component at some timestep <math> t </math> as <math> \gamma^{T - t}r_t </math>, where <math> \gamma </math> is our discount factor. Further, recall that the probability of the trajectory occurring given the policy is <math> P(\tau | \theta) = P(s_0) \prod^T_{t=0} \pi_\theta(a_t | s_t) P(s_{t + 1} | s_t, a_t) </math>. Since <math> P(s_0) </math> and <math> P(s_{t+1} | s_t, a_t) </math> are determined by the environment and are independent of the policy, their gradient with respect to <math> \theta </math> is zero. Recognizing this, and further recognizing that the logarithm of a product of probabilities is the sum of the logarithms of each probability, we get our final gradient expression <math> \sum_\tau P(\tau | \theta) R(\tau) \sum_{t = 0}^T \nabla_\theta \log \pi_\theta (a_t | s_t) </math>.
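In code, this per-timestep weighting corresponds to something like the following sketch, assuming the log-probabilities and rewards were recorded during the rollout as in the overview pseudocode (the numbers here are placeholders):
<syntaxhighlight lang="python">
# Sketch of the loss sum_t log pi(a_t | s_t) * gamma^(T - t) * r_t for one trajectory.
# log_probs and rewards are assumed to have been recorded during the rollout.
import torch

log_probs = [torch.tensor(-0.4), torch.tensor(-0.9), torch.tensor(-0.2)]  # placeholder values
rewards = [1.0, 1.0, 1.0]                                                 # placeholder values
gamma = 0.99

T = len(rewards) - 1
loss = -sum(lp * (gamma ** (T - t)) * r                     # negative, since optimizers minimize
            for t, (lp, r) in enumerate(zip(log_probs, rewards)))
print(loss)
</syntaxhighlight>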
Rewriting this as an expectation, we have <math> \nabla_\theta J(\theta) = E_{\tau \sim \pi_\theta} \left[ R(\tau) \sum_{t = 0}^T \nabla_\theta \log \pi_\theta (a_t | s_t) \right] </math>. Using the formula for the discounted reward at each timestep, we get our final formula <math> \nabla_\theta J(\theta) = E_{\tau \sim \pi_\theta} \left[ \sum_{t = 0}^T \nabla_\theta \log \pi_\theta (a_t | s_t) \gamma^{T - t} r_t \right] </math>. This is why our loss is equal to <math> \sum_{t = 0}^T \log \pi_\theta (a_t | s_t) \gamma^{T - t} r_t </math>: taking its derivative with the chain rule gives exactly the gradient needed for the backwards pass (see Dennis' Optimization Notes).