Gradient Descent/Ascent is a great asset. If you have an approximation model consisting of some parameters $\theta$ and a cost function $J(\theta)$, then you can update the parameters with the gradient of $J(\theta)$ with respect to $\theta$. In value approximation methods we try to approximate $V(s)$ or $Q(s,a)$ and we use a policy like $\epsilon$-greedy for control. The problem with $\epsilon$-greedy is that we have to take a 'max' over all possible actions, which is really time-consuming for environments with a continuous action space. Also, sometimes we need a stochastic policy, like a Gaussian policy, which mostly chooses actions around its mean but still ensures randomness in the selection of an action. In such cases it's better to directly learn a policy that maximizes reward in a particular environment, and to do so we need gradients with respect to our policy parameters. In this post we will look into different aspects of policy gradients and derive the necessary proofs.
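To make the Gaussian policy idea concrete, here is a minimal sketch of a policy that samples actions around a state-dependent mean. The network shape, the `obs_dim`/`act_dim` names and the use of PyTorch are my own assumptions for illustration, not something prescribed by the derivations in this post.

```python
import torch
import torch.nn as nn

class GaussianPolicy(nn.Module):
    """A stochastic policy: actions are sampled around a state-dependent mean."""
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.mean_net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim)
        )
        # Log standard deviation is a learned parameter, independent of the state.
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def forward(self, obs):
        mean = self.mean_net(obs)
        std = self.log_std.exp()
        dist = torch.distributions.Normal(mean, std)
        action = dist.sample()                     # mostly near the mean, but still random
        log_prob = dist.log_prob(action).sum(-1)   # needed later for policy gradients
        return action, log_prob
```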
Let's choose a policy $\pi_\theta(a|s)$ with parameters $\theta$. Gradients depend on the choice of objective function, and formulating an objective function depends on the type of environment we are in. For an episodic environment:

$$J(\theta) = V^{\pi_\theta}(s_0) = \mathbb{E}_{\pi_\theta}\!\left[\sum_{t} r_t \,\Big|\, s_0\right]$$
which basically translates to the expected reward from our starting state.
For a continuing (never-ending) environment we use a stationary distribution, $d^{\pi_\theta}(s)$, which gives the distribution of states after the process has run for a sufficiently long time, such that running the process any further doesn't change the distribution. In such environments we look at averaging the immediate reward over the entire distribution of states that we visit.
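To make the stationary distribution concrete, here is a tiny sketch that finds $d^{\pi}$ for a made-up two-state Markov chain by repeatedly applying the state-transition matrix induced by a fixed policy. The transition numbers are purely illustrative assumptions.

```python
import numpy as np

# Toy 2-state chain: P_pi[s, s'] = sum_a pi(a|s) * p(s'|s, a), written out directly here.
P_pi = np.array([[0.9, 0.1],
                 [0.2, 0.8]])

d = np.array([1.0, 0.0])      # start with all probability mass in state 0
for _ in range(1000):         # run the chain "for a sufficiently long time"
    d = d @ P_pi              # one more step no longer changes d

print(d)                      # stationary distribution d^pi, here ~[0.667, 0.333]
```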
Let’s find policy gradients for both environments:
Episodic Environments:
Let's denote the probability of a trajectory $\tau$ as $P(\tau;\theta)$ and the total reward for the episode as $R(\tau)$. So, put simply, the expected total reward is the probability of each trajectory times its total reward:

$$J(\theta) = \sum_{\tau} P(\tau;\theta)\, R(\tau)$$

Taking gradients with respect to $\theta$:

$$\nabla_\theta J(\theta) = \sum_{\tau} \nabla_\theta P(\tau;\theta)\, R(\tau)$$
$\nabla_\theta P(\tau;\theta)$ can be rewritten as $P(\tau;\theta)\,\nabla_\theta \log P(\tau;\theta)$, since $\nabla_\theta \log P(\tau;\theta) = \frac{\nabla_\theta P(\tau;\theta)}{P(\tau;\theta)}$. This gives

$$\nabla_\theta J(\theta) = \sum_{\tau} P(\tau;\theta)\,\nabla_\theta \log P(\tau;\theta)\, R(\tau) \tag{1}$$
The probability of a trajectory can be defined as:

$$P(\tau;\theta) = \mu(s_0) \prod_{t=0}^{T-1} \pi_\theta(a_t|s_t)\, p(s_{t+1}|s_t,a_t)$$

where $\mu(s_0)$ is the probability of the first state and $p(s_{t+1}|s_t,a_t)$ is the transition probability between states as governed by the environment. Neither of those terms depends on $\theta$, so they drop out when we take the gradient of the log. So,

$$\nabla_\theta \log P(\tau;\theta) = \sum_{t=0}^{T-1} \nabla_\theta \log \pi_\theta(a_t|s_t)$$
Now we have formulated this gradient for one episode. If we have $D$ trajectories, the sum over trajectories turns into an expectation, which we can estimate with the sample mean:

$$\nabla_\theta J(\theta) \approx \frac{1}{D} \sum_{i=1}^{D} R(\tau^{(i)}) \sum_{t=0}^{T-1} \nabla_\theta \log \pi_\theta\!\left(a_t^{(i)} \,\middle|\, s_t^{(i)}\right) \tag{2}$$
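Here is a minimal sketch of how the sample-mean estimate in equation (2) is typically turned into code, assuming a PyTorch policy where `policy(s)` returns a `torch.distributions` object. The function name and the trajectory layout are assumptions for illustration, not something fixed by the derivation above.

```python
import torch

def reinforce_loss(trajectories, policy):
    """Sample-mean estimate of equation (2), written as a loss to minimize.

    Each trajectory is a list of (state, action, reward) tuples, and
    policy(s) is assumed to return a torch.distributions object for state s.
    """
    total = 0.0
    for traj in trajectories:
        episode_return = sum(r for _, _, r in traj)                      # R(tau)
        log_probs = torch.stack([policy(s).log_prob(a) for s, a, _ in traj])
        # Negative sign: gradient descent on this loss is gradient ascent on J(theta).
        total = total - episode_return * log_probs.sum()
    return total / len(trajectories)

# Usage sketch:
#   loss = reinforce_loss(trajectories, policy)
#   loss.backward()        # autograd produces the estimate of grad_theta J(theta)
#   optimizer.step()
```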
Continuous Environments:
Using the average reward per time step as our cost function,

$$J_{avR}(\theta) = \sum_{s} d^{\pi_\theta}(s) \sum_{a} \pi_\theta(a|s)\, \mathcal{R}_s^a$$

where $\mathcal{R}_s^a$ is the expected immediate reward for taking action $a$ in state $s$.
Finding gradients for continuing environments is tricky. Changes in the state distribution are a function of our policy as well as the environment, and since we don't know how the environment works, we don't know how our parameters affect $d^{\pi_\theta}(s)$.
This is where the policy gradient theorem comes in, which says:

$$\nabla_\theta J(\theta) \propto \sum_{s} d^{\pi_\theta}(s) \sum_{a} \nabla_\theta \pi_\theta(a|s)\, Q^{\pi_\theta}(s,a)$$

where $d^{\pi_\theta}$ is a distribution with $d^{\pi_\theta}(s) = \frac{\eta(s)}{\sum_{s'} \eta(s')}$, i.e. the number of visits to that state divided by the total visits to all states.

Also, the proportionality constant is the average length of an episode in an episodic environment and $1$ in a continuing environment.
**Note**: The policy gradient theorem also works for episodic cases.
Compatible Function Approximation Theorem
If you notice, the gradients contain the term $Q^{\pi_\theta}(s,a)$, and its true value is hard to come by, so it is easier to use a function approximator $Q_w(s,a)$ to estimate it. But the question is: wouldn't that introduce bias into the estimation of the gradients? As it turns out, if you play your cards right you can still recover the true gradients based on $Q_w(s,a)$.
So, the Compatible Function Approximation Theorem has these two conditions:
1) The gradients of $Q_w(s,a)$ with respect to $w$ equal the gradients of $\log \pi_\theta(a|s)$ with respect to $\theta$:

$$\nabla_w Q_w(s,a) = \nabla_\theta \log \pi_\theta(a|s) \tag{3}$$
At first this condition seems bizarre: how can the gradients be the same? But it depends on the choice of the policy and the feature vectors used for value approximation. For example, if you choose a softmax policy over a linear transformation of feature vectors for discrete actions,

$$\pi_\theta(a|s) = \frac{e^{\phi(s,a)^\top \theta}}{\sum_{a'} e^{\phi(s,a')^\top \theta}}$$

then the gradient of $\log \pi_\theta(a|s)$ comes out as:

$$\nabla_\theta \log \pi_\theta(a|s) = \phi(s,a) - \mathbb{E}_{\pi_\theta}\!\left[\phi(s,\cdot)\right]$$
Now if we choose our function approximator as:

$$Q_w(s,a) = \left(\phi(s,a) - \mathbb{E}_{\pi_\theta}\!\left[\phi(s,\cdot)\right]\right)^\top w$$

then $\nabla_w Q_w(s,a) = \phi(s,a) - \mathbb{E}_{\pi_\theta}\!\left[\phi(s,\cdot)\right] = \nabla_\theta \log \pi_\theta(a|s)$, and condition 1 is satisfied, as the quick check below illustrates.
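A small numerical sanity check of condition 1 for this softmax/linear pairing. The feature values and dimensions below are arbitrary choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions, dim = 4, 5
phi = rng.normal(size=(n_actions, dim))      # phi(s, a) for one fixed state s
theta = rng.normal(size=dim)                 # policy parameters

def log_pi(theta, a):
    """log pi_theta(a|s) for a softmax policy with linear scores phi(s,a)^T theta."""
    scores = phi @ theta
    shifted = scores - scores.max()
    return shifted[a] - np.log(np.exp(shifted).sum())

# Analytic gradient from the text: phi(s,a) - E_pi[phi(s, .)],
# which is also grad_w Q_w(s,a) for Q_w(s,a) = (phi(s,a) - E_pi[phi(s, .)])^T w.
scores = phi @ theta
pi = np.exp(scores - scores.max())
pi /= pi.sum()
a = 2
grad_q_w = phi[a] - pi @ phi

# Finite-difference gradient of log pi_theta(a|s) with respect to theta.
eps = 1e-6
grad_log_pi = np.array([
    (log_pi(theta + eps * e, a) - log_pi(theta - eps * e, a)) / (2 * eps)
    for e in np.eye(dim)
])

print(np.allclose(grad_log_pi, grad_q_w, atol=1e-5))   # True: condition 1 holds
```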
2) We are minimizing the mean squared error between the true value function and our function approximation:

$$\varepsilon = \mathbb{E}_{\pi_\theta}\!\left[\left(Q^{\pi_\theta}(s,a) - Q_w(s,a)\right)^2\right]$$
We can get these true value function values using different policy evaluation methods. But let's see how this condition enables estimating the true gradient values with function approximators that follow condition 1. Since we are minimizing the mean squared error over all state-action pairs, weighted by how often we visit them, for a continuing environment we get

$$\nabla_w \varepsilon = -2\,\mathbb{E}_{\pi_\theta}\!\left[\left(Q^{\pi_\theta}(s,a) - Q_w(s,a)\right)\nabla_w Q_w(s,a)\right]$$

At a local minimum this gradient is zero, so

$$\mathbb{E}_{\pi_\theta}\!\left[\left(Q^{\pi_\theta}(s,a) - Q_w(s,a)\right)\nabla_w Q_w(s,a)\right] = 0$$

and using condition 1, $\nabla_w Q_w(s,a) = \nabla_\theta \log \pi_\theta(a|s)$, this becomes

$$\mathbb{E}_{\pi_\theta}\!\left[Q^{\pi_\theta}(s,a)\,\nabla_\theta \log \pi_\theta(a|s)\right] = \mathbb{E}_{\pi_\theta}\!\left[Q_w(s,a)\,\nabla_\theta \log \pi_\theta(a|s)\right]$$
We can now see that if both conditions are met, then using $Q_w(s,a)$ in place of the true value gives a really good estimate of the true policy gradient of $J(\theta)$:

$$\nabla_\theta J(\theta) = \mathbb{E}_{\pi_\theta}\!\left[\nabla_\theta \log \pi_\theta(a|s)\, Q_w(s,a)\right]$$
Advantage Function
Till now we talked about reducing bias when using function approximators, but what about the variance of our gradients? Let's say our environment has rewards or Q-values that vary from 0 to 10000. This results in a huge variance in the gradients. A natural approach is to use the concept of baselines. A baseline $B(s)$ is a function of the state alone; it doesn't depend on the action taken. A natural choice of baseline is $V^{\pi_\theta}(s)$. Adding or subtracting a baseline doesn't change the expectation of our gradients, since

$$\mathbb{E}_{\pi_\theta}\!\left[\nabla_\theta \log \pi_\theta(a|s)\, B(s)\right] = \sum_{s} d^{\pi_\theta}(s)\, B(s)\, \nabla_\theta \sum_{a} \pi_\theta(a|s) = \sum_{s} d^{\pi_\theta}(s)\, B(s)\, \nabla_\theta 1 = 0$$
So our advantage function looks like

$$A^{\pi_\theta}(s,a) = Q^{\pi_\theta}(s,a) - V^{\pi_\theta}(s)$$

In practice we estimate the advantage function as $A(s,a) \approx Q_w(s,a) - V_v(s)$, using two separate approximators and updating them with the usual value function approximation methods.
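Putting the pieces together, here is a minimal sketch of one update with the advantage estimated from two separate approximators, as described above. The names `policy`, `q_net`, `v_net`, the optimizers and the one-step TD targets are my own assumptions for the sketch, not a prescription from this post.

```python
import torch

def actor_critic_step(s, a, r, s_next, policy, q_net, v_net,
                      pi_opt, q_opt, v_opt, gamma=0.99):
    """One update step using A(s,a) ~ Q_w(s,a) - V_v(s) as the advantage."""
    # One-step TD target shared by both critics (Monte Carlo returns also work).
    with torch.no_grad():
        td_target = r + gamma * v_net(s_next)

    # Update the two value approximators by minimizing squared error (condition 2 in spirit).
    q_loss = (q_net(s, a) - td_target).pow(2).mean()
    q_opt.zero_grad(); q_loss.backward(); q_opt.step()

    v_loss = (v_net(s) - td_target).pow(2).mean()
    v_opt.zero_grad(); v_loss.backward(); v_opt.step()

    # Actor update: step along grad log pi_theta(a|s) * A(s,a),
    # with the advantage treated as a constant (no gradient flows through it).
    with torch.no_grad():
        advantage = q_net(s, a) - v_net(s)
    pi_loss = -(policy(s).log_prob(a) * advantage).mean()
    pi_opt.zero_grad(); pi_loss.backward(); pi_opt.step()
```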