RL(1)-Proximal Policy Optimization (PPO)

1.Policy Gradient

1.1.Expected Reward

\overline R_{\theta} = E_{\tau \sim p_{\theta}(\tau)}[R(\tau)] = \sum_{\tau} R(\tau)\,p_{\theta}(\tau) \approx \frac 1N \sum_{n=1}^N R(\tau^n)

r: reward
\theta: actor (policy) parameters
\tau: trajectory, \tau=(s_1, a_1, s_2, a_2, ..., s_T, a_T)

where:
R(\tau)=\sum_{t=1}^{T}r_t

p_{\theta}({\tau})=p(s_1)\prod_{t=1}^T p_\theta(a_t|s_t)p(s_{t+1}|s_t, a_t)
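
A minimal sketch of the sampling view of \overline R_{\theta}: run the actor N times, record each R(\tau^n), and average. `sample_trajectory` is a hypothetical helper that rolls out the current actor once and returns the list of rewards (r_1, ..., r_T).

```python
def trajectory_return(rewards):
    # R(tau) = sum_t r_t
    return sum(rewards)

def estimate_expected_reward(sample_trajectory, n_trajectories=1000):
    # Monte Carlo estimate: R_bar(theta) ~= (1/N) * sum_n R(tau^n), tau^n ~ p_theta
    returns = [trajectory_return(sample_trajectory()) for _ in range(n_trajectories)]
    return sum(returns) / len(returns)
```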

1.2.Maximize Expected Reward

\nabla \overline R_{\theta} = \frac 1N \sum_{n=1}^N \sum_{t=1}^{T_n} R(\tau^n) \nabla \log p_\theta (a_t^n|s_t^n)

optimize:
\theta \leftarrow \theta+\alpha \nabla \overline R_\theta
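
A minimal sketch of this update with autograd, implemented in the usual way by minimizing the negative of the objective; `policy` (a network mapping states to action logits), the optimizer, and the trajectory tensors are assumed placeholders, not a fixed API.

```python
import torch

def reinforce_update(policy, optimizer, trajectories):
    # trajectories: list of (states, actions, rewards) tensors sampled from the current policy
    loss = 0.0
    for states, actions, rewards in trajectories:
        ret = rewards.sum()                                    # R(tau^n)
        log_probs = torch.log_softmax(policy(states), dim=-1)  # log p_theta(.|s_t), shape (T, |A|)
        logp_a = log_probs.gather(1, actions.view(-1, 1)).squeeze(1)  # log p_theta(a_t|s_t)
        loss = loss - ret * logp_a.sum()                       # minus the inner sum of the objective
    loss = loss / len(trajectories)                            # 1/N
    optimizer.zero_grad()
    loss.backward()                                            # gradient of -R_bar
    optimizer.step()                                           # theta <- theta + alpha * grad R_bar
```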

1.3.Tips

1.3.1.Add a Baseline (reduce variance)

\nabla \overline R_{\theta} = \frac 1N \sum_{n=1}^N \sum_{t=1}^{T_n} (R(\tau^n)-b) \nabla \log p_\theta (a_t^n|s_t^n)

e.g. self-critical training, where the baseline is the reward r(\hat w) of the greedily decoded output \hat w:
\nabla_\theta L(\theta) = -\mathbb{E}_{w^s \sim p_{\theta}}[(r(w^s)-r(\hat w)) \nabla_\theta \log p_\theta (w^s)]
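
One simple choice (a sketch, not the only option) is the batch-average return as b; the update code is the same as above with R(\tau^n) replaced by the centered weight.

```python
import torch

def centered_returns(batch_rewards):
    # batch_rewards: list of 1-D reward tensors, one per trajectory
    returns = torch.stack([r.sum() for r in batch_rewards])  # R(tau^n)
    baseline = returns.mean()                                 # b: average return of the batch
    return returns - baseline                                 # weights R(tau^n) - b
```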

1.3.2.Assign Suitable Credit

R(\tau^n) \rightarrow \sum_{t^\prime=t}^{T_n}r_{t^\prime}^n

1.3.3.Add Discount Factor

\sum_{t^\prime=t}^{T_n}r_{t^\prime}^n \rightarrow \sum_{t^\prime=t}^{T_n} \gamma^{t^\prime-t}r_{t^\prime}^n, \gamma<1
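
Tips 1.3.2 and 1.3.3 together replace R(\tau^n) with the discounted reward-to-go from step t; a minimal sketch (with \gamma=1 it reduces to 1.3.2):

```python
import torch

def discounted_rewards_to_go(rewards, gamma=0.99):
    # rewards: 1-D tensor (r_1, ..., r_T); returns G_t = sum_{t'>=t} gamma^(t'-t) * r_{t'}
    T = rewards.shape[0]
    out = torch.zeros(T)
    running = 0.0
    for t in reversed(range(T)):
        running = rewards[t] + gamma * running
        out[t] = running
    return out

# e.g. rewards (1, 1, 1) with gamma = 0.9 -> (2.71, 1.90, 1.00)
print(discounted_rewards_to_go(torch.tensor([1.0, 1.0, 1.0]), gamma=0.9))
```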

2.Proximal Policy Optimization

2.1.Advantage Function

A^{\pi}(s, a)=Q^{\pi}(s,a)-V^{\pi}(s)=E_{s^\prime \sim P(s^\prime |s,a)}[r(s,a)+\gamma V^{\pi} (s^\prime)-V^{\pi}(s)]

where
V^{\pi}(s)=E_{a \sim \pi(a|s)}[Q^{\pi}(s,a)]=\sum_a \pi(a|s) Q^{\pi}(s,a)
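
In practice the expectation over s^\prime is replaced by the sampled transition, giving the one-step estimate A(s_t,a_t) \approx r_t + \gamma V(s_{t+1}) - V(s_t); a sketch assuming a hypothetical `critic` network that outputs V(s):

```python
import torch

def td_advantage(critic, states, rewards, next_states, dones, gamma=0.99):
    # A_t ~= r_t + gamma * V(s_{t+1}) * (1 - done_t) - V(s_t)
    with torch.no_grad():
        v = critic(states).squeeze(-1)            # V(s_t)
        v_next = critic(next_states).squeeze(-1)  # V(s_{t+1})
    return rewards + gamma * v_next * (1.0 - dones) - v
```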

2.2.On-policy \rightarrow Off-policy

Importance Sampling
E_{x \sim p}[f(x)]=E_{x \sim q}[f(x) \frac {p(x)} {q(x)}]
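
A numerical sanity check of this identity on a small discrete distribution (p, q, and f below are made-up values for illustration):

```python
import torch

p = torch.tensor([0.7, 0.2, 0.1])   # target distribution over {0, 1, 2}
q = torch.tensor([0.2, 0.3, 0.5])   # sampling distribution
f = torch.tensor([1.0, 5.0, -2.0])  # arbitrary f(x)

exact = (p * f).sum()                                            # E_{x~p}[f(x)]

x = torch.multinomial(q, num_samples=100_000, replacement=True)  # x ~ q
estimate = (p[x] / q[x] * f[x]).mean()                           # E_{x~q}[f(x) p(x)/q(x)]

print(exact.item(), estimate.item())                             # should agree closely
```

If p and q differ too much, a few samples carry most of the weight and the variance of the estimate blows up, which is why the objectives below keep \theta close to \theta^\prime.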

\nabla \overline R_{\theta} =E_{\tau \sim p_{\theta}(\tau)}[R(\tau) \nabla \log p_\theta (\tau)]

\nabla \overline R_{\theta} =E_{\tau \sim p_{{\theta}^\prime}(\tau)}[\frac {p_{\theta}(\tau)} {p_{{\theta}^\prime}(\tau)} R(\tau) \nabla \log p_\theta (\tau)]

Off-policy means sampling data from \theta^\prime and using that data to train \theta.
\begin{equation}\begin{split} \nabla \overline R_{\theta}&=E_{(s_t, a_t) \sim \pi_\theta}[A^\theta(s_t, a_t) \nabla \log p_\theta (a_t|s_t)] \\ &\approx E_{(s_t, a_t) \sim \pi_{\theta^\prime}}[\frac {p_\theta(s_t,a_t)}{p_{\theta^\prime}(s_t,a_t)}A^{\theta^\prime}(s_t, a_t) \nabla \log p_\theta (a_t|s_t)]\\ &=E_{(s_t, a_t) \sim \pi_{\theta^\prime}}[\frac {p_\theta(a_t|s_t)}{p_{\theta^\prime}(a_t|s_t)} \frac {p_\theta(s_t)}{p_{\theta^\prime}(s_t)}A^{\theta^\prime}(s_t, a_t) \nabla \log p_\theta (a_t|s_t)]\\ &\approx E_{(s_t, a_t) \sim \pi_{\theta^\prime}}[\frac {p_\theta(a_t|s_t)}{p_{\theta^\prime}(a_t|s_t)} A^{\theta^\prime}(s_t, a_t) \nabla \log p_\theta (a_t|s_t)]\\ \end{split}\end{equation}
The last step assumes the state distributions are close, p_\theta(s_t) \approx p_{\theta^\prime}(s_t), so their ratio is dropped.

Objective function:
J^{\theta^\prime}(\theta)=E_{(s_t, a_t) \sim \pi_{\theta^\prime}}[\frac {p_\theta(a_t|s_t)}{p_{\theta^\prime}(a_t|s_t)} A^{\theta^\prime}(s_t, a_t)]
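
A sketch of J^{\theta^\prime}(\theta) as a training loss (negated for gradient descent); `old_log_probs` and `advantages` are computed once from data collected with \theta^\prime (and should carry no gradient), while only `log_probs` depends on the current \theta.

```python
import torch

def surrogate_loss(log_probs, old_log_probs, advantages):
    # -J(theta): negative importance-weighted advantage
    ratio = torch.exp(log_probs - old_log_probs)  # p_theta(a_t|s_t) / p_theta'(a_t|s_t)
    return -(ratio * advantages).mean()
```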

2.3.Trust Region Policy Optimization (TRPO)

Constrained optimization:
J^{\theta^\prime}_{TRPO}(\theta)=E_{(s_t, a_t) \sim \pi_{\theta^\prime}}[\frac {p_\theta(a_t|s_t)}{p_{\theta^\prime}(a_t|s_t)} A^{\theta^\prime}(s_t, a_t)], \quad \text{s.t. } KL(\theta, \theta^\prime)<\delta
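
TRPO enforces this constraint with a second-order step (natural gradient plus line search), which is omitted here; only the constrained quantity, the KL divergence between the two policies' action distributions averaged over states, is sketched below for a discrete action space.

```python
import torch

def mean_kl(old_logits, new_logits):
    # KL(pi_theta' || pi_theta) for categorical policies, averaged over the batch of states
    old_log_p = torch.log_softmax(old_logits, dim=-1)
    new_log_p = torch.log_softmax(new_logits, dim=-1)
    return (old_log_p.exp() * (old_log_p - new_log_p)).sum(dim=-1).mean()
```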

2.4.PPO

Unconstrained optimization:
J^{\theta^\prime}_{PPO}(\theta)=E_{(s_t, a_t) \sim \pi_{\theta^\prime}}[\frac {p_\theta(a_t|s_t)}{p_{\theta^\prime}(a_t|s_t)} A^{\theta^\prime}(s_t, a_t)]-\beta KL(\theta, \theta^\prime)
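
A sketch of the KL-penalty version as a loss; \beta is typically adapted between updates (increased when the measured KL exceeds a target, decreased otherwise), which is omitted here.

```python
import torch

def ppo_penalty_loss(log_probs, old_log_probs, advantages, old_logits, new_logits, beta=0.01):
    # -J_PPO(theta) = -surrogate + beta * KL(theta', theta)
    ratio = torch.exp(log_probs - old_log_probs)
    surrogate = (ratio * advantages).mean()
    old_log_p = torch.log_softmax(old_logits, dim=-1)
    new_log_p = torch.log_softmax(new_logits, dim=-1)
    kl = (old_log_p.exp() * (old_log_p - new_log_p)).sum(dim=-1).mean()
    return -surrogate + beta * kl
```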

2.5.PPO2

J^{\theta^\prime}_{PPO2}(\theta)\approx \sum_{(s_t, a_t)} \min \left( \frac {p_\theta(a_t|s_t)}{p_{\theta^\prime}(a_t|s_t)} A^{\theta^\prime}(s_t, a_t),\ \mathrm{clip}\left(\frac {p_\theta(a_t|s_t)}{p_{\theta^\prime}(a_t|s_t)}, 1-\epsilon, 1+\epsilon \right) A^{\theta^\prime}(s_t, a_t) \right)
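
A minimal sketch of the clipped objective as a loss (a common choice is \epsilon = 0.2); the advantage multiplies both terms inside the min, so for negative advantages the min still picks the more pessimistic value.

```python
import torch

def ppo_clip_loss(log_probs, old_log_probs, advantages, eps=0.2):
    # -J_PPO2(theta): pessimistic (min) of the unclipped and clipped surrogate terms
    ratio = torch.exp(log_probs - old_log_probs)               # p_theta / p_theta'
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
    return -torch.min(unclipped, clipped).mean()
```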
