CS598 Homework 1

Question 1

  • Solution

\begin{align*} V^\pi_{M^{\prime}}(s) &= E\Big[\sum_{t=1}^\infty\gamma^{t-1}R^{\prime}(s_t,a_t)\Big] \\ &= E\Big[\sum_{t=1}^{\infty}\gamma^{t-1}\big(R(s_t,a_t) - c\big)\Big] \\ &= E\Big[\sum_{t=1}^{\infty}\gamma^{t-1}R(s_t,a_t)\Big] - \sum_{t=1}^{\infty}\gamma^{t-1}c \\ &= V^\pi_M(s) - \frac{c}{1 - \gamma} \qquad\qquad \forall s \in S \end{align*}
\begin{align*} V^\star_{M^\prime}(s) &= \max_{a\in A} Q^\star_{M^\prime}(s,a) \\ &= \max_{a\in A}\Big[ Q^\star_{M}(s,a) - \frac{c}{1 - \gamma}\Big] \\ &= Q^\star_M(s, a^{\star}) - \frac{c}{1 - \gamma} = V^\star_M(s) - \frac{c}{1 - \gamma} \end{align*}
Thus the constant c shifts the value of every policy at every state by the same amount \frac{c}{1-\gamma}, so the maximizing action a^\star is unchanged and the optimal policy of M^\prime is the same as that of M. Hence, in the infinite-horizon discounted setting we may assume without loss of generality that R \in [0, R_{max}].
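
As a quick numerical sanity check (a minimal sketch: the random MDP, the constant c = 0.3, and the value-iteration tolerance are illustrative assumptions, not part of the problem), the script below solves a small MDP with rewards R and with R - c and verifies that the optimal values differ by exactly c/(1 - γ) while the greedy optimal policy is identical.

```python
import numpy as np

def value_iteration(P, R, gamma, tol=1e-10):
    """P: (S, A, S) transition tensor, R: (S, A) rewards. Returns optimal V and greedy policy."""
    S, A = R.shape
    V = np.zeros(S)
    while True:
        Q = R + gamma * P @ V              # Q[s, a] = R[s, a] + gamma * sum_s' P[s, a, s'] V[s']
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

rng = np.random.default_rng(0)
S, A, gamma, c = 5, 3, 0.9, 0.3            # illustrative sizes, discount, and constant
P = rng.random((S, A, S)); P /= P.sum(axis=2, keepdims=True)
R = rng.random((S, A))                     # R in [0, 1]

V, pi = value_iteration(P, R, gamma)
V_shift, pi_shift = value_iteration(P, R - c, gamma)

print(np.allclose(V - V_shift, c / (1 - gamma)))   # True: values shift by c/(1-gamma)
print(np.array_equal(pi, pi_shift))                # True: optimal policy unchanged
```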

Question 2

  • Solution

We use the subscript to denote the time step within the horizon; for example, s_1 is the state at h = 1.
\begin{align*} V^\pi_{M^{\prime}}(s_1) &= E\Big[\sum_{t=1}^H\gamma^{t-1}R^{\prime}(s_t,a_t)\Big] \\ &= E\Big[\sum_{t=1}^{H}\gamma^{t-1}\big(R(s_t,a_t) - c\big)\Big] \\ &= E\Big[\sum_{t=1}^{H}\gamma^{t-1}R(s_t,a_t)\Big] - \sum_{t=1}^{H}\gamma^{t-1}c \\ &= V^\pi_M(s_1) - \frac{c(1 - \gamma^H)}{1 - \gamma} \qquad\qquad \forall s_1\in S \end{align*}
\begin{align*} V^\pi_{M^{\prime}}(s_2) &= E\Big[\sum_{t=1}^{H-1}\gamma^{t-1}R^{\prime}(s_{t+1},a_{t+1})\Big] \\ &= E\Big[\sum_{t=1}^{H-1}\gamma^{t-1}\big(R(s_{t+1},a_{t+1}) - c\big)\Big] \\ &= E\Big[\sum_{t=1}^{H-1}\gamma^{t-1}R(s_{t+1},a_{t+1})\Big] - \sum_{t=1}^{H-1}\gamma^{t-1}c \\ &= V^\pi_M(s_2) - \frac{c(1 - \gamma^{H-1})}{1 - \gamma} \qquad\qquad \forall s_2\in S \end{align*}
More generally, at step h the values of the two models differ by the constant \frac{c(1 - \gamma^{H-h+1})}{1 - \gamma}, which depends only on h and not on the state, the action, or the policy. Since all action values at a given step are shifted by the same amount, the optimal policy is the same under both models.
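
The same check can be run for the finite-horizon case with backward induction (again a sketch: the sizes S, A, the horizon H = 6, and c = 0.5 are illustrative assumptions). At 0-indexed step h the gap should be c(1 - γ^{H-h})/(1 - γ), i.e. c(1 - γ^{H-h+1})/(1 - γ) in the 1-indexed notation above, and the greedy policy at every step should coincide.

```python
import numpy as np

def backward_induction(P, R, gamma, H):
    """Finite-horizon DP. Returns V[h, s] and greedy pi[h, s] for 0-indexed steps h = 0..H-1."""
    S, A = R.shape
    V = np.zeros((H + 1, S))                      # V[H] = 0 after the last step
    pi = np.zeros((H, S), dtype=int)
    for h in range(H - 1, -1, -1):
        Q = R + gamma * P @ V[h + 1]              # step-h action values
        V[h] = Q.max(axis=1)
        pi[h] = Q.argmax(axis=1)
    return V[:H], pi

rng = np.random.default_rng(1)
S, A, gamma, c, H = 4, 3, 0.9, 0.5, 6             # illustrative sizes, constant, and horizon
P = rng.random((S, A, S)); P /= P.sum(axis=2, keepdims=True)
R = rng.random((S, A))

V, pi = backward_induction(P, R, gamma, H)
V_shift, pi_shift = backward_induction(P, R - c, gamma, H)

gaps = np.array([c * (1 - gamma ** (H - h)) / (1 - gamma) for h in range(H)])
print(np.allclose(V - V_shift, gaps[:, None]))    # True: per-step constant shift
print(np.array_equal(pi, pi_shift))               # True: optimal policy unchanged at every step
```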

Question 3.1

  • Solution

If the reward is -1 per step, the optimal policy chooses the shortest path to the terminal state.
If the reward is 0 per step, every policy attains the same value, so the case is trivial.
If the reward is +1 per step, the optimal policy chooses the longest path, delaying termination as long as possible.
In summary, in an indefinite-horizon MDP the optimal policy can change when a constant is added to all rewards.
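
A two-path example makes the flip concrete (the path lengths 2 and 5 are arbitrary illustrative choices): with -1 per step the short path wins, with 0 every path ties, and with +1 the long path wins.

```python
# Two deterministic routes from the start state to the absorbing goal: a short one
# (2 steps) and a long one (5 steps); the episode ends as soon as the goal is reached.
short_len, long_len = 2, 5                       # illustrative path lengths

for per_step_reward in (-1, 0, +1):
    returns = {"short": per_step_reward * short_len,
               "long":  per_step_reward * long_len}
    best = [path for path, ret in returns.items() if ret == max(returns.values())]
    print(f"reward {per_step_reward:+d} per step: returns={returns}, optimal={best}")
# -1 per step -> the short (shortest) path is optimal
#  0 per step -> every path is optimal (all returns are 0)
# +1 per step -> the long (longest) path is optimal
```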

Question 3.2

  • Solution
  1. Convert the indefinite-horizon MDP into a finite-horizon MDP.

Assume the maximum trajectory length is H_0. Any trajectory whose length is strictly smaller than H_0 can be padded with zero-reward absorbing states until its length is exactly H_0. The padding does not change the value of any trajectory, so it does not change the original optimal policy.

  2. Add a constant reward such as +1 or +2 to every step.

If the constant is not also added at the absorbing (padding) states, trajectories of different lengths receive different total bonuses, and the result is the same as in Q3.1: the optimal policy can change.
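
The sketch below illustrates both points on the two trajectories from Q3.1 (H_0 = 6 and the per-step rewards are illustrative assumptions): padding with zero-reward absorbing steps leaves every return unchanged, while adding +2 only to the real, non-absorbing steps gives longer trajectories a larger bonus and flips the ranking, exactly as in Q3.1.

```python
# Two trajectories from Q3.1, padded to a fixed horizon H0 with zero-reward absorbing steps.
# H0 = 6 and the per-step rewards (-1 per real step) are illustrative assumptions.
H0 = 6
trajectories = {"short": [-1, -1], "long": [-1, -1, -1, -1, -1]}

def pad(rewards, H0):
    return rewards + [0] * (H0 - len(rewards))        # zero reward in the absorbing state

for name, rewards in trajectories.items():
    padded = pad(rewards, H0)
    assert sum(padded) == sum(rewards)                # padding preserves every return
    bonus = sum(r + 2 for r in rewards)               # +2 added only to the real, non-absorbing steps
    print(name, "original:", sum(rewards), "padded:", sum(padded), "with +2 bonus:", bonus)
# short: -2 -> +2, long: -5 -> +5; the ranking of the two trajectories flips, as in Q3.1
```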

Question 4

  • Solution
  1. A stationary MDP can be viewed as a non-stationary MDP in which P_h and R_h are fixed across the horizon, i.e., P_h = P and R_h = R for every h.
  2. We can augment the state with the time step h, i.e., S^\prime = S \times \{1, \dots, H\}. The new transition probability is P^\prime((s, h), a) = P_h(s, a), moving to states of the form (s^\prime, h+1), and the new reward function is R^\prime((s, h), a) = R_h(s, a). With this construction M^\prime = (S^\prime, A, P^\prime, R^\prime, H, \mu) is stationary, and the size of the new state space is |S| \times H; a sketch of the construction follows this list.
  3. At first glance, applying the same construction to an infinite-horizon non-stationary MDP would require an infinitely large augmented state space, so the reduction is no longer useful.
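
A minimal sketch of the augmentation in item 2 (the `augment` helper, the array layout, and the sizes H, S, A are illustrative assumptions): the augmented state (s, h) is flattened to the index h·|S| + s, transitions from layer h go to layer h + 1, the last layer is made self-absorbing, and the resulting state space has size |S| × H.

```python
import numpy as np

def augment(P_h, R_h):
    """Build a stationary MDP over augmented states (s, h) from a non-stationary one.
    P_h: (H, S, A, S) per-step transitions, R_h: (H, S, A) per-step rewards.
    Hypothetical helper written for illustration only."""
    H, S, A, _ = P_h.shape
    n = S * H                                       # (s, h) flattened to index h * S + s
    P = np.zeros((n, A, n))
    R = np.zeros((n, A))
    for h in range(H):
        for s in range(S):
            i = h * S + s
            R[i] = R_h[h, s]
            if h + 1 < H:
                P[i, :, (h + 1) * S:(h + 2) * S] = P_h[h, s]   # move to layer h + 1
            else:
                P[i, :, i] = 1.0                               # last layer: self-absorbing
    return P, R

rng = np.random.default_rng(2)
H, S, A = 4, 3, 2                                   # illustrative sizes
P_h = rng.random((H, S, A, S)); P_h /= P_h.sum(axis=3, keepdims=True)
R_h = rng.random((H, S, A))

P, R = augment(P_h, R_h)
print(P.shape, R.shape)                             # (12, 2, 12) and (12, 2): |S'| = |S| * H
print(np.allclose(P.sum(axis=2), 1.0))              # True: P' is a valid transition kernel
```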