Active Learning in POMDPs

Active Learning in POMDPs
Robin JAULMES
Supervisors: Doina PRECUP and Joelle PINEAU
McGill University, rjaulm@cs.mcgill.ca

Outline
1) Partially Observable Markov Decision Processes (POMDPs)
2) Active Learning in POMDPs
3) The MEDUSA algorithm

Markov Decision Processes (MDPs)
A Markov Decision Process has:
• States S
• Actions A
• Probabilistic transitions P(s'|s,a)
• Immediate rewards R(s,a)
• A discount factor γ
The current state is always perfectly observed.

Partially Observable Markov Decision Processes (POMDPs)
A POMDP has:
• States S
• Actions A
• Probabilistic transitions
• Immediate rewards
• A discount factor
• Observations Z
• Observation probabilities
• An initial belief b0

Applications of POMDPs
The ability to model environments in which the state is not fully observed allows applications in:
• Dialogue management
• Vision
• Robot navigation
• High-level control of robots
• Medical diagnosis
• Network maintenance
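For concreteness, here is a minimal sketch of the POMDP tuple just described as a Python container (the class name, field names, and array layout are illustrative assumptions, not part of the slides):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class POMDP:
    """Discrete POMDP: states, actions, observations, plus the model arrays."""
    n_states: int
    n_actions: int
    n_obs: int
    T: np.ndarray      # T[s, a, s'] = P(s' | s, a), probabilistic transitions
    O: np.ndarray      # O[s', a, z] = P(z | s', a), observation probabilities
    R: np.ndarray      # R[s, a], immediate rewards
    gamma: float       # discount factor
    b0: np.ndarray     # initial belief over states
```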

A POMDP example: The Tiger Problem

The Tiger Problem
Description:
• 2 states: Tiger_Left, Tiger_Right
• 3 actions: Listen, Open_Left, Open_Right
• 2 observations: Hear_Left, Hear_Right
Rewards are:
• -1 for the Listen action
• -100 for Open_Left in the Tiger_Left state
• +10 for Open_Right in the Tiger_Left state

The Tiger Problem
Furthermore:
• The hear action does not change the state.
• The open actions put the tiger behind either door with 50% chance.
• The open actions lead to a useless observation (50% hear_left, 50% hear_right).
• The hear action gives the correct information 85% of the time.
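As an illustration, the Tiger model can be written down with the container sketched above; the numbers come from the slides, while the rewards for the Tiger_Right state (not listed explicitly) are the usual symmetric completion and the discount factor is an assumed value:

```python
import numpy as np

# States: 0 = Tiger_Left, 1 = Tiger_Right
# Actions: 0 = Listen, 1 = Open_Left, 2 = Open_Right
# Observations: 0 = Hear_Left, 1 = Hear_Right

T = np.empty((2, 3, 2))
T[:, 0, :] = np.eye(2)   # Listen does not change the state
T[:, 1, :] = 0.5         # opening a door resets the tiger to either side
T[:, 2, :] = 0.5

O = np.empty((2, 3, 2))
O[:, 0, :] = [[0.85, 0.15],   # Listen is correct 85% of the time
              [0.15, 0.85]]
O[:, 1, :] = 0.5              # after opening, the observation is uninformative
O[:, 2, :] = 0.5

R = np.array([[-1.0, -100.0,   10.0],   # in Tiger_Left
              [-1.0,   10.0, -100.0]])  # in Tiger_Right (symmetric completion)

tiger = POMDP(n_states=2, n_actions=3, n_obs=2, T=T, O=O, R=R,
              gamma=0.95, b0=np.array([0.5, 0.5]))  # gamma assumed, not on the slide
```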

Solving a POMDP
To solve a POMDP is to find, for any action/observation history, the action that maximizes the expected discounted reward E[Σt γ^t rt].

The belief state
Instead of maintaining the complete action/observation history, we maintain a belief state b. The belief is a probability distribution over the states, so Dim(b) = |S| - 1.

The belief space
[Figures: representations of the belief space for two states (s0, s1), three states (s0, s1, s2), and four states (s0, s1, s2, s3).]
The belief space is continuous, but we only visit a countable number of belief points.

The Bayesian update
The belief is updated with Bayes' rule after each action and observation (a standard form of the update is given below).
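The update equation itself did not survive extraction; the standard Bayesian belief update, written with the T and O notation used elsewhere in the deck, is:

```latex
b'(s') \;=\; \frac{O(s',a,z)\,\sum_{s} T(s,a,s')\, b(s)}
                  {\sum_{s''} O(s'',a,z)\,\sum_{s} T(s,a,s'')\, b(s)}
```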

Value Function in POMDPs
We will compute the value function over the belief space. This is hard: the belief space is continuous! But we can use a property of the optimal value function for a finite horizon: it is piecewise-linear and convex. We can represent any finite-horizon solution by a finite set of alpha-vectors:
V(b) = max_α Σ_s α(s) b(s)

Alpha-Vectors
They are a set of hyperplanes which define the value function. At each belief point the value function is equal to the hyperplane with the highest value.

Value Iteration in POMDPs
Value iteration:
• Initialize the value function (horizon-1 value): V(b) = max_a Σ_s R(s,a) b(s). This produces one alpha-vector per action.
• Compute the value function at the next iteration using Bellman's equation:
V_{n+1}(b) = max_a [ Σ_s R(s,a) b(s) + γ Σ_z max_{α ∈ V_n} Σ_s Σ_{s'} T(s,a,s') O(s',a,z) α(s') b(s) ]
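As a small illustration of how a set of alpha-vectors is used, a sketch (assuming each alpha-vector is stored together with the action that generated it):

```python
import numpy as np

def value_and_action(b, alphas):
    """V(b) = max over alpha-vectors of alpha . b, plus the associated action."""
    values = [np.dot(alpha, b) for alpha, _action in alphas]
    i = int(np.argmax(values))
    return values[i], alphas[i][1]
```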

PBVI: Point-Based Value Iteration
• Always keep a bounded number of alpha-vectors.
• Use value iteration starting from belief points on a grid to produce new sets of alpha-vectors.
• Stop after n steps (finite horizon).
• The solution is approximate but found in a reasonable amount of time and memory.
• Good tradeoff between computation time and quality. See Pineau et al., 2003.
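A sketch of the point-based backup at a single belief point, in the spirit of Pineau et al., 2003 (the POMDP container and the (alpha, action) pairing follow the earlier sketches and are assumptions):

```python
import numpy as np

def point_based_backup(b, alphas, pomdp):
    """One backup at belief point b; returns the (alpha_vector, action) optimal at b.
    alphas is the list of (alpha_vector, action) pairs from the previous horizon."""
    T, O, R, gamma = pomdp.T, pomdp.O, pomdp.R, pomdp.gamma
    best_value, best = -np.inf, None
    for a in range(pomdp.n_actions):
        g_a = R[:, a].astype(float).copy()
        for z in range(pomdp.n_obs):
            # For each old alpha-vector: g(s) = gamma * sum_s' T(s,a,s') O(s',a,z) alpha(s')
            candidates = [gamma * (T[:, a, :] @ (O[:, a, z] * alpha))
                          for alpha, _ in alphas]
            # Keep the candidate that is best for this particular belief point.
            g_a += candidates[int(np.argmax([b @ g for g in candidates]))]
        if b @ g_a > best_value:
            best_value, best = b @ g_a, (g_a, a)
    return best
```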

Learning a POMDP
What happens if we don't know the model of the POMDP for sure? We have to learn it. The two kinds of solution in the literature are:
• EM-based approaches (prone to local minima)
• History-based approaches (require on the order of 1,000,000 samples for 2-state problems), Singh et al. 2003

Active Learning
In an active learning problem the learner has the ability to influence its training data. The learner asks for whatever is most useful given its current knowledge. Methods to find the most useful query have been shown by Cohn et al. (95).

Active Learning (Cohn et al. 95)
Their method, used for function approximation tasks, is based on finding the query that will minimize the estimated variance of the learner. They showed how this could be done exactly:
• For a mixture-of-Gaussians model.
• For locally weighted regression.

Applying Active Learning to POMDPs
We suppose in this work that we have an oracle that can determine the hidden state of the system on request. However, this action is costly and we want to use it as little as possible. In this setting, the active learning query is to ask for the hidden state.

Applying Active Learning to POMDPs
We propose two solutions:
• Integrate the model uncertainty and the query possibility inside the POMDP framework, to take advantage of existing algorithms.
• The MEDUSA algorithm, which uses a Dirichlet distribution over possible models to determine which actions to take and which queries to ask.

Decision-Theoretic Model Learning
We want to integrate into the POMDP model the facts that:
• We have only a rough estimate of its parameters.
• The agent can query the hidden state.
• These queries should not be used too often, and only to learn.

Decision-Theoretic Model Learning
So we modify our POMDP:
• For each uncertain parameter we introduce an additional state feature. This feature is discretized into n levels.
• At initialization the belief is uniformly distributed among these n groups of states, but the state stays in its group as transitions occur.
• We introduce a query action that returns the hidden state. This action is attached to a negative reward Rq.
• Then we solve this new POMDP using the usual methods.

D-T Planning: Results
[Figures: results of decision-theoretic planning.]

DT-Planning: Conclusions
Theoretically sound, but:
• The results are very sensitive to the value of the query penalty, which is therefore very difficult to set.
• The number of states becomes exponential in the number of uncertain parameters (with p uncertain parameters each discretized into n levels, the state space is multiplied by n^p). This greatly increases the complexity of the problem.
• With MEDUSA, we give up the theoretical guarantees of optimality to get a tractable algorithm.

MEDUSA: The main ideas
MEDUSA: Markovian Exploration with Decision based on the Use of Samples Algorithm.
• Use Dirichlet distributions to represent the current knowledge about the parameters of the POMDP model.
• Sample models from the distribution.
• Use the models to take actions that could be good.
• Use queries to improve the current knowledge.

Dirichlet distributions
Let X ∈ {1, 2, ..., N}. X is drawn from a multinomial distribution with parameters (θ1, ..., θN) iff p(X = i) = θi. The Dirichlet distribution is a distribution over multinomial distribution parameters, i.e. over (θ1, ..., θN) tuples such that θi ≥ 0 and Σi θi = 1.

Dirichlet distributions
Dirichlet distributions have parameters αi such that αi > 0. We can sample from a Dirichlet distribution by using Gamma distributions. The most likely parameters in a Dirichlet distribution are given by its mode (see the sketch after this slide).

Dirichlet distributions
We can also compute the probability of a set of multinomial distribution parameters according to the Dirichlet.
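The formulas on these slides did not survive extraction; below is a sketch of the three operations just described, using the standard Dirichlet mode and density (standard results, not taken from the slides):

```python
import numpy as np
from math import lgamma

def sample_multinomial_params(alpha, rng=np.random.default_rng()):
    """Sample (theta_1, ..., theta_N) ~ Dirichlet(alpha) via the Gamma trick."""
    g = rng.gamma(shape=np.asarray(alpha, dtype=float), scale=1.0)
    return g / g.sum()

def most_likely_params(alpha):
    """Mode of a Dirichlet (valid when all alpha_i > 1):
    theta_i = (alpha_i - 1) / (sum_j alpha_j - N)."""
    a = np.asarray(alpha, dtype=float)
    return (a - 1.0) / (a.sum() - len(a))

def log_dirichlet_pdf(theta, alpha):
    """log p(theta | alpha): how probable a multinomial parameter vector is
    under the Dirichlet with parameters alpha."""
    a = np.asarray(alpha, dtype=float)
    t = np.asarray(theta, dtype=float)
    log_norm = lgamma(a.sum()) - sum(lgamma(ai) for ai in a)
    return log_norm + float(np.sum((a - 1.0) * np.log(t)))
```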

The MEDUSA algorithm
• Step 1: initialize the Dirichlet distribution.
• Step 2: sample k (= 20) POMDPs from the Dirichlet distribution and compute their probabilities according to the Dirichlet. Normalize them to get the weights.
• Step 3: solve the k POMDPs with an approximate method (PBVI, finite horizon).

The MEDUSA algorithm
• Step 4: run the experiment. At each time step:
  • Compute the optimal actions for all POMDPs.
  • Execute one of them.
  • Update the belief for each POMDP.
  • If some conditions are met, do a state query.
  • Update the Dirichlet parameters according to this query.

The MEDUSA algorithm
At each time step:
• Recompute the POMDP weights.
• At fixed intervals, erase the POMDP with the lowest weight and redraw another one according to the current Dirichlet distribution. Compute the belief of the new POMDP according to the action/observation history up to the current time.
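Putting the steps together, a highly simplified sketch of the loop; the dirichlet/env/sample_model/solve interfaces, the "always query" rule, and restarting the redrawn model's belief from b0 are all assumptions or simplifications of what the slides describe:

```python
import numpy as np

def belief_update(b, a, z, pomdp):
    """Bayesian belief update (see the equation earlier in the deck)."""
    b_next = pomdp.O[:, a, z] * (pomdp.T[:, a, :].T @ b)
    return b_next / b_next.sum()

def medusa(dirichlet, env, sample_model, solve, k=20, n_steps=1000,
           lam=1.0, resample_every=50, rng=np.random.default_rng()):
    """dirichlet: holds the alpha-parameters; .log_prob(model), .update(s, a, s2, z, amount)
    env: the real system; .reset() -> initial hidden state, .step(a) -> observation z,
         .query() -> hidden state (the oracle)
    sample_model(dirichlet, rng) -> POMDP; solve(pomdp) -> policy with .action(belief)"""
    # Steps 1-3: sample k models, weight them by their Dirichlet probability, solve them.
    models = [sample_model(dirichlet, rng) for _ in range(k)]
    policies = [solve(m) for m in models]
    beliefs = [m.b0.copy() for m in models]
    log_w = np.array([dirichlet.log_prob(m) for m in models])
    weights = np.exp(log_w - log_w.max()); weights /= weights.sum()

    s_prev = env.reset()
    for t in range(n_steps):
        # Step 4: act with one model (chosen by weight), then update every belief.
        i = rng.choice(k, p=weights)
        a = policies[i].action(beliefs[i])
        z = env.step(a)
        beliefs = [belief_update(b, a, z, m) for b, m in zip(beliefs, models)]

        # Query heuristic: "always query" (the simplest of the options listed later).
        s_next = env.query()
        dirichlet.update(s_prev, a, s_next, z, amount=lam)
        s_prev = s_next

        # Recompute weights; periodically replace the lowest-weight model.
        log_w = np.array([dirichlet.log_prob(m) for m in models])
        weights = np.exp(log_w - log_w.max()); weights /= weights.sum()
        if (t + 1) % resample_every == 0:
            j = int(np.argmin(weights))
            models[j] = sample_model(dirichlet, rng)
            policies[j] = solve(models[j])
            beliefs[j] = models[j].b0.copy()
    return dirichlet
```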

Theoretical analysis
We can compute the policy to which MEDUSA converges with an infinite number of models, using integrals over the whole space of models. Under some assumptions on the POMDP, we can prove that MEDUSA converges to the true model.

MEDUSA on Tiger
[Figure: evolution of the mean discounted reward with time steps (query at every step).]

Diminishing the complexity
The algorithm is flexible: we can have a wide variety of priors. Some parameters may be certain. They can also be dependent (if we use the same alpha-parameters for different distributions). So if we have additional information about the POMDP's dynamics, we can diminish the number of alpha-parameters.

Diminishing the complexity
On the Tiger problem, if we know that:
• the "hear" action does not change the state,
• the problem is symmetric,
• opening a door brings an uninformative observation and puts the tiger back behind each door with probability 0.5,
then we can diminish the number of alpha-parameters from 24 to 2.

MEDUSA on simplified Tiger
[Figure: evolution of the mean discounted reward with time steps (query at every step). Blue: normal problem; black: simplified problem.]

Learning without query
The alternate belief keeps track of the knowledge brought by the last query. The non-query update updates each parameter proportionally to the probability that a query would have had of updating it, given the alternate belief and the last action/observation.

Learning without query
Non-query learning:
• has high variance: the learning rate needs to be lower, therefore more time steps are needed;
• is prone to local minima, so convergence to the correct values is not guaranteed;
• can converge to the right solution if the initial prior is "good enough".
MEDUSA should use non-query learning only once it has done "enough" query learning.
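A possible reading of the non-query update as code; the deck's exact equations are in its backup slides, so the proportional-update rule below is only a plausible instantiation under stated assumptions (a single reference model supplies the probabilities, and alpha_T/alpha_O are the transition and observation alpha-parameters):

```python
import numpy as np

def non_query_update(alpha_T, alpha_O, alt_belief, a, z, model, lam=0.1):
    """Increment each Dirichlet count in proportion to the probability that a
    query would have updated it, given the alternate belief and the last (a, z).
    alpha_T[s, a, s'] and alpha_O[s', a, z] hold the alpha-parameters."""
    # Joint probability of (hidden s, next s') under the alternate belief, a and z.
    joint = (alt_belief[:, None] * model.T[:, a, :]) * model.O[:, a, z][None, :]
    joint /= joint.sum()
    alpha_T[:, a, :] += lam * joint                # transition counts
    alpha_O[:, a, z] += lam * joint.sum(axis=0)    # observation counts
```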

Choosing when to query
There are different heuristics to choose when to do a query:
• Always (up to a certain number of queries).
• When the models disagree.
• When the value functions of the models differ.
• When the beliefs in the different models differ.
• When information from the last query has been lost.
• Not when a query would bring no information.

Non-Query Learning on Tiger
[Figures: mean discounted reward and number of queries. Blue: query learning; black: non-query learning.]

Picking actions during learning
• Take one model and do its best action.
• Consider every model and every action; do the action with the highest overall value.
• Compute the mean value of every action and probabilistically take one of them according to the Boltzmann method.
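A short sketch of the third option, Boltzmann action selection over the model-averaged values (how the per-model action values q[i, a] are obtained, e.g. from each model's alpha-vectors, is left as an assumption):

```python
import numpy as np

def boltzmann_action(q, weights, temperature=1.0, rng=np.random.default_rng()):
    """q[i, a]: value of action a at the current belief under sampled model i;
    weights: the models' normalized weights. Returns a sampled action index."""
    mean_q = weights @ q                               # weighted mean value per action
    logits = (mean_q - mean_q.max()) / temperature     # numerically stable softmax
    p = np.exp(logits) / np.exp(logits).sum()
    return int(rng.choice(len(mean_q), p=p))
```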

Different action-pickings on Tiger
[Figure: evolution of the mean discounted reward with time steps (query at every step). Blue: highest overall value; black: pick one model.]

Learning with non-stationary models
If the parameters of the model change unpredictably with time:
• At every time step, decay the alpha-parameters by some factor so that new experience weighs more than old experience.
• If the parameters do not vary too much, non-query learning is sufficient to keep track of their evolution.

Non-stationary Tiger problem
Change in p (the probability of a correct observation with the Hear action) at time 0.
[Figure: evolution of the mean discounted reward with time steps.]

The Hallway problem
• 60 states, 5 actions, 17 observations.
• The number of alpha-parameters corresponding to a reasonable prior is 84.

Hallway: reward evolution
[Figure: evolution of the mean discounted reward with time steps.]

Hallway: query number
[Figure: evolution of the number of queries with time steps.]

Benchmark POMDPs

MEDUSA and Robotics
We have interfaced MEDUSA with a robotic simulator (Carmen) which can be used to simulate POMDP learning problems. We present experimental results on the HIDE problem.

The HIDE problem
The robot is trying to capture a moving agent on the following map. The movements of the robot are deterministic, but the behavior of the person is unknown and is learned during execution. The problem is formulated as a POMDP with 362 states, 22 observations and 5 actions. To model the behavior of the moving agent we learn 52 alpha-parameters.
[Figure: map of the HIDE environment.]

Results on the HIDE problem
[Figure: evolution of the mean discounted reward with time steps.]

Results on the HIDE problem
[Figure: evolution of the number of queries with time steps.]

Conclusion
Advantages:
• Learned models can be re-used.
• If a parameter in the environment has a small change, MEDUSA can detect it online and without queries.
• Convergence is theoretically guaranteed.
• The number of queries requested and the length of training are tractable, even in large problems.
• It can be applied to robotics.

Conclusion
But:
• The assumption of an oracle is strong. However, we do not need to know the query result immediately.
• The algorithm has lots of parameters which require tuning so that it can work properly.
• Convergence is guaranteed only under certain conditions (for a certain policy, for certain POMDPs, for an infinite number of models and a perfect policy).

References
• Jaulmes, R., Pineau, J., Precup, D. "Learning in non-stationary Partially Observable Markov Decision Processes", ECML Workshop on Non-Stationarity in RL, 2005.
• Jaulmes, R., Pineau, J., Precup, D. "Active Learning in Partially Observable Markov Decision Processes", ECML, 2005.
• Cohn, D.A., Ghahramani, Z., Jordan, M.I. "Active Learning with Statistical Models", NIPS, 1995.
• Pineau, J., Gordon, G., Thrun, S. "Point-Based Value Iteration: an anytime algorithm for POMDPs", IJCAI, 2003.
• Dearden, R., Friedman, N., Andre, N. "Model based Bayesian Exploration", UAI, 1999.
• Singh, S., Littman, M., Jong, N.K., Pardoe, D., Stone, P. "Learning Predictive State Representations", ICML, 2003.

Questions?

Backup slides: Definitions; Convergence of the policy; Convergence of MEDUSA; Non-query update equations.

