    Support Vector Machines and Kernels.ppt



Support Vector Machines and Kernels
Adapted from slides by Tim Oates, Cognition, Robotics, and Learning (CORAL) Lab, University of Maryland Baltimore County
"Doing Really Well with Linear Decision Surfaces"

Outline
  • Prediction
  • Why might predictions be wrong?
  • Support vector machines: doing really well with linear models
  • Kernels: making the non-linear linear
Supervised ML = Prediction
  • Given training instances (x, y)
  • Learn a model f such that f(x) = y
  • Use f to predict y for new x
  • Many variations on this basic theme

Why might predictions be wrong?
  • True non-determinism: flip a biased coin with p(heads) = θ
  • Estimate θ; if θ > 0.5 predict heads, else tails (see the sketch below)
  • Lots of ML research on problems like this: learn a model, do the best you can in expectation
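
A minimal sketch of "do the best you can in expectation" for the biased coin (the true bias, sample size, and variable names are illustrative, not from the slides): estimate θ from observed flips, then always predict the more likely outcome.

    import numpy as np

    rng = np.random.default_rng(0)
    true_theta = 0.7                          # unknown bias of the coin
    flips = rng.random(1000) < true_theta     # observed data: True = heads

    theta_hat = flips.mean()                  # estimate theta from the sample
    prediction = "heads" if theta_hat > 0.5 else "tails"

    # Always predicting the majority outcome is the best you can do in
    # expectation: it is correct with probability max(theta, 1 - theta).
    print(f"theta_hat = {theta_hat:.3f}; always predict {prediction}")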

Why might predictions be wrong?
  • Partial observability: something needed to predict y is missing from observation x
  • N-bit parity problem: x contains only N−1 bits (hard PO), or x contains all N bits but the learner ignores some of them (soft PO)

Why might predictions be wrong?
  • True non-determinism
  • Partial observability (hard, soft)
  • Representational bias
  • Algorithmic bias
  • Bounded resources

Representational Bias
  • Having the right features (x) is crucial
  [Figure: a 1-D arrangement of X and O instances illustrating the importance of the right features]

Support Vector Machines
"Doing Really Well with Linear Decision Surfaces"

Strengths of SVMs
  • Good generalization in theory
  • Good generalization in practice
  • Work well with few training instances
  • Find the globally best model
  • Efficient algorithms
  • Amenable to the kernel trick

Linear Separators
  • Training instances: x ∈ ℝⁿ, y ∈ {−1, 1}
  • Parameters: w ∈ ℝⁿ, b ∈ ℝ
  • Hyperplane: ⟨w, x⟩ + b = 0, i.e. w1x1 + w2x2 + … + wnxn + b = 0
  • Decision function: f(x) = sign(⟨w, x⟩ + b) (see the sketch below)
  • Math review, inner (dot) product: ⟨a, b⟩ = a · b = Σᵢ aᵢbᵢ = a1b1 + a2b2 + … + anbn
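
A minimal sketch of the linear decision function (the weight vector, bias, and test points are made up for illustration):

    import numpy as np

    def predict(w, b, x):
        """Linear decision function f(x) = sign(<w, x> + b)."""
        return np.sign(np.dot(w, x) + b)

    w = np.array([1.0, -2.0])    # hypothetical weights
    b = 0.5                      # hypothetical bias
    print(predict(w, b, np.array([3.0, 1.0])))   # 1.0  (positive side)
    print(predict(w, b, np.array([0.0, 1.0])))   # -1.0 (negative side)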

[Figure slides: Intuitions (four variants); A "Good" Separator; Noise in the Observations; Ruling Out Some Separators; Lots of Noise; Maximizing the Margin; "Fat" Separators. Each shows the same X/O scatter plot with different candidate separators, motivating the maximum-margin choice.]

Why Maximize Margin?
  • Increasing the margin reduces capacity
  • Must restrict capacity to generalize
  • m training instances, 2^m ways to label them
  • What if a function class can separate them all? Then it shatters the training instances
  • The VC dimension is the largest m such that the function class can shatter some set of m points

VC Dimension Example
  [Figure: all 2^3 = 8 labelings of three points in the plane, each realizable by a linear separator]

Bounding Generalization Error
  • R[f] = risk, test error; Remp[f] = empirical risk, training error
  • h = VC dimension; m = number of training instances; δ = probability that the bound does not hold
  • With probability 1 − δ:  R[f] ≤ Remp[f] + √( (h(ln(2m/h) + 1) + ln(4/δ)) / m )  (evaluated numerically below)
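
The capacity term of this bound shrinks as m grows and grows with the VC dimension h. A small sketch evaluating it for illustrative values of h, m, and δ (the numbers are made up):

    import math

    def vc_capacity_term(h, m, delta):
        """sqrt((h * (ln(2m/h) + 1) + ln(4/delta)) / m)."""
        return math.sqrt((h * (math.log(2 * m / h) + 1) + math.log(4 / delta)) / m)

    # Illustrative: VC dimension 10, 1000 instances, bound holds with prob. 0.95.
    print(round(vc_capacity_term(h=10, m=1000, delta=0.05), 3))   # 0.26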

Support Vectors
  [Figure: the maximum-margin separator with the support vectors lying on the margin]

The Math
  • Training instances: x ∈ ℝⁿ, y ∈ {−1, 1}
  • Decision function: f(x) = sign(⟨w, x⟩ + b), with w ∈ ℝⁿ, b ∈ ℝ
  • Find w and b that perfectly classify the training instances (assuming linear separability) and maximize the margin

The Math
  • For perfect classification, we want yᵢ(⟨w, xᵢ⟩ + b) ≥ 0 for all i. Why? The product is non-negative exactly when f(xᵢ) agrees in sign with yᵢ.
  • To maximize the margin, we want the w that minimizes |w|²

Dual Optimization Problem
  • Maximize over α:  W(α) = Σᵢ αᵢ − ½ Σᵢ,ⱼ αᵢ αⱼ yᵢ yⱼ ⟨xᵢ, xⱼ⟩
  • Subject to: αᵢ ≥ 0 and Σᵢ αᵢ yᵢ = 0
  • Decision function: f(x) = sign(Σᵢ αᵢ yᵢ ⟨x, xᵢ⟩ + b) (see the sketch below)
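
A minimal sketch of solving this dual with scikit-learn (assuming scikit-learn is installed; the toy data are made up). SVC exposes the dual solution: support_vectors_ are the xᵢ with αᵢ > 0, dual_coef_ holds the products yᵢαᵢ, and intercept_ is b.

    import numpy as np
    from sklearn.svm import SVC

    # Toy linearly separable data.
    X = np.array([[1.0, 1.0], [2.0, 2.5], [0.5, 2.0],
                  [3.0, 0.5], [4.0, 1.0], [3.5, -0.5]])
    y = np.array([-1, -1, -1, 1, 1, 1])

    # A large C approximates the hard-margin SVM of the slides.
    clf = SVC(kernel="linear", C=1e6).fit(X, y)

    print(clf.support_vectors_)   # the x_i with alpha_i > 0
    print(clf.dual_coef_)         # y_i * alpha_i for those x_i
    print(clf.intercept_)         # b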

What if Data Are Not Perfectly Linearly Separable?
  • Cannot find w and b that satisfy yᵢ(⟨w, xᵢ⟩ + b) ≥ 1 for all i
  • Introduce slack variables ξᵢ ≥ 0 and require yᵢ(⟨w, xᵢ⟩ + b) ≥ 1 − ξᵢ for all i
  • Minimize |w|² + C Σᵢ ξᵢ (see the sketch below)
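
This objective is essentially equivalent to the unconstrained hinge-loss problem: minimize ½|w|² + C Σᵢ max(0, 1 − yᵢ(⟨w, xᵢ⟩ + b)). A rough per-instance subgradient-descent sketch of that formulation (the helper name, step size, epoch count, and data are all illustrative choices, not from the slides):

    import numpy as np

    def soft_margin_sgd(X, y, C=1.0, lr=0.01, epochs=200):
        """Subgradient descent on 0.5*|w|^2 + C * sum(max(0, 1 - y*(<w, x> + b)))."""
        w, b = np.zeros(X.shape[1]), 0.0
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                if yi * (np.dot(w, xi) + b) < 1:   # margin violated: hinge term active
                    w -= lr * (w - C * yi * xi)
                    b += lr * C * yi
                else:                              # only the regularizer pulls on w
                    w -= lr * w
        return w, b

    X = np.array([[1.0, 1.0], [2.0, 2.5], [3.0, 0.5], [4.0, 1.0]])
    y = np.array([-1, -1, 1, 1])
    w, b = soft_margin_sgd(X, y)
    print(np.sign(X @ w + b))   # should recover y on this toy set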

Strengths of SVMs
  • Good generalization in theory and in practice
  • Work well with few training instances
  • Find the globally best model
  • Efficient algorithms
  • Amenable to the kernel trick

What if the Surface is Non-Linear?
  [Figure: data that no hyperplane separates. Image from http:/]

Kernel Methods
"Making the Non-Linear Linear"

When Linear Separators Fail
  [Figure: X/O data in the (x1, x2) plane that no line separates]

Mapping into a New Feature Space
  • Rather than run the SVM on xᵢ, run it on Φ(xᵢ)
  • Find a non-linear separator in input space
  • What if Φ(xᵢ) is really big? Use kernels to compute it implicitly!
  • Φ: x ↦ X = Φ(x)
  • Example: Φ(x1, x2) = (x1, x2, x1², x2², x1·x2) (see the sketch below)
  [Image from http://web.engr.oregonstate.edu/afern/classes/cs534/]
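
A minimal sketch of this explicit feature map (the example point is made up):

    import numpy as np

    def phi(x):
        """Feature map Phi(x1, x2) = (x1, x2, x1^2, x2^2, x1*x2)."""
        x1, x2 = x
        return np.array([x1, x2, x1**2, x2**2, x1 * x2])

    # A circle x1^2 + x2^2 = r^2 is non-linear in input space, but in the
    # mapped space it is the hyperplane z3 + z4 = r^2: linear again.
    print(phi(np.array([1.0, 2.0])))   # [1. 2. 1. 4. 2.]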

Kernels
  • Find a kernel K such that K(x1, x2) = ⟨Φ(x1), Φ(x2)⟩
  • Computing K(x1, x2) should be efficient, much more so than computing Φ(x1) and Φ(x2)
  • Use K(x1, x2) in the SVM algorithm rather than ⟨Φ(x1), Φ(x2)⟩
  • Remarkably, this is possible

The Polynomial Kernel
  • K(x1, x2) = ⟨x1, x2⟩²
  • x1 = (x11, x12), x2 = (x21, x22)
  • ⟨x1, x2⟩ = x11·x21 + x12·x22
  • ⟨x1, x2⟩² = x11²·x21² + x12²·x22² + 2·x11·x12·x21·x22
  • Φ(x1) = (x11², x12², √2·x11·x12) and Φ(x2) = (x21², x22², √2·x21·x22)
  • So K(x1, x2) = ⟨Φ(x1), Φ(x2)⟩ (verified numerically below)
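
A quick numeric check of this identity on made-up points: computing ⟨x1, x2⟩² directly agrees with taking the inner product after the explicit map.

    import numpy as np

    def phi(x):
        """Phi(x) = (x1^2, x2^2, sqrt(2)*x1*x2) for the degree-2 polynomial kernel."""
        return np.array([x[0]**2, x[1]**2, np.sqrt(2) * x[0] * x[1]])

    x1 = np.array([1.0, 3.0])
    x2 = np.array([2.0, -1.0])

    k_direct = np.dot(x1, x2) ** 2        # stay in input space
    k_mapped = np.dot(phi(x1), phi(x2))   # go through the feature map
    print(k_direct, k_mapped)             # both 1.0, up to floating-point rounding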

The Polynomial Kernel
  • Φ(x) contains all monomials of degree d
  • Useful in visual pattern recognition
  • Number of monomials: a 16×16 pixel image has about 10^10 monomials of degree 5
  • Never explicitly compute Φ(x)!
  • Variation: K(x1, x2) = (⟨x1, x2⟩ + 1)²

Kernels
  • What does it mean to be a kernel? K(x1, x2) = ⟨Φ(x1), Φ(x2)⟩ for some Φ
  • What does it take to be a kernel? Consider the Gram matrix Gᵢⱼ = K(xᵢ, xⱼ)
  • Positive definite matrix: Σᵢ,ⱼ cᵢ cⱼ Gᵢⱼ ≥ 0 for all cᵢ, cⱼ ∈ ℝ
  • Positive definite kernel: for all samples of size m, it induces a positive definite Gram matrix (see the check below)
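
A small sketch of that check: build the Gram matrix for a made-up sample under the Gaussian kernel (defined on the next slide) and confirm it has no negative eigenvalues. eigvalsh applies because G is symmetric.

    import numpy as np

    def gaussian_kernel(a, b, sigma=1.0):
        """K(a, b) = exp(-|a - b|^2 / (2 * sigma^2))."""
        return np.exp(-np.sum((a - b) ** 2) / (2 * sigma**2))

    # Gram matrix G_ij = K(x_i, x_j) for a made-up sample of size m = 4.
    X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [1.5, 1.5]])
    G = np.array([[gaussian_kernel(a, b) for b in X] for a in X])

    # sum_ij c_i c_j G_ij >= 0 for all c  <=>  all eigenvalues of G are >= 0.
    print(np.linalg.eigvalsh(G) >= -1e-12)   # all True for a valid kernel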

A Few Good Kernels
  • Dot product kernel: K(x1, x2) = ⟨x1, x2⟩
  • Polynomial kernel: K(x1, x2) = ⟨x1, x2⟩^d (monomials of degree d)
  • K(x1, x2) = (⟨x1, x2⟩ + 1)^d (all monomials of degree 1, 2, …, d)
  • Gaussian kernel: K(x1, x2) = exp(−|x1 − x2|² / 2σ²); radial basis functions
  • Sigmoid kernel: K(x1, x2) = tanh(⟨x1, x2⟩ + θ); neural networks
  • Establishing "kernel-hood" from first principles is non-trivial (implementations sketched below)
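
Minimal implementations of these four kernels (parameter defaults and test points are illustrative):

    import numpy as np

    def dot_kernel(a, b):
        return np.dot(a, b)

    def poly_kernel(a, b, d=2):
        """(<a, b> + 1)^d: all monomials up to degree d."""
        return (np.dot(a, b) + 1) ** d

    def gaussian_kernel(a, b, sigma=1.0):
        """exp(-|a - b|^2 / (2 * sigma^2))."""
        return np.exp(-np.sum((a - b) ** 2) / (2 * sigma**2))

    def sigmoid_kernel(a, b, theta=0.0):
        """tanh(<a, b> + theta); not positive definite for every parameter choice."""
        return np.tanh(np.dot(a, b) + theta)

    a, b = np.array([1.0, 2.0]), np.array([0.5, -1.0])
    print(dot_kernel(a, b), poly_kernel(a, b), gaussian_kernel(a, b), sigmoid_kernel(a, b))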

The Kernel Trick
  • "Given an algorithm which is formulated in terms of a positive definite kernel K1, one can construct an alternative algorithm by replacing K1 with another positive definite kernel K2"
  • SVMs can use the kernel trick

Using a Different Kernel in the Dual Optimization Problem
  • For example, use the polynomial kernel with d = 4 (including lower-order terms)
  • Maximize over α:  W(α) = Σᵢ αᵢ − ½ Σᵢ,ⱼ αᵢ αⱼ yᵢ yⱼ (⟨xᵢ, xⱼ⟩ + 1)⁴
  • Subject to: αᵢ ≥ 0 and Σᵢ αᵢ yᵢ = 0
  • Decision function: f(x) = sign(Σᵢ αᵢ yᵢ (⟨x, xᵢ⟩ + 1)⁴ + b)
  • So by the kernel trick, we just replace the inner products ⟨xᵢ, xⱼ⟩ and ⟨x, xᵢ⟩ with (⟨xᵢ, xⱼ⟩ + 1)⁴ and (⟨x, xᵢ⟩ + 1)⁴ (see the sketch below)
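
A minimal sketch of this kernel substitution with scikit-learn (assuming scikit-learn is installed; the XOR-style data are made up). With gamma=1 and coef0=1, SVC's polynomial kernel (gamma·⟨x, x'⟩ + coef0)^degree is exactly (⟨x, x'⟩ + 1)⁴.

    import numpy as np
    from sklearn.svm import SVC

    # XOR-style data: no linear separator exists in input space.
    X = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0], [1.0, 0.0]])
    y = np.array([-1, -1, 1, 1])

    # Large C approximates the hard margin; the kernel supplies the
    # (<x, x'> + 1)^4 inner products in the dual, never computing Phi.
    clf = SVC(kernel="poly", degree=4, gamma=1.0, coef0=1.0, C=1e6).fit(X, y)
    print(clf.predict(X))   # should recover y: [-1 -1  1  1]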

Exotic Kernels
  • Strings
  • Trees
  • Graphs
  • The hard part is establishing kernel-hood

Application: "Beautification Engine" (Leyvand et al., 2008)

Conclusion
  • SVMs find the optimal linear separator
  • The kernel trick makes SVMs non-linear learning algorithms

