SVM by Sequential Minimal Optimization (SMO)
SVM by Sequential Minimal Optimization (SMO)
Algorithm by John Platt
Lecture by David Page
CS 760: Machine Learning

Quick Review

The inner product (specifically, here, the dot product) of vectors x and z is defined as <x, z> = sum_i x_i z_i. Changing x to w and z to x, we may also write <w, x>, or wTx, or w · x. In our usage, x is the feature vector and w is the weight (or coefficient) vector. A line (or, more generally, a hyperplane) is written as wTx = b, or wTx - b = 0. A linear separator is written as sign(wTx - b).
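To make the notation concrete, here is a minimal Python sketch (not from the slides; the weights and threshold are illustrative values) of evaluating a linear separator sign(wTx - b):

    import numpy as np

    def predict(w, b, x):
        # Linear separator: +1 on one side of the hyperplane wTx = b, -1 on the other.
        # (We adopt the convention sign(0) -> +1.)
        return 1 if np.dot(w, x) - b >= 0 else -1

    w = np.array([2.0, -1.0])  # weight (coefficient) vector
    b = 0.5                    # threshold
    print(predict(w, b, np.array([1.0, 1.0])))  # prints 1, since wTx - b = 1 - 0.5 >= 0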
Quick Review (Continued)

Recall that in SVMs we want to maximize the margin subject to correctly separating the data; that is, we want to minimize ||w||^2 subject to y_i(wTx_i - b) >= 1 for every training example x_i. Here the y_i are the class values (+1 for positive and -1 for negative), so y_i(wTx_i - b) >= 1 says x_i is correctly labeled with room to spare. Recall that ||w||^2 is just wTw.
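The slide shows this optimization problem only as an image; a standard reconstruction (the 1/2 factor is conventional and does not change the minimizer) is:

    \begin{aligned}
    \min_{w,\,b}\quad & \tfrac{1}{2}\, w^T w \\
    \text{subject to}\quad & y_i \left( w^T x_i - b \right) \ge 1, \qquad i = 1, \dots, m
    \end{aligned}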
Recall Full Formulation

As the last lecture showed us, we can:
- solve the dual more efficiently (fewer unknowns),
- add a parameter C to allow some misclassifications, and
- replace xiTxj by a more general kernel term.
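The dual itself also appears on the slide only as an image. A standard reconstruction (assumed, with kernel K and m training examples) is:

    \begin{aligned}
    \max_{\alpha}\quad & \sum_{i=1}^{m} \alpha_i \;-\; \frac{1}{2} \sum_{i=1}^{m} \sum_{j=1}^{m} y_i y_j \,\alpha_i \alpha_j \, K(x_i, x_j) \\
    \text{subject to}\quad & 0 \le \alpha_i \le C, \qquad \sum_{i=1}^{m} \alpha_i y_i = 0
    \end{aligned}

The equality constraint sum_i alpha_i y_i = 0 is exactly the "sum over examples of example weight times example label" that SMO must maintain below.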
Intuitive Introduction to SMO

The perceptron learning algorithm is essentially doing the same thing: finding a linear separator by adjusting weights on misclassified examples. Unlike the perceptron, SMO has to maintain the sum over examples of example weight times example label. Therefore, when SMO adjusts the weight of one example, it must also adjust the weight of another.
An Old Slide: Perceptron as Classifier

The output for example x is sign(wTx). Candidate hypotheses: real-valued weight vectors w. Training: update w for each misclassified example x (target class t, predicted class o) by:

    w_i <- w_i + h(t - o) x_i
Here h is the learning-rate parameter. Let E be the error o - t. The update above can be rewritten as

    w <- w - hEx

So the final weight vector is a sum of weighted contributions from each example. The predictive model can therefore be rewritten as a weighted combination of examples rather than of features, where the weight on an example is the sum (over iterations) of the terms -hE.
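A minimal runnable sketch of this update rule (assumptions: NumPy, a fixed number of passes over the data, and predictions thresholded at 0):

    import numpy as np

    def train_perceptron(X, y, h=0.1, epochs=20):
        # Perceptron training: w <- w + h(t - o)x for each misclassified example,
        # which is the same as w <- w - hEx with E = o - t.
        w = np.zeros(X.shape[1])
        for _ in range(epochs):
            for x_i, t in zip(X, y):
                o = 1 if np.dot(w, x_i) >= 0 else -1  # predicted class
                if o != t:                            # misclassified
                    w += h * (t - o) * x_i
        return w

    # Toy linearly separable data with labels +1 / -1:
    X = np.array([[2.0, 1.0], [1.0, 2.0], [-1.0, -2.0], [-2.0, -1.0]])
    y = np.array([1, 1, -1, -1])
    w = train_perceptron(X, y)
    print(np.sign(X @ w))  # matches y after convergence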
Corresponding Use in Prediction

To use the perceptron, the prediction is wTx - b. We may treat -b as one more coefficient in w (and omit it here), and we may take the sign of this value. In the alternative view of the last slide, the prediction can be written as sign(sum_j c_j xjTx), where c_j is the accumulated weight on training example x_j.
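A minimal sketch (hypothetical helper; c holds the accumulated per-example weights c_j) of this example-weighted view of prediction:

    import numpy as np

    def predict_dual(c, X_train, x, b=0.0):
        # Prediction as a weighted combination of training examples:
        # sign(sum_j c_j * (xjTx) - b).
        s = sum(c_j * np.dot(x_j, x) for c_j, x_j in zip(c, X_train))
        return 1 if s - b >= 0 else -1

Replacing the dot product xjTx with a kernel K(x_j, x) gives the SVM prediction below.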
Or, revising the weights appropriately (setting c_j = alpha_j y_j), the prediction in SVM is sign(sum_j y_j alpha_j K(x_j, x) - b).

From Perceptron Rule to SMO Rule

Recall that the SVM optimization problem has the added requirement that sum_i alpha_i y_i = 0. Therefore, if we increase one alpha by an amount h, in either direction, then we have to change another alpha by an equal amount in the opposite direction (relative to class value). We can a…
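A minimal sketch (assumed variable names; the clipping of the alphas to [0, C] is omitted for brevity) of the compensating change this slide describes:

    def paired_update(alpha1, alpha2, y1, y2, h):
        # Increase alpha2 by h and compensate alpha1 so that
        # alpha1*y1 + alpha2*y2 (and hence sum_i alpha_i*y_i) is unchanged.
        alpha2_new = alpha2 + h
        alpha1_new = alpha1 - y1 * y2 * h  # y1, y2 in {+1, -1}, so y1*y2 = +/-1
        return alpha1_new, alpha2_new

    # Check the invariant on an example pair:
    a1, a2, y1, y2 = 0.5, 0.25, 1, -1
    n1, n2 = paired_update(a1, a2, y1, y2, 0.25)
    assert abs((y1 * n1 + y2 * n2) - (y1 * a1 + y2 * a2)) < 1e-12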
