Automatic Analysis of Facial Expressions: The State of the Art

By Maja Pantic and Leon Rothkrantz

Presentation Outline
- Motivation
- Desired functionality and evaluation criteria
- Face detection
- Expression data extraction
- Classification
- Conclusions and future research

Motivation
- HCI: the hope is to achieve robust communication by recovering from the failure of one communication channel using information from another channel
- According to some estimates, the facial expression of the speaker accounts for 55% of the effect of the spoken message (with voice intonation contributing 38% and the verbal part just 7%)
- Behavioral science research: automation of the objective measurement of facial activity

Desired Functionality
- The human visual system = a good reference point
- Desired properties:
  - Works on images of people of any sex, age, and ethnicity
  - Robust to variation in lighting
  - Insensitive to hairstyle changes and to the presence of glasses, facial hair, and partial occlusions
  - Can deal with rigid head motions
  - Is real-time
  - Capable of classifying expressions into multiple emotion categories
  - Able to learn the range of emotional expression of a particular person
  - Able to distinguish all possible facial expressions (probably impossible)

Overview
- Three basic problems need to be solved:
  1. Face detection
  2. Facial expression data extraction
  3. Facial expression classification
- Both static images and image sequences have been used in the studies surveyed in the paper

Face Detection
In arbitrary images: A. Pentland et al.

Detection in a single image (a sketch follows this list):
- Principal Component Analysis is used to generate a face space from a set of sample images
- A face map is created by calculating the distance between the local subimage and the face space at every location in the image
- If the distance is smaller than a certain threshold, the presence of a face is declared
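A minimal numpy sketch of this distance-from-face-space map. The window size, number of principal components, and threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def face_map(image, samples, k=20, win=16):
    """Distance-from-face-space at every window location (eigenfaces-style).

    image:   2-D grayscale array
    samples: (n, win*win) training face patches, one per row
    k, win:  number of components and window size (assumed values)
    """
    mean = samples.mean(axis=0)
    # PCA via SVD of the centered sample matrix; rows of vt span the face space
    _, _, vt = np.linalg.svd(samples - mean, full_matrices=False)
    basis = vt[:k]                                     # (k, win*win)

    h, w = image.shape
    dist = np.full((h - win + 1, w - win + 1), np.inf)
    for y in range(h - win + 1):
        for x in range(w - win + 1):
            patch = image[y:y + win, x:x + win].ravel() - mean
            proj = basis.T @ (basis @ patch)           # projection onto face space
            dist[y, x] = np.linalg.norm(patch - proj)  # residual = distance from face space
    return dist

# A face is declared wherever the residual falls below a threshold, e.g.:
# candidates = np.argwhere(face_map(img, train_patches) < THRESHOLD)
```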

Detection in an image sequence (sketched below):
- Frame differencing is used
- The difference image is thresholded to obtain motion blobs
- The blobs are tracked and analyzed over time to determine whether the motion is caused by a person and to determine the head position
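A hedged sketch of the frame-differencing step; the gray-level threshold and the use of scipy's connected-component labeling are assumptions for illustration.

```python
import numpy as np
from scipy import ndimage  # used here only for connected-component labeling

def motion_blobs(prev_frame, frame, thresh=25):
    """Threshold the frame difference and return (centroid, area) per blob.

    thresh is an assumed gray-level threshold; the paper does not give one.
    """
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    binary = diff > thresh                    # thresholded difference image
    labels, n = ndimage.label(binary)         # group "on" pixels into blobs
    # Tracking these (centroid, area) pairs over time is what lets the
    # system decide whether the motion comes from a person.
    return [(ndimage.center_of_mass(binary, labels, i),
             int((labels == i).sum())) for i in range(1, n + 1)]
```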

Face Detection (Continued)
In face images: holistic approaches (the face is detected as a whole unit)

M. Pantic, L. Rothkrantz
- Use a frontal and a profile face image
- The outer head boundaries are determined by analyzing the horizontal and vertical histograms of the frontal face image
- The face contour is obtained using an HSV color model based algorithm (the face is extracted as the biggest object in the scene having the Hue parameter in the defined range)
- The profile contour is determined by the following procedure (sketched after this list):
  - The Value component of the HSV color model is used to threshold the input image
  - The number of background pixels between the right edge of the image and the first "on" pixel is counted (this gives a vector that represents a discrete approximation of the contour curve)
  - Noise is removed by averaging
  - Local extrema correspond to points of interest (found by determining the zero crossings of the 1st derivative)
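A compact numpy sketch of that profile-contour procedure; the averaging-window width is an assumed value.

```python
import numpy as np

def profile_points(binary, smooth=5):
    """Points of interest along a profile contour.

    binary: 2-D boolean image, True where the thresholded face pixels are.
    smooth: assumed width of the averaging window used to remove noise.
    """
    h, w = binary.shape
    # Per row, count background pixels between the right edge and the first
    # "on" pixel: a discrete approximation of the contour curve.
    flipped = binary[:, ::-1]
    first_on = np.argmax(flipped, axis=1)            # 0 if the row is empty
    contour = np.where(flipped.any(axis=1), first_on, w).astype(float)

    # Remove noise by averaging with a moving window.
    contour = np.convolve(contour, np.ones(smooth) / smooth, mode="same")

    # Local extrema = zero crossings of the first derivative.
    d = np.diff(contour)
    crossings = np.where(np.sign(d[:-1]) * np.sign(d[1:]) < 0)[0] + 1
    return crossings  # row indices of the points of interest
```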

Analytic approaches (the face is detected by first detecting some important facial features)

H. Kobayashi, F. Hara
- Brightness distribution data of the human face is obtained with a camera in monochrome mode
- An average of the brightness distribution data obtained from 10 subjects is calculated
- The irises are identified by computing the cross-correlation between the average image and the novel image (sketched below)
- The locations of the other features are determined using the relative locations of the facial features within the face
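A plain normalized cross-correlation sweep illustrating the iris-localization step; returning a single best peak is an assumption made for brevity.

```python
import numpy as np

def locate_by_crosscorrelation(image, template):
    """Find where a template (e.g. an averaged iris patch) best matches."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.linalg.norm(t)
    best, best_pos = -1.0, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            win = image[y:y + th, x:x + tw]
            w = win - win.mean()
            denom = np.linalg.norm(w) * tnorm
            if denom == 0:
                continue
            score = float((w * t).sum() / denom)   # correlation in [-1, 1]
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best
```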

Template-Based Facial Expression Data Extraction Using Static Images

Edwards et al.
- Use Active Appearance Models (AAMs): a combined model of shape and gray-level appearance
- A training set of hand-labeled images, with landmark points marked at key positions to outline the main features
- PCA is applied to the shape and gray-level data separately, then applied again to a vector of the concatenated shape and gray-level parameters (a sketch of this two-level PCA follows the list)
- The result is a description in terms of "appearance" parameters
- 80 appearance parameters were sufficient to explain 98% of the variation in the 400 training images labeled with 122 points
- Given a new face image, they find the appearance parameter values that minimize the error between the new image and the synthesized AAM image
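A compact numpy sketch of the two-level PCA that yields appearance parameters. The 98% variance cutoff matches the slide; omitting the shape/texture weighting used in real AAMs is a simplifying assumption.

```python
import numpy as np

def pca(data, frac=0.98):
    """Return (mean, basis) keeping enough components for `frac` of variance."""
    mean = data.mean(axis=0)
    _, s, vt = np.linalg.svd(data - mean, full_matrices=False)
    var = s ** 2
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), frac)) + 1
    return mean, vt[:k]

def appearance_parameters(shapes, textures):
    """Two-level PCA: shape and gray level separately, then concatenated.

    shapes:   (n, 2 * n_points) landmark coordinates per image
    textures: (n, n_pixels) shape-normalized gray levels per image
    """
    s_mean, s_basis = pca(shapes)
    t_mean, t_basis = pca(textures)
    b_s = (shapes - s_mean) @ s_basis.T       # shape parameters
    b_t = (textures - t_mean) @ t_basis.T     # gray-level parameters
    combined = np.hstack([b_s, b_t])          # concatenated parameter vectors
    c_mean, c_basis = pca(combined)
    return (combined - c_mean) @ c_basis.T    # "appearance" parameters
```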

Feature-Based Facial Expression Data Extraction Using Static Images

M. Pantic, L. Rothkrantz
- A point-based face model is used: 19 points selected in the frontal-view image and 10 in the side-view image
- Face model features are defined as some geometric relationship between facial points, or as the image intensity in a small region defined relative to facial points (e.g. feature 17 = distance KL)
- The neutral facial expression is analyzed first
- The positions of the facial points are determined using information from feature detectors
- Multiple feature detectors are used for each facial feature localization and model feature extraction
- The result obtained from each detector is stored in a separate file
- The detector output is checked for accuracy
- After "inaccurate" results are discarded, those obtained by the highest-priority detector are selected for use in the classification stage (sketched below)
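A small sketch of the redundant-detector idea: every detector's output is kept, implausible results are discarded, and the surviving result from the highest-priority detector wins. The plausibility predicate and the priority ordering are hypothetical stand-ins.

```python
import numpy as np

def select_point(detections, plausible):
    """Pick one facial-point estimate from several redundant detectors.

    detections: list of (priority, (x, y)) pairs; lower priority value = better
    plausible:  predicate that rejects "inaccurate" detector results
    """
    accurate = [(p, pt) for p, pt in detections if plausible(pt)]
    if not accurate:
        return None                  # every detector failed the check
    return min(accurate)[1]          # highest-priority surviving result

def feature_17(point_k, point_l):
    """Example model feature: feature 17 = distance between points K and L."""
    return float(np.hypot(point_k[0] - point_l[0], point_k[1] - point_l[1]))
```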

Template-Based Facial Expression Data Extraction Using Image Sequences

M. Black, Y. Yacoob
- Do not address the problem of initially locating the various facial features
- The motion of the various face regions is estimated using parameterized optical flow
- Estimates of the deformation and motion parameters (e.g. horizontal and vertical translation, divergence, curl) are derived (sketched below)
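A hedged illustration of deriving translation, divergence, and curl from a parameterized flow model. The affine form is a standard choice for such models; the least-squares fitting shown here is an assumption, not the authors' exact procedure.

```python
import numpy as np

def affine_motion_parameters(xs, ys, us, vs):
    """Fit u = a0 + a1*x + a2*y, v = a3 + a4*x + a5*y to a sampled flow
    field and derive translation, divergence, and curl.

    xs, ys: pixel coordinates (relative to the region center)
    us, vs: optical-flow vectors measured at those pixels
    """
    A = np.column_stack([np.ones_like(xs, dtype=float), xs, ys])
    (a0, a1, a2), *_ = np.linalg.lstsq(A, us, rcond=None)
    (a3, a4, a5), *_ = np.linalg.lstsq(A, vs, rcond=None)
    return {
        "translation": (a0, a3),   # horizontal and vertical translation
        "divergence": a1 + a5,     # isotropic expansion/contraction
        "curl": a4 - a2,           # rotation in the image plane
    }
```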

Feature-Based Facial Expression Data Extraction Using Image Sequences

Cohn et al. (the only surveyed method)
- Feature points in the first frame are manually marked with a mouse around the facial landmarks
- A 13x13 flow window is centered around each point
- The hierarchical optical flow method of Lucas and Kanade is used to track the feature points through the image sequence (a single-window sketch follows)
- The displacement of each point is calculated relative to the first frame
- The displacements of the feature points between the initial and peak frames are used for classification
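A minimal single-level, single-iteration Lucas-Kanade step for one 13x13 window; a real tracker iterates this update and runs it over an image pyramid (the "hierarchical" part).

```python
import numpy as np

def lucas_kanade_step(frame0, frame1, point, half=6):
    """One Lucas-Kanade update for a point's displacement (13x13 window).

    Solves the 2x2 normal equations G d = b built from the spatial
    gradients (Ix, Iy) and the temporal difference It inside the window.
    """
    y, x = point
    win0 = frame0[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    win1 = frame1[y - half:y + half + 1, x - half:x + half + 1].astype(float)

    iy, ix = np.gradient(win0)          # spatial gradients
    it = win1 - win0                    # temporal gradient

    G = np.array([[np.sum(ix * ix), np.sum(ix * iy)],
                  [np.sum(ix * iy), np.sum(iy * iy)]])
    b = -np.array([np.sum(ix * it), np.sum(iy * it)])
    dx, dy = np.linalg.solve(G, b)      # assumes G is well conditioned
    return dx, dy                       # displacement estimate for this frame
```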

Classification

Two basic problems:
1. Defining a set of categories/classes
2. Choosing a classification mechanism

- People are not very good at classification either: in one study, a trained observer could classify only 87% of the faces correctly
- Expressions can be classified in terms of the facial actions that cause an expression, or in terms of "typical" emotions
- Facial muscle activity can be described by a set of codes
  - The codes are called Action Units (AUs); all possible, visually detectable facial changes can be described by a set of 44 AUs
  - These codes form the basis of the Facial Action Coding System (FACS), which provides a linguistic description for each code

Classification (Continued)
- Most of the studies perform emotion classification and use the following 6 basic categories: happiness, sadness, surprise, fear, anger, and disgust
- There is no agreement among psychologists on whether these are the right categories
- People rarely produce "pure" expressions (e.g. 100% happiness); blends are much more common

Template-Based Classification Using Static Images

Edwards et al.
- The Mahalanobis distance measure can be used for classification (a sketch follows):

  $d_i = (\mathbf{c} - \bar{\mathbf{c}}_i)^T C^{-1} (\mathbf{c} - \bar{\mathbf{c}}_i)$

  where $\mathbf{c}$ is the vector of appearance parameters for the new image, $\bar{\mathbf{c}}_i$ is the centroid of the multivariate distribution for class $i$, and $C$ is the within-class covariance matrix for all the training images
- Classification into the 6 basic categories plus neutral
- Correct recognition of 74% reported
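A direct numpy rendering of that nearest-centroid Mahalanobis rule.

```python
import numpy as np

def mahalanobis_classify(c, centroids, cov):
    """Assign appearance-parameter vector c to the class whose centroid is
    nearest in Mahalanobis distance d_i = (c - c_i)^T C^{-1} (c - c_i).

    centroids: (n_classes, dim) class means
    cov:       shared within-class covariance matrix C
    """
    cinv = np.linalg.inv(cov)
    diffs = centroids - c                        # one row per class
    d = np.einsum("ij,jk,ik->i", diffs, cinv, diffs)
    return int(np.argmin(d)), d                  # winning class, all distances
```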

Neural-Network-Based Classification Using Static Images

H. Kobayashi, F. Hara
- Used a 234x50x6 neural network trained off-line using backpropagation (a sketch of such a network follows the next slide)
- The input-layer units correspond to intensity values extracted from the input image along 13 vertical lines
- The output units correspond to the 6 basic emotion categories
- Average correct recognition rate: 85%

Neural-Network-Based Classification Using Static Images (Continued)

Zhang et al.
- Used a 680x7x7 neural network
- The output units represent the six basic emotion categories plus the neutral category
- The output units give the probability that the analyzed expression belongs to the corresponding emotion category
- Cross-validation was used for testing

J. Zhao, G. Kearney
- Used a 10x10x3 neural network
- The network was trained and tested on the whole data set, with a 100% recognition rate
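A bare-bones one-hidden-layer network of the 234x50x6 kind, trained by plain backpropagation. The sigmoid units, weight initialization, and learning rate are assumptions; only the layer sizes come from the slide.

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyMLP:
    """234x50x6 feed-forward network trained with plain backpropagation."""

    def __init__(self, n_in=234, n_hidden=50, n_out=6, lr=0.1):
        self.w1 = rng.normal(0, 0.1, (n_in, n_hidden))
        self.w2 = rng.normal(0, 0.1, (n_hidden, n_out))
        self.lr = lr

    @staticmethod
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def forward(self, x):
        self.h = self.sigmoid(x @ self.w1)       # hidden activations
        self.y = self.sigmoid(self.h @ self.w2)  # one output per emotion
        return self.y

    def backprop(self, x, target):
        y = self.forward(x)
        # Squared-error gradients through the sigmoid nonlinearities.
        delta2 = (y - target) * y * (1 - y)
        delta1 = (delta2 @ self.w2.T) * self.h * (1 - self.h)
        self.w2 -= self.lr * np.outer(self.h, delta2)
        self.w1 -= self.lr * np.outer(x, delta1)
```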

Rule-Based Classification Using Static Images

M. Pantic, L. Rothkrantz (the only surveyed method)
- Two-stage classification:
  1. Facial actions (each corresponding to one of the Action Units) are deduced from changes in the face geometry
     - Action Units are described in terms of face-model feature values (e.g. AU 28 = (both) lips sucked in = feature 17 is 0, where feature 17 = distance KL)
  2. The stage-1 classification results are used to classify the expression into one of the emotion categories
     - E.g. AU6 + AU12 + AU16 + AU25 = happiness
- The two-stage classification process allows "weighted emotion labels" (sketched below)
  - Assumption: each AU that is part of the AU-coded description of a "pure" emotional expression has the same influence on the intensity of that emotional expression
  - E.g. if the analysis of some image results in the activation of AU6, AU12, and AU16, the expression is classified as 75% happiness
- The system can distinguish 29 AUs
- Recognition rate: 92% for upper-face AUs and 86% for lower-face AUs
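A toy version of stage 2: score each emotion by the fraction of its AU-coded description detected in stage 1. The AU set for happiness comes from the slide; the surprise entry is assumed for illustration.

```python
# AU-coded descriptions of "pure" emotional expressions.
EMOTION_AUS = {
    "happiness": {6, 12, 16, 25},    # from the slide
    "surprise": {1, 2, 5, 26},       # assumed for illustration
}

def weighted_emotion_labels(active_aus):
    """active_aus: set of AU numbers deduced from face-geometry changes.

    Each AU in an emotion's description contributes equally, so a partial
    match yields a weighted label rather than a hard decision.
    """
    return {
        emotion: len(active_aus & aus) / len(aus)   # e.g. 3/4 -> 0.75
        for emotion, aus in EMOTION_AUS.items()
    }

# weighted_emotion_labels({6, 12, 16}) -> {'happiness': 0.75, 'surprise': 0.0}
```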

Template-Based Classification Using Image Sequences

Cohn et al.
- Classification in terms of Action Units
- Uses Discriminant Function Analysis
- Deals with each face region separately
- Used for classification only (i.e. all of the facial point displacements are used as input)
- Does not deal with image sequences containing several consecutive facial actions
- Recognition rate: 92% in the brow region, 88% in the eye region, 83% in the nose and mouth region

Rule-Based Classification Using Image Sequences

M. Black, Y. Yacoob (the only surveyed method)
- Mid- and high-level descriptions of facial actions are used
- The parameter values (e.g. translation, divergence) derived from the optical flow are thresholded
  - E.g. Div > 0.02 = expansion and Div < -0.02 = contraction; this is what the authors would call a mid-level predicate for the mouth (sketched below)
- High-level predicates are rules for classifying facial expressions
- Rules for detecting the beginning and the end of an expression use the results of applying the mid-level rules as input
  - E.g. beginning of surprise = raising brows and vertical expansion of the mouth; end of surprise = lowering brows and vertical contraction of the mouth
- The rules used for classification are not designed to deal with blends of emotional expressions (anger + fear was recognized as disgust)
- Recognition rate: 88%
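A toy version of the mid-level predicate layer: thresholded motion parameters become symbolic statements about a face region. The 0.02 divergence threshold is from the slide; the translation threshold and predicate names are assumptions.

```python
def mouth_predicates(params, div_t=0.02, trans_t=0.5):
    """params: dict with 'divergence' and 'v_translation' for the mouth region."""
    preds = set()
    if params["divergence"] > div_t:
        preds.add("mouth expansion")
    elif params["divergence"] < -div_t:
        preds.add("mouth contraction")
    if params["v_translation"] < -trans_t:
        preds.add("mouth upward motion")
    return preds

# A high-level rule then combines predicates across regions, e.g.
# beginning_of_surprise = ("raising brows" in brow_preds and
#                          "mouth expansion" in mouth_preds)
```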

Conclusions and Possible Directions for Future Research
- An active research area
- Most surveyed systems rely on a frontal view of the face and assume no facial hair or glasses
- None of the surveyed systems can distinguish all 44 AUs defined in FACS
- Most surveyed studies classify only into the basic emotion categories
- Some reported results are of little practical value
- The ability of the human visual system to "fill in" missing parts of the observed face (i.e. to deal with partial occlusions) has not been investigated

Conclusions and Possible Directions for Future Research (Continued)
- It is not at all clear whether the 6 "basic" emotion categories are universal
- Each person has his or her own range of expression intensity, so systems that start with a generic classification and then adapt may be of interest
- The human visual system's assignment of a higher priority to upper-face features (when interpreting facial expressions) has not been the subject of much research
- It is hard or impossible to compare reported results objectively without a well-defined, commonly used database of face images

References
- M. Pantic, L. Rothkrantz, "Automatic Analysis of Facial Expressions: The State of the Art", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 12, December 2000
- M. Pantic, L. Rothkrantz, "Expert System for Automatic Analysis of Facial Expressions", Image and Vision Computing, Vol. 18, No. 11, pp. 881-905, 2000
- M.J. Black, Y. Yacoob, "Recognizing Facial Expressions in Image Sequences Using Local Parameterized Models of Image Motion", Int'l J. Computer Vision, Vol. 25, No. 1, pp. 23-48, 1997
- J.F. Cohn, A.J. Zlochower, J.J. Lien, T. Kanade, "Feature-Point Tracking by Optical Flow Discriminates Subtle Differences in Facial Expression", Proc. Int'l Conf. Automatic Face and Gesture Recognition, pp. 396-401, 1998
- G.J. Edwards, T.F. Cootes, C.J. Taylor, "Face Recognition Using Active Appearance Models", Proc. European Conf. Computer Vision, Vol. 2, pp. 581-595, 1998
- G.J. Edwards, T.F. Cootes, C.J. Taylor, "Active Appearance Models", Proc. European Conf. Computer Vision, Vol. 2, pp. 484-498, 1998
- H. Kobayashi, F. Hara, "Facial Interaction between Animated 3D Face Robot and Human Beings", Proc. Int'l Conf. Systems, Man, and Cybernetics, pp. 3732-3737, 1997

Some YouTube Videos
- Real-time facial expression recognition (take 2)
- Facial expression recognition
- Facial expression mirroring
- Facial expression animation

