Automatic Analysis of Facial Expressions: The State of the Art (.ppt)
Automatic Analysis of Facial Expressions: The State of the Art
Maja Pantic, Leon Rothkrantz

Presentation Outline
- Motivation
- Desired functionality and evaluation criteria
- Face detection
- Expression data extraction
- Classification
- Conclusions and future research

Motivation
- HCI: the hope is to achieve robust communication by recovering from the failure of one communication channel using information from another channel
- According to some estimates, the facial expression of the speaker accounts for 55% of the effect of the spoken message (with voice intonation contributing 38% and the verbal part just 7%)
- Behavioral science research: automation of the objective measurement of facial activity

Desired Functionality
- The human visual system is a good reference point
- Desired properties:
  - Works on images of people of any sex, age, and ethnicity
  - Robust to variation in lighting
  - Insensitive to hair style changes, presence of glasses, facial hair, partial occlusions
  - Can deal with rigid head motions
  - Is real-time
  - Capable of classifying expressions into multiple emotion categories
  - Able to learn the range of emotional expression of a particular person
  - Able to distinguish all possible facial expressions (probably impossible)

Overview
- Three basic problems need to be solved:
  - Face detection
  - Facial expression data extraction
  - Facial expression classification
- Both static images and image sequences have been used in the studies surveyed in the paper

Face Detection
In arbitrary images: A. Pentland et al.
- Detection in a single image:
  - Principal Component Analysis (PCA) is used to generate a "face space" from a set of sample images
  - A face map is created by calculating the distance between the local subimage and the face space at every location in the image
  - If the distance is smaller than a certain threshold, the presence of a face is
declared
- Detection in an image sequence:
  - Frame differencing is used
  - The difference image is thresholded to obtain motion blobs
  - Blobs are tracked and analyzed over time to determine whether the motion is caused by a person and to determine the head position

Face Detection (Continued)
In face images: holistic approaches (the face is detected as a whole unit)
M. Pantic, L. Rothkrantz
- Use frontal and profile face images
- Outer head boundaries are determined by analyzing the horizontal and vertical histograms of the frontal face image
- The face contour is obtained using an HSV color model based algorithm
(the face is extracted as the biggest object in the scene whose Hue parameter lies in the defined range)
- The profile contour is determined by the following procedure:
  - The Value component of the HSV color model is used to threshold the input image
  - The number of background pixels between the right edge of the image and the first "on" pixel is counted (this gives a vector that represents a discrete approximation of the contour curve)
  - Noise is removed by averaging
  - Local extrema correspond to points of interest (found by determining the zero crossings of the 1st derivative)

Face Detection (Continued)
Analytic approaches (the face is detected by first detecting some important facial features)
H. Kobayashi, F. Hara
- Brightness distribution data of the human face is obtained with a camera in monochrome mode
- An average of the brightness distribution data obtained from 10 subjects is calculated
- Irises are identified by computing the cross-correlation between the average image and the novel image
- The locations of the other features are determined using the relative locations of the facial features in the face

Template-based facial expression data extraction using static images
Edwards et al.
- Use Active Appearance Models (AAMs): a combined model of shape and gray-level appearance
- A training set of hand-labeled images, with landmark points marked at key positions to outline the main features
- PCA is applied to the shape and gray-level data separately, then applied again to a vector of concatenated shape and gray
level parameters
- The result is a description in terms of "appearance" parameters
- 80 appearance parameters sufficed to explain 98% of the variation in the 400 training images labeled with 122 points
- Given a new face image, they find the appearance parameter values that minimize the error between the new image and the synthesized AAM image

Feature-based facial expression data extraction using static images
M. Pantic, L. Rothkrantz
- A point-based face model is used: 19 points selected in the frontal-view image and 10 in the side-view image
- Face model features are defined as some geometric relationship between facial points, or as the image intensity in a small region defined relative to facial points (e.g. Feature 17 = distance KL)
- The neutral facial expression is analyzed first
- The positions of the facial points are determined using information from feature detectors
- Multiple feature detectors are used
for each facial feature localization and model feature extraction
- The result obtained from each detector is stored in a separate file
- The detector output is checked for accuracy
- After "inaccurate" results are discarded, those obtained by the highest-priority detector are selected for use in the classification stage

Template-based facial expression data extraction using image sequences
M. Black, Y. Yacoob
- Do not address the problem of initially locating the various facial features
- The motion of the various face regions is estimated using parameterized optical flow
- Estimates of deformation
and motion parameters (e.g. horizontal and vertical translation, divergence, curl) are derived

Feature-based facial expression data extraction using image sequences
Cohn et al. (the only surveyed method)
- Feature points in the first frame are manually marked with a mouse around facial landmarks
- A 13x13 flow window is centered around each point
- The hierarchical optical flow method of Lucas and Kanade is used to track the feature points through the image sequence
- The displacement of each point is calculated relative to the first frame
- The displacement of the feature points between the initial and peak frames is used for classification

Classification
- Two basic problems: defining a set of categories/classes, and choosing a classification mechanism
- People are not very good at this either: in one study, a trained observer could classify only 87% of the faces correctly
- Expressions can be classified in terms of the facial actions that cause them
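The distance-from-face-space test in the Pentland et al. detector surveyed above can be sketched in a few lines of numpy. This is a minimal illustration under stated assumptions, not the surveyed implementation: the function names are invented for the sketch, PCA is done via SVD on flattened grayscale patches, and the choice of threshold is left to the caller.

```python
import numpy as np

def build_face_space(samples, k):
    """Build a PCA 'face space' from flattened sample face images.

    samples: (n_samples, n_pixels) array, one flattened face per row.
    k: number of principal components ('eigenfaces') to keep.
    Returns the mean face and the top-k eigenfaces, shape (k, n_pixels).
    """
    mean = samples.mean(axis=0)
    centered = samples - mean
    # SVD of the centered data; rows of vt are the principal directions
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def distance_from_face_space(patch, mean, eigenfaces):
    """Reconstruction error of a flattened patch w.r.t. the face space.

    A small distance means the patch looks face-like; the detector
    declares a face wherever this falls below a chosen threshold.
    """
    centered = patch - mean
    coeffs = eigenfaces @ centered          # project onto the face space
    reconstruction = eigenfaces.T @ coeffs  # back-project into pixel space
    return float(np.linalg.norm(centered - reconstruction))
```

In the surveyed detector this distance is evaluated for the local subimage at every location, producing a face map that is then thresholded to declare face presence.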