WO2019029261A1 - Micro-expression recognition method, device and storage medium - Google Patents

Micro-expression recognition method, device and storage medium

Info

Publication number
WO2019029261A1
WO2019029261A1 (PCT/CN2018/090990; CN2018090990W)
Authority
WO
WIPO (PCT)
Prior art keywords
expression
micro
feature information
video
preset
Prior art date
Application number
PCT/CN2018/090990
Other languages
French (fr)
Chinese (zh)
Inventor
袁晖
Original Assignee
深圳市科迈爱康科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市科迈爱康科技有限公司
Publication of WO2019029261A1 publication Critical patent/WO2019029261A1/en


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16 — Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 — Facial expression recognition

Definitions

  • the present application relates to the field of communications technologies, and in particular, to a micro-expression recognition method, apparatus, and storage medium.
  • micro-expressions generally last only 1/25 to 1/5 of a second. Although a subconscious micro-expression may last only a moment, it easily exposes a person's true emotions, so micro-expression recognition has extraordinary value for analyzing a person's true mental state. With the rapid development of computer vision, pattern recognition, and related disciplines, automatic micro-expression recognition technology has become quite mature; research on it has advanced greatly in recent years, and several standard micro-expression libraries have been established in China and abroad.
  • the micro-expression libraries used by current micro-expression recognition methods are established under unnatural conditions, such as deliberate expression suppression, which differ considerably from people's real-life scenarios and cannot accurately reflect the true state of micro-expressions. A micro-expression library built by capturing people's micro-expressions in their natural, real-life state is therefore needed, together with a recognition method, determined from that library, that better reflects the true state of micro-expressions.
  • the main purpose of the present application is to provide a micro-expression recognition method, device and storage medium, which aim to solve the technical problem that the actual situation of the micro-expression cannot be better reflected in the prior art.
  • the present application provides a micro-expression recognition method, the method comprising the following steps:
  • before the step of performing image recognition on the video to be recognized, obtaining a face in the to-be-identified video, and dividing the face according to a preset area, the method further includes:
  • the step of obtaining the face part in the to-be-identified video specifically includes:
  • the face portion is segmented to eliminate video segments that do not contain micro-expressions.
  • the step of extracting the expression feature information of each preset area in the to-be-identified video includes:
  • the contour feature information, the texture feature information, and the area feature information are respectively used as expression feature information corresponding to the preset region.
  • before the acquiring of the to-be-identified video, the method further includes:
  • a micro-expression model is established, and the micro-expression model is trained through the mapping relationship to form a preset micro-expression model.
  • the method further includes:
  • the establishing a mapping relationship between the micro-expression and the expression feature information includes:
  • before the step of performing expression recognition on the sample video, the method further includes:
  • the storing the mapping relationship to obtain a micro-expression library further includes:
  • the mapping relationship is stored according to the character type to obtain a micro-expression library for each type.
  • the present application further provides a micro-expression recognition device, including: a memory, a processor, and a micro-expression recognition program stored on the memory and operable on the processor; the program, when executed by the processor, implements the steps of the micro-expression recognition method described above.
  • the present application further provides a storage medium on which a micro-expression recognition program is stored; when the micro-expression recognition program is executed by a processor, the steps of the micro-expression recognition method described above are implemented.
  • the present invention performs image recognition on the video to be recognized, obtains the face in the to-be-identified video, and divides the face according to preset areas; extracts the expression feature information of each preset area from the to-be-identified video; and compares the expression feature information with the preset micro-expression model, determining the micro-expression in the to-be-identified video according to the comparison result. Because the video to be recognized in this embodiment is acquired in a natural state, and the expression feature information of each preset area of the face is extracted, the recognition of the micro-expression is more accurate and better reflects its true state.
  • FIG. 1 is a schematic structural diagram of a micro-expression recognition device in a hardware operating environment according to an embodiment of the present application;
  • FIG. 2 is a schematic flowchart of a first embodiment of a micro-expression recognition method according to the present application
  • FIG. 3 is a schematic flowchart of a second embodiment of a micro-expression recognition method according to the present application.
  • FIG. 4 is a schematic flow chart of a third embodiment of a micro-expression recognition method according to the present application.
  • FIG. 1 is a schematic structural diagram of a micro-expression recognition device in a hardware operating environment according to an embodiment of the present application.
  • the micro-expression recognition device may include a processor 1001, such as a CPU, a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005.
  • the communication bus 1002 is used to implement connection communication between these components.
  • the user interface 1003 can include a display; optionally, the user interface 1003 can also include a standard wired interface and a wireless interface.
  • the network interface 1004 can optionally include a standard wired interface, a wireless interface (such as a WI-FI interface).
  • the memory 1005 may be a high-speed RAM memory or a stable (non-volatile) memory, such as disk storage.
  • the memory 1005 can also optionally be a storage device independent of the aforementioned processor 1001.
  • the structure shown in FIG. 1 does not constitute a limitation of the micro-expression recognition device, which may include more or fewer components than those illustrated, combine some components, or use a different component arrangement.
  • the memory 1005, as a storage medium, may include an operating system, a network communication module, a user interface module, and a micro-expression recognition program.
  • the network interface 1004 is mainly used to connect to other servers for data communication with the other servers;
  • the user interface 1003 is mainly used for connecting to the user terminal and performing data communication with the user terminal;
  • the micro-expression recognition device calls, through the processor 1001, the micro-expression recognition program stored in the memory 1005, and performs the following operations:
  • processor 1001 may call the micro-expression recognition program stored in the memory 1005, and further perform the following operations:
  • processor 1001 may call the micro-expression recognition program stored in the memory 1005, and further perform the following operations:
  • the face portion is segmented to eliminate video segments that do not contain micro-expressions.
  • processor 1001 may call the micro-expression recognition program stored in the memory 1005, and further perform the following operations:
  • the contour feature information, the texture feature information, and the area feature information are respectively used as expression feature information corresponding to the preset region.
  • processor 1001 may call the micro-expression recognition program stored in the memory 1005, and further perform the following operations:
  • a micro-expression model is established, and the micro-expression model is trained through the mapping relationship to form a preset micro-expression model.
  • processor 1001 may call the micro-expression recognition program stored in the memory 1005, and further perform the following operations:
  • the establishing a mapping relationship between the micro-expression and the expression feature information includes:
  • processor 1001 may call the micro-expression recognition program stored in the memory 1005, and further perform the following operations:
  • the storing the mapping relationship to obtain a micro-expression library further includes:
  • the mapping relationship is stored according to the character type to obtain a micro-expression library for each type.
  • image recognition is performed on the video to be recognized, the face in the to-be-identified video is obtained, and the face is divided according to preset areas; the expression feature information of each preset area is extracted from the to-be-identified video; and the expression feature information is compared with the preset micro-expression model, with the micro-expression in the to-be-identified video determined according to the comparison result. Because the video to be recognized in this embodiment is acquired in a natural state, and the expression feature information of each preset area of the face is extracted, the recognition of the micro-expression is more accurate and better reflects its true state.
  • FIG. 2 is a schematic flowchart of a first embodiment of a micro-expression recognition method according to the present application.
  • the micro-expression recognition method comprises the following steps:
  • Step S10 performing image recognition on the video to be recognized, obtaining a face in the to-be-identified video, and dividing the face according to a preset area;
  • existing micro-expression libraries store micro-expressions collected in unnatural states, such as suppressed expressions, and therefore cannot fully reflect the true state of micro-expressions.
  • the micro-expression recognition method adopted in this embodiment uses micro-expressions in the natural state: the micro-expression library is established from micro-expressions collected in the natural state, and the micro-expressions to be recognized are identified using that library.
  • the micro-expressions used in this embodiment are collected in a natural state, rather than being collected in a suppressed unnatural state.
  • the to-be-identified video containing the micro-expression in the natural state is acquired, the expression feature information in the to-be-identified video is extracted, and the micro-expression in the to-be-identified video is identified according to the expression feature information.
  • the feature information of each part of the face will be extracted.
  • preset areas of the face that can display micro-expressions are selected in advance; the preset areas include the facial-features area, the nasolabial area, and the eyelid area. The video to be recognized is decomposed into a sequence of single-frame images, the face in the to-be-identified video is obtained, and the face is divided according to the preset areas, to facilitate the subsequent extraction of the expression feature information of each preset area.
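The frame-by-frame division described above can be sketched as follows. This is a minimal sketch: the band proportions used to place the three preset areas inside the face bounding box are illustrative assumptions, not values given in the application.

```python
def split_face_regions(face_box):
    """Split a detected face bounding box (x, y, w, h) into the three
    preset areas named by the method: facial features, nasolabial, eyelid.
    The proportions below are illustrative assumptions."""
    x, y, w, h = face_box
    return {
        # facial-features area: brows, eyes, nose, and mouth outline
        "facial_features": (x, y + int(0.15 * h), w, int(0.55 * h)),
        # eyelid area: horizontal band around the eyes
        "eyelid": (x, y + int(0.20 * h), w, int(0.15 * h)),
        # nasolabial area: folds around the nose and mouth corners
        "nasolabial": (x + int(0.15 * w), y + int(0.45 * h),
                       int(0.70 * w), int(0.35 * h)),
    }
```

Each returned rectangle stays inside the face box, so downstream feature extraction can index directly into the frame.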
  • the method further includes:
  • the external environment also affects the micro-expression: even with identical facial expression information, different environments still produce different micro-expressions. For example, if a person shows the same smile in two environments, in a bright environment with soft colors the smile represents a quiet, comfortable micro-expression, whereas in a dark, narrow, dirty environment the same smile represents a bitter, self-deprecating micro-expression. Therefore, this embodiment additionally extracts the environment feature information and combines it with the expression feature information to determine the micro-expression in the to-be-identified video more accurately.
  • the method further includes:
  • a micro-expression generally lasts 1/25 to 1/5 of a second, while the pre-acquired video to be recognized is generally long, making the fleeting micro-expression difficult to extract. Processing the video to be recognized into segments of 1 to 2 seconds does not damage the micro-expression segments and also makes it convenient to extract the expression feature information in the to-be-identified video.
  • the to-be-identified video includes other background environments. When extracting the facial expression feature information, the micro-expression is not prominent in the image, which affects the extraction effect. Therefore, after the environment feature information is extracted, the to-be-identified video is subjected to pre-processing including cropping and segmentation, and the to-be-identified video is converted into a micro-expression video of 1 to 2 seconds.
  • the video to be recognized is cropped according to the dimensions of the face: for example, a rectangular area centered on the nose, 1.5 times the face length long and 1.5 times the face width wide, is formed, and the images of the to-be-identified video are cropped to this rectangle to obtain a face video.
  • the face video is segmented, and the video segment that does not contain the micro-expression is removed, and the micro-expression video is obtained.
  • the micro-expression video of the face is obtained, which facilitates the subsequent extraction of the expression feature information.
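The nose-centered crop above can be sketched as a small coordinate computation. Clamping the rectangle to the frame boundaries is an added assumption, since the application does not say what happens when the face sits near a frame edge.

```python
def nose_centered_crop(nose, face_w, face_h, frame_w, frame_h):
    """Rectangle centered on the nose, 1.5x the face width wide and
    1.5x the face length long, clamped to the frame.
    Returns (left, top, right, bottom) pixel coordinates."""
    cx, cy = nose
    half_w = 1.5 * face_w / 2
    half_h = 1.5 * face_h / 2
    left = max(0, int(cx - half_w))
    top = max(0, int(cy - half_h))
    right = min(frame_w, int(cx + half_w))
    bottom = min(frame_h, int(cy + half_h))
    return left, top, right, bottom
```

Applying this rectangle to every frame yields the face video; segmentation then removes the frames without micro-expressions.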
  • Step S20 extracting expression feature information of each preset area from the to-be-identified video
  • the expression feature information is a set of data that reflects the process of micro-expression change, including the change duration and degree of change of each preset region of the face, such as the duration of eyebrow changes or the degree of change of the eye contour.
  • the human micro-expression is presented by all parts of the face.
  • a change in a single part cannot fully explain a person's micro-expression. For example, when "happy", a person not only raises the corners of the mouth: the cheeks also lift, wrinkles form, the eyelids contract, and "crow's feet" appear at the corners of the eyes. These parts change together to produce the "happy" micro-expression.
  • the parts affecting the human micro-expression mainly include the facial features area, the nasolabial area, and the eyelid area. Therefore, in the present embodiment, the above-mentioned part is selected as the preset area.
  • the to-be-identified video is clipped and segmented, and converted into a micro-expression video, and extracting the expression feature information in the micro-expression video is more convenient and quick.
  • the expression feature information is extracted, that is, the change duration of each preset area and the degree of change of each preset area are extracted.
  • Step S30 comparing the expression feature information with a preset micro-expression model, and determining a micro-expression in the to-be-identified video according to the comparison result.
  • a preset micro-expression model is established; when expression feature information is input into the preset micro-expression model, the model identifies the input expression feature information, obtains the micro-expression corresponding to it, and outputs that micro-expression; in other words, the micro-expression in the to-be-identified video is recognized.
  • image recognition is performed on the video to be recognized, the face in the to-be-identified video is obtained, and the face is divided according to preset areas; the expression feature information of each preset area is extracted from the to-be-identified video; and the expression feature information is compared with the preset micro-expression model, with the micro-expression in the to-be-identified video determined according to the comparison result. Because the video to be recognized in this embodiment is acquired in a natural state, and the expression feature information of each preset area of the face is extracted, the recognition of the micro-expression is more accurate and better reflects its true state.
  • FIG. 3 is a schematic flowchart of a second embodiment of the micro-expression recognition method according to the present application. Based on the embodiment shown in FIG. 2, a second embodiment of the micro-expression recognition method of the present application is proposed.
  • step S20 specifically includes:
  • Step S201 performing contour recognition on the facial features area, and acquiring contour feature information of the facial features area;
  • the facial features area is a main area that affects a human micro-expression, and the facial features area has a clear outline.
  • by performing contour recognition on the facial-features area, the contour feature information of that area can be acquired; the contour feature information includes the change duration of the facial-features contour and the degree of change of the contour.
  • the method for the contour recognition may be an edge detection algorithm, which is not limited in this embodiment.
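Since the embodiment leaves the edge-detection algorithm open, the following is a minimal stand-in: a gradient-threshold detector that marks intensity discontinuities in a grayscale patch. A production system would use a real detector such as Canny; the threshold value here is an arbitrary assumption.

```python
def edge_map(gray, threshold=40):
    """Mark pixels whose horizontal or vertical intensity gradient
    exceeds `threshold` in a 2D grayscale image (list of rows).
    A minimal stand-in for an edge detector such as Canny."""
    h, w = len(gray), len(gray[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            gx = abs(gray[y][x + 1] - gray[y][x])  # horizontal gradient
            gy = abs(gray[y + 1][x] - gray[y][x])  # vertical gradient
            if max(gx, gy) > threshold:
                edges[y][x] = 1
    return edges
```

Tracking how the marked contour pixels move across frames gives the change duration and degree of change of the facial-features contour.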
  • Step S202 performing texture analysis on the nasolabial region to obtain texture feature information of the nasolabial region
  • the nasolabial region is an important region affecting a person's micro-expression, and it has distinct texture; by performing texture analysis on the nasolabial region, its texture feature information can be obtained.
  • the texture characteristic information includes a duration of change of the nasolabial region and a degree of change of the nasolabial fold.
  • the method of the texture analysis may be a grayscale transformation or a binarization, which is not limited in this embodiment.
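A binarization-based texture measure, one of the options the embodiment names, might look like the sketch below: binarize each per-frame nasolabial patch, track the fraction of dark "fold" pixels over time, and derive the two stated features. The binarization threshold and the change-detection epsilon are illustrative assumptions.

```python
def texture_series(patches, threshold=128):
    """Binarize each per-frame nasolabial patch and return the fraction
    of dark (fold) pixels per frame."""
    series = []
    for patch in patches:
        dark = sum(1 for row in patch for px in row if px < threshold)
        total = sum(len(row) for row in patch)
        series.append(dark / total)
    return series

def change_features(series, eps=0.05):
    """Change duration (frame count) and degree of change (peak
    deviation) relative to the first frame's baseline value."""
    baseline = series[0]
    deviations = [abs(v - baseline) for v in series]
    duration = sum(1 for d in deviations if d > eps)
    return duration, max(deviations)
```

The duration counts how many frames deviate from the resting texture, and the degree is the peak deviation, matching the "change duration" and "degree of change" fields above.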
  • Step S203 acquiring area feature information of the eyelid region
  • the eyelid region is also an important region affecting a person's micro-expression, and the skin of the eyelid region is nearly planar; the area feature information of the eyelid region can be obtained by calculating the area of the eyelid region in each frame of the video, the area feature information including the change duration of the eyelid region and the degree of change of the eyelid area.
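The per-frame area computation can be sketched as below. The input format, a binary eyelid mask per frame, and the fixed frame rate are assumptions made for illustration.

```python
def eyelid_area_features(masks, fps=25):
    """Compute the eyelid area in each frame from binary eyelid masks,
    then derive the change duration (seconds) and the degree of change
    relative to the first frame's area."""
    areas = [sum(sum(row) for row in mask) for mask in masks]
    base = areas[0]
    changed = [abs(a - base) > 0 for a in areas]
    duration = sum(changed) / fps                      # seconds of changed area
    degree = max(abs(a - base) / base for a in areas)  # relative peak change
    return areas, duration, degree
```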
  • Step S204 The contour feature information, the texture feature information, and the area feature information are respectively used as the expression feature information corresponding to the preset region.
  • the contour feature information is used as the expression feature information of the facial-features area, the texture feature information as that of the nasolabial region, and the area feature information as that of the eyelid region.
  • the expression feature information is summarized, and the expression feature information of all the preset regions is summarized into the expression feature information corresponding to the to-be-identified video.
  • different extraction methods are used for the expression feature information of each preset region, which better captures the change process of the micro-expression and provides the basis for subsequently identifying the micro-expressions in the to-be-identified video from the expression feature information.
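The summarizing step can be sketched as assembling the per-region (duration, degree) pairs into one record for the whole video; the field names are illustrative assumptions, not part of the published method.

```python
def assemble_expression_features(contour, texture, area):
    """Combine per-region features into one record for the video.
    Each argument is a (change_duration, change_degree) pair for
    the facial-features, nasolabial, and eyelid regions respectively."""
    return {
        "facial_features": {"duration": contour[0], "degree": contour[1]},
        "nasolabial": {"duration": texture[0], "degree": texture[1]},
        "eyelid": {"duration": area[0], "degree": area[1]},
    }
```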
  • FIG. 4 is a schematic flowchart of a third embodiment of a micro-expression recognition method according to the present application. Based on the embodiment shown in FIG. 2, a third embodiment of the micro-expression recognition method of the present application is proposed.
  • before step S10, the method further includes:
  • Step S001 classify the sample video according to a character type in the sample video, where the character type includes at least one of each preset age group, gender, and identity type;
  • the embodiment provides a micro-expression recognition method, which is applied to the scenario of establishing the micro-expression library and establishing the preset micro-expression model. Pre-establishing a mapping relationship between the micro-expression and the expression feature information, and saving the mapping relationship to obtain a micro-expression library, wherein the micro-expression and the expression feature information in each set of mapping relationships are acquired according to the same sample video.
  • the sample video adopts a video containing a micro-expression in a natural state, and constructs a mapping relationship between the micro-expression and the expression feature information through the micro-expression contained therein.
  • the unique micro-expression in the sample video and the expression feature information corresponding to it are obtained, and the mapping relationship between the micro-expression and the expression feature information corresponding to the sample video is established.
  • the sample video is classified according to the type of the character, and the feature extraction of the classified video can finally obtain the micro-expression library of each character type.
  • for example, the sample videos are first divided into male and female sample videos according to the gender of the person in each video; features are then extracted separately from the male and female sample videos, finally yielding a male micro-expression library and a female micro-expression library.
  • the sample video is classified according to each preset age group and the identity of the person, and the micro-expression library of each preset age group and the micro-expression library of each identity are obtained.
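Grouping sample videos by character type to obtain a per-type micro-expression library can be sketched as follows; the record fields are assumptions made for illustration.

```python
from collections import defaultdict

def build_typed_libraries(samples, type_key="gender"):
    """Group sample mappings by a character-type field (gender here;
    age group or identity type work identically), so that each type
    yields its own micro-expression library."""
    libraries = defaultdict(list)
    for sample in samples:
        libraries[sample[type_key]].append(sample["mapping"])
    return dict(libraries)
```

Running the same grouping with `type_key="age_group"` or `type_key="identity"` would produce the per-age-group and per-identity libraries the paragraph describes.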
  • Step S002 performing expression recognition on the sample video to determine a micro-expression in the sample video
  • the micro-expression in the sample video is determined by performing expression recognition on the sample video.
  • the six basic expressions of the human being are pre-set as the expression category, so that the recognized expressions belong to the expression category, and the six basic expressions include surprise, disgust, anger, fear, sadness and pleasure. All human expressions can be included in these six basic expression ranges. Of course, it is also possible to subdivide the expression into more types of expressions as the expression category, which is not limited in this embodiment.
  • Step S003 Extracting environmental feature information in the sample video
  • since the environment influences the micro-expression, determining the micro-expression in the sample video from both the environment feature information and the expression feature information is more accurate.
  • Step S004 performing image recognition on the sample video, obtaining a face part in the sample video, and dividing a face part in the sample video according to a preset area;
  • Step S005 extract expression feature information of each preset area from the sample video.
  • image recognition is performed on the sample video, the face in the sample video is obtained, and the face is divided according to the preset areas; this process is consistent with performing image recognition on the video to be recognized and dividing its face according to the preset areas, and likewise the process of extracting the expression feature information of each preset area from the sample video is consistent with extracting the expression feature information of each preset area from the to-be-identified video.
  • Step S006 establishing a mapping relationship between the micro-expression and the expression feature information and environment feature information, and storing the mapping relationship to obtain a micro-expression library;
  • a mapping relationship between the micro-expression and the expression feature information and the environment feature information may be established.
  • the mapping relationship is stored to obtain a micro-expression library, and the micro-expression library includes a mapping relationship between the micro-expressions of the character types and the expression feature information and the environment feature information.
  • Step S007 Establish a micro-expression model, and train the micro-expression model through the mapping relationship to form a preset micro-expression model.
  • the data, such as the mapping relationships stored in the micro-expression library, are classified by character type, but the data stored within each class are scattered and lack system; training the model constructs a context for the data and completes their organization.
  • the micro-expression recognition can be conveniently and quickly performed on the to-be-identified video through the trained preset micro-expression model.
  • a micro-expression model is pre-established and trained with the mapping relationships to improve its recognition accuracy; since the mapping relationships are known relationships that have already been obtained, they can be used to train the micro-expression model until its recognition accuracy reaches a set standard, at which point it becomes the preset micro-expression model.
  • the specific process of training the micro-expression model by using the mapping relationship to form a preset micro-expression model is: inputting a set of mapping relationships in the micro-expression model, the micro-expression model according to the Deriving the identification result of the sample video by using the environment feature information and the expression feature information in the mapping relationship, and comparing the recognition result with the micro-expression in the mapping relationship to obtain a comparison result;
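The training loop in this paragraph can be sketched as follows. The memorizing `LookupModel` is a toy stand-in with an assumed predict/update interface, not the application's actual model; the accuracy standard of 0.9 is likewise an assumption.

```python
class LookupModel:
    """Toy memorizing model standing in for the micro-expression model."""
    def __init__(self):
        self.table = {}

    def predict(self, features):
        return self.table.get(features, "unknown")

    def update(self, features, micro_expression):
        self.table[features] = micro_expression

def train_until_standard(model, mappings, standard=0.9, max_rounds=50):
    """Feed each stored mapping (features -> micro-expression) to the
    model, compare its recognition result with the labelled
    micro-expression, and keep training until accuracy reaches the
    preset standard."""
    for _ in range(max_rounds):
        correct = 0
        for features, micro_expression in mappings:
            if model.predict(features) == micro_expression:
                correct += 1
            else:
                model.update(features, micro_expression)  # adjust on a miss
        if correct / len(mappings) >= standard:
            return True  # standard reached: becomes the preset model
    return False
```

Each `features` key stands for the combined environment and expression feature information of one mapping relationship.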
  • the recognition accuracy after this training may still not reach the standard, in which case the model cannot yet become the preset micro-expression model and only a trial model is obtained. Therefore, when the trial model is used to perform initial micro-expression recognition on a trial video, the trial model is trained a second time with the mapping relationship corresponding to the trial video, so that its recognition accuracy can reach the standard.
  • the steps of the secondary training specifically include:
  • if the output judgment result is true, the connection weight of the trial model is increased, and the correspondence among the environment feature information, the expression feature information, and the micro-expression in the trial video is established; the correspondence is saved in the micro-expression library, thereby expanding the micro-expression library;
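One secondary-training step as described, confirm the judgment, strengthen the connection weight, and expand the library, can be sketched as below. The weight increment and the data shapes are illustrative assumptions.

```python
class TrialModel:
    """Minimal trial model holding a connection weight and the library
    it expands during secondary training; an illustrative sketch, not
    the application's actual model."""
    def __init__(self):
        self.weight = 1.0
        self.library = []

    def secondary_train(self, env_info, expr_info, micro_expression, judged_true):
        # When the output judgment is confirmed true, increase the
        # connection weight and store the new correspondence, thereby
        # expanding the micro-expression library.
        if judged_true:
            self.weight += 0.1
            self.library.append((env_info, expr_info, micro_expression))
```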
  • in this embodiment, sample videos containing micro-expressions in the natural state are obtained and classified by character type; the environment feature information and expression feature information of the sample videos are extracted; mapping relationships between the micro-expressions and the environment and expression feature information are established; a micro-expression library containing the mapping relationships of each preset type and a micro-expression model are established; and the micro-expression model is trained with the mapping relationships to improve its recognition accuracy, so that micro-expressions are recognized by the preset micro-expression model.
  • the embodiment of the present application further provides a storage medium, where the micro-expression recognition program is stored, and when the micro-expression recognition program is executed by the processor, the following operations are implemented:
  • when the micro-expression recognition program is executed by the processor, the following operations are also implemented:
  • when the micro-expression recognition program is executed by the processor, the following operations are also implemented:
  • the face portion is segmented to eliminate video segments that do not contain micro-expressions.
  • when the micro-expression recognition program is executed by the processor, the following operations are also implemented:
  • the contour feature information, the texture feature information, and the area feature information are respectively used as expression feature information corresponding to the preset region.
  • when the micro-expression recognition program is executed by the processor, the following operations are also implemented:
  • a micro-expression model is established, and the micro-expression model is trained through the mapping relationship to form a preset micro-expression model.
  • when the micro-expression recognition program is executed by the processor, the following operations are also implemented:
  • the establishing a mapping relationship between the micro-expression and the expression feature information includes:
  • when the micro-expression recognition program is executed by the processor, the following operations are also implemented:
  • the storing the mapping relationship to obtain a micro-expression library further includes:
  • the mapping relationship is stored according to the character type to obtain a micro-expression library for each type.
  • image recognition is performed on the video to be recognized, the face in the to-be-identified video is obtained, and the face is divided according to preset areas; the expression feature information of each preset area is extracted from the to-be-identified video; and the expression feature information is compared with the preset micro-expression model, with the micro-expression in the to-be-identified video determined according to the comparison result. Because the video to be recognized in this embodiment is acquired in a natural state, and the expression feature information of each preset area of the face is extracted, the recognition of the micro-expression is more accurate and better reflects its true state.
  • the method of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation.
  • the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and includes a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

Disclosed in the present application are a micro-expression recognition method, a device, and a storage medium. The method comprises: performing image recognition on a video to be recognized, obtaining a face portion in the video to be recognized, and dividing the face portion according to preset regions; extracting, from the video to be recognized, expression feature information of each preset region; and comparing the expression feature information with a preset micro-expression model, and determining, according to the comparison result, the micro-expression in the video to be recognized.

Description

Micro-expression recognition method, device and storage medium
Technical Field
The present application relates to the field of communications technologies, and in particular, to a micro-expression recognition method, apparatus, and storage medium.
Background
People express their inner feelings to others by making expressions. Between these different expressions, or within a single expression, the face "leaks" other information through micro-expressions. A micro-expression generally lasts only 1/25 to 1/5 of a second; although a subconscious micro-expression may last only an instant, it easily exposes a person's true emotions. Micro-expression recognition is therefore extraordinarily useful for analyzing people's true mental state. With the rapid development of computer vision, pattern recognition, and related disciplines, automatic micro-expression recognition technology has matured considerably; research on it has advanced greatly in recent years, and several standard micro-expression libraries have been established at home and abroad.
However, the micro-expression libraries used by current micro-expression recognition methods are all built under unnatural conditions such as expression suppression, which differ considerably from people's actual life scenes and cannot accurately reflect the true state of micro-expressions. Therefore, a micro-expression library built by capturing people's micro-expressions in real-life states is needed, together with a recognition method, based on that library, that better reflects the true state of micro-expressions.
Summary
The main purpose of the present application is to provide a micro-expression recognition method, device, and storage medium, aiming to solve the technical problem in the prior art that the true state of micro-expressions cannot be well reflected.
To achieve the above objective, the present application provides a micro-expression recognition method, the method comprising the following steps:
performing image recognition on a video to be recognized, obtaining a face portion in the video to be recognized, and dividing the face portion according to preset regions;
extracting expression feature information of each preset region from the video to be recognized;
comparing the expression feature information with a preset micro-expression model, and determining the micro-expression in the video to be recognized according to the comparison result.
In an embodiment, before the step of performing image recognition on the video to be recognized, obtaining the face in the video to be recognized, and dividing the face according to the preset regions, the method further includes:
extracting environment feature information of the video to be recognized;
correspondingly, the comparing of the expression feature information with the preset micro-expression model specifically includes:
comparing the expression feature information and the environment feature information together with the preset micro-expression model.
In an embodiment, the step of obtaining the face portion in the video to be recognized specifically includes:
cropping the video to be recognized, retaining the face portion in the video to be recognized;
segmenting the face portion, and discarding video segments that do not contain micro-expressions.
In an embodiment, the step of extracting the expression feature information of each preset region from the video to be recognized specifically includes:
performing contour recognition on the facial-features region to obtain contour feature information of the facial-features region;
performing texture analysis on the nasolabial-fold region to obtain texture feature information of the nasolabial-fold region;
obtaining area feature information of the eyelid region;
using the contour feature information, the texture feature information, and the area feature information as the expression feature information of the corresponding preset regions, respectively.
In an embodiment, before the acquiring of the video to be recognized, the method further includes:
performing expression recognition on a sample video to determine the micro-expression in the sample video;
performing image recognition on the sample video, obtaining the face in the sample video, and dividing the face in the sample video according to the preset regions;
extracting expression feature information of each preset region from the sample video;
establishing a mapping relationship between the micro-expression and the expression feature information, and storing the mapping relationship to obtain a micro-expression library;
establishing a micro-expression model, and training the micro-expression model with the mapping relationship to form the preset micro-expression model.
In an embodiment, after the step of performing expression recognition on the sample video to determine the micro-expression in the sample video, the method further includes:
extracting environment feature information from the sample video;
correspondingly, the establishing of the mapping relationship between the micro-expression and the expression feature information specifically includes:
establishing a mapping relationship among the micro-expression, the expression feature information, and the environment feature information.
In an embodiment, before the step of performing expression recognition on the sample video, the method further includes:
classifying the sample video according to the character type in the sample video, the character type including at least one of preset age group, gender, and identity type;
correspondingly, the storing of the mapping relationship to obtain the micro-expression library further includes:
storing the mapping relationship by character type to obtain a micro-expression library of each type.
In addition, to achieve the above objective, the present application further provides a micro-expression recognition device, comprising: a memory, a processor, and a micro-expression recognition program stored in the memory and executable on the processor, wherein the micro-expression recognition program, when executed by the processor, implements the steps of the micro-expression recognition method described above.
In addition, to achieve the above objective, the present application further provides a storage medium on which a micro-expression recognition program is stored, wherein the micro-expression recognition program, when executed by a processor, implements the steps of the micro-expression recognition method described above.
In the present application, image recognition is performed on a video to be recognized to obtain the face in the video, and the face is divided according to preset regions; expression feature information of each preset region is extracted from the video; the expression feature information is compared with a preset micro-expression model, and the micro-expression in the video is determined according to the comparison result. Since the video to be recognized is acquired in a natural state and the expression feature information of each preset region of the face is extracted, recognition of the micro-expression is more accurate and better reflects its true state.
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of the micro-expression recognition device in a hardware operating environment according to an embodiment of the present application;
FIG. 2 is a schematic flowchart of the first embodiment of the micro-expression recognition method of the present application;
FIG. 3 is a schematic flowchart of the second embodiment of the micro-expression recognition method of the present application;
FIG. 4 is a schematic flowchart of the third embodiment of the micro-expression recognition method of the present application.
The implementation, functional features, and advantages of the present application will be further described with reference to the embodiments and the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit it.
Referring to FIG. 1, FIG. 1 is a schematic structural diagram of the micro-expression recognition device in a hardware operating environment according to an embodiment of the present application.
As shown in FIG. 1, the micro-expression recognition device may include a processor 1001 (for example, a CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used to implement connection and communication among these components. The user interface 1003 may include a display; optionally, it may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a Wi-Fi interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory, such as a disk memory; optionally, it may also be a storage device independent of the aforementioned processor 1001.
Those skilled in the art will appreciate that the structure shown in FIG. 1 does not constitute a limitation of the micro-expression recognition device, which may include more or fewer components than shown, a combination of some components, or a different arrangement of components.
As shown in FIG. 1, the memory 1005, as a storage medium, may include an operating system, a network communication module, a user interface module, and a micro-expression recognition program.
In the micro-expression recognition device shown in FIG. 1, the network interface 1004 is mainly used to connect to other servers for data communication with them, and the user interface 1003 is mainly used to connect to a user terminal for data communication with it. The micro-expression recognition device calls, through the processor 1001, the micro-expression recognition program stored in the memory 1005 and performs the following operations:
performing image recognition on a video to be recognized, obtaining a face portion in the video to be recognized, and dividing the face portion according to preset regions;
extracting expression feature information of each preset region from the video to be recognized;
comparing the expression feature information with a preset micro-expression model, and determining the micro-expression in the video to be recognized according to the comparison result.
Further, the processor 1001 may call the micro-expression recognition program stored in the memory 1005 and further perform the following operations:
extracting environment feature information of the video to be recognized;
correspondingly, the comparing of the expression feature information with the preset micro-expression model specifically includes:
comparing the expression feature information and the environment feature information together with the preset micro-expression model.
Further, the processor 1001 may call the micro-expression recognition program stored in the memory 1005 and further perform the following operations:
cropping the video to be recognized, retaining the face portion in the video to be recognized;
segmenting the face portion, and discarding video segments that do not contain micro-expressions.
Further, the processor 1001 may call the micro-expression recognition program stored in the memory 1005 and further perform the following operations:
performing contour recognition on the facial-features region to obtain contour feature information of the facial-features region;
performing texture analysis on the nasolabial-fold region to obtain texture feature information of the nasolabial-fold region;
obtaining area feature information of the eyelid region;
using the contour feature information, the texture feature information, and the area feature information as the expression feature information of the corresponding preset regions, respectively.
Further, the processor 1001 may call the micro-expression recognition program stored in the memory 1005 and further perform the following operations:
performing expression recognition on a sample video to determine the micro-expression in the sample video;
performing image recognition on the sample video, obtaining the face in the sample video, and dividing the face in the sample video according to the preset regions;
extracting expression feature information of each preset region from the sample video;
establishing a mapping relationship between the micro-expression and the expression feature information, and storing the mapping relationship to obtain a micro-expression library;
establishing a micro-expression model, and training the micro-expression model with the mapping relationship to form the preset micro-expression model.
Further, the processor 1001 may call the micro-expression recognition program stored in the memory 1005 and further perform the following operations:
extracting environment feature information from the sample video;
correspondingly, the establishing of the mapping relationship between the micro-expression and the expression feature information specifically includes:
establishing a mapping relationship among the micro-expression, the expression feature information, and the environment feature information.
Further, the processor 1001 may call the micro-expression recognition program stored in the memory 1005 and further perform the following operations:
classifying the sample video according to the character type in the sample video, the character type including at least one of preset age group, gender, and identity type;
correspondingly, the storing of the mapping relationship to obtain the micro-expression library further includes:
storing the mapping relationship by character type to obtain a micro-expression library of each type.
In this embodiment, image recognition is performed on the video to be recognized to obtain the face in it, and the face is divided according to preset regions; expression feature information of each preset region is extracted from the video; the expression feature information is compared with a preset micro-expression model, and the micro-expression in the video is determined according to the comparison result. Since the video to be recognized is acquired in a natural state and the expression feature information of each preset region of the face is extracted, recognition of the micro-expression is more accurate and better reflects its true state.
Based on the above hardware structure, embodiments of the micro-expression recognition method of the present application are proposed.
Referring to FIG. 2, FIG. 2 is a schematic flowchart of the first embodiment of the micro-expression recognition method of the present application.
In the first embodiment, the micro-expression recognition method includes the following steps:
Step S10: performing image recognition on a video to be recognized, obtaining the face in the video to be recognized, and dividing the face according to preset regions.
It can be understood that the micro-expression libraries used by common micro-expression recognition methods store micro-expressions captured in unnatural states, such as when expressions are suppressed, and therefore cannot fully reflect the true state of micro-expressions. To overcome this shortcoming, the micro-expression recognition method of this embodiment uses micro-expressions in a natural state: a micro-expression library is built from such micro-expressions, and the library is then used to recognize the micro-expressions to be identified. The obvious difference is that the micro-expressions used in this embodiment are all collected in a natural state rather than in a suppressed, unnatural state.
To recognize the true state of a micro-expression, a video to be recognized that contains micro-expressions in a natural state is acquired, the expression feature information in the video is extracted, and the micro-expression in the video is identified according to that expression feature information.
It should be understood that, to extract the expression feature information in the video to be recognized, feature information of each part of the face is extracted. In this embodiment, preset regions of the face that can exhibit micro-expressions are selected in advance as these parts; the preset regions include the facial-features region, the nasolabial-fold region, and the eyelid region. Through image recognition, the video to be recognized is decomposed into consecutive single-frame images, the face portion in the video is obtained, and the face portion is divided according to the preset regions, to facilitate the subsequent extraction of the expression feature information of each preset region.
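As a purely illustrative sketch (not part of the claimed method), dividing the detected face into the preset regions could be expressed as computing a bounding box per region from facial landmark points supplied by an upstream detector. All names and coordinates below are assumptions introduced for illustration.

```python
# Hypothetical sketch: derive one axis-aligned bounding box per preset
# region (facial features, nasolabial folds, eyelids) from landmark points.

def region_box(points, margin=2):
    """Axis-aligned bounding box around a set of (x, y) landmark points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs) - margin, min(ys) - margin,
            max(xs) + margin, max(ys) + margin)

def divide_face(landmarks):
    """Map each preset-region name to a bounding box.

    `landmarks` is a dict of region name -> list of (x, y) points; the
    point sets are assumed to come from an external landmark detector.
    """
    return {name: region_box(pts) for name, pts in landmarks.items()}
```

A downstream step would then crop each frame to these boxes before per-region feature extraction.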
To describe the micro-expression of the video to be recognized more accurately, before step S10 the method further includes:
extracting environment feature information of the video to be recognized.
It should be noted that the external environment also influences micro-expressions. Even identical expression feature information can yield different micro-expressions in different environments. For example, a person may smile in two environments: in a bright, softly colored environment the smile represents a calm, relaxed micro-expression, whereas in a dark, cramped, dirty environment the same smile represents a wry, self-mocking one. Therefore, this embodiment also extracts environment feature information and combines it with the expression feature information to determine the micro-expression in the video to be recognized more accurately.
To extract the expression feature information conveniently, after step S10 the method further includes:
cropping the video to be recognized according to the position of the face in it, retaining the face region;
segmenting the cropped video, and discarding video segments that do not contain micro-expressions.
It can be understood that a micro-expression generally lasts 1/25 to 1/5 of a second, while the pre-acquired video to be recognized is usually much longer, making it difficult to extract the fleeting micro-expression. Processing the video into clips 1 to 2 seconds long preserves the micro-expression segments and also makes it easier to extract the expression feature information. Moreover, besides the face, the video contains other background; when extracting the expression feature information, the micro-expression is then not prominent enough in the frame, which degrades extraction. Therefore, after the environment feature information is extracted, the video to be recognized is preprocessed by cropping and segmentation, converting it into micro-expression videos of 1 to 2 seconds.
First, the video to be recognized is cropped according to the length and width of the face. For example, a rectangular region is formed centered on the nose, with a length of 1.5 times the face length and a width of 1.5 times the face width, and the frames of the video are cropped to this rectangle to obtain a face video.
Second, the face video is segmented, and segments that do not contain micro-expressions are discarded, yielding the micro-expression video.
At this point, a micro-expression video of the face is obtained, which facilitates the subsequent extraction of the expression feature information.
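The cropping rule above (a rectangle centered on the nose, 1.5 times the face length long and 1.5 times the face width wide) can be sketched as simple arithmetic. The function name and the clamping to frame bounds are illustrative assumptions, not taken from the patent.

```python
# Sketch of the described cropping rule: a scale-1.5 rectangle centered on
# the nose, clamped so the crop box stays inside the video frame.

def crop_rect(nose_xy, face_w, face_h, frame_w, frame_h, scale=1.5):
    """Return (x0, y0, x1, y1) of the crop box, clamped to the frame."""
    cx, cy = nose_xy
    half_w = face_w * scale / 2
    half_h = face_h * scale / 2
    x0 = max(0, int(cx - half_w))
    y0 = max(0, int(cy - half_h))
    x1 = min(frame_w, int(cx + half_w))
    y1 = min(frame_h, int(cy + half_h))
    return x0, y0, x1, y1
```

Applying the same box to every frame of a clip keeps the face fixed in the cropped video.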
Step S20: extracting expression feature information of each preset region from the video to be recognized.
It can be understood that the expression feature information is a set of data that reflects the course of a micro-expression, including the change duration and degree of change of each preset region of the face, such as how long the eyebrows change and how much the eye contour changes.
It should be noted that a human micro-expression is presented jointly by the parts of the face; the change of a single part cannot fully describe it. When "happy", for example, a person does not merely raise the corners of the mouth: the mouth corners turn up, the cheeks lift and wrinkle, the eyelids contract, and crow's feet form at the corners of the eyes. These parts change together to produce the "happy" micro-expression. The parts that mainly influence a person's micro-expression are the facial-features region, the nasolabial-fold region, and the eyelid region, so these are selected as the preset regions in this embodiment.
In a specific implementation, the video to be recognized has been cropped and segmented into a micro-expression video, in which extracting the expression feature information is more convenient and faster. For each preset region, the expression feature information is extracted, that is, the change duration and the degree of change of the region.
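The two quantities named above can be sketched for one region as follows: given a per-frame scalar measurement (for example a contour length or an area), the change duration is how long the signal deviates from its resting baseline, and the degree of change is the largest deviation. The baseline and threshold choices here are illustrative assumptions, not specified by the patent.

```python
# Hedged sketch of per-region "change duration" and "degree of change".

def change_features(values, fps, baseline=None, threshold=0.05):
    """Return (duration_seconds, degree_of_change) for one region.

    `values` is one scalar measurement per frame; the clip is assumed to
    start at rest, so the first frame serves as the default baseline.
    """
    if baseline is None:
        baseline = values[0]
    deviations = [abs(v - baseline) for v in values]
    changed = [d > threshold * abs(baseline) for d in deviations]
    duration = sum(changed) / fps   # seconds spent away from baseline
    degree = max(deviations)        # largest deviation from baseline
    return duration, degree
```

At 25 fps, a micro-expression of 1/25 to 1/5 of a second spans roughly 1 to 5 frames, so the duration resolution of this sketch is one frame period.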
Step S30: comparing the expression feature information with a preset micro-expression model, and determining the micro-expression in the video to be recognized according to the comparison result.
It should be noted that, before micro-expression recognition is performed on the video to be recognized, a preset micro-expression model is established. Expression feature information is input into the model, which recognizes the input, obtains the micro-expression corresponding to it, and outputs that micro-expression, thereby recognizing the micro-expression in the video to be recognized.
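The patent does not fix the internals of the preset micro-expression model. As one hedged illustration only, the comparison could be a nearest-neighbor lookup over the stored feature vectors of the micro-expression library; the labels, vectors, and distance choice below are all assumptions.

```python
# Illustrative comparison step: return the library label whose stored
# feature vector is closest (squared Euclidean distance) to the input.

def recognize(features, model):
    """`model` maps micro-expression label -> stored feature vector."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist2(features, model[label]))
```

A trained classifier (for example one fitted on the mapping relationships of the library) could replace this lookup without changing the surrounding flow.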
In this embodiment, image recognition is performed on the video to be recognized to obtain the face in it, and the face is divided according to preset regions; expression feature information of each preset region is extracted from the video; the expression feature information is compared with a preset micro-expression model, and the micro-expression in the video is determined according to the comparison result. Since the video to be recognized is acquired in a natural state and the expression feature information of each preset region of the face is extracted, recognition of the micro-expression is more accurate and better reflects its true state.
Referring to FIG. 3, FIG. 3 is a schematic flowchart of the second embodiment of the micro-expression recognition method of the present application. Based on the embodiment shown in FIG. 2, the second embodiment of the micro-expression recognition method of the present application is proposed.
In the second embodiment, step S20 specifically includes:
Step S201: performing contour recognition on the facial-features region to obtain contour feature information of the facial-features region.
It can be understood that the facial-features region is the main region influencing a person's micro-expression, and it has clear contours. By performing contour recognition on it, contour feature information of the region can be obtained, including the change duration of the contours and the degree of change of the contours. The contour recognition method may be an edge detection algorithm, which is not limited in this embodiment.
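The text names edge detection as one admissible contour-recognition method without fixing an algorithm. A minimal NumPy sketch (not the patent's actual algorithm): threshold the image gradient magnitude to obtain an edge mask, then summarize the mask as a scalar contour feature whose per-frame values can feed the duration/degree extraction.

```python
import numpy as np

def contour_feature(gray, thresh=0.4):
    """Fraction of pixels whose gradient magnitude exceeds `thresh`.

    `gray` is a 2-D grayscale image of the facial-features region.
    A Canny-style detector could replace this simple gradient threshold.
    """
    gy, gx = np.gradient(gray.astype(float))  # per-axis finite differences
    mag = np.hypot(gx, gy)                    # gradient magnitude
    return float((mag > thresh).mean())
```
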
Step S202: performing texture analysis on the nasolabial-fold region to acquire texture feature information of the nasolabial-fold region;

It can be understood that the nasolabial-fold region is an important region affecting a person's micro-expressions, and it is textured. By performing texture analysis on the nasolabial-fold region, texture feature information of the region can be acquired, the texture feature information including the duration of changes in the nasolabial-fold region and the degree of change of the nasolabial folds. The texture analysis method may be grayscale transformation or binarization, which is not limited in this embodiment.
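A minimal sketch of the binarization option named above (the threshold value and the way "change degree" is measured are illustrative assumptions, not taken from the embodiment):

```python
import numpy as np

def binarize(gray, thresh=128):
    """Binarization: one of the two texture-analysis options the text names."""
    return gray >= thresh

def texture_change(patches, thresh=128):
    """Texture feature info for the nasolabial-fold region: change duration
    (frame transitions where the binary texture differs at all) and change
    degree (peak fraction of pixels that flipped between consecutive frames)."""
    masks = [binarize(p, thresh) for p in patches]
    diffs = [float(np.mean(masks[i] != masks[i + 1]))
             for i in range(len(masks) - 1)]
    duration = sum(d > 0 for d in diffs)
    degree = max(diffs) if diffs else 0.0
    return duration, degree
```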
Step S203: acquiring area feature information of the eyelid region;

It can be understood that the eyelid region is likewise an important region affecting a person's micro-expressions, the eyelid region consisting of a nearly planar patch of skin. By computing the area of the eyelid region in each video frame, area feature information of the eyelid region can be acquired, the area feature information including the duration of changes in the eyelid region and the degree of change of the eyelid area.
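The area computation is simpler: assuming a per-frame segmentation mask of eyelid pixels is available (the segmentation step itself is not shown), the area feature information could be sketched as:

```python
import numpy as np

def area_features(masks):
    """Area feature info for the eyelid region: the per-frame area is the
    pixel count of the eyelid mask; change duration counts frame transitions
    where the area moved, and change degree is the largest relative jump."""
    areas = [int(m.sum()) for m in masks]
    rel = [abs(areas[i + 1] - areas[i]) / max(areas[i], 1)
           for i in range(len(areas) - 1)]
    duration = sum(r > 0 for r in rel)
    degree = max(rel) if rel else 0.0
    return areas, duration, degree
```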
Step S204: using the contour feature information, texture feature information, and area feature information respectively as the expression feature information of the corresponding preset regions.

It should be understood that the contour feature information serves as the expression feature information of the facial features region, the texture feature information serves as that of the nasolabial-fold region, and the area feature information serves as that of the eyelid region; the expression feature information of all preset regions is then aggregated into the expression feature information corresponding to the video to be recognized.

In this embodiment, different processing methods are applied according to the distinct morphological characteristics of each preset region to extract its expression feature information. This better captures the changing process of a micro-expression and provides the basis for subsequently recognizing the micro-expression in the video to be recognized from the expression feature information.
Referring to FIG. 4, FIG. 4 is a schematic flowchart of a third embodiment of the micro-expression recognition method of the present application. Based on the embodiment shown in FIG. 2, a third embodiment of the micro-expression recognition method of the present application is proposed.

In the third embodiment, before step S10, the method further includes:

Step S001: classifying the sample videos according to the person type in the sample videos, the person type including at least one of preset age group, gender, and identity type;

It can be understood that this embodiment provides a micro-expression recognition method applied to the scenario of building the micro-expression library and building the preset micro-expression model. A mapping relationship between micro-expressions and expression feature information is established in advance, and the mapping relationships are stored to obtain the micro-expression library, where the micro-expression and expression feature information in each mapping pair are acquired from the same sample video. Each sample video contains micro-expressions captured in a natural state, and the mapping relationship between micro-expression and expression feature information is constructed from the micro-expressions the video contains. By acquiring the unique micro-expression in a sample video and the unique expression feature information corresponding to that micro-expression, the mapping relationship for that sample video can be established.

It should be understood that, by classifying the sample videos by person type and performing feature extraction on each class, a micro-expression library for each person type can ultimately be obtained. Taking classification by gender as an example, the sample videos are first divided into male sample videos and female sample videos according to the gender of the person in each, feature extraction is then performed separately on the male and female sample videos, and finally a male micro-expression library and a female micro-expression library are obtained. Similarly, classifying the sample videos by preset age group and by identity yields a micro-expression library for each preset age group and a micro-expression library for each identity.
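The gender example above can be sketched as a simple grouping step (the dict-of-lists layout and the field names are assumptions for illustration, not from the embodiment):

```python
def classify_by_person_type(sample_videos, type_key="gender"):
    """Split sample videos by one person-type attribute (e.g. gender, age
    group, identity); each group is later feature-extracted separately and
    becomes its own micro-expression library."""
    groups = {}
    for video in sample_videos:
        groups.setdefault(video[type_key], []).append(video)
    return groups
```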
Step S002: performing expression recognition on the sample videos to determine the micro-expressions in the sample videos;

It can be understood that, in order to establish the mapping relationship between micro-expressions and expression feature information, expression recognition is performed on the sample videos to determine the micro-expressions they contain. Before expression recognition, the six basic human expressions are preset as the expression categories so that every recognized expression falls within those categories; the six basic expressions are surprise, disgust, anger, fear, sadness, and happiness, and all human expressions can be placed within these six basic categories. Of course, expressions may also be subdivided into more categories, which is not limited in this embodiment.

Step S003: extracting environment feature information from the sample videos;

It should be noted that the environment influences micro-expressions; determining the micro-expression in a sample video jointly from the environment feature information and the expression feature information is therefore more accurate.

Step S004: performing image recognition on the sample videos to obtain the face portion in each sample video, and dividing the face portion in the sample video according to the preset regions;

Step S005: extracting expression feature information of each preset region from the sample videos;

It can be understood that the process of performing image recognition on a sample video, obtaining the face portion therein, and dividing the face portion according to the preset regions is identical to the process of performing image recognition on the video to be recognized, obtaining the face portion therein, and dividing it according to the preset regions; likewise, the process of extracting expression feature information of each preset region from a sample video is identical to that of extracting it from the video to be recognized.

Step S006: establishing the mapping relationship among the micro-expression, the expression feature information, and the environment feature information, and storing the mapping relationship to obtain the micro-expression library;

It should be understood that, once the micro-expression in a sample video and the environment feature information and expression feature information of that sample video have been acquired, a mapping relationship among the micro-expression, the expression feature information, and the environment feature information can be established, because all three belong to the same sample video. Storing these mapping relationships yields the micro-expression library, which contains, for each person type, the mappings among micro-expressions, expression feature information, and environment feature information.
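A minimal sketch of the library as a store of mapping relationships grouped by person type (all record and field names here are illustrative assumptions):

```python
def build_micro_expression_library(records):
    """Each record comes from one sample video, so its micro-expression,
    expression features, and environment features can be mapped together;
    the library keeps these mappings grouped by person type."""
    library = {}
    for rec in records:
        entry = {
            "expression_features": rec["expression_features"],
            "environment_features": rec["environment_features"],
            "micro_expression": rec["micro_expression"],
        }
        library.setdefault(rec["person_type"], []).append(entry)
    return library
```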
Step S007: establishing a micro-expression model and training the micro-expression model with the mapping relationships to form the preset micro-expression model.

It should be noted that although the data stored in the micro-expression library, such as the mapping relationships, are classified by person type, the data stored within each class are scattered and unsystematic. By building a model and training it to construct a structure over the data, the organization of the data is completed, and the trained preset micro-expression model can then perform micro-expression recognition on the video to be recognized conveniently and quickly.

It should be understood that, to construct the preset micro-expression model, a micro-expression model is established in advance and trained with the mapping relationships to improve its recognition accuracy. The mapping relationships are known relations that can be used to train the micro-expression model; when there are sufficiently many mapping relationships and the model has been trained on them sufficiently many times, the model's discrimination accuracy reaches a given standard and the model becomes the preset micro-expression model.

It can be understood that the specific process of training the micro-expression model with the mapping relationships to form the preset micro-expression model is as follows: a mapping relationship is input into the micro-expression model; the model derives a recognition result for the sample video from the environment feature information and expression feature information in the mapping relationship, and compares the recognition result with the micro-expression in the mapping relationship to obtain a comparison result;

when the recognition result is consistent with the micro-expression, the output discrimination result is true, the connection weights of the micro-expression model are increased, and the next mapping relationship is trained;

when the recognition result is inconsistent with the micro-expression, the output discrimination result is false, the connection weights of the micro-expression model are decreased, and the micro-expression model is trained again with the mapping relationships until the discrimination results for all mapping relationships are true.
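The adjust-weights-until-all-true loop above resembles error-driven learning. As a hedged stand-in (the embodiment does not specify the model's form, so the multiclass perceptron, learning rate, and epoch cap below are all assumptions), a linear model over joint environment-plus-expression feature vectors behaves the same way:

```python
import numpy as np

def train_until_all_true(mappings, n_classes, lr=0.1, max_epochs=100):
    """Present each mapping relationship, compare the model's recognition
    result with the recorded micro-expression, and adjust connection weights:
    strengthened when the result is true, weakened when false, repeating
    until every mapping relationship is judged true."""
    dim = len(mappings[0][0])
    w = np.zeros((n_classes, dim))              # connection weights
    for _ in range(max_epochs):
        all_true = True
        for features, micro_expr in mappings:   # micro_expr: class index
            x = np.asarray(features, dtype=float)
            result = int(np.argmax(w @ x))      # model's recognition result
            if result == micro_expr:
                w[micro_expr] += lr * x         # increase connection weights
            else:
                all_true = False
                w[micro_expr] += lr * x         # pull toward the true class
                w[result] -= lr * x             # decrease the wrong weights
        if all_true:
            return w, True      # accuracy meets the standard: preset model
    return w, False             # not yet at standard: only a trial model
```

On the data this loop converges for, the returned flag distinguishes a "preset" model from the "trial" model discussed below; with too few mapping relationships or epochs it returns `False`, matching the trial-model case.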
It should be noted that when there are not enough mapping relationships and the micro-expression model has not been trained on them enough times, the recognition accuracy after training may still fall short of the standard, so the preset micro-expression model cannot yet be obtained and only a trial model results. Therefore, when the trial model is initially used to perform micro-expression recognition on trial videos, the trial model is trained a second time with the mapping relationships corresponding to those trial videos so that its recognition accuracy can reach the standard.

When recognizing the micro-expression in a trial video with the trial model, the secondary training specifically includes:

performing expression recognition on the trial video to determine the micro-expression in the trial video;

extracting environment feature information from the trial video;

performing image recognition on the trial video to obtain the face portion in the trial video, and dividing the face portion according to the preset regions;

extracting expression feature information of each preset region from the trial video;

inputting the micro-expression, environment feature information, and expression feature information of the trial video into the trial model, the trial model deriving a recognition result for the trial video from the environment feature information and expression feature information, and comparing the recognition result with the micro-expression in the trial video to obtain a comparison result;

when the recognition result is consistent with the micro-expression, outputting a discrimination result of true, increasing the connection weights of the trial model, establishing a correspondence among the environment feature information, expression feature information, and micro-expression of the trial video, and saving the correspondence in the micro-expression library, thereby expanding the micro-expression library;

when the recognition result is inconsistent with the micro-expression, outputting a discrimination result of false, decreasing the connection weights of the trial model, and training the trial model with the correspondence to increase its recognition accuracy, thereby obtaining the preset micro-expression model.

In this embodiment, sample videos containing micro-expressions in a natural state are acquired and classified by person type; the environment feature information and expression feature information of the sample videos are extracted; mapping relationships covering micro-expressions, environment feature information, and expression feature information are established; and a targeted micro-expression library and micro-expression model containing the mapping relationships are built for each preset type. The micro-expression model is trained with the mapping relationships to improve its recognition accuracy, so that micro-expressions can be recognized by the preset micro-expression model.
In addition, an embodiment of the present application further provides a storage medium storing a micro-expression recognition program which, when executed by a processor, implements the following operations:

performing image recognition on a video to be recognized to obtain the face portion in the video to be recognized, and dividing the face portion according to preset regions;

extracting expression feature information of each preset region from the video to be recognized;

comparing the expression feature information with a preset micro-expression model, and determining the micro-expression in the video to be recognized according to the comparison result.

Further, when executed by the processor, the micro-expression recognition program also implements the following operations:

extracting environment feature information of the video to be recognized;

correspondingly, the comparing of the expression feature information with the preset micro-expression model specifically includes:

comparing the expression feature information together with the environment feature information against the preset micro-expression model.

Further, when executed by the processor, the micro-expression recognition program also implements the following operations:

cropping the video to be recognized to retain the face portion in the video to be recognized;

segmenting the face portion and discarding video segments that contain no micro-expression.
Further, when executed by the processor, the micro-expression recognition program also implements the following operations:

performing contour recognition on the facial features region to acquire contour feature information of the facial features region;

performing texture analysis on the nasolabial-fold region to acquire texture feature information of the nasolabial-fold region;

acquiring area feature information of the eyelid region;

using the contour feature information, texture feature information, and area feature information respectively as the expression feature information of the corresponding preset regions.

Further, when executed by the processor, the micro-expression recognition program also implements the following operations:

performing expression recognition on sample videos to determine the micro-expressions in the sample videos;

performing image recognition on the sample videos to obtain the face in each sample video, and dividing the face in the sample video according to the preset regions;

extracting expression feature information of each preset region from the sample videos;

establishing a mapping relationship between the micro-expression and the expression feature information, and storing the mapping relationship to obtain a micro-expression library;

establishing a micro-expression model, and training the micro-expression model with the mapping relationship to form a preset micro-expression model.
Further, when executed by the processor, the micro-expression recognition program also implements the following operations:

extracting environment feature information from the sample videos;

correspondingly, the establishing of the mapping relationship between the micro-expression and the expression feature information specifically includes:

establishing a mapping relationship among the micro-expression, the expression feature information, and the environment feature information.

Further, when executed by the processor, the micro-expression recognition program also implements the following operations:

classifying the sample videos according to the person type in the sample videos, the person type including at least one of preset age group, gender, and identity type;

correspondingly, the storing of the mapping relationship to obtain a micro-expression library further includes:

storing the mapping relationship by person type to obtain a micro-expression library of each type.
In this embodiment, image recognition is performed on a video to be recognized to obtain the face in the video to be recognized, and the face is divided according to preset regions; expression feature information of each preset region is extracted from the video to be recognized; the expression feature information is compared with a preset micro-expression model, and the micro-expression in the video to be recognized is determined according to the comparison result. Since the video to be recognized in this embodiment is captured in a natural state, and the expression feature information of each preset region of the face is extracted, the recognition of micro-expressions is more accurate and better reflects their true state.
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit it.

It should be noted that, as used herein, the terms "comprise", "include", or any variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or system that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or system. Without further limitation, an element qualified by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or system that includes it.

The serial numbers of the above embodiments of the present application are for description only and do not indicate the relative merits of the embodiments.

From the description of the above implementations, those skilled in the art will clearly understand that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware, although in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product, the computer software product being stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disc) and including instructions for causing a terminal device (which may be a mobile phone, computer, server, air conditioner, network device, or the like) to perform the methods described in the embodiments of the present application.

The above are merely preferred embodiments of the present application and do not thereby limit its patent scope; any equivalent structural or flow transformation made using the contents of the specification and drawings of the present application, applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present application.

Claims (20)

1. A micro-expression recognition method, wherein the method comprises the following steps:
    performing image recognition on a video to be recognized to obtain a face portion in the video to be recognized, and dividing the face portion according to preset regions;
    extracting expression feature information of each preset region from the video to be recognized; and
    comparing the expression feature information with a preset micro-expression model, and determining a micro-expression in the video to be recognized according to the comparison result.
2. The method according to claim 1, wherein before the step of performing image recognition on the video to be recognized, obtaining the face in the video to be recognized, and dividing the face according to the preset regions, the method further comprises:
    extracting environment feature information of the video to be recognized;
    correspondingly, the comparing of the expression feature information with the preset micro-expression model comprises:
    comparing the expression feature information together with the environment feature information against the preset micro-expression model.
3. The method according to claim 1, wherein the step of obtaining the face portion in the video to be recognized comprises:
    cropping the video to be recognized to retain the face portion in the video to be recognized;
    segmenting the face portion and discarding video segments that contain no micro-expression.
4. The method according to claim 1, wherein the preset regions comprise a facial features region, a nasolabial-fold region, and an eyelid region; and the expression feature information comprises a change duration and a change degree of each preset region.
5. The method according to claim 4, wherein the step of extracting expression feature information of each preset region from the video to be recognized comprises:
    performing contour recognition on the facial features region to acquire contour feature information of the facial features region;
    performing texture analysis on the nasolabial-fold region to acquire texture feature information of the nasolabial-fold region;
    acquiring area feature information of the eyelid region;
    using the contour feature information, texture feature information, and area feature information respectively as the expression feature information of the corresponding preset regions.
6. The method according to claim 1, wherein before acquiring the video to be recognized, the method further comprises:
    performing expression recognition on a sample video to determine a micro-expression in the sample video;
    performing image recognition on the sample video to obtain the face in the sample video, and dividing the face in the sample video according to the preset regions;
    extracting expression feature information of each preset region from the sample video;
    establishing a mapping relationship between the micro-expression and the expression feature information, and storing the mapping relationship to obtain a micro-expression library;
    establishing a micro-expression model, and training the micro-expression model with the mapping relationship to form a preset micro-expression model.
7. The method according to claim 6, wherein after the step of performing expression recognition on the sample video to determine the micro-expression in the sample video, the method further comprises:
    extracting environment feature information from the sample video;
    correspondingly, the establishing of the mapping relationship between the micro-expression and the expression feature information comprises:
    establishing a mapping relationship among the micro-expression, the expression feature information, and the environment feature information.
8. The method according to claim 7, wherein before the step of performing expression recognition on the sample video, the method further comprises:
    classifying the sample video according to a person type in the sample video, the person type including at least one of preset age group, gender, and identity type;
    correspondingly, the storing of the mapping relationship to obtain a micro-expression library further comprises:
    storing the mapping relationship by person type to obtain a micro-expression library of each type.
  9. A micro-expression recognition device, comprising: a memory, a processor, and a micro-expression recognition program stored in the memory and executable on the processor, wherein the micro-expression recognition program, when executed by the processor, implements the following steps:
    performing image recognition on a video to be recognized, obtaining a face portion in the video to be recognized, and dividing the face portion according to preset regions;
    extracting expression feature information of each preset region from the video to be recognized; and
    comparing the expression feature information with a preset micro-expression model, and determining the micro-expression in the video to be recognized according to the comparison result.
  10. The micro-expression recognition device of claim 9, wherein the micro-expression recognition program is configured to implement the following steps:
    extracting environment feature information of the video to be recognized;
    correspondingly, the comparing the expression feature information with the preset micro-expression model comprises:
    comparing the expression feature information and the environment feature information together with the preset micro-expression model.
  11. The micro-expression recognition device of claim 9, wherein the micro-expression recognition program is configured to implement the following steps:
    cropping the video to be recognized, retaining the face portion in the video to be recognized;
    segmenting the face portion, and discarding video segments that do not contain a micro-expression.
  12. The micro-expression recognition device of claim 9, wherein the preset regions comprise a facial-features region, a nasolabial fold region, and an eyelid region; and the expression feature information comprises the change duration of each preset region and the change degree of each preset region.
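The segmentation step of claim 11 can be sketched as grouping frames into fixed-length segments and dropping those with no significant frame-to-frame change. The change metric, segment length, and threshold are all illustrative assumptions; the claim does not fix them.

```python
# A minimal sketch of claim 11's segmentation: segments whose frame-to-frame
# change never exceeds a threshold are assumed to contain no micro-expression
# and are discarded. Frames are modelled as flat lists of pixel intensities.
def mean_abs_diff(a, b):
    """Mean absolute per-pixel difference between two equal-length frames."""
    return sum(abs(p - q) for p, q in zip(a, b)) / len(a)


def keep_active_segments(frames, seg_len=5, threshold=10.0):
    """frames: list of equal-length pixel lists; returns the kept segments."""
    kept = []
    for start in range(0, len(frames) - 1, seg_len):
        seg = frames[start:start + seg_len]
        diffs = [mean_abs_diff(seg[i], seg[i + 1]) for i in range(len(seg) - 1)]
        if diffs and max(diffs) >= threshold:  # some motion inside the segment
            kept.append(seg)
    return kept
```

Discarding static segments up front reduces the amount of video the feature-extraction and comparison steps have to process.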
  13. The micro-expression recognition device of claim 12, wherein the micro-expression recognition program is configured to implement the following steps:
    performing contour recognition on the facial-features region to obtain contour feature information of the facial-features region;
    performing texture analysis on the nasolabial fold region to obtain texture feature information of the nasolabial fold region;
    obtaining area feature information of the eyelid region; and
    using the contour feature information, the texture feature information, and the area feature information respectively as the expression feature information of the corresponding preset regions.
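Whatever per-region feature claim 13 extracts (contour, texture, or area), claim 12 reduces it to two quantities: the change duration and the change degree of that region. A hedged sketch, assuming a baseline-relative threshold scheme that the patent itself does not specify:

```python
# Hedged sketch of claim 12's two quantities -- change duration and change
# degree of a preset region -- computed from a per-frame scalar measurement
# of that region (e.g. eyelid area, a texture statistic, a contour length).
# The first-frame baseline and the relative threshold are assumptions.
def change_duration_and_degree(signal, fps=25.0, rel_threshold=0.1):
    """signal: per-frame measurement; returns (duration_seconds, degree)."""
    baseline = signal[0]
    # frames whose relative deviation from the baseline exceeds the threshold
    active = [v for v in signal
              if baseline and abs(v - baseline) / abs(baseline) > rel_threshold]
    duration = len(active) / fps
    degree = (max(abs(v - baseline) for v in signal) / abs(baseline)
              if baseline else 0.0)
    return duration, degree


# eyelid area holds at 100, briefly rises to 140, then returns
dur, deg = change_duration_and_degree([100, 100, 130, 140, 100], fps=25.0)
```

At 25 fps a two-frame excursion lasts 0.08 s, comfortably inside the 1/25 to 1/5 second window the background section gives for micro-expressions.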
  14. The micro-expression recognition device of claim 9, wherein the micro-expression recognition program is configured to implement the following steps:
    performing expression recognition on a sample video to determine the micro-expression in the sample video;
    performing image recognition on the sample video, obtaining the face in the sample video, and dividing the face in the sample video according to preset regions;
    extracting expression feature information of each preset region from the sample video;
    establishing a mapping relationship between the micro-expression and the expression feature information, and storing the mapping relationship to obtain a micro-expression library; and
    establishing a micro-expression model, and training the micro-expression model with the mapping relationship to form the preset micro-expression model.
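The training and comparison steps of claims 14 and 9 can be sketched with a nearest-neighbour classifier standing in for the "preset micro-expression model". The patent does not fix a model family, so nearest neighbour here is purely an assumption chosen for brevity.

```python
# Illustrative stand-in for claim 14's "preset micro-expression model": a
# nearest-neighbour classifier trained on the (feature vector ->
# micro-expression) mapping relationships extracted from sample videos.
import math


class NearestNeighbourModel:
    def __init__(self):
        self._samples = []  # (feature_vector, micro_expression_label) pairs

    def train(self, mapping):
        """mapping: iterable of (feature_vector, micro_expression_label)."""
        self._samples.extend(mapping)

    def classify(self, features):
        # claim 9's comparison step: the label of the closest stored vector
        return min(self._samples,
                   key=lambda s: math.dist(s[0], features))[1]


model = NearestNeighbourModel()
# feature vectors here are (change_degree, change_duration) pairs -- an
# assumed encoding of claim 12's two quantities
model.train([((0.8, 0.1), "contempt"), ((0.1, 0.9), "fear")])
```

Under claim 10, the environment feature information would simply be appended to each feature vector before training and comparison, so both are matched jointly.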
  15. The micro-expression recognition device of claim 14, wherein the micro-expression recognition program is configured to implement the following steps:
    extracting environment feature information from the sample video;
    correspondingly, the establishing a mapping relationship between the micro-expression and the expression feature information comprises:
    establishing a mapping relationship between the micro-expression and both the expression feature information and the environment feature information.
  16. The micro-expression recognition device of claim 15, wherein the micro-expression recognition program is configured to implement the following steps:
    classifying the sample videos according to the type of person in each sample video, the person type comprising at least one of preset age group, gender, and identity type;
    correspondingly, the storing the mapping relationship to obtain a micro-expression library further comprises:
    storing the mapping relationship by person type to obtain a micro-expression library for each type.
  17. A storage medium storing a micro-expression recognition program, wherein the micro-expression recognition program, when executed by a processor, implements the following steps:
    performing image recognition on a video to be recognized, obtaining a face portion in the video to be recognized, and dividing the face portion according to preset regions;
    extracting expression feature information of each preset region from the video to be recognized; and
    comparing the expression feature information with a preset micro-expression model, and determining the micro-expression in the video to be recognized according to the comparison result.
  18. The storage medium of claim 17, wherein the micro-expression recognition program is configured to implement the following steps:
    extracting environment feature information of the video to be recognized;
    correspondingly, the comparing the expression feature information with the preset micro-expression model comprises:
    comparing the expression feature information and the environment feature information together with the preset micro-expression model.
  19. The storage medium of claim 17, wherein the micro-expression recognition program is configured to implement the following steps:
    cropping the video to be recognized, retaining the face portion in the video to be recognized;
    segmenting the face portion, and discarding video segments that do not contain a micro-expression.
  20. The storage medium of claim 17, wherein the preset regions comprise a facial-features region, a nasolabial fold region, and an eyelid region; and the expression feature information comprises the change duration of each preset region and the change degree of each preset region.
PCT/CN2018/090990 2017-08-07 2018-06-13 Micro-expression recognition method, device and storage medium WO2019029261A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710668442.7 2017-08-07
CN201710668442.7A CN107480622A (en) 2017-08-07 2017-08-07 Micro- expression recognition method, device and storage medium

Publications (1)

Publication Number Publication Date
WO2019029261A1 true WO2019029261A1 (en) 2019-02-14

Family

ID=60598941

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/090990 WO2019029261A1 (en) 2017-08-07 2018-06-13 Micro-expression recognition method, device and storage medium

Country Status (2)

Country Link
CN (1) CN107480622A (en)
WO (1) WO2019029261A1 (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110276406A (en) * 2019-06-26 2019-09-24 腾讯科技(深圳)有限公司 Expression classification method, apparatus, computer equipment and storage medium
CN110415015A (en) * 2019-06-19 2019-11-05 深圳壹账通智能科技有限公司 Product degree of recognition analysis method, device, terminal and computer readable storage medium
CN110458018A (en) * 2019-07-05 2019-11-15 深圳壹账通智能科技有限公司 A kind of test method, device and computer readable storage medium
CN110781810A (en) * 2019-10-24 2020-02-11 合肥盛东信息科技有限公司 Face emotion recognition method
CN111178151A (en) * 2019-12-09 2020-05-19 量子云未来(北京)信息科技有限公司 Method and device for realizing human face micro-expression change recognition based on AI technology
CN111967295A (en) * 2020-06-23 2020-11-20 南昌大学 Micro-expression capturing method for semantic tag mining
CN113515702A (en) * 2021-07-07 2021-10-19 北京百度网讯科技有限公司 Content recommendation method, model training method, device, equipment and storage medium
CN114005153A (en) * 2021-02-01 2022-02-01 南京云思创智信息科技有限公司 Real-time personalized micro-expression recognition method for face diversity

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107480622A (en) * 2017-08-07 2017-12-15 深圳市科迈爱康科技有限公司 Micro- expression recognition method, device and storage medium
CN107958230B (en) * 2017-12-22 2020-06-23 中国科学院深圳先进技术研究院 Facial expression recognition method and device
CN108335193A (en) * 2018-01-12 2018-07-27 深圳壹账通智能科技有限公司 Whole process credit methods, device, equipment and readable storage medium storing program for executing
CN108537160A (en) * 2018-03-30 2018-09-14 平安科技(深圳)有限公司 Risk Identification Method, device, equipment based on micro- expression and medium
CN109145837A (en) * 2018-08-28 2019-01-04 厦门理工学院 Face emotion identification method, device, terminal device and storage medium
CN109472206B (en) * 2018-10-11 2023-07-07 平安科技(深圳)有限公司 Risk assessment method, device, equipment and medium based on micro-expressions
CN109640104B (en) * 2018-11-27 2022-03-25 平安科技(深圳)有限公司 Live broadcast interaction method, device, equipment and storage medium based on face recognition
CN109784175A (en) * 2018-12-14 2019-05-21 深圳壹账通智能科技有限公司 Abnormal behaviour people recognition methods, equipment and storage medium based on micro- Expression Recognition
CN109784185A (en) * 2018-12-18 2019-05-21 深圳壹账通智能科技有限公司 Client's food and drink evaluation automatic obtaining method and device based on micro- Expression Recognition
CN109830280A (en) * 2018-12-18 2019-05-31 深圳壹账通智能科技有限公司 Psychological aided analysis method, device, computer equipment and storage medium
CN109697421A (en) * 2018-12-18 2019-04-30 深圳壹账通智能科技有限公司 Evaluation method, device, computer equipment and storage medium based on micro- expression
CN111353354B (en) * 2018-12-24 2024-01-23 杭州海康威视数字技术股份有限公司 Human body stress information identification method and device and electronic equipment
CN109800687A (en) * 2019-01-02 2019-05-24 深圳壹账通智能科技有限公司 Effect of meeting feedback method, device, computer equipment and readable storage medium storing program for executing
CN109858379A (en) * 2019-01-03 2019-06-07 深圳壹账通智能科技有限公司 Smile's sincerity degree detection method, device, storage medium and electronic equipment
CN109866230A (en) * 2019-01-17 2019-06-11 深圳壹账通智能科技有限公司 Customer service robot control method, device, computer equipment and storage medium
CN110321845B (en) * 2019-07-04 2021-06-18 北京奇艺世纪科技有限公司 Method and device for extracting emotion packets from video and electronic equipment
CN110852220B (en) * 2019-10-30 2023-08-18 深圳智慧林网络科技有限公司 Intelligent facial expression recognition method, terminal and computer readable storage medium
CN112749669B (en) * 2021-01-18 2024-02-02 吾征智能技术(北京)有限公司 Micro-expression intelligent recognition system based on facial image
CN116392086B (en) * 2023-06-06 2023-08-25 浙江多模医疗科技有限公司 Method, terminal and storage medium for detecting stimulation
CN117391746B (en) * 2023-10-25 2024-06-21 上海瀚泰智能科技有限公司 Intelligent hotel customer perception analysis method and system based on confidence network

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103426005A (en) * 2013-08-06 2013-12-04 山东大学 Automatic database creating video sectioning method for automatic recognition of micro-expressions
CN104881660A (en) * 2015-06-17 2015-09-02 吉林纪元时空动漫游戏科技股份有限公司 Facial expression recognition and interaction method based on GPU acceleration
US20150254447A1 (en) * 2014-03-10 2015-09-10 FaceToFace Biometrics, Inc. Expression recognition in messaging systems
CN107480622A (en) * 2017-08-07 2017-12-15 深圳市科迈爱康科技有限公司 Micro- expression recognition method, device and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2745094A1 (en) * 2008-12-04 2010-07-01 Total Immersion Software, Inc. Systems and methods for dynamically injecting expression information into an animated facial mesh
CN105139039B (en) * 2015-09-29 2018-05-29 河北工业大学 The recognition methods of the micro- expression of human face in video frequency sequence



Also Published As

Publication number Publication date
CN107480622A (en) 2017-12-15


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18843914

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18843914

Country of ref document: EP

Kind code of ref document: A1