CN114038036A - Spontaneous expression recognition method and device - Google Patents

Spontaneous expression recognition method and device

Info

Publication number
CN114038036A
CN114038036A (application CN202111316887.1A)
Authority
CN
China
Prior art keywords
expression
video
recognition
extracting
identification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111316887.1A
Other languages
Chinese (zh)
Inventor
李军平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jiuzhou Anhua Information Security Technology Co ltd
Original Assignee
Beijing Jiuzhou Anhua Information Security Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jiuzhou Anhua Information Security Technology Co ltd filed Critical Beijing Jiuzhou Anhua Information Security Technology Co ltd
Priority to CN202111316887.1A
Publication of CN114038036A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The application relates to the technical field of public safety protection, and in particular to a spontaneous expression recognition method and device. The method comprises the following steps: acquiring a video to be recognized from a video surveillance system; determining a sampling rate for the video based on a visual attention mechanism; determining an expression dynamic sequence in the video according to the sampling rate; image-sampling the expression dynamic sequence with a dynamic sampling method to obtain recognition samples; and extracting expression features from the recognition samples and classifying each sample by expression, where the expression classes comprise happiness, sadness, surprise, anger, disgust, fear, and neutral. The method can recognize the facial expressions of people in security surveillance video and thereby provide early warning before an incident occurs.

Description

Spontaneous expression recognition method and device
Technical Field
The application relates to the technical field of public safety protection, in particular to a spontaneous expression recognition method and device.
Background
The current security industry is characterized mainly by "patrol beforehand, tracking during, and verification afterwards". In the "beforehand" stage, crime prevention still relies on various deterrence measures, and genuine early warning is difficult to achieve: prediction depends on the professional experience of security personnel, whose accuracy is low overall and whose experience is hard to reproduce.
Disclosure of Invention
In a first aspect of the present application, a spontaneous expression recognition method is provided, comprising: acquiring a video to be recognized from a video surveillance system; determining a sampling rate for the video based on a visual attention mechanism; determining an expression dynamic sequence in the video according to the sampling rate; image-sampling the expression dynamic sequence with a dynamic sampling method to obtain recognition samples; and extracting expression features from the recognition samples and classifying the samples by expression, where the expression classes comprise happiness, sadness, surprise, anger, disgust, fear, and neutral.
By adopting this technical scheme, video of the areas to be monitored is captured from the video surveillance system, a visual attention mechanism determines the image sampling rate for the expression dynamic sequences, the expression dynamic sequence of each person in the video is then obtained and image-sampled, the recognition-sample images are analyzed for expression features, and finally each person is classified into one of the seven basic expressions. The resulting state analysis provides the "early warning" effect.
Further, after the expression classification of the recognition samples, the method further comprises: acquiring the recognition samples classified as anger, disgust, or fear, and determining the persons corresponding to those samples as subjects with emotional fluctuation.
Further, extracting the expression features of the recognition sample specifically comprises: extracting, according to a co-saliency algorithm, a first region image of the recognition sample whose facial deformation intensity is below a preset threshold; removing the first region image from the recognition sample to obtain a second region image; and extracting the LBP features of the second region image as the expression features of the recognition sample.
Further, before determining the expression dynamic sequence, the method further comprises: extracting a face region of interest based on a Haar-like model.
In a second aspect of the present application, a spontaneous expression recognition apparatus is provided, comprising:
a video acquisition module: for acquiring the video to be recognized from a video surveillance system;
a sampling determination module: for determining a sampling rate for the video based on a visual attention mechanism;
a video determination module: for determining an expression dynamic sequence in the video according to the sampling rate;
a sample acquisition module: for image-sampling the expression dynamic sequence with a dynamic sampling method to obtain recognition samples;
an expression recognition module: for extracting expression features from the recognition samples and classifying the samples by expression, the expression classes comprising happiness, sadness, surprise, anger, disgust, fear, and neutral.
Further, the apparatus further comprises an expression analysis module: for acquiring the recognition samples classified as anger, disgust, or fear, and determining the persons corresponding to those samples as subjects with emotional fluctuation.
Further, the expression recognition module is specifically configured to: extract, according to a co-saliency algorithm, a first region image of the recognition sample whose facial deformation intensity is below a preset threshold; remove the first region image from the recognition sample to obtain a second region image; and extract the LBP features of the second region image as the expression features of the recognition sample.
Further, the apparatus further comprises a face extraction module: for extracting a face region of interest from the expression dynamic sequence based on a Haar-like model.
In a third aspect of the application, an electronic device is provided, comprising a memory storing a computer program and a processor that implements the method described above when executing the program.
In a fourth aspect of the present application, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the method described above.
Drawings
The above and other features, advantages and aspects of various embodiments of the present application will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. In the drawings, like or similar reference characters denote like or similar elements, and wherein:
Fig. 1 is a flowchart of a spontaneous expression recognition method in an embodiment of the present application.
Fig. 2 is a schematic block diagram of a spontaneous expression recognition apparatus in an embodiment of the present application.
Fig. 3 is a block diagram of an electronic device in an embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments are described below clearly and completely with reference to the accompanying drawings. The described embodiments are evidently some, but not all, of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art from the disclosed embodiments without creative effort shall fall within the protection scope of the present disclosure.
In order to facilitate understanding of the embodiments of the present application, some terms referred to in the embodiments of the present application are first explained.
Visual attention mechanism: a brain signal processing mechanism unique to human vision. By rapidly scanning the global image, human vision locates a target area that deserves focused attention (the focus of attention), devotes more attention resources to that area to obtain more detail about the target, and suppresses other, useless information.
Optical flow: a method that uses the temporal change of pixels in an image sequence and the correlation between adjacent frames to find correspondences between the previous frame and the current frame, and thereby compute the motion of objects between adjacent frames. In general, optical flow arises from movement of foreground objects in the scene, motion of the camera, or both.
DCT transform: the DCT (Discrete Cosine Transform) is mainly used for data or image compression; because it converts signals from the spatial domain to the frequency domain, it has good decorrelation performance.
LBP (Local Binary Pattern): an operator used to describe the local texture features of an image. It has notable advantages such as rotation invariance and grayscale invariance, and is widely used for texture analysis.
As noted above, the current security industry is characterized mainly by "patrol beforehand, tracking during, and verification afterwards": in the "beforehand" stage crime prevention still relies on deterrence measures, early warning is difficult to achieve, and prediction depends on the professional experience of security personnel, which is neither accurate overall nor easy to reproduce. The rise and application of Artificial Intelligence (AI) technology is helping security video surveillance systems move from "seeing" and "seeing clearly" toward "understanding what is seen", changing security systems from passive recording toward "early warning beforehand, handling during, and analysis afterwards". Among these, "AI + expression recognition" is a key technology for achieving the "early warning" role.
The following specifically introduces the spontaneous expression recognition method in the embodiment of the present application with reference to the drawings of the specification:
Fig. 1 is a flowchart of the spontaneous expression recognition method in the embodiment of the present application. As shown in Fig. 1, the method includes:
and step S101, acquiring a video to be identified from a video monitoring system.
In the embodiments of the application, the video to be recognized may be surveillance video of a public area, such as a station or a shopping mall in an area requiring monitoring, retrieved from the video surveillance system; it may be a real-time feed or stored surveillance playback.
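For illustration only (the patent specifies no implementation), a minimal Python sketch of this acquisition step using OpenCV is given below; the RTSP address is a placeholder for whatever stream or playback file the surveillance system exposes.

```python
import cv2

def read_frames(source="rtsp://camera.example/stream1"):
    """Yield BGR frames from a live surveillance stream or a stored playback file."""
    cap = cv2.VideoCapture(source)  # accepts stream URLs, device indices, or file paths
    try:
        while True:
            ok, frame = cap.read()
            if not ok:  # stream dropped or playback finished
                break
            yield frame
    finally:
        cap.release()
```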
Step S102: determine the sampling rate of the video to be recognized based on a visual attention mechanism.
In the embodiments of the application, the sampling rate of the expression dynamic sequence is selected by a visual attention mechanism. The human visual system allocates attention dynamically according to changes in visual information; here, attention is treated as the number of frames selected from each video clip, so if the visual information changes little, the clip draws little attention and fewer frames are selected.
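The text does not define how attention is quantified; the sketch below assumes the mean frame-to-frame grayscale difference as the measure of visual change and maps it to a per-clip frame budget, which is only one plausible reading.

```python
import cv2
import numpy as np

def frames_per_clip(clip, min_frames=2, max_frames=8):
    """Map the visual change within a clip to the number of frames to sample.

    Attention is approximated by the mean absolute difference between
    consecutive grayscale frames: clips that barely change receive the
    minimum budget, rapidly changing clips receive the maximum.
    """
    grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY).astype(np.float32) for f in clip]
    diffs = [np.abs(a - b).mean() for a, b in zip(grays, grays[1:])]
    change = float(np.mean(diffs)) if diffs else 0.0
    score = min(change / 32.0, 1.0)  # assumed full-scale change of 32 gray levels
    return int(round(min_frames + score * (max_frames - min_frames)))
```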
Step S103: determine the expression dynamic sequence in the video to be recognized according to the sampling rate.
In some embodiments, before the expression dynamic sequence is determined, a Haar-like model is first used to detect the salient face regions in the video to be recognized and remove the parts irrelevant to expression. The footage of each person is then divided into short clips; 10 frames may be taken as one clip, or another suitable number of frames may be used, yielding the facial expression dynamic sequence of each person.
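The sketch below uses OpenCV's stock frontal-face Haar cascade; the detector parameters are conventional defaults rather than values from the patent, and the 10-frame clip length follows the text.

```python
import cv2

# OpenCV ships pre-trained Haar-like cascades; the frontal-face model is one
# way to isolate the face region of interest from each frame.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_crops(frames):
    """Crop the largest detected face from each frame, skipping misses."""
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            continue
        x, y, w, h = max(faces, key=lambda r: r[2] * r[3])
        yield frame[y:y + h, x:x + w]

def segment(face_seq, clip_len=10):
    """Split one person's face sequence into fixed-length clips."""
    seq = list(face_seq)
    return [seq[i:i + clip_len] for i in range(0, len(seq) - clip_len + 1, clip_len)]
```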
In some embodiments, after the expression dynamic sequence is obtained, an optical flow algorithm may be used to extract its temporal features, and a DCT may be used to transform those features to the frequency domain.
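One plausible realization uses OpenCV's Farneback optical flow and SciPy's DCT; summarizing each frame pair by its mean flow magnitude is an assumption made here for brevity.

```python
import cv2
import numpy as np
from scipy.fftpack import dct

def temporal_features(clip):
    """Per-frame mean optical-flow magnitude, taken to the frequency domain by DCT."""
    grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in clip]
    magnitudes = []
    for prev, nxt in zip(grays, grays[1:]):
        flow = cv2.calcOpticalFlowFarneback(
            prev, nxt, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        magnitudes.append(mag.mean())
    return dct(np.asarray(magnitudes), norm="ortho")
```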
Step S104: image-sample the expression dynamic sequence based on a dynamic sampling method to obtain recognition samples.
In the embodiments of the application, vertex-based dynamic sampling may be adopted once the expression dynamic sequence is obtained: the sequence is divided into uniform segments of N frames, the central frame of each segment is taken as the vertex, and frames far from the vertex are discarded. After the dominant frequency of each segment is computed, frames influenced by that frequency are selected over a continuous interval: a high dominant frequency selects many frames at the vertex, while a low one selects only a few frames close to it. Some embodiments may instead use dynamic sampling without vertices.
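A sketch of the vertex-based variant follows; the text does not give the mapping from dominant frequency to frame count, so the linear mapping below is an assumption, and the 1-D motion signal could be the per-frame flow magnitudes computed above.

```python
import numpy as np

def apex_sample(motion_signal, segment_frames, max_keep=7):
    """Keep frames around a segment's central (vertex) frame, with the count
    driven by the segment's dominant temporal frequency."""
    n = len(segment_frames)
    apex = n // 2
    # Dominant frequency: largest non-DC FFT coefficient, normalized to (0, 1].
    spectrum = np.abs(np.fft.rfft(motion_signal - np.mean(motion_signal)))
    if len(spectrum) > 1:
        dominant = (np.argmax(spectrum[1:]) + 1) / (len(spectrum) - 1)
    else:
        dominant = 0.0
    keep = max(1, int(round(dominant * max_keep)))  # high frequency -> more frames
    half = keep // 2
    lo, hi = max(0, apex - half), min(n, apex + half + 1)
    return segment_frames[lo:hi]
```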
Step S105: extract expression features from the recognition samples and classify them by expression; the expression classes comprise happiness, sadness, surprise, anger, disgust, fear, and neutral.
In the embodiments of the application, the expression features of the recognition sample are extracted by a co-saliency method. First, a first region image whose facial deformation intensity is below a preset threshold is extracted according to a co-saliency algorithm. Specifically, co-saliency detection is divided into a saliency detection part and a co-detection part: saliency and co-occurrence are analyzed with cluster-level spatial features and contrast features respectively, and an expression co-saliency map is then generated by multiplicative feature fusion.
Contrast expresses the uniqueness of visual features in a single image or across multiple images and is the most widely used saliency measure. The cluster-based contrast cue $w_c(k)$ of cluster $C_k$ is computed as

$$w_c(k) = \sum_{i=1,\, i \neq k}^{K} \frac{n_i}{N}\,\lVert \mu_k - \mu_i \rVert^2,$$

where $\lVert\cdot\rVert$ computes the distance in feature space, $\mu_k$ is the center of cluster $C_k$, $n_i$ denotes the number of pixels in class $C_i$, and $N$ denotes the number of pixels per image.
In the human visual system, the central region of a picture is more attractive than the other regions, and as the distance between an object and the image center increases, a person's attention to the object decreases; in low-level vision this is called the center-bias rule. Extending it to the cluster-based approach, the center-bias cue $w_s(k)$ of cluster $C_k$ is computed as

$$w_s(k) = \frac{1}{n_k} \sum_{j=1}^{M} \sum_{i=1}^{N_j} \mathcal{N}\!\left(\lVert z_i^j - o_j \rVert^2 ;\, 0, \sigma^2\right)\, \delta\!\left[b(z_i^j) - k\right],$$

where $\delta[\cdot]$ is the Kronecker delta function, $o_j$ denotes the center of image $I_j$, the normalization coefficient $n_k$ is the number of pixels in class $C_k$, $b(z_i^j)$ denotes the cluster index of pixel $z_i^j$, the Gaussian kernel $\mathcal{N}(\cdot)$ weights the Euclidean distance between pixel $z_i^j$ and the image center $o_j$, and the variance $\sigma^2$ is the normalized image radius.
This application adopts multiplicative feature fusion: the cluster-level co-saliency probability of cluster $C_k$ is

$$p(C_k) = \prod_i w_i(k),$$

where the $w_i(k)$ are the saliency cues above. The co-saliency value is then smoothed to each pixel. Assuming the features of cluster $C_k$ follow a Gaussian distribution, $p(x \mid C_k) = \mathcal{N}(\lVert v_x - \mu_k \rVert^2 ;\, 0, \sigma_k^2)$, where $v_x$ is the feature vector of pixel $x$ and the variance of cluster $C_k$ serves as the Gaussian variance $\sigma_k^2$. The saliency probability of pixel $x$ is therefore obtained by summing the joint saliency values over all clusters:

$$p(x) = \sum_{k=1}^{K} p(C_k)\, p(x \mid C_k),$$

which finally yields the pixel-level co-saliency.
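For illustration, the sketch below computes the two cluster-level cues and their multiplicative fusion for a single image; k-means stands in for the unspecified clustering step, and the Gaussian pixel-smoothing term $p(x \mid C_k)$ is simplified to a broadcast of each cluster's value to its pixels.

```python
import numpy as np
from sklearn.cluster import KMeans

def co_saliency_cues(features, positions, center, k=6, sigma2=0.25):
    """Cluster-level contrast and center-bias cues, fused multiplicatively.

    features:  (N, 3) per-pixel color features; positions: (N, 2) coordinates
    normalized to [0, 1]; center: the normalized image center; sigma2 plays
    the role of the (squared) normalized image radius from the formulas above.
    """
    km = KMeans(n_clusters=k, n_init=10).fit(features)
    labels, mu = km.labels_, km.cluster_centers_
    n = np.bincount(labels, minlength=k).astype(float)
    N = float(len(features))
    # Contrast cue w_c(k): feature-space distance to every other cluster.
    wc = np.array([sum(n[i] / N * np.linalg.norm(mu[j] - mu[i]) ** 2
                       for i in range(k) if i != j) for j in range(k)])
    # Center-bias cue w_s(k): Gaussian-weighted distance to the image center.
    d2 = np.sum((positions - center) ** 2, axis=1)
    g = np.exp(-d2 / (2.0 * sigma2))
    ws = np.array([g[labels == j].sum() / max(n[j], 1.0) for j in range(k)])
    p = wc * ws                      # multiplicative fusion: p(C_k)
    p = p / (p.sum() + 1e-12)
    return p[labels]                 # cluster value broadcast to each pixel
```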
In the embodiments of the present application, most of the first region images fall in the forehead and cheek regions, indicating that these regions change little with expression. Removing them from the recognition sample yields the second region image, whose LBP features are then extracted as the expression features of the recognition sample.
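A sketch of the LBP extraction with scikit-image follows; the uniform-pattern variant and the histogram normalization are conventional choices, not mandated by the text.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(region_gray, P=8, R=1.0):
    """Normalized uniform-LBP histogram of the retained second region image."""
    codes = local_binary_pattern(region_gray, P, R, method="uniform")
    n_bins = P + 2  # P+1 uniform patterns plus one catch-all non-uniform bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
    return hist / max(hist.sum(), 1)  # normalized expression feature vector
```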
In the embodiments of the application, after the expression features of the recognition samples are obtained, the samples are classified by expression into the seven basic expressions: happy, sad, surprised, angry, disgusted, fearful, and neutral.
In some embodiments, after the expression classification of the recognition samples, the method further comprises: acquiring the recognition samples classified as anger, disgust, or fear, and determining the persons corresponding to those samples as subjects with emotional fluctuation.
In the embodiments of the application, persons showing any of the three expressions of anger, disgust, or fear may be determined to be subjects with large emotional fluctuation, who warrant attention and monitoring.
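The patent does not name the classifier; a linear SVM over the LBP histograms is one conventional choice and is sketched below together with the anger/disgust/fear flagging described above.

```python
from sklearn.svm import SVC

LABELS = ["happy", "sad", "surprised", "angry", "disgusted", "fearful", "neutral"]
ALERT = {"angry", "disgusted", "fearful"}  # expressions flagged for monitoring

clf = SVC(kernel="linear")  # assumed classifier; the patent leaves this open

def train(feature_matrix, label_ids):
    """Fit the classifier on labeled LBP feature vectors (label_ids index LABELS)."""
    clf.fit(feature_matrix, label_ids)

def classify_and_flag(feature_vector):
    """Return (expression label, whether the person is an emotional-fluctuation subject)."""
    label = LABELS[int(clf.predict([feature_vector])[0])]
    return label, label in ALERT
```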
It is noted that, while for simplicity of explanation the foregoing method embodiments are described as a series of acts, those skilled in the art will appreciate that the present disclosure is not limited by the order of the acts described, as some steps may occur in other orders or concurrently. Those skilled in the art will also appreciate that the embodiments described in the specification are exemplary, and that the acts and modules involved are not necessarily required by the disclosure.
The above is a description of embodiments of the method, and the embodiments of the apparatus are further described below.
Fig. 2 is a schematic block diagram of a spontaneous expression recognition apparatus in an embodiment of the present application. As shown in Fig. 2, the apparatus includes:
a video acquisition module: for acquiring the video to be recognized from the video surveillance system;
a sampling determination module: for determining a sampling rate for the video based on a visual attention mechanism;
a video determination module: for determining the expression dynamic sequence in the video according to the sampling rate;
a sample acquisition module: for image-sampling the expression dynamic sequence with a dynamic sampling method to obtain recognition samples;
an expression recognition module: for extracting expression features from the recognition samples and classifying the samples by expression, the expression classes comprising happiness, sadness, surprise, anger, disgust, fear, and neutral.
In some embodiments, the apparatus further comprises an expression analysis module: for acquiring the recognition samples classified as anger, disgust, or fear, and determining the persons corresponding to those samples as subjects with emotional fluctuation.
In some embodiments, the expression recognition module is specifically configured to: extract, according to a co-saliency algorithm, a first region image of the recognition sample whose facial deformation intensity is below a preset threshold; remove the first region image from the recognition sample to obtain a second region image; and extract the LBP features of the second region image.
In some embodiments, the apparatus further comprises a face extraction module: for extracting a face region of interest from the expression dynamic sequence based on a Haar-like model.
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the modules described above may be found in the corresponding processes of the foregoing method embodiments and are not repeated here.
Fig. 3 shows a schematic structural diagram of a terminal device or a server suitable for implementing the embodiments of the present application.
As shown in fig. 3, the terminal device or the server includes a Central Processing Unit (CPU)301 that can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)302 or a program loaded from a storage section 308 into a Random Access Memory (RAM) 303. In the RAM 303, various programs and data necessary for system operation are also stored. The CPU 301, ROM 302, and RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to bus 304.
The following components are connected to the I/O interface 305: an input portion 306 including a keyboard, a mouse, and the like; an output section 307 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker and the like; a storage section 308 including a hard disk or the like; and a communication section 309 including a network interface card such as a LAN card, a modem, or the like. The communication section 309 performs communication processing via a network such as the internet. The driver 310 is also connected to the I/O interface 305 as needed. A removable medium 311 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 310 as necessary, so that a computer program read out therefrom is mounted into the storage section 308 as necessary.
In particular, according to an embodiment of the present disclosure, the process described above with reference to the flowchart fig. 1 may be implemented as a computer software program. For example, embodiments of the disclosure include a computer program product comprising a computer program embodied on a machine-readable medium, the computer program comprising program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 309, and/or installed from the removable medium 311. The above-described functions defined in the system of the present application are executed when the computer program is executed by the Central Processing Unit (CPU) 301.
It should be noted that the computer readable media shown in the present disclosure may be computer readable signal media or computer readable storage media or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units or modules described in the embodiments of the present application may be implemented by software or hardware. The described units or modules may also be provided in a processor, which may be described as: a processor comprising a video acquisition module, a sampling determination module, a video determination module, and a sample acquisition module. The names of these units or modules do not in some cases limit the units or modules themselves; for example, the video acquisition module may also be described as a "module for acquiring a video to be recognized from a video surveillance system". As another example, it can be described as: a processor comprising a first module, a second module, and a third module, where the second module may also be described as "a module for determining the sampling rate of the video to be recognized based on a visual attention mechanism".
As another aspect, the present application also provides a computer-readable storage medium, which may be included in the electronic device described in the above embodiments, or may exist separately without being incorporated into the electronic device. The computer-readable storage medium stores one or more programs which, when executed by one or more processors, perform the spontaneous expression recognition method described herein.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (10)

1. A spontaneous expression recognition method, characterized by comprising the following steps:
acquiring a video to be recognized from a video surveillance system;
determining a sampling rate of the video to be recognized based on a visual attention mechanism;
determining an expression dynamic sequence in the video to be recognized according to the sampling rate;
image-sampling the expression dynamic sequence based on a dynamic sampling method to obtain recognition samples;
extracting expression features of the recognition samples, and performing expression classification on the recognition samples according to the expression features, wherein the expression classification comprises happiness, sadness, surprise, anger, disgust, fear and neutrality.
2. The spontaneous expression recognition method according to claim 1, further comprising, after performing expression classification on the recognition samples:
acquiring the recognition samples classified as anger, disgust, or fear, and determining the persons corresponding to those samples as subjects with emotional fluctuation.
3. The spontaneous expression recognition method according to claim 1, wherein extracting the expression features of the recognition sample specifically comprises:
extracting, according to a co-saliency algorithm, a first region image of the recognition sample whose facial deformation intensity is below a preset threshold;
removing the first region image from the recognition sample to obtain a second region image;
and extracting the LBP features of the second region image as the expression features of the recognition sample.
4. The spontaneous expression recognition method according to claim 1, further comprising, before determining the expression dynamic sequence:
extracting a face region of interest from the expression dynamic sequence based on a Haar-like model.
5. A spontaneous expression recognition apparatus, comprising:
a video acquisition module: for acquiring a video to be recognized from a video surveillance system;
a sampling determination module: for determining a sampling rate of the video to be recognized based on a visual attention mechanism;
a video determination module: for determining an expression dynamic sequence in the video to be recognized according to the sampling rate;
a sample acquisition module: for image-sampling the expression dynamic sequence based on a dynamic sampling method to obtain recognition samples;
an expression recognition module: for extracting expression features of the recognition samples and performing expression classification on the recognition samples according to the expression features, wherein the expression classification comprises happiness, sadness, surprise, anger, disgust, fear and neutrality.
6. The spontaneous expression recognition apparatus according to claim 5, further comprising:
an expression analysis module: for acquiring the recognition samples classified as anger, disgust, or fear, and determining the persons corresponding to those samples as subjects with emotional fluctuation.
7. The spontaneous expression recognition apparatus according to claim 5, wherein the expression recognition module is specifically configured to:
extract, according to a co-saliency algorithm, a first region image of the recognition sample whose facial deformation intensity is below a preset threshold;
remove the first region image from the recognition sample to obtain a second region image;
and extract the LBP features of the second region image as the expression features of the recognition sample.
8. The spontaneous expression recognition apparatus according to claim 5, further comprising:
a face extraction module: for extracting a face region of interest from the expression dynamic sequence based on a Haar-like model.
9. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program, wherein the processor, when executing the program, implements the method of any of claims 1-4.
10. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, carries out the method according to any one of claims 1 to 4.
CN202111316887.1A 2021-11-09 2021-11-09 Spontaneous expression recognition method and device Pending CN114038036A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111316887.1A CN114038036A (en) 2021-11-09 2021-11-09 Spontaneous expression recognition method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111316887.1A CN114038036A (en) 2021-11-09 2021-11-09 Spontaneous expression recognition method and device

Publications (1)

Publication Number Publication Date
CN114038036A (en) 2022-02-11

Family

ID=80136792

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111316887.1A Pending CN114038036A (en) 2021-11-09 2021-11-09 Spontaneous expression recognition method and device

Country Status (1)

Country Link
CN (1) CN114038036A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106210444A (en) * 2016-07-04 2016-12-07 石家庄铁道大学 Kinestate self adaptation key frame extracting method
CN107392105A (en) * 2017-06-23 2017-11-24 广东工业大学 A kind of expression recognition method based on reverse collaboration marking area feature
CN107977634A (en) * 2017-12-06 2018-05-01 北京飞搜科技有限公司 A kind of expression recognition method, device and equipment for video
CN108537209A (en) * 2018-04-25 2018-09-14 广东工业大学 A kind of adaptive down-sampling method and device of view-based access control model theory of attention


Similar Documents

Publication Publication Date Title
CN108446390B (en) Method and device for pushing information
US11830230B2 (en) Living body detection method based on facial recognition, and electronic device and storage medium
Xiao et al. Video-based evidence analysis and extraction in digital forensic investigation
CN108229297B (en) Face recognition method and device, electronic equipment and computer storage medium
WO2021139324A1 (en) Image recognition method and apparatus, computer-readable storage medium and electronic device
Kim et al. Spatiotemporal saliency detection and its applications in static and dynamic scenes
CN104933414B (en) A kind of living body faces detection method based on WLD-TOP
CN110163096B (en) Person identification method, person identification device, electronic equipment and computer readable medium
CN107220652B (en) Method and device for processing pictures
JP7419080B2 (en) computer systems and programs
KR102606734B1 (en) Method and apparatus for spoof detection
WO2022021287A1 (en) Data enhancement method and training method for instance segmentation model, and related apparatus
CN114612987A (en) Expression recognition method and device
CN114973057B (en) Video image detection method and related equipment based on artificial intelligence
CN113688839B (en) Video processing method and device, electronic equipment and computer readable storage medium
Krithika et al. MAFONN-EP: A minimal angular feature oriented neural network based emotion prediction system in image processing
CN111783677B (en) Face recognition method, device, server and computer readable medium
CN113920023A (en) Image processing method and device, computer readable medium and electronic device
US20140376822A1 (en) Method for Computing the Similarity of Image Sequences
Anwar et al. Perceptual judgments to detect computer generated forged faces in social media
CN114038036A (en) Spontaneous expression recognition method and device
Avazov et al. Automatic moving shadow detection and removal method for smart city environments
CN115147895A (en) Face counterfeit discrimination method and device and computer program product
Utami et al. Face spoof detection by motion analysis on the whole video frames
Nautiyal et al. An automated technique for criminal face identification using biometric approach

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination