CN112507916B - Face detection method and system based on facial expression - Google Patents

Face detection method and system based on facial expression

Info

Publication number
CN112507916B
CN112507916B (application CN202011483841.4A)
Authority
CN
China
Prior art keywords
information
facial
expression
obtaining
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011483841.4A
Other languages
Chinese (zh)
Other versions
CN112507916A (en)
Inventor
王晨阳
朱明
丁辉晖
李洪洋
李思琴
铁志波
许琳娜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Jinruiyang Information Technology Co ltd
Original Assignee
Suzhou Jinruiyang Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Jinruiyang Information Technology Co ltd filed Critical Suzhou Jinruiyang Information Technology Co ltd
Priority to CN202011483841.4A priority Critical patent/CN112507916B/en
Publication of CN112507916A publication Critical patent/CN112507916A/en
Application granted granted Critical
Publication of CN112507916B publication Critical patent/CN112507916B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G06V40/168 Feature extraction; Face representation
    • G06V40/174 Facial expression recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a face detection method and system based on facial expression. The method is applied to a face detection system that comprises an image acquisition system and a thermal imaging system, and comprises the following steps: acquiring first facial image information of a first user according to the image acquisition system, the first facial image information being image information of the first user without facial expression; acquiring second facial image information of the first user according to the image acquisition system; obtaining a first facial distinguishing feature data set according to the second facial image information and the first facial image information; inputting the first facial distinguishing feature data set into a first neural network model to obtain first expression information; obtaining first facial thermal imaging information of the first user from the thermal imaging system; and determining first facial expression information of the first user according to the first facial thermal imaging information and the first expression information. This solves the technical problems of low dynamic face recognition rate and insufficient recognition accuracy in the prior art.

Description

Face detection method and system based on facial expression
Technical Field
The invention relates to the technical field of face recognition, in particular to a face detection method and system based on facial expression.
Background
At present, with the rapid development of leading-edge technologies such as artificial intelligence and the Internet of Things, the era of intelligence has quietly arrived, and "face scanning" has gradually become a new trend. Face recognition is a biometric technology that performs identity recognition based on a person's facial feature information. Compared with other biometric modes, face recognition has advantages such as naturalness and imperceptibility. The problems to be solved by existing face recognition are how to guarantee the recognition rate under different scenes, face occlusion and other conditions; in addition, privacy and security are also issues worth considering. Face recognition has obvious advantages and will become the dominant recognition technology in the future.
In the process of implementing the technical solution of the invention in the embodiments of the present application, the inventors found that the above technology has at least the following technical problems:
the dynamic face recognition rate is low and the recognition accuracy is insufficient.
Disclosure of Invention
The embodiments of the present application provide a face detection method and system based on facial expression, which solve the technical problems of low dynamic face recognition rate and insufficient recognition accuracy in the prior art, and achieve the technical purpose of improving face detection accuracy based on accurate analysis of the user's facial expression.
The embodiment of the application provides a face detection method based on facial expression, wherein the method is applied to a face detection system, the system comprises an image acquisition system and a thermal imaging system, and the method comprises the following steps: acquiring first facial image information of a first user according to the image acquisition system, wherein the first facial image information is image information of the first user without facial expression; acquiring second facial image information of a first user according to the image acquisition system; obtaining a first facial distinguishing feature dataset according to the second facial image information and the first facial image information; inputting the first facial distinguishing feature data set into a first neural network model to obtain first expression information; obtaining first facial thermal imaging information of the first user from the thermal imaging system; and determining first facial expression information of the first user according to the first facial thermal imaging information and the first expression information.
In another aspect, the present application further provides a face detection system based on facial expression, wherein the system includes: a first obtaining unit for obtaining first facial image information of a first user according to the image acquisition system, the first facial image information being image information of the first user without facial expression; a second obtaining unit for obtaining second facial image information of the first user according to the image acquisition system; a third obtaining unit configured to obtain a first facial distinguishing feature data set from the second facial image information and the first facial image information; a first input unit configured to input the first facial distinguishing feature data set into a first neural network model to obtain first expression information; a fourth obtaining unit for obtaining first facial thermal imaging information of the first user according to the thermal imaging system; and a fifth obtaining unit configured to determine first facial expression information of the first user according to the first facial thermal imaging information and the first expression information.
On the other hand, the embodiments of the present application further provide a face detection system based on facial expression, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the above face detection method based on facial expression.
One or more technical solutions provided in the embodiments of the present application have at least the following technical effects or advantages:
By capturing and comparing different facial image information of the user, a facial distinguishing feature data set is obtained; by inputting this data set into a neural network model, the model can continuously accumulate experience in processing the data, so that the user's expression is analyzed accurately and the user's facial expression is then judged accurately from that expression together with the user's facial thermal imaging information. The technical purpose of improving the accuracy of face detection based on accurate analysis of the user's facial expression is thereby achieved.
The foregoing is a summary of the present disclosure, and embodiments of the present disclosure are described below to make the technical means of the present disclosure more clearly understood.
Drawings
Fig. 1 is a schematic flow chart of a face detection method based on facial expression according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a face detection system based on facial expression according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an exemplary electronic device according to an embodiment of the present application.
Description of reference numerals: a first obtaining unit 11, a second obtaining unit 12, a third obtaining unit 13, a first input unit 14, a fourth obtaining unit 15, a fifth obtaining unit 16, a bus 300, a receiver 301, a processor 302, a transmitter 303, a memory 304, a bus interface 305.
Detailed Description
The embodiments of the present application provide a face detection method and system based on facial expression, which solve the technical problems of low dynamic face recognition rate and insufficient recognition accuracy in the prior art and achieve the technical purpose of improving face detection accuracy by accurately analyzing the user's facial expression. Hereinafter, example embodiments of the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are merely some embodiments of the present application and not all of them, and it should be understood that the present application is not limited to the example embodiments described herein.
Summary of the application
Compared with other biometric modes, face recognition has advantages such as naturalness and imperceptibility. The problems to be solved by existing face recognition are how to guarantee the recognition rate under different scenes, face occlusion and other conditions; in addition, privacy and security are also issues worth considering. Face recognition has obvious advantages and will become the dominant recognition technology in the future. However, the prior art also has the technical problems of a low dynamic face recognition rate and insufficient recognition accuracy.
In view of the above technical problems, the technical solution provided by the present application has the following general idea:
the embodiment of the application provides a face detection method based on facial expression, wherein the method is applied to a face detection system, the system comprises an image acquisition system and a thermal imaging system, and the method comprises the following steps: acquiring first facial image information of a first user according to the image acquisition system, wherein the first facial image information is image information of the first user without facial expression; acquiring second facial image information of a first user according to the image acquisition system; obtaining a first facial distinguishing feature dataset according to the second facial image information and the first facial image information; inputting the first facial distinguishing feature data set into a first neural network model to obtain first expression information; obtaining first facial thermal imaging information of the first user from the thermal imaging system; and determining first facial expression information of the first user according to the first facial thermal imaging information and the first expression information.
Having thus described the general principles of the present application, various non-limiting embodiments thereof will now be described in detail with reference to the accompanying drawings.
Example one
As shown in fig. 1, an embodiment of the present application provides a face detection method based on facial expression, where the method is applied to a face detection system, the system includes an image acquisition system and a thermal imaging system, and the method includes:
step S100: acquiring first facial image information of a first user according to the image acquisition system, wherein the first facial image information is image information of the first user without facial expression;
specifically, the face detection system is connected to the image acquisition system, and when the image acquisition system recognizes that face information exists in a predetermined area, the face detection system acquires image information of the first user without facial expression. The image acquisition system carries out image processing on the face information detected in the specified area through scene filtering, ray correction and face size correction, thereby ensuring the accuracy of image recognition.
Step S200: acquiring second facial image information of a first user according to the image acquisition system;
specifically, the second face information is face information of a designated expression, such as smile, tooth-exposing smile, etc., made by the second user according to an identification requirement. The image acquisition system reminds a user to operate according to the example by acquiring the designated expression information and displaying the example expression at the output end of the system.
Step S300: obtaining a first facial distinguishing feature dataset according to the second facial image information and the first facial image information;
specifically, in the first face distinguishing feature data set, features are generally classified into visual features, pixel statistical features, face image transformation coefficient features, face image algebraic features, and the like. The face feature extraction is performed on some features of the face. Face feature extraction, also known as face characterization, is a process of feature modeling for a face. The facial features of the first user under different expressions are different, and a foundation is laid for accurately identifying the expression information of the first user by obtaining the first facial distinguishing feature data set.
Step S400: inputting the first facial distinguishing feature data set into a first neural network model to obtain first expression information;
in particular, a neural network is a complex network system formed by a large number of simple processing units (called neurons) widely connected to each other, which reflects many basic features of human brain functions, and is a highly complex nonlinear dynamical learning system. The first facial distinguishing feature data set is input into the neural network model, the first neural network model is continuously corrected and optimized through training data, the accuracy of a machine learning model for processing the data is improved through a supervised learning process, and therefore the first expression information is more accurately acquired. Accurate first expression information is obtained through the first neural network model, and therefore a foundation is laid for improving the detection precision of the face recognition technology.
Step S500: obtaining first facial thermal imaging information of the first user from the thermal imaging system;
specifically, the thermal imaging technology is to use an infrared detector and an optical imaging objective to receive an infrared radiation energy distribution pattern of a detected object and reflect the infrared radiation energy distribution pattern onto a photosensitive element of the infrared detector so as to obtain an infrared thermography image, wherein the thermography image corresponds to a thermal distribution field on the surface of an object. Biometric thermal imaging systems capture patterns generated when a person's blood moves under the facial skin, and since the pattern of blood vessels is unique, biometric identification maps are apparent even in the same egg twins. The thermal imaging system obtains different thermal imaging information because the muscle states of the face of the user are different under different facial expressions, and thus, the basis is laid for accurately obtaining the facial expression information of the first user by obtaining the first facial thermal imaging information.
Step S600: determining first facial expression information of a first user according to the first facial thermal imaging information and the first expression information;
specifically, the image acquisition system and the thermal imaging system input the obtained first facial thermal imaging information and the first expression information to the first face detection system, and the system performs analysis processing on the obtained data through machine learning, so as to determine facial expression information of the first user. The technical purpose of improving the accuracy of face detection by accurately analyzing the facial expression of the user is achieved.
Further, step S400 in the embodiment of the present application further includes:
step S401 a: inputting the first facial distinguishing feature data set as input data into a first neural network model, wherein the first neural network model is obtained by training a plurality of groups of training data, and each group of training data in the plurality of groups comprises: the first facial discriminatory feature dataset and identification information to identify facial expressions;
step S402 a: obtaining first output information of the first neural network model, wherein the first output information comprises first expression information of the first user.
In particular, the process of training the neural network model is essentially a supervised learning process. Each group of training data specifically comprises the first facial distinguishing feature data set and the identification information for identifying the facial expression. The first facial distinguishing feature data set is input, the neural network model outputs first expression information of the first user, and this output is verified against the identification information. If the output is consistent with the identified first expression information, supervised learning on this group of data is finished and the next group is processed; if the output is inconsistent, the neural network learning model is adjusted until its output is consistent with the identified first expression information, after which supervised learning on the next group of data proceeds. The neural network learning model is thus continuously corrected and optimized by the training data, the accuracy of the model in processing this information is improved through supervised learning, and the technical effect of acquiring the first user's first expression information more accurately is achieved.
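The supervised learning process described above can be pictured as an ordinary training loop. The sketch below adjusts the model whenever its output disagrees with the identification information; the optimizer, loss function and hyperparameters are assumptions and are not specified by the patent.

```python
# Minimal supervised training loop: each batch pairs distinguishing-feature
# vectors with identification labels; disagreement drives the correction step.
import torch
import torch.nn as nn

def train_expression_model(model, data_loader, epochs: int = 10, lr: float = 1e-3):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for features, labels in data_loader:   # labels = identification information
            logits = model(features)
            loss = loss_fn(logits, labels)     # mismatch with the labels
            optimizer.zero_grad()
            loss.backward()                    # adjust (correct) the model
            optimizer.step()
    return model
```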
Further, step S600 in the embodiment of the present application further includes:
Step S601: partitioning the face of the first user according to the first facial thermal imaging information to obtain different facial region information;
Step S602: obtaining temperature information corresponding to the different facial regions of the first user;
Step S603: inputting the different facial region information and the temperature information corresponding to the different facial regions into a second neural network model, wherein the second neural network model is obtained by training multiple sets of training data, and each set of training data in the multiple sets comprises: the different facial region information, the temperature information corresponding to the different facial regions and the identification information for identifying facial expression;
Step S604: obtaining second output information of the second neural network model, wherein the second output information comprises second expression information of the first user;
Step S605: determining the first facial expression information of the first user according to the first expression information and the second expression information.
Specifically, the facial temperatures of different facial regions of the user differ. The different facial region information of the first user and the temperature information corresponding to those regions are obtained and input into the second neural network model; because the neural network model can continuously learn from experience when processing the data, the first facial expression information acquired for the first user is more accurate.
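A rough sketch of the partitioning and per-region temperature extraction that would feed the second neural network is given below; the four-region layout and region names are assumptions, since the patent does not define the regions.

```python
# Assumed coarse partition of a facial thermogram into four regions, each
# summarized by its mean temperature; these values would serve as inputs to the
# second neural network model.
import numpy as np

def region_temperatures(thermogram: np.ndarray) -> dict:
    """thermogram: 2-D array of per-pixel temperatures over the face bounding box."""
    h, w = thermogram.shape
    regions = {
        "forehead":    thermogram[: h // 3, :],
        "left_cheek":  thermogram[h // 3 : 2 * h // 3, : w // 2],
        "right_cheek": thermogram[h // 3 : 2 * h // 3, w // 2 :],
        "mouth_chin":  thermogram[2 * h // 3 :, :],
    }
    return {name: float(region.mean()) for name, region in regions.items()}
```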
Further, step S400 in the embodiment of the present application further includes:
step S401 b: obtaining a first predetermined time threshold;
step S402 b: obtaining second expression information;
step S403 b: obtaining third expression information;
step S404 b: obtaining a first time, wherein the first time is the time when the second expression disappears;
step S405 b: obtaining a second time, wherein the second time is the time when the third expression appears, and the third expression is an adjacent expression after the second expression;
step S406 b: obtaining a first time difference according to the second time and the first time;
step S407 b: determining whether the first time difference is within the first predetermined time threshold;
step S408 b: if the first time difference is not within the first preset time threshold, obtaining a first deviation correction parameter;
step S409 b: and adjusting the first expression information according to the first deviation correcting parameter.
Specifically, the second expression and the third expression are adjacent expressions. The expression-switching time of the first user, that is, the first time difference, is determined from the time at which the first user's second expression disappears and the time at which the third expression appears. Whether the first time difference is within the first predetermined time threshold is then judged; if it is not, this indicates that the first user's expression switching is slow, and the obtained first expression information needs to be adjusted.
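The timing check reduces to a simple comparison. The sketch below uses an assumed threshold and an assumed attenuation-style correction parameter, since the patent does not give concrete values for either.

```python
# Hypothetical expression-switch timing check for Steps S401b-S409b.
def timing_correction(t_second_disappears: float, t_third_appears: float,
                      threshold_s: float = 0.5, correction: float = 0.9) -> float:
    """Return a deviation-correction factor for the first expression information."""
    first_time_difference = t_third_appears - t_second_disappears
    if first_time_difference <= threshold_s:
        return 1.0        # within the predetermined threshold: no adjustment
    return correction     # slow switching: attenuate the expression score
```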
Further, step S500 in the embodiment of the present application further includes:
step S501 a: obtaining first temperature information of a first facial region of the first user from the thermal imaging system;
step S502 a: obtaining second temperature information of a first facial region of the first user from the thermal imaging system;
step S503 a: obtaining first temperature difference information according to the second temperature information and the first temperature information;
step S504 a: obtaining a preset expression temperature difference threshold value;
step S505 a: judging whether the first temperature difference information is within the preset expression temperature difference threshold value or not;
step S506 a: if the first temperature difference information is within the preset expression temperature difference threshold value, determining that the first expression information is real expression information;
step S507 a: and if the first temperature difference information is not within the preset expression temperature difference threshold value, determining that the first expression information is not real expression information.
Specifically, first temperature information and second temperature information of the first facial region of the first user are acquired, these being the temperatures of that region at different time nodes. The first temperature difference information is obtained from them, and whether it is within the predetermined expression temperature difference threshold is judged; if it is not, this indicates that the temperature change of the first user's facial region exceeds the normal range, and the expression is non-real expression information.
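In code, this genuineness check is a threshold test on the temperature change of one facial region between two time nodes; the threshold value below is a placeholder assumption, not a value from the patent.

```python
# Hypothetical check for Steps S501a-S507a.
def is_real_expression(temp_t1: float, temp_t2: float,
                       max_delta_celsius: float = 1.5) -> bool:
    """True if the region's temperature change stays within the expression threshold."""
    return abs(temp_t2 - temp_t1) <= max_delta_celsius
```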
Further, step S500 in the embodiment of the present application further includes:
step S501 b: obtaining a first high temperature facial region from the thermal imaging system;
step S502 b: obtaining a second high temperature facial region from the thermal imaging system;
step S503 b: obtaining high-temperature transfer track information according to the first high-temperature face area and the second high-temperature face area;
step S504 b: and adjusting the first facial expression information according to the high-temperature transfer track information.
Specifically, the face detection system monitors a high-temperature face region of the first user in real time, so as to obtain transfer track information of the high-temperature face region, and adjusts the obtained facial expression information according to the track information.
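One simple way to represent such a transfer track (an assumption for illustration; the patent does not define the representation) is the displacement of the hottest region between two consecutive thermograms.

```python
# Hypothetical high-temperature transfer track: displacement of the hottest
# location between two consecutive facial thermograms.
import numpy as np

def high_temp_track(thermo_prev: np.ndarray, thermo_next: np.ndarray) -> tuple:
    """Return (dy, dx) displacement of the hottest pixel between two frames."""
    y1, x1 = np.unravel_index(np.argmax(thermo_prev), thermo_prev.shape)
    y2, x2 = np.unravel_index(np.argmax(thermo_next), thermo_next.shape)
    return (int(y2 - y1), int(x2 - x1))
```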
Further, step S400 in the embodiment of the present application further includes:
step S401: obtaining facial muscle trend information of a first user;
step S402: obtaining a second deviation rectifying parameter according to the facial muscle trend information;
step S403: and adjusting the first expression information according to the second deviation correcting parameter.
Specifically, the face detection system analyzes and compares the facial information of the first user obtained by the image acquisition system, so as to obtain the facial muscle trend information of the first user, and adjust the obtained facial expression information according to the facial muscle information of the first user, so as to obtain more accurate first expression information.
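As one hedged illustration of how a second deviation-correction parameter could be derived from muscle trend, the sketch below uses average landmark displacement between the neutral and expressive frames; this formulation and its gain value are assumptions, not the patent's method.

```python
# Hypothetical second correction parameter from facial landmark displacement
# between the neutral and expressive frames (N x 2 coordinate arrays).
import numpy as np

def muscle_trend_correction(neutral_pts: np.ndarray, expressive_pts: np.ndarray,
                            gain: float = 0.1) -> float:
    """Larger average landmark motion yields a larger correction parameter."""
    displacement = np.linalg.norm(expressive_pts - neutral_pts, axis=1).mean()
    return 1.0 + gain * float(displacement)
```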
Further, step S605 in the embodiment of the present application further includes:
step S6051: obtaining first scene information of the first predetermined area;
step S6052: judging whether second person information exists in the first scene information or not;
step S6053: if so, obtaining the expression information of the second person;
step S6054: inputting the expression information of the second person into a third neural network model to obtain first evaluation result information;
step S6055: judging whether the second person is abnormal or not according to the first evaluation result information;
step S6056: and if the second person is abnormal, obtaining first early warning information.
Specifically, the image acquisition system acquires scene information in the first predetermined area and analyzes whether information of a second person, other than the first user, exists in the area. If a second person exists, the second person's expression information is input into the neural network model to analyze whether the second person is abnormal; if the expression information is abnormal, indicating that the second person may intend to steal or cause malicious harm, the first early warning information is obtained to remind the first user to pay attention.
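The early-warning branch can be sketched as a simple lookup on the predicted expression class of the second person; the abnormal label set and warning message below are assumptions for illustration only.

```python
# Hypothetical early-warning rule for Steps S6051-S6056.
from typing import Optional

ABNORMAL_EXPRESSIONS = {"anger", "contempt"}  # assumed abnormal label set

def bystander_warning(second_person_expression: str) -> Optional[str]:
    """Return first early-warning text if the bystander's expression is abnormal."""
    if second_person_expression in ABNORMAL_EXPRESSIONS:
        return "First early warning: abnormal expression detected on a nearby person."
    return None
```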
In summary, the face detection method based on facial expression provided by the embodiments of the present application has the following technical effects:
By capturing and comparing different facial image information of the user, a facial distinguishing feature data set is obtained; by inputting this data set into a neural network model, the model can continuously accumulate experience in processing the data, so that the user's expression is analyzed accurately and the user's facial expression is then judged accurately from that expression together with the user's facial thermal imaging information. The technical purpose of improving the accuracy of face detection based on accurate analysis of the user's facial expression is thereby achieved.
Example two
Based on the same inventive concept as the face detection method based on the facial expression in the foregoing embodiment, the present invention further provides a face detection system based on the facial expression, as shown in fig. 2, the system includes:
a first obtaining unit 11, where the first obtaining unit 11 is configured to obtain first facial image information of a first user according to the image acquisition system, where the first facial image information is image information of the first user without facial expression;
a second obtaining unit 12, where the second obtaining unit 12 is configured to obtain second facial image information of the first user according to the image acquisition system;
a third obtaining unit 13, configured to obtain a first face distinguishing feature data set according to the second face image information and the first face image information;
a first input unit 14, where the first input unit 14 is configured to input the first facial distinguishing feature data set into a first neural network model, so as to obtain first expression information;
a fourth obtaining unit 15, wherein the fourth obtaining unit 15 is configured to obtain first facial thermal imaging information of the first user according to the thermal imaging system;
a fifth obtaining unit 16, wherein the fifth obtaining unit 16 is configured to determine first facial expression information of the first user according to the first facial thermal imaging information and the first expression information.
Further, the system further comprises:
a second input unit, configured to input the first facial distinguishing feature data set as input data into a first neural network model, where the first neural network model is obtained by training multiple sets of training data, and each set of training data in the multiple sets includes: the first facial distinguishing feature data set and identification information for identifying facial expressions;
a sixth obtaining unit, configured to obtain first output information of the first neural network model, where the first output information includes first expression information of the first user.
Further, the system further comprises:
a seventh obtaining unit configured to obtain different facial region information by partitioning the face of the first user according to the first facial thermography information;
an eighth obtaining unit, configured to obtain temperature information corresponding to different facial areas of the first user;
a third input unit, configured to input the different facial region information and the temperature information corresponding to the different facial regions into a second neural network model, where the second neural network model is obtained through training of multiple sets of training data, and each set of training data in the multiple sets includes: the different facial region information, the temperature information corresponding to the different facial regions and the identification information for identifying facial expression;
a ninth obtaining unit, configured to obtain second output information of the second neural network model, where the second output information includes second expression information of the first user;
a tenth obtaining unit, configured to determine first facial expression information of the first user according to the first expression information and the second expression information.
Further, the system further comprises:
a unit for obtaining a first predetermined time threshold;
an eleventh obtaining unit configured to obtain second expression information;
a twelfth obtaining unit for obtaining third expression information;
a thirteenth obtaining unit, configured to obtain a first time, where the first time is a time when the second expression disappears;
a fourteenth obtaining unit, configured to obtain a second time, where the second time is a time when the third expression appears, and the third expression is an adjacent expression after the second expression;
a fifteenth obtaining unit, configured to obtain a first time difference according to the second time and the first time;
a first judging unit, configured to judge whether the first time difference is within the first predetermined time threshold;
a sixteenth obtaining unit, configured to obtain a first deviation correcting parameter if the first time difference is not within the first predetermined time threshold;
and the first adjusting unit is used for adjusting the first expression information according to the first deviation correcting parameter.
Further, the system further comprises:
a seventeenth obtaining unit for obtaining first temperature information of a first facial region of the first user according to the thermal imaging system;
an eighteenth obtaining unit for obtaining second temperature information of a first face area of the first user according to the thermal imaging system;
a nineteenth obtaining unit, configured to obtain first temperature difference information according to the second temperature information and the first temperature information;
a twentieth obtaining unit configured to obtain a predetermined expression temperature difference threshold;
a second determination unit configured to determine whether the first temperature difference information is within the predetermined expression temperature difference threshold;
a twenty-first obtaining unit configured to determine that the first expression information is real expression information if the first temperature difference information is within the predetermined expression temperature difference threshold;
a twenty-second obtaining unit configured to determine that the first expression information is non-real expression information if the first temperature difference information is not within the predetermined expression temperature difference threshold.
Further, the system further comprises:
a twenty-third obtaining unit for obtaining a first high-temperature facial region according to the thermal imaging system;
a twenty-fourth obtaining unit for obtaining a second high temperature face region according to the thermal imaging system;
a twenty-fifth obtaining unit configured to obtain high-temperature transfer trajectory information from the first high-temperature face area and the second high-temperature face area;
and the second adjusting unit is used for adjusting the first facial expression information according to the high-temperature transfer track information.
Further, the system further comprises:
a twenty-sixth obtaining unit configured to obtain facial muscle trend information of the first user;
a twenty-seventh obtaining unit, configured to obtain a second deviation correcting parameter according to the facial muscle trend information;
and the third adjusting unit is used for adjusting the first expression information according to the second deviation correcting parameter.
Various changes and specific examples of the face detection method based on facial expression in the first embodiment of fig. 1 are also applicable to the face detection system based on facial expression in the present embodiment, and through the foregoing detailed description of the face detection method based on facial expression, a person skilled in the art can clearly know the face detection system based on facial expression in the present embodiment, so for the sake of brevity of the description, detailed descriptions thereof are omitted here.
Exemplary electronic device
The electronic device of the embodiment of the present application is described below with reference to fig. 3.
Fig. 3 illustrates a schematic structural diagram of an electronic device according to an embodiment of the present application.
Based on the inventive concept of a face detection method based on facial expression in the foregoing embodiments, the present invention further provides a face detection system based on facial expression, on which a computer program is stored, which when executed by a processor implements the steps of any one of the foregoing face detection methods based on facial expression.
In fig. 3, a bus architecture (represented by bus 300) is shown. Bus 300 may include any number of interconnected buses and bridges, and links together various circuits including one or more processors, represented by processor 302, and memory, represented by memory 304. The bus 300 may also link together various other circuits such as peripherals, voltage regulators and power management circuits, which are well known in the art and are therefore not described further herein. A bus interface 305 provides an interface between the bus 300 and the receiver 301 and transmitter 303. The receiver 301 and the transmitter 303 may be the same element, i.e., a transceiver, providing a means for communicating with various other systems over a transmission medium.
The processor 302 is responsible for managing the bus 300 and general processing, and the memory 304 may be used for storing data used by the processor 302 in performing operations.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create a system for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks. While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (8)

1. A face detection method based on facial expression, wherein the method is applied to a face detection system, the system comprises an image acquisition system and a thermal imaging system, and the method comprises:
acquiring first facial image information of a first user according to the image acquisition system, wherein the first facial image information is image information of the first user without facial expression;
acquiring second facial image information of a first user according to the image acquisition system;
obtaining a first facial distinguishing feature dataset according to the second facial image information and the first facial image information;
inputting the first facial distinguishing feature data set into a first neural network model to obtain first expression information;
obtaining first facial thermal imaging information of the first user from the thermal imaging system;
determining first facial expression information of a first user according to the first facial thermal imaging information and the first expression information;
wherein the determining first facial expression information of a first user from the first facial thermography information and the first expression information comprises:
according to the first face thermal imaging information, the face of the first user is partitioned to obtain different face area information;
obtaining temperature information corresponding to different facial areas of the first user;
inputting the different facial region information and the temperature information corresponding to the different facial regions into a second neural network model, wherein the second neural network model is obtained by training multiple sets of training data, and each set of training data in the multiple sets comprises: the different facial region information, the temperature information corresponding to the different facial regions and the identification information for identifying facial expression;
obtaining second output information of the second neural network model, the second output information including second expression information of the first user;
and determining the first facial expression information of the first user according to the first expression information and the second expression information.
2. The method of claim 1, wherein said inputting the first facial discriminatory feature dataset into a first neural network model, obtaining first expression information, comprises:
inputting the first facial distinguishing feature data set as input data into a first neural network model, wherein the first neural network model is obtained by training a plurality of groups of training data, and each group of training data in the plurality of groups comprises: the first facial distinguishing feature data set and identification information for identifying facial expressions;
obtaining first output information of the first neural network model, wherein the first output information comprises first expression information of the first user.
3. The method of claim 1, wherein the method comprises:
obtaining a first predetermined time threshold;
obtaining second expression information;
obtaining third expression information;
obtaining a first time, wherein the first time is the time when the second expression disappears;
obtaining a second time, wherein the second time is the time when the third expression appears, and the third expression is an adjacent expression after the second expression;
obtaining a first time difference according to the second time and the first time;
determining whether the first time difference is within the first predetermined time threshold;
if the first time difference is not within the first preset time threshold, obtaining a first deviation correction parameter;
and adjusting the first expression information according to the first deviation correcting parameter.
4. The method of claim 1, wherein the method comprises:
obtaining first temperature information of a first facial region of the first user from the thermal imaging system;
obtaining second temperature information of a first facial region of the first user from the thermal imaging system;
obtaining first temperature difference information according to the second temperature information and the first temperature information;
obtaining a preset expression temperature difference threshold value;
judging whether the first temperature difference information is within the preset expression temperature difference threshold value or not;
if the first temperature difference information is within the preset expression temperature difference threshold value, determining that the first expression information is real expression information;
and if the first temperature difference information is not within the preset expression temperature difference threshold value, determining that the first expression information is not real expression information.
5. The method of claim 1, wherein the method comprises:
obtaining a first high temperature facial region from the thermal imaging system;
obtaining a second high temperature facial region from the thermal imaging system;
obtaining high-temperature transfer track information according to the first high-temperature face area and the second high-temperature face area;
and adjusting the first facial expression information according to the high-temperature transfer track information.
6. The method of claim 1, wherein the method comprises:
obtaining facial muscle trend information of a first user;
obtaining a second deviation rectifying parameter according to the facial muscle trend information;
and adjusting the first expression information according to the second deviation correcting parameter.
7. A face detection system based on facial expression, the system comprising an image acquisition system and a thermal imaging system, wherein the system further comprises:
the first obtaining unit is used for obtaining first facial image information of a first user according to the image acquisition system, and the first facial image information is image information of the first user without facial expression;
a second obtaining unit for obtaining second facial image information of the first user according to the image acquisition system;
a third obtaining unit configured to obtain a first face discrimination feature data set from the second face image information and the first face image information;
a first input unit, configured to input the first facial distinguishing feature data set into a first neural network model, to obtain first expression information;
a fourth obtaining unit for obtaining first facial thermography information of the first user according to the thermography system;
a fifth obtaining unit configured to determine first facial expression information of a first user according to the first facial thermography information and the first expression information;
wherein the system further comprises:
a seventh obtaining unit configured to obtain different facial region information by partitioning the face of the first user according to the first facial thermography information;
an eighth obtaining unit, configured to obtain temperature information corresponding to different facial areas of the first user;
a third input unit, configured to input the different facial region information and the temperature information corresponding to the different facial regions into a second neural network model, where the second neural network model is obtained through training of multiple sets of training data, and each set of training data in the multiple sets includes: the different facial region information, the temperature information corresponding to the different facial regions and the identification information for identifying facial expression;
a ninth obtaining unit, configured to obtain second output information of the second neural network model, where the second output information includes second expression information of the first user;
a tenth obtaining unit, configured to determine first facial expression information of the first user according to the first expression information and the second expression information.
8. A face detection system based on facial expression comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the method of any one of claims 1-6 when executing the program.
CN202011483841.4A 2020-12-16 2020-12-16 Face detection method and system based on facial expression Active CN112507916B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011483841.4A CN112507916B (en) 2020-12-16 2020-12-16 Face detection method and system based on facial expression

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011483841.4A CN112507916B (en) 2020-12-16 2020-12-16 Face detection method and system based on facial expression

Publications (2)

Publication Number Publication Date
CN112507916A CN112507916A (en) 2021-03-16
CN112507916B (en) 2021-07-27

Family

ID=74972512

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011483841.4A Active CN112507916B (en) 2020-12-16 2020-12-16 Face detection method and system based on facial expression

Country Status (1)

Country Link
CN (1) CN112507916B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113313020B (en) * 2021-05-27 2023-04-07 成都威爱新经济技术研究院有限公司 Unmarked facial expression capturing method and system based on virtual human
CN117058131B (en) * 2023-10-11 2024-03-19 深圳市鹰瞳智能技术有限公司 Method for carrying out visual positioning on facial artery based on high-resolution thermal imaging

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110135242A (en) * 2019-03-28 2019-08-16 福州大学 Emotion identification device and method based on low resolution infrared thermal imaging depth perception
CN110991281A (en) * 2019-11-21 2020-04-10 电子科技大学 Dynamic face recognition method
CN111582059A (en) * 2020-04-20 2020-08-25 哈尔滨工程大学 Facial expression recognition method based on variational self-encoder
CN112036281A (en) * 2020-07-29 2020-12-04 重庆工商大学 Facial expression recognition method based on improved capsule network

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11073899B2 (en) * 2010-06-07 2021-07-27 Affectiva, Inc. Multidevice multimodal emotion services monitoring
US10417483B2 (en) * 2017-01-25 2019-09-17 Imam Abdulrahman Bin Faisal University Facial expression recognition
CN110751015B (en) * 2019-09-02 2023-04-11 合肥工业大学 Perfusion optimization and artificial intelligence emotion monitoring method for facial infrared heat map
CN110473630A (en) * 2019-09-28 2019-11-19 马学磊 A kind of tumor patient mental anguish assessment instrument and its appraisal procedure
CN111110256A (en) * 2020-01-13 2020-05-08 浙江科技学院 Emotion measuring method based on human face infrared thermal image
CN111904441A (en) * 2020-08-20 2020-11-10 金陵科技学院 Emotion analysis system and method based on multi-modal characteristics

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110135242A (en) * 2019-03-28 2019-08-16 福州大学 Emotion identification device and method based on low resolution infrared thermal imaging depth perception
CN110991281A (en) * 2019-11-21 2020-04-10 电子科技大学 Dynamic face recognition method
CN111582059A (en) * 2020-04-20 2020-08-25 哈尔滨工程大学 Facial expression recognition method based on variational self-encoder
CN112036281A (en) * 2020-07-29 2020-12-04 重庆工商大学 Facial expression recognition method based on improved capsule network

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Facial Expression Recognition Using Thermal Image; Guotai Jiang et al.; 2005 IEEE Engineering in Medicine and Biology 27th Annual Conference; 2006-04-10; pp. 631-633 *
Thermal face recognition using convolutional neural network; Zhan Wu et al.; 2016 International Conference on Optoelectronics and Image Processing (ICOIP); 2016-08-04; pp. 6-9 *
Facial expression recognition algorithm based on convolutional neural network and feature extraction; Wang Yiwen; Computer Knowledge and Technology; 2019-06-30; Vol. 15, No. 16, pp. 203-205 *
Research on facial expression recognition based on infrared thermal imaging; Gu Wenjuan; China Master's Theses Full-text Database, Information Science and Technology; 2014-04-15; pp. I138-1072 *

Also Published As

Publication number Publication date
CN112507916A (en) 2021-03-16

Similar Documents

Publication Publication Date Title
CN112507916B (en) Face detection method and system based on facial expression
CN107194361A (en) Two-dimentional pose detection method and device
EP2544147A1 (en) Biological information management device and method
CN104598888B (en) A kind of recognition methods of face gender
CN111829661A (en) Forehead temperature measurement method and system based on face analysis
CN111631682A (en) Physiological feature integration method and device based on trend-removing analysis and computer equipment
CN105303771A (en) Fatigue judging system and method
Ahmadi et al. Iris recognition system based on canny and LoG edge detection methods
Feng et al. Using eye aspect ratio to enhance fast and objective assessment of facial paralysis
CN113255802A (en) Intelligent skin tendering system based on infrared laser
CN110222608A (en) A kind of self-service examination machine eyesight detection intelligent processing method
Hai et al. Real time burning image classification using support vector machine
CN113017565A (en) Intelligent detection and analysis method and system for skin color spots
CN112998657A (en) Method and system for preventing skin allergy
CN111325282A (en) Mammary gland X-ray image identification method and device suitable for multiple models
Kumar et al. Performance improvement using an automation system for segmentation of multiple parametric features based on human footprint
CN110532845A (en) Biological authentication apparatus and biological authentication method
CN116115239A (en) Embarrassing working gesture recognition method for construction workers based on multi-mode data fusion
EP1810216A1 (en) 3d object recognition
CN111222374A (en) Lie detection data processing method and device, computer equipment and storage medium
CN112086193A (en) Face recognition health prediction system and method based on Internet of things
Ren et al. An improved method for Daugman's iris localization algorithm
CN112288934A (en) Control method based on intelligent gating
Logu et al. Real‐Time Mild and Moderate COVID‐19 Human Body Temperature Detection Using Artificial Intelligence
KR101615480B1 (en) Method and Apparatus for Automatic Threshold Control of Unsupervised Form for Real Time Detection of Spike in Time Series Data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant