CN109472206B - Risk assessment method, device, equipment and medium based on micro-expressions
- Publication number
- CN109472206B (application CN201811182176.8A)
- Authority
- CN
- China
- Prior art keywords
- data
- feature
- micro
- index
- evaluated
- Prior art date
- Legal status: Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
- G06V40/176—Dynamic expression
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q20/00—Payment architectures, schemes or protocols
- G06Q20/38—Payment protocols; Details thereof
- G06Q20/40—Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
- G06Q20/401—Transaction verification
- G06Q20/4016—Transaction verification involving fraud or risk level assessment in transaction processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q40/00—Finance; Insurance; Tax strategies; Processing of corporate or income taxes
- G06Q40/03—Credit; Loans; Processing thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Landscapes
- Engineering & Computer Science (AREA)
- Business, Economics & Management (AREA)
- Physics & Mathematics (AREA)
- Accounting & Taxation (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Finance (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Multimedia (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Strategic Management (AREA)
- General Business, Economics & Management (AREA)
- Computer Security & Cryptography (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Development Economics (AREA)
- Economics (AREA)
- Marketing (AREA)
- Technology Law (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a risk assessment method, device, equipment and medium based on micro-expressions. The method comprises the following steps: acquiring first video data of an object to be evaluated; performing micro-expression recognition on the first video data by using a micro-expression recognition model to obtain first micro-expression data of the object to be evaluated; establishing facial micro-expression baseline data of the object to be evaluated according to the first micro-expression data; acquiring second video data of the object to be evaluated; performing micro-expression recognition on the second video data by using the micro-expression recognition model to obtain second micro-expression data of the object to be evaluated; and performing risk assessment on the feature data to be identified of each index feature in the second micro-expression data by using the normal numerical range of each index feature in the facial micro-expression baseline data, to obtain a risk assessment result of the object to be assessed. The technical scheme of the invention realizes automatic risk assessment, improves the accuracy of credit risk assessment, and reduces credit risk.
Description
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a risk assessment method, apparatus, device, and medium based on micro-expressions.
Background
With the gradual adoption of software systems in the financial field, such systems are now widely applied to lending. Common loan financial systems, such as credit systems, can digitize and automate loan form filling, signing, approval, and disbursement processes.
However, in the current credit process, risk control is mostly carried out through face-to-face communication between a credit reviewer and a borrower. Because the reviewer may be inattentive or lack a deep understanding of the borrower's facial expressions, subtle expression changes, and in particular individual differences, can be overlooked. It is difficult to distinguish risk anomalies accurately through subjective judgment alone; an inexperienced credit reviewer, especially, often fails to identify potentially high-risk clients, which leads to higher credit risk.
Disclosure of Invention
The embodiment of the invention provides a risk assessment method, device, equipment and medium based on micro-expressions, which are used for solving the problem of high credit risk caused by low accuracy of credit risk assessment at present.
A risk assessment method based on micro-expressions, comprising:
acquiring first video data of an object to be evaluated, wherein the first video data is video data of a preset basic problem answered by the object to be evaluated;
performing microexpressive recognition on the first video data by using a preset microexpressive recognition model to obtain first microexpressive data of the object to be evaluated, wherein the preset microexpressive recognition model recognizes preset index features and feature data of each index feature from the first video data, and the first microexpressive data comprises feature data of each index feature;
establishing facial micro-expression baseline data of the object to be evaluated according to the characteristic data of each index characteristic in the first micro-expression data, wherein the facial micro-expression baseline data comprises a normal numerical range of each index characteristic;
acquiring second video data of the object to be evaluated, wherein the second video data is video data of the object to be evaluated for answering a preset evaluation problem;
performing micro-expression recognition on the second video data by using the micro-expression recognition model to obtain second micro-expression data of the object to be evaluated, wherein the preset micro-expression recognition model recognizes the index features and the feature data to be recognized of each index feature from the second video data, and the second micro-expression data comprises the feature data to be recognized of each index feature;
And performing risk assessment on feature data to be identified of each index feature in the second micro-expression data by using a normal numerical range of each index feature in the facial micro-expression baseline data to obtain a risk assessment result of the object to be assessed.
A microexpressive based risk assessment device, comprising:
the first acquisition module is used for acquiring first video data of an object to be evaluated, wherein the first video data is video data of a preset basic question answered by the object to be evaluated;
the first recognition module is used for carrying out micro-expression recognition on the first video data by using a preset micro-expression recognition model to obtain first micro-expression data of the object to be evaluated, wherein the preset micro-expression recognition model is used for recognizing preset index features and feature data of each index feature from the first video data, and the first micro-expression data comprises feature data of each index feature;
the base line establishing module is used for establishing facial micro-expression base line data of the object to be evaluated according to the characteristic data of each index characteristic in the first micro-expression data, wherein the facial micro-expression base line data comprises a normal numerical range of each index characteristic;
The second acquisition module is used for acquiring second video data of the object to be evaluated, wherein the second video data is video data of the object to be evaluated for answering a preset evaluation problem;
the second recognition module is used for performing micro-expression recognition on the second video data by using the micro-expression recognition model to obtain second micro-expression data of the object to be evaluated, wherein the preset micro-expression recognition model recognizes the index features from the second video data and the feature data to be recognized of each index feature, and the second micro-expression data comprises the feature data to be recognized of each index feature;
and the risk assessment module is used for carrying out risk assessment on the feature data to be identified of each index feature in the second micro-expression data by using the normal numerical range of each index feature in the facial micro-expression baseline data to obtain a risk assessment result of the object to be assessed.
A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the microexpressive based risk assessment method described above when the computer program is executed.
A computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the microexpressive based risk assessment method described above.
According to the micro-expression based risk assessment method, device, equipment and medium, first video data of the object to be evaluated answering preset basic questions is collected and recognized with a preset micro-expression recognition model, yielding first micro-expression data that contains the object's feature data for each index feature. Facial micro-expression baseline data, comprising a normal numerical range for each index feature, is established from the first micro-expression data, so that an independent set of facial micro-expression baseline data is established for each object to be evaluated. On this basis, second video data of the object answering preset evaluation questions is acquired and recognized with the same micro-expression recognition model, yielding second micro-expression data that contains the feature data to be identified for each index feature; risk evaluation is then performed on the feature data to be identified of each index feature in the second micro-expression data with the object's facial micro-expression baseline data as reference, giving the risk evaluation result. On the one hand, performing micro-expression recognition with a preset model, setting multiple micro-expression related index features, and evaluating risk by comparing the feature data of these index features realizes automatic, micro-expression based risk evaluation, avoids the low accuracy of manual credit risk assessment, and improves the accuracy of risk evaluation. On the other hand, establishing independent facial micro-expression baseline data for each object to be evaluated allows individual differences to be distinguished during risk evaluation, which further improves the accuracy of credit risk assessment and reduces credit risk.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments of the present invention will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic view of an application environment of a risk assessment method based on micro-expressions according to an embodiment of the invention;
FIG. 2 is a flowchart of a micro-expression based risk assessment method according to an embodiment of the present invention;
FIG. 3 is a flowchart of step S2 in a microexpressive based risk assessment method according to an embodiment of the invention;
FIG. 4 is a flowchart of step S3 in a microexpressive based risk assessment method according to an embodiment of the invention;
FIG. 5 is a flowchart of step S6 in a micro-expression based risk assessment method according to an embodiment of the invention;
FIG. 6 is a schematic diagram of a risk assessment apparatus based on micro-expressions according to an embodiment of the invention;
FIG. 7 is a schematic diagram of a computer device in accordance with an embodiment of the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The risk assessment method based on micro-expressions provided by the application can be applied to the application environment shown in FIG. 1. The application environment comprises a server side and a client side connected through a network, which may be wired or wireless. The client specifically includes, but is not limited to, personal computers, notebook computers, smart phones, tablet computers and portable wearable devices; the server side may be implemented by an independent server or by a server cluster composed of a plurality of servers. The client sends collected video data of the object to be evaluated to the server side, and the server side performs risk assessment according to the received video data.
In an embodiment, as shown in fig. 2, a risk assessment method based on micro-expressions is provided, and the method is applied to the server in fig. 1 for illustration, and is described in detail as follows:
S1: Acquiring first video data of the object to be evaluated, wherein the first video data is video data of the object to be evaluated answering preset basic questions.
Specifically, the object to be evaluated is a user on whom risk evaluation needs to be performed. When risk evaluation is required, the client sends a basic question acquisition request to the server side, the server side returns preset basic questions, and the client displays them to the object to be evaluated. After the acquisition device has collected video data of the object to be evaluated answering the basic questions (specifically, a video file), the client sends this video data to the server side, which receives it as the first video data of the object to be evaluated.
The basic questions may be personal information questions such as age, sex, identification card number, mobile phone number, home address, etc.
S2: performing micro-expression recognition on the first video data by using a preset micro-expression recognition model to obtain first micro-expression data of an object to be evaluated, wherein the preset micro-expression recognition model recognizes preset index features and feature data of each index feature from the first video data, and the first micro-expression data comprises feature data of each index feature.
Specifically, the server side inputs the video data acquired in step S1 into a preset micro-expression recognition model, which extracts video frames from the video data, performs micro-expression recognition on the face images in the video frames, and extracts feature data of each preset index feature of the object to be evaluated in its normal micro-expression state.
The micro-expression recognition model may specifically be a neural network model based on deep learning. The preset index features include, but are not limited to, AU (Action Unit) features, head action features, eye action features, and the like, and the feature data of an index feature may specifically be the frequency or number of occurrences of the index feature within a preset time period.

For example, suppose the preset time period is 2 minutes and the index features include eye action features and head action features, where the eye action features include the blink feature and the head action features include the leftward and rightward head twist features. The first micro-expression data obtained through recognition by the micro-expression recognition model then includes: 12 blinks within the 2 minutes, 6 leftward head twists, and 4 rightward head twists.

Further, if the feature data of a certain index feature is not identified, the feature data of that index feature is set to preset default data derived from the normal distribution of the population.

For example, most people normally twist their head to the left once per minute, so once per minute is taken as the default data for the leftward head twist feature. If no feature data for the leftward head twist feature is identified for the object to be evaluated, its feature data is set to the default of one leftward twist per minute.
S3: and establishing facial micro-expression baseline data of the object to be evaluated according to the characteristic data of each index characteristic in the first micro-expression data of the object to be evaluated, wherein the facial micro-expression baseline data comprise a normal numerical range of each index characteristic.
Specifically, the first micro-expression data contains the feature data of each index feature. Taking the feature data of each index feature as the center, the range is expanded according to a preset proportion to obtain the normal numerical range of each index feature, which is used as the facial micro-expression baseline data of the object to be evaluated.

For example, if the feature data of the blink feature is 12 blinks per minute, the range is expanded with 12 times/min as the center according to the preset proportion; assuming the preset proportion is 50%, 6 to 18 blinks per minute is taken as the normal numerical range of the blink feature.

It should be noted that the facial micro-expression baseline data of the object to be evaluated refers to micro-expression data of the object in a normal micro-expression state. In this embodiment, the preset basic questions are questions whose correct answers are clearly known to the object, and the micro-expressions displayed when answering them are considered normal; by default, the object to be evaluated is therefore in a normal micro-expression state when answering the basic questions. Baseline data obtained from feature data in this normal state can accurately express that state and serve as the basis for risk evaluation of the object. Because each set of baseline data is specific to a particular object to be evaluated, individual differences are distinguished, thereby improving the accuracy of risk evaluation.
S4: and acquiring second video data of the object to be evaluated, wherein the second video data is video data of the object to be evaluated for answering a preset evaluation problem.
The preset assessment questions are sensitive questions set for credit risk; the credit risk of the object to be assessed can be judged from its answers to these questions. For example, the assessment questions may concern loan purpose, personal income, repayment willingness, and the like.
Specifically, the client sends an evaluation problem acquisition request to the server, the server sends a preset evaluation problem to the client, the client displays the evaluation problem to an object to be evaluated, and after the video data of the object to be evaluated for answering the evaluation problem is acquired by the acquisition device, the video data is sent to the server, wherein the video data can be a video file. The server receives the video data sent by the client and serves as second video data of the object to be evaluated.
S5: and carrying out micro-expression recognition on the second video data by using a preset micro-expression recognition model to obtain second micro-expression data of the object to be evaluated, wherein the preset micro-expression recognition model recognizes preset index features and feature data to be recognized of each index feature from the second video data, and the second micro-expression data comprises feature data to be recognized of each index feature.
Specifically, the server performs micro-expression recognition on the second video data acquired in step S4 by using the same micro-expression recognition model as that in step S2, and the recognition process is consistent with the recognition process of performing micro-expression recognition on the first video data by using the micro-expression recognition model in step S2, so that repetition is avoided, and no further description is given here.
The second micro-expression data obtained after the server side performs micro-expression recognition is the feature data of the index features of the micro-expressions shown by the object to be evaluated when answering the evaluation questions; these are taken as the feature data to be identified. By analyzing the feature data to be identified, the server side can judge whether the micro-expression state of the object to be evaluated is normal, and thereby determine whether a credit risk exists and to what degree.
S6: and performing risk assessment on the feature data to be identified of each index feature in the second microexpressive data by using the normal numerical range of each index feature in the facial microexpressive baseline data of the object to be assessed, so as to obtain a risk assessment result of the object to be assessed.
Specifically, the server performs risk assessment on the second micro-expression data obtained in the step S5 according to the facial micro-expression baseline data of the object to be assessed obtained in the step S3, and a risk assessment result of the object to be assessed is obtained.
It should be noted that the risk assessment result may be a quantized value, where a larger value indicates higher risk; it may also be a preset assessment level, for example first level, second level and so on, where a higher level indicates greater risk; or it may be defined in other ways, set according to the needs of the practical application, which is not limited herein.

In a specific embodiment, the risk assessment process may be as follows: for each index feature, compute the deviation between the feature data to be identified and the normal numerical range of that index feature; with M preset index features, M deviation values are obtained. The server side sorts the M deviation values in descending order, selects the N largest, calculates the average of these N deviation values, and uses the average as the risk assessment result of the object to be assessed, where M and N are both positive integers and M is greater than or equal to N.
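A minimal Python sketch of this deviation-ranking embodiment follows; it is illustrative only, and the deviation values are hypothetical, since the text does not fix how a single deviation value is computed.

```python
# Minimal sketch of the deviation-ranking assessment described above.
def assess_by_top_deviations(deviations, n):
    """Average the n largest of the M deviation values (M >= n)."""
    top_n = sorted(deviations, reverse=True)[:n]
    return sum(top_n) / len(top_n)

# Example: M = 5 index features, keep the n = 3 largest deviations.
print(assess_by_top_deviations([0.39, 0.05, 0.12, 0.67, 0.0], 3))  # ~0.393
```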
In this embodiment, first video data of the object to be evaluated answering preset basic questions is collected and recognized with a preset micro-expression recognition model, obtaining first micro-expression data that contains the object's feature data for each index feature. Facial micro-expression baseline data, comprising a normal numerical range for each index feature, is then established from the first micro-expression data, so that an independent set of facial micro-expression baseline data is established for each object to be evaluated. On this basis, second video data of the object answering preset evaluation questions is acquired and recognized with the same micro-expression recognition model, obtaining second micro-expression data that contains the feature data to be identified for each index feature; risk evaluation is then performed on the feature data to be identified of each index feature in the second micro-expression data with the object's facial micro-expression baseline data as reference, yielding the risk evaluation result. Performing micro-expression recognition through a preset model, setting multiple micro-expression related index features, and evaluating risk by comparing the feature data of these index features realizes automatic, micro-expression based risk evaluation of the object to be evaluated, avoids the low accuracy of manual credit risk assessment, and improves the accuracy of risk evaluation. Moreover, establishing independent facial micro-expression baseline data for each object to be evaluated allows individual differences to be distinguished during risk evaluation, further improving the accuracy of credit risk assessment and reducing credit risk.
In one embodiment, the preset micro-expression recognition model includes a face detection model, an emotion discrimination model, a head gesture recognition model, a blink detection model, and an iris edge detection model, and the preset index features include muscle action features, head gesture features, blink features, and eye movement change features.
Specifically, the face detection model is used to identify the face region in video frame images of the video data; the emotion discrimination model is used to identify muscle action features of the face region, i.e. AU features, such as inner eyebrow raising, mouth corner lifting, nose wrinkling and the like; the head gesture recognition model is used to recognize the offset angles of the head in different preset directions in the face region, i.e. head gesture features, such as twisting left, twisting right and the like; the blink detection model is used to detect blink frequency, i.e. blink features, from the face regions of video frame images over a continuous time period; and the iris edge detection model is used to identify eye movement changes in the face region, i.e. eye movement change features, such as the eyes moving left, moving right and the like.
Preferably, in the present embodiment, the feature data of each index feature may be a frequency of occurrence of the index feature.
Further, as shown in fig. 3, in step S2, the preset micro-expression recognition model is used to perform micro-expression recognition on the first video data to obtain first micro-expression data of the object to be evaluated, and specifically includes the following steps:
S21: Extracting a preset number of video frame images from the first video data according to a preset extraction mode.
Specifically, the preset extraction mode may be extracting one video frame image at a fixed frame interval from the video frame images contained in the first video data, so as to obtain the preset number of video frame images.
For example, the first video data contains 1000 video frame images, and 200 video frame images are extracted from the first video data in such a manner that one video frame image is extracted every 5 frames.
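A short Python sketch of this fixed-interval extraction is given below, using OpenCV; the every-5th-frame interval follows the example above, while the actual extraction mode and interval are configurable.

```python
# Sketch of the fixed-interval frame extraction in step S21.
import cv2

def extract_frames(video_path, interval=5):
    frames = []
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % interval == 0:  # keep one frame every `interval` frames
            frames.append(frame)
        idx += 1
    cap.release()
    return frames  # a 1000-frame video with interval 5 yields 200 frames
```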
S22: and carrying out face detection on the video frame image by using a face detection model, and extracting a face picture in the video frame image.
Specifically, the preset face detection model may adopt a cascaded convolutional neural network (Cascade CNN) model to realize rapid face detection. The cascade structure of the Cascade CNN model contains 6 convolutional neural networks: 3 are used to classify faces versus non-faces, and the other 3 are used for bounding-box calibration of the face region, thereby obtaining the face picture.

When the face detection model is used for face picture extraction, if a video frame image contains a plurality of faces, the face region occupying the largest image area is extracted as the face picture; that is, the object to be evaluated is required to be within the preset core region of the video capture when answering questions.
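A minimal sketch of this largest-face rule follows; the (x, y, w, h) bounding-box format is an assumption for illustration.

```python
# Sketch of the largest-face rule in step S22: when several faces are
# detected in one frame, keep the bounding box with the largest area.
def pick_largest_face(boxes):
    """boxes: list of (x, y, w, h) face bounding boxes."""
    return max(boxes, key=lambda b: b[2] * b[3])

print(pick_largest_face([(10, 10, 40, 50), (100, 80, 120, 140)]))  # (100, 80, 120, 140)
```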
S23: and inputting the face picture into an emotion distinguishing model, and carrying out micro-expression recognition to obtain feature data of each muscle action feature of the object to be evaluated.
In particular, the muscle action features, i.e. AU features, include basic muscle action units of the face, such as inner eyebrow lifting, mouth corner lifting, nose muxing, etc.
In the emotion discrimination model, a plurality of muscle action features are preset. Through micro-expression recognition on the face pictures, the number of occurrences of each muscle action feature within the time period corresponding to the first video data is identified; the frequency of occurrence of each muscle action feature is then calculated from that number and the time period, and used as the feature data of the muscle action feature.

The emotion discrimination model may specifically adopt a Deep Convolutional Neural Network (DCNN) model together with an 80-layer residual network (ResNet-80) model.

The DCNN model is used to identify the position coordinates of facial feature points, which include, but are not limited to, the left eye, right eye, nose tip, left mouth corner and right mouth corner; the recognition result of the DCNN model is then input into the ResNet-80 model.

In the ResNet-80 model, the feature values of the facial feature points are extracted from the face picture according to the input position coordinates and compared with the preset standard feature values of each AU feature. Whether the facial feature points satisfy each AU feature is judged from the similarity comparison result; if so, one occurrence of that AU feature is recorded. After the comparison is completed for every face picture, the total number of occurrences of each AU feature is counted, and the feature data of each muscle action feature is determined from the totals.
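The counting step can be sketched as below; the cosine-similarity metric and the 0.9 threshold are illustrative assumptions, as the text does not specify the similarity measure.

```python
# Sketch of the AU counting in the ResNet-80 stage: per face picture,
# compare extracted feature values with each AU's standard values and
# record one occurrence when the similarity clears a threshold.
import math

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def count_au_occurrences(per_frame_features, au_standards, threshold=0.9):
    counts = {au: 0 for au in au_standards}
    for features in per_frame_features:
        for au, standard in au_standards.items():
            if cosine_similarity(features, standard) >= threshold:
                counts[au] += 1  # this AU appeared once in this frame
    return counts
```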
S24: and inputting the face picture into a head gesture recognition model, and performing head gesture recognition to obtain feature data of each head gesture feature of the object to be evaluated.
Specifically, the head gesture recognition model may use a 10-layer convolutional neural network model, performing convolution operations with different convolution kernels to obtain feature maps of the head features. A nonlinear activation function is then applied to the feature maps obtained after convolution, to avoid the insufficient expressive capacity of a purely linear model and to prevent overfitting; the nonlinear functions that may be adopted include, but are not limited to, sigmoid, tanh, ReLU, and the like.
The preset head pose features may include a plurality of preset directional head twist features, wherein the preset directions include, but are not limited to, up and down, side to side, front and back, etc.
The head gesture recognition model determines head twists by judging whether the head twist angle in each preset direction exceeds the angle threshold for that direction; if it does, one head twist in that direction is recorded. The total number of occurrences of each head twist feature within the time period corresponding to the first video data is counted, the frequency of occurrence of each head twist feature is calculated from that total and the time period, and the frequency is used as the feature data of the head twist feature.

For example, in a specific embodiment, after the head gesture recognition model performs head gesture recognition on the face pictures, the feature data obtained for the object to be evaluated is a leftward head twist frequency of 6 times/min and a rightward head twist frequency of 8 times/min.
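A sketch of the threshold-and-count logic follows; the per-frame twist angles would come from the head gesture model, and the 15-degree threshold is a hypothetical value, not one specified by the text.

```python
# Sketch of step S24's twist counting for one preset direction.
def count_twists(angles_deg, threshold_deg=15.0):
    """Count rising crossings of the threshold, so one sustained twist
    is counted once rather than once per frame."""
    twists, above = 0, False
    for a in angles_deg:
        if a > threshold_deg and not above:
            twists += 1
        above = a > threshold_deg
    return twists

# The angle rises above 15 degrees twice -> 2 twists.
print(count_twists([3, 18, 21, 9, 2, 25, 6]))  # 2
```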
S25: and inputting the face picture into a blink detection model, detecting the blink times to obtain blink frequency, and taking the blink frequency as characteristic data of blink characteristics of the object to be evaluated.
Specifically, the blink detection model may use a logistic regression (Logistic Regression, LR) model to determine whether the object to be evaluated in a face picture is blinking; the logistic regression model is a classification model in machine learning.

Each face picture is input into the preset blink detection model to obtain the number of face pictures in which a blink occurs within the time period corresponding to the first video data, and the quotient of this number and the time period is taken as the blink frequency, i.e. the feature data of the blink feature.
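A minimal sketch of this quotient follows, assuming one boolean per face picture from the blink classifier.

```python
# Sketch of step S25: blink count divided by the time period in minutes.
def blink_frequency(blink_flags, minutes):
    """blink_flags: True where a face picture was classified as blinking."""
    return sum(blink_flags) / minutes

print(blink_frequency([True, False, True] * 8, 2.0))  # 16 blinks / 2 min = 8.0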
S26: and inputting the face picture into an iris edge detection model, and detecting eye movement change to obtain characteristic data of each eye movement change characteristic of the object to be evaluated.
Specifically, the iris edge detection model determines the movement track of the eye center within the orbit region by detecting the positions of the iris edges, thereby obtaining the eye movement changes, i.e. the feature data of the preset eye movement change features, which include, but are not limited to, the eyes moving left, right, up, down, and the like.

In the iris edge detection model, the eyeball center coordinate point in the face picture is acquired, and an orbit region of preset radius centered on that point is cropped out. The positions of the iris edge points in the eye region are detected, the closed region enclosed by the iris edge points is determined from their positions, and the center of this closed region is taken as the center position of the eye. The movement track of the eye center within the orbit region is then tracked in real time, the movement direction of the eye is identified from the track, the number of eye movements in each preset direction is counted, and the frequency of eye movement in each preset direction, i.e. the feature data of each eye movement change feature, is calculated from that count and the time period corresponding to the first video data.
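A sketch of the eye-center computation follows; taking the centroid of the iris edge points as the center of the enclosed region is an illustrative simplification, and only the horizontal direction is shown (up/down is analogous).

```python
# Sketch of step S26: eye center from iris edge points, and movement
# direction from consecutive centers.
def eye_center(edge_points):
    xs, ys = zip(*edge_points)
    return sum(xs) / len(xs), sum(ys) / len(ys)

def horizontal_direction(prev_center, cur_center):
    dx = cur_center[0] - prev_center[0]
    return "right" if dx > 0 else "left" if dx < 0 else "still"

c1 = eye_center([(10, 5), (14, 5), (12, 8), (12, 2)])  # (12.0, 5.0)
c2 = eye_center([(13, 5), (17, 5), (15, 8), (15, 2)])  # (15.0, 5.0)
print(horizontal_direction(c1, c2))  # right
```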
It should be noted that, there is no necessary sequence of execution among step S23, step S24, step S25 and step S26, which may be executed in parallel, and the present invention is not limited thereto.
S27: the characteristic data of each muscle action characteristic, the characteristic data of each head posture characteristic, the characteristic data of blink characteristic and the characteristic data of each eye change characteristic are taken as first micro-expression data of an object to be evaluated.
Specifically, the feature data of each muscle action feature obtained in step S23, the feature data of each head posture feature obtained in step S24, the feature data of the blink feature obtained in step S25, and the feature data of each eye change feature obtained in step S26 are combined into first microexpressive data of the subject to be evaluated.
In this embodiment, a preset number of video frame images is extracted from the first video data according to a preset extraction mode, face pictures in the video frame images are extracted by the face detection model, and feature recognition is then performed on the face pictures by the emotion discrimination model, head gesture recognition model, blink detection model and iris edge detection model contained in the micro-expression recognition model. The resulting feature data of the different index features is combined into the first micro-expression data of the object to be evaluated. The facial micro-expressions are thus mined from multiple different angles and dimensions, so the micro-expression characteristics of the object to be evaluated can be reflected more accurately, providing an accurate data basis for subsequently establishing the micro-expression baseline from the first micro-expression data and, in turn, for obtaining a more accurate risk assessment result scientifically and reasonably.

It can be understood that when the micro-expression recognition model is used to perform micro-expression recognition on the second video data to obtain the second micro-expression data of the object to be evaluated, the same recognition method as in the above embodiment must also be used, so that the risk evaluation is based on the same data acquisition standard, ensuring the accuracy of the evaluation.
In one embodiment, as shown in fig. 4, in step S3, facial microexpressive baseline data of the object to be evaluated is established according to feature data of each index feature in first microexpressive data of the object to be evaluated, and specifically includes the following steps:
S31: Acquiring the preset minimum scaling factor and preset maximum scaling factor corresponding to each index feature in the first micro-expression data.
Specifically, the server sets a preset minimum scaling factor and a preset maximum scaling factor for each index feature in advance. The preset minimum scaling factor and the preset maximum scaling factor are used to adjust the normal range of values of the index feature.
For example, for the blink feature, the corresponding preset minimum scaling factor may be 0.5 and the corresponding preset maximum scaling factor may be 1.5.
S32: for each index feature, a first product between feature data of the index feature and a preset minimum scaling factor corresponding to the index feature and a second product between feature data of the index feature and a preset maximum scaling factor corresponding to the index feature are calculated.
Specifically, assuming that the value of the feature data of a certain index feature is w, the preset minimum scaling factor corresponding to the index feature is a, the preset maximum scaling factor is b, the value of the first product is w×a, and the value of the second product is w×b.
Continuing with the example of the blink feature in step S31, if the feature data of the blink feature is 12 times/min, i.e. w=12, the value of the first product is 12×0.5=6, and the value of the second product is 12×1.5=18.
S33: and determining a value range between a first product and a second product corresponding to each index feature as a normal value range of each index feature, and obtaining facial microexpressive baseline data of the object to be evaluated.
Specifically, the range of values greater than or equal to the first product and less than or equal to the second product is determined as the normal numerical range of the index feature.
Continuing with the blink feature example in step S31, the normal range of values is [6, 18].
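A minimal sketch of steps S31 to S33 follows; the factor values mirror the blink example above.

```python
# Sketch of the baseline construction: expand each index feature's value
# into its normal range via the preset minimum/maximum scaling factors.
def build_baseline(feature_data, scale_factors):
    """feature_data: {feature: value}; scale_factors: {feature: (min, max)}."""
    baseline = {}
    for feature, w in feature_data.items():
        a, b = scale_factors[feature]
        baseline[feature] = (w * a, w * b)  # (first product, second product)
    return baseline

print(build_baseline({"blink": 12}, {"blink": (0.5, 1.5)}))  # {'blink': (6.0, 18.0)}
```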
In this embodiment, the feature data of each index feature in the first micro-expression data is reasonably expanded according to the preset minimum and maximum scaling factors corresponding to that index feature, yielding the normal numerical range of each index feature, which serves as the facial micro-expression baseline data of the object to be evaluated. The baseline data is thereby determined scientifically and reasonably, facilitating accurate risk evaluation of the object to be evaluated against it.
In an embodiment, as shown in fig. 5, in step S6, risk assessment is performed on feature data to be identified of each index feature in the second micro-expression data by using a normal numerical range of each index feature in the facial micro-expression baseline data of the object to be assessed, so as to obtain a risk assessment result of the object to be assessed, which specifically includes the following steps:
S61: For each index feature, if the feature data to be identified of the index feature meets the requirement of the normal numerical range of that index feature, confirming that the index feature of the object to be evaluated carries no risk; otherwise, confirming that the index feature carries risk.
Specifically, for each preset index feature, it is judged whether the feature data to be identified belongs to the normal numerical range of that index feature. If it does, i.e. it meets the requirement of the normal numerical range, the index feature is confirmed to carry no risk; if it does not, the index feature is confirmed to carry risk.

In a specific embodiment, the server side sets a one-bit risk identifier for each index feature to mark whether the index feature carries risk: if the index feature carries risk, its risk identifier is set to 1; otherwise it is set to 0.
S62: for each index feature, if the index feature has risk, calculating the proportion of the feature data to be identified of the index feature exceeding the normal numerical range of the index feature, and determining the risk score of the index feature of the object to be evaluated according to the proportion.
Specifically, according to the judgment result of step S61, if an index feature carries risk, its feature data to be identified is obtained and compared with the normal numerical range of the index feature, and the proportion by which the feature data exceeds the normal numerical range is calculated: either the proportion above the maximum of the range or the proportion below the minimum of the range.

For example, taking the blink feature in step S31, the normal numerical range is [6, 18]. If the feature data to be identified is 25, i.e. the blink frequency of the object to be evaluated when answering the evaluation questions is 25 times/min, the proportion by which it exceeds the normal range is the proportion above the maximum 18, i.e. (25-18)/18 ≈ 39%; if the feature data to be identified is 2, i.e. the blink frequency is 2 times/min, the proportion is that below the minimum 6, i.e. (6-2)/6 ≈ 66.7%.

The server side presets a linear function relation X = λ·h + δ between the proportion h and the risk score X, where λ and δ are preset parameters. Each index feature has its own independent parameters λ and δ; that is, the λ and δ corresponding to different index features may be the same or different, set according to the needs of the practical application.

For each risky index feature, the proportion by which its feature data to be identified exceeds the normal numerical range is substituted into the corresponding linear function relation, and the risk score of that index feature is calculated.
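A minimal sketch of steps S61 and S62 together follows; the λ and δ values below are hypothetical per-feature parameters, not values from the text.

```python
# Sketch: flag an index feature as risky when its value falls outside the
# normal range, compute the out-of-range proportion h, and map it to a
# risk score X = lambda * h + delta.
def risk_score(value, low, high, lam, delta):
    if low <= value <= high:
        return None  # no risk for this index feature
    h = (value - high) / high if value > high else (low - value) / low
    return lam * h + delta

# Blink example from the text: range [6, 18], observed 25 -> h ~ 0.39.
print(risk_score(25, 6, 18, lam=100, delta=15))  # ~53.9
```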
S63: and carrying out weighted calculation on the risk scores of the index features of the risk of the object to be evaluated to obtain the risk grade scores of the object to be evaluated.
Specifically, according to the risk score of each index feature with risk obtained in step S62, weighting calculation is performed according to the preset risk weight of each index feature, and the obtained result is used as the risk grade score of the object to be evaluated.
S64: according to the corresponding relation between the preset risk level score range and the risk level, determining the risk level score range of the risk level score of the object to be evaluated, and determining the risk level corresponding to the risk level score range as a risk evaluation result of the object to be evaluated.
Specifically, the server sets a corresponding relation between a risk level score range and a risk level in advance, determines a risk level score range in which the risk level score is located according to the risk level score obtained in step S63, obtains a risk level corresponding to the risk level score range according to the corresponding relation, and uses the risk level as a risk assessment result of the object to be assessed.
In a specific embodiment, the preset risk levels include four levels: slight risk, general risk, medium risk and serious risk. A risk level score below 30 corresponds to slight risk; a score of at least 30 and below 60 corresponds to general risk; a score of at least 60 and below 80 corresponds to medium risk; and a score of 80 or above corresponds to serious risk.

For example, if the risk level score obtained in step S63 is 70, the risk assessment result of the object to be assessed is determined to be medium risk.
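The score-to-level mapping of this embodiment can be sketched as follows.

```python
# Sketch of step S64, using the score ranges of the embodiment above.
def risk_level(score):
    if score < 30:
        return "slight risk"
    if score < 60:
        return "general risk"
    if score < 80:
        return "medium risk"
    return "serious risk"

print(risk_level(70))  # medium risk
```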
In this embodiment, whether an index feature carries risk is determined by judging whether its feature data to be identified meets the requirement of its normal numerical range; for a risky index feature, its risk score is calculated from the proportion by which the feature data exceeds the normal range; the risk scores of the risky index features are weighted to obtain the risk level score, and the risk level is determined from that score. The risk assessment result of the object to be evaluated is thus obtained from comprehensive judgment, analysis and quantification of index features of different dimensions, giving higher accuracy, and converting the quantified score into a corresponding risk level makes the result more intuitive.
In one embodiment, in step S63, the weighted calculation performed on the risk scores of the risky index features to obtain the risk level score of the object to be evaluated specifically includes the following steps:
Calculating the risk level score of the object to be evaluated according to the following formula (1):

P = k_1·x_1 + k_2·x_2 + … + k_n·x_n + β    (1)

wherein P is the risk level score of the object to be evaluated, k_i is the preset weight of the i-th risky index feature, x_i is the risk score of the i-th risky index feature, n is the number of risky index features, and β is a preset adjustment parameter.
For example, if the risky index features are the blink feature and the upward eye movement feature, with preset weights 0.6 and 0.2 respectively, the preset adjustment parameter is 10, the risk score of the blink feature is 55 and that of the upward eye movement feature is 70, then the risk level score calculated by the above formula is 55×0.6 + 70×0.2 + 10 = 57.
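A minimal sketch of formula (1), using the numbers from the example above:

```python
# Weighted sum of the risky index features' risk scores plus beta.
def risk_level_score(weights, scores, beta):
    return sum(k * x for k, x in zip(weights, scores)) + beta

print(risk_level_score([0.6, 0.2], [55, 70], beta=10))  # 57.0
```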
In this embodiment, the risk level score calculated by formula (1) integrates the weight and risk score of every risky index feature, so it can accurately and comprehensively reflect the micro-expression state of the object to be evaluated, and the risk assessment result obtained from it has higher accuracy.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention.
In an embodiment, a risk assessment device based on micro-expressions is provided, and the risk assessment device based on micro-expressions corresponds to the risk assessment method based on micro-expressions in the above embodiment one by one. As shown in fig. 6, the microexpressive based risk assessment apparatus includes a first acquisition module 61, a first recognition module 62, a baseline establishing module 63, a second acquisition module 64, a second recognition module 65, and a risk assessment module 66. The functional modules are described in detail as follows:
a first obtaining module 61, configured to obtain first video data of an object to be evaluated, where the first video data is video data of an object to be evaluated that answers a preset basic question;
the first recognition module 62 is configured to perform microexpressive recognition on the first video data using a preset microexpressive recognition model to obtain first microexpressive data of the object to be evaluated, where the preset microexpressive recognition model recognizes preset index features and feature data of each index feature from the first video data, and the first microexpressive data includes feature data of each index feature;
a baseline establishing module 63, configured to establish facial microexpressive baseline data of the object to be evaluated according to feature data of each index feature in the first microexpressive data, where the facial microexpressive baseline data includes a normal numerical range of each index feature;
A second obtaining module 64, configured to obtain second video data of the object to be evaluated, where the second video data is video data of the object to be evaluated that answers a preset evaluation question;
a second recognition module 65, configured to perform microexpressive recognition on the second video data using a microexpressive recognition model to obtain second microexpressive data of the object to be evaluated, where the preset microexpressive recognition model recognizes preset index features and feature data to be recognized of each index feature from the second video data, and the second microexpressive data includes feature data to be recognized of each index feature;
the risk assessment module 66 is configured to perform risk assessment on feature data to be identified of each index feature in the second microexpressive data by using a normal numerical range of each index feature in the facial microexpressive baseline data, so as to obtain a risk assessment result of the object to be assessed.
Further, the preset micro-expression recognition model includes a face detection model, an emotion discrimination model, a head gesture recognition model, a blink detection model, and an iris edge detection model, the preset index features include muscle action features, head gesture features, blink features, and eye movement change features, and the first recognition module 62 includes:
The frame image extraction sub-module is used for extracting a preset number of video frame images from the first video data according to a preset extraction mode;
the face detection sub-module is used for carrying out face detection on the video frame image by using a face detection model and extracting a face picture in the video frame image;
the emotion judging sub-module is used for inputting the face picture into the emotion judging model, carrying out micro expression recognition and obtaining characteristic data of each muscle action characteristic of the object to be evaluated;
the head gesture recognition sub-module is used for inputting the face picture into the head gesture recognition model to perform head gesture recognition so as to obtain feature data of each head gesture feature of the object to be evaluated;
the blink detection submodule is used for inputting the face picture into the blink detection model, detecting the blink times to obtain blink frequency, and taking the blink frequency as characteristic data of blink characteristics of the object to be evaluated;
the eye movement change detection sub-module is used for inputting the face picture into the iris edge detection model to perform eye movement change detection, so as to obtain characteristic data of each eye movement change characteristic of the object to be evaluated;
the micro-expression data generation sub-module is used for taking the feature data of each muscle action feature, each head gesture feature, the blink feature and each eye movement change feature as the first micro-expression data.
Further, the baseline establishing module 63 includes:
the coefficient acquisition sub-module is used for acquiring a preset minimum proportion coefficient and a preset maximum proportion coefficient corresponding to each index feature in the first microexpressive data;
the data expansion sub-module is used for calculating a first product between the characteristic data of each index characteristic and a preset minimum proportionality coefficient corresponding to the index characteristic and a second product between the characteristic data of the index characteristic and a preset maximum proportionality coefficient corresponding to the index characteristic;
and the baseline determination submodule is used for determining a value range between a first product and a second product corresponding to each index feature as a normal value range of each index feature to obtain facial microexpressive baseline data of the object to be evaluated.
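In other words, the baseline simply scales each calibrated feature value by a per-feature pair of proportion coefficients. A minimal sketch, assuming feature data and coefficients are supplied as dictionaries keyed by index-feature name (all names here are illustrative):

```python
def build_baseline(first_data, min_coef, max_coef):
    """Map each index feature to its normal numerical range [first product, second product]."""
    baseline = {}
    for feature, value in first_data.items():
        low = value * min_coef[feature]   # first product: value x preset minimum coefficient
        high = value * max_coef[feature]  # second product: value x preset maximum coefficient
        baseline[feature] = (low, high)   # normal numerical range of this index feature
    return baseline

# e.g. a calibrated blink frequency of 0.4 with coefficients (0.5, 1.5)
# yields a normal range of (0.2, 0.6)
```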
Further, the risk assessment module 66 includes:
the index risk determination sub-module is used for determining, for each index feature, that the index feature of the object to be evaluated is risk-free if the feature data to be identified of the index feature meets the requirement of the normal numerical range of the index feature, and that the index feature of the object to be evaluated is at risk if the feature data to be identified of the index feature does not meet the requirement of the normal numerical range of the index feature;
the risk score calculation sub-module is used for calculating, for each at-risk index feature, the proportion by which the feature data to be identified of the index feature exceeds the normal numerical range of the index feature, and determining the risk score of the index feature of the object to be evaluated according to the proportion;
the weighting calculation sub-module is used for performing weighting calculation on the risk scores of the at-risk index features of the object to be evaluated to obtain the risk level score of the object to be evaluated;
the level determination sub-module is used for determining, according to the preset correspondence between risk level score ranges and risk levels, the risk level score range in which the risk level score of the object to be evaluated falls, and determining the risk level corresponding to that range as the risk assessment result of the object to be evaluated.
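These sub-modules boil down to a range check, an exceedance ratio, a weighted score, and a lookup. A minimal sketch of that flow, assuming the risk score is derived from the exceedance proportion by clamping it into [0, 1] (the patent leaves that mapping unspecified) and using the β-scaled weighted sum from the formula below:

```python
def assess_risk(second_data, baseline, weights, beta, level_ranges):
    """Range check -> exceedance ratio -> weighted score -> risk level."""
    risk_scores = {}
    for feature, value in second_data.items():
        low, high = baseline[feature]
        if low <= value <= high:
            continue  # within the normal numerical range: this feature is risk-free
        # proportion by which the value exceeds the normal range (assumes low, high > 0)
        ratio = (low - value) / low if value < low else (value - high) / high
        risk_scores[feature] = min(ratio, 1.0)  # assumed mapping: clamp into [0, 1]

    # weighted risk level score over the at-risk index features (formula below)
    p = beta * sum(weights[f] * x for f, x in risk_scores.items())

    # preset correspondence between risk level score ranges and risk levels
    for (lo, hi), level in level_ranges:
        if lo <= p < hi:
            return level
    return None  # score falls outside every preset range
```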
Further, the weighting calculation sub-module is specifically configured to:
calculate the risk level score of the object to be evaluated according to the following formula:

P = β · Σᵢ₌₁ⁿ kᵢ·xᵢ

wherein P is the risk level score of the object to be evaluated, kᵢ is the preset weight of the i-th at-risk index feature, xᵢ is the risk score of the i-th at-risk index feature, n is the number of at-risk index features, and β is a preset adjusting parameter.
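A worked example, assuming β multiplicatively scales the weighted sum of the per-feature risk scores (illustrative numbers only):

```python
beta = 1.2      # preset adjusting parameter (illustrative value)
k = [0.6, 0.4]  # preset weights k_i of the two at-risk index features
x = [0.8, 0.5]  # risk scores x_i of the two at-risk index features

# P = beta * sum(k_i * x_i)
P = beta * sum(ki * xi for ki, xi in zip(k, x))
print(round(P, 3))  # 1.2 * (0.48 + 0.20) = 0.816
```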
For specific limitations of the micro-expression based risk assessment device, reference may be made to the above limitations of the micro-expression based risk assessment method, which are not repeated here. Each module in the above micro-expression based risk assessment device may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in, or independent of, a processor in the computer device in hardware form, or may be stored in a memory of the computer device in software form, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, and the internal structure of which may be as shown in fig. 7. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is used for storing first video data and second video data of an object to be evaluated. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by a processor, implements a microexpressive based risk assessment method.
In an embodiment, a computer device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the computer program to implement the steps of the micro-expression based risk assessment method in the above embodiment, such as steps S1 to S6 shown in fig. 2. Alternatively, the processor may implement the functions of the modules/units of the microexpressive based risk assessment device in the above embodiment when executing the computer program, for example, the functions of the modules 61 to 66 shown in fig. 6. To avoid repetition, no further description is provided here.
In an embodiment, a computer readable storage medium is provided, on which a computer program is stored, where the computer program when executed by a processor implements the micro-expression based risk assessment method in the above method embodiment, or where the computer program when executed by a processor implements the functions of each module/unit in the micro-expression based risk assessment device in the above device embodiment. To avoid repetition, no further description is provided here.
Those skilled in the art will appreciate that all or part of the methods described above may be implemented by a computer program stored on a non-transitory computer-readable storage medium, which, when executed, may include the steps of the method embodiments described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include Read-Only Memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory may include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division into functional units and modules is illustrated by example; in practical applications, the above functions may be allocated to different functional units and modules as required, i.e., the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above.
The above embodiments are intended only to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and replacements do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention and are intended to be included within the protection scope of the present invention.
Claims (10)
1. A risk assessment method based on micro-expressions, characterized in that the risk assessment method comprises:
acquiring first video data of an object to be evaluated, wherein the first video data is video data of a preset basic problem answered by the object to be evaluated;
performing microexpressive recognition on the first video data by using a preset microexpressive recognition model to obtain first microexpressive data of the object to be evaluated, wherein the preset microexpressive recognition model recognizes preset index features and feature data of each index feature from the first video data, and the first microexpressive data comprises feature data of each index feature;
establishing facial micro-expression baseline data of the object to be evaluated according to the characteristic data of each index characteristic in the first micro-expression data, wherein the facial micro-expression baseline data comprises a normal numerical range of each index characteristic;
acquiring second video data of the object to be evaluated, wherein the second video data is video data of the object to be evaluated for answering a preset evaluation problem;
performing micro-expression recognition on the second video data by using the micro-expression recognition model to obtain second micro-expression data of the object to be evaluated, wherein the preset micro-expression recognition model recognizes the index features and the feature data to be recognized of each index feature from the second video data, and the second micro-expression data comprises the feature data to be recognized of each index feature;
and performing risk assessment on the feature data to be identified of each index feature in the second micro-expression data by using the normal numerical range of each index feature in the facial micro-expression baseline data to obtain a risk assessment result of the object to be assessed.
2. The risk assessment method according to claim 1, wherein the preset micro-expression recognition model includes a face detection model, an emotion discrimination model, a head posture recognition model, a blink detection model, and an iris edge detection model, the preset index features include a muscle action feature, a head posture feature, a blink feature, and an eye movement change feature, and the performing micro-expression recognition on the first video data using the preset micro-expression recognition model to obtain first micro-expression data of the subject to be assessed includes:
extracting a preset number of video frame images from the first video data according to a preset extraction mode;
using the face detection model to perform face detection on the video frame image, and extracting a face picture in the video frame image;
inputting the face picture into the emotion distinguishing model, and carrying out microexpressive recognition to obtain characteristic data of each muscle action characteristic of the object to be evaluated;
inputting the face picture into the head gesture recognition model to perform head gesture recognition to obtain feature data of each head gesture feature of the object to be evaluated;
inputting the face picture into the blink detection model, detecting blink times to obtain blink frequency, and taking the blink frequency as characteristic data of the blink characteristics of the object to be evaluated;
inputting the face picture into the iris edge detection model, and detecting eye movement change to obtain characteristic data of each eye movement change characteristic of the object to be evaluated;
and taking the characteristic data of each muscle action characteristic, the characteristic data of each head gesture characteristic, the characteristic data of the blink characteristic and the characteristic data of each eye movement change characteristic as the first micro-expression data.
3. The risk assessment method according to claim 1, wherein the establishing facial microexpressive baseline data of the subject to be assessed from the feature data of each of the index features in the first microexpressive data includes:
acquiring a preset minimum proportion coefficient and a preset maximum proportion coefficient corresponding to each index feature in the first micro-expression data;
for each index feature, calculating a first product between feature data of the index feature and the preset minimum proportionality coefficient corresponding to the index feature, and a second product between feature data of the index feature and the preset maximum proportionality coefficient corresponding to the index feature;
and determining a value range between the first product and the second product corresponding to each index feature as the normal value range of each index feature, and obtaining facial micro-expression baseline data of the object to be evaluated.
4. A risk assessment method according to any one of claims 1 to 3, wherein said performing risk assessment on feature data to be identified of each of said index features in said second microexpressive data using a normal numerical range of each of said index features in said facial microexpressive baseline data, to obtain a risk assessment result of said subject to be assessed, comprises:
for each index feature, if the feature data to be identified of the index feature meets the requirement of the normal numerical range of the index feature, confirming that the index feature of the object to be evaluated is risk-free; otherwise, confirming that the index feature of the object to be evaluated is at risk;
for each index feature, if the index feature is at risk, calculating the proportion by which the feature data to be identified of the index feature exceeds the normal numerical range of the index feature, and determining the risk score of the index feature of the object to be evaluated according to the proportion;
performing weighting calculation on the risk scores of the at-risk index features of the object to be evaluated to obtain the risk level score of the object to be evaluated;
according to the preset correspondence between risk level score ranges and risk levels, determining the risk level score range in which the risk level score of the object to be evaluated falls, and determining the risk level corresponding to that risk level score range as the risk assessment result.
5. The risk assessment method according to claim 4, wherein the performing weighting calculation on the risk scores of the at-risk index features of the object to be assessed to obtain the risk level score of the object to be assessed includes:
calculating the risk level score of the object to be evaluated according to the following formula:

P = β · Σᵢ₌₁ⁿ kᵢ·xᵢ

wherein P is the risk level score of the object to be evaluated, kᵢ is the preset weight of the i-th at-risk index feature, xᵢ is the risk score of the i-th at-risk index feature, n is the number of at-risk index features, and β is a preset adjusting parameter.
6. A microexpressive based risk assessment device, the risk assessment device comprising:
the first acquisition module is used for acquiring first video data of an object to be evaluated, wherein the first video data is video data of a preset basic question answered by the object to be evaluated;
the first recognition module is used for carrying out micro-expression recognition on the first video data by using a preset micro-expression recognition model to obtain first micro-expression data of the object to be evaluated, wherein the preset micro-expression recognition model is used for recognizing preset index features and feature data of each index feature from the first video data, and the first micro-expression data comprises feature data of each index feature;
the base line establishing module is used for establishing facial micro-expression base line data of the object to be evaluated according to the characteristic data of each index characteristic in the first micro-expression data, wherein the facial micro-expression base line data comprises a normal numerical range of each index characteristic;
the second acquisition module is used for acquiring second video data of the object to be evaluated, wherein the second video data is video data of the object to be evaluated for answering a preset evaluation problem;
the second recognition module is used for performing micro-expression recognition on the second video data by using the micro-expression recognition model to obtain second micro-expression data of the object to be evaluated, wherein the preset micro-expression recognition model recognizes the index features and the feature data to be recognized of each index feature from the second video data, and the second micro-expression data comprises the feature data to be recognized of each index feature;
and the risk assessment module is used for carrying out risk assessment on the feature data to be identified of each index feature in the second micro-expression data by using the normal numerical range of each index feature in the facial micro-expression baseline data to obtain a risk assessment result of the object to be assessed.
7. The risk assessment apparatus of claim 6, wherein the preset micro-expression recognition model comprises a face detection model, a mood discrimination model, a head gesture recognition model, a blink detection model, and an iris edge detection model, the preset index features comprise muscle action features, head gesture features, blink features, and eye movement variation features, and the first recognition module comprises:
the frame image extraction sub-module is used for extracting a preset number of video frame images from the first video data according to a preset extraction mode;
the face detection sub-module is used for carrying out face detection on the video frame image by using the face detection model and extracting a face picture in the video frame image;
the emotion judging sub-module is used for inputting the face picture into the emotion judging model to carry out micro-expression recognition so as to obtain the characteristic data of each muscle action characteristic of the object to be evaluated;
the head gesture recognition sub-module is used for inputting the face picture into the head gesture recognition model to perform head gesture recognition so as to obtain feature data of each head gesture feature of the object to be evaluated;
the blink detection submodule is used for inputting the face picture into the blink detection model, detecting blink times to obtain blink frequency, and taking the blink frequency as characteristic data of the blink characteristics of the object to be evaluated;
the eye movement change detection sub-module is used for inputting the face picture into the iris edge detection model to perform eye movement change detection so as to obtain characteristic data of each eye movement change characteristic of the object to be evaluated;
and the micro-expression data generation sub-module is used for taking the characteristic data of each muscle action characteristic, the characteristic data of each head gesture characteristic, the characteristic data of the blink characteristic and the characteristic data of each eye movement change characteristic as the first micro-expression data.
8. The risk assessment apparatus of claim 6, wherein the baseline establishing module comprises:
the coefficient acquisition sub-module is used for acquiring a preset minimum proportion coefficient and a preset maximum proportion coefficient corresponding to each index feature in the first micro-expression data;
the data expansion sub-module is used for calculating a first product between the characteristic data of the index characteristic and the preset minimum proportionality coefficient corresponding to the index characteristic and a second product between the characteristic data of the index characteristic and the preset maximum proportionality coefficient corresponding to the index characteristic aiming at each index characteristic;
and the baseline determination submodule is used for determining a value range between the first product and the second product corresponding to each index feature as the normal value range of each index feature, and obtaining facial micro-expression baseline data of the object to be evaluated.
9. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the microexpressive based risk assessment method according to any one of claims 1 to 5.
10. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the microexpressive based risk assessment method according to any of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811182176.8A CN109472206B (en) | 2018-10-11 | 2018-10-11 | Risk assessment method, device, equipment and medium based on micro-expressions |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109472206A (en) | 2019-03-15
CN109472206B (en) | 2023-07-07
Family
ID=65664675
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811182176.8A CN109472206B (en) | Risk assessment method, device, equipment and medium based on micro-expressions | 2018-10-11 | 2018-10-11
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109472206B (en) |
Families Citing this family (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110457432B (en) * | 2019-07-04 | 2023-05-30 | 平安科技(深圳)有限公司 | Interview scoring method, interview scoring device, interview scoring equipment and interview scoring storage medium |
CN110490424A (en) * | 2019-07-23 | 2019-11-22 | 阿里巴巴集团控股有限公司 | A kind of method and apparatus of the progress risk assessment based on convolutional neural networks |
CN111192150B (en) * | 2019-12-23 | 2023-07-25 | 中国平安财产保险股份有限公司 | Method, device, equipment and storage medium for processing vehicle danger-giving agent service |
CN111223078B (en) * | 2019-12-31 | 2023-09-26 | 富联裕展科技(河南)有限公司 | Method for determining flaw level and storage medium |
CN111339859A (en) * | 2020-02-17 | 2020-06-26 | 出门问问信息科技有限公司 | Multi-modal risk control method and device and computer-readable storage medium |
CN111540440B (en) * | 2020-04-23 | 2021-01-15 | 深圳市镜象科技有限公司 | Psychological examination method, device, equipment and medium based on artificial intelligence |
CN111553311A (en) * | 2020-05-13 | 2020-08-18 | 吉林工程技术师范学院 | Micro-expression recognition robot and control method thereof |
CN111708939B (en) * | 2020-05-29 | 2024-04-16 | 平安科技(深圳)有限公司 | Emotion recognition-based pushing method and device, computer equipment and storage medium |
CN112200462B (en) * | 2020-10-13 | 2024-04-26 | 中国银行股份有限公司 | Risk assessment method and risk assessment device |
TR202019387A2 (en) * | 2020-12-01 | 2021-04-21 | Tuerkiye Garanti Bankasi Anonim Sirketi | A SYSTEM THAT DETERMINES PERSONS WHO MAY TAKE SUSPICIOUS ACTIONS |
CN112633239A (en) * | 2020-12-31 | 2021-04-09 | 中国工商银行股份有限公司 | Micro-expression identification method and device |
CN112818754A (en) * | 2021-01-11 | 2021-05-18 | 广州番禺职业技术学院 | Learning concentration degree judgment method and device based on micro-expressions |
CN112819609A (en) * | 2021-02-24 | 2021-05-18 | 深圳前海微众银行股份有限公司 | Risk assessment method, apparatus, computer-readable storage medium, and program product |
CN113283978B (en) * | 2021-05-06 | 2024-05-10 | 北京思图场景数据科技服务有限公司 | Financial risk assessment method based on biological basis, behavioral characteristics and business characteristics |
CN113743335B (en) * | 2021-09-08 | 2024-03-22 | 平安科技(深圳)有限公司 | Method, device, computer and medium for risk identification of gaze data |
CN114255499A (en) * | 2021-12-20 | 2022-03-29 | 中国农业银行股份有限公司 | Face examination assisting method and device, electronic equipment and storage medium |
CN115050081B (en) * | 2022-08-12 | 2022-11-25 | 平安银行股份有限公司 | Expression sample generation method, expression recognition method and device and terminal equipment |
CN115798022B (en) * | 2023-02-07 | 2023-05-16 | 南京思优普信息科技有限公司 | Artificial intelligence recognition method based on feature extraction |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2016149063A (en) * | 2015-02-13 | 2016-08-18 | オムロン株式会社 | Emotion estimation system and emotion estimation method |
CN107480622A (en) * | 2017-08-07 | 2017-12-15 | 深圳市科迈爱康科技有限公司 | Micro- expression recognition method, device and storage medium |
CN108537160A (en) * | 2018-03-30 | 2018-09-14 | 平安科技(深圳)有限公司 | Risk Identification Method, device, equipment based on micro- expression and medium |
Also Published As
Publication number | Publication date |
---|---|
CN109472206A (en) | 2019-03-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109472206B (en) | Risk assessment method, device, equipment and medium based on micro-expressions | |
US10713532B2 (en) | Image recognition method and apparatus | |
CN109241868B (en) | Face recognition method, device, computer equipment and storage medium | |
CN108717663B (en) | Facial tag fraud judging method, device, equipment and medium based on micro expression | |
US10318797B2 (en) | Image processing apparatus and image processing method | |
TW202004637A (en) | Risk prediction method and apparatus, storage medium, and server | |
CN104680121B (en) | Method and device for processing face image | |
WO2022105118A1 (en) | Image-based health status identification method and apparatus, device and storage medium | |
CN106982196A (en) | A kind of abnormal access detection method and equipment | |
CN111626371A (en) | Image classification method, device and equipment and readable storage medium | |
CN113449704B (en) | Face recognition model training method and device, electronic equipment and storage medium | |
CN113139439B (en) | Online learning concentration evaluation method and device based on face recognition | |
CN113254491A (en) | Information recommendation method and device, computer equipment and storage medium | |
CN112418135A (en) | Human behavior recognition method and device, computer equipment and readable storage medium | |
CN116863522A (en) | Acne grading method, device, equipment and medium | |
CN111222374A (en) | Lie detection data processing method and device, computer equipment and storage medium | |
CN110147740B (en) | Face recognition method, device, equipment and storage medium | |
CN113971841A (en) | Living body detection method and device, computer equipment and storage medium | |
CN113536965A (en) | Method and related device for training face shielding recognition model | |
CN110298684B (en) | Vehicle type matching method and device, computer equipment and storage medium | |
CN106980818B (en) | Personalized preprocessing method, system and terminal for face image | |
CN113112185A (en) | Teacher expressive force evaluation method and device and electronic equipment | |
CN113901418B (en) | Video-based identity verification method and device, computer equipment and storage medium | |
CN110399818A (en) | A kind of method and apparatus of risk profile | |
CN111507555B (en) | Human body state detection method, classroom teaching quality evaluation method and related device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |