CN108537160A - Micro-expression-based risk identification method, apparatus, device, and medium - Google Patents


Info

Publication number: CN108537160A
Authority: CN (China)
Prior art keywords: identified, expression recognition, test, standard, result
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN201810292475.0A
Other languages: Chinese (zh)
Inventors: 戴磊, 张国辉
Current assignee: Ping An Technology (Shenzhen) Co., Ltd. (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Ping An Technology (Shenzhen) Co., Ltd.
Events:
    • Application filed by Ping An Technology (Shenzhen) Co., Ltd.
    • Priority to CN201810292475.0A (CN108537160A)
    • Priority to PCT/CN2018/094217 (WO2019184125A1)
    • Publication of CN108537160A
    • Legal status: Pending


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174: Facial expression recognition
    • G06V 40/175: Static expression
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F 18/24133: Distances to prototypes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 40/00: Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q 40/03: Credit; Loans; Processing thereof
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/443: Local feature extraction by matching or filtering
    • G06V 10/449: Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V 10/451: Biologically inspired filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V 10/454: Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82: Arrangements for image or video recognition or understanding using neural networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/40: Scenes; Scene-specific elements in video content
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174: Facial expression recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18: Eye characteristics, e.g. of the iris
    • G06V 40/193: Preprocessing; Feature extraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning
    • G06N 20/20: Ensemble learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/048: Activation functions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 5/00: Computing arrangements using knowledge-based models
    • G06N 5/01: Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound

Abstract

The invention discloses a micro-expression-based risk identification method, apparatus, device, and medium. The micro-expression-based risk identification method comprises: obtaining video data to be identified, the video data comprising at least two video frames to be identified; dividing the at least two video frames into a basic-question feature set and a sensitive-question feature set; inputting each video frame of the basic-question feature set into at least two pre-trained micro-expression recognition models for identification, to obtain corresponding standard expression recognition results; inputting each video frame of the sensitive-question feature set into the at least two pre-trained micro-expression recognition models for identification, to obtain corresponding test expression recognition results; and obtaining a risk identification result based on the standard expression recognition results and the test expression recognition results. The micro-expression-based risk identification method effectively solves the problems that current risk-control results have low reliability and a poor auxiliary effect.

Description

Micro-expression-based risk identification method, apparatus, device, and medium
Technical field
The present invention relates to the field of face recognition, and in particular to a micro-expression-based risk identification method, apparatus, device, and medium.
Background technology
In the financial industry, every disbursement of loan funds requires risk management and control (i.e., risk control) to determine whether a loan can be granted to the borrower. A key step in the traditional risk-control method of the financial industry is a face-to-face interview between the credit reviewer and the borrower, used to verify the accuracy of the materials the borrower provided during the loan application and thereby assess the borrower's credit risk. During this face-to-face communication, however, the reviewer may be distracted, or may know little about facial expressions, and thus overlook subtle expression changes on the borrower's face. These subtle changes may reflect the borrower's psychological activity during the exchange (for example, lying), so a risk-control conclusion that ignores the borrower's micro-expressions during the credit review has low reliability.
Invention content
Embodiments of the present invention provide a micro-expression-based risk identification method, apparatus, device, and medium, to solve the problem that risk-control results have low reliability because the borrower's micro-expression changes are ignored.
In a first aspect, an embodiment of the present invention provides a micro-expression-based risk identification method, comprising:
obtaining video data to be identified, the video data comprising at least two video frames to be identified;
dividing the at least two video frames into a basic-question feature set and a sensitive-question feature set;
inputting each video frame of the basic-question feature set into at least two pre-trained micro-expression recognition models for identification, to obtain corresponding standard expression recognition results;
inputting each video frame of the sensitive-question feature set into the at least two pre-trained micro-expression recognition models for identification, to obtain corresponding test expression recognition results;
obtaining a risk identification result based on the standard expression recognition results and the test expression recognition results.
In a second aspect, an embodiment of the present invention provides a micro-expression-based risk identification apparatus, comprising:
a video data acquisition module, configured to obtain video data to be identified, the video data comprising at least two video frames to be identified;
a video data division module, configured to divide the at least two video frames into a basic-question feature set and a sensitive-question feature set;
a standard expression recognition result acquisition module, configured to input each video frame of the basic-question feature set into at least two pre-trained micro-expression recognition models for identification, to obtain corresponding standard expression recognition results;
a test expression recognition result acquisition module, configured to input each video frame of the sensitive-question feature set into the at least two pre-trained micro-expression recognition models for identification, to obtain corresponding test expression recognition results;
a risk identification result acquisition module, configured to obtain a risk identification result based on the standard expression recognition results and the test expression recognition results.
In a third aspect, an embodiment of the present invention provides a computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the micro-expression-based risk identification method.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the micro-expression-based risk identification method according to the first aspect.
In the micro-expression-based risk identification method, apparatus, device, and medium provided by the embodiments of the present invention, video data to be identified comprising at least two video frames is obtained, and the frames are divided into a basic-question feature set and a sensitive-question feature set of equal proportion, which simplifies the subsequent statistics over the recognition results. Each frame of the basic-question feature set is then input into at least two pre-trained micro-expression recognition models for identification, yielding corresponding standard expression recognition results, and each frame of the sensitive-question feature set is input into the same pre-trained models, yielding corresponding test expression recognition results; this improves the accuracy of risk identification and strengthens the auxiliary effect. Finally, a risk identification result is obtained from the standard and test expression recognition results, achieving micro-expression-based risk identification and effectively assisting the credit reviewer in performing risk control on the borrower.
Description of the drawings
To describe the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings required by the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flowchart of the micro-expression-based risk identification method provided in Embodiment 1 of the present invention.
Fig. 2 is a detailed schematic diagram of step S10 in Fig. 1.
Fig. 3 is a detailed schematic diagram of step S30 in Fig. 1.
Fig. 4 is a detailed schematic diagram of step S40 in Fig. 1.
Fig. 5 is a detailed schematic diagram of step S50 in Fig. 1.
Fig. 6 is a functional block diagram of the micro-expression-based risk identification apparatus provided in Embodiment 2 of the present invention.
Fig. 7 is a schematic diagram of the computer device provided in Embodiment 4 of the present invention.
Specific implementation mode
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Embodiment 1
Fig. 1 shows a flowchart of the micro-expression-based risk identification method in this embodiment. The method is applicable to financial institutions such as banks, securities firms, and insurers, and can effectively assist a credit reviewer in performing risk control on a borrower, so as to determine whether a loan can be granted to that borrower. As shown in Fig. 1, the micro-expression-based risk identification method comprises the following steps:
S10: Obtain video data to be identified, the video data comprising at least two video frames to be identified.
Here, the video data to be identified is the video data obtained after pre-processing the original video data, and the original video data is the unprocessed video recording of the borrower during the credit review. The video data to be identified consists of at least two video frames to be identified.
In this embodiment, because the video data of the target customer's answers (i.e., the original video data) must later be divided before identification, the video data to be identified comprises at least two frames, so that the facial micro-expression features in each frame can be judged to determine whether the customer is lying, and risk management and control can be performed accordingly.
S20: Divide the at least two video frames into a basic-question feature set and a sensitive-question feature set.
Here, the basic-question feature set is the set of frames recorded while the target customer answers basic questions built from personal information, such as the ID card number, a relative's mobile number, and the home address. The sensitive-question feature set is the set of frames recorded while the customer answers questions used to judge whether the target customer presents a risk, such as the intended use of the loan, personal income, and willingness to repay.
Specifically, the division into the basic-question feature set and the sensitive-question feature set is made according to whether a question has a standard answer. Taking a bank as an example, if the target customer has pre-stored personal information (such as the ID card number, a relative's mobile number, and the home address) with financial institutions such as banks, securities firms, or insurers, then the frames recorded while the customer answers questions built from that information, for which standard answers are stored in advance, form the basic-question feature set. For information the target customer has not pre-stored with such institutions, no standard answer is considered to exist, and the frames recorded while the customer answers questions built from that information form the sensitive-question feature set.
In this embodiment, the basic-question feature set comprises at least one frame and the sensitive-question feature set comprises at least one frame, so that a judgment can later be made from the recognition results of both sets, achieving risk control and improving the accuracy of risk identification. The number of frames in the basic-question feature set equals the number of frames in the sensitive-question feature set, which simplifies the subsequent statistics over the recognition results.
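The division described above can be sketched in Python. The function name `split_frames` and the dictionary layout are assumptions for illustration and are not part of the patent; the sketch only shows the standard-answer criterion and the equal-size trimming this step describes.

```python
def split_frames(frames_per_question, has_standard_answer):
    """Divide per-question frame lists into a basic-question set (questions
    with a pre-stored standard answer) and a sensitive-question set, then
    trim both to equal size so later statistics are easy to compare."""
    basic, sensitive = [], []
    for qid, frames in frames_per_question.items():
        target = basic if has_standard_answer[qid] else sensitive
        target.extend(frames)
    n = min(len(basic), len(sensitive))  # keep the two sets the same length
    return basic[:n], sensitive[:n]
```

For example, frames answering a question with a stored standard answer (an ID-number question, say) land in the first set, frames answering an income question in the second.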
S30: Input each video frame of the basic-question feature set into at least two pre-trained micro-expression recognition models for identification, to obtain corresponding standard expression recognition results.
Here, a micro-expression recognition model is a pre-trained model for obtaining the target customer's micro-expression features, and a standard expression recognition result is the recognition result obtained by applying the micro-expression recognition models to each frame of the basic-question feature set. Specifically, each frame of the basic-question feature set is input into the at least two pre-trained micro-expression recognition models, and the corresponding standard expression recognition result output by each model is obtained. The standard expression recognition results reflect, to a certain extent, the customer's micro-expressions when telling the truth, and can serve as the evaluation baseline for judging whether the customer tells the truth when answering sensitive questions. In this embodiment, inputting each frame of the basic-question feature set into at least two micro-expression recognition models to obtain the corresponding standard expression recognition results improves the accuracy of risk identification and strengthens the auxiliary effect.
S40: Input each video frame of the sensitive-question feature set into at least two pre-trained micro-expression recognition models for identification, to obtain corresponding test expression recognition results.
Here, a test expression recognition result is the recognition result obtained by applying the micro-expression recognition models to each frame of the sensitive-question feature set. Specifically, each frame of the sensitive-question feature set is input into the at least two pre-trained micro-expression recognition models, and the corresponding test expression recognition result output by each model is obtained. The test expression recognition results reflect, to a certain extent, the customer's micro-expressions when telling the truth or lying while answering sensitive questions. In this embodiment, inputting each frame of the sensitive-question feature set into at least two micro-expression recognition models to obtain the corresponding test expression recognition results improves the accuracy of risk identification and strengthens the auxiliary effect.
S50: Obtain a risk identification result based on the standard expression recognition results and the test expression recognition results.
Specifically, the standard expression recognition results corresponding to the frames of the basic-question feature set are aggregated as baseline data, and the test expression recognition results corresponding to the frames of the sensitive-question feature set are aggregated as test data. The baseline data is then compared with the test data: the multiple by which the test data deviates from the baseline data is compared against a preset threshold to obtain a risk grade, from which the risk identification result is obtained.
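The baseline-versus-test comparison this step describes might look like the following sketch. The indicator values, the two threshold numbers, and the three grade labels are illustrative assumptions; the patent only specifies that the deviation multiple is compared against a preset threshold.

```python
def risk_grade(baseline_counts, test_counts, thresholds=(0.5, 1.5)):
    """Aggregate baseline and test recognition results (e.g. per-frame
    counts of lie-related indicators), compute the multiple by which the
    test data deviates from the baseline, and map it to a risk grade."""
    base_avg = sum(baseline_counts) / len(baseline_counts)
    test_avg = sum(test_counts) / len(test_counts)
    if base_avg == 0:
        return "high" if test_avg > 0 else "low"
    deviation = abs(test_avg - base_avg) / base_avg  # multiple of difference
    if deviation < thresholds[0]:
        return "low"
    if deviation < thresholds[1]:
        return "medium"
    return "high"
```

Because the two feature sets hold equal numbers of frames (step S20), the per-frame averages are directly comparable.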
In this embodiment, video data to be identified comprising at least two video frames is obtained, and the frames are divided into a basic-question feature set and a sensitive-question feature set of equal proportion, which simplifies the subsequent statistics over the recognition results. Each frame of the basic-question feature set is then input into at least two pre-trained micro-expression recognition models for identification to obtain corresponding standard expression recognition results, and each frame of the sensitive-question feature set is input into the same pre-trained models to obtain corresponding test expression recognition results, which improves the accuracy of risk identification and strengthens the auxiliary effect. Finally, a risk identification result is obtained from the standard and test expression recognition results, achieving micro-expression-based risk identification and effectively assisting the credit reviewer in performing risk control on the borrower.
In a specific embodiment, as shown in Fig. 2, step S10 of obtaining the video data to be identified specifically comprises the following steps:
S11: Obtain original video data.
Here, the original video data is the unprocessed video recording of the borrower during the credit review. Specifically, the credit reviewer may hold a video chat with the target customer (i.e., the borrower) and ask preset questions during the chat, so as to obtain the video data of the customer's answers, i.e., the original video data.
S12: Perform framing and normalization on the original video data to obtain the video data to be identified.
Specifically, framing refers to dividing the original video data at preset time intervals to obtain at least one video frame to be identified. Normalization is a way of simplifying computation: a dimensional expression is transformed into a dimensionless one and becomes a scalar. For example, since the facial region of the target customer in the original video data is needed to extract micro-expression features, the pixels of each frame obtained after framing are normalized to 260x260, unifying the pixel dimensions so that each frame can subsequently be identified.
In this embodiment, the target customer is questioned by video chat to obtain the video data of the customer's answers, i.e., the original video data, without the credit reviewer conducting a face-to-face exchange, which makes the credit review intelligent and saves labor cost. The original video data is then framed and normalized to unify the pixel dimensions of the frames so that each frame can subsequently be identified, improving the accuracy of risk identification.
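In a real pipeline the framing and 260x260 normalization would typically use a video library such as OpenCV (`cv2.VideoCapture` plus `cv2.resize`). The stdlib-only sketch below shows the same two operations on a toy greyscale frame, with nearest-neighbour sampling standing in for the normalization step; all names are illustrative assumptions.

```python
def frame_indices(total_frames, fps, interval_s):
    """Framing: select one frame per preset time interval."""
    step = max(1, int(fps * interval_s))
    return list(range(0, total_frames, step))

def normalize_frame(img, size=260):
    """Normalization: rescale a frame (a list of pixel rows) to
    size x size pixels by nearest-neighbour sampling, so that every
    frame fed to the recognition models has unified dimensions."""
    h, w = len(img), len(img[0])
    return [[img[r * h // size][c * w // size] for c in range(size)]
            for r in range(size)]
```

For a 25 fps recording sampled once per second, `frame_indices(100, 25, 1.0)` keeps every 25th frame.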
In a specific embodiment, the micro-expression recognition models in step S30 include at least two of a face detection model, a feature-point detection model, an emotion detection model, a head-pose detection model, a blink detection model, and an iris-edge detection model.
Here, the face detection model is a model for extracting the face picture from each frame to be identified. The feature-point detection model is a model for identifying the facial feature points in each frame. The head-pose detection model is a model for identifying the direction in which the target customer's head is turned in each frame. The blink detection model is a model for identifying whether the target customer blinks in each frame. The iris-edge detection model is a model for reflecting the eye-movement situation of the target customer in each frame. In this embodiment, the basic-question feature set and the sensitive-question feature set are each input into the face detection, feature-point detection, emotion detection, head-pose detection, blink detection, and iris-edge detection models for identification, so as to obtain the target customer's standard expression recognition results and test expression recognition results, and micro-expression-based risk identification is achieved based on those results.
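Running one frame through several independent detection models and collecting their named outputs, as this embodiment describes, can be wired up roughly as follows. The trivial stand-in "models" exist only so the wiring can be exercised; the patent does not disclose the actual model internals.

```python
def recognize_frame(frame, models):
    """Apply each pre-trained detection model to one frame and collect
    its output under the model's name."""
    return {name: model(frame) for name, model in models.items()}

# Trivial stand-ins for the real pre-trained models (assumed outputs):
stub_models = {
    "emotion": lambda f: "calm",
    "head_pose": lambda f: "forward",
    "blink": lambda f: 1,  # 1 = not blinking, per step S35
}
```

Each per-frame result record can then be aggregated over the basic-question and sensitive-question sets before the comparison in step S50.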
In a specific embodiment, as shown in Fig. 3, step S30 of inputting each frame of the basic-question feature set into at least two pre-trained micro-expression recognition models for identification, to obtain corresponding standard expression recognition results, specifically comprises the following steps:
S31: Input each video frame of the basic-question feature set into the face detection model for identification, to obtain a standard face picture.
Here, the standard face picture is the face picture obtained by inputting the basic-question feature set into the face detection model for identification. Specifically, each frame of the basic-question feature set is input into the face detection model, which detects the face location in the frame and extracts the face picture, i.e., the standard face picture, providing technical support for the input of the following models.
S32: Input the standard face picture into the feature-point detection model for identification, to obtain standard facial feature points.
Here, the standard facial feature points are the feature coordinate points obtained by inputting the standard face picture into the feature-point detection model for identification. The facial feature points include five points: the left eye, the right eye, the nose, the left mouth corner, and the right mouth corner. Specifically, the standard face picture is input into the feature-point detection model, which obtains the coordinate positions of the five feature points, providing technical support for the input of the subsequent iris-edge detection model.
S33: Input the standard face picture into the emotion detection model for identification, to obtain a first standard expression recognition result.
Here, the first standard expression recognition result is the emotion recognition result obtained by inputting the standard face picture into the emotion detection model for identification. The emotion detection model outputs the probability values of seven emotions for the standard face picture: calm, angry, disgusted, fearful, happy, sad, and surprised. Specifically, the standard face picture is input into the emotion detection model to obtain the probability values of the seven emotions; if the probability value of a certain emotion exceeds the corresponding preset threshold, that emotion is taken as the first standard expression recognition result for the standard face picture, providing technical support for the subsequent risk control based on the first standard expression recognition result.
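The per-emotion thresholding described here can be sketched as below. The 0.5 threshold used in the example and the function name are assumptions; the patent only says each emotion has its own preset threshold.

```python
EMOTIONS = ("calm", "angry", "disgusted", "fearful", "happy", "sad", "surprised")

def first_recognition_result(probs, thresholds):
    """Given the emotion model's seven probability values for one face
    picture, return the emotion whose probability exceeds its preset
    threshold, or None when no emotion is confident enough."""
    best = max(range(len(EMOTIONS)), key=lambda i: probs[i])
    if probs[best] > thresholds[EMOTIONS[best]]:
        return EMOTIONS[best]
    return None
```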
S34: Input the standard face picture into the head-pose model for identification, to obtain a second standard expression recognition result.
Here, the second standard expression recognition result consists of the probability values of the head-offset directions obtained by inputting the standard face picture into the head-pose model for identification. The head-offset direction is expressed along six directions: up, down, left, right, forward, and backward. Specifically, the standard face picture is input into the head-pose model to obtain the probability values of the head-offset directions; if the probability that the head is turned in a certain direction exceeds the corresponding preset threshold, the current face is determined to be offset in that direction. In this embodiment, the target customer's head pose reflects well the direction of the customer's line of sight or attention. For example, if the customer makes an abrupt head movement when a question is asked (such as suddenly looking back or suddenly tilting the head), the customer may be lying. Obtaining the target customer's head pose therefore provides technical support for the subsequent risk control and improves its accuracy.
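The patent treats an abrupt head movement during a question as a possible lying cue. A crude proxy, assuming the head-pose model has already emitted one of the six direction labels per frame, is to count sudden direction switches; the function name and the switch-counting heuristic are illustrative assumptions, not the patent's stated method.

```python
def abrupt_head_moves(directions):
    """Count frame-to-frame switches of the detected head-offset
    direction (up/down/left/right/forward/backward). Frequent sudden
    switches may flag an abrupt movement worth reviewing."""
    return sum(1 for a, b in zip(directions, directions[1:]) if a != b)
```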
S35: Input the standard face picture into the blink detection model for recognition to obtain the third standard expression recognition result.
Wherein, the third standard expression recognition result is the recognition result reflecting eye blinking obtained by inputting the standard face picture into the blink detection model for recognition. Specifically, the standard face picture is input into the blink detection model for recognition, and the blink detection model outputs 0 (blinking) or 1 (not blinking), indicating whether the target customer blinks in that frame of the video image to be identified. Subsequently counting the number of blinks can reflect the current psychological state of the target customer (such as nervousness), assists the subsequent risk assessment of the target customer, and further improves the accuracy of risk control.
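The subsequent blink counting could be sketched as below, given the per-frame 0/1 outputs described above; grouping consecutive 0-frames into a single blink event is an assumption, since the embodiment does not specify how frames are aggregated:

```python
def count_blinks(frame_flags):
    """Count blink events from per-frame model outputs
    (0 = blinking, 1 = not blinking), treating each run of
    consecutive 0-frames as one blink."""
    blinks, in_blink = 0, False
    for flag in frame_flags:
        if flag == 0 and not in_blink:
            blinks += 1          # a new blink starts
            in_blink = True
        elif flag == 1:
            in_blink = False     # eyes open again
    return blinks

print(count_blinks([1, 1, 0, 0, 1, 0, 1]))  # -> 2
```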
S36: Input the standard face feature points into the iris edge detection model for recognition to obtain the fourth standard expression recognition result.
Wherein, the fourth standard expression recognition result is the recognition result reflecting eye movement obtained by inputting the standard face feature points into the iris edge detection model for recognition. Specifically, before the standard face feature points are input into the iris edge detection model for recognition, the eye region is first cropped out based on the eye coordinate points among the face feature points; the iris edge detection model then detects this eye region to obtain the iris edge positions. The centre of the closed region formed by the iris edge points is the precise position of the eye centre, and tracking the change of the eye centre position relative to the eye socket position (the eye socket position corresponding to the eyeball centre coordinate point is obtained by the feature point detection model) yields the eye movement changes. The eye movement situation thus obtained provides good technical support for subsequent risk control.
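The centre-tracking step above can be sketched as follows; approximating the eye centre by the centroid of the iris edge points, and the point/coordinate format, are assumptions for illustration:

```python
# Sketch of tracking the eye centre relative to the eye socket position.
def iris_center(edge_points):
    """Centroid of the iris edge points, taken here as the eye centre."""
    xs = [p[0] for p in edge_points]
    ys = [p[1] for p in edge_points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def eye_movement(edge_points, socket_center):
    """Offset of the eye centre relative to the eye socket position."""
    cx, cy = iris_center(edge_points)
    return (cx - socket_center[0], cy - socket_center[1])
```

Tracking this offset across frames gives the eye movement change the text refers to.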
Wherein, the standard expression recognition result includes the first standard expression recognition result, the second standard expression recognition result, the third standard expression recognition result and the fourth standard expression recognition result.
In this embodiment, each frame of the video image to be identified in the basic question feature set is first input into the face detection model for recognition to obtain the standard face picture, so as to remove interference from other factors and improve the accuracy of risk identification. The standard face picture is then input into the feature point detection model for recognition to obtain the five feature points of the face, i.e. the standard face feature points, so that the standard face feature points can be input into the iris edge detection model for recognition to obtain the eye movement situation of the target customer (i.e. the fourth standard expression recognition result); this eye movement situation provides good technical support for subsequent risk control. The standard face picture is input into the emotion detection model for recognition to obtain the probability value of a certain emotion of the target customer (i.e. the first standard expression recognition result), providing technical support for subsequently performing risk control based on the first standard expression recognition result. The standard face picture is input into the head pose model for recognition to obtain the offset direction of the head pose (i.e. the second standard expression recognition result); the head pose of the target customer reflects well the change of the target customer's gaze direction or attention direction, providing technical support for subsequent risk control and improving its accuracy. The standard face picture is input into the blink detection model for recognition to obtain the corresponding blinking situation of the target customer (i.e. the third standard expression recognition result), so that subsequently counting the number of blinks can reflect the current psychological state of the target customer (such as nervousness), assisting the subsequent risk assessment of the target customer and further improving the accuracy of risk control.
In a specific embodiment, as shown in Figure 4, step S40, i.e. inputting each frame of the video image to be identified in the sensitive question feature set into at least two pre-trained micro-expression recognition models for recognition to obtain the corresponding test expression recognition results, specifically includes the following steps:
S41: Input each frame of the video image to be identified in the sensitive question feature set into the face detection model for recognition to obtain a test face picture.
Wherein, the test face picture is the face picture obtained by inputting the sensitive question feature set into the face detection model for recognition. Specifically, each frame of the video image to be identified in the sensitive question feature set is input into the face detection model, the face position in each frame of the video image to be identified is detected, and the face picture is extracted as the test face picture, providing technical support for the input of the subsequent models.
S42: Input the test face picture into the feature point detection model for recognition to obtain the test face feature points.
Wherein, the test face feature points are the feature coordinate points obtained by inputting the test face picture into the feature point detection model for recognition. The test face feature points include five feature points: the left eye, right eye, nose, left mouth corner and right mouth corner. Specifically, the test face picture is input into the feature point detection model for recognition, and the feature point detection model obtains the coordinate positions of these five feature points, providing technical support for the input of the subsequent iris edge detection model.
S43: Input the test face picture into the emotion detection model for recognition to obtain the first test expression recognition result.
Wherein, the first test expression recognition result is the emotion recognition result obtained by inputting the test face picture into the emotion detection model for recognition. The emotion detection model can output probability values for the seven emotions corresponding to the test face picture. The seven emotions are calm, angry, disgusted, afraid, happy, sad and surprised. Specifically, the test face picture is input into the emotion detection model for recognition to obtain the probability values of the seven emotions corresponding to the test face picture; if the probability value of an emotion exceeds the corresponding preset threshold, that emotion is taken as the first test expression recognition result for the test face picture, providing technical support for subsequently performing risk control based on the first test expression recognition result.
S44: Input the test face picture into the head pose model for recognition to obtain the second test expression recognition result.
Wherein, the second test expression recognition result is the probability values of the head offset directions obtained by inputting the test face picture into the head pose model for recognition. The head offset direction is expressed in six directions: up, down, left, right, forward and backward. Specifically, the test face picture is input into the head pose model for recognition to obtain the probability values of the head offset directions; if the probability that the head is biased toward one direction exceeds the corresponding preset threshold, it is determined that the face is currently offset in that direction. In this embodiment, the head pose of the target customer reflects the gaze direction or attention direction of the target customer well, providing technical support for subsequent risk control and improving the accuracy of risk control.
S45: Input the test face picture into the blink detection model for recognition to obtain the third test expression recognition result.
Wherein, the third test expression recognition result is the recognition result reflecting eye blinking obtained by inputting the test face picture into the blink detection model for recognition. Specifically, the test face picture is input into the blink detection model for recognition, and the blink detection model outputs 0 (blinking) or 1 (not blinking), indicating whether the target customer blinks in that frame of the video image to be identified. Subsequently counting the number of blinks can reflect the current psychological state of the target customer (such as nervousness), assists the subsequent risk assessment of the target customer, and further improves the accuracy of risk control.
S46: Input the test face feature points into the iris edge detection model for recognition to obtain the fourth test expression recognition result.
Wherein, the fourth test expression recognition result is the recognition result reflecting eye movement obtained by inputting the test face feature points into the iris edge detection model for recognition. Specifically, before the test face feature points are input into the iris edge detection model for recognition, the eye region is first cropped out based on the eye coordinate points among the face feature points; the iris edge detection model then detects this eye region to obtain the iris edge positions. The centre of the closed region formed by the iris edge points is the precise position of the eye centre, and tracking the change of the eye centre position relative to the eye socket position (the eye socket position corresponding to the eyeball centre coordinate point is obtained by the feature point detection model) yields the eye movement changes. The eye movement situation thus obtained provides good technical support for subsequent risk control.
Wherein, the test expression recognition result includes the first test expression recognition result, the second test expression recognition result, the third test expression recognition result and the fourth test expression recognition result.
In this embodiment, each frame of the video image to be identified in the sensitive question feature set is first input into the face detection model for recognition to obtain the test face picture, so as to remove interference from other factors and improve the accuracy of risk identification. The test face picture is then input into the feature point detection model for recognition to obtain the five feature points of the face, i.e. the test face feature points, so that the test face feature points can be input into the iris edge detection model for recognition to obtain the eye movement situation of the target customer (i.e. the fourth test expression recognition result); this eye movement situation provides good technical support for subsequent risk control. The test face picture is input into the emotion detection model for recognition to obtain the probability value of a certain emotion of the target customer (i.e. the first test expression recognition result), providing technical support for subsequently performing risk control based on the first test expression recognition result. The test face picture is input into the head pose model for recognition to obtain the offset direction of the head pose (i.e. the second test expression recognition result); the head pose of the target customer reflects well the change of the target customer's gaze direction or attention direction, providing technical support for subsequent risk control and improving its accuracy. The test face picture is input into the blink detection model for recognition to obtain the corresponding blinking situation of the target customer (i.e. the third test expression recognition result), so that subsequently counting the number of blinks can reflect the current psychological state of the target customer (such as nervousness), assisting the subsequent risk assessment of the target customer and further improving the accuracy of risk control.
In a specific embodiment, in step S30 or step S40, the face detection model is trained using the CascadeCNN network.
Wherein, CascadeCNN (face detection) is a deep convolutional network implementation of the classical Viola-Jones method and is a face detection method with a faster detection speed. Viola-Jones is a face detection framework. In this embodiment, pictures with annotated face positions are trained using the CascadeCNN method to obtain the face detection model, improving the recognition efficiency of the face detection model.
Specifically, the steps of training the pictures with annotated face positions using the CascadeCNN method are as follows. In the first training stage, images are scanned with the 12-net network and more than 90% of the windows are rejected; the remaining windows are input into the 12-calibration-net network for correction, and the corrected image is then processed with the non-maximum suppression algorithm to eliminate highly overlapping windows. Here, 12-net slides a 12 x 12 detection window with a stride of 4 over a W (width) x H (height) picture to obtain detection windows. Non-maximum suppression is a method widely used in fields such as object detection and localisation; the essence of the algorithm is to search for local maxima and suppress non-maximum elements. Then, using the 12-net network, face detection is performed on the training data: windows judged non-face (not exceeding the preset threshold) are taken as negative samples and windows of all real faces (exceeding the preset threshold) are taken as positive samples, to obtain the corresponding detection windows. In the second training stage, the images are processed with the 24-net and 24-calibration-net networks; 12-net and 24-net are networks that judge whether a region is a face, while 12-calibration-net and 24-calibration-net are correction networks. Finally, face detection is performed on the training data with the 24-net network, windows judged non-face are taken as negative samples and all real faces as positive samples. In the third training stage, the images input to the second stage are processed with the 48-net and 48-calibration-net networks to complete the final stage of training, so that the corresponding face picture can be obtained from the video image to be identified.
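The non-maximum suppression step mentioned above can be sketched as a minimal greedy implementation; the IoU overlap criterion and the 0.5 threshold are common choices, not values specified by the embodiment:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring window,
    drop any remaining window that overlaps a kept one by more than thresh."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(i)
    return keep
```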
Specifically, the correction network is used to correct the face region and obtain the coordinates of the face region. The correction steps are as follows. Three offset variables are first set: the horizontal translation Xn, the vertical translation Yn and the aspect-ratio scale Sn, where Xn takes 3 values, Yn takes 3 values and Sn takes 5 values. The combinations of Xn, Yn and Sn therefore give 3 x 3 x 5 = 45 combinations in total. The actual face region on the data set (training data) is corrected according to each combination; each bounding box corrected according to a combination has a score cn, and the scores higher than a set threshold t are accumulated into the primary boundary, whose final averaged result is the optimal bounding box. For example, the three offset variables may be set as Sn ∈ (0.83, 0.91, 1.0, 1.10, 1.21), Xn ∈ (-0.17, 0, 0.17) and Yn ∈ (-0.17, 0, 0.17), and the three parameters of the offset vector are corrected simultaneously according to the corresponding correction formula.
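The 45-combination grid above could be enumerated as in the sketch below. The embodiment's own correction formula is not reproduced in the text; the box transform used here (x' = x - Xn*w/Sn, y' = y - Yn*h/Sn, w' = w/Sn, h' = h/Sn) follows the published CascadeCNN calibration and is an assumption:

```python
SN = (0.83, 0.91, 1.0, 1.10, 1.21)   # aspect-ratio scale values from the text
XN = (-0.17, 0.0, 0.17)              # horizontal translation values
YN = (-0.17, 0.0, 0.17)              # vertical translation values

COMBOS = [(s, x, y) for s in SN for x in XN for y in YN]  # 5 x 3 x 3 = 45

def calibrate(box, sn, xn, yn):
    """Apply one (Sn, Xn, Yn) adjustment to a box given as (x, y, w, h).
    Transform assumed from the published CascadeCNN calibration."""
    x, y, w, h = box
    return (x - xn * w / sn, y - yn * h / sn, w / sn, h / sn)

print(len(COMBOS))  # -> 45
```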
Correspondingly, in step S30 or step S40, the feature point detection model is trained using the DCNN network.
Wherein, DCNN is a deep convolutional neural network. In this embodiment, pictures with annotated positions of the facial features (the five features: left eye, right eye, nose, left mouth corner and right mouth corner) are used to train the DCNN network to obtain the feature point detection model.
Specifically, the training process of the network is as follows. A training group is first selected by randomly choosing N samples from the training data; the weights and thresholds are set to random values close to 0, and the learning rate is initialised. Then, the training group is input into the DCNN network, the predicted output of the network is obtained and its true output is provided. The output error is computed from the predicted output x', the corresponding true output x, the feature index i and the length l of the face frame; based on this output error, the adjustment amount of each weight and of each threshold is calculated in turn, and the weights and thresholds in the DCNN model are adjusted accordingly. After M iterations, whether the accuracy of the model meets the requirement is judged: if not, iteration continues; if so, training ends and the feature point detection model is obtained.
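The error formula in this paragraph did not survive extraction. A common form consistent with the described symbols (predicted point x', true point x, feature index i, face-frame length l) is the mean landmark distance normalised by the face-frame length, sketched here as an assumption:

```python
import math

def normalized_point_error(pred, true, l):
    """Mean Euclidean landmark error, normalised by the face-frame length l.
    Assumed reconstruction of the garbled error formula above."""
    dists = [math.dist(p, t) for p, t in zip(pred, true)]
    return sum(dists) / (len(dists) * l)

print(normalized_point_error([(0, 0), (3, 4)], [(0, 0), (0, 0)], 10.0))  # -> 0.25
```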
Correspondingly, in step S30 or step S40, the emotion detection model is trained using the ResNet-80 network.
Wherein, ResNet-80 refers to a network based on residual network theory with 80 layers in total, which can be understood as an 80-layer residual network. A residual network (ResNet) is a kind of deep convolutional network. In this embodiment, face pictures annotated with the seven emotions are trained with the 80-layer residual network to obtain the emotion detection model, improving the recognition accuracy of the model. The seven emotions are calm, angry, disgusted, afraid, happy, sad and surprised.
Specifically, the steps of training the face pictures annotated with the seven emotions using the 80-layer deep convolutional network are as follows. The face pictures annotated with the 7 emotions (the original training data) are first normalised to 256*256 pixels. The face pictures and their corresponding picture label data are then converted to a unified format (for example, picture label data "1" represents the image data "angry") to obtain the target training data, which is shuffled randomly before model training, so that the model can learn the emotion features from the training data, improving the recognition accuracy of the model. The target training data is then input into the network and training starts; the values of the model parameters are adjusted by gradient descent, and after multiple iterations, training stops when the measured accuracy stabilises at about 0.99, so that the emotion detection model is obtained. In the gradient descent update rule θj := θj - α * ∂J(θ)/∂θj, θj denotes the θ value obtained at each iteration, α is the learning rate, hθ(x) is the probability density function, xj denotes the training data at iteration j, x(i) denotes the positive and negative samples and y(i) denotes the output result. The gradient descent algorithm, also called the steepest descent algorithm, differentiates θ over multiple iterations to obtain the θ value that minimises the cost function J(θ), which is the required model parameter; based on this model parameter, the emotion detection model is obtained. Gradient descent is simple to compute and easy to implement.
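As a minimal sketch of one gradient-descent update of the kind described above (shown here for a single-layer logistic output rather than the 80-layer network, purely for illustration):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def gradient_step(theta, xs, ys, lr=0.1):
    """One batch update theta_j := theta_j - lr * (1/m) * sum_i
    (h_theta(x_i) - y_i) * x_ij, the standard rule for a logistic output."""
    m = len(xs)
    grads = [0.0] * len(theta)
    for x, y in zip(xs, ys):
        h = sigmoid(sum(t * xi for t, xi in zip(theta, x)))
        for j, xi in enumerate(x):
            grads[j] += (h - y) * xi
    return [t - lr * g / m for t, g in zip(theta, grads)]
```

Iterating this update until the accuracy plateaus is the "multiple iterations" loop the paragraph describes.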
Correspondingly, in step S30 or step S40, the head pose detection model is trained using a 10-layer convolutional neural network.
Wherein, a convolutional neural network (CNN) is a multilayer neural network that excels at machine learning problems related to images, especially large images. The basic structure of a CNN includes two kinds of layers: convolutional layers and pooling layers.
In this embodiment, since a neural network with more layers takes longer to compute and the differences between head poses are relatively pronounced, a 10-layer convolutional neural network can reach the required training precision within a short time. The data in the umdface database are trained with the 10-layer convolutional neural network to obtain the head pose detection model, greatly shortening the training time of the head pose model and improving the efficiency of model recognition. The umdface database is an image database containing face information of different people (such as face frames and face poses).
Specifically, the training process using the 10-layer convolutional neural network is as follows. A convolution operation (i.e. feature extraction) is performed on the training data according to yj = Σi xi * wij + bj, where * denotes convolution, xi denotes the i-th input feature map, yj denotes the j-th output feature map, wij is the convolution kernel (the weights) between the i-th input feature map and the j-th output feature map, and bj is the bias term of the j-th output feature map. Max-pooling down-sampling is then applied to the feature maps after convolution to reduce their dimensionality: each neuron of the i-th down-sampled output map is obtained by local sampling of the i-th input map (the feature map after convolution) with an S*S down-sampling window, and m and n denote the moving strides of the down-sampling window.
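The convolution and max-pooling operations just described can be sketched in plain Python (valid-mode, one input and one output map; implemented as cross-correlation, as most CNN frameworks do):

```python
def conv2d_valid(image, kernel, bias=0.0):
    """Valid-mode 2-D convolution y = sum(x * w) + b for a single
    input map and a single output map."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(len(image) - kh + 1):
        row = []
        for c in range(len(image[0]) - kw + 1):
            row.append(bias + sum(
                image[r + i][c + j] * kernel[i][j]
                for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

def max_pool(fmap, s=2):
    """S x S max-pooling with stride S, the down-sampling step above."""
    return [[max(fmap[r + i][c + j] for i in range(s) for j in range(s))
             for c in range(0, len(fmap[0]) - s + 1, s)]
            for r in range(0, len(fmap) - s + 1, s)]
```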
Correspondingly, in step S30 or step S40, the blink detection model is trained using a logistic regression model.
Wherein, the logistic regression (LR) model is a classification model in machine learning. In this embodiment, eye region pictures annotated in advance as blinking or not blinking are used as training data to train the logistic regression model. Specifically, the logistic regression hypothesis is hθ(x) = g(θ^T x), where g(θ^T x) is the logistic function, i.e. the probability that a sample belongs to a certain class (a binary classification problem). The Sigmoid (S-shaped growth curve) function is chosen as the logistic function. The Sigmoid function is a common S-shaped function in biology; in information science, since it and its inverse are both monotonically increasing, it is often used as the threshold function of a neural network, mapping a variable to the interval (0, 1). The function formula of the Sigmoid function is g(z) = 1/(1 + e^(-z)); substituting it into the logistic regression hypothesis gives hθ(x) = 1/(1 + e^(-θ^T x)). Further, substituting Cost(hθ(x), y) into the cost function of the logistic regression model gives J(θ) = -(1/m) Σi [y(i) log hθ(x(i)) + (1 - y(i)) log(1 - hθ(x(i)))]. Since logistic regression is a binary classification model, if the probability of the positive class is p, then for an input the odds p/(1 - p) show whether it is more likely to belong to the positive or the negative class; the Sigmoid function reflects this property of the logistic regression model well, making the training of the logistic regression model efficient.
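The cost function stated above can be written out as a short sketch; the hypothesis values h are assumed to be precomputed Sigmoid outputs:

```python
import math

def log_loss(h_vals, ys):
    """J(theta) = -(1/m) * sum[ y*log(h) + (1-y)*log(1-h) ],
    the logistic regression cost given above."""
    m = len(ys)
    return -sum(y * math.log(h) + (1 - y) * math.log(1 - h)
                for h, y in zip(h_vals, ys)) / m

print(round(log_loss([0.5, 0.5], [1, 0]), 4))  # -> 0.6931
```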
Correspondingly, in step S30 or step S40, the iris edge detection model is trained using the random forest algorithm.
Wherein, a random forest is a classifier that uses multiple trees to train and predict on samples (i.e. training data). In this embodiment, single-eye pictures whose iris regions are annotated with a preset colour are used as training data. Specifically, the implementation steps of the random forest are as follows: a pixel on the picture is chosen at random and the region is continuously spread to the closely surrounding pixels, performing pixel comparisons as it grows. Since the iris is annotated in advance with a preset colour, the colour of the iris region is entirely different from the colour of its surrounding region; therefore, as long as a region is found whose colour differs from all of a relatively large surrounding region (for example, the outermost 20 pixels), it is regarded as the iris edge.
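A toy sketch of the region-growing comparison described above (flood fill over same-colour pixels, taking the region pixels whose neighbours differ in colour as the edge; the random forest classifier itself is omitted, so this illustrates only the spreading-and-comparing idea):

```python
from collections import deque

def iris_edge_pixels(grid, seed):
    """Grow a region of identically-coloured pixels from `seed`; return
    the region pixels that touch a differently-coloured neighbour."""
    h, w = len(grid), len(grid[0])
    color = grid[seed[0]][seed[1]]
    seen, edge, q = {seed}, set(), deque([seed])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < h and 0 <= nc < w):
                continue
            if grid[nr][nc] == color:
                if (nr, nc) not in seen:
                    seen.add((nr, nc))
                    q.append((nr, nc))
            else:
                edge.add((r, c))  # colour changes here: an edge pixel
    return edge
```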
Specifically, the human eye is composed of parts such as the sclera, the iris, the pupil, the crystalline lens and the retina. The iris is an annular structure located between the black pupil and the white sclera, containing many interlaced minute features such as spots, filaments, coronas, stripes and crypts. In this embodiment, the training data are trained with the random forest algorithm to obtain the iris edge detection model, providing technical support for subsequently detecting the iris edge position based on this model and thus obtaining the eye movement changes.
In this embodiment, pictures with annotated face positions are trained using the CascadeCNN network to obtain the face detection model, improving the recognition efficiency of the face detection model. Pictures with annotated positions of the facial features (the five features: left eye, right eye, nose, left mouth corner and right mouth corner) are used to train the deep convolutional neural network to obtain the feature point detection model, improving the recognition accuracy of the feature point detection model. Face pictures annotated with the seven emotions are trained with the 80-layer residual network to obtain the emotion detection model, improving the recognition accuracy of the emotion detection model. The data in the umdface database are trained with the 10-layer convolutional neural network to obtain the head pose detection model, greatly shortening the training time of the head pose model and improving the efficiency of model recognition. The pre-annotated eye region pictures are trained with the logistic regression model to obtain the blink detection model, which reflects the binary classification problem (whether blinking) well and improves the recognition accuracy of the model. The single-eye pictures whose iris regions are annotated with a preset colour are trained with the random forest algorithm to obtain the iris edge detection model, which is simple to implement and improves the training efficiency of the model.
In a specific embodiment, the standard expression recognition result corresponding to each frame of the video image to be identified corresponds to at least one standard emotion indicator, and the test expression recognition result corresponding to each frame of the video image to be identified corresponds to at least one test emotion indicator.
Wherein, the standard emotion indicators include standard positive emotions and standard negative emotions. A standard positive emotion is a positive emotion shown in the basic question feature set, such as happiness or raised mouth corners. A standard negative emotion is a negative emotion shown in the basic question feature set, such as anger or frowning. The test emotion indicators include test positive emotions and test negative emotions. A test positive emotion is a positive emotion shown in the sensitive question feature set, such as happiness or raised mouth corners. A test negative emotion is a negative emotion shown in the sensitive question feature set, such as anger or frowning.
In a specific embodiment, the standard expression recognition result corresponding to each frame of the video image to be identified corresponds to at least one standard emotion indicator, and the test expression recognition result corresponding to each frame of the video image to be identified corresponds to at least one test emotion indicator. As shown in Figure 5, step S50, i.e. obtaining the risk identification result based on the standard expression recognition results and the test expression recognition results, specifically includes the following steps:
S51: Based on all the standard expression recognition results, determine the number of occurrences of each standard emotion indicator as the first frequency.
Specifically, the standard emotion indicators of each frame of the video image to be identified in the basic question feature set are counted to obtain, in the standard expression recognition results corresponding to the basic question feature set, the number of occurrences of the standard positive emotions or standard negative emotions as the first frequency. In this embodiment, counting the standard emotion indicators of each frame of the video image to be identified in the basic question feature set determines the number of occurrences of each standard emotion indicator as the first frequency, providing technical support for the subsequent risk identification.
S52: Based on all the test expression recognition results, determine the number of occurrences of each test emotion indicator as the second frequency.
Specifically, the test emotion indicators of each frame of the video image to be identified in the sensitive question feature set are counted to obtain, in the test expression recognition results corresponding to the sensitive question feature set, the number of occurrences of the test positive emotions or test negative emotions as the second frequency. In this embodiment, counting the test emotion indicators of each frame of the video image to be identified in the sensitive question feature set determines the number of occurrences of each test emotion indicator as the second frequency, providing technical support for the subsequent risk identification.
S53: Obtain the risk identification result based on the first frequency and the second frequency.
Specifically, the fold difference between the second frequency and the first frequency is calculated using the formula t2/t1, to obtain the fold difference of the positive emotions or the fold difference of the negative emotions, where t1 denotes the first frequency (the frequency of occurrence of the standard positive emotion indicator or of the standard negative emotion indicator) and t2 denotes the second frequency (the frequency of occurrence of the test positive emotion indicator or of the test negative emotion indicator). When the fold difference of the negative emotions is needed, the test negative emotion indicator is divided by the standard negative emotion indicator to obtain its corresponding fold difference, and the fold difference is compared with the first threshold: if the fold difference exceeds the first threshold, the case is judged risky, and the risk identification result is obtained. Alternatively, when the fold difference of the positive emotions is needed, the test positive emotion indicator is divided by the standard positive emotion indicator to obtain its corresponding fold difference, and the fold difference is compared with the second threshold: if the fold difference exceeds the second threshold, the case is judged risky, and the risk identification result is obtained. In this embodiment, the first threshold is set to 3 times and the second threshold to 2 times.
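The fold-difference comparison above might be sketched as follows; the dictionary format and indicator-key naming are assumptions for illustration, while the 3x and 2x thresholds are those of this embodiment:

```python
def risk_result(first_freq, second_freq, neg_threshold=3.0, pos_threshold=2.0):
    """Compare each indicator's second frequency t2 against its first
    frequency t1: flag risk when t2/t1 exceeds the threshold (3x for
    negative indicators, 2x for positive ones, as in this embodiment)."""
    for key, t2 in second_freq.items():
        t1 = first_freq.get(key, 0)
        if t1 == 0:
            continue  # no baseline frequency to divide by
        limit = neg_threshold if key.startswith("negative") else pos_threshold
        if t2 / t1 > limit:
            return "risky"
    return "no risk identified"

print(risk_result({"negative_anger": 1}, {"negative_anger": 4}))  # -> risky
```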
Further, the risk identification result can also be obtained as follows: each reference datum counted from the basic question feature set is compared one by one with each test datum of the sensitive question feature set to obtain the risk identification result. Specifically, the reference data are the indicator data corresponding to the basic question feature set, including blinking, AU, emotion and head pose; the test data are the indicator data corresponding to the sensitive question feature set, also including blinking, AU, emotion and head pose. Finally, the number of occurrences of each basic indicator is counted and compared with the number of occurrences of each test indicator; if an abnormal indicator exceeds the preset threshold (such as the first threshold or the second threshold), the user is identified as a risk user.
In the present embodiment, the target customer is questioned through a video chat, and the video of the customer's answers, i.e. the original video data, is obtained; this saves labor and makes the credit-review process intelligent. The original video data is then split into frames and normalized so that each frame of video image to be identified has a uniform pixel size, which allows each frame to be identified subsequently and improves the accuracy of risk identification. Next, the at least two frames of video images to be identified are divided in equal proportion into a basic-question feature set and a sensitive-question feature set, which simplifies the later statistics over the recognition results. Each frame of video image to be identified in the basic-question feature set is input to the face detection model for identification, producing a standard face picture with other interfering factors removed, which improves the accuracy of risk identification. The standard face picture is then input to the feature point detection model for identification, yielding the five facial feature points (the standard face feature points); the standard face feature points are input to the iris edge detection model for identification, yielding the eye movement of the target customer (the fourth standard expression recognition result), so that the eye movement provides technical support for subsequent risk control. The standard face picture is input to the emotion detection model for identification, yielding the probability of each corresponding emotion of the target customer (the first standard expression recognition result), providing technical support for subsequent risk control based on that result. The standard face picture is input to the head pose model for identification, yielding the offset direction of the head (the second standard expression recognition result); the obtained head pose reflects the target customer's gaze direction or direction of attention well, providing technical support for subsequent risk control and improving its accuracy. The standard face picture is input to the blink detection model for identification, yielding the target customer's blinking behavior (the third standard expression recognition result); counting blinks later can reflect the target customer's current psychological state (such as nervousness) and assists the subsequent risk assessment of the target customer, further improving the accuracy of risk control. Finally, based on the standard expression recognition results, the occurrence count of each standard emotion indicator is determined as the first frequency; based on all test emotion recognition results, the occurrence count of each test emotion indicator is determined as the second frequency. The fold difference between the first frequency and the second frequency is calculated and compared with the first or second threshold to obtain the risk identification result, thereby achieving risk identification based on micro-expressions and effectively assisting credit reviewers in performing risk control on loan applicants.
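The per-frame model cascade described above can be sketched as follows. Every model here is a hypothetical placeholder standing in for the patent's trained face detection, feature point, emotion, head pose, blink, and iris edge networks; the patent routes the face picture to some detectors and the facial feature points to others, and the exact routing varies between its embodiments, so this sketch picks one arrangement:

```python
def recognize_frame(frame, models):
    """Run one normalized frame through the detector cascade.

    `models` maps detector names to callables; all callables are assumed
    placeholders for pre-trained models, not a real API.
    """
    face = models["face"](frame)        # face detection -> standard/test face picture
    points = models["landmarks"](face)  # feature point detection -> five facial points
    return {
        "emotion": models["emotion"](face),      # first expression recognition result
        "head_pose": models["head_pose"](face),  # second expression recognition result
        "blink": models["blink"](face),          # third expression recognition result
        "iris": models["iris"](points),          # fourth expression recognition result
    }
```

A caller would run this over every frame of both feature sets and then tally the indicator occurrences from the four results.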
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an execution order; the execution order of each process is determined by its function and internal logic, and does not constitute any limitation on the implementation of the embodiments of the present invention.
Embodiment 2
Fig. 6 shows a functional block diagram of a micro-expression-based risk identification device corresponding one-to-one to the micro-expression-based risk identification method of Embodiment 1. As shown in Fig. 6, the micro-expression-based risk identification device includes a to-be-identified video data acquisition module 10, a to-be-identified video data division module 20, a standard expression recognition result acquisition module 30, a test expression recognition result acquisition module 40, and a risk identification result acquisition module 50. The functions realized by these modules correspond one-to-one to the steps of the micro-expression-based risk identification method of Embodiment 1; to avoid repetition, this embodiment does not describe them in detail one by one.
The to-be-identified video data acquisition module 10 is configured to obtain video data to be identified, which includes at least two frames of video images to be identified.
The to-be-identified video data division module 20 is configured to divide the at least two frames of video images to be identified into a basic-question feature set and a sensitive-question feature set.
The standard expression recognition result acquisition module 30 is configured to input each frame of video image to be identified in the basic-question feature set into at least two pre-trained micro-expression recognition models for identification, obtaining corresponding standard expression recognition results.
The test expression recognition result acquisition module 40 is configured to input each frame of video image to be identified in the sensitive-question feature set into the at least two pre-trained micro-expression recognition models for identification, obtaining corresponding test expression recognition results.
The risk identification result acquisition module 50 is configured to obtain a risk identification result based on the standard expression recognition results and the test expression recognition results.
Preferably, the to-be-identified video data acquisition module 10 includes an original video data acquisition unit 11 and a to-be-identified video data acquisition unit 12.
The original video data acquisition unit 11 is configured to obtain original video data.
The to-be-identified video data acquisition unit 12 is configured to split the original video data into frames and normalize them, obtaining the video data to be identified.
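The framing and normalization step can be illustrated with a small sketch. This is a stand-in under stated assumptions: the frames are plain 2-D pixel lists and the resize is naive nearest-neighbour; a real system would use a video library such as OpenCV for decoding and resizing:

```python
def normalize_frames(frames, size=(48, 48)):
    """Resize every frame to one uniform pixel size (nearest-neighbour).

    `frames` is a list of 2-D pixel grids (lists of rows); the target size
    of 48x48 is an illustrative assumption, not a value from the patent.
    """
    h, w = size
    out = []
    for frame in frames:
        fh, fw = len(frame), len(frame[0])
        out.append([[frame[r * fh // h][c * fw // w] for c in range(w)]
                    for r in range(h)])
    return out
```

After this step every frame of video image to be identified shares the same pixel dimensions, which is what allows the later models to process each frame uniformly.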
Preferably, the standard expression recognition result acquisition module 30 includes a standard face picture acquisition unit 31, a standard face feature point acquisition unit 32, a first standard expression recognition result acquisition unit 33, a second standard expression recognition result acquisition unit 34, a third standard expression recognition result acquisition unit 35, and a fourth standard expression recognition result acquisition unit 36.
The standard face picture acquisition unit 31 is configured to input each frame of video image to be identified in the basic-question feature set into the face detection model for identification, obtaining a standard face picture.
The standard face feature point acquisition unit 32 is configured to input the standard face picture into the feature point detection model for identification, obtaining standard face feature points.
The first standard expression recognition result acquisition unit 33 is configured to input the standard face picture into the emotion detection model for identification, obtaining a first standard expression recognition result.
The second standard expression recognition result acquisition unit 34 is configured to input the standard face picture into the head pose model for identification, obtaining a second standard expression recognition result.
The third standard expression recognition result acquisition unit 35 is configured to input the standard face picture into the iris edge detection model for identification, obtaining a third standard expression recognition result.
The fourth standard expression recognition result acquisition unit 36 is configured to input the standard face feature points into the blink detection model for identification, obtaining a fourth standard expression recognition result.
Preferably, the test expression recognition result acquisition module 40 includes a test face picture acquisition unit 41, a test face feature point acquisition unit 42, a first test expression recognition result acquisition unit 43, a second test expression recognition result acquisition unit 44, a third test expression recognition result acquisition unit 45, and a fourth test expression recognition result acquisition unit 46.
The test face picture acquisition unit 41 is configured to input each frame of video image to be identified in the sensitive-question feature set into the face detection model for identification, obtaining a test face picture.
The test face feature point acquisition unit 42 is configured to input the test face picture into the feature point detection model for identification, obtaining test face feature points.
The first test expression recognition result acquisition unit 43 is configured to input the test face picture into the emotion detection model for identification, obtaining a first test expression recognition result.
The second test expression recognition result acquisition unit 44 is configured to input the test face picture into the head pose model for identification, obtaining a second test expression recognition result.
The third test expression recognition result acquisition unit 45 is configured to input the test face picture into the iris edge detection model for identification, obtaining a third test expression recognition result.
The fourth test expression recognition result acquisition unit 46 is configured to input the test face feature points into the blink detection model for identification, obtaining a fourth test expression recognition result.
The standard expression recognition result corresponding to each frame of video image to be identified corresponds to at least one standard emotion indicator; the test expression recognition result corresponding to each frame of video image to be identified corresponds to at least one test emotion indicator.
Preferably, the risk identification result acquisition module 50 includes a first frequency acquisition unit 51, a second frequency acquisition unit 52, and a risk identification result acquisition unit 53.
The first frequency acquisition unit 51 is configured to determine, based on all standard emotion recognition results, the occurrence count of each standard emotion indicator as the first frequency.
The second frequency acquisition unit 52 is configured to determine, based on all test emotion recognition results, the occurrence count of each test emotion indicator as the second frequency.
The risk identification result acquisition unit 53 is configured to obtain a risk identification result based on the first frequency and the second frequency.
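The frequency-counting step performed by units 51 and 52 amounts to tallying indicator labels across per-frame results; a minimal sketch (the list-of-labels representation of a recognition result is an assumption made here for illustration):

```python
from collections import Counter

def indicator_frequencies(recognition_results):
    """Count how often each emotion indicator appears across per-frame
    recognition results; each result is assumed to be a list of labels."""
    counts = Counter()
    for result in recognition_results:
        counts.update(result)
    return counts
```

Running this once over the basic-question results gives the first frequencies, and once over the sensitive-question results gives the second frequencies, which the risk identification result acquisition unit then compares.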
Embodiment 3
The present embodiment provides a computer-readable storage medium storing a computer program. When the computer program is executed by a processor, it implements the micro-expression-based risk identification method of Embodiment 1, or realizes the functions of each module/unit of the micro-expression-based risk identification device of Embodiment 2; to avoid repetition, details are not repeated here.
Embodiment 4
Fig. 7 is a schematic diagram of a computer device provided by an embodiment of the invention. As shown in Fig. 7, the computer device 70 of this embodiment includes a processor 71, a memory 72, and a computer program 73 stored in the memory 72 and executable on the processor 71. When executing the computer program 73, the processor 71 implements the steps of each embodiment of the micro-expression-based risk identification method described above, such as steps S10 to S50 shown in Fig. 1; or the processor 71, when executing the computer program 73, realizes the functions of each module/unit in each device embodiment described above, such as the functions of modules 10 to 50 shown in Fig. 6.
It will be clear to those skilled in the art that, for convenience and brevity of description, the division into the functional units and modules above is only illustrative. In practical applications, the above functions may be assigned to different functional units or modules as needed; that is, the internal structure of the device may be divided into different functional units or modules to complete all or part of the functions described above.
The embodiments above are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications and replacements do not depart the essence of the corresponding technical solutions from the spirit and scope of the technical solutions of the embodiments of the present invention, and should all be included within the protection scope of the present invention.

Claims (10)

1. A micro-expression-based risk identification method, characterized by comprising:
obtaining video data to be identified, the video data to be identified comprising at least two frames of video images to be identified;
dividing the at least two frames of video images to be identified into a basic-question feature set and a sensitive-question feature set;
inputting each frame of said video image to be identified in the basic-question feature set into at least two pre-trained micro-expression recognition models for identification, obtaining corresponding standard expression recognition results;
inputting each frame of said video image to be identified in the sensitive-question feature set into the at least two pre-trained micro-expression recognition models for identification, obtaining corresponding test expression recognition results;
obtaining a risk identification result based on the standard expression recognition results and the test expression recognition results.
2. The micro-expression-based risk identification method of claim 1, wherein obtaining the video data to be identified comprises:
obtaining original video data;
splitting the original video data into frames and normalizing them to obtain the video data to be identified.
3. The micro-expression-based risk identification method of claim 1, wherein the micro-expression recognition models comprise a face detection model, a feature point detection model, an emotion detection model, a head pose detection model, a blink detection model, and an iris edge detection model.
4. The micro-expression-based risk identification method of claim 1, wherein inputting each frame of said video image to be identified in the basic-question feature set into at least two pre-trained micro-expression recognition models for identification and obtaining corresponding standard expression recognition results comprises:
inputting each frame of said video image to be identified in the basic-question feature set into the face detection model for identification to obtain a standard face picture;
inputting the standard face picture into the feature point detection model for identification to obtain standard face feature points;
inputting the standard face picture into the emotion detection model for identification to obtain a first standard expression recognition result;
inputting the standard face picture into the head pose model for identification to obtain a second standard expression recognition result;
inputting the standard face picture into the blink detection model for identification to obtain a third standard expression recognition result;
inputting the standard face feature points into the iris edge detection model for identification to obtain a fourth standard expression recognition result;
wherein the standard expression recognition results comprise the first standard expression recognition result, the second standard expression recognition result, the third standard expression recognition result, and the fourth standard expression recognition result.
5. The micro-expression-based risk identification method of claim 1, wherein inputting each frame of said video image to be identified in the sensitive-question feature set into the at least two pre-trained micro-expression recognition models for identification and obtaining corresponding test expression recognition results comprises:
inputting each frame of said video image to be identified in the sensitive-question feature set into the face detection model for identification to obtain a test face picture;
inputting the test face picture into the feature point detection model for identification to obtain test face feature points;
inputting the test face picture into the emotion detection model for identification to obtain a first test expression recognition result;
inputting the test face picture into the head pose model for identification to obtain a second test expression recognition result;
inputting the test face picture into the blink detection model for identification to obtain a third test expression recognition result;
inputting the test face feature points into the iris edge detection model for identification to obtain a fourth test expression recognition result;
wherein the test expression recognition results comprise the first test expression recognition result, the second test expression recognition result, the third test expression recognition result, and the fourth test expression recognition result.
6. The micro-expression-based risk identification method of any one of claims 3 to 5, wherein the face detection model is a face detection model obtained by training a CascadeCNN network;
the feature point detection model is obtained by training a DCNN network;
the emotion detection model is obtained by training a ResNet-80 network;
the head pose detection model is obtained by training a 10-layer convolutional neural network;
the blink detection model is obtained by training a logistic regression model;
the iris edge detection model is obtained by training a random forest algorithm.
7. The micro-expression-based risk identification method of claim 1, wherein the standard expression recognition result corresponding to each frame of said video image to be identified corresponds to at least one standard emotion indicator;
the test expression recognition result corresponding to each frame of said video image to be identified corresponds to at least one test emotion indicator;
and obtaining a risk identification result based on the standard expression recognition results and the test expression recognition results comprises:
determining, based on all standard emotion recognition results, the occurrence count of each standard emotion indicator as a first frequency;
determining, based on all test emotion recognition results, the occurrence count of each test emotion indicator as a second frequency;
obtaining a risk identification result based on the first frequency and the second frequency.
8. A micro-expression-based risk identification device, characterized by comprising:
a to-be-identified video data acquisition module, configured to obtain video data to be identified, the video data to be identified comprising at least two frames of video images to be identified;
a to-be-identified video data division module, configured to divide the at least two frames of video images to be identified into a basic-question feature set and a sensitive-question feature set;
a standard expression recognition result acquisition module, configured to input each frame of said video image to be identified in the basic-question feature set into at least two pre-trained micro-expression recognition models for identification, obtaining corresponding standard expression recognition results;
a test expression recognition result acquisition module, configured to input each frame of said video image to be identified in the sensitive-question feature set into the at least two pre-trained micro-expression recognition models for identification, obtaining corresponding test expression recognition results;
a risk identification result acquisition module, configured to obtain a risk identification result based on the standard expression recognition results and the test expression recognition results.
9. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the micro-expression-based risk identification method of any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the micro-expression-based risk identification method of any one of claims 1 to 7.
CN201810292475.0A 2018-03-30 2018-03-30 Risk Identification Method, device, equipment based on micro- expression and medium Pending CN108537160A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810292475.0A CN108537160A (en) 2018-03-30 2018-03-30 Risk Identification Method, device, equipment based on micro- expression and medium
PCT/CN2018/094217 WO2019184125A1 (en) 2018-03-30 2018-07-03 Micro-expression-based risk identification method and device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810292475.0A CN108537160A (en) 2018-03-30 2018-03-30 Risk Identification Method, device, equipment based on micro- expression and medium

Publications (1)

Publication Number Publication Date
CN108537160A true CN108537160A (en) 2018-09-14

Family

ID=63482484

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810292475.0A Pending CN108537160A (en) 2018-03-30 2018-03-30 Risk Identification Method, device, equipment based on micro- expression and medium

Country Status (2)

Country Link
CN (1) CN108537160A (en)
WO (1) WO2019184125A1 (en)

Cited By (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109472206A (en) * 2018-10-11 2019-03-15 平安科技(深圳)有限公司 Methods of risk assessment, device, equipment and medium based on micro- expression
CN109509087A (en) * 2018-12-15 2019-03-22 深圳壹账通智能科技有限公司 Intelligentized loan checking method, device, equipment and medium
CN109584050A (en) * 2018-12-14 2019-04-05 深圳壹账通智能科技有限公司 Consumer's risk degree analyzing method and device based on micro- Expression Recognition
CN109584051A (en) * 2018-12-18 2019-04-05 深圳壹账通智能科技有限公司 The overdue risk judgment method and device of client based on micro- Expression Recognition
CN109635838A (en) * 2018-11-12 2019-04-16 平安科技(深圳)有限公司 Face samples pictures mask method, device, computer equipment and storage medium
CN109711297A (en) * 2018-12-14 2019-05-03 深圳壹账通智能科技有限公司 Risk Identification Method, device, computer equipment and storage medium based on facial picture
CN109711982A (en) * 2019-01-04 2019-05-03 深圳壹账通智能科技有限公司 Face core questioning method, device, computer equipment and readable storage medium storing program for executing
CN109754312A (en) * 2018-12-18 2019-05-14 深圳壹账通智能科技有限公司 Product method for pushing, device, computer equipment and storage medium
CN109766419A (en) * 2018-12-14 2019-05-17 深圳壹账通智能科技有限公司 Products Show method, apparatus, equipment and storage medium based on speech analysis
CN109766461A (en) * 2018-12-15 2019-05-17 深圳壹账通智能科技有限公司 Photo management method, device, computer equipment and medium based on micro- expression
CN109767290A (en) * 2018-12-18 2019-05-17 深圳壹账通智能科技有限公司 Product method for pushing, device, computer equipment and storage medium
CN109766917A (en) * 2018-12-18 2019-05-17 深圳壹账通智能科技有限公司 Interview video data handling procedure, device, computer equipment and storage medium
CN109784170A (en) * 2018-12-13 2019-05-21 平安科技(深圳)有限公司 Vehicle insurance damage identification method, device, equipment and storage medium based on image recognition
CN109784185A (en) * 2018-12-18 2019-05-21 深圳壹账通智能科技有限公司 Client's food and drink evaluation automatic obtaining method and device based on micro- Expression Recognition
CN109793526A (en) * 2018-12-18 2019-05-24 深圳壹账通智能科技有限公司 Lie detecting method, device, computer equipment and storage medium
CN109831665A (en) * 2019-01-16 2019-05-31 深圳壹账通智能科技有限公司 A kind of video quality detecting method, system and terminal device
CN109858405A (en) * 2019-01-17 2019-06-07 深圳壹账通智能科技有限公司 Satisfaction evaluation method, apparatus, equipment and storage medium based on micro- expression
CN109919426A (en) * 2019-01-24 2019-06-21 平安科技(深圳)有限公司 Check interview lie detecting method, server and computer readable storage medium
CN110427881A (en) * 2019-08-01 2019-11-08 东南大学 The micro- expression recognition method of integration across database and device based on the study of face local features
CN110490424A (en) * 2019-07-23 2019-11-22 阿里巴巴集团控股有限公司 A kind of method and apparatus of the progress risk assessment based on convolutional neural networks
CN110889332A (en) * 2019-10-30 2020-03-17 中国科学院自动化研究所南京人工智能芯片创新研究院 Lie detection method based on micro expression in interview
CN111241887A (en) * 2018-11-29 2020-06-05 北京市商汤科技开发有限公司 Target object key point identification method and device, electronic equipment and storage medium
WO2020124710A1 (en) * 2018-12-18 2020-06-25 深圳壹账通智能科技有限公司 Auxiliary security inspection analysis method and apparatus, and computer device and storage medium
CN111339940A (en) * 2020-02-26 2020-06-26 中国工商银行股份有限公司 Video risk identification method and device
CN111540440A (en) * 2020-04-23 2020-08-14 深圳市镜象科技有限公司 Psychological examination method, device, equipment and medium based on artificial intelligence
CN111597301A (en) * 2020-04-24 2020-08-28 北京百度网讯科技有限公司 Text prediction method and device and electronic equipment
CN111767779A (en) * 2020-03-18 2020-10-13 北京沃东天骏信息技术有限公司 Image processing method, device, equipment and computer readable storage medium
CN112084992A (en) * 2020-09-18 2020-12-15 北京中电兴发科技有限公司 Face frame selection method in face key point detection module
CN112183946A (en) * 2020-09-07 2021-01-05 腾讯音乐娱乐科技(深圳)有限公司 Multimedia content evaluation method, device and training method thereof
WO2021027553A1 (en) * 2019-08-15 2021-02-18 深圳壹账通智能科技有限公司 Micro-expression classification model generation method, image recognition method, apparatus, devices, and mediums
CN112614583A (en) * 2020-11-25 2021-04-06 平安医疗健康管理股份有限公司 Depression grade testing system
WO2021069989A1 (en) * 2019-10-06 2021-04-15 International Business Machines Corporation Filtering group messages
CN112699774A (en) * 2020-12-28 2021-04-23 深延科技(北京)有限公司 Method and device for recognizing emotion of person in video, computer equipment and medium
CN113158978A (en) * 2021-05-14 2021-07-23 无锡锡商银行股份有限公司 Risk early warning method for micro-expression recognition in video auditing
CN113243918A (en) * 2021-06-11 2021-08-13 深圳般若计算机系统股份有限公司 Risk detection method and device based on multi-mode hidden information test
CN110097004B (en) * 2019-04-30 2022-03-29 北京字节跳动网络技术有限公司 Facial expression recognition method and device
CN115526888A (en) * 2022-11-17 2022-12-27 博奥生物集团有限公司 Eye pattern data identification method and device, storage medium and electronic equipment

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110781810B (en) * 2019-10-24 2024-02-27 合肥盛东信息科技有限公司 Face emotion recognition method
CN111062074B (en) * 2019-12-11 2023-04-07 同济大学 Building space quality virtual simulation and intelligent evaluation method
CN111274447A (en) * 2020-01-13 2020-06-12 深圳壹账通智能科技有限公司 Target expression generation method, device, medium and electronic equipment based on video
CN111860154A (en) * 2020-06-12 2020-10-30 歌尔股份有限公司 Forehead detection method and device based on vision and electronic equipment
CN111783620A (en) * 2020-06-29 2020-10-16 北京百度网讯科技有限公司 Expression recognition method, device, equipment and storage medium
CN111950449B (en) * 2020-08-11 2024-02-13 合肥工业大学 Emotion recognition method based on walking gesture
CN112329663B (en) * 2020-11-10 2023-04-07 西南大学 Micro-expression time detection method and device based on face image sequence
CN112487904A (en) * 2020-11-23 2021-03-12 成都尽知致远科技有限公司 Video image processing method and system based on big data analysis
CN112381036A (en) * 2020-11-26 2021-02-19 厦门大学 Micro expression and macro expression fragment identification method applied to criminal investigation
CN113052064B (en) * 2021-03-23 2024-04-02 北京思图场景数据科技服务有限公司 Attention detection method based on face orientation, facial expression and pupil tracking
CN113191196A (en) * 2021-04-01 2021-07-30 北京睿芯高通量科技有限公司 Novel track analysis method and system in intelligent security system
CN113276827A (en) * 2021-05-26 2021-08-20 朱芮叶 Control method and system for electric automobile energy recovery system and automobile
CN113313048B (en) * 2021-06-11 2024-04-09 北京百度网讯科技有限公司 Facial expression recognition method and device
CN113901915B (en) * 2021-10-08 2024-04-02 无锡锡商银行股份有限公司 Expression detection method of light-weight network and MagFace in video
CN113822229A (en) * 2021-10-28 2021-12-21 重庆科炬企业孵化器有限公司 Expression recognition-oriented user experience evaluation modeling method and device
CN114287938B (en) * 2021-12-13 2024-02-13 重庆大学 Method and equipment for obtaining safety interval of human body parameters in building environment
CN114973362A (en) * 2022-05-20 2022-08-30 厦门大学 Dynamic extension coding micro-expression recognition method applied to social robot
CN116824280B (en) * 2023-08-30 2023-11-24 安徽爱学堂教育科技有限公司 Psychological early warning method based on micro-expression change

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105913046A (en) * 2016-05-06 2016-08-31 姜振宇 Micro-expression identification device and method
CN107480622A (en) * 2017-08-07 2017-12-15 深圳市科迈爱康科技有限公司 Micro- expression recognition method, device and storage medium
CN107679526A (en) * 2017-11-14 2018-02-09 北京科技大学 A kind of micro- expression recognition method of face

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102103617B (en) * 2009-12-22 2013-02-27 华为终端有限公司 Method and device for acquiring expression meanings
CN103258204B (en) * 2012-02-21 2016-12-14 中国科学院心理研究所 A kind of automatic micro-expression recognition method based on Gabor and EOH feature
CN104820495B (en) * 2015-04-29 2019-06-21 姜振宇 A kind of micro- Expression Recognition of exception and based reminding method and device
US10515393B2 (en) * 2016-06-30 2019-12-24 Paypal, Inc. Image data detection for micro-expression analysis and targeted data services


Cited By (53)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109472206B (en) * 2018-10-11 2023-07-07 平安科技(深圳)有限公司 Risk assessment method, device, equipment and medium based on micro-expressions
CN109472206A (en) * 2018-10-11 2019-03-15 平安科技(深圳)有限公司 Risk assessment method, device, equipment and medium based on micro-expression
CN109635838B (en) * 2018-11-12 2023-07-11 平安科技(深圳)有限公司 Face sample picture labeling method and device, computer equipment and storage medium
CN109635838A (en) * 2018-11-12 2019-04-16 平安科技(深圳)有限公司 Face sample picture labeling method and device, computer equipment and storage medium
CN111241887A (en) * 2018-11-29 2020-06-05 北京市商汤科技开发有限公司 Target object key point identification method and device, electronic equipment and storage medium
CN111241887B (en) * 2018-11-29 2024-04-16 北京市商汤科技开发有限公司 Target object key point identification method and device, electronic equipment and storage medium
CN109784170A (en) * 2018-12-13 2019-05-21 平安科技(深圳)有限公司 Vehicle insurance damage identification method, device, equipment and storage medium based on image recognition
JP7078803B2 (en) 2018-12-14 2022-05-31 ワン・コネクト・スマート・テクノロジー・カンパニー・リミテッド・(シェンチェン) Risk recognition methods, equipment, computer equipment and storage media based on facial photographs
CN109766419A (en) * 2018-12-14 2019-05-17 深圳壹账通智能科技有限公司 Product recommendation method, apparatus, equipment and storage medium based on speech analysis
WO2020119450A1 (en) * 2018-12-14 2020-06-18 深圳壹账通智能科技有限公司 Risk identification method employing facial image, device, computer apparatus, and storage medium
JP2022501729A (en) * 2018-12-14 2022-01-06 ワン・コネクト・スマート・テクノロジー・カンパニー・リミテッド・(シェンチェン) Risk recognition methods, equipment, computer equipment and storage media based on facial photographs
CN109711297A (en) * 2018-12-14 2019-05-03 深圳壹账通智能科技有限公司 Risk identification method, device, computer equipment and storage medium based on facial picture
CN109584050A (en) * 2018-12-14 2019-04-05 深圳壹账通智能科技有限公司 User risk level analysis method and device based on micro-expression recognition
CN109766461A (en) * 2018-12-15 2019-05-17 深圳壹账通智能科技有限公司 Photo management method, device, computer equipment and medium based on micro-expression
CN109509087A (en) * 2018-12-15 2019-03-22 深圳壹账通智能科技有限公司 Intelligent loan review method, device, equipment and medium
CN109754312A (en) * 2018-12-18 2019-05-14 深圳壹账通智能科技有限公司 Product pushing method, device, computer equipment and storage medium
CN109793526A (en) * 2018-12-18 2019-05-24 深圳壹账通智能科技有限公司 Lie detection method, device, computer equipment and storage medium
CN109784185A (en) * 2018-12-18 2019-05-21 深圳壹账通智能科技有限公司 Automatic acquisition method and device for customer catering evaluation based on micro-expression recognition
CN109766917A (en) * 2018-12-18 2019-05-17 深圳壹账通智能科技有限公司 Interview video data processing method, device, computer equipment and storage medium
CN109767290A (en) * 2018-12-18 2019-05-17 深圳壹账通智能科技有限公司 Product pushing method, device, computer equipment and storage medium
WO2020124710A1 (en) * 2018-12-18 2020-06-25 深圳壹账通智能科技有限公司 Auxiliary security inspection analysis method and apparatus, and computer device and storage medium
CN109793526B (en) * 2018-12-18 2022-08-02 深圳壹账通智能科技有限公司 Lie detection method, device, computer equipment and storage medium
CN109584051A (en) * 2018-12-18 2019-04-05 深圳壹账通智能科技有限公司 Customer overdue risk judgment method and device based on micro-expression recognition
CN109711982A (en) * 2019-01-04 2019-05-03 深圳壹账通智能科技有限公司 Face verification questioning method, device, computer equipment and readable storage medium
CN109831665A (en) * 2019-01-16 2019-05-31 深圳壹账通智能科技有限公司 Video quality inspection method, system and terminal device
CN109831665B (en) * 2019-01-16 2022-07-08 深圳壹账通智能科技有限公司 Video quality inspection method, system and terminal equipment
CN109858405A (en) * 2019-01-17 2019-06-07 深圳壹账通智能科技有限公司 Satisfaction evaluation method, apparatus, equipment and storage medium based on micro-expression
CN109919426A (en) * 2019-01-24 2019-06-21 平安科技(深圳)有限公司 Audit interview lie detection method, server and computer readable storage medium
CN110097004B (en) * 2019-04-30 2022-03-29 北京字节跳动网络技术有限公司 Facial expression recognition method and device
CN110490424A (en) * 2019-07-23 2019-11-22 阿里巴巴集团控股有限公司 Method and apparatus for risk assessment based on convolutional neural networks
CN110427881B (en) * 2019-08-01 2021-11-26 东南大学 Cross-library micro-expression recognition method and device based on face local area feature learning
CN110427881A (en) * 2019-08-01 2019-11-08 东南大学 Cross-database micro-expression recognition method and device based on facial local feature learning
WO2021027553A1 (en) * 2019-08-15 2021-02-18 深圳壹账通智能科技有限公司 Micro-expression classification model generation method, image recognition method, apparatus, devices, and media
US11552914B2 (en) 2019-10-06 2023-01-10 International Business Machines Corporation Filtering group messages
GB2604772A (en) * 2019-10-06 2022-09-14 Ibm Filtering group messages
WO2021069989A1 (en) * 2019-10-06 2021-04-15 International Business Machines Corporation Filtering group messages
US11843569B2 (en) 2019-10-06 2023-12-12 International Business Machines Corporation Filtering group messages
CN110889332A (en) * 2019-10-30 2020-03-17 中国科学院自动化研究所南京人工智能芯片创新研究院 Lie detection method based on micro expression in interview
CN111339940B (en) * 2020-02-26 2023-07-21 中国工商银行股份有限公司 Video risk identification method and device
CN111339940A (en) * 2020-02-26 2020-06-26 中国工商银行股份有限公司 Video risk identification method and device
CN111767779A (en) * 2020-03-18 2020-10-13 北京沃东天骏信息技术有限公司 Image processing method, device, equipment and computer readable storage medium
CN111540440A (en) * 2020-04-23 2020-08-14 深圳市镜象科技有限公司 Psychological examination method, device, equipment and medium based on artificial intelligence
CN111540440B (en) * 2020-04-23 2021-01-15 深圳市镜象科技有限公司 Psychological examination method, device, equipment and medium based on artificial intelligence
CN111597301A (en) * 2020-04-24 2020-08-28 北京百度网讯科技有限公司 Text prediction method and device and electronic equipment
CN112183946A (en) * 2020-09-07 2021-01-05 腾讯音乐娱乐科技(深圳)有限公司 Multimedia content evaluation method, device and training method thereof
CN112084992A (en) * 2020-09-18 2020-12-15 北京中电兴发科技有限公司 Face frame selection method in face key point detection module
CN112084992B (en) * 2020-09-18 2021-04-13 北京中电兴发科技有限公司 Face frame selection method in face key point detection module
CN112614583A (en) * 2020-11-25 2021-04-06 平安医疗健康管理股份有限公司 Depression level testing system
CN112699774A (en) * 2020-12-28 2021-04-23 深延科技(北京)有限公司 Method and device for recognizing emotion of person in video, computer equipment and medium
CN113158978B (en) * 2021-05-14 2022-04-08 无锡锡商银行股份有限公司 Risk early warning method for micro-expression recognition in video auditing
CN113158978A (en) * 2021-05-14 2021-07-23 无锡锡商银行股份有限公司 Risk early warning method for micro-expression recognition in video auditing
CN113243918A (en) * 2021-06-11 2021-08-13 深圳般若计算机系统股份有限公司 Risk detection method and device based on multi-mode hidden information test
CN115526888A (en) * 2022-11-17 2022-12-27 博奥生物集团有限公司 Eye pattern data identification method and device, storage medium and electronic equipment

Also Published As

Publication number Publication date
WO2019184125A1 (en) 2019-10-03

Similar Documents

Publication Publication Date Title
CN108537160A (en) Risk identification method, device, equipment and medium based on micro-expression
CN110678875B (en) System and method for guiding a user to take a self-photograph
CN109344693B (en) Deep learning-based face multi-region fusion expression recognition method
JP6788264B2 (en) Facial expression recognition method, facial expression recognition device, computer program and advertisement management system
CN110728225B (en) High-speed face searching method for attendance checking
US20050201594A1 (en) Movement evaluation apparatus and method
CN106228137A (en) ATM abnormal face detection based on key point localization
CN106980852B (en) Medicine identification system and recognition method based on corner detection and matching
CN108090830B (en) Credit risk rating method and device based on facial portrait
CN107368778A (en) Facial expression capture method, device and storage device
Hu et al. Research on abnormal behavior detection of online examination based on image information
CN108985210A (en) Gaze tracking method and system based on human eye geometric features
CN110543848B (en) Driver action recognition method and device based on three-dimensional convolutional neural network
CN111008971B (en) Aesthetic quality evaluation method of group photo image and real-time shooting guidance system
CN110634116A (en) Facial image scoring method and camera
CN109325408A (en) Gesture judgment method and storage medium
CN112883867A (en) Student online learning evaluation method and system based on image emotion analysis
Hsu et al. A novel eye center localization method for multiview faces
CN104091173A (en) Gender recognition method and device based on network camera
CN111507227A (en) Multi-student individual segmentation and state autonomous identification method based on deep learning
Wang et al. Research on face recognition technology based on PCA and SVM
CN110929570B (en) Iris rapid positioning device and positioning method thereof
CN114550270A (en) Micro-expression recognition method based on dual-attention mechanism
CN113887386A (en) Fatigue detection method based on multi-feature fusion of deep learning and machine learning
CN112800815A (en) Sight direction estimation method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination