CN108510194A - Risk control model training method, risk identification method, device, equipment and medium - Google Patents

Risk control model training method, risk identification method, device, equipment and medium

Info

Publication number
CN108510194A
CN108510194A (application CN201810292057.1A; granted publication CN108510194B)
Authority
CN
China
Prior art keywords
training
risk control
target
model
risk control model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810292057.1A
Other languages
Chinese (zh)
Other versions
CN108510194B (en)
Inventor
马潜 (Ma Qian)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201810292057.1A priority Critical patent/CN108510194B/en
Priority to PCT/CN2018/094216 priority patent/WO2019184124A1/en
Publication of CN108510194A publication Critical patent/CN108510194A/en
Application granted granted Critical
Publication of CN108510194B publication Critical patent/CN108510194B/en
Current legal status: Active (granted)


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/24 Classification techniques
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q 10/063 Operations research, analysis or management
    • G06Q 10/0635 Risk analysis of enterprise or organisation activities
    • G06Q 40/00 Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q 40/02 Banking, e.g. interest calculation or account maintenance
    • G06Q 40/03 Credit; Loans; Processing thereof
    • G06Q 40/08 Insurance
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using neural networks
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G06V 40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Strategic Management (AREA)
  • Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Evolutionary Computation (AREA)
  • Marketing (AREA)
  • Development Economics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Technology Law (AREA)
  • Data Mining & Analysis (AREA)
  • Human Resources & Organizations (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Biology (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Educational Administration (AREA)
  • Game Theory and Decision Science (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • Image Analysis (AREA)

Abstract

The present invention discloses a risk control model training method, a risk identification method, and a corresponding device, equipment and medium. The risk control model training method includes: labeling original video data to obtain positive and negative samples; performing frame splitting and face detection on the positive and negative samples to obtain training face pictures; grouping the training face pictures according to a preset quantity to obtain at least one group of target training data, where each group of target training data includes N consecutive frames of training face pictures; dividing the target training data according to a preset ratio to obtain a training set and a test set; inputting each group of target training data in the training set into a convolutional neural network-long short-term memory (CNN-LSTM) model for training, to obtain an original risk control model; and testing the original risk control model with each group of target training data in the test set, to obtain a target risk control model. The risk control model training method offers both high training efficiency and high recognition accuracy.

Description

Risk control model training method, risk identification method, device, equipment and medium
Technical field
The present invention relates to the field of risk identification, and in particular to a risk control model training method, a risk identification method, and a corresponding device, equipment and medium.
Background technology
In the financial industry, every loan disbursement requires risk control to determine whether the loan can be granted to the applicant. Traditional risk control is mainly carried out by a credit reviewer talking with the loan applicant face to face. During such a conversation, however, the reviewer may lose concentration, or may know little about facial expressions, and therefore miss the subtle expression changes on the applicant's face; these subtle expression changes may reflect the applicant's psychological state during the exchange (for example, lying). Some financial institutions have begun to use a risk control model to identify whether the applicant is lying, as an aid to loan risk control. However, current risk control models rely on a series of micro-expression recognition models to capture facial features and then infer the applicant's psychological state from the subtle expression changes shown during the loan interview, and because these micro-expression recognition models are trained with general-purpose neural networks, the resulting accuracy is low and recognition is inefficient.
Invention content
Embodiments of the present invention provide a risk control model training method, device, equipment and medium, to solve the problem that current risk identification models rely on a series of micro-expression recognition models, which makes recognition inefficient.
Embodiments of the present invention further provide a risk identification method, to solve the problem that current risk identification models are trained with general-purpose neural network models, which makes recognition accuracy low.
In a first aspect, an embodiment of the present invention provides a risk control model training method, including:
labeling original video data to obtain positive and negative samples;
performing frame splitting and face detection on the positive and negative samples to obtain training face pictures;
grouping the training face pictures according to a preset quantity to obtain at least one group of target training data, where each group of target training data includes N consecutive frames of training face pictures;
dividing the target training data according to a preset ratio to obtain a training set and a test set;
inputting each group of target training data in the training set into a convolutional neural network-long short-term memory (CNN-LSTM) model for training, to obtain an original risk control model; and
testing the original risk control model with each group of target training data in the test set, to obtain a target risk control model.
In a second aspect, an embodiment of the present invention provides a risk control model training device, including:
a positive and negative sample acquisition module, configured to label original video data to obtain positive and negative samples;
a training face picture acquisition module, configured to perform frame splitting and face detection on the positive and negative samples to obtain training face pictures;
a target training data acquisition module, configured to group the training face pictures according to a preset quantity to obtain at least one group of target training data, where each group of target training data includes N consecutive frames of training face pictures;
a target training data division module, configured to divide the target training data according to a preset ratio to obtain a training set and a test set;
an original risk control model acquisition module, configured to input each group of target training data in the training set into a convolutional neural network-long short-term memory (CNN-LSTM) model for training, to obtain an original risk control model; and
a target risk control model acquisition module, configured to test the original risk control model with each group of target training data in the test set, to obtain a target risk control model.
In a third aspect, an embodiment of the present invention provides a risk identification method, including:
obtaining video data to be identified;
performing face detection on the video data to be identified using a face detection model, to obtain face pictures to be identified;
grouping the face pictures to be identified to obtain at least one group of target face pictures;
identifying the at least one group of target face pictures with the target risk control model obtained by the risk control model training method of the first aspect, to obtain a risk identification probability corresponding to each group of target face pictures; and
obtaining a risk identification result based on the risk identification probabilities.
In a fourth aspect, an embodiment of the present invention provides a risk identification device, including:
a to-be-identified video data acquisition module, configured to obtain video data to be identified;
a to-be-identified face picture acquisition module, configured to perform face detection on the video data to be identified using a face detection model, to obtain face pictures to be identified;
a target face picture acquisition module, configured to group the face pictures to be identified to obtain at least one group of target face pictures;
a risk identification probability acquisition module, configured to identify the at least one group of target face pictures with the target risk control model obtained by the risk control model training method of the first aspect, to obtain a risk identification probability corresponding to each group of target face pictures; and
a risk identification result acquisition module, configured to obtain a risk identification result based on the risk identification probabilities.
In a fifth aspect, an embodiment of the present invention provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of the risk control model training method of the first aspect, or implements the steps of the risk identification method of the third aspect.
In a sixth aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the risk control model training method of the first aspect, or implements the steps of the risk identification method of the third aspect.
In the risk control model training method, device, equipment and medium provided by the embodiments of the present invention, the original video data is first labeled to obtain positive and negative samples, which facilitates model training and improves training efficiency. The positive and negative samples are then split into frames and passed through face detection to obtain pictures containing facial features, i.e. training face pictures, so that the risk control model can extract micro-expression features from them and learn deep representations, improving its recognition accuracy. The training face pictures are grouped according to a preset quantity to obtain at least one group of target training data, each group containing N consecutive frames of training face pictures. Each group of target training data in the training set is input into the CNN-LSTM model for training to obtain the original risk control model; no series of general-purpose micro-expression recognition models is needed to process the training face pictures, because each group of target training data can be fed directly into the CNN-LSTM model, which improves training efficiency. Finally, the original risk control model is tested with each group of target training data in the test set to obtain the target risk control model, so that its recognition is more accurate.
In the risk identification method, device, equipment and medium provided by the embodiments of the present invention, the target customer is first questioned through a video chat and the video data of the customer's replies, i.e. the video data to be identified, is obtained, so that the credit review becomes automated; the reviewer no longer needs a face-to-face interview with the customer, which reduces labour cost. Face detection is then performed on the video data to be identified with the face detection model to extract the video frames that contain a face, i.e. the face pictures to be identified; these are grouped into at least one group of target face pictures, which improves recognition accuracy. The at least one group of target face pictures is identified with the target risk control model to obtain the risk identification probability corresponding to each group, improving both the recognition efficiency and the recognition accuracy of the target risk control model.
Description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required in the description of the embodiments are briefly introduced below. Apparently, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flow chart of the risk control model training method provided in Embodiment 1 of the present invention.
Fig. 2 is a detailed schematic diagram of step S12 in Fig. 1.
Fig. 3 is a detailed schematic diagram of step S15 in Fig. 1.
Fig. 4 is a detailed schematic diagram of step S153 in Fig. 3.
Fig. 5 is a functional block diagram of the risk control model training device provided in Embodiment 2 of the present invention.
Fig. 6 is a flow chart of the risk identification method provided in Embodiment 3 of the present invention.
Fig. 7 is a functional block diagram of the risk identification device provided in Embodiment 4 of the present invention.
Fig. 8 is a schematic diagram of the computer equipment provided in Embodiment 6 of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Apparently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Embodiment 1
Fig. 1 shows a flow chart of the risk control model training method in this embodiment. The method can be applied in financial institutions such as banks, securities firms and insurance companies, where the trained risk control model assists the credit reviewer in performing risk control on a loan applicant and thus in deciding whether the loan can be granted. As shown in Fig. 1, the risk control model training method includes the following steps:
S11: Label the original video data to obtain positive and negative samples.
The original video data is open-source video data obtained from data sets published on the internet or by third-party institutions/platforms, and it contains both lying and non-lying video clips. Specifically, the lying clips in the original video data are labeled "0" and the non-lying clips are labeled "1", so that positive and negative samples are obtained; this facilitates model training and improves training efficiency.
In this embodiment, the ratio of positive to negative samples is set to 1:1, i.e. equal amounts of lying and non-lying video data are collected, which effectively prevents over-fitting during training and makes the recognition of the risk control model trained on the positive and negative samples more accurate. A minimal labeling sketch follows.
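The text does not say how the raw clips are stored, so the following sketch assumes two hypothetical folders of clips and simply builds a balanced, labeled sample list; the paths, file extension and helper name are assumptions.

```python
import random
from pathlib import Path

# Minimal sketch: build a balanced (1:1) labeled sample list from two
# hypothetical folders of raw clips (paths and layout are assumptions).
def build_samples(lie_dir="raw/lie", truth_dir="raw/truth", seed=42):
    lie = [(str(p), 0) for p in Path(lie_dir).glob("*.mp4")]      # lying clip -> label 0
    truth = [(str(p), 1) for p in Path(truth_dir).glob("*.mp4")]  # non-lying clip -> label 1
    n = min(len(lie), len(truth))          # enforce the 1:1 positive/negative ratio
    random.seed(seed)
    samples = random.sample(lie, n) + random.sample(truth, n)
    random.shuffle(samples)
    return samples
```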
S12: Perform frame splitting and face detection on the positive and negative samples to obtain training face pictures.
A training face picture is a picture containing a person's facial features obtained by splitting the positive and negative samples into frames and applying face detection. In this embodiment the risk control model is trained on micro-expression features; therefore the positive and negative samples must be split into frames and passed through face detection, so that the pictures containing facial features, i.e. the training face pictures, can be used for model training. The risk control model can then extract micro-expression features from the training face pictures and learn deep representations, which improves its recognition accuracy.
S13: Group the training face pictures according to a preset quantity to obtain at least one group of target training data; each group of target training data includes N consecutive frames of training face pictures.
Grouping according to a preset quantity yields at least one group of target training data, and each group contains N consecutive frames of training face pictures, so that the changes in the facial micro-expression features can be obtained from the N consecutive frames. The training face pictures thus keep their temporal order, which increases the accuracy of the target risk control model.
In this embodiment, the preset quantity may be set in the range [50, 200]. If 50 frames or fewer were used as one group of training data, there would be too few training face pictures to show the change process of the facial features of a lying person, and the recognition accuracy of the risk control model would be low. If 200 frames or more were used as one group, training would take too long and training efficiency would drop. In this embodiment, every 100 frames of training face pictures are used as one group of training data (as sketched below), which improves both the training efficiency and the recognition accuracy of the trained risk control model.
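A minimal sketch of the grouping step, assuming the frames for one clip are already ordered in a list; the helper name is an assumption, and the 100-frame group size follows the embodiment above.

```python
# Minimal sketch: group an ordered list of training face pictures into
# fixed-length sequences (the "target training data"), 100 frames per group.
def group_frames(frames, group_size=100):
    groups = []
    for start in range(0, len(frames) - group_size + 1, group_size):
        groups.append(frames[start:start + group_size])  # N consecutive frames
    return groups
```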
S14: Divide the target training data according to a preset ratio to obtain a training set and a test set.
The preset ratio is a pre-set ratio for dividing the grouped data, and it may be chosen from historical experience. The training set is the learning sample set: the target training data in it is used to train the machine learning model, i.e. to determine the model's parameters. The test set is used to evaluate the discriminative ability of the trained model, for example its recognition rate. In this embodiment the data may be divided at a ratio of 9:1, i.e. 90% of the groups form the training set and the remaining 10% form the test set; a minimal split is sketched below.
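A minimal sketch of the 9:1 split; shuffling the groups before splitting is an assumption not stated in the text.

```python
import random

# Minimal sketch of the 9:1 division into training and test sets.
def split_groups(groups, train_ratio=0.9, seed=42):
    random.seed(seed)
    groups = groups[:]                   # copy before shuffling (assumption)
    random.shuffle(groups)
    cut = int(len(groups) * train_ratio)
    return groups[:cut], groups[cut:]    # training set, test set
```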
S15: Input each group of target training data in the training set into the convolutional neural network-long short-term memory (CNN-LSTM) model for training, to obtain the original risk control model.
The CNN-LSTM model is the model obtained by combining a convolutional neural network with a long short-term memory recurrent neural network; it can be understood as a convolutional neural network connected to an LSTM network.
A convolutional neural network (CNN) is a locally connected network. Compared with a fully connected network, its most prominent properties are local connectivity and weight sharing. For a pixel p in an image, the closer another pixel is to p, the greater its influence on p (local connectivity); and, according to the statistical properties of natural images, the weights learned for one region of an image can also be used for another region (weight sharing). Weight sharing can be understood as convolution kernel sharing: in a CNN, convolving a given image with one convolution kernel extracts one kind of image feature, and different convolution kernels extract different image features. The local connectivity of the CNN reduces model complexity and improves training efficiency, and its weight sharing allows parallel learning, which further increases training efficiency.
A long short-term memory (LSTM) network is a recurrent neural network model suited to processing and predicting events in a time series, especially when the intervals and delays in the series are relatively long. An LSTM model has a temporal memory capability. In this embodiment the features of each frame of training face picture are closely related to the features of the preceding and following frames, so an LSTM network is used to train on the extracted features; this captures the long-term memory of the data and improves the model's accuracy.
In this embodiment, because the target training data, i.e. N consecutive frames of training face pictures, is what is trained on, features must first be extracted from the training face pictures, and the convolutional neural network is the usual network for picture feature extraction; its weight sharing and local connectivity greatly increase training efficiency. Since the features of each frame are closely related to those of the neighbouring frames, the extracted face features are then trained with the LSTM network to capture the long-term memory of the data and improve accuracy. Together, the weight sharing and local connectivity of the CNN and the long-term memory ability of the LSTM greatly increase both the efficiency of training the risk control model with the CNN-LSTM model and the accuracy of the resulting model. A sketch of this CNN-then-LSTM wiring follows.
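The patent does not specify layer counts or sizes, so the following PyTorch sketch uses assumed dimensions (260x260 grayscale frames, two convolution/pooling blocks, one LSTM layer, a softmax output); it only illustrates the per-frame CNN feature extraction feeding a recurrent layer, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class CnnLstmRiskModel(nn.Module):
    """Sketch of a CNN-LSTM risk control model; all layer sizes are assumptions."""
    def __init__(self, hidden_size=128, num_classes=2):
        super().__init__()
        # Per-frame feature extractor (local connectivity / weight sharing of the CNN).
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.Tanh(),
            nn.MaxPool2d(2),                       # max-pooling down-sampling
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.Tanh(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.lstm = nn.LSTM(input_size=32 * 4 * 4, hidden_size=hidden_size,
                            batch_first=True)      # models the frame-to-frame timing
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, clips):                      # clips: (batch, frames, 1, 260, 260)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1))      # extract features frame by frame
        feats = feats.flatten(1).view(b, t, -1)    # back to (batch, frames, features)
        out, _ = self.lstm(feats)
        logits = self.classifier(out[:, -1])       # decide from the last time step
        return logits.softmax(dim=-1)              # Softmax output, as described above
```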
S16: Test the original risk control model with each group of target training data in the test set, to obtain the target risk control model.
The target risk control model is the model obtained by testing the original risk control model with the training face pictures in the test set until its accuracy reaches a preset accuracy. Specifically, the original risk control model is tested with the target training data in the test set, i.e. groups of N consecutive frames of training face pictures, to obtain the corresponding accuracy; if the accuracy reaches the preset accuracy, the original risk control model is taken as the target risk control model.
In this embodiment, the original video data is first labeled to obtain positive and negative samples, which facilitates model training and improves training efficiency, and setting the positive and negative samples at an equal ratio effectively prevents over-fitting, so that the trained risk control model recognizes more accurately. The positive and negative samples are then split into frames and passed through face detection to obtain pictures containing facial features, i.e. training face pictures, so that the risk control model can extract micro-expression features and learn deep representations, improving recognition accuracy. The training face pictures are grouped according to the preset quantity, so that every group of N consecutive frames serves as one group of target training data, which improves training efficiency and recognition accuracy. The target training data is divided according to the preset ratio into a training set and a test set, and each group of target training data in the training set is input into the CNN-LSTM model for training to obtain the original risk control model; the model therefore preserves temporal order, and, because of the weight sharing of the CNN, the network can learn in parallel while its local connectivity reduces model complexity, both of which improve training efficiency. Finally, the original risk control model is tested with each group of target training data in the test set to obtain the target risk control model, so that its recognition is more accurate.
In a specific embodiment, as shown in Fig. 2, step S12, i.e. performing frame splitting and face detection on the positive and negative samples to obtain training face pictures, specifically includes the following steps:
S121: Split the positive and negative samples into frames to obtain video images.
Frame splitting means dividing the original video data according to a preset time step to obtain video images. Specifically, after the frame-splitting step, the video images are further normalized and time-labeled. Normalization is a way of simplifying computation: an expression with physical dimensions is transformed into a dimensionless one. In the positive and negative samples of this embodiment, the customer's facial region is needed in order to extract the customer's micro-expression features, so the frames obtained after splitting are normalized to 260*260 pixels; unifying the pixel size allows face detection to be performed on every frame later and improves recognition accuracy. Time labeling means labeling each frame according to its temporal order, so that the video images keep their timing, which improves model accuracy. A minimal frame-splitting sketch follows.
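The sketch below reads a clip with OpenCV, resizes every frame to 260x260 and keeps the frame index as the time label; sampling every frame rather than at a fixed time step is an assumption, and the helper name is hypothetical.

```python
import cv2

# Minimal sketch of the frame-splitting step.
def split_into_frames(video_path, size=(260, 260)):
    cap = cv2.VideoCapture(video_path)
    frames = []
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.resize(frame, size)   # normalize every frame to 260x260 pixels
        frames.append((index, frame))     # time label = position in the clip
        index += 1
    cap.release()
    return frames
```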
S122: Perform face detection on the video images using the face detection model, to obtain training face pictures.
The face detection model is a model trained in advance to detect whether each frame of video image contains a person's facial region. Specifically, each frame is input into the face detection model, the face location in the frame is detected, and the frames that contain a face are extracted as the training face pictures, which provides the input for the subsequent model.
In this embodiment, frame splitting and normalization are performed on the positive and negative samples to obtain video images with a unified pixel size, so that face detection can then be run on every frame, which improves the efficiency of risk control model training. Face detection is then performed on the video images with the face detection model to obtain the frames containing a face, i.e. the training face pictures, providing the input for the subsequent model; training only on frames that contain a face excludes interference from other factors, so that the model can extract micro-expression features from the training face pictures, which supports the training of the risk control model.
In a specific embodiment, the face detection model in step S122 is a face detection model trained with a CascadeCNN network.
CascadeCNN (a cascaded convolutional neural network) is a deep convolutional network implementation of the classical Viola-Jones approach and is a comparatively fast face detection method; Viola-Jones is a face detection framework. In this embodiment, pictures with labeled face locations are trained with the CascadeCNN method to obtain the face detection model, which improves the recognition efficiency of the face detection model.
Specifically, training on the pictures with labeled face locations using the CascadeCNN method proceeds as follows:
In the first training stage, the image is scanned with the 12-net network, which rejects more than 90% of the candidate windows; the remaining windows are passed to the 12-calibration-net network for correction, and the corrected results are then processed with a non-maximum suppression algorithm to eliminate highly overlapping windows. Here, 12-net slides a 12×12 detection window with a stride of 4 over the W (width) × H (height) picture to obtain detection windows; 12-calibration-net is a calibration network that corrects the region where the face lies and outputs the face's region coordinates. Non-maximum suppression is a method widely used in object detection and localization; the essence of the algorithm is to search for local maxima and suppress non-maximum elements. With the 12-net network, face detection is run on the training face pictures: windows judged non-face (not exceeding a preset threshold) are taken as negative samples, and windows covering real faces (exceeding the preset threshold) are taken as positive samples, giving the corresponding detection windows. The preset threshold is a threshold set in advance by the developer for judging whether a face is present in the training data.
In the second training stage, the images output by the first stage are processed with the 24-net and 24-calibration-net networks. Both 12-net and 24-net help decide whether a region is a face; the difference is that 24-net builds on 12-net: a 24×24 picture is fed to 24-net to obtain the features extracted by its fully connected layer, and at the same time the 24×24 picture is scaled down to 12×12 and fed to the 12-net fully connected layer, and the features extracted by the two fully connected layers are output together. 12-calibration-net and 24-calibration-net are calibration networks. With the 24-net network, face detection is again run on the training data: windows judged non-face are taken as negative samples, and all real faces are taken as positive samples.
In the third training stage, the output of the second stage is processed with the 48-net and 48-calibration-net networks to complete the final stage of training. This stage is handled in the same way as the second stage, so it is not repeated here. The non-maximum suppression step used between the stages can be sketched as follows.
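A minimal sketch of non-maximum suppression over detection windows given as (x1, y1, x2, y2, score); the 0.5 overlap threshold and the window representation are assumptions, not values from the text.

```python
# Keep the highest-scoring window in each neighbourhood, suppress heavy overlaps.
def non_max_suppression(windows, iou_threshold=0.5):
    windows = sorted(windows, key=lambda w: w[4], reverse=True)  # sort by score
    kept = []
    for w in windows:
        if all(iou(w, k) <= iou_threshold for k in kept):
            kept.append(w)          # local maximum survives, the rest are suppressed
    return kept

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter) if inter else 0.0
```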
In this embodiment, face detection is performed on the video images with the face detection model obtained by CascadeCNN training; the procedure for obtaining the training face pictures is the same as the training procedure above and is not repeated here.
In a specific embodiment, as shown in Fig. 3, step S15, i.e. inputting each group of target training data in the training set into the CNN-LSTM model for training to obtain the original risk control model, specifically includes the following steps:
S151: Initialize the convolutional neural network-long short-term memory (CNN-LSTM) model.
Initializing the CNN-LSTM model means initializing in advance the model parameters of the convolutional neural network (the convolution kernels and biases) and the model parameters of the LSTM model (the connection weights between its layers). A convolution kernel holds the weights of the convolutional neural network: when training data is input, it is multiplied by the weights, i.e. the convolution kernel, to obtain the neuron output, which reflects the importance of the training data. A bias is a linear component added to the weighted input to shift its range. Once the convolution kernels, the biases and the connection weights between the LSTM layers are determined, the model training process is complete.
S152: Perform feature extraction on the target training data in the training set using the convolutional neural network, to obtain face features.
The face features are the facial features obtained by performing feature extraction on the target training data in the training set, i.e. the N consecutive frames of training face pictures, with the convolutional neural network. Specifically, the convolution is computed as y_j = f(∑_i x_i * w_ij + b_j), where * denotes the convolution operation, x_i is the i-th input feature map, y_j is the j-th output feature map, w_ij is the convolution kernel (the weights) between the i-th input feature map and the j-th output feature map, and b_j is the bias term of the j-th output feature map. Max-pooling down-sampling is then applied to the convolved feature maps to reduce their dimensionality: each output value of the down-sampled feature map is the maximum taken over an S*S pooling window sampled locally from the corresponding input feature map (the feature map after convolution), the window being moved with strides m and n. A tiny numeric illustration follows.
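The sketch below illustrates the two formulas above on a toy array: a single-channel convolution followed by 2x2 max-pooling with stride 2. The sizes, the toy kernel and the choice of tanh for f(.) are assumptions.

```python
import numpy as np

def conv2d_single(x, k, b=0.0):
    """Valid convolution of one feature map with one kernel, then f(.) = tanh (assumption)."""
    h = x.shape[0] - k.shape[0] + 1
    w = x.shape[1] - k.shape[1] + 1
    y = np.zeros((h, w))
    for r in range(h):
        for c in range(w):
            y[r, c] = np.sum(x[r:r + k.shape[0], c:c + k.shape[1]] * k) + b
    return np.tanh(y)

def max_pool(y, s=2):
    """S x S max-pooling with stride S, reducing the feature map's dimensionality."""
    h, w = y.shape[0] // s, y.shape[1] // s
    return np.array([[y[r*s:(r+1)*s, c*s:(c+1)*s].max() for c in range(w)]
                     for r in range(h)])

x = np.arange(36, dtype=float).reshape(6, 6) / 36.0   # toy 6x6 "feature map"
k = np.array([[1.0, 0.0], [0.0, -1.0]])               # toy 2x2 convolution kernel
print(max_pool(conv2d_single(x, k)).shape)             # -> (2, 2)
```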
S153: Input the face features into the long short-term memory (LSTM) network for training, to obtain the original risk control model.
The LSTM model is a neural network model with long-term memory ability and has a three-layer structure of input layer, hidden layer and output layer. The input layer is the first layer of the LSTM model and receives the external signal, i.e. the face features carrying their time-sequence state. In this embodiment, because the training face pictures in the training set are time-ordered, the face features obtained from them after step S152 are also time-ordered, so they can be used by the LSTM model, which thus receives face features carrying their time-sequence state. The output layer is the last layer of the LSTM model and outputs the result of the LSTM computation. The hidden layers are all layers of the LSTM model other than the input and output layers; they process the input face features to produce the LSTM computation result. The original risk control model is the model obtained by iterating the LSTM model over the time-stamped face features until convergence. Training on the extracted face features with the LSTM model strengthens the temporal character of the resulting original risk control model and thus improves its accuracy.
In this embodiment, the output layer of the LSTM model performs regression with Softmax (a regression model) to output the classification weight matrix. Softmax is a classification function commonly used in neural networks: it maps the outputs of several neurons into the interval [0, 1], which can be interpreted as probabilities; it is simple to compute and convenient for multi-class classification, making the output more accurate.
In this embodiment, the CNN-LSTM model is first initialized, the target training data in the training set is then processed by the convolutional neural network to obtain the face features, and the face features are fed into the LSTM model for training. No manual feature engineering is required: the training face pictures only need to be input directly into the CNN-LSTM model, which extracts the features by itself, improving training efficiency.
As shown in Fig. 4, inputting the face features into the LSTM network for training (step S153) specifically includes the following steps:
S1531: Train on the face features using the forward propagation algorithm, to obtain the first state parameters.
Training on the face features with the forward propagation algorithm means training in the chronological order given by the time-sequence state carried by the face features. The first state parameters are the parameters obtained from the first iteration of model training on the face features.
The forward propagation algorithm carries out model training in chronological order. Specifically, it computes S_t = tanh(U·S_{t-1} + W·X_t) and ŷ_t = softmax(V·S_t), where S_t is the output of the hidden layer at the current time step, U holds the weights from the hidden layer at the previous time step to the current one, W holds the weights from the input layer to the hidden layer, ŷ_t is the prediction output at the current time step, and V holds the weights from the hidden layer to the output layer.
It can thus be seen that the forward propagation algorithm takes the input X_t at the current time step and the hidden-unit output S_{t-1} of the previous step, i.e. the output of the memory units in the LSTM hidden layer, as the input of the hidden layer, and then obtains the hidden-layer output S_t of the current step through the transformation of the activation function tanh (hyperbolic tangent); the prediction at time t is denoted ŷ_t. The prediction output ŷ_t therefore depends on the current output S_t, and S_t combines the input at time t with the state at time t-1, so the model output retains all the information in the time series and preserves its timing.
In this embodiment, because the expressive power of a linear model is insufficient, tanh is used as the activation function; it introduces the non-linearity needed for the trained original risk control model to handle more complex problems, and it converges quickly, which saves training time and improves training efficiency. A minimal sketch of this forward pass follows.
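The recurrence above can be written directly; note that it is a plain recurrent cell (the gating of a full LSTM cell is not spelled out in the text), and the hidden and feature sizes below are arbitrary assumptions.

```python
import numpy as np

# Sketch of the forward pass S_t = tanh(U·S_{t-1} + W·X_t), y_t = softmax(V·S_t).
def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(X, U, W, V):
    S = np.zeros(U.shape[0])               # S_0: initial hidden state
    outputs = []
    for x_t in X:                          # X carries the time-ordered face features
        S = np.tanh(U @ S + W @ x_t)
        outputs.append(softmax(V @ S))     # prediction at time t
    return outputs

rng = np.random.default_rng(0)
U, W, V = rng.normal(size=(8, 8)), rng.normal(size=(8, 16)), rng.normal(size=(2, 8))
X = rng.normal(size=(5, 16))               # 5 time steps of 16-d face features
print(forward(X, U, W, V)[-1])             # class probabilities at the last step
```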
S1532: Perform error calculation on the first state parameters using the back propagation algorithm, to obtain the original risk control model.
The back propagation algorithm trains the neural network model by passing the accumulated residual error back from the last time step. Specifically, the per-step loss is the cross entropy E_t = -o_t · log(ŷ_t), where ŷ_t is the prediction output at time t and o_t is the corresponding actual value at time t. In this embodiment, error calculation is performed on the first state parameters with the back propagation algorithm, the error is propagated back, and the weight parameters of the LSTM model and of the convolutional neural network are updated accordingly, which effectively improves the accuracy of the risk control model.
Specifically, performing error calculation on the first state parameters with the back propagation algorithm means updating the optimization parameters, in this embodiment the three weight parameters U, V and W, in reverse time order. The loss at time t is defined as the cross entropy, and the total error is E = ∑_t E_t = -∑_t o_t · log(ŷ_t). The partial derivatives of each layer are then computed with the chain rule of differentiation, giving ∂E/∂U, ∂E/∂V and ∂E/∂W, and these rates of change are used to update U, V and W, yielding the adjusted state parameters. Because ∂E/∂U = ∑_t ∂E_t/∂U (and likewise for V and W), it suffices to compute the partial derivative of the loss at each time step and sum the results to obtain the rates of change used to update the LSTM weight parameters. The chain rule is the differentiation rule of calculus for the derivative of a composite function and is a common method in derivative computation. Finally, the partial derivatives ∂E/∂b and ∂E/∂k with respect to the bias b and the convolution kernel k of the convolutional neural network are computed, and the model parameters of the convolutional neural network (the convolution kernels and biases) are updated in the backward direction. Since the LSTM model and the convolutional neural network model form one neural network, updating the LSTM model parameters and the convolutional neural network model parameters through the back propagation algorithm completes the optimization of the original risk control model.
Because the gradient can shrink or grow exponentially with the number of back-propagated layers, gradient vanishing can occur; in this embodiment, the combination of the cross-entropy loss function and the tanh activation function mitigates the gradient-vanishing problem well and increases training accuracy.
In this embodiment, the face features are first trained with the forward propagation algorithm to obtain the first state parameters, error calculation is then performed on the first state parameters with the back propagation algorithm, and the error is propagated back to update the weight parameters of the LSTM model and of the convolutional neural network, which effectively improves the accuracy of the original risk control model obtained. The same gradients can also be obtained with automatic differentiation, as in the sketch below.
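The following PyTorch sketch runs the same forward recurrence, sums the cross-entropy loss over all time steps and lets autograd supply the partial derivatives with respect to U, W and V that the text derives by the chain rule; the sizes, the toy label and the learning rate are assumptions.

```python
import torch

hidden, feat, classes, steps = 8, 16, 2, 5
U = torch.randn(hidden, hidden, requires_grad=True)   # hidden-to-hidden weights
W = torch.randn(hidden, feat, requires_grad=True)     # input-to-hidden weights
V = torch.randn(classes, hidden, requires_grad=True)  # hidden-to-output weights

X = torch.randn(steps, feat)        # time-ordered face features (toy values)
target = torch.tensor(1)            # 1 = "not lying" label for this clip (assumption)

S = torch.zeros(hidden)
loss = torch.tensor(0.0)
for x_t in X:
    S = torch.tanh(U @ S + W @ x_t)
    y_t = (V @ S).log_softmax(dim=0)
    loss = loss - y_t[target]        # cross-entropy, summed over the time steps
loss.backward()                      # fills U.grad, W.grad, V.grad

with torch.no_grad():                # one plain gradient-descent update
    for p in (U, W, V):
        p -= 0.01 * p.grad
```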
In this embodiment, because the convolutional neural network (CNN) is a locally connected network with local connectivity and weight sharing, the model can learn in parallel; performing feature extraction on the face pictures in the training set with the CNN therefore speeds up the acquisition of the face features and, in turn, the model training. The obtained face features are then fed into the LSTM model for training to obtain an original risk control model with temporal character, which strengthens the model's predictive ability over time and improves its accuracy.
In this embodiment, the original video data is first labeled to obtain positive and negative samples, which facilitates model training and improves training efficiency, and setting the positive and negative samples at an equal ratio effectively prevents over-fitting, so that the trained risk control model recognizes more accurately. The positive and negative samples are then split into frames and normalized to obtain video images with a unified pixel size, so that face detection can be run on every frame, which improves risk identification accuracy. Face detection is then performed on the video images with the face detection model to obtain the frames containing a face, i.e. the training face pictures, providing the input for the subsequent model; training only on frames containing a face excludes interference from other factors, so that the model can extract micro-expression features from the training face pictures and achieve the purpose of risk control. The training face pictures are grouped according to the preset quantity so that every group of N consecutive frames serves as one group of target training data, improving training efficiency and recognition accuracy. The target training data is divided according to the preset ratio into a training set and a test set, and each group of target training data in the training set is input into the CNN-LSTM model for training to obtain the original risk control model, which therefore preserves temporal order; because of the weight sharing of the CNN, the network can learn in parallel, which improves training efficiency. Finally, the original risk control model is tested with each group of target training data in the test set to obtain the target risk control model, so that its recognition is more accurate.
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and does not limit the implementation of the embodiments of the present invention in any way.
Embodiment 2
Fig. 5 shows a functional block diagram of the risk control model training device corresponding one-to-one to the risk control model training method of Embodiment 1. As shown in Fig. 5, the risk control model training device includes a positive and negative sample acquisition module 11, a training face picture acquisition module 12, a target training data acquisition module 13, a target training data division module 14, an original risk control model acquisition module 15 and a target risk control model acquisition module 16. The functions implemented by these modules correspond one-to-one to the steps of the risk control model training method in Embodiment 1; to avoid repetition, they are not described in detail here.
The positive and negative sample acquisition module 11 is configured to label original video data to obtain positive and negative samples.
The training face picture acquisition module 12 is configured to perform frame splitting and face detection on the positive and negative samples to obtain training face pictures.
The target training data acquisition module 13 is configured to group the training face pictures according to a preset quantity to obtain at least one group of target training data.
The target training data division module 14 is configured to divide the target training data according to a preset ratio to obtain a training set and a test set.
The original risk control model acquisition module 15 is configured to input each group of target training data in the training set into the convolutional neural network-long short-term memory (CNN-LSTM) model for training, to obtain an original risk control model.
The target risk control model acquisition module 16 is configured to test the original risk control model with each group of target training data in the test set, to obtain a target risk control model.
Preferably, the training face picture acquisition module 12 includes a video image acquisition unit 121 and a training face picture acquisition unit 122.
The video image acquisition unit 121 is configured to split the positive and negative samples into frames to obtain video images.
The training face picture acquisition unit 122 is configured to perform face detection on the video images using the face detection model, to obtain training face pictures.
Preferably, the original risk control model acquisition module 15 includes a model initialization unit 151, a face feature acquisition unit 152 and an original risk control model acquisition unit 153.
The model initialization unit 151 is configured to initialize the convolutional neural network-long short-term memory (CNN-LSTM) model.
The face feature acquisition unit 152 is configured to perform feature extraction on the target training data in the training set using the convolutional neural network, to obtain face features.
The original risk control model acquisition unit 153 is configured to input the face features into the long short-term memory (LSTM) network for training, to obtain an original risk control model.
Preferably, the original risk control model acquisition unit 153 includes a first state parameter acquisition sub-unit 1531 and an original risk control model acquisition sub-unit 1532.
The first state parameter acquisition sub-unit 1531 is configured to train on the face features using the forward propagation algorithm, to obtain first state parameters.
The original risk control model acquisition sub-unit 1532 is configured to perform error calculation on the first state parameters using the back propagation algorithm, to obtain an original risk control model.
Embodiment 3
Fig. 6 shows a flow chart of the risk identification method of this embodiment. The method can be applied on computer equipment deployed by financial institutions such as banks, securities firms and insurance companies; it effectively assists the credit reviewer in performing risk control on a loan applicant and thus in deciding whether the loan can be granted. As shown in Fig. 6, the risk identification method includes the following steps:
S21: Obtain video data to be identified.
Here, the video data to be identified is unprocessed video data recording the loan applicant during the credit review process. Since identification based on a single frame of video image is not sufficiently accurate, the video data to be identified in this embodiment consists of at least two frames of video images to be identified.
In this embodiment, during the credit review process, the credit reviewer questions the target customer by way of video chat so as to obtain the video data of the target customer's replies (i.e. the video data to be identified). This makes the credit review process intelligent: the credit reviewer does not need to communicate face to face with the target customer, which saves labor costs.
S22: Perform face detection on the video data to be identified using a face detection model to obtain face pictures to be identified.
Here, the face pictures to be identified are the face pictures obtained by performing face detection on the video data to be identified using the face detection model. Specifically, each frame of video image to be identified in the video data to be identified is input into the face detection model, the face position in each frame is detected, and the video image containing the face is extracted as a face picture to be identified. Specifically, the face detection model is a face detection model obtained by training a CascadeCNN network; the process of performing face detection on the video data to be identified is the same as the detection process in embodiment 1 and, to avoid repetition, is not described again here.
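For illustration only, the per-frame face detection of S22 might be sketched in Python as follows. The embodiment uses a CascadeCNN face detection model; the OpenCV Haar cascade below is merely an assumed stand-in detector, and the crop size is an assumption.

    import cv2

    def extract_face_pictures(video_path, size=(64, 64)):
        """Split a video into frames, detect the face in each frame, and return grayscale crops."""
        detector = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        cap = cv2.VideoCapture(video_path)
        faces = []
        while True:
            ok, frame = cap.read()          # framing: one video image per iteration
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            for (x, y, w, h) in boxes[:1]:  # keep the first detected face per frame
                faces.append(cv2.resize(gray[y:y + h, x:x + w], size))
        cap.release()
        return faces                        # the face pictures to be identified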
S23: Group the face pictures to be identified to obtain at least one group of target face pictures.
Here, the face pictures to be identified are grouped according to a preset quantity to obtain at least one group of target face pictures. Specifically, the face pictures to be identified are grouped in a crossed (overlapping) selection manner. In this embodiment, every 100 frames form one group of data to be identified (i.e. one group of target face pictures). For example, for a 40s piece of video data to be identified (containing 960 frames), the 1st to 100th pictures form one group, the 10th to 110th pictures form another group, and so on, yielding at least one group of target face pictures. Obtaining the groups in this crossed manner sufficiently preserves the relations between the face pictures to be identified and improves the recognition accuracy of the model.
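The crossed (overlapping) grouping described above can be sketched as follows. The group size of 100 frames comes from this embodiment; the stride of 10 frames is an assumption inferred from the 1st-100th / 10th-110th example.

    def group_face_pictures(face_pictures, group_size=100, stride=10):
        """Overlapping ("crossed") grouping of the face pictures to be identified."""
        groups = []
        for start in range(0, len(face_pictures) - group_size + 1, stride):
            groups.append(face_pictures[start:start + group_size])
        return groups   # each entry is one group of target face pictures

    # e.g. 960 face pictures -> overlapping groups of 100 frames each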
S24: Identify the at least one group of target face pictures using the target air control model to obtain a risk identification probability corresponding to each group of target face pictures.
Here, the target air control model is the target air control model obtained by training with the air control model training method in embodiment 1. In this embodiment, the at least one group of target face pictures is input into the target air control model for identification; the target air control model performs its calculation on each input group and outputs the risk identification probability corresponding to each group of target face pictures. In this embodiment, the risk identification probability is a real number between 0 and 1.
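Purely as a continuation of the earlier sketches (the model, preprocessing and shapes are assumptions carried over from those sketches, not the patented implementation), each group of target face pictures could be scored to obtain its risk identification probability like this:

    import numpy as np

    def score_groups(model, groups):
        """Return one risk identification probability per group of face pictures."""
        # stack each group of 100 grayscale 64x64 crops into shape (100, 64, 64, 1)
        batch = np.stack([np.expand_dims(np.stack(g), -1) for g in groups]) / 255.0
        return model.predict(batch).ravel()   # one value in [0, 1] per group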
S25: Obtain a risk identification result based on the risk identification probabilities.
Specifically, the risk identification probabilities are combined using the weighted-sum formula P = Σi(wi × pi) to obtain the risk identification result, where pi is the risk identification probability corresponding to each group of target face pictures and wi is the weight corresponding to each group of target face pictures.
In this embodiment, the weight corresponding to each group of target face pictures is set differently according to the question script. For example, for basic credit-review questions such as age, gender and name, the weight can be set relatively low, while for sensitive credit-review questions such as loan purpose, personal income and repayment willingness, the weight can be set relatively high. Calculating the risk identification probabilities through this weighted operation yields a more accurate risk identification result. Basic and sensitive credit-review questions are distinguished by whether the question has a standard answer. Taking a bank as an example, if the target customer has pre-stored certain personal information (such as ID card number, relatives' mobile phone numbers and home address) with financial institutions such as banks, securities firms or insurance companies, then questions posed on the basis of this pre-stored personal information, which therefore have standard answers, are basic credit-review questions. For target customer information that is not pre-stored with such financial institutions, there is no standard answer, and questions posed on the basis of that information are sensitive credit-review questions.
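As a worked illustration of the weighted operation above (the probabilities and weights below are invented numbers, not data from the embodiment):

    probs   = [0.20, 0.35, 0.80]   # risk identification probabilities p_i per group
    weights = [0.10, 0.30, 0.60]   # low weights for basic questions, high for sensitive ones
    risk = sum(w * p for w, p in zip(weights, probs))
    print(risk)                    # 0.605 -> overall risk identification result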
In this embodiment, the target customer is first questioned by way of video chat, and the video data of the target customer's replies is obtained as the video data to be identified, making the credit review process intelligent: the credit reviewer does not need to communicate face to face with the target customer, which saves labor costs. Then, face detection is performed on the video data to be identified with the face detection model, and the video images containing faces are extracted as face pictures to be identified; the face pictures to be identified are grouped in a crossed selection manner to obtain at least one group of target face pictures, which improves the recognition accuracy of the model. The at least one group of target face pictures is identified using the target air control model to obtain the risk identification probability corresponding to each group, which improves the recognition efficiency and recognition accuracy of the target air control model. Finally, the risk identification probabilities are combined by the weighted operation to obtain the risk identification result, making the risk identification result more accurate.
Embodiment 4
Fig. 7 shows a functional block diagram of the risk identification device corresponding one-to-one with the risk identification method of embodiment 3. As shown in Fig. 7, the risk identification device includes a video data to be identified acquisition module 21, a face picture to be identified acquisition module 22, a target face picture acquisition module 23, a risk identification probability acquisition module 24 and a risk identification result acquisition module 25. The functions implemented by these modules correspond one-to-one with the steps of the risk identification method in embodiment 3; to avoid repetition, they are not described in detail in this embodiment.
The video data to be identified acquisition module 21 is configured to obtain video data to be identified.
The face picture to be identified acquisition module 22 is configured to perform face detection on the video data to be identified using the face detection model to obtain face pictures to be identified.
The target face picture acquisition module 23 is configured to group the face pictures to be identified to obtain at least one group of target face pictures.
The risk identification probability acquisition module 24 is configured to identify the at least one group of target face pictures using the target air control model obtained by the air control model training method of embodiment 1, to obtain the risk identification probability corresponding to each group of target face pictures.
The risk identification result acquisition module 25 is configured to obtain a risk identification result based on the risk identification probabilities.
Preferably, the risk identification result acquisition module 25 is configured to calculate the risk identification probabilities using the weighted-sum formula P = Σi(wi × pi) to obtain the risk identification result, where pi is the risk identification probability corresponding to each group of target face pictures and wi is the weight corresponding to each group of target face pictures.
Embodiment 5
This embodiment provides a computer readable storage medium on which a computer program is stored. When the computer program is executed by a processor, the air control model training method in embodiment 1 is implemented; to avoid repetition, details are not repeated here.
Alternatively, when the computer program is executed by a processor, the functions of the modules/units of the air control model training apparatus in embodiment 2 are implemented; to avoid repetition, details are not repeated here.
Alternatively, when the computer program is executed by a processor, the risk identification method of embodiment 3 is implemented; to avoid repetition, details are not repeated here.
Alternatively, when the computer program is executed by a processor, the functions of the modules/units of the risk identification device in embodiment 4 are implemented; to avoid repetition, details are not repeated here.
Embodiment 6
Fig. 8 is a schematic diagram of the computer equipment provided by an embodiment of the present invention. As shown in Fig. 8, the computer equipment 80 of this embodiment includes a processor 81, a memory 82 and a computer program 83 stored in the memory 82 and executable on the processor 81. When the processor 81 executes the computer program 83, the steps of the air control model training method in embodiment 1 are implemented; to avoid repetition, details are not repeated here. Alternatively, when the processor 81 executes the computer program 83, the functions of the modules/units of the air control model training apparatus in embodiment 2 are implemented; to avoid repetition, details are not repeated here. Alternatively, when the processor 81 executes the computer program 83, the steps of the risk identification method of embodiment 3 are implemented; to avoid repetition, details are not repeated here. Alternatively, when the processor 81 executes the computer program 83, the functions of the modules/units of the risk identification device in embodiment 4 are implemented; to avoid repetition, details are not repeated here.
It will be apparent to those skilled in the art that, for convenience and brevity of description, the division of the above functional units and modules is used only as an example. In practical applications, the above functions may be allocated to different functional units or modules as needed; that is, the internal structure of the device may be divided into different functional units or modules to complete all or part of the functions described above.
The embodiments described above are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments or make equivalent replacements of some of the technical features; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and should all be included within the protection scope of the present invention.

Claims (10)

1. An air control model training method, characterized by comprising:
labeling original video data to obtain positive and negative samples;
performing framing and face detection on the positive and negative samples to obtain training face pictures;
grouping the training face pictures according to a preset quantity to obtain at least one group of target training data, wherein the target training data comprises N consecutive frames of the training face pictures;
dividing the target training data according to a preset ratio to obtain a training set and a test set;
inputting each group of the target training data in the training set into a convolutional neural network-long short-term memory recurrent neural network model for training, to obtain an original air control model;
testing the original air control model using each group of the target training data in the test set, to obtain a target air control model.
2. The air control model training method according to claim 1, characterized in that performing framing and face detection on the positive and negative samples to obtain training face pictures comprises:
performing framing on the positive and negative samples to obtain video images;
performing face detection on the video images using a face detection model to obtain the training face pictures.
3. The air control model training method according to claim 1, characterized in that inputting each group of the target training data in the training set into the convolutional neural network-long short-term memory recurrent neural network model for training to obtain the original air control model comprises:
initializing model parameters of the convolutional neural network-long short-term memory recurrent neural network model;
performing feature extraction on the target training data in the training set using the convolutional neural network to obtain face features;
inputting the face features into the long short-term memory recurrent neural network model for training to obtain the original air control model.
4. The air control model training method according to claim 3, characterized in that inputting the face features into the long short-term memory recurrent neural network model for training to obtain the original air control model comprises:
training the face features using a forward propagation algorithm to obtain first state parameters;
performing error calculation on the first state parameters using a back propagation algorithm to obtain the original air control model.
5. A risk identification method, characterized by comprising:
obtaining video data to be identified;
performing face detection on the video data to be identified using a face detection model to obtain face pictures to be identified;
grouping the face pictures to be identified to obtain at least one group of target face pictures;
identifying the at least one group of target face pictures using the target air control model obtained by the air control model training method according to any one of claims 1-4, to obtain a risk identification probability corresponding to each group of the target face pictures;
obtaining a risk identification result based on the risk identification probability.
6. The risk identification method according to claim 5, characterized in that obtaining the risk identification result based on the risk identification probability comprises:
calculating the risk identification probabilities using the weighted-sum formula P = Σi(wi × pi) to obtain the risk identification result, wherein pi is the risk identification probability corresponding to each group of the target face pictures, and wi is the weight corresponding to each group of the target face pictures.
7. An air control model training apparatus, characterized by comprising:
a positive and negative sample acquisition module configured to label original video data to obtain positive and negative samples;
a training face picture acquisition module configured to perform framing and face detection on the positive and negative samples to obtain training face pictures;
a target training data acquisition module configured to group the training face pictures according to a preset quantity to obtain at least one group of target training data, wherein the target training data comprises N consecutive frames of the training face pictures;
a target training data division module configured to divide the target training data according to a preset ratio to obtain a training set and a test set;
an original air control model acquisition module configured to input each group of the target training data in the training set into a convolutional neural network-long short-term memory recurrent neural network model for training, to obtain an original air control model;
a target air control model acquisition module configured to test the original air control model using each group of the target training data in the test set, to obtain a target air control model.
8. A risk identification device, characterized by comprising:
a video data to be identified acquisition module configured to obtain video data to be identified;
a face picture to be identified acquisition module configured to perform face detection on the video data to be identified using a face detection model to obtain face pictures to be identified;
a target face picture acquisition module configured to group the face pictures to be identified to obtain at least one group of target face pictures;
a risk identification probability acquisition module configured to identify the at least one group of target face pictures using the target air control model obtained by the air control model training method according to any one of claims 1-4, to obtain a risk identification probability corresponding to each group of the target face pictures;
a risk identification result acquisition module configured to obtain a risk identification result based on the risk identification probability.
9. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the air control model training method according to any one of claims 1-4; or the processor, when executing the computer program, implements the steps of the risk identification method according to any one of claims 5-6.
10. A computer readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the air control model training method according to any one of claims 1-4; or the computer program, when executed by a processor, implements the steps of the risk identification method according to any one of claims 5-6.
CN201810292057.1A 2018-03-30 2018-03-30 Wind control model training method, risk identification method, device, equipment and medium Active CN108510194B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810292057.1A CN108510194B (en) 2018-03-30 2018-03-30 Wind control model training method, risk identification method, device, equipment and medium
PCT/CN2018/094216 WO2019184124A1 (en) 2018-03-30 2018-07-03 Risk-control model training method, risk identification method and apparatus, and device and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810292057.1A CN108510194B (en) 2018-03-30 2018-03-30 Wind control model training method, risk identification method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN108510194A true CN108510194A (en) 2018-09-07
CN108510194B CN108510194B (en) 2022-11-29

Family

ID=63380183

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810292057.1A Active CN108510194B (en) 2018-03-30 2018-03-30 Wind control model training method, risk identification method, device, equipment and medium

Country Status (2)

Country Link
CN (1) CN108510194B (en)
WO (1) WO2019184124A1 (en)


Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112651267A (en) * 2019-10-11 2021-04-13 阿里巴巴集团控股有限公司 Recognition method, model training, system and equipment
CN110826320B (en) * 2019-11-28 2023-10-13 上海观安信息技术股份有限公司 Sensitive data discovery method and system based on text recognition
CN112949359A (en) * 2019-12-10 2021-06-11 清华大学 Convolutional neural network-based abnormal behavior identification method and device
CN111210335B (en) * 2019-12-16 2023-11-14 北京淇瑀信息科技有限公司 User risk identification method and device and electronic equipment
CN111222026B (en) * 2020-01-09 2023-07-14 支付宝(杭州)信息技术有限公司 Training method of user category recognition model and user category recognition method
CN111291668A (en) * 2020-01-22 2020-06-16 北京三快在线科技有限公司 Living body detection method, living body detection device, electronic equipment and readable storage medium
CN111400663B (en) * 2020-03-17 2022-06-14 深圳前海微众银行股份有限公司 Model training method, device, equipment and computer readable storage medium
CN111582654B (en) * 2020-04-14 2023-03-28 五邑大学 Service quality evaluation method and device based on deep cycle neural network
CN113657136B (en) * 2020-05-12 2024-02-13 阿里巴巴集团控股有限公司 Identification method and device
CN111768286B (en) * 2020-05-14 2024-02-20 北京旷视科技有限公司 Risk prediction method, apparatus, device and storage medium
CN111723907B (en) * 2020-06-11 2023-02-24 浪潮电子信息产业股份有限公司 Model training device, method, system and computer readable storage medium
CN111859913B (en) * 2020-06-12 2024-04-12 北京百度网讯科技有限公司 Processing method and device of wind control characteristic factors, electronic equipment and storage medium
CN111522570B (en) * 2020-06-19 2023-09-05 杭州海康威视数字技术股份有限公司 Target library updating method and device, electronic equipment and machine-readable storage medium
CN111861701A (en) * 2020-07-09 2020-10-30 深圳市富之富信息技术有限公司 Wind control model optimization method and device, computer equipment and storage medium
CN111950625B (en) * 2020-08-10 2023-10-27 中国平安人寿保险股份有限公司 Risk identification method and device based on artificial intelligence, computer equipment and medium
CN112329974B (en) * 2020-09-03 2024-02-27 中国人民公安大学 LSTM-RNN-based civil aviation security event behavior subject identification and prediction method and system
CN112070215B (en) * 2020-09-10 2023-08-29 北京理工大学 Processing method and processing device for dangerous situation analysis based on BP neural network
CN112116577B (en) * 2020-09-21 2024-01-23 公安部物证鉴定中心 Deep learning-based tamper portrait video detection method and system
CN112258026B (en) * 2020-10-21 2023-12-15 国网江苏省电力有限公司信息通信分公司 Dynamic positioning scheduling method and system based on video identity recognition
CN112329849A (en) * 2020-11-04 2021-02-05 中冶赛迪重庆信息技术有限公司 Scrap steel stock yard unloading state identification method based on machine vision, medium and terminal
CN112397204B (en) * 2020-11-16 2024-01-19 中国人民解放军空军特色医学中心 Method, device, computer equipment and storage medium for predicting altitude sickness
CN112509129B (en) * 2020-12-21 2022-12-30 神思电子技术股份有限公司 Spatial view field image generation method based on improved GAN network
CN112990432B (en) * 2021-03-04 2023-10-27 北京金山云网络技术有限公司 Target recognition model training method and device and electronic equipment
CN113343821B (en) * 2021-05-31 2022-08-30 合肥工业大学 Non-contact heart rate measurement method based on space-time attention network and input optimization
CN113923464A (en) * 2021-09-26 2022-01-11 北京达佳互联信息技术有限公司 Video violation rate determination method, device, equipment, medium and program product
CN114740774B (en) * 2022-04-07 2022-09-27 青岛沃柏斯智能实验科技有限公司 Behavior analysis control system for safe operation of fume hood


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100124363A1 (en) * 2008-11-20 2010-05-20 Sony Ericsson Mobile Communications Ab Display privacy system
CN106447434A (en) * 2016-09-14 2017-02-22 全联征信有限公司 Personal credit ecological platform

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102819730A (en) * 2012-07-23 2012-12-12 常州蓝城信息科技有限公司 Method for extracting and recognizing facial features
US20140201126A1 (en) * 2012-09-15 2014-07-17 Lotfi A. Zadeh Methods and Systems for Applications for Z-numbers
CN106339719A (en) * 2016-08-22 2017-01-18 微梦创科网络科技(中国)有限公司 Image identification method and image identification device
CN106980811A (en) * 2016-10-21 2017-07-25 商汤集团有限公司 Facial expression recognizing method and expression recognition device
CN106919903A (en) * 2017-01-19 2017-07-04 中国科学院软件研究所 A kind of continuous mood tracking based on deep learning of robust
CN107179683A (en) * 2017-04-01 2017-09-19 浙江工业大学 A kind of interaction intelligent robot motion detection and control method based on neutral net
CN107180234A (en) * 2017-06-01 2017-09-19 四川新网银行股份有限公司 The credit risk forecast method extracted based on expression recognition and face characteristic
CN107330785A (en) * 2017-07-10 2017-11-07 广州市触通软件科技股份有限公司 A kind of petty load system and method based on the intelligent air control of big data
CN107704834A (en) * 2017-10-13 2018-02-16 上海壹账通金融科技有限公司 Householder method, device and storage medium are examined in micro- expression face

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109214719A (en) * 2018-11-02 2019-01-15 广东电网有限责任公司 A kind of system and method for the marketing inspection analysis based on artificial intelligence
CN109214719B (en) * 2018-11-02 2021-07-13 广东电网有限责任公司 Marketing inspection analysis system and method based on artificial intelligence
CN109635838A (en) * 2018-11-12 2019-04-16 平安科技(深圳)有限公司 Face samples pictures mask method, device, computer equipment and storage medium
CN109670940A (en) * 2018-11-12 2019-04-23 深圳壹账通智能科技有限公司 Credit Risk Assessment Model generation method and relevant device based on machine learning
CN109635838B (en) * 2018-11-12 2023-07-11 平安科技(深圳)有限公司 Face sample picture labeling method and device, computer equipment and storage medium
CN109711665A (en) * 2018-11-20 2019-05-03 深圳壹账通智能科技有限公司 A kind of prediction model construction method and relevant device based on financial air control data
CN109784170A (en) * 2018-12-13 2019-05-21 平安科技(深圳)有限公司 Vehicle insurance damage identification method, device, equipment and storage medium based on image recognition
CN109584051A (en) * 2018-12-18 2019-04-05 深圳壹账通智能科技有限公司 The overdue risk judgment method and device of client based on micro- Expression Recognition
CN109992505A (en) * 2019-03-15 2019-07-09 平安科技(深圳)有限公司 Applied program testing method, device, computer equipment and storage medium
CN110399927A (en) * 2019-07-26 2019-11-01 玖壹叁陆零医学科技南京有限公司 Identification model training method, target identification method and device
CN110569721A (en) * 2019-08-01 2019-12-13 平安科技(深圳)有限公司 Recognition model training method, image recognition method, device, equipment and medium
CN110569721B (en) * 2019-08-01 2023-08-29 平安科技(深圳)有限公司 Recognition model training method, image recognition method, device, equipment and medium
CN110619462A (en) * 2019-09-10 2019-12-27 苏州方正璞华信息技术有限公司 Project quality assessment method based on AI model
CN111144360A (en) * 2019-12-31 2020-05-12 新疆联海创智信息科技有限公司 Multimode information identification method and device, storage medium and electronic equipment
CN111460909A (en) * 2020-03-09 2020-07-28 兰剑智能科技股份有限公司 Vision-based goods location management method and device
CN111429215A (en) * 2020-03-18 2020-07-17 北京互金新融科技有限公司 Data processing method and device
CN111429215B (en) * 2020-03-18 2023-10-31 北京互金新融科技有限公司 Data processing method and device
CN111798047A (en) * 2020-06-30 2020-10-20 平安普惠企业管理有限公司 Wind control prediction method and device, electronic equipment and storage medium
CN112257974A (en) * 2020-09-09 2021-01-22 北京无线电计量测试研究所 Gas lock well risk prediction model data set, model training method and application
CN112131607A (en) * 2020-09-25 2020-12-25 腾讯科技(深圳)有限公司 Resource data processing method and device, computer equipment and storage medium
CN112131607B (en) * 2020-09-25 2022-07-08 腾讯科技(深圳)有限公司 Resource data processing method and device, computer equipment and storage medium
CN112201343A (en) * 2020-09-29 2021-01-08 浙江大学 Cognitive state recognition system and method based on facial micro-expression
CN112201343B (en) * 2020-09-29 2024-02-02 浙江大学 Cognitive state recognition system and method based on facial micro-expressions
CN114765634A (en) * 2021-01-13 2022-07-19 腾讯科技(深圳)有限公司 Network protocol identification method and device, electronic equipment and readable storage medium
CN114765634B (en) * 2021-01-13 2023-12-12 腾讯科技(深圳)有限公司 Network protocol identification method, device, electronic equipment and readable storage medium
CN113139812A (en) * 2021-04-27 2021-07-20 中国工商银行股份有限公司 User transaction risk identification method and device and server
CN115688130A (en) * 2022-10-17 2023-02-03 支付宝(杭州)信息技术有限公司 Data processing method, device and equipment
CN115688130B (en) * 2022-10-17 2023-10-20 支付宝(杭州)信息技术有限公司 Data processing method, device and equipment

Also Published As

Publication number Publication date
CN108510194B (en) 2022-11-29
WO2019184124A1 (en) 2019-10-03

Similar Documents

Publication Publication Date Title
CN108510194A (en) Air control model training method, Risk Identification Method, device, equipment and medium
Yadav et al. Identification of disease using deep learning and evaluation of bacteriosis in peach leaf
CN109902546A (en) Face identification method, device and computer-readable medium
CN107529650A (en) The structure and closed loop detection method of network model, related device and computer equipment
CN110135319A (en) A kind of anomaly detection method and its system
CN112052886A (en) Human body action attitude intelligent estimation method and device based on convolutional neural network
CN110213244A (en) A kind of network inbreak detection method based on space-time characteristic fusion
CN108764050A (en) Skeleton Activity recognition method, system and equipment based on angle independence
CN107122798A (en) Chin-up count detection method and device based on depth convolutional network
CN109817276A (en) A kind of secondary protein structure prediction method based on deep neural network
Guthikonda Kohonen self-organizing maps
CN108665005A (en) A method of it is improved based on CNN image recognition performances using DCGAN
CN104103033B (en) View synthesis method
Wu et al. U-GAN: Generative adversarial networks with U-Net for retinal vessel segmentation
CN109902715A (en) A kind of method for detecting infrared puniness target based on context converging network
CN106909938A (en) Viewing angle independence Activity recognition method based on deep learning network
CN106980830A (en) One kind is based on depth convolutional network from affiliation recognition methods and device
CN113344077A (en) Anti-noise solanaceae disease identification method based on convolution capsule network structure
Monigari et al. Plant leaf disease prediction
Oyedotun et al. Banknote recognition: investigating processing and cognition framework using competitive neural network
Ye et al. An improved efficientNetV2 model based on visual attention mechanism: application to identification of cassava disease
CN117010971B (en) Intelligent health risk providing method and system based on portrait identification
CN107633527A (en) Target tracking method and device based on full convolutional neural networks
Zheng et al. Fruit tree disease recognition based on convolutional neural networks
Prasetyo et al. The implementation of CNN on website-based rice plant disease detection

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant