CN108510194B - Wind control model training method, risk identification method, device, equipment and medium - Google Patents


Info

Publication number: CN108510194B
Application number: CN201810292057.1A
Authority: CN (China)
Legal status: Active (granted)
Other versions: CN108510194A (application publication)
Other languages: Chinese (zh)
Inventor: 马潜
Assignee: Ping An Technology Shenzhen Co Ltd
Priority applications: CN201810292057.1A; PCT/CN2018/094216 (WO2019184124A1)
Prior art keywords: training, face, target, wind control model

Classifications

    • G06Q10/0635 Risk analysis of enterprise or organisation activities
    • G06Q40/02 Banking, e.g. interest calculation or account maintenance
    • G06Q40/03 Credit; Loans; Processing thereof
    • G06Q40/08 Insurance
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/24 Classification techniques
    • G06V10/82 Image or video recognition or understanding using neural networks
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G06V40/172 Classification, e.g. identification


Abstract

The invention discloses a wind control model training method, a risk identification method, a device, equipment and a medium. The wind control model training method comprises the following steps: labeling original video data to obtain positive and negative samples; performing framing and face detection on the positive and negative samples to obtain training face pictures; grouping the training face pictures according to a preset number to obtain at least one group of target training data, each group comprising N consecutive frames of training face pictures; dividing the target training data according to a preset proportion to obtain a training set and a test set; inputting each group of target training data in the training set into a convolutional neural network-long and short-term recurrent neural network (CNN-LSTM) model for training to obtain an original wind control model; and testing the original wind control model with each group of target training data in the test set to obtain a target wind control model. The wind control model training method achieves high training efficiency and high recognition accuracy.

Description

Wind control model training method, risk identification method, device, equipment and medium
Technical Field
The invention relates to the field of risk identification, and in particular to a wind control model training method, a risk identification method, a device, equipment and a medium.
Background
In the financial industry, every loan is issued subject to risk control (hereinafter referred to as wind control) to determine whether the loan can be granted to the borrower. The traditional wind control process mainly takes the form of face-to-face communication between a credit auditor and the borrower. During such communication, however, the credit auditor may, through inattention or misreading, overlook slight changes in the borrower's facial expression, and these slight changes may reflect the borrower's psychological activity (such as lying) during the conversation. Some financial institutions have therefore gradually adopted wind control models to identify whether a borrower is lying, so as to assist loan wind control. Current wind control models need a series of micro-expression recognition models to capture facial features of a human face and then infer the borrower's psychological activity from the fine expression changes to achieve wind control; however, because a general-purpose neural network is adopted when training these micro-expression recognition models, the accuracy of the models is not high and the recognition efficiency is low.
Disclosure of Invention
The embodiment of the invention provides a wind control model training method, a device, equipment and a medium, aiming to solve the problem of low recognition efficiency caused by current risk identification models requiring a series of micro-expression recognition models for recognition.
The embodiment of the invention further provides a risk identification method, aiming to solve the problem that the accuracy of model identification is low because current risk identification models are trained with general-purpose neural network models.
In a first aspect, an embodiment of the present invention provides a method for training a wind control model, including:
marking original video data to obtain positive and negative samples;
performing framing and face detection on the positive and negative samples to obtain a training face picture;
grouping the training face pictures according to a preset number to obtain at least one group of target training data; the target training data comprises N continuous frames of the training face pictures;
dividing the target training data according to a preset proportion to obtain a training set and a test set;
inputting each group of target training data in the training set into a convolutional neural network-long and short recurrent neural network model for training to obtain an original wind control model;
and testing the original wind control model by adopting each group of target training data in the test set to obtain a target wind control model.
In a second aspect, an embodiment of the present invention provides a training device for a wind control model, including:
the positive and negative sample acquisition module is used for marking original video data to acquire positive and negative samples;
a training face picture acquisition module for performing framing and face detection on the positive and negative samples to acquire a training face picture;
the target training data acquisition module is used for grouping the training face pictures according to a preset number to acquire at least one group of target training data; the target training data comprises N continuous frames of the training face pictures;
the target training data dividing module is used for dividing the target training data according to a preset proportion to obtain a training set and a test set;
the original wind control model acquisition module is used for inputting each group of target training data in the training set into a convolutional neural network-long and short recurrent neural network model for training to acquire an original wind control model;
and the target wind control model acquisition module is used for testing the original wind control model by adopting each group of target training data in the test set to acquire a target wind control model.
In a third aspect, an embodiment of the present invention provides a risk identification method, including:
acquiring video data to be identified;
carrying out face detection on the video data to be recognized by adopting a face detection model to obtain a face picture to be recognized;
grouping the face pictures to be recognized to obtain at least one group of target face pictures;
identifying at least one group of target face pictures by adopting a target wind control model obtained by the wind control model training method in the first aspect, and obtaining the risk identification probability corresponding to each group of target face pictures;
and acquiring a risk identification result based on the risk identification probability.
In a fourth aspect, an embodiment of the present invention provides a risk identification apparatus, including:
the to-be-identified video data acquisition module is used for acquiring the to-be-identified video data;
the to-be-recognized face picture acquisition module is used for carrying out face detection on the video data to be recognized by adopting a face detection model to acquire a face picture to be recognized;
the target face picture acquisition module is used for grouping the face pictures to be recognized to acquire at least one group of target face pictures;
a risk identification probability obtaining module, configured to identify at least one group of target face pictures by using a target wind control model obtained by the wind control model training method in the first aspect, and obtain a risk identification probability corresponding to each group of target face pictures;
and the risk identification result acquisition module is used for acquiring a risk identification result based on the risk identification probability.
In a fifth aspect, an embodiment of the present invention provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the wind control model training method in the first aspect when executing the computer program; alternatively, the processor implements the steps of the risk identification method of the third aspect when executing the computer program.
In a sixth aspect, an embodiment of the present invention provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the steps of the wind control model training method in the first aspect are implemented; alternatively, the computer program realizes the steps of the risk identification method of the third aspect when executed by a processor.
In the wind control model training method, device, equipment and medium, the original video data are labeled to obtain positive and negative samples, which facilitates model training and improves its efficiency. Framing and face detection are then performed on the positive and negative samples to obtain pictures containing human facial features, namely training face pictures, so that the wind control model can extract micro-expression features from the training face pictures and perform deep learning, improving the recognition accuracy of the wind control model. The training face pictures are grouped according to a preset number to obtain at least one group of target training data, each group comprising N consecutive frames of training face pictures. Each group of target training data in the training set is input into the CNN-LSTM model for training to obtain an original wind control model: no series of general-purpose micro-expression recognition models is needed to recognize the training face pictures, and each group of target training data is fed directly into the CNN-LSTM model, improving the efficiency of model training. Finally, the original wind control model is tested with each group of target training data in the test set to obtain the target wind control model, so that the recognition effect of the target wind control model is more accurate.
In the embodiment of the invention, a question is put to a target client by way of video chat to obtain the video data of the client's reply, namely the video data to be identified, so that the credit-audit process is made intelligent: the credit auditor no longer needs to communicate face-to-face with the target client, saving labor cost. A face detection model then performs face detection on the video data to be identified to extract the video images containing faces, namely the face pictures to be recognized, and the face pictures to be recognized are grouped to obtain at least one group of target face pictures, improving the accuracy of model recognition. The target wind control model is adopted to identify the at least one group of target face pictures and obtain the risk identification probability corresponding to each group, improving the recognition efficiency and recognition accuracy of the target wind control model.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive labor.
Fig. 1 is a flowchart of a method for training a wind control model provided in embodiment 1 of the present invention.
Fig. 2 is a specific schematic diagram of step S12 in fig. 1.
Fig. 3 is a specific schematic diagram of step S15 in fig. 1.
Fig. 4 is a specific diagram of step S153 in fig. 3.
Fig. 5 is a schematic block diagram of a training apparatus for a wind control model provided in embodiment 2 of the present invention.
Fig. 6 is a flowchart of a risk identification method provided in embodiment 3 of the present invention.
Fig. 7 is a schematic block diagram of a risk identification device provided in embodiment 4 of the present invention.
Fig. 8 is a schematic diagram of the computer device provided in embodiment 6 of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without inventive step based on the embodiments of the present invention, are within the scope of protection of the present invention.
Example 1
Fig. 1 shows a flowchart of the wind control model training method in the present embodiment. The wind control model training method can be applied in financial institutions such as banks, securities firms and insurers, so that the trained wind control model assists the credit auditor in carrying out risk control on a borrower and determining whether a loan can be issued to the borrower. As shown in fig. 1, the wind control model training method includes the following steps:
s11: and marking the original video data to obtain positive and negative samples.
The original video data are open-source video data obtained from datasets published on the Internet or by third-party organizations/platforms, and comprise lying video data and non-lying video data. Specifically, the original video data are labeled for lying: the lying video data are labeled '0' and the non-lying video data are labeled '1', thereby obtaining positive and negative samples, facilitating model training and improving its efficiency.
In this embodiment, the ratio of positive to negative samples is set to 1:1, i.e. equal proportions of lying and non-lying video data are obtained. This effectively prevents the model from overfitting during training, so that the recognition effect of the wind control model trained on the positive and negative samples is more accurate.
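As a concrete illustration, the labeling and 1:1 balancing step might look like the minimal Python sketch below. The function name, the list-of-paths inputs and the random seed are assumptions made for illustration, not details from the patent.

```python
import random

def build_labeled_samples(lying_clips, truthful_clips, seed=42):
    """Label lying clips '0' and non-lying clips '1', then balance 1:1."""
    n = min(len(lying_clips), len(truthful_clips))   # equal proportions
    rng = random.Random(seed)
    negatives = [(path, 0) for path in rng.sample(lying_clips, n)]
    positives = [(path, 1) for path in rng.sample(truthful_clips, n)]
    samples = negatives + positives
    rng.shuffle(samples)                             # mix classes for training
    return samples
```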
S12: and performing framing and face detection on the positive and negative samples to obtain a training face picture.
The training face picture is a picture containing human facial features, obtained by framing the positive and negative samples and performing face detection. Because the wind control model in this embodiment is trained on micro-expression features, framing and face detection must be performed on the positive and negative samples; the resulting pictures containing human facial features are the training face pictures. Model training with the training face pictures enables the wind control model to extract micro-expression features from them and perform deep learning, improving the recognition accuracy of the wind control model.
S13: grouping the training face pictures according to a preset number to obtain at least one group of target training data; the target training data comprises N consecutive frames of training face pictures.
The training face pictures are grouped according to a preset number to obtain at least one group of target training data, so that each group of target training data comprises N consecutive frames of training face pictures. Micro-expression feature changes of a face can then be obtained from the N consecutive frames, giving the training face pictures a time sequence and increasing the accuracy of the target wind control model.
In this embodiment, the preset number may be set in the range [50, 200]. If 50 frames or fewer of training face pictures were used as a group of training data in the training set, the recognition accuracy of the wind control model would not be high, because too few training face pictures cannot show the changing facial features of a person who is lying. If 200 frames or more were used as a group of training data, model training would take too long and its efficiency would drop. In this embodiment, every hundred frames of training face pictures form one group of training data, improving both the training efficiency of the model and the recognition accuracy of the trained wind control model, as shown in the sketch below.
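A minimal sketch of the grouping step follows, assuming the frames of one sample arrive as an ordered list; the choice N = 100 mirrors the hundred-frame groups of this embodiment, and dropping a short trailing remainder is an assumption.

```python
def group_frames(frames, n=100):
    """Split an ordered frame list into consecutive, non-overlapping
    groups of n frames; a trailing remainder shorter than n is dropped."""
    return [frames[i:i + n] for i in range(0, len(frames) - n + 1, n)]
```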
S14: and dividing target training data according to a preset proportion to obtain a training set and a test set.
The preset proportion, set in advance, is used to divide the target training data; it may be a ratio drawn from historical experience. The training set is the sample data set for learning and is used to fit the classifier, that is, the target training data in the training set train the machine learning model to determine its parameters. The test set is used to assess the discriminative power, such as the recognition rate, of the trained machine learning model. In this embodiment, the target training data may be divided between training set and test set in a ratio of 9:1.
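The preset-ratio split can be sketched as below; the 9:1 ratio follows this embodiment, while the shuffle and its seed are assumptions added for reproducibility.

```python
import random

def split_train_test(groups, train_ratio=0.9, seed=0):
    """Divide groups of target training data into a training set and a test set."""
    groups = groups[:]                     # avoid mutating the caller's list
    random.Random(seed).shuffle(groups)
    cut = int(len(groups) * train_ratio)
    return groups[:cut], groups[cut:]      # (training set, test set)
```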
S15: and inputting each group of target training data in the training set into a convolutional neural network-long and short recurrent neural network model for training to obtain an original wind control model.
The convolutional neural network-long and short recurrent neural network (CNN-LSTM) model is obtained by combining a convolutional neural network model with a long short-term memory recurrent neural network model. It can be understood as a model in which a convolutional neural network is connected to an LSTM model.
A Convolutional Neural Network (CNN) is a locally connected network. Compared with a fully-connected network, its greatest characteristics are local connectivity and weight sharing. For a certain pixel p in an image, the closer another pixel is to p, the larger its influence on p (local connectivity). In addition, according to the statistical characteristics of natural images, the weights learned for one region can also be used for another region, i.e. weight sharing. Weight sharing can be understood as convolution-kernel sharing: in a CNN, one convolution kernel is convolved with a given image to extract one image feature, and different convolution kernels extract different image features. The local connectivity of the convolutional neural network reduces the complexity of the model and improves the efficiency of model training; moreover, because of weight sharing, the convolutional neural network can learn in parallel, further improving the efficiency of model training.
The long short-term memory (LSTM) model is a time-recurrent neural network model suitable for processing and predicting important events in a time series with relatively long intervals and delays. The LSTM model has a temporal memory function; since in this embodiment the features of each frame of training face picture are closely related to those of the preceding and following frames, the LSTM model is used to train on the extracted features, so as to exploit the long-term memory of the data and improve the accuracy of the model.
In this embodiment, since the training is performed on target training data, that is, N consecutive frames of training face pictures, features must be extracted from the training face pictures; the convolutional neural network model is the neural network commonly used for picture feature extraction, and its weight sharing and local connectivity greatly increase model training efficiency. Because the features of each frame of training face picture are closely related to those of the preceding and following frames, the LSTM model is adopted to train on the extracted face features, exploiting the long-term memory of the data and improving the accuracy of the model. Thanks to the weight sharing and local connectivity of the convolutional neural network and the long-term memory capability of the LSTM model, both the training efficiency of the wind control model obtained by training the CNN-LSTM model and its accuracy are greatly improved.
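For illustration, a CNN-LSTM of the kind described above can be sketched in Keras as follows: a small CNN extracts features from each frame, TimeDistributed applies it across the N consecutive frames, and an LSTM models the time sequence. All layer sizes here are illustrative assumptions rather than values from the patent; only the 100-frame groups and 260 x 260 inputs follow the embodiment.

```python
from tensorflow.keras import layers, models

N_FRAMES, H, W = 100, 260, 260   # 100-frame groups of 260x260 face pictures

def build_cnn_lstm():
    # Per-frame feature extractor; a real system would likely downsample more.
    frame_cnn = models.Sequential([
        layers.Conv2D(16, 3, activation="tanh", input_shape=(H, W, 1)),
        layers.MaxPooling2D(2),
        layers.Conv2D(32, 3, activation="tanh"),
        layers.MaxPooling2D(2),
        layers.Flatten(),
    ])
    model = models.Sequential([
        layers.TimeDistributed(frame_cnn, input_shape=(N_FRAMES, H, W, 1)),
        layers.LSTM(64),                        # temporal model over the N frames
        layers.Dense(2, activation="softmax"),  # lying ('0') vs. not lying ('1')
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```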
S16: and testing the original wind control model by adopting each group of target training data in the test set to obtain a target wind control model.
The target wind control model is the model obtained by testing the original wind control model with the training face pictures in the test set until its accuracy reaches a preset accuracy. Specifically, the target training data in the test set, namely N consecutive frames of training face pictures, are used to test the original wind control model and obtain the corresponding accuracy; if the accuracy reaches the preset accuracy, the original wind control model is taken as the target wind control model.
In this embodiment, the original video data are first labeled to obtain positive and negative samples, facilitating model training and improving its efficiency. Setting the proportions of positive and negative samples equal effectively prevents overfitting during training, so that the recognition effect of the wind control model trained on the positive and negative samples is more accurate. Framing and face detection are then performed on the positive and negative samples to obtain pictures containing human facial features, namely training face pictures, so that the wind control model can extract micro-expression features from them and perform deep learning, improving its recognition accuracy. Grouping the training face pictures according to the preset number, so that each group of N consecutive frames serves as one group of target training data, improves both the training efficiency of the model and the recognition accuracy of the wind control model. The target training data are divided according to the preset proportion into a training set and a test set, and each group of target training data in the training set is input into the CNN-LSTM model for training to obtain an original wind control model with time sequence; the weight sharing of the convolutional neural network lets the network learn in parallel, and its local connectivity reduces model complexity, both improving model training efficiency. Finally, the original wind control model is tested with each group of target training data in the test set to obtain the target wind control model, so that its recognition effect is more accurate.
In a specific embodiment, as shown in fig. 2, in step S12, that is, performing framing and face detection on the positive and negative samples to obtain a training face picture, the method specifically includes the following steps:
s121: and framing the positive and negative samples to obtain a video image.
Framing means dividing the original video data at preset time intervals to obtain video images. Specifically, after framing the positive and negative samples, the method further normalizes and time-labels the video images. Normalization is a simplifying transformation that turns a dimensional expression into a dimensionless scalar. For example, in the positive and negative samples of this embodiment, the client's micro-expression features can be extracted from the face region alone, so the framed video images are normalized to 260 × 260 pixels; unifying the pixels allows subsequent face detection on every frame of video image and improves the accuracy of model recognition. Time labeling annotates each frame of video image in time order, giving the video images a time sequence and improving the accuracy of the model.
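A minimal OpenCV sketch of the framing, normalization and time-labeling step follows; the capture loop, sampling interval and function name are assumptions, while the 260 × 260 target size follows this embodiment.

```python
import cv2

def frame_video(path, every_n_frames=1, size=(260, 260)):
    """Split a video into frames, resize each to a uniform 260x260,
    and keep the frame index as a simple time label."""
    capture = cv2.VideoCapture(path)
    frames = []
    index = 0
    while True:
        ok, image = capture.read()
        if not ok:                       # end of video
            break
        if index % every_n_frames == 0:
            frames.append((index, cv2.resize(image, size)))
        index += 1
    capture.release()
    return frames
```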
S122: and carrying out face detection on the video image by adopting a face detection model to obtain a training face picture.
The face detection model is a pre-trained model for detecting whether each frame of video image contains a face region. Specifically, each frame of video image is input into the face detection model, the face position in the frame is detected, and the video images containing faces are extracted as training face pictures, providing technical support for the input of the subsequent model.
In this embodiment, the positive and negative samples are subjected to framing and normalization processing to obtain video images, and pixels of each frame of video image are unified, so that face detection is performed on each frame of video image in the following process, and the efficiency of training the wind control model is improved. And finally, carrying out face detection on the video image by adopting a face detection model to obtain a video image containing a face, namely a training face picture, so as to provide technical support for the input of a subsequent model, and carrying out model training on the video image containing the face to eliminate the interference of other factors, so that the model can extract micro-expression characteristics based on the training face picture, and provide technical support for the training of the wind control model.
In a specific embodiment, the face detection model in step S122 is a face detection model obtained by training with a CascadeCNN network.
CascadeCNN (cascaded convolutional neural network) is a deep convolutional network realization of the classical Viola-Jones method and is a face detection method with high detection speed. Viola-Jones is a face detection framework. In this embodiment, the CascadeCNN method is adopted to train on pictures annotated with face positions to obtain the face detection model, improving the recognition efficiency of the face detection model.
Specifically, the steps of training on the pictures annotated with face positions (training face pictures) using the CascadeCNN method are as follows:
In the first training stage, a 12-net network scans the image and rejects more than 90% of the windows; the remaining windows are input into a 12-calibration-net network for correction, and a non-maximum suppression algorithm then processes the corrected windows to eliminate highly overlapping ones. Here, 12-net obtains detection windows by sliding a 12 × 12 detection window with a step size of 4 over a W (width) × H (height) picture. 12-calibration-net is a correction network used to correct the face region and obtain the face region coordinates. The non-maximum suppression algorithm is widely used in fields such as target detection and positioning; in essence, it searches for local maxima and suppresses non-maximum elements. The 12-net network performs face detection on the training face pictures: windows judged to be non-faces (i.e. not exceeding a preset threshold) serve as negative samples, and the windows of all real faces (i.e. exceeding the preset threshold) serve as positive samples, yielding the corresponding detection windows. The preset threshold is set in advance by the developer to judge whether a face is present in the training data.
In the second training stage, the images output by the first stage are processed with 24-net and 24-calibration-net networks. The difference between 12-net and 24-net is that 24-net builds on 12-net: 24 × 24 pictures are input into the 24-net network to obtain the features extracted by the 24-net fully-connected layer, the same 24 × 24 pictures are simultaneously scaled to 12 × 12 and input into the 12-net fully-connected layer, and finally the features from the 24-net fully-connected layer and the 12-net fully-connected layer are output together. The 12-calibration-net and 24-calibration-net networks are correction networks. The 24-net network performs face detection on the training data, taking windows judged to be non-faces as negative samples and all real faces as positive samples.
In the third training stage, 48-net and 48-calibration-net networks process the output of the second stage to complete the final stage of training. The processing at this stage is similar to that of the second training stage and is not repeated here.
In this embodiment, a face detection model obtained by the CascadeCNN network training is used to perform face detection on the video image, and the process of obtaining the training face picture is consistent with the training process.
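The patent's detector is a CascadeCNN trained as above; as a stand-in for that trained model, the following sketch uses OpenCV's bundled Haar cascade (a Viola-Jones detector, the classical method CascadeCNN builds on) merely to show the crop-to-face step. The cascade file and per-frame handling are assumptions.

```python
import cv2

# OpenCV ships this Viola-Jones cascade; it substitutes here for the
# patent's trained CascadeCNN model, which is not publicly available.
_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_faces(frames):
    """Keep only frames containing a detected face, cropped to the face."""
    face_frames = []
    for index, image in frames:
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        boxes = _cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(boxes) > 0:
            x, y, w, h = boxes[0]        # first detection per frame
            face_frames.append((index, image[y:y + h, x:x + w]))
    return face_frames
```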
In a specific embodiment, as shown in fig. 3, in step S15, each set of target training data in the training set is input into a convolutional neural network-long and short recurrent neural network model for training, so as to obtain an original wind control model, which specifically includes the following steps:
s151: initializing a convolutional neural network-a long and short recurrent neural network model.
Initializing the CNN-LSTM model means pre-initializing the model parameters of the convolutional neural network model (namely the convolution kernels and biases) and the model parameters of the LSTM model (namely the connection weights between layers). A convolution kernel is a weight of the convolutional neural network: when training data are input, they are multiplied by the weight, namely the convolution kernel, to obtain the output of a neuron, which reflects the importance of the training data. A bias is a linear component used to shift the range of the weighted input. The model training process is completed on the basis of the determined convolution kernels, biases and inter-layer connection weights of the LSTM model.
S152: and performing feature extraction on the target training data in the training set by adopting the convolutional neural network to obtain the face features.
The face features are obtained by feature extraction on the target training data in the training set, namely the N consecutive frames of training face pictures, with the convolutional neural network. Specifically, the convolutional neural network model performs a convolution operation on the target training data in the training set to extract features. The calculation formula of the convolution operation is

$$y_j = \sum_i x_i * w_{ij} + b_j$$

wherein $*$ denotes the convolution operation; $x_i$ represents the $i$th input feature map; $y_j$ represents the $j$th output feature map; $w_{ij}$ is the convolution kernel (weight) between the $i$th input feature map and the $j$th output feature map; and $b_j$ represents the bias term of the $j$th output feature map. Max-pooling downsampling is then applied to the convolved feature map to reduce its dimensionality; its calculation formula is

$$y_j(p, q) = \max_{0 \le u < S,\; 0 \le v < S} x_j(p \cdot m + u,\; q \cdot n + v)$$

wherein $y_j$ represents the $j$th output map of the downsampling (i.e. the feature map after downsampling), each neuron of which is obtained by locally sampling the $j$th input map (the feature map after convolution) with an $S \times S$ downsampling frame, and $m$ and $n$ represent the step sizes of the downsampling frame in the two directions.
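Rendered directly in NumPy, the two formulas above might look as follows; the code is written for clarity rather than speed, and, like most deep-learning frameworks, computes the convolution in cross-correlation form (without flipping the kernel), which is an assumption about the intended convention.

```python
import numpy as np

def conv_layer(x, w, b):
    """y_j = sum_i x_i * w_ij + b_j, 'valid' 2-D convolution.
    x: (I, H, W) input maps; w: (I, J, kH, kW) kernels; b: (J,) biases."""
    I, H, W = x.shape
    _, J, kH, kW = w.shape
    y = np.zeros((J, H - kH + 1, W - kW + 1))
    for j in range(J):
        for i in range(I):
            for p in range(H - kH + 1):
                for q in range(W - kW + 1):
                    y[j, p, q] += np.sum(x[i, p:p + kH, q:q + kW] * w[i, j])
        y[j] += b[j]                              # bias term b_j
    return y

def max_pool(y, s=2, m=2, n=2):
    """Max-pooling with an s-by-s window moved with step sizes m and n."""
    J, H, W = y.shape
    out = np.zeros((J, (H - s) // m + 1, (W - s) // n + 1))
    for j in range(J):
        for p in range(out.shape[1]):
            for q in range(out.shape[2]):
                out[j, p, q] = y[j, p * m:p * m + s, q * n:q * n + s].max()
    return out
```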
S153: and inputting the human face characteristics into a long and short time recurrent neural network model for training to obtain an original wind control model.
Specifically, the LSTM model is one of neural network models with long-term memory capability, and has a three-layer network structure of an input layer, a hidden layer, and an output layer. The input layer is the first layer of the LSTM model and is used for receiving external signals, namely, is responsible for receiving human face features carrying time sequence states. In this embodiment, because the training face pictures in the training set have a time sequence, the face features obtained after the training face pictures in the training set are processed in step S152 also have a time sequence, so that the face features can be applied to the LSTM model, and the LSTM can obtain the face features carrying a time sequence state. The output layer is the last layer of the LSTM model and is used for outputting signals to the outside, namely responsible for outputting the calculation results of the LSTM model. The hidden layer is a layer except the input layer and the output layer in the LSTM model and is used for processing the input human face features and obtaining the calculation result of the LSTM model. The original wind control model is a model obtained by adopting an LSTM model to carry out multiple iterations on the human face features in the time sequence state until convergence. Understandably, model training is carried out on the extracted human face features by adopting the LSTM model, so that the time sequence of the obtained original wind control model is enhanced, and the accuracy of the original wind control model is improved.
In this embodiment, the output layer of the LSTM model performs regression processing using Softmax (regression model) for classifying and outputting the weight matrix. Softmax (regression model) is a classification function commonly used in neural networks, and maps the output of a plurality of neurons into a [0,1] interval, which can be understood as probability, and is simple and convenient to calculate, so that multi-classification output is performed, and the output result is more accurate.
In this embodiment, the CNN-LSTM model is first initialized; the target training data in the training set are then processed by the convolutional neural network model to obtain face features, and the obtained face features are input into the LSTM model for training to obtain the original wind control model.
As shown in fig. 4, inputting the face features into the long and short recurrent neural network model for training (i.e. step S153), specifically includes the following steps:
s1531: and training the face features by adopting a forward propagation algorithm to obtain a first state parameter.
Specifically, training the face features with the Forward Propagation algorithm means training in the order of the time-sequence states carried by the face features. The first state parameter refers to a parameter obtained in the initial iteration of model training based on the face features.
The Forward Propagation algorithm is an algorithm for training a model along the time sequence. Specifically, the forward propagation calculation is

$$S_t = \tanh\left(U\, S_{t-1} + W\, X_t\right)$$

and

$$\hat{o}_t = V\, S_t$$

wherein $S_t$ represents the output of the hidden layer at the current time; $U$ represents the weight from the hidden layer at the previous time to the hidden layer at the current time; $W$ represents the weight from the input layer to the hidden layer; $\hat{o}_t$ represents the prediction output at the current time; and $V$ represents the weight from the hidden layer to the output layer.

As will be appreciated, the forward propagation algorithm takes the input $X_t$ at the current time together with the output $S_{t-1}$ of the hidden unit at the previous time, i.e. the output of the memory cells in the hidden layer of the LSTM model, as the input of the hidden layer, and obtains the output $S_t$ of the hidden layer at the current time through the transformation of the activation function tanh (hyperbolic tangent); the prediction output at time $t$ is then $\hat{o}_t$. The prediction output $\hat{o}_t$ thus depends on the current output $S_t$, and $S_t$ incorporates both the input at time $t$ and the state at time $t-1$, so the output of the model retains all information along the time sequence, giving the model its time sequence.
In this embodiment, because the expression capability of the linear model is not sufficient, tanh (hyperbolic tangent) is used as the activation function, and a nonlinear factor may be added, so that the trained original wind control model can solve a more complex problem. Moreover, the activation function tanh (hyperbolic tangent) has the advantage of high convergence rate, so that the training time can be saved, and the model training efficiency can be improved.
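A minimal NumPy sketch of this forward recurrence follows; the dimensions, and the softmax on the output to match the Softmax output layer described earlier, are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())           # stabilized softmax
    return e / e.sum()

def forward(X, U, W, V):
    """X: (T, input_dim) sequence of face-feature vectors.
    Returns hidden states S (T, hidden) and per-step predictions."""
    T = X.shape[0]
    hidden = U.shape[0]
    S = np.zeros((T, hidden))
    preds = []
    s_prev = np.zeros(hidden)
    for t in range(T):
        S[t] = np.tanh(U @ s_prev + W @ X[t])   # S_t carries all past states
        preds.append(softmax(V @ S[t]))          # prediction output o_hat_t
        s_prev = S[t]
    return S, np.array(preds)
```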
S1532: and performing error calculation on the first state parameter by adopting a back propagation algorithm to obtain an original wind control model.
The Back Propagation algorithm is an algorithm that passes accumulated residuals back from the last time step and trains the neural network model. Specifically, back propagation works from a total loss of the form

$$L = \sum_{t} L_t\big(o_t, \hat{o}_t\big)$$

wherein $\hat{o}_t$ represents the prediction output at time $t$ and $o_t$ represents the true value corresponding to $\hat{o}_t$ at time $t$. In this embodiment, error calculation is performed on the first state parameter with the back propagation algorithm, and error back-propagation updating is performed based on the error calculation result so as to update the weight parameters of the LSTM model and the weight parameters of the convolutional neural network, which effectively improves the accuracy of the wind control model.
Specifically, performing error calculation on the first state parameter with the Back Propagation algorithm means updating the optimization parameters, namely the three weight parameters $U$, $V$ and $W$ in this embodiment, in time-reversed order. In this embodiment, the loss function at the $t$-th time step of backward propagation is defined as the cross entropy, i.e. the calculation uses

$$L_t = -\, o_t \log \hat{o}_t$$

Finally, the partial derivatives of each layer are calculated by the chain rule of derivation, namely the three rates of change

$$\frac{\partial L}{\partial U}, \qquad \frac{\partial L}{\partial V}, \qquad \frac{\partial L}{\partial W},$$

and the three weight parameters $U$, $V$ and $W$ are updated based on these rates of change to obtain the adjusted state parameters. Since $L = \sum_t L_t$, only the partial derivatives of the loss function at each time step need to be calculated and added up to obtain these rates of change and thus update the weight parameters of the LSTM model. The chain rule is a rule in calculus for differentiating composite functions and is a commonly used method in derivative calculations. Finally, the partial derivatives of the loss with respect to the bias and the convolution kernel of the convolutional neural network,

$$\frac{\partial L}{\partial b} \qquad \text{and} \qquad \frac{\partial L}{\partial k},$$

are calculated, and the model parameters of the convolutional neural network (namely the convolution kernel and the bias) are updated in reverse, wherein $b$ represents the bias of the convolutional neural network and $k$ its convolution kernel. Because the LSTM model and the convolutional neural network model together form one neural network, the model parameters of both are updated by the back propagation algorithm in the LSTM model, completing the optimization of the original wind control model.
Specifically, since the gradient may shrink exponentially as the number of back-propagation layers increases, causing the gradient to vanish, pairing the cross-entropy loss function with the tanh activation function in this embodiment mitigates the vanishing-gradient problem and increases the accuracy of training.
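The backward pass for the recurrence above can be sketched as follows; the softmax-with-cross-entropy gradient and the truncation of the chain rule to a few steps back are standard simplifications assumed here, not details taken from the patent.

```python
import numpy as np

def backward(X, S, preds, labels, U, W, V, lr=0.01):
    """One gradient-descent update of U, W, V from the states produced by
    forward(); labels are integer class indices per time step."""
    T, hidden = S.shape
    dU = np.zeros_like(U); dW = np.zeros_like(W); dV = np.zeros_like(V)
    for t in reversed(range(T)):
        dz = preds[t].copy()
        dz[labels[t]] -= 1.0                   # d L_t / d(V S_t) for softmax + CE
        dV += np.outer(dz, S[t])
        ds = V.T @ dz                          # error flowing into the hidden state
        for k in range(t, max(t - 4, -1), -1): # truncate the chain at 4 steps back
            dtanh = (1.0 - S[k] ** 2) * ds     # tanh'(z) = 1 - tanh(z)^2
            dW += np.outer(dtanh, X[k])
            s_prev = S[k - 1] if k > 0 else np.zeros(hidden)
            dU += np.outer(dtanh, s_prev)
            ds = U.T @ dtanh                   # continue back one more step
    for p, g in ((U, dU), (W, dW), (V, dV)):
        p -= lr * g                            # in-place gradient-descent update
    return U, W, V
```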
In the embodiment, the face features are trained by adopting a forward propagation algorithm to obtain the first state parameters, then the backward propagation algorithm is adopted to carry out error calculation on the first state parameters, and error back propagation updating is carried out based on the error calculation result to update the weight parameters of the LSTM model and the weight parameters of the convolutional neural network, so that the accuracy of the obtained original wind control model can be effectively improved.
In this embodiment, because the Convolutional Neural Network (CNN) is a locally connected network whose local connectivity and weight sharing allow the model to learn in parallel, the convolutional neural network is used to extract features from the face pictures in the training set, improving the efficiency of obtaining face features and hence of model training. The acquired face features are then input into the LSTM model for training to obtain an original wind control model with time sequence, enhancing the model's predictive power over time and improving the accuracy of the original wind control model.
In this embodiment, the original video data is labeled first to obtain positive and negative samples, so as to facilitate model training and improve the efficiency of model training. Then, the proportion of the positive and negative samples is set to be equal, so that the condition that the model is over-fitted during training can be effectively prevented, and the recognition effect of the wind control model obtained through training of the positive and negative samples is more accurate. And then, performing framing and normalization processing on the positive and negative samples to obtain video images, and unifying pixels of each frame of video image so as to perform face detection on each frame of video image in the following manner, thereby improving the accuracy of risk identification. And finally, carrying out face detection on the video image by adopting a face detection model to obtain a video image containing a face, namely a training face picture, so as to provide technical support for the input of a subsequent model, and carrying out model training on the video image containing the face to eliminate the interference of other factors, so that the model can extract micro-expression characteristics based on the training face picture, thereby achieving the purpose of risk control. The training face pictures are grouped according to the preset number, so that each training face picture with the preset number of continuous N frames serves as a group of target training data to perform model training, and the training efficiency of the model and the accuracy of wind control model recognition are improved. The method comprises the steps of dividing target training data according to a preset proportion, obtaining a training set and a testing set, inputting each group of target training data in the training set into a convolutional neural network-long-short time recurrent neural network model for training, and obtaining an original wind control model, so that the original wind control model has time sequence, and due to weight sharing of the convolutional neural network, the network can learn in parallel, and model training efficiency is improved. And finally, testing the original wind control model by adopting each group of target training data in the test set to obtain the target wind control model, so that the identification effect of the target wind control model is more accurate.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by functions and internal logic of the process, and should not limit the implementation process of the embodiments of the present invention in any way.
Example 2
Fig. 5 is a schematic block diagram of a wind control model training apparatus corresponding to the wind control model training method according to embodiment 1. As shown in fig. 5, the wind control model training device includes a positive and negative sample obtaining module 11, a training face picture obtaining module 12, a target training data obtaining module 13, a target training data dividing module 14, an original wind control model obtaining module 15, and a target wind control model obtaining module 16. Implementation functions of the positive and negative sample acquisition module 11, the training face image acquisition module 12, the target training data acquisition module 13, the target training data division module 14, the original wind control model acquisition module 15, and the target wind control model acquisition module 16 correspond to steps corresponding to the wind control model training method in embodiment 1 one by one, and for avoiding redundancy, detailed description is not repeated in this embodiment.
And the positive and negative sample acquisition module 11 is used for labeling the original video data to acquire positive and negative samples.
And a training face picture obtaining module 12, configured to perform framing and face detection on the positive and negative samples, and obtain a training face picture.
And the target training data acquisition module 13 is used for grouping the training face pictures according to a preset number to acquire at least one group of target training data.
And a target training data dividing module 14, configured to divide the target training data according to a preset ratio to obtain a training set and a test set.
And the original wind control model acquisition module 15 is used for inputting each group of target training data in the training set into the convolutional neural network-long and short recurrent neural network model for training to acquire an original wind control model.
And the target wind control model acquisition module 16 is used for testing the original wind control model by adopting each group of target training data in the test set to acquire the target wind control model.
Preferably, the training face picture acquiring module 12 includes a video image acquiring unit 121 and a training face picture acquiring unit 122.
The video image obtaining unit 121 is configured to perform framing on the positive and negative samples to obtain a video image.
And a training face picture obtaining unit 122, configured to perform face detection on the video image by using a face detection model, and obtain a training face picture.
Preferably, the original wind control model acquisition module 15 includes a model initialization unit 151, a face feature acquisition unit 152 and an original wind control model acquisition unit 153.

The model initialization unit 151 is used for initializing the model parameters of the CNN-LSTM model.

The face feature acquisition unit 152 is used for performing feature extraction on the target training data in the training set with the convolutional neural network to obtain face features.

The original wind control model acquisition unit 153 is used for inputting the face features into the long short-term memory network for training to obtain the original wind control model.

Preferably, the original wind control model acquisition unit 153 includes a first state parameter acquisition subunit 1531 and an original wind control model acquisition subunit 1532.

The first state parameter acquisition subunit 1531 is used for training the face features with a forward propagation algorithm to obtain first state parameters.

The original wind control model acquisition subunit 1532 is used for performing error calculation on the first state parameters with a back propagation algorithm to obtain the original wind control model.
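To make the division of labor between subunits 1531 and 1532 concrete, here is a hedged sketch of one training step in TensorFlow: the forward pass produces the model output (the first state parameters), and back-propagation computes the error gradient and updates the weights. The loss and optimizer choices are assumptions.

```python
import tensorflow as tf

loss_fn = tf.keras.losses.BinaryCrossentropy()   # positive/negative labels
optimizer = tf.keras.optimizers.Adam()

@tf.function
def train_step(model, x_batch, y_batch):
    with tf.GradientTape() as tape:
        y_pred = model(x_batch, training=True)   # forward propagation
        loss = loss_fn(y_batch, y_pred)          # error calculation
    # Back propagation: gradients of the error w.r.t. the model parameters.
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```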
Embodiment 3
Fig. 6 shows a flowchart of the risk identification method in this embodiment. The risk identification method can be applied to computer equipment deployed by financial institutions such as banks, securities firms and insurers, and can effectively assist the creditor in performing risk control on a borrower so as to determine whether to grant the borrower a loan. As shown in Fig. 6, the risk identification method includes the following steps:
S21: acquiring video data to be identified.
The video data to be identified is the unprocessed video data of the borrower recorded during the credit granting process. Since identification based on a single frame of video image is not sufficiently accurate, the video data to be identified in this embodiment is composed of at least two frames of video images.
In this embodiment, during the credit review process, the reviewer asks the target client questions by video chat and acquires the video data of the client's replies (i.e., the video data to be identified). This makes the credit review process intelligent: the reviewer does not need to communicate with the target client face to face, which saves labor cost.
S22: performing face detection on the video data to be identified with a face detection model to acquire face pictures to be identified.
The face pictures to be identified are the pictures, obtained by running the face detection model over the video data to be identified, that are used for identification. Specifically, each frame of the video data to be identified is input into the face detection model, the face position in each frame is detected, and the video images containing a face, i.e., the face pictures to be identified, are extracted. The face detection model is the model trained with the CascadeCNN network, and the detection process is the same as in embodiment 1; it is not repeated here.
S23: grouping the face pictures to be identified to acquire at least one group of target face pictures.
The face pictures to be identified are grouped by a preset number to obtain at least one group of target face pictures. Specifically, the face pictures to be identified are grouped by cross selection: every hundred frames form one group of data to be identified (i.e., one group of target face pictures). For example, for 40 s of video data to be identified (960 frames), the 1st to 100th pictures form one group, the 11th to 110th pictures form the next group, and so on. Grouping by cross selection in this way fully preserves the relation between the face pictures to be identified and improves the accuracy of model identification.
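A hedged sketch of this cross-selection grouping, with the window of 100 frames and the stride of 10 taken from the example above:

```python
def group_frames(frames, group_size=100, stride=10):
    """Sliding-window grouping: consecutive groups overlap by 90 frames,
    so the temporal relation between face pictures is preserved."""
    return [frames[start:start + group_size]
            for start in range(0, len(frames) - group_size + 1, stride)]

# 960 frames (40 s of video) -> groups [1..100], [11..110], [21..120], ...
```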
S24: identifying at least one group of target face pictures with the target wind control model to acquire the risk identification probability corresponding to each group of target face pictures.
The target wind control model is the model obtained by training with the wind control model training method in embodiment 1. In this embodiment, at least one group of target face pictures is input into the target wind control model, the model computes over each input group, and the risk identification probability corresponding to each group of target face pictures is output. In this embodiment, the risk identification probability may be a real number between 0 and 1.
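Continuing the sketches above (the hypothetical `model` from embodiment 1 and the `group_frames` helper), scoring each group might look like this; the array shapes assume 128x128 grayscale face pictures.

```python
import numpy as np

probs = []
for g in groups:                           # each g: 100 consecutive face frames
    batch = np.stack(g)[None, ..., None]   # shape (1, 100, 128, 128, 1)
    probs.append(float(model.predict(batch, verbose=0)[0, 0]))
```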
S25: acquiring a risk identification result based on the risk identification probabilities.
Specifically, the risk identification probabilities are combined with the weighted operation formula

P = Σ_{i=1}^{n} w_i p_i

to obtain the risk identification result, where p_i is the risk identification probability corresponding to the i-th group of target face pictures, w_i is the weight corresponding to that group, and n is the number of groups.
In this embodiment, the weight corresponding to each group of target face pictures is set according to the question being asked in the dialogue: basic questions such as age, gender and name receive a low weight, while sensitive questions such as the loan application, personal income and repayment willingness receive a high weight. Computing the risk identification result as this weighted combination makes it more accurate. Basic and sensitive credit review questions are distinguished by whether a standard answer exists. Taking a bank as an example, if the target client has pre-stored personal information (such as an identity card number, a relative's mobile phone number or a home address) with a financial institution such as a bank, securities firm or insurer, a question based on that pre-stored information has a standard answer and is a basic credit review question. For information the target client has not pre-stored with the institution, there is no standard answer, and a question based on it is a sensitive credit review question.
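A minimal sketch of the weighted operation follows; normalizing the weights so the result stays in [0, 1], and the example values, are assumptions.

```python
def risk_result(probs, weights):
    """Weighted combination of per-group risk identification probabilities."""
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, probs)) / total

# Two basic questions (low weight) and one sensitive question (high weight):
result = risk_result([0.2, 0.3, 0.8], [1.0, 1.0, 3.0])   # -> 0.58
```

In practice the weight vector would mirror the dialogue script, one weight per question group, with higher values for the sensitive credit review questions.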
In this embodiment, the target client is questioned by video chat to acquire the video data of the client's replies, i.e., the video data to be identified, which makes the credit review process intelligent, removes the need for face-to-face communication with the target client and saves labor cost. The face detection model is then used to perform face detection on the video data to be identified and extract the video images containing a face, i.e., the face pictures to be identified; grouping these pictures by cross selection yields at least one group of target face pictures and improves the accuracy of model identification. The target wind control model identifies each group of target face pictures and outputs the corresponding risk identification probability, improving both the identification efficiency and the identification accuracy of the model. Finally, the risk identification probabilities are combined by the weighted operation to obtain the risk identification result, making it more accurate.
Embodiment 4
Fig. 7 is a schematic block diagram of a risk identification device corresponding to the risk identification method of embodiment 3. As shown in Fig. 7, the risk identification device includes a to-be-identified video data acquisition module 21, a to-be-identified face picture acquisition module 22, a target face picture acquisition module 23, a risk identification probability acquisition module 24 and a risk identification result acquisition module 25. The functions implemented by these modules correspond one-to-one to the steps of the risk identification method in embodiment 3; to avoid redundancy, they are not described in detail again in this embodiment.
The to-be-identified video data acquisition module 21 is used for acquiring the video data to be identified.

The to-be-identified face picture acquisition module 22 is used for performing face detection on the video data to be identified with a face detection model to acquire the face pictures to be identified.

The target face picture acquisition module 23 is used for grouping the face pictures to be identified to acquire at least one group of target face pictures.

The risk identification probability acquisition module 24 is used for identifying at least one group of target face pictures with the target wind control model obtained by the wind control model training method of embodiment 1 and acquiring the risk identification probability corresponding to each group of target face pictures.

The risk identification result acquisition module 25 is used for acquiring a risk identification result based on the risk identification probabilities.
Preferably, the risk identification result acquisition module 25 is used for calculating the risk identification probabilities with the weighted operation formula

P = Σ_{i=1}^{n} w_i p_i

to obtain the risk identification result, where p_i is the risk identification probability corresponding to the i-th group of target face pictures and w_i is the weight corresponding to that group.
Embodiment 5
This embodiment provides a computer-readable storage medium storing a computer program. When executed by a processor, the computer program implements the wind control model training method of embodiment 1, or the functions of the modules/units of the wind control model training apparatus of embodiment 2, or the risk identification method of embodiment 3, or the functions of the modules/units of the risk identification device of embodiment 4; to avoid repetition, the details are not described here again.
Embodiment 6
Fig. 8 is a schematic diagram of a computer device provided by an embodiment of the invention. As shown in Fig. 8, the computer device 80 of this embodiment includes a processor 81, a memory 82, and a computer program 83 stored in the memory 82 and executable on the processor 81. When executing the computer program 83, the processor 81 implements the steps of the wind control model training method of embodiment 1, or the functions of the modules/units of the wind control model training apparatus of embodiment 2, or the steps of the risk identification method of embodiment 3, or the functions of the modules/units of the risk identification device of embodiment 4; to avoid repetition, the details are not described here again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the division into the functional units and modules described above is illustrated; in practical applications, the functions may be allocated to different functional units and modules as needed, that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (8)

1. A wind control model training method, characterized by comprising the following steps:

labeling original video data to obtain positive and negative samples;

performing framing and face detection on the positive and negative samples to obtain training face pictures;

grouping the training face pictures by a preset number to obtain at least one group of target training data, the target training data comprising N consecutive frames of the training face pictures;

dividing the target training data by a preset ratio to obtain a training set and a test set;

inputting each group of target training data in the training set into a convolutional neural network-long short-term memory (CNN-LSTM) model for training to obtain an original wind control model, wherein the training comprises the following steps: initializing model parameters of the CNN-LSTM model;

performing feature extraction on the target training data in the training set with the convolutional neural network to obtain face features;

inputting the face features into the long short-term memory network for training: training the face features with a forward propagation algorithm to obtain first state parameters, and performing error calculation on the first state parameters with a back propagation algorithm to obtain the original wind control model; and

testing the original wind control model with each group of the target training data in the test set to obtain a target wind control model.
2. The wind control model training method according to claim 1, wherein performing framing and face detection on the positive and negative samples to obtain training face pictures comprises:

framing the positive and negative samples to obtain video images; and

performing face detection on the video images with a face detection model to obtain the training face pictures.
3. A risk identification method, characterized by comprising:

acquiring video data to be identified;

performing face detection on the video data to be identified with a face detection model to acquire face pictures to be identified;

grouping the face pictures to be identified to acquire at least one group of target face pictures;

identifying at least one group of target face pictures with a target wind control model obtained by the wind control model training method according to any one of claims 1-2, and acquiring a risk identification probability corresponding to each group of target face pictures; and

acquiring a risk identification result based on the risk identification probability.
4. The risk identification method according to claim 3, wherein acquiring a risk identification result based on the risk identification probability comprises:

calculating the risk identification probabilities with the weighted operation formula

P = Σ_{i=1}^{n} w_i p_i

to obtain the risk identification result, wherein p_i is the risk identification probability corresponding to the i-th group of target face pictures and w_i is the weight corresponding to that group.
5. A wind control model training apparatus, characterized by comprising:

a positive and negative sample acquisition module, used for labeling original video data to obtain positive and negative samples;

a training face picture acquisition module, used for performing framing and face detection on the positive and negative samples to obtain training face pictures;

a target training data acquisition module, used for grouping the training face pictures by a preset number to obtain at least one group of target training data, the target training data comprising N consecutive frames of the training face pictures;

a target training data dividing module, used for dividing the target training data by a preset ratio to obtain a training set and a test set;

an original wind control model acquisition module, used for inputting each group of the target training data in the training set into a convolutional neural network-long short-term memory (CNN-LSTM) model for training to obtain an original wind control model, the original wind control model acquisition module comprising:

a model initialization unit, used for initializing model parameters of the CNN-LSTM model;

a face feature acquisition unit, used for performing feature extraction on the target training data in the training set with the convolutional neural network to obtain face features; and

an original wind control model acquisition unit, used for inputting the face features into the long short-term memory network for training: training the face features with a forward propagation algorithm to obtain first state parameters, and performing error calculation on the first state parameters with a back propagation algorithm to obtain the original wind control model; and

a target wind control model acquisition module, used for testing the original wind control model with each group of the target training data in the test set to obtain a target wind control model.
6. A risk identification device, characterized by comprising:

a to-be-identified video data acquisition module, used for acquiring video data to be identified;

a to-be-identified face picture acquisition module, used for performing face detection on the video data to be identified with a face detection model to acquire face pictures to be identified;

a target face picture acquisition module, used for grouping the face pictures to be identified to acquire at least one group of target face pictures;

a risk identification probability acquisition module, used for identifying at least one group of target face pictures with the target wind control model obtained by the wind control model training method according to any one of claims 1-2 and acquiring a risk identification probability corresponding to each group of target face pictures; and

a risk identification result acquisition module, used for acquiring a risk identification result based on the risk identification probability.
7. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the wind control model training method according to any one of claims 1-2, or the steps of the risk identification method according to any one of claims 3-4.

8. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the wind control model training method according to any one of claims 1-2, or the steps of the risk identification method according to any one of claims 3-4.