WO2019184124A1 - Risk control model training method, risk identification method, apparatus, device and medium - Google Patents

Risk control model training method, risk identification method, apparatus, device and medium (风控模型训练方法、风险识别方法、装置、设备及介质)

Info

Publication number
WO2019184124A1
WO2019184124A1 · PCT/CN2018/094216
Authority
WO
WIPO (PCT)
Prior art keywords
training
target
risk control
model
Prior art date
Application number
PCT/CN2018/094216
Other languages
English (en)
French (fr)
Inventor
马潜 (Ma Qian)
Original Assignee
Ping An Technology (Shenzhen) Co., Ltd. (平安科技(深圳)有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology (Shenzhen) Co., Ltd.
Publication of WO2019184124A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0635Risk analysis of enterprise or organisation activities
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/02Banking, e.g. interest calculation or account maintenance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/03Credit; Loans; Processing thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/08Insurance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Definitions

  • The present application relates to the field of risk identification, and in particular to a risk control model training method, a risk identification method, and corresponding apparatuses, devices and media.
  • In the financial industry, every loan issuance is subject to risk control to determine whether the loan can be granted to the borrower.
  • The traditional risk control process relies mainly on face-to-face communication between the credit reviewer and the borrower. During such communication, however, the reviewer may lose concentration or lack a deep understanding of facial expressions and so miss subtle changes in the borrower's face, even though these subtle expression changes may reflect the borrower's psychological activity (such as lying) during the conversation.
  • Some financial institutions have therefore gradually adopted risk control models that identify whether the borrower is lying, to assist in loan risk control.
  • Current risk control models need a series of micro-expression recognition models to capture facial features and then infer the borrower's psychological activity during the loan interview from these subtle expression changes, in order to achieve risk control. However, these micro-expression recognition models are trained with general-purpose neural networks, which leaves the models with limited accuracy and low recognition efficiency.
  • The embodiments of the present application provide a risk control model training method, apparatus, device and medium, to solve the problem that current risk identification requires a series of micro-expression recognition models, resulting in low recognition efficiency.
  • The embodiments of the present application also provide a risk identification method, to solve the problem that current risk identification models are trained with general-purpose neural network models, so that their recognition accuracy is not high.
  • In a first aspect, the embodiments of the present application provide a risk control model training method, including:
  • labeling original video data to obtain positive and negative samples;
  • performing framing and face detection on the positive and negative samples to obtain training face images;
  • grouping the training face images according to a preset number to obtain at least one set of target training data, where the target training data includes the training face images of N consecutive frames;
  • dividing the target training data according to a preset ratio to obtain a training set and a test set;
  • inputting each set of the target training data in the training set into a convolutional neural network-long short-term memory (CNN-LSTM) model for training to obtain an original risk control model;
  • testing the original risk control model with each set of the target training data in the test set to obtain a target risk control model.
  • The embodiments of the present application provide a risk control model training apparatus, including:
  • a positive and negative sample acquisition module, configured to label original video data to obtain positive and negative samples;
  • a training face image acquisition module, configured to perform framing and face detection on the positive and negative samples to obtain training face images;
  • a target training data acquisition module, configured to group the training face images according to a preset number to obtain at least one set of target training data, where the target training data includes the training face images of N consecutive frames;
  • a target training data division module, configured to divide the target training data according to a preset ratio to obtain a training set and a test set;
  • an original risk control model acquisition module, configured to input each set of the target training data in the training set into the CNN-LSTM model for training to obtain an original risk control model;
  • a target risk control model acquisition module, configured to test the original risk control model with each set of the target training data in the test set to obtain a target risk control model.
  • The embodiments of the present application provide a computer device including a memory, a processor, and computer readable instructions stored in the memory and executable on the processor, where the processor, when executing the computer readable instructions, implements the steps of the risk control model training method of the first aspect.
  • The embodiments of the present application provide one or more non-volatile readable storage media storing computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the risk control model training method of the first aspect.
  • The embodiments of the present application further provide a risk identification method, including:
  • acquiring video data to be identified;
  • performing face detection on the video data to be identified with a face detection model to obtain face images to be identified;
  • grouping the face images to be identified to obtain at least one set of target face images;
  • identifying at least one set of the target face images with the target risk control model obtained by the risk control model training method of the first aspect, to obtain a risk identification probability corresponding to each set of the target face images;
  • obtaining a risk identification result based on the risk identification probability.
  • The embodiments of the present application provide a risk identification apparatus, including:
  • a to-be-identified video data acquisition module, configured to acquire video data to be identified;
  • a to-be-identified face image acquisition module, configured to perform face detection on the video data to be identified with a face detection model to obtain face images to be identified;
  • a target face image acquisition module, configured to group the face images to be identified to obtain at least one set of target face images;
  • a risk identification probability acquisition module, configured to identify at least one set of the target face images with the target risk control model obtained by the risk control model training method of the first aspect, to obtain a risk identification probability corresponding to each set of the target face images;
  • a risk identification result acquisition module, configured to obtain a risk identification result based on the risk identification probability.
  • The embodiments of the present application provide a computer device including a memory, a processor, and computer readable instructions stored in the memory and executable on the processor, where the processor, when executing the computer readable instructions, implements the steps of the above risk identification method.
  • The embodiments of the present application provide one or more non-volatile readable storage media storing computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the above risk identification method.
  • FIG. 1 is a flowchart of the risk control model training method provided in Embodiment 1 of the present application;
  • FIG. 2 is a detailed flowchart of step S12 in FIG. 1;
  • FIG. 3 is a detailed flowchart of step S15 in FIG. 1;
  • FIG. 4 is a detailed flowchart of step S153 in FIG. 3;
  • FIG. 5 is a schematic block diagram of the risk control model training apparatus provided in Embodiment 2 of the present application;
  • FIG. 6 is a flowchart of the risk identification method provided in Embodiment 3 of the present application;
  • FIG. 7 is a schematic block diagram of the risk identification apparatus provided in Embodiment 4 of the present application;
  • FIG. 8 is a schematic diagram of the computer device provided in Embodiment 6 of the present application.
  • Embodiment 1. FIG. 1 shows a flowchart of the risk control model training method in this embodiment.
  • The risk control model training method can be applied in financial institutions such as banks, securities firms and insurance companies, so that the trained risk control model can assist the credit reviewer in performing risk control on a borrower and thus in determining whether a loan can be granted to that borrower.
  • As shown in FIG. 1, the risk control model training method includes the following steps:
  • S11: Label the original video data to obtain positive and negative samples.
  • The original video data is open-source video data obtained from data sets published on the Internet or by third-party institutions/platforms, and it includes both lying video data and non-lying video data.
  • Lie labels are attached to the original video data: the lying video data is labeled "0" and the non-lying video data is labeled "1", yielding the positive and negative samples; this facilitates model training and improves the efficiency of model training.
  • The ratio of positive to negative samples is set to 1:1, that is, equal proportions of lying video data and non-lying video data are obtained, which effectively prevents the model from over-fitting, so that the risk control model obtained by training on the positive and negative samples recognizes more accurately (a labeling sketch follows below).
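  • As an illustration only, the following is a minimal Python sketch of this labeling and 1:1 balancing step; the directory layout, file extension and function name are assumptions, not part of the application:

```python
import random
from pathlib import Path

def build_balanced_samples(lie_dir: str, truth_dir: str, seed: int = 42):
    """Label lying videos "0" and non-lying videos "1", then balance them 1:1."""
    lie_videos = [(str(p), 0) for p in Path(lie_dir).glob("*.mp4")]
    truth_videos = [(str(p), 1) for p in Path(truth_dir).glob("*.mp4")]
    # Keeping equal numbers of positive and negative samples helps prevent over-fitting.
    n = min(len(lie_videos), len(truth_videos))
    random.seed(seed)
    samples = random.sample(lie_videos, n) + random.sample(truth_videos, n)
    random.shuffle(samples)
    return samples  # list of (video_path, label) pairs
```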
  • S12: Perform framing and face detection on the positive and negative samples to obtain the training face images.
  • A training face image is a picture containing a person's facial features, obtained by performing framing and face detection on the positive and negative samples. Since the risk control model in this embodiment is trained on micro-expression features, framing and face detection must be performed on the positive and negative samples, and the pictures containing facial features that are obtained are the training face images. Model training is then carried out on the training face images, so that the risk control model can extract micro-expression features from them and perform deep learning, improving the recognition accuracy of the risk control model.
  • S13: Group the training face images according to a preset number, and obtain at least one set of target training data; the target training data includes the training face images of N consecutive frames.
  • The grouping is performed according to the preset number so that each set of target training data includes the training face images of N consecutive frames; the changes in a face's micro-expression features can then be obtained from the N consecutive frames, giving the training face images a temporal order and increasing the accuracy of the target risk control model.
  • The preset number may be set in the range [50, 200]: if 50 frames or fewer of training face images were used as one set of training data, the images might be too few to show the changing process of a person's facial features while lying, lowering the recognition accuracy of the risk control model; if 200 frames or more were used as one set of training data, model training would take too long, reducing the efficiency of model training. In this embodiment, every 100 consecutive frames of training face images are taken as one set of training data for model training, which improves both the training efficiency of the model and the recognition accuracy of the trained risk control model (a grouping sketch follows below).
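  • A minimal sketch of this grouping step, assuming the frames arrive as an ordered list; the group size of 100 follows the embodiment:

```python
def group_frames(frames: list, n: int = 100) -> list:
    """Split an ordered list of face frames into sets of n consecutive frames.

    Each returned group is one set of target training data; a trailing remainder
    shorter than n is dropped so that every group has exactly n frames.
    """
    return [frames[i:i + n] for i in range(0, len(frames) - n + 1, n)]
```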
  • S14: Divide the target training data according to a preset ratio, and obtain the training set and the test set.
  • The preset ratio is a ratio, fixed in advance, by which the target training data is divided; it may be a ratio chosen on the basis of historical experience.
  • The training set is the sample data set used for learning: the target training data in the training set is used to train the machine learning model, i.e. to build the classifier by fitting its parameters.
  • The test set is used to measure the discriminative power of the trained machine learning model, such as its recognition rate.
  • Specifically, the target training data may be divided at a ratio of 9:1, with 90% of the data used as the training set and the remaining 10% as the test set (a sketch follows below).
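  • A minimal sketch of the 9:1 division, assuming the grouped target training data is a list of frame groups:

```python
import random

def split_train_test(groups: list, train_ratio: float = 0.9, seed: int = 42):
    """Divide the sets of target training data into a training set and a test set."""
    random.seed(seed)
    shuffled = groups[:]
    random.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]  # 90% training set, 10% test set
```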
  • S15: Input each set of target training data in the training set into the convolutional neural network-long short-term memory (CNN-LSTM) model for training, and obtain the original risk control model.
  • The CNN-LSTM model is obtained by combining a convolutional neural network model with a long short-term memory model; it can be understood as a model formed by connecting a convolutional neural network to an LSTM network.
  • A convolutional neural network (CNN) is a locally connected network. Its most distinctive properties, compared with a fully connected network, are local connectivity and weight sharing. For a pixel p in an image, the closer another pixel lies to p, the greater its influence on p (local connectivity). Weight sharing can be understood as convolution-kernel sharing: a convolution kernel is convolved with a given image to extract one image feature, and different convolution kernels extract different image features. The local connectivity of convolutional neural networks reduces the complexity of the model and improves the efficiency of model training; their weight sharing additionally lets convolutional neural networks learn in parallel, further improving training efficiency.
  • The long short-term memory (LSTM) model is a recurrent neural network model suited to processing and predicting events in time series with relatively long intervals and delays; it has a temporal memory function. Since the features of each frame's training face image in this embodiment are closely related to those of the frames immediately before and after it, the LSTM model is used to train on the extracted features, capturing the long-range memory in the data and improving the accuracy of the model.
  • Because the target training data, i.e. the training face images of N consecutive frames, must first undergo feature extraction, the convolutional neural network model, the commonly used neural network for image feature extraction, is employed; its weight sharing and local connectivity greatly increase the efficiency of model training. The extracted face features are then trained in the LSTM model to reflect the long-range memory in the data and improve the accuracy of the model.
  • S16: Test the original risk control model with each set of target training data in the test set, and obtain the target risk control model.
  • The target risk control model is the model obtained by testing the original risk control model with the training face images in the test set until the accuracy of the original risk control model reaches a preset accuracy.
  • Specifically, the original risk control model is tested with the target training data in the test set, i.e. the training face images of N consecutive frames, to obtain the corresponding accuracy; if the accuracy reaches the preset accuracy, the original risk control model is adopted as the target risk control model.
  • In the risk control model training method provided by this embodiment, the original video data is first labeled to obtain positive and negative samples, which facilitates model training and improves training efficiency. Setting the positive and negative samples to equal proportions effectively prevents the model from over-fitting, so that the risk control model obtained by training on the positive and negative samples recognizes more accurately.
  • Framing and face detection are then performed on the positive and negative samples to obtain the pictures containing facial features, i.e. the training face images, so that the risk control model can extract micro-expression features from the training face images and perform deep learning, improving the recognition accuracy of the risk control model.
  • The training face images are grouped according to a preset number, so that every N consecutive frames form one set of target training data for model training, which improves both the training efficiency of the model and the recognition accuracy of the risk control model.
  • The target training data is divided according to the preset ratio to obtain the training set and the test set, and each set of target training data in the training set is input into the CNN-LSTM model for training to obtain the original risk control model, which gives the original risk control model a temporal character. Because of the weight sharing of the convolutional neural network, the network can learn in parallel, improving the efficiency of model training; because of its local connectivity, the complexity of the model is reduced, improving training efficiency further.
  • Finally, the original risk control model is tested with each set of target training data in the test set to obtain the target risk control model, so that the recognition of the target risk control model is more accurate.
  • In a specific embodiment, as shown in FIG. 2, step S12, in which framing and face detection are performed on the positive and negative samples to obtain the training face images, specifically includes the following steps:
  • S121: Frame the positive and negative samples to obtain the video images. Framing refers to splitting the original video data at a preset time interval to obtain video images. Specifically, after the positive and negative samples are framed, the video images are further normalized and time-labeled. Normalization here is a way of simplifying computation, transforming a dimensional expression into a unified, dimensionless one. For example, since the positive and negative samples in this embodiment must cover the customer's face region so that the customer's micro-expression features can be extracted, the pixels of the framed video images are normalized to 260*260; with the pixels unified, face detection can subsequently be performed on each frame of video image, improving the accuracy of model recognition (a framing sketch follows below).
  • Time-labeling the video images means marking each frame of video image in temporal order, so that the video images carry a time sequence, improving the accuracy of the model.
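  • A minimal sketch of this framing and normalization step using OpenCV; the 260*260 size follows the embodiment, and returning the frames in list order stands in for the time labels:

```python
import cv2  # OpenCV

def frame_video(path: str, size: int = 260) -> list:
    """Split a video into frames, normalize each frame to size*size pixels,
    and return the frames in temporal order (the list index acts as the time label)."""
    capture = cv2.VideoCapture(path)
    frames = []
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        frames.append(cv2.resize(frame, (size, size)))
    capture.release()
    return frames
```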
  • S122: Perform face detection on the video images with a face detection model, and obtain the training face images.
  • The face detection model is a pre-trained model for detecting whether each frame of video image contains a person's face region. Specifically, each frame of video image is input into the face detection model, the position of the face in each frame is detected, and the video images containing a face are then extracted as the training face images, to be provided as input to the subsequent model.
  • In this embodiment, the positive and negative samples are framed and normalized to obtain the video images, and the pixels of each frame of video image are unified so that face detection can subsequently be performed on each frame, improving the efficiency of risk control model training.
  • Detecting faces in the video images with the face detection model yields the video images that contain a face, i.e. the training face images, providing technical support for the input of the subsequent model. Performing model training only on the video images containing a face excludes interference from other factors, so that the model can extract micro-expression features from the training face images, providing technical support for the training of the risk control model.
  • In a specific embodiment, the face detection model in step S122 is a face detection model obtained by training the CascadeCNN network.
  • CascadeCNN (cascade convolutional neural network) is a deep convolutional network implementation of the classic Viola-Jones method and is among the faster methods for face detection; Viola-Jones is a face detection framework. Training pictures annotated with face positions by the CascadeCNN method to obtain the face detection model improves the recognition efficiency of the face detection model.
  • The steps of training the pictures annotated with face positions (the training face pictures) by the CascadeCNN method are as follows:
  • In the first stage of training, the 12-net network scans the picture and rejects more than 90% of the candidate windows. The remaining windows are input into the 12-calibration-net network for correction, and the corrected images are then processed with the non-maximum suppression algorithm to eliminate highly overlapping windows. Here, 12-net slides a 12×12 detection window, with a stride of 4, across a W (wide) × H (high) picture to obtain the detection windows; 12-calibration-net is a correction network that corrects the face region and outputs its coordinates. The non-maximum suppression algorithm is widely used in the fields of object detection and localization; the essence of the algorithm is to search for local maxima and suppress non-maximum elements (a sketch follows below).
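  • A minimal numpy sketch of non-maximum suppression under the usual intersection-over-union formulation; the box format and threshold are assumptions:

```python
import numpy as np

def non_max_suppression(boxes: np.ndarray, scores: np.ndarray, iou_thresh: float = 0.5):
    """Keep the highest-scoring windows and suppress those that overlap them too much.

    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) window confidences.
    Returns the indices of the retained windows.
    """
    order = scores.argsort()[::-1]  # local maxima first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of the best window with the remaining windows.
        x1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        y1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        x2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        y2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, x2 - x1) * np.maximum(0, y2 - y1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_thresh]  # suppress non-maximum windows
    return keep
```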
  • Specifically, the windows in the training face pictures judged to be non-face (i.e. not exceeding a preset threshold) are taken as negative samples, and all true faces (i.e. windows exceeding the preset threshold) are taken as positive samples, to obtain the corresponding detection windows. The preset threshold is a value fixed in advance by the developer for deciding whether a face is present in the training data.
  • In the second stage of training, the images output by the first stage are processed with a 24-net and a 24-calibration-net network. Both 12-net and 24-net decide whether a region is a face region; the difference is that 24-net builds on 12-net: a 24×24 picture is input into the 24-net network to obtain the features extracted by the 24-net fully connected layer, the same 24×24 picture is simultaneously scaled to 12×12 and input into the 12-net fully connected layer, and the features extracted by the 24-net fully connected layer are finally output together with those obtained by the 12-net fully connected layer. The 12-calibration-net and 24-calibration-net networks are correction networks.
  • Face detection is performed on the training data with the 24-net network described above; the windows judged non-face in the training data are taken as negative samples, and all real faces are taken as positive samples.
  • In the third stage, the output of the second stage of training is processed with a 48-net and a 48-calibration-net network to complete the final stage of training. This stage is handled similarly to the second stage of training and, to avoid repetition, is not described again here.
  • When the face detection model obtained by this CascadeCNN network training performs face detection on the video images to obtain the training face images, the process is consistent with the training process above and, to avoid repetition, is not detailed again here.
  • In a specific embodiment, as shown in FIG. 3, step S15, in which each set of target training data in the training set is input into the CNN-LSTM model for training to obtain the original risk control model, specifically includes the following steps:
  • S151: Initialize the CNN-LSTM model. Initializing the CNN-LSTM model refers to pre-initializing the model parameters of the convolutional neural network model (i.e. the convolution kernels and offsets) and the model parameters of the LSTM model (i.e. the connection weights between layers).
  • A convolution kernel is a weight of the convolutional neural network: when training data is input, it is multiplied by these weights, i.e. convolved with the kernels, and the result feeds the output of the neuron, reflecting the importance of the corresponding training data. The offset is the linear component added to the input after it has been multiplied by the weight.
  • The process of model training can then be completed on the basis of the determined convolution kernels, offsets, and inter-layer connection weights of the LSTM model.
  • S152: Perform feature extraction on the target training data in the training set with the convolutional neural network, and obtain the face features.
  • The face features are the facial features obtained by using the convolutional neural network to extract features from the target training data in the training set, i.e. from the training face images of N consecutive frames.
  • Specifically, the feature extraction proceeds in two operations. First, the target training data is convolved with the convolutional neural network model to obtain the convolved feature maps; the convolution operation can be written as

  y_j = Σ_i (x_i * w_ij) + b_j,

  where * denotes the convolution operation, x_i denotes the i-th input feature map, y_j denotes the j-th output feature map, w_ij is the convolution kernel (weight) between the i-th input feature map and the j-th output feature map, and b_j denotes the offset term of the j-th output feature map.
  • Then, max-pooling downsampling is applied to the convolved feature maps to reduce their dimensionality:

  y_j(m, n) = max over 0 ≤ s, t < S of x_i(m·S + s, n·S + t),

  where y_j denotes the j-th output map of the downsampling (i.e. the downsampled feature map), each neuron of which is obtained by local sampling of the i-th input map (the convolved feature map) within an S*S downsampling window, and m and n index the horizontal and vertical steps by which the downsampling window has moved (a sketch of both operations follows below).
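  • A minimal numpy sketch of these two operations on a single feature map; the kernel and window sizes are illustrative assumptions, and the convolution is implemented, as is common in CNN libraries, as cross-correlation:

```python
import numpy as np

def conv2d(x: np.ndarray, w: np.ndarray, b: float) -> np.ndarray:
    """Valid 2-D convolution of one input feature map x with one kernel w plus offset b."""
    kh, kw = w.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(x[r:r + kh, c:c + kw] * w) + b
    return out

def max_pool(x: np.ndarray, s: int = 2) -> np.ndarray:
    """Max-pooling downsampling with an s*s window that moves s steps at a time."""
    h, w = x.shape[0] // s, x.shape[1] // s
    return x[:h * s, :w * s].reshape(h, s, w, s).max(axis=(1, 3))
```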
  • S153: Input the face features into the LSTM model for training, and obtain the original risk control model.
  • The LSTM model is a neural network model with long-range memory capability and has a three-layer network structure: an input layer, a hidden layer and an output layer. The input layer is the first layer of the LSTM model and receives the external signal, i.e. the face features carrying their timing states; because the face features obtained from the training face images in the training set after the processing of step S152 are themselves in temporal order, they can be applied in the LSTM model, enabling the LSTM to receive face features that carry timing states. The output layer is the last layer of the LSTM model and outputs the signal, i.e. it is responsible for outputting the calculation result of the LSTM model. The hidden layer comprises the layers of the LSTM model other than the input and output layers, and processes the input face features to obtain the calculation result of the LSTM model.
  • The original risk control model is the model obtained by iterating the LSTM model over the time-ordered face features multiple times until convergence. It can be understood that training the extracted face features with the LSTM model strengthens the temporal character of the resulting original risk control model and thereby improves its accuracy.
  • Specifically, the output layer of the LSTM model is subjected to regression processing with Softmax (a regression model) to classify the output weight matrix. Softmax is a classification function commonly used in neural networks: it maps the outputs of multiple neurons into the interval [0, 1], which can be read as probabilities, and it is simple and convenient to compute, so that multi-class output becomes more accurate (a one-function sketch follows below).
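  • A one-function sketch of the Softmax mapping described above:

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    """Map the raw outputs of multiple neurons into [0, 1] so that they sum to 1."""
    e = np.exp(z - z.max())  # subtracting the max improves numerical stability
    return e / e.sum()
```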
  • In this embodiment, the CNN-LSTM model is initialized, the target training data in the training set is trained through the convolutional neural network model to obtain the face features, and the acquired face features are then input into the LSTM model for training. This process requires no manual feature extraction: the training face images are fed directly into the CNN-LSTM model, and the model extracts the features by itself, improving the training efficiency of the model (an architecture sketch follows below).
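  • To make the architecture concrete, here is a minimal sketch of a CNN-LSTM of the kind this embodiment describes, written against the Keras API; the 100-frame 260*260 input and the two-class Softmax output follow the embodiment, while the layer counts and filter numbers are illustrative assumptions rather than the application's actual configuration:

```python
from tensorflow.keras import layers, models

def build_cnn_lstm(frames: int = 100, size: int = 260, channels: int = 3):
    """CNN front end applied to every frame, LSTM over the frame sequence,
    Softmax output giving a two-class (lying / not lying) probability."""
    model = models.Sequential([
        layers.Input(shape=(frames, size, size, channels)),
        # TimeDistributed applies the same (weight-shared) CNN to each frame.
        layers.TimeDistributed(layers.Conv2D(16, 3, activation="relu")),
        layers.TimeDistributed(layers.MaxPooling2D(4)),
        layers.TimeDistributed(layers.Conv2D(32, 3, activation="relu")),
        layers.TimeDistributed(layers.MaxPooling2D(4)),
        layers.TimeDistributed(layers.Flatten()),
        layers.LSTM(128),                       # long-range temporal memory
        layers.Dense(2, activation="softmax"),  # risk identification probability
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```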
  • In a specific embodiment, as shown in FIG. 4, inputting the face features into the LSTM model for training (i.e. step S153) specifically includes the following steps:
  • S1531: Train on the face features with the forward propagation algorithm to obtain the first state parameters.
  • Training on the face features with the forward propagation algorithm refers to training in the order of the timing states carried by the face features; the first state parameters are the parameters obtained by the initial iterations of model training on the face features. The forward propagation algorithm trains the model in temporal order and can be written as

  S_t = tanh(W·X_t + U·S_(t-1)) and ŷ_t = V·S_t,

  where S_t denotes the output of the hidden layer at the current time t, U denotes the weight from the hidden layer at the previous time to the hidden layer at the current time, W denotes the weight from the input layer to the hidden layer, ŷ_t denotes the predicted output at the current time, and V denotes the weight from the hidden layer to the output layer.
  • That is, forward propagation takes the input X_t at the current time t together with the output S_(t-1) of the hidden units at the previous time t-1 (the memory cells inside the hidden layer of the LSTM model) as the hidden-layer input, passes it through the hidden layer's tanh (hyperbolic tangent) activation function to obtain the hidden output S_t at the current time, and from it produces the predicted output ŷ_t at time t (a sketch follows below).
  • Here tanh, the hyperbolic tangent, serves as the activation function: it introduces nonlinear factors so that the trained original risk control model can solve more complicated problems, and it converges fast, which saves training time and improves the efficiency of model training.
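  • A minimal numpy sketch of this forward pass, written for the plain recurrent form given by the formulas above; a full LSTM cell adds input, forget and output gates, which are omitted here for brevity:

```python
import numpy as np

def forward_propagation(X, U, W, V):
    """Run S_t = tanh(W·X_t + U·S_(t-1)) and y_t = softmax(V·S_t) over a
    sequence of face-feature vectors X (one row per time step)."""
    hidden = np.zeros(U.shape[0])
    states, outputs = [], []
    for x_t in X:                    # iterate in temporal order
        hidden = np.tanh(W @ x_t + U @ hidden)
        z = V @ hidden
        e = np.exp(z - z.max())
        outputs.append(e / e.sum())  # predicted output at time t
        states.append(hidden.copy())
    return np.array(states), np.array(outputs)
```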
  • S1532: Perform error calculation on the first state parameters with the back propagation algorithm, and obtain the original risk control model.
  • The back propagation algorithm transfers the accumulated residuals back from the last time step and thereby trains the neural network model. The loss function at the t-th time step of back propagation is the cross entropy

  E_t = -o_t · log(ŷ_t), with total error E = Σ_t E_t,

  where ŷ_t denotes the predicted output at time t and o_t denotes the corresponding true value at time t.
  • Specifically, performing error calculation on the first state parameters with the back propagation algorithm means updating the optimization parameters, i.e. the three weight parameters U, V and W of this embodiment, in reverse temporal order: error back propagation is carried out on the basis of the calculated loss so as to update the weight parameters of the LSTM model and the weight parameters of the convolutional neural network, effectively improving the accuracy of the risk control model.
  • Next, the partial derivative of the loss with respect to each layer is calculated by the chain derivation rule, giving the rates of change of the three weight parameters, and U, V and W are updated on the basis of these three rates of change to obtain the adjusted state parameters. The chain derivation rule is the rule in calculus for differentiating a composite function and is a commonly used method in such derivative calculations.
  • In this embodiment, the model parameters of the LSTM model and the model parameters of the convolutional neural network model can both be updated through the back propagation algorithm of the LSTM model, completing the optimization of the original risk control model.
  • As the number of back propagation layers increases, the gradient can decay exponentially until it vanishes; combining the cross entropy loss function with the tanh activation function alleviates this gradient vanishing problem and increases the accuracy of the training.
  • In this way, the face features are first trained with the forward propagation algorithm to obtain the first state parameters, the error of the first state parameters is then calculated with the back propagation algorithm, and error back propagation updates are performed on the basis of the calculated loss, so that the accuracy of the resulting original risk control model is effectively improved (a sketch of the error calculation follows below).
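  • As a small grounding of the error calculation, a sketch of the cross entropy and of the well-known gradient at the Softmax output layer, from which the update of V (and, via the chain derivation rule, of W and U) proceeds; the learning rate is an illustrative assumption:

```python
import numpy as np

def cross_entropy(y_pred: np.ndarray, y_true: np.ndarray) -> float:
    """Total error E = -sum_t o_t · log(y_hat_t) over all time steps."""
    return float(-np.sum(y_true * np.log(y_pred + 1e-12)))

def update_output_weights(V, states, y_pred, y_true, lr: float = 0.01):
    """One gradient step on V: with Softmax plus cross entropy, dE/dz_t = y_hat_t - o_t,
    so dE/dV = sum_t outer(y_hat_t - o_t, S_t) by the chain derivation rule."""
    grad_V = sum(np.outer(p - t, s) for p, t, s in zip(y_pred, y_true, states))
    return V - lr * grad_V
```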
  • In summary, the convolutional neural network is a locally connected network with local connectivity and weight sharing, so the model can learn in parallel; using the convolutional neural network to extract features from the training face images improves the efficiency of face-feature acquisition and thereby of model training. The acquired face features are then input into the LSTM model for training, and an original risk control model with a temporal character is obtained, which strengthens the predictive ability of the original risk control model over time and improves its accuracy.
  • In the risk control model training method provided by the embodiment of the present application, the original video data is first labeled to obtain positive and negative samples, facilitating model training and improving training efficiency; setting the positive and negative samples to equal proportions effectively prevents over-fitting, so that the risk control model obtained by training on them recognizes more accurately. The positive and negative samples are framed and normalized to obtain the video images, and the pixels of each frame are unified so that face detection can subsequently be performed on each frame, improving the accuracy of risk identification. The face detection model then performs face detection on the video images to obtain the video images containing a face, i.e. the training face images, providing technical support for the input of the subsequent model; performing model training on the video images containing a face lets the model extract micro-expression features from the training face images to achieve the purpose of risk control. The training face images are grouped according to a preset number, so that every N consecutive frames form one set of target training data for model training, improving both the training efficiency of the model and the recognition accuracy of the risk control model. The target training data is divided according to a preset ratio to obtain the training set and the test set, and each set of target training data in the training set is input into the CNN-LSTM model for training to obtain the original risk control model, which gives the original risk control model a temporal character; because of the weight sharing of the convolutional neural network, the network can learn in parallel, improving the efficiency of model training. Finally, the original risk control model is tested with each set of target training data in the test set to obtain the target risk control model, so that the recognition of the target risk control model is more accurate.
  • Embodiment 2. FIG. 5 is a schematic block diagram of the risk control model training apparatus corresponding, module for module, to the risk control model training method of Embodiment 1.
  • As shown in FIG. 5, the risk control model training apparatus includes a positive and negative sample acquisition module 11, a training face image acquisition module 12, a target training data acquisition module 13, a target training data division module 14, an original risk control model acquisition module 15 and a target risk control model acquisition module 16.
  • The functions implemented by these modules correspond one-to-one to the steps of the risk control model training method in Embodiment 1; to avoid redundancy, they are not detailed again in this embodiment.
  • The positive and negative sample acquisition module 11 is configured to label the original video data to obtain the positive and negative samples.
  • The training face image acquisition module 12 is configured to perform framing and face detection on the positive and negative samples to obtain the training face images.
  • The target training data acquisition module 13 is configured to group the training face images according to a preset number to obtain at least one set of target training data, the target training data including the training face images of N consecutive frames.
  • The target training data division module 14 is configured to divide the target training data according to a preset ratio to obtain the training set and the test set.
  • The original risk control model acquisition module 15 is configured to input each set of target training data in the training set into the CNN-LSTM model for training to obtain the original risk control model.
  • The target risk control model acquisition module 16 is configured to test the original risk control model with each set of target training data in the test set to obtain the target risk control model.
  • Specifically, the training face image acquisition module 12 includes a video image acquisition unit 121 and a training face image acquisition unit 122. The video image acquisition unit 121 is configured to frame the positive and negative samples to obtain the video images; the training face image acquisition unit 122 is configured to perform face detection on the video images with the face detection model to obtain the training face images.
  • Specifically, the original risk control model acquisition module 15 includes a model initialization unit 151, a face feature acquisition unit 152 and an original risk control model acquisition unit 153. The model initialization unit 151 is configured to initialize the CNN-LSTM model; the face feature acquisition unit 152 is configured to perform feature extraction on the target training data in the training set with the convolutional neural network to obtain the face features; the original risk control model acquisition unit 153 is configured to input the face features into the LSTM model for training to obtain the original risk control model.
  • Specifically, the original risk control model acquisition unit 153 includes a first state parameter acquisition subunit 1531 and an original risk control model acquisition subunit 1532. The first state parameter acquisition subunit 1531 is configured to train on the face features with the forward propagation algorithm to obtain the first state parameters; the original risk control model acquisition subunit 1532 is configured to perform error calculation on the first state parameters with the back propagation algorithm to obtain the original risk control model.
  • Embodiment 3. FIG. 6 shows a flowchart of the risk identification method in this embodiment.
  • The risk identification method can be applied on computer equipment operated by financial institutions such as banks, securities firms and insurance companies, and can effectively assist the credit reviewer in performing risk control on the borrower, so as to determine whether to issue a loan to the borrower.
  • As shown in FIG. 6, the risk identification method includes the following steps:
  • S21: Acquire the video data to be identified. The video data to be identified is unprocessed video data recording the borrower during the credit review process. Because identification from a single frame of video image is not accurate enough, the video data to be identified in this embodiment is video data composed of at least two frames of video images to be identified.
  • Specifically, the credit reviewer may question the target customer over video chat and thus obtain the video data of the target customer's replies (i.e. the video data to be identified). This makes the credit review process intelligent and removes the need for face-to-face communication between the reviewer and the target customer, saving labor costs.
  • S22: Perform face detection on the video data to be identified with the face detection model, and obtain the face images to be identified.
  • A face image to be identified is a face image obtained by the face detection model performing face detection on the video data to be identified. Specifically, each frame of video image in the video data to be identified is input into the face detection model, the position of the face in each frame is detected, and the video images containing a face are then extracted as the face images to be identified.
  • The face detection model is specifically the face detection model obtained by CascadeCNN network training, and the process of performing face detection on the video data to be identified is the same as the detection process in Embodiment 1; to avoid duplication, it is not described again here.
  • S23: Group the face images to be identified, and obtain at least one set of target face images.
  • Specifically, the face images to be identified are grouped according to a preset number to obtain at least one set of target face images, and the grouping is performed by cross selection, i.e. with overlapping groups. For example, for 40 s of video data to be identified containing 960 frames, with 100 frames forming one group of to-be-identified data (i.e. one set of target face images), the 1st to the 100th picture form one group, the 11th to the 110th picture form the next group, and so on, until at least one set of target face images is obtained.
  • Obtaining at least one set of target face images by the cross selection method fully preserves the connection between the face images to be identified, improving the accuracy of model recognition (a sketch follows below).
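  • A minimal sketch of this cross selection, assuming the window advances 10 frames at a time as in the example above:

```python
def cross_select(frames: list, n: int = 100, step: int = 10) -> list:
    """Group face frames by cross selection: windows of n consecutive frames whose
    starting points advance by `step`, so neighboring groups overlap and the
    connection between the face images to be identified is preserved."""
    return [frames[i:i + n] for i in range(0, len(frames) - n + 1, step)]

# For 960 frames this yields the groups 1-100, 11-110, ..., 861-960 (87 groups).
```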
  • S24: Identify at least one set of target face images with the target risk control model, and obtain the risk identification probability corresponding to each set of target face images.
  • The target risk control model is the target risk control model obtained by training with the risk control model training method of Embodiment 1. Specifically, at least one set of target face images is input into the target risk control model for identification, and the model outputs the risk identification probability corresponding to each set of target face images; the identification probability may be a real number between 0 and 1.
  • S25: Obtain the risk identification result based on the risk identification probability. Specifically, the risk identification probabilities are combined by the weighted operation

  P = Σ_i w_i · p_i

  to obtain the risk control identification result, where p_i is the risk identification probability corresponding to each set of target face images and w_i is the weight corresponding to that set. The weight corresponding to each set of target face images is preset according to the question asked, with different weights for different questions: for basic credit review questions the weight is set lower, while for sensitive questions such as loan purpose, personal income and repayment the weight is set relatively high. Calculating the risk identification result from the risk identification probabilities by this weighted operation makes the risk identification result more accurate (a sketch follows below).
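  • A minimal sketch of this weighted combination; the example weights are assumptions chosen only to illustrate lower weights for basic questions and higher weights for sensitive ones:

```python
def risk_score(probs: list, weights: list) -> float:
    """Weighted risk identification result P = sum_i w_i * p_i."""
    assert len(probs) == len(weights)
    return sum(w * p for w, p in zip(weights, probs))

# One basic-question group weighted 0.2; two sensitive-question groups
# (loan purpose, personal income) weighted 0.4 each.
print(risk_score([0.1, 0.8, 0.7], [0.2, 0.4, 0.4]))  # -> 0.62
```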
  • The distinction between basic credit review questions and sensitive credit review questions is drawn according to whether the question has a standard answer. Taking a bank as an example: the target customer has pre-stored certain personal information (such as ID number, family phone number and home address) at financial institutions such as banks, securities firms and insurance companies, and questions posed on the basis of this pre-stored information, for which standard answers exist, are basic questions. Information that the target customer has not pre-stored at such financial institutions can be regarded as having no standard answer, and questions posed on the basis of that information are sensitive credit review questions.
  • In the risk identification method provided by the embodiment of the present application, the target customer is first questioned over video chat to obtain the video data of the target customer's replies, i.e. the video data to be identified, which makes the credit review process intelligent and removes the need for face-to-face communication between the credit reviewer and the target customer.
  • The face detection model then performs face detection on the video data to be identified, the video images containing a face are extracted as the face images to be identified, and the face images to be identified are grouped by the cross selection method to obtain at least one set of target face images, improving the accuracy of model recognition.
  • The target risk control model identifies at least one set of target face images and obtains the risk identification probability corresponding to each set of target face images, improving the recognition efficiency and recognition accuracy of the target risk control model. Finally, the risk identification result is calculated from the risk identification probabilities by the weighted operation, making the risk identification result more accurate.
  • Embodiment 4. FIG. 7 is a schematic block diagram of the risk identification apparatus corresponding, module for module, to the risk identification method of Embodiment 3.
  • As shown in FIG. 7, the risk identification apparatus includes a to-be-identified video data acquisition module 21, a to-be-identified face image acquisition module 22, a target face image acquisition module 23, a risk identification probability acquisition module 24 and a risk identification result acquisition module 25.
  • The functions implemented by these modules correspond one-to-one to the steps of the risk identification method in Embodiment 3; to avoid redundancy, they are not detailed again in this embodiment.
  • The to-be-identified video data acquisition module 21 is configured to acquire the video data to be identified.
  • The to-be-identified face image acquisition module 22 is configured to perform face detection on the video data to be identified with the face detection model to obtain the face images to be identified.
  • The target face image acquisition module 23 is configured to group the face images to be identified to obtain at least one set of target face images.
  • The risk identification probability acquisition module 24 is configured to identify at least one set of target face images with the target risk control model obtained by the risk control model training method of Embodiment 1, to obtain the risk identification probability corresponding to each set of target face images.
  • The risk identification result acquisition module 25 is configured to obtain the risk identification result based on the risk identification probability; specifically, it applies the weighted operation P = Σ_i w_i · p_i to the risk identification probabilities to obtain the risk control identification result, where p_i is the risk identification probability corresponding to each set of target face images and w_i is the weight corresponding to that set.
  • Embodiment 5. This embodiment provides one or more non-volatile readable storage media storing computer readable instructions which, when executed by one or more processors, cause the one or more processors to implement the risk control model training method of Embodiment 1; or, when executed, to implement the functions of the modules/units of the risk control model training apparatus of Embodiment 2; or, when executed, to implement the risk identification method of Embodiment 3; or, when executed, to implement the functions of the modules/units of the risk identification apparatus of Embodiment 4. To avoid repetition, the details are not described again here.
  • Embodiment 6. FIG. 8 is a schematic diagram of a computer device according to an embodiment of the present application. As shown in FIG. 8, the computer device 80 of this embodiment includes a processor 81, a memory 82, and computer readable instructions 83 stored in the memory 82 and executable on the processor 81.
  • When the processor 81 executes the computer readable instructions 83, the steps of the risk control model training method of Embodiment 1 are implemented, or the functions of the modules/units of the risk control model training apparatus of Embodiment 2 are implemented; alternatively, when the processor 81 executes the computer readable instructions 83, the risk identification method of Embodiment 3 or the functions of the modules/units of the risk identification apparatus of Embodiment 4 are implemented. To avoid repetition, the details are not described again here.

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Strategic Management (AREA)
  • Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Evolutionary Computation (AREA)
  • Marketing (AREA)
  • Development Economics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Technology Law (AREA)
  • Data Mining & Analysis (AREA)
  • Human Resources & Organizations (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Biology (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Educational Administration (AREA)
  • Game Theory and Decision Science (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • Image Analysis (AREA)

Abstract

A risk control model training method, a risk identification method, an apparatus, a device and a medium. The risk control model training method comprises: labeling original video data to obtain positive and negative samples (S11); performing frame splitting and face detection on the positive and negative samples to obtain training face images (S12); grouping the training face images according to a preset number to obtain at least one group of target training data (S13), the target training data comprising N consecutive frames of the training face images; dividing the target training data according to a preset ratio to obtain a training set and a test set (S14); inputting each group of target training data in the training set into a convolutional neural network-long short-term memory (CNN-LSTM) model for training, to obtain an original risk control model (S15); and testing the original risk control model with each group of target training data in the test set, to obtain a target risk control model (S16). The method offers high training efficiency and high recognition accuracy.

Description

Risk control model training method, risk identification method, apparatus, device and medium
This application is based on, and claims priority to, Chinese patent application No. 201810292057.1, filed on March 30, 2018 and entitled "Risk control model training method, risk identification method, apparatus, device and medium".
Technical Field
This application relates to the field of risk identification, and in particular to a risk control model training method, a risk identification method, an apparatus, a device and a medium.
Background
In the financial industry, every disbursement of loan funds requires risk control to determine whether the loan can be granted to the borrower. The traditional risk control process mainly takes the form of a face-to-face interview between a credit auditor and the borrower. During such an interview, however, the auditor may, through a lapse of attention or limited knowledge of facial expressions, overlook subtle changes in the borrower's facial expression, changes that can reflect the borrower's mental activity during the conversation (such as lying). Some financial institutions have gradually adopted risk control models that recognize whether a borrower is lying, to assist loan risk control. Current risk control models need a series of micro-expression recognition models to capture facial features and then infer the borrower's mental activity from these subtle expression changes for risk control purposes; however, these micro-expression recognition models are trained with generic neural networks, so their accuracy is low and recognition is inefficient.
Summary
Embodiments of the present application provide a risk control model training method, apparatus, device and medium, to solve the problem that current risk identification models require a series of micro-expression recognition models, resulting in low recognition efficiency.
Embodiments of the present application provide a risk identification method, to solve the problem that current risk identification models are trained with generic neural network models, resulting in low recognition accuracy.
An embodiment of the present application provides a risk control model training method, comprising:
labeling original video data to obtain positive and negative samples;
performing frame splitting and face detection on the positive and negative samples to obtain training face images;
grouping the training face images according to a preset number to obtain at least one group of target training data, the target training data comprising N consecutive frames of the training face images;
dividing the target training data according to a preset ratio to obtain a training set and a test set;
inputting each group of target training data in the training set into a convolutional neural network-long short-term memory (CNN-LSTM) model for training, to obtain an original risk control model;
testing the original risk control model with each group of target training data in the test set, to obtain a target risk control model.
An embodiment of the present application provides a risk control model training apparatus, comprising:
a positive and negative sample obtaining module, configured to label original video data to obtain positive and negative samples;
a training face image obtaining module, configured to perform frame splitting and face detection on the positive and negative samples to obtain training face images;
a target training data obtaining module, configured to group the training face images according to a preset number to obtain at least one group of target training data, the target training data comprising N consecutive frames of the training face images;
a target training data dividing module, configured to divide the target training data according to a preset ratio to obtain a training set and a test set;
an original risk control model obtaining module, configured to input each group of target training data in the training set into the CNN-LSTM model for training, to obtain an original risk control model;
a target risk control model obtaining module, configured to test the original risk control model with each group of target training data in the test set, to obtain a target risk control model.
An embodiment of the present application provides a computer device, comprising a memory, a processor, and computer readable instructions stored in the memory and executable on the processor, wherein the processor, when executing the computer readable instructions, implements the following steps: labeling original video data to obtain positive and negative samples; performing frame splitting and face detection on the positive and negative samples to obtain training face images; grouping the training face images according to a preset number to obtain at least one group of target training data, the target training data comprising N consecutive frames of the training face images; dividing the target training data according to a preset ratio to obtain a training set and a test set; inputting each group of target training data in the training set into the CNN-LSTM model for training, to obtain an original risk control model; and testing the original risk control model with each group of target training data in the test set, to obtain a target risk control model.
An embodiment of the present application provides one or more non-volatile readable storage media storing computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the same steps: labeling original video data to obtain positive and negative samples; performing frame splitting and face detection on the positive and negative samples to obtain training face images; grouping the training face images according to a preset number to obtain at least one group of target training data, the target training data comprising N consecutive frames of the training face images; dividing the target training data according to a preset ratio to obtain a training set and a test set; inputting each group of target training data in the training set into the CNN-LSTM model for training, to obtain an original risk control model; and testing the original risk control model with each group of target training data in the test set, to obtain a target risk control model.
An embodiment of the present application provides a risk identification method, comprising:
obtaining video data to be identified;
performing face detection on the video data to be identified using a face detection model, to obtain face images to be identified;
grouping the face images to be identified, to obtain at least one group of target face images;
identifying the at least one group of target face images using the target risk control model obtained by the risk control model training method of the first aspect, to obtain a risk identification probability corresponding to each group of target face images;
obtaining a risk identification result based on the risk identification probabilities.
An embodiment of the present application provides a risk identification apparatus, comprising:
a to-be-identified video data obtaining module, configured to obtain video data to be identified;
a to-be-identified face image obtaining module, configured to perform face detection on the video data to be identified using a face detection model, to obtain face images to be identified;
a target face image obtaining module, configured to group the face images to be identified, to obtain at least one group of target face images;
a risk identification probability obtaining module, configured to identify the at least one group of target face images using the target risk control model obtained by the risk control model training method of the first aspect, to obtain a risk identification probability corresponding to each group of target face images;
a risk identification result obtaining module, configured to obtain a risk identification result based on the risk identification probabilities.
An embodiment of the present application provides a computer device, comprising a memory, a processor, and computer readable instructions stored in the memory and executable on the processor, wherein the processor, when executing the computer readable instructions, implements the following steps: obtaining video data to be identified; performing face detection on the video data to be identified using a face detection model, to obtain face images to be identified; grouping the face images to be identified, to obtain at least one group of target face images; identifying the at least one group of target face images using the target risk control model obtained by the risk control model training method, to obtain a risk identification probability corresponding to each group of target face images; and obtaining a risk identification result based on the risk identification probabilities.
An embodiment of the present application provides one or more non-volatile readable storage media storing computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the same steps: obtaining video data to be identified; performing face detection on the video data to be identified using a face detection model, to obtain face images to be identified; grouping the face images to be identified, to obtain at least one group of target face images; identifying the at least one group of target face images using the target risk control model obtained by the risk control model training method, to obtain a risk identification probability corresponding to each group of target face images; and obtaining a risk identification result based on the risk identification probabilities.
Details of one or more embodiments of the present application are set forth in the accompanying drawings and the description below. Other features and advantages of the present application will become apparent from the specification, the drawings and the claims.
Brief Description of the Drawings
To describe the technical solutions of the embodiments of the present application more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a flowchart of the risk control model training method provided in Embodiment 1 of the present application;
FIG. 2 is a detailed diagram of step S12 in FIG. 1;
FIG. 3 is a detailed diagram of step S15 in FIG. 1;
FIG. 4 is a detailed diagram of step S153 in FIG. 3;
FIG. 5 is a schematic block diagram of the risk control model training apparatus provided in Embodiment 2 of the present application;
FIG. 6 is a flowchart of the risk identification method provided in Embodiment 3 of the present application;
FIG. 7 is a schematic block diagram of the risk identification apparatus provided in Embodiment 4 of the present application;
FIG. 8 is a schematic diagram of the computer device provided in Embodiment 6 of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
Embodiment 1
FIG. 1 shows a flowchart of the risk control model training method in this embodiment. The method can be applied in banks, securities firms, insurers and other financial institutions, so that the trained risk control model assists the credit auditor in applying risk control to a borrower and deciding whether a loan can be granted. As shown in FIG. 1, the method includes the following steps:
S11: Label the original video data to obtain positive and negative samples.
The original video data are open-source video data obtained from data sets published on the Internet or by third-party institutions/platforms, and include lying video data and non-lying video data. Specifically, the original video data are labeled for lying: lying video data are labeled "0" and non-lying video data are labeled "1", yielding the positive and negative samples; this facilitates model training and improves training efficiency.
In this embodiment, the ratio of positive to negative samples is set to 1:1, i.e., equal proportions of lying and non-lying video data are obtained. This effectively prevents overfitting during model training, so that the risk control model trained on the positive and negative samples recognizes more accurately.
S12: Perform frame splitting and face detection on the positive and negative samples to obtain training face images.
The training face images are images containing human facial features, obtained by splitting the positive and negative samples into frames and performing face detection. Since the risk control model in this embodiment is trained on micro-expression features, the positive and negative samples must be split into frames and subjected to face detection; the resulting images containing facial features are the training face images. Training on these images lets the risk control model extract micro-expression features and perform deep learning, improving its recognition accuracy.
S13: Group the training face images according to a preset number to obtain at least one group of target training data; the target training data comprise N consecutive frames of the training face images.
Grouping by a preset number yields at least one group of target training data, each group containing N consecutive frames of training face images. Changes in facial micro-expression features can then be captured across the consecutive frames, giving the training face images temporal order and increasing the accuracy of the target risk control model.
In this embodiment, the preset number may be set in the range [50, 200]. With 50 frames or fewer per group, the training face images are too few to show how a person's facial features evolve while lying, so the recognition accuracy of the risk control model would be low; with 200 frames or more per group, model training takes too long and training efficiency drops. In this embodiment, every one hundred frames of training face images form one group of training data, which improves both training efficiency and the recognition accuracy of the trained risk control model; a grouping sketch follows below.
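As a rough illustration of the grouping in S13 (not part of the original disclosure), the sketch below slices an ordered list of per-frame face images into non-overlapping groups of 100 consecutive frames; the helper name and the choice to drop an incomplete tail group are assumptions.

```python
def group_frames(frames, group_size=100):
    """Split an ordered list of per-frame face images into consecutive,
    non-overlapping groups of `group_size` frames (incomplete tail dropped)."""
    return [frames[i:i + group_size]
            for i in range(0, len(frames) - group_size + 1, group_size)]

# e.g. 960 labeled face frames -> 9 groups of 100 consecutive frames
groups = group_frames(list(range(960)), group_size=100)
assert len(groups) == 9 and len(groups[0]) == 100
```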
S14: Divide the target training data according to a preset ratio to obtain a training set and a test set.
The preset ratio is a predetermined ratio for partitioning the data, and may be taken from historical experience. The training set is the sample data set used for learning: a classifier is built by fitting parameters, i.e., the target training data in the training set are used to train the machine learning model and determine its parameters. The test set is used to assess the discriminative ability of the trained model, e.g., its recognition rate. In this embodiment the data may be divided 9:1, i.e., 90% of the target training data form the training set and the remaining 10% the test set, as in the sketch below.
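A minimal sketch of the 9:1 split just described; the shuffling step and the seed are assumptions for illustration, not stated in the source.

```python
import random

def split_groups(groups, train_ratio=0.9, seed=42):
    """Shuffle the groups and split them into a training set and a test set."""
    groups = groups[:]                      # avoid mutating the caller's list
    random.Random(seed).shuffle(groups)
    cut = int(len(groups) * train_ratio)
    return groups[:cut], groups[cut:]       # (training set, test set)

train_set, test_set = split_groups(groups)  # 90% / 10%
```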
S15: Input each group of target training data in the training set into the CNN-LSTM model for training, to obtain the original risk control model.
The CNN-LSTM model is obtained by combining a convolutional neural network model with a long short-term memory network model; it can be understood as a convolutional neural network connected to an LSTM network.
A convolutional neural network (CNN) is a locally connected network. Compared with a fully connected network, its defining properties are local connectivity and weight sharing. For a given pixel p in an image, pixels closer to p influence it more (local connectivity); moreover, by the statistics of natural images, the weights learned for one region can also be used for another region (weight sharing). Weight sharing can be understood as convolution-kernel sharing: in a CNN, convolving an image with one kernel extracts one kind of image feature, and different kernels extract different features. Local connectivity reduces model complexity and improves training efficiency; weight sharing allows the network to learn in parallel, further improving training efficiency.
The long short-term memory (LSTM) model is a recurrent neural network model suited to processing and predicting important events in time series with relatively long intervals and delays, and it has a temporal memory function. Because the features of each frame of training face image in this embodiment are closely related to the features of the preceding and following frames, the LSTM model is used to train on the extracted features, capturing long-term dependencies in the data and improving model accuracy.
In this embodiment, since the target training data, i.e., N consecutive frames of training face images, are trained on directly, features must first be extracted from the training face images; the CNN is the standard network for image feature extraction, and its weight sharing and local connectivity greatly increase training efficiency. The features of each frame are closely related to those of the neighboring frames, so the LSTM model is then trained on the extracted face features to capture long-term dependencies and improve accuracy. Together, the CNN's weight sharing and local connectivity and the LSTM's long-term memory greatly increase both the training efficiency and the accuracy of the resulting risk control model. A model sketch follows below.
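To make the CNN-LSTM combination concrete, here is a minimal PyTorch sketch (an illustration only; the layer counts, kernel sizes and hidden width are assumptions, since the patent does not fix a specific architecture): a small CNN extracts a feature vector from each frame, the LSTM consumes the per-frame features in temporal order, and a final two-class softmax yields the lying/not-lying probability per clip.

```python
import torch
import torch.nn as nn

class CnnLstmRiskModel(nn.Module):
    def __init__(self, hidden_size=128, num_classes=2):
        super().__init__()
        # Per-frame feature extractor: conv + pooling stages (dimensions assumed).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.Tanh(),
            nn.MaxPool2d(2),                       # 260x260 -> 130x130
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.Tanh(),
            nn.AdaptiveAvgPool2d(4),               # -> 32 x 4 x 4
            nn.Flatten(),                          # -> 512-dim frame feature
        )
        self.lstm = nn.LSTM(input_size=512, hidden_size=hidden_size,
                            batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, clips):                      # clips: (B, N, 3, 260, 260)
        b, n = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1))      # (B*N, 512)
        feats = feats.view(b, n, -1)               # (B, N, 512), in time order
        _, (h_n, _) = self.lstm(feats)             # last hidden state per clip
        return self.classifier(h_n[-1])            # logits; softmax at inference

model = CnnLstmRiskModel()
logits = model(torch.randn(2, 100, 3, 260, 260))   # 2 clips of 100 frames
probs = logits.softmax(dim=-1)                      # per-clip risk probabilities
```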
S16: Test the original risk control model with each group of target training data in the test set, to obtain the target risk control model.
The target risk control model is the model obtained by testing the original risk control model with the training face images in the test set until its accuracy reaches a preset accuracy. Specifically, the original risk control model is tested with the target training data in the test set, i.e., groups of N consecutive frames of training face images, to obtain the corresponding accuracy; if the accuracy reaches the preset accuracy, the original risk control model is taken as the target risk control model.
In this embodiment, the original video data are first labeled to obtain positive and negative samples, which facilitates model training and improves training efficiency; setting the positive and negative samples to equal proportions effectively prevents overfitting, so the trained risk control model recognizes more accurately. The samples are then split into frames and subjected to face detection to obtain the images containing facial features, i.e., the training face images, from which the model can extract micro-expression features and perform deep learning, improving recognition accuracy. The training face images are grouped by a preset number so that each group of N consecutive frames serves as one group of target training data, improving training efficiency and recognition accuracy. The data are divided by a preset ratio into a training set and a test set, and each group of target training data in the training set is fed into the CNN-LSTM model to obtain the original risk control model, which is therefore temporally ordered; weight sharing lets the network learn in parallel and local connectivity lowers model complexity, both improving training efficiency. Finally, the original risk control model is tested with each group of target training data in the test set to obtain the target risk control model, making recognition more accurate.
In a specific implementation, as shown in FIG. 2, step S12, performing frame splitting and face detection on the positive and negative samples to obtain the training face images, includes the following steps:
S121: Split the positive and negative samples into frames to obtain video images.
Frame splitting means dividing the original video data at a preset time step to obtain video images. Specifically, after the frame-splitting step, the video images are further normalized and time-stamped. Normalization is a way of simplifying computation: an expression with physical dimensions is transformed into a dimensionless scalar. In this embodiment, the positive and negative samples must contain the client's facial region for micro-expression features to be extracted, so the split video frames are normalized to 260*260 pixels. Unifying the pixel size facilitates the subsequent per-frame face detection and improves recognition accuracy. Time-stamping means labeling each video frame in chronological order, giving the video images temporal order and improving model accuracy. A frame-splitting sketch follows below.
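A minimal sketch of the frame splitting and 260x260 normalization described above, using OpenCV; the sampling interval `step` is an assumption, since the source only says frames are taken at a preset time step.

```python
import cv2

def split_video(path, step=1, size=(260, 260)):
    """Decode a video, keep every `step`-th frame, and normalize each kept
    frame to 260x260 pixels; the list order preserves the temporal order."""
    cap = cv2.VideoCapture(path)
    frames = []
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frames.append(cv2.resize(frame, size))
        idx += 1
    cap.release()
    return frames  # list of 260x260 BGR images, in time order
```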
S122: Perform face detection on the video images using a face detection model, to obtain the training face images.
The face detection model is a pre-trained model for detecting whether each video frame contains a human facial region. Specifically, each video frame is input into the face detection model, the face position in each frame is detected, and the frames containing faces are extracted as the training face images, providing technical support for the subsequent model input.
In this embodiment, the positive and negative samples are split into frames and normalized to obtain video images with a unified pixel size, facilitating the subsequent per-frame face detection and improving the training efficiency of the risk control model. Face detection is then applied to the video images to obtain the face-containing frames, i.e., the training face images, which serve as model input; training only on face-containing frames excludes interference from other factors, so that the model can extract micro-expression features from the training face images, supporting the training of the risk control model.
In a specific implementation, the face detection model in step S122 is a face detection model trained with a CascadeCNN network.
CascadeCNN (cascaded convolutional neural network) is a deep convolutional network implementation of the classic Viola-Jones method and is a comparatively fast face detection method. Viola-Jones is a face detection framework. In this embodiment, the CascadeCNN method is used to train on images with labeled face positions to obtain the face detection model, improving the detection model's recognition efficiency.
Specifically, training on the face-position-labeled images (the training face images) with the CascadeCNN method proceeds as follows:
First training stage: scan the image with the 12-net network and reject more than 90% of the windows; feed the remaining windows into the 12-calibration-net network for calibration, and then apply the non-maximum suppression algorithm to the calibrated image to eliminate highly overlapping windows. Here, 12-net slides a 12x12 detection window with stride 4 over a W (width) x H (height) image to obtain detection windows. 12-calibration-net is a calibration network that adjusts the face region and outputs the face's region coordinates. Non-maximum suppression is a method widely used in object detection and localization; in essence it searches for local maxima and suppresses non-maximum elements (a sketch follows below). Face detection is run on the training face images with the 12-net network: windows judged to be non-faces (i.e., not exceeding a preset threshold) serve as negative samples, and all real-face windows (i.e., exceeding the preset threshold) serve as positive samples, yielding the corresponding detection windows. The preset threshold is set in advance by developers for judging whether a face is present in the training data.
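Non-maximum suppression, used above to merge highly overlapping detection windows, can be sketched as a generic greedy IoU-based variant (the 0.5 overlap threshold is an assumption; the patent does not state one):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / float(area(a) + area(b) - inter)

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring window,
    drop windows overlapping it above the IoU threshold, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order
                 if iou(boxes[best], boxes[i]) <= iou_threshold]
    return keep  # indices of the windows that survive suppression
```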
Second training stage: process the output of the first stage with the 24-net and 24-calibration-net networks. Both 12-net and 24-net judge whether a region is a face; the difference is that 24-net builds on 12-net: a 24x24 image is fed into 24-net to obtain the features extracted by its fully connected layer, while the same image, scaled down to 12x12, is fed into the 12-net fully connected layer, and the features from the two fully connected layers are output together. 12-calibration-net and 24-calibration-net are calibration networks. Face detection is run on the training data with the 24-net network: windows judged to be non-faces serve as negative samples, and all real faces serve as positive samples.
Third training stage: process the output of the second stage with the 48-net and 48-calibration-net networks to complete the final stage of training. This stage is processed similarly to the second stage; to avoid repetition, it is not detailed again here.
In this embodiment, face detection on the video images with the CascadeCNN-trained face detection model follows the same process as the training described above; to avoid repetition, it is not detailed again here.
In a specific implementation, as shown in FIG. 3, step S15, inputting each group of target training data in the training set into the CNN-LSTM model for training to obtain the original risk control model, includes the following steps:
S151: Initialize the CNN-LSTM model.
Initializing the CNN-LSTM model means initializing in advance the model parameters of the convolutional neural network (the convolution kernels and biases) and the model parameters of the LSTM model (the connection weights between layers). A convolution kernel holds the weights of the convolutional network: input training data are multiplied by the kernel to produce a neuron's output, reflecting the importance of the training data. A bias is a linear component that shifts the range of the weighted input. Once the kernels, the biases and the inter-layer connection weights of the LSTM model are determined, the model training process can be carried out.
S152: Extract features from the target training data in the training set with the convolutional neural network, to obtain face features.
The face features are the facial features obtained by applying the convolutional neural network to the target training data in the training set, i.e., the N consecutive frames of training face images. Specifically, the features are obtained by convolution; the convolution formula survives in the original only as an image, but from the accompanying variable definitions it can be reconstructed as
y_j = \sum_i x_i * w_{ij} + b_j
where * denotes the convolution operation, x_i is the i-th input feature map, y_j is the j-th output feature map, w_{ij} is the convolution kernel (weight) between the i-th input feature map and the j-th output feature map, and b_j is the bias term of the j-th output feature map. Max-pooling downsampling is then applied to the convolved feature maps to reduce their dimensionality; a standard reconstruction of the pooling formula is
y_j(p, q) = \max_{0 \le s, t < S} x_j(p \cdot m + s,\; q \cdot n + t)
where y_j is the j-th output map of the downsampling (the downsampled feature map), each neuron of which is obtained by locally sampling the corresponding input map (the convolved feature map) with an S*S downsampling window, and m and n are the strides by which the downsampling window moves. A small illustration follows below.
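A small NumPy illustration of the two reconstructed formulas above: a valid 2-D convolution (implemented as correlation, as is common in CNN code) followed by S x S max-pooling. Shapes and values are illustrative only, and the pooling stride is taken equal to S (non-overlapping windows), which is one common special case of the formula.

```python
import numpy as np

def conv2d_valid(x, w, b):
    """y(m, n) = sum_{s,t} x(m+s, n+t) * w(s, t) + b  (one in/out channel)."""
    kh, kw = w.shape
    out = np.empty((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for m in range(out.shape[0]):
        for n in range(out.shape[1]):
            out[m, n] = np.sum(x[m:m + kh, n:n + kw] * w) + b
    return out

def max_pool(x, s=2):
    """y(p, q) = max over the SxS window at (p*s, q*s); stride equals S here."""
    h, w = x.shape[0] // s, x.shape[1] // s
    return x[:h * s, :w * s].reshape(h, s, w, s).max(axis=(1, 3))

feat = conv2d_valid(np.random.rand(6, 6), np.random.rand(3, 3), b=0.1)  # 4x4
pooled = max_pool(feat, s=2)                                            # 2x2
```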
S153: Input the face features into the LSTM model for training, to obtain the original risk control model.
Specifically, the LSTM model is a neural network model with long-term memory, with a three-layer structure of input layer, hidden layer and output layer. The input layer, the first layer of the LSTM model, receives external signals, i.e., the face features carrying temporal state. In this embodiment the training face images in the training set are temporally ordered, so the face features obtained from them in step S152 are also temporally ordered and can be fed to the LSTM model as features carrying temporal state. The output layer, the last layer, emits signals to the outside, i.e., outputs the LSTM model's computation results. The hidden layers are the layers between the input and output layers; they process the input face features to produce the LSTM model's computation results. The original risk control model is the model obtained by iterating the LSTM model on the temporally ordered face features until convergence. Understandably, training on the extracted face features with the LSTM model strengthens the temporal ordering of the resulting original risk control model and thus improves its accuracy.
In this embodiment, the output layer of the LSTM model uses Softmax (regression) for the classification output of the weight matrix. Softmax is a classification function commonly used in neural networks: it maps the outputs of multiple neurons into the interval [0, 1], which can be interpreted as probabilities; it is simple and convenient to compute and enables multi-class output, making the output more accurate.
In this embodiment, the CNN-LSTM model is first initialized so that the convolutional network can be trained on the target training data in the training set to obtain face features, which are then fed into the LSTM model for training. No manual feature extraction is needed: the training face images are fed directly into the CNN-LSTM model, which extracts features itself, improving training efficiency.
As shown in FIG. 4, inputting the face features into the LSTM model for training (step S153) includes the following steps:
S1531: Train on the face features with the forward propagation algorithm, to obtain first state parameters.
Training on the face features with forward propagation means training in the chronological order of the temporal states carried by the face features. The first state parameters are the parameters obtained in the initial iterations of model training on the face features.
The forward propagation algorithm performs model training in time order. Its formulas survive in the original only as images, but from the accompanying variable definitions they can be reconstructed as
S_t = \tanh(U \cdot S_{t-1} + W \cdot X_t)
\hat{o}_t = V \cdot S_t
where S_t is the output of the hidden layer at the current time step, U is the weight matrix from the hidden layer at the previous time step to the current one, W is the weight matrix from the input layer to the hidden layer, \hat{o}_t is the predicted output at the current time step, and V is the weight matrix from the hidden layer to the output layer.
Understandably, forward propagation takes the input X_t at the current time step together with the output S_{t-1} of the hidden units at the previous time step, i.e., the output of the memory cells in the LSTM hidden layer, as the hidden layer's input, and obtains the hidden layer's current output S_t through the tanh (hyperbolic tangent) activation; the predicted output at time t is denoted \hat{o}_t. The prediction \hat{o}_t therefore depends on the current output S_t, and S_t combines the input at time t with the state at time t-1, so the model output retains all the information along the time series and is temporally ordered.
In this embodiment, since a linear model's expressive power is insufficient, tanh is used as the activation function; the added non-linearity lets the trained original risk control model solve more complex problems. The tanh activation also converges quickly, saving training time and improving training efficiency. A minimal forward-pass sketch follows below.
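A minimal NumPy sketch of the reconstructed forward pass above, for a plain recurrent cell with the U, W, V weights and tanh activation described in the text. This is a simplification: a full LSTM cell adds input, forget and output gates, which are omitted here for brevity.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(X, U, W, V):
    """X: (T, d) frame features in time order; U: (h, h), W: (h, d), V: (k, h).
    Returns hidden states S and per-step predictions o_hat, with
    S_t = tanh(U S_{t-1} + W X_t) and o_hat_t = softmax(V S_t)."""
    T, h = X.shape[0], U.shape[0]
    S = np.zeros((T, h))
    o_hat = []
    s_prev = np.zeros(h)
    for t in range(T):
        S[t] = np.tanh(U @ s_prev + W @ X[t])   # hidden state carries the past
        o_hat.append(softmax(V @ S[t]))         # prediction at time t
        s_prev = S[t]
    return S, np.array(o_hat)
```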
S1532: Perform error computation on the first state parameters with the back propagation algorithm, to obtain the original risk control model.
The back propagation algorithm passes the accumulated residual back from the last time step and trains the neural network model with it. The loss at time step t survives in the original only as an image; defined as the cross entropy, it can be reconstructed from the accompanying variable definitions as
L_t = -\, o_t \log \hat{o}_t
where \hat{o}_t is the predicted output at time t and o_t is the true value corresponding to \hat{o}_t. In this embodiment, error computation is performed on the first state parameters with back propagation, and the errors are propagated back to update the weight parameters of the LSTM model and of the convolutional network, effectively improving the accuracy of the risk control model.
Specifically, performing error computation on the first state parameters with back propagation means updating the optimized parameters, here the three weight matrices U, V and W, in reverse time order. The loss of back propagation at time t is the cross entropy above, and the total loss is its sum over all time steps. The partial derivatives of each layer are then computed by the chain rule, i.e., \partial L/\partial U, \partial L/\partial V and \partial L/\partial W are computed, and these rates of change are used to update the weight parameters U, V and W, yielding the adjusted state parameters. It follows that computing the partial derivative of the loss at each time step and summing over time gives the rates of change needed to update the LSTM model's weight parameters. The chain rule is the differentiation rule of calculus for the derivative of a composite function and is a commonly used method in differentiation. Finally, the partial derivatives of the loss with respect to the bias and the convolution kernel of the convolutional network (given in the original only as images) are computed to update the convolutional network's model parameters, i.e., the kernels and biases, in the backward pass, where b denotes the bias and k the convolution kernel of the convolutional network. Since the LSTM model and the convolutional network form one neural network, updating the LSTM model parameters and the convolutional model parameters via the LSTM back propagation completes the optimization of the original risk control model.
Specifically, gradients can grow exponentially with the number of back propagation layers, causing the vanishing-gradient phenomenon; in this embodiment the cross-entropy loss combined with the tanh activation alleviates the vanishing-gradient problem well and increases training accuracy.
In this embodiment, the face features are first trained with forward propagation to obtain the first state parameters; error computation is then performed on them with back propagation, and the errors are propagated back to update the weight parameters of the LSTM model and of the convolutional network, effectively improving the accuracy of the obtained original risk control model. A partial sketch of the backward pass follows below.
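As one concrete piece of the reconstructed backward pass, the sketch below evaluates the summed cross-entropy loss and the well-known softmax-plus-cross-entropy gradient with respect to the output weights V, accumulated over time. The gradients \partial L/\partial U and \partial L/\partial W additionally require back-propagation through time and are omitted here for brevity.

```python
import numpy as np

def loss_and_grad_V(o_hat, o, S):
    """o_hat: (T, k) predictions, o: (T, k) one-hot labels, S: (T, h) states.
    L = -sum_t o_t . log(o_hat_t);  dL/dV = sum_t (o_hat_t - o_t) S_t^T."""
    L = -np.sum(o * np.log(o_hat + 1e-12))   # cross entropy summed over time
    dV = sum(np.outer(o_hat[t] - o[t], S[t]) for t in range(len(S)))
    return L, dV                              # scalar loss, (k, h) gradient
```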
In this embodiment, since the convolutional neural network (CNN) is a locally connected network with local connectivity and weight sharing, the model can learn in parallel; extracting features from the face images in the training set with the CNN therefore speeds up feature extraction and, in turn, model training. The extracted face features are then fed into the LSTM model for training to obtain a temporally ordered original risk control model, strengthening its predictive power over time and improving its accuracy.
In this embodiment, the original video data are first labeled to obtain positive and negative samples, facilitating model training and improving training efficiency. Setting the positive and negative samples to equal proportions effectively prevents overfitting, so the trained risk control model recognizes more accurately. The samples are then split into frames and normalized to obtain video images with a unified pixel size, facilitating per-frame face detection and improving risk identification accuracy. Face detection is applied to the video images to obtain the face-containing frames, i.e., the training face images, which serve as model input; training only on face-containing frames excludes interference from other factors, so the model can extract micro-expression features from the training face images and achieve the goal of risk control. The training face images are grouped by a preset number so that each group of N consecutive frames serves as one group of target training data, improving training efficiency and recognition accuracy. The target training data are divided by a preset ratio into a training set and a test set, and each group of target training data in the training set is fed into the CNN-LSTM model to obtain the original risk control model, which is temporally ordered; thanks to weight sharing the network can learn in parallel, improving training efficiency. Finally, the original risk control model is tested with each group of target training data in the test set to obtain the target risk control model, making recognition more accurate.
It should be understood that the numbering of the steps in the above embodiments does not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not limit the implementation of the embodiments of the present application in any way.
Embodiment 2
FIG. 5 shows a schematic block diagram of a risk control model training apparatus corresponding one-to-one to the risk control model training method of Embodiment 1. As shown in FIG. 5, the apparatus includes a positive and negative sample obtaining module 11, a training face image obtaining module 12, a target training data obtaining module 13, a target training data dividing module 14, an original risk control model obtaining module 15 and a target risk control model obtaining module 16. The functions of these modules correspond one-to-one to the steps of the method of Embodiment 1; to avoid redundancy, they are not detailed one by one in this embodiment.
The positive and negative sample obtaining module 11 is configured to label original video data to obtain positive and negative samples.
The training face image obtaining module 12 is configured to perform frame splitting and face detection on the positive and negative samples to obtain training face images.
The target training data obtaining module 13 is configured to group the training face images according to a preset number to obtain at least one group of target training data.
The target training data dividing module 14 is configured to divide the target training data according to a preset ratio to obtain a training set and a test set.
The original risk control model obtaining module 15 is configured to input each group of target training data in the training set into the CNN-LSTM model for training, to obtain the original risk control model.
The target risk control model obtaining module 16 is configured to test the original risk control model with each group of target training data in the test set, to obtain the target risk control model.
Preferably, the training face image obtaining module 12 includes a video image obtaining unit 121 and a training face image obtaining unit 122.
The video image obtaining unit 121 is configured to split the positive and negative samples into frames to obtain video images.
The training face image obtaining unit 122 is configured to perform face detection on the video images with a face detection model, to obtain the training face images.
Preferably, the original risk control model obtaining module 15 includes a model initialization unit 151, a face feature obtaining unit 152 and an original risk control model obtaining unit 153.
The model initialization unit 151 is configured to initialize the CNN-LSTM model.
The face feature obtaining unit 152 is configured to extract features from the target training data in the training set with the convolutional neural network, to obtain face features.
The original risk control model obtaining unit 153 is configured to input the face features into the LSTM model for training, to obtain the original risk control model.
Preferably, the original risk control model obtaining unit 153 includes a first state parameter obtaining subunit 1531 and an original risk control model obtaining subunit 1532.
The first state parameter obtaining subunit 1531 is configured to train on the face features with the forward propagation algorithm, to obtain first state parameters.
The original risk control model obtaining subunit 1532 is configured to perform error computation on the first state parameters with the back propagation algorithm, to obtain the original risk control model.
Embodiment 3
FIG. 6 shows a flowchart of the risk identification method in this embodiment. The method can be applied on computer devices configured at banks, securities firms, insurers and other financial institutions; it effectively assists the credit auditor in applying risk control to a borrower and, in turn, deciding whether to grant the loan. As shown in FIG. 6, the risk identification method includes the following steps:
S21: Obtain video data to be identified.
The video data to be identified are unprocessed video data recording the borrower during the credit audit. Since recognition based on a single frame is not sufficiently accurate, the video data to be identified in this embodiment consist of at least two frames of images.
In this embodiment, during the credit audit the auditor can question the target client over video chat to obtain the client's video replies (the video data to be identified). This makes the audit process intelligent: no face-to-face meeting between auditor and client is needed, saving labor costs.
S22: Perform face detection on the video data to be identified using a face detection model, to obtain face images to be identified.
The face images to be identified are the face images obtained for recognition by running face detection on the video data to be identified. Specifically, each frame of the video data to be identified is input into the face detection model, the face position in each frame is detected, and the frames containing faces are extracted as the face images to be identified. The face detection model is a model trained with the CascadeCNN network; its detection process on the video data to be identified is the same as the detection process in Embodiment 1 and, to avoid repetition, is not detailed again here.
S23: Group the face images to be identified, to obtain at least one group of target face images.
The face images to be identified are grouped according to a preset number to obtain at least one group of target face images. Specifically, the face images to be identified are grouped by cross-selection, i.e., with overlapping windows. In this embodiment, every one hundred frames form one group of data to be identified (target face images): for example, for 40 s of video to be identified (960 frames), grouping by one hundred frames with the windows offset by ten frames gives frames 1-100 as one group, frames 11-110 as the next, and so on, yielding at least one group of target face images. Cross-selection fully preserves the relations between the face images to be identified and improves the model's recognition accuracy; a grouping sketch follows below.
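A minimal sketch of the overlapping ("cross-selection") grouping described above, assuming a window of 100 frames moved with a stride of 10 frames; the stride is inferred from the example in the text, not stated as a fixed parameter.

```python
def sliding_groups(frames, window=100, stride=10):
    """Overlapping groups: frames[0:100], frames[10:110], ... so that
    neighbouring groups share 90 frames and temporal context is preserved."""
    return [frames[i:i + window]
            for i in range(0, len(frames) - window + 1, stride)]

groups = sliding_groups(list(range(960)))   # a 960-frame clip -> 87 groups
```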
S24: Identify the at least one group of target face images with the target risk control model, to obtain a risk identification probability corresponding to each group of target face images.
The target risk control model is the model trained by the risk control model training method of Embodiment 1. In this embodiment, the at least one group of target face images is input into the target risk control model, which computes over each input group and outputs the risk identification probability corresponding to each group of target face images. In this embodiment, the probability may be a real number between 0 and 1.
S25: Obtain the risk identification result based on the risk identification probabilities.
Specifically, the risk identification result is computed from the risk identification probabilities with a weighted formula; the formula survives in the original only as an image, but from the variable definitions it is plausibly the weighted sum
P = \sum_i w_i \, p_i
where p_i is the risk identification probability corresponding to each group of target face images and w_i is the weight corresponding to that group.
In this embodiment, the weight of each group of target face images is set per question in the interview script: basic audit questions such as age, gender and name get low weights, while sensitive audit questions such as loan purpose, personal income and willingness to repay get relatively high weights. Computing the result from the risk identification probabilities by this weighted operation makes the risk identification result more accurate. Basic and sensitive audit questions are distinguished by whether the question has a standard answer. Taking a bank as an example: if the target client has pre-stored personal information (such as ID number, relatives' phone numbers and home address) with banks, securities firms, insurers or other financial institutions, questions based on that pre-stored information with standard answers are basic audit questions; for information the client has not pre-stored with such institutions, there is no standard answer, and questions based on that information are sensitive audit questions. A weighting sketch follows below.
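A small sketch of the weighted aggregation described above, with question-dependent weights; the concrete weight values and the normalization by the weight sum are assumptions for illustration.

```python
def risk_result(probs, weights):
    """Weighted risk score P = sum_i w_i * p_i over the per-group
    probabilities, with the weights normalized to sum to 1."""
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, probs)) / total

# e.g. basic questions (name, age) weighted low, sensitive ones high
probs = [0.12, 0.08, 0.71, 0.64]           # per-group lying probabilities
weights = [0.5, 0.5, 2.0, 2.0]             # question-dependent weights
print(risk_result(probs, weights))          # overall risk identification result
```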
In this embodiment, the target client is first questioned over video chat to obtain the client's video replies, i.e., the video data to be identified, making the audit process intelligent, removing the need for face-to-face meetings between auditor and client, and saving labor costs. Face detection is then run on the video data to be identified with the face detection model to extract the face-containing frames, i.e., the face images to be identified, which are grouped by cross-selection into at least one group of target face images, improving recognition accuracy. The target risk control model identifies the at least one group of target face images and outputs the risk identification probability corresponding to each group, improving both the target risk control model's recognition efficiency and its accuracy. Finally, the probabilities are combined by the weighted computation to obtain the risk identification result, making the result more accurate.
Embodiment 4
FIG. 7 shows a schematic block diagram of a risk identification apparatus corresponding one-to-one to the risk identification method of Embodiment 3. As shown in FIG. 7, the apparatus includes a to-be-identified video data obtaining module 21, a to-be-identified face image obtaining module 22, a target face image obtaining module 23, a risk identification probability obtaining module 24 and a risk identification result obtaining module 25. The functions of these modules correspond one-to-one to the steps of the method of Embodiment 3; to avoid redundancy, they are not detailed one by one in this embodiment.
The to-be-identified video data obtaining module 21 is configured to obtain video data to be identified.
The to-be-identified face image obtaining module 22 is configured to perform face detection on the video data to be identified using a face detection model, to obtain face images to be identified.
The target face image obtaining module 23 is configured to group the face images to be identified, to obtain at least one group of target face images.
The risk identification probability obtaining module 24 is configured to identify the at least one group of target face images using the target risk control model obtained by the risk control model training method of Embodiment 1, to obtain a risk identification probability corresponding to each group of target face images.
The risk identification result obtaining module 25 is configured to obtain the risk identification result based on the risk identification probabilities.
Preferably, the risk identification result obtaining module 25 is configured to compute the risk identification result from the risk identification probabilities with the weighted formula P = \sum_i w_i p_i given above, where p_i is the risk identification probability corresponding to each group of target face images and w_i is the weight corresponding to that group.
Embodiment 5
This embodiment provides one or more non-volatile readable storage media storing computer readable instructions which, when executed by one or more processors, cause the one or more processors to implement the risk control model training method of Embodiment 1; to avoid repetition, it is not detailed again here.
Alternatively, the computer readable instructions, when executed by one or more processors, cause the one or more processors to implement the functions of the modules/units of the risk control model training apparatus of Embodiment 2; to avoid repetition, they are not detailed again here.
Alternatively, the computer readable instructions, when executed by one or more processors, cause the one or more processors to implement the risk identification method of Embodiment 3; to avoid repetition, it is not detailed again here.
Alternatively, the computer readable instructions, when executed by one or more processors, cause the one or more processors to implement the functions of the modules/units of the risk identification apparatus of Embodiment 4; to avoid repetition, they are not detailed again here.
Embodiment 6
FIG. 8 is a schematic diagram of a computer device provided by an embodiment of the present application. As shown in FIG. 8, the computer device 80 of this embodiment includes a processor 81, a memory 82, and computer readable instructions 83 stored in the memory 82 and executable on the processor 81. When the processor 81 executes the computer readable instructions 83, the steps of the risk control model training method of Embodiment 1 are implemented; or the functions of the modules/units of the risk control model training apparatus of Embodiment 2 are implemented; or the steps of the risk identification method of Embodiment 3 are implemented; or the functions of the modules/units of the risk identification apparatus of Embodiment 4 are implemented. To avoid repetition, the details are not repeated here.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the division into the functional units and modules above is only an example; in practical applications, the functions may be assigned to different functional units and modules as needed, i.e., the internal structure of the apparatus may be divided into different functional units or modules to accomplish all or part of the functions described above.
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments or substitute equivalents for some of the technical features; such modifications or substitutions do not take the essence of the corresponding technical solutions outside the spirit and scope of the technical solutions of the embodiments of the present application, and shall all fall within the protection scope of the present application.

Claims (20)

  1. A risk control model training method, comprising:
    labeling original video data to obtain positive and negative samples;
    performing frame splitting and face detection on the positive and negative samples to obtain training face images;
    grouping the training face images according to a preset number to obtain at least one group of target training data, the target training data comprising N consecutive frames of the training face images;
    dividing the target training data according to a preset ratio to obtain a training set and a test set;
    inputting each group of target training data in the training set into a convolutional neural network-long short-term memory (CNN-LSTM) model for training, to obtain an original risk control model; and
    testing the original risk control model with each group of target training data in the test set, to obtain a target risk control model.
  2. The risk control model training method according to claim 1, wherein performing frame splitting and face detection on the positive and negative samples to obtain the training face images comprises:
    splitting the positive and negative samples into frames to obtain video images; and
    performing face detection on the video images using a face detection model, to obtain the training face images.
  3. The risk control model training method according to claim 1, wherein inputting each group of target training data in the training set into the CNN-LSTM model for training to obtain the original risk control model comprises:
    initializing the model parameters of the CNN-LSTM model;
    extracting features from the target training data in the training set with the convolutional neural network, to obtain face features; and
    inputting the face features into the long short-term memory model for training, to obtain the original risk control model.
  4. The risk control model training method according to claim 3, wherein inputting the face features into the long short-term memory model for training to obtain the original risk control model comprises:
    training on the face features with a forward propagation algorithm, to obtain first state parameters; and
    performing error computation on the first state parameters with a back propagation algorithm, to obtain the original risk control model.
  5. A risk identification method, comprising:
    obtaining video data to be identified;
    performing face detection on the video data to be identified using a face detection model, to obtain face images to be identified;
    grouping the face images to be identified, to obtain at least one group of target face images;
    identifying the at least one group of target face images using the target risk control model obtained by the risk control model training method according to any one of claims 1-4, to obtain a risk identification probability corresponding to each group of target face images; and
    obtaining a risk identification result based on the risk identification probabilities.
  6. The risk identification method according to claim 5, wherein obtaining the risk identification result based on the risk identification probabilities comprises:
    computing the risk identification result from the risk identification probabilities with a weighted formula (given in the original only as an image, plausibly P = \sum_i w_i p_i), wherein p_i is the risk identification probability corresponding to each group of target face images and w_i is the weight corresponding to each group of target face images.
  7. A risk control model training apparatus, comprising:
    a positive and negative sample obtaining module, configured to label original video data to obtain positive and negative samples;
    a training face image obtaining module, configured to perform frame splitting and face detection on the positive and negative samples to obtain training face images;
    a target training data obtaining module, configured to group the training face images according to a preset number to obtain at least one group of target training data, the target training data comprising N consecutive frames of the training face images;
    a target training data dividing module, configured to divide the target training data according to a preset ratio to obtain a training set and a test set;
    an original risk control model obtaining module, configured to input each group of target training data in the training set into a CNN-LSTM model for training, to obtain an original risk control model; and
    a target risk control model obtaining module, configured to test the original risk control model with each group of target training data in the test set, to obtain a target risk control model.
  8. A risk identification apparatus, comprising:
    a to-be-identified video data obtaining module, configured to obtain video data to be identified;
    a to-be-identified face image obtaining module, configured to perform face detection on the video data to be identified using a face detection model, to obtain face images to be identified;
    a target face image obtaining module, configured to group the face images to be identified, to obtain at least one group of target face images;
    a risk identification probability obtaining module, configured to identify the at least one group of target face images using the target risk control model obtained by the risk control model training method according to any one of claims 1-4, to obtain a risk identification probability corresponding to each group of target face images; and
    a risk identification result obtaining module, configured to obtain a risk identification result based on the risk identification probabilities.
  9. A computer device, comprising a memory, a processor, and computer readable instructions stored in the memory and executable on the processor, wherein the processor, when executing the computer readable instructions, implements the following steps:
    labeling original video data to obtain positive and negative samples;
    performing frame splitting and face detection on the positive and negative samples to obtain training face images;
    grouping the training face images according to a preset number to obtain at least one group of target training data, the target training data comprising N consecutive frames of the training face images;
    dividing the target training data according to a preset ratio to obtain a training set and a test set;
    inputting each group of target training data in the training set into a CNN-LSTM model for training, to obtain an original risk control model; and
    testing the original risk control model with each group of target training data in the test set, to obtain a target risk control model.
  10. The computer device according to claim 9, wherein performing frame splitting and face detection on the positive and negative samples to obtain the training face images comprises:
    splitting the positive and negative samples into frames to obtain video images; and
    performing face detection on the video images using a face detection model, to obtain the training face images.
  11. The computer device according to claim 9, wherein inputting each group of target training data in the training set into the CNN-LSTM model for training to obtain the original risk control model comprises:
    initializing the model parameters of the CNN-LSTM model;
    extracting features from the target training data in the training set with the convolutional neural network, to obtain face features; and
    inputting the face features into the long short-term memory model for training, to obtain the original risk control model.
  12. The computer device according to claim 11, wherein inputting the face features into the long short-term memory model for training to obtain the original risk control model comprises:
    training on the face features with a forward propagation algorithm, to obtain first state parameters; and
    performing error computation on the first state parameters with a back propagation algorithm, to obtain the original risk control model.
  13. A computer device, comprising a memory, a processor, and computer readable instructions stored in the memory and executable on the processor, wherein the processor, when executing the computer readable instructions, implements the following steps:
    obtaining video data to be identified;
    performing face detection on the video data to be identified using a face detection model, to obtain face images to be identified;
    grouping the face images to be identified, to obtain at least one group of target face images;
    identifying the at least one group of target face images using the target risk control model obtained by the risk control model training method according to any one of claims 1-4, to obtain a risk identification probability corresponding to each group of target face images; and
    obtaining a risk identification result based on the risk identification probabilities.
  14. The computer device according to claim 13, wherein obtaining the risk identification result based on the risk identification probabilities comprises:
    computing the risk identification result from the risk identification probabilities with a weighted formula (given in the original only as an image, plausibly P = \sum_i w_i p_i), wherein p_i is the risk identification probability corresponding to each group of target face images and w_i is the weight corresponding to each group of target face images.
  15. One or more non-volatile readable storage media storing computer readable instructions, wherein the computer readable instructions, when executed by one or more processors, cause the one or more processors to perform the following steps:
    labeling original video data to obtain positive and negative samples;
    performing frame splitting and face detection on the positive and negative samples to obtain training face images;
    grouping the training face images according to a preset number to obtain at least one group of target training data, the target training data comprising N consecutive frames of the training face images;
    dividing the target training data according to a preset ratio to obtain a training set and a test set;
    inputting each group of target training data in the training set into a CNN-LSTM model for training, to obtain an original risk control model; and
    testing the original risk control model with each group of target training data in the test set, to obtain a target risk control model.
  16. The non-volatile readable storage media according to claim 15, wherein performing frame splitting and face detection on the positive and negative samples to obtain the training face images comprises:
    splitting the positive and negative samples into frames to obtain video images; and
    performing face detection on the video images using a face detection model, to obtain the training face images.
  17. The non-volatile readable storage media according to claim 15, wherein inputting each group of target training data in the training set into the CNN-LSTM model for training to obtain the original risk control model comprises:
    initializing the model parameters of the CNN-LSTM model;
    extracting features from the target training data in the training set with the convolutional neural network, to obtain face features; and
    inputting the face features into the long short-term memory model for training, to obtain the original risk control model.
  18. The non-volatile readable storage media according to claim 17, wherein inputting the face features into the long short-term memory model for training to obtain the original risk control model comprises:
    training on the face features with a forward propagation algorithm, to obtain first state parameters; and
    performing error computation on the first state parameters with a back propagation algorithm, to obtain the original risk control model.
  19. One or more non-volatile readable storage media storing computer readable instructions, wherein the computer readable instructions, when executed by one or more processors, cause the one or more processors to perform the following steps:
    obtaining video data to be identified;
    performing face detection on the video data to be identified using a face detection model, to obtain face images to be identified;
    grouping the face images to be identified, to obtain at least one group of target face images;
    identifying the at least one group of target face images using the target risk control model obtained by the risk control model training method according to any one of claims 1-4, to obtain a risk identification probability corresponding to each group of target face images; and
    obtaining a risk identification result based on the risk identification probabilities.
  20. The non-volatile readable storage media according to claim 19, wherein obtaining the risk identification result based on the risk identification probabilities comprises:
    computing the risk identification result from the risk identification probabilities with a weighted formula (given in the original only as an image, plausibly P = \sum_i w_i p_i), wherein p_i is the risk identification probability corresponding to each group of target face images and w_i is the weight corresponding to each group of target face images.
PCT/CN2018/094216 2018-03-30 2018-07-03 风控模型训练方法、风险识别方法、装置、设备及介质 WO2019184124A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810292057.1A CN108510194B (zh) 2018-03-30 2018-03-30 风控模型训练方法、风险识别方法、装置、设备及介质
CN201810292057.1 2018-03-30

Publications (1)

Publication Number Publication Date
WO2019184124A1 true WO2019184124A1 (zh) 2019-10-03

Family

ID=63380183

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/094216 WO2019184124A1 (zh) 2018-03-30 2018-07-03 风控模型训练方法、风险识别方法、装置、设备及介质

Country Status (2)

Country Link
CN (1) CN108510194B (zh)
WO (1) WO2019184124A1 (zh)

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110826320A (zh) * 2019-11-28 2020-02-21 上海观安信息技术股份有限公司 一种基于文本识别的敏感数据发现方法及系统
CN111210335A (zh) * 2019-12-16 2020-05-29 北京淇瑀信息科技有限公司 用户风险识别方法、装置及电子设备
CN111222026A (zh) * 2020-01-09 2020-06-02 支付宝(杭州)信息技术有限公司 用户类别识别模型的训练方法和用户类别识别方法
CN111291668A (zh) * 2020-01-22 2020-06-16 北京三快在线科技有限公司 活体检测方法、装置、电子设备及可读存储介质
CN111400663A (zh) * 2020-03-17 2020-07-10 深圳前海微众银行股份有限公司 风险识别方法、装置、设备及计算机可读存储介质
CN111460909A (zh) * 2020-03-09 2020-07-28 兰剑智能科技股份有限公司 基于视觉的货位管理方法和装置
CN111522570A (zh) * 2020-06-19 2020-08-11 杭州海康威视数字技术股份有限公司 目标库更新方法、装置、电子设备及机器可读存储介质
CN111582654A (zh) * 2020-04-14 2020-08-25 五邑大学 基于深度循环神经网络的服务质量评价方法及其装置
CN111723907A (zh) * 2020-06-11 2020-09-29 浪潮电子信息产业股份有限公司 一种模型训练设备、方法、系统及计算机可读存储介质
CN111768286A (zh) * 2020-05-14 2020-10-13 北京旷视科技有限公司 风险预测方法、装置、设备及存储介质
CN111859913A (zh) * 2020-06-12 2020-10-30 北京百度网讯科技有限公司 风控特征因子的处理方法、装置、电子设备及存储介质
CN111861701A (zh) * 2020-07-09 2020-10-30 深圳市富之富信息技术有限公司 风控模型优化方法、装置、计算机设备及存储介质
CN111950625A (zh) * 2020-08-10 2020-11-17 中国平安人寿保险股份有限公司 基于人工智能的风险识别方法、装置、计算机设备及介质
CN112070215A (zh) * 2020-09-10 2020-12-11 北京理工大学 基于bp神经网络的危险态势分析的处理方法及处理装置
CN112116577A (zh) * 2020-09-21 2020-12-22 公安部物证鉴定中心 一种基于深度学习的篡改人像视频检测方法及系统
CN112258026A (zh) * 2020-10-21 2021-01-22 国网江苏省电力有限公司信息通信分公司 基于视频身份识别的动态定位调度方法及系统
CN112329974A (zh) * 2020-09-03 2021-02-05 中国人民公安大学 基于lstm-rnn的民航安保事件行为主体识别与预测方法及系统
CN112330114A (zh) * 2020-10-27 2021-02-05 南京航空航天大学 基于混合深度神经网络的飞机危险识别方法
CN112329849A (zh) * 2020-11-04 2021-02-05 中冶赛迪重庆信息技术有限公司 基于机器视觉的废钢料场卸料状态识别方法、介质及终端
CN112397204A (zh) * 2020-11-16 2021-02-23 中国人民解放军空军特色医学中心 一种预测高原病的方法、装置、计算机设备和存储介质
CN112509129A (zh) * 2020-12-21 2021-03-16 神思电子技术股份有限公司 一种基于改进gan网络的空间视场图像生成方法
CN112651267A (zh) * 2019-10-11 2021-04-13 阿里巴巴集团控股有限公司 识别方法、模型训练、系统及设备
CN112949359A (zh) * 2019-12-10 2021-06-11 清华大学 基于卷积神经网络的异常行为识别方法和装置
CN112990432A (zh) * 2021-03-04 2021-06-18 北京金山云网络技术有限公司 目标识别模型训练方法、装置及电子设备
CN113010736A (zh) * 2019-12-20 2021-06-22 北京金山云网络技术有限公司 一种视频分类方法、装置、电子设备及存储介质
CN113343821A (zh) * 2021-05-31 2021-09-03 合肥工业大学 一种基于时空注意力网络和输入优化的非接触式心率测量方法
CN113657136A (zh) * 2020-05-12 2021-11-16 阿里巴巴集团控股有限公司 识别方法及装置
CN113923464A (zh) * 2021-09-26 2022-01-11 北京达佳互联信息技术有限公司 视频违规率确定方法、装置、设备、介质及程序产品
CN114740774A (zh) * 2022-04-07 2022-07-12 青岛沃柏斯智能实验科技有限公司 一种通风柜安全操作的行为分析控制系统

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109214719B (zh) * 2018-11-02 2021-07-13 广东电网有限责任公司 一种基于人工智能的营销稽查分析的系统和方法
CN109670940A (zh) * 2018-11-12 2019-04-23 深圳壹账通智能科技有限公司 基于机器学习的信用风险评估模型生成方法及相关设备
CN109635838B (zh) * 2018-11-12 2023-07-11 平安科技(深圳)有限公司 人脸样本图片标注方法、装置、计算机设备及存储介质
CN109711665A (zh) * 2018-11-20 2019-05-03 深圳壹账通智能科技有限公司 一种基于金融风控数据的预测模型构建方法及相关设备
CN109784170B (zh) * 2018-12-13 2023-11-17 平安科技(深圳)有限公司 基于图像识别的车险定损方法、装置、设备及存储介质
CN109584051A (zh) * 2018-12-18 2019-04-05 深圳壹账通智能科技有限公司 基于微表情识别的客户逾期风险判断方法及装置
CN109992505B (zh) * 2019-03-15 2024-07-02 平安科技(深圳)有限公司 应用程序测试方法、装置、计算机设备及存储介质
CN110399927B (zh) * 2019-07-26 2022-02-01 玖壹叁陆零医学科技南京有限公司 识别模型训练方法、目标识别方法及装置
CN110569721B (zh) * 2019-08-01 2023-08-29 平安科技(深圳)有限公司 识别模型训练方法、图像识别方法、装置、设备及介质
CN110619462A (zh) * 2019-09-10 2019-12-27 苏州方正璞华信息技术有限公司 一种基于ai模型的项目质量评估方法
CN111144360A (zh) * 2019-12-31 2020-05-12 新疆联海创智信息科技有限公司 多模信息识别方法、装置、存储介质及电子设备
CN111429215B (zh) * 2020-03-18 2023-10-31 北京互金新融科技有限公司 数据的处理方法和装置
CN111798047A (zh) * 2020-06-30 2020-10-20 平安普惠企业管理有限公司 风控预测方法、装置、电子设备及存储介质
CN112257974A (zh) * 2020-09-09 2021-01-22 北京无线电计量测试研究所 一种燃气闸井风险预测模型数据集、模型训练方法和应用
CN112131607B (zh) * 2020-09-25 2022-07-08 腾讯科技(深圳)有限公司 资源数据处理方法、装置、计算机设备和存储介质
CN112201343B (zh) * 2020-09-29 2024-02-02 浙江大学 基于脸部微表情的认知状态识别系统及方法
CN114765634B (zh) * 2021-01-13 2023-12-12 腾讯科技(深圳)有限公司 网络协议识别方法、装置、电子设备及可读存储介质
CN113139812A (zh) * 2021-04-27 2021-07-20 中国工商银行股份有限公司 用户的交易风险识别方法、装置和服务器
CN115688130B (zh) * 2022-10-17 2023-10-20 支付宝(杭州)信息技术有限公司 数据处理方法、装置及设备

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100124363A1 (en) * 2008-11-20 2010-05-20 Sony Ericsson Mobile Communications Ab Display privacy system
CN106339719A (zh) * 2016-08-22 2017-01-18 微梦创科网络科技(中国)有限公司 一种图像识别方法及装置
CN106447434A (zh) * 2016-09-14 2017-02-22 全联征信有限公司 个人信用生态平台
CN107704834A (zh) * 2017-10-13 2018-02-16 上海壹账通金融科技有限公司 微表情面审辅助方法、装置及存储介质

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9916538B2 (en) * 2012-09-15 2018-03-13 Z Advanced Computing, Inc. Method and system for feature detection
CN102819730A (zh) * 2012-07-23 2012-12-12 常州蓝城信息科技有限公司 一种人脸特征提取和识别的方法
CN106980811A (zh) * 2016-10-21 2017-07-25 商汤集团有限公司 人脸表情识别方法和人脸表情识别装置
CN106919903B (zh) * 2017-01-19 2019-12-17 中国科学院软件研究所 一种鲁棒的基于深度学习的连续情绪跟踪方法
CN107179683B (zh) * 2017-04-01 2020-04-24 浙江工业大学 一种基于神经网络的交互机器人智能运动检测与控制方法
CN107180234A (zh) * 2017-06-01 2017-09-19 四川新网银行股份有限公司 基于人脸表情识别和人脸特征提取的信用风险预测方法
CN107330785A (zh) * 2017-07-10 2017-11-07 广州市触通软件科技股份有限公司 一种基于大数据智能风控的小额贷款系统及方法

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100124363A1 (en) * 2008-11-20 2010-05-20 Sony Ericsson Mobile Communications Ab Display privacy system
CN106339719A (zh) * 2016-08-22 2017-01-18 微梦创科网络科技(中国)有限公司 一种图像识别方法及装置
CN106447434A (zh) * 2016-09-14 2017-02-22 全联征信有限公司 个人信用生态平台
CN107704834A (zh) * 2017-10-13 2018-02-16 上海壹账通金融科技有限公司 微表情面审辅助方法、装置及存储介质

Cited By (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112651267A (zh) * 2019-10-11 2021-04-13 阿里巴巴集团控股有限公司 识别方法、模型训练、系统及设备
CN110826320B (zh) * 2019-11-28 2023-10-13 上海观安信息技术股份有限公司 一种基于文本识别的敏感数据发现方法及系统
CN110826320A (zh) * 2019-11-28 2020-02-21 上海观安信息技术股份有限公司 一种基于文本识别的敏感数据发现方法及系统
CN112949359A (zh) * 2019-12-10 2021-06-11 清华大学 基于卷积神经网络的异常行为识别方法和装置
CN111210335B (zh) * 2019-12-16 2023-11-14 北京淇瑀信息科技有限公司 用户风险识别方法、装置及电子设备
CN111210335A (zh) * 2019-12-16 2020-05-29 北京淇瑀信息科技有限公司 用户风险识别方法、装置及电子设备
CN113010736A (zh) * 2019-12-20 2021-06-22 北京金山云网络技术有限公司 一种视频分类方法、装置、电子设备及存储介质
CN111222026B (zh) * 2020-01-09 2023-07-14 支付宝(杭州)信息技术有限公司 用户类别识别模型的训练方法和用户类别识别方法
CN111222026A (zh) * 2020-01-09 2020-06-02 支付宝(杭州)信息技术有限公司 用户类别识别模型的训练方法和用户类别识别方法
CN111291668A (zh) * 2020-01-22 2020-06-16 北京三快在线科技有限公司 活体检测方法、装置、电子设备及可读存储介质
CN111460909A (zh) * 2020-03-09 2020-07-28 兰剑智能科技股份有限公司 基于视觉的货位管理方法和装置
CN111400663A (zh) * 2020-03-17 2020-07-10 深圳前海微众银行股份有限公司 风险识别方法、装置、设备及计算机可读存储介质
CN111582654B (zh) * 2020-04-14 2023-03-28 五邑大学 基于深度循环神经网络的服务质量评价方法及其装置
CN111582654A (zh) * 2020-04-14 2020-08-25 五邑大学 基于深度循环神经网络的服务质量评价方法及其装置
CN113657136B (zh) * 2020-05-12 2024-02-13 阿里巴巴集团控股有限公司 识别方法及装置
CN113657136A (zh) * 2020-05-12 2021-11-16 阿里巴巴集团控股有限公司 识别方法及装置
CN111768286B (zh) * 2020-05-14 2024-02-20 北京旷视科技有限公司 风险预测方法、装置、设备及存储介质
CN111768286A (zh) * 2020-05-14 2020-10-13 北京旷视科技有限公司 风险预测方法、装置、设备及存储介质
CN111723907B (zh) * 2020-06-11 2023-02-24 浪潮电子信息产业股份有限公司 一种模型训练设备、方法、系统及计算机可读存储介质
CN111723907A (zh) * 2020-06-11 2020-09-29 浪潮电子信息产业股份有限公司 一种模型训练设备、方法、系统及计算机可读存储介质
CN111859913B (zh) * 2020-06-12 2024-04-12 北京百度网讯科技有限公司 风控特征因子的处理方法、装置、电子设备及存储介质
CN111859913A (zh) * 2020-06-12 2020-10-30 北京百度网讯科技有限公司 风控特征因子的处理方法、装置、电子设备及存储介质
CN111522570B (zh) * 2020-06-19 2023-09-05 杭州海康威视数字技术股份有限公司 目标库更新方法、装置、电子设备及机器可读存储介质
CN111522570A (zh) * 2020-06-19 2020-08-11 杭州海康威视数字技术股份有限公司 目标库更新方法、装置、电子设备及机器可读存储介质
CN111861701A (zh) * 2020-07-09 2020-10-30 深圳市富之富信息技术有限公司 风控模型优化方法、装置、计算机设备及存储介质
CN111950625A (zh) * 2020-08-10 2020-11-17 中国平安人寿保险股份有限公司 基于人工智能的风险识别方法、装置、计算机设备及介质
CN111950625B (zh) * 2020-08-10 2023-10-27 中国平安人寿保险股份有限公司 基于人工智能的风险识别方法、装置、计算机设备及介质
CN112329974B (zh) * 2020-09-03 2024-02-27 中国人民公安大学 基于lstm-rnn的民航安保事件行为主体识别与预测方法及系统
CN112329974A (zh) * 2020-09-03 2021-02-05 中国人民公安大学 基于lstm-rnn的民航安保事件行为主体识别与预测方法及系统
CN112070215A (zh) * 2020-09-10 2020-12-11 北京理工大学 基于bp神经网络的危险态势分析的处理方法及处理装置
CN112070215B (zh) * 2020-09-10 2023-08-29 北京理工大学 基于bp神经网络的危险态势分析的处理方法及处理装置
CN112116577A (zh) * 2020-09-21 2020-12-22 公安部物证鉴定中心 一种基于深度学习的篡改人像视频检测方法及系统
CN112116577B (zh) * 2020-09-21 2024-01-23 公安部物证鉴定中心 一种基于深度学习的篡改人像视频检测方法及系统
CN112258026A (zh) * 2020-10-21 2021-01-22 国网江苏省电力有限公司信息通信分公司 基于视频身份识别的动态定位调度方法及系统
CN112258026B (zh) * 2020-10-21 2023-12-15 国网江苏省电力有限公司信息通信分公司 基于视频身份识别的动态定位调度方法及系统
CN112330114A (zh) * 2020-10-27 2021-02-05 南京航空航天大学 基于混合深度神经网络的飞机危险识别方法
CN112329849A (zh) * 2020-11-04 2021-02-05 中冶赛迪重庆信息技术有限公司 基于机器视觉的废钢料场卸料状态识别方法、介质及终端
CN112397204B (zh) * 2020-11-16 2024-01-19 中国人民解放军空军特色医学中心 一种预测高原病的方法、装置、计算机设备和存储介质
CN112397204A (zh) * 2020-11-16 2021-02-23 中国人民解放军空军特色医学中心 一种预测高原病的方法、装置、计算机设备和存储介质
CN112509129A (zh) * 2020-12-21 2021-03-16 神思电子技术股份有限公司 一种基于改进gan网络的空间视场图像生成方法
CN112990432B (zh) * 2021-03-04 2023-10-27 北京金山云网络技术有限公司 目标识别模型训练方法、装置及电子设备
CN112990432A (zh) * 2021-03-04 2021-06-18 北京金山云网络技术有限公司 目标识别模型训练方法、装置及电子设备
CN113343821A (zh) * 2021-05-31 2021-09-03 合肥工业大学 一种基于时空注意力网络和输入优化的非接触式心率测量方法
CN113343821B (zh) * 2021-05-31 2022-08-30 合肥工业大学 一种基于时空注意力网络和输入优化的非接触式心率测量方法
CN113923464A (zh) * 2021-09-26 2022-01-11 北京达佳互联信息技术有限公司 视频违规率确定方法、装置、设备、介质及程序产品
CN114740774A (zh) * 2022-04-07 2022-07-12 青岛沃柏斯智能实验科技有限公司 一种通风柜安全操作的行为分析控制系统

Also Published As

Publication number Publication date
CN108510194B (zh) 2022-11-29
CN108510194A (zh) 2018-09-07

Similar Documents

Publication Publication Date Title
WO2019184124A1 (zh) 风控模型训练方法、风险识别方法、装置、设备及介质
CN113705769B (zh) 一种神经网络训练方法以及装置
CN112507898B (zh) 一种基于轻量3d残差网络和tcn的多模态动态手势识别方法
WO2021042828A1 (zh) 神经网络模型压缩的方法、装置、存储介质和芯片
WO2020215557A1 (zh) 医学影像解释方法、装置、计算机设备及存储介质
US12056941B2 (en) Computer vision systems and methods for information extraction from text images using evidence grounding techniques
WO2019228317A1 (zh) 人脸识别方法、装置及计算机可读介质
US20210241034A1 (en) Method of and system for generating training images for instance segmentation machine learning algorithm
WO2021147325A1 (zh) 一种物体检测方法、装置以及存储介质
WO2021218899A1 (zh) 人脸识别模型训练方法、人脸识别方法及装置
WO2022111506A1 (zh) 视频动作识别方法、装置、电子设备和存储介质
CN111291809B (zh) 一种处理装置、方法及存储介质
CN110288555B (zh) 一种基于改进的胶囊网络的低照度增强方法
WO2019232847A1 (zh) 手写模型训练方法、手写字识别方法、装置、设备及介质
CN112307982B (zh) 基于交错增强注意力网络的人体行为识别方法
US20230048405A1 (en) Neural network optimization method and apparatus
CN111782840A (zh) 图像问答方法、装置、计算机设备和介质
CN110837846A (zh) 一种图像识别模型的构建方法、图像识别方法及装置
WO2023061102A1 (zh) 视频行为识别方法、装置、计算机设备和存储介质
CN113807214B (zh) 基于deit附属网络知识蒸馏的小目标人脸识别方法
TWI803243B (zh) 圖像擴增方法、電腦設備及儲存介質
WO2019232855A1 (zh) 手写模型训练方法、手写字识别方法、装置、设备及介质
CN111079930B (zh) 数据集质量参数的确定方法、装置及电子设备
CN116596916A (zh) 缺陷检测模型的训练和缺陷检测方法及其装置
CN109101984B (zh) 一种基于卷积神经网络的图像识别方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18912608

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 19/01/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18912608

Country of ref document: EP

Kind code of ref document: A1