WO2019223139A1 - Risk prediction method, device, storage medium and server
- Publication number: WO2019223139A1 (PCT/CN2018/101579)
- Authority: WO (WIPO (PCT))
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q40/00—Finance; Insurance; Tax strategies; Processing of corporate or income taxes
- G06Q40/03—Credit; Loans; Processing thereof
Definitions
- the present application relates to the field of information monitoring, and in particular, to a risk prediction method, device, storage medium, and server.
- When applying for a bank loan, a credit applicant usually needs to undergo a credit interview.
- the risk forecaster interviews the credit applicant to verify the relevant information.
- Existing credit interviews generally involve credit applicants filling in paper credit materials; risk forecasters review the materials filled in by the applicants and, through interviews, predict the applicants' ability to repay the loan in the future and propose risk prevention and control measures.
- The existing risk forecasting methods rely mainly on the credit applicant's honesty in filling in the materials and on the risk forecaster's experience-based evaluation; without objective auxiliary judgments, the accuracy of repayment-ability prediction is easily low.
- The embodiments of the present application provide a risk prediction method, device, storage medium, and server to address the fact that existing risk prediction methods rely mainly on the integrity of the credit applicant and the experience-based evaluation of the risk predictor; lacking objective auxiliary judgments, they easily suffer from low accuracy and low efficiency of risk prediction.
- a first aspect of the embodiments of the present application provides a risk prediction method, including:
- acquiring video images of the applicant during the interview; clustering the video frames in the video images to determine key frame video images; determining, according to the determined key frame video images and a micro-expression fraud probability model, the fraud probability of the micro-expressions corresponding to the key frame video images; and performing risk prediction on the applicant's repayment ability according to the determined fraud probability.
- a second aspect of the embodiments of the present application provides a risk prediction device, including:
- a video image acquisition module for acquiring video images of the applicant during the interview process
- a key image determination module configured to cluster video frames in the video image to determine a key frame video image
- a fraud probability determining module configured to determine a fraud probability of the micro-expressions corresponding to the key frame video image according to the determined key frame video image and a micro-expression fraud probability model;
- a risk prediction module is configured to perform risk prediction on the repayment ability of the applicant according to the determined fraud probability of the micro-expressions corresponding to the key frame video image.
- a third aspect of the embodiments of the present application provides a server, including a memory and a processor, where the memory stores computer-readable instructions that can run on the processor, and the processor executes the computer-readable instructions to implement the following steps:
- acquiring video images of the applicant during the interview; clustering the video frames in the video images to determine key frame video images; determining, according to the determined key frame video images and the micro-expression fraud probability model, the fraud probability of the micro-expressions corresponding to the key frame video images; and performing risk prediction on the applicant's repayment ability according to the determined fraud probability.
- a fourth aspect of the embodiments of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores computer-readable instructions, and when the computer-readable instructions are executed by a processor, the following steps are implemented:
- acquiring video images of the applicant during the interview; clustering the video frames in the video images to determine key frame video images; determining, according to the determined key frame video images and the micro-expression fraud probability model, the fraud probability of the micro-expressions corresponding to the key frame video images; and performing risk prediction on the applicant's repayment ability according to the determined fraud probability.
- In the embodiments of the present application, video images of the applicant during the interview are acquired and the video frames are clustered to determine key frame video images; the fraud probability of the micro-expressions corresponding to the key frame video images is then determined according to the determined key frame video images and the micro-expression fraud probability model, and finally risk prediction is performed on the applicant's repayment ability according to that fraud probability.
- This solution analyzes the applicant's micro-expressions during the interview to evaluate the applicant's fraud probability and predicts the repayment risk accordingly, providing objective auxiliary judgments for predicting and assessing the applicant's repayment ability and thereby improving the accuracy and efficiency of risk prediction.
- FIG. 1 is an implementation flowchart of a risk prediction method provided by an embodiment of the present application;
- FIG. 2 is a specific implementation flowchart of step S102 of the risk prediction method provided by an embodiment of the present application;
- FIG. 3 is another specific implementation flowchart of step S102 of the risk prediction method provided by an embodiment of the present application;
- FIG. 4 is a specific implementation flowchart of training the micro-expression fraud probability model in the risk prediction method provided by an embodiment of the present application;
- FIG. 5 is a specific implementation flowchart of step S104 of the risk prediction method provided by an embodiment of the present application;
- FIG. 6 is a structural block diagram of a risk prediction device provided by an embodiment of the present application;
- FIG. 7 is a structural block diagram of a risk prediction device provided by another embodiment of the present application;
- FIG. 8 is a schematic diagram of a server provided by an embodiment of the present application.
- FIG. 1 illustrates an implementation flow of a risk prediction method provided by an embodiment of the present application, and the method flow includes steps S101 to S104.
- the specific implementation principle of each step is as follows:
- During the applicant's interview, the performance of the applicant is captured by a camera; the video image therefore includes expression images of the applicant.
- Many of the applicant's expressions flash by; these are micro-expressions.
- Micro-expressions are very fast expressions lasting only 1/25 to 1/5 s: brief, involuntary facial expressions shown when people try to suppress or hide their true emotions.
- While answering a question, the applicant is expressionless or shows other common expressions most of the time, and useful information often appears only in micro-expressions that may flash by. Therefore, by video-recording the applicant's performance during the interview and analyzing the user's micro-expressions, possible micro-expressions are not missed and the accuracy of risk prediction is improved.
- Further, the complete video image of the interview can be divided into multiple video images according to the duration of each question and answer, and micro-expression analysis is performed separately on each of them; that is, the video image of the interview is divided into several sub-video images by content or duration.
- S102 Cluster video frames in the video image to determine a key frame video image.
- clustering is performed according to facial expressions in a video frame.
- The principles of clustering are: the facial expressions in the video frame images within a cluster tend to be similar to each other (high similarity), while different facial expressions tend to be dissimilar (low similarity). High and low similarity are judged against a set similarity threshold: if the similarity is not lower than the preset similarity threshold, the similarity is considered high and the expressions tend to be similar; if it is below the preset similarity threshold, the similarity is considered low and the expressions tend to be dissimilar.
- the above S102 specifically includes:
- A1 Select a specified number of video frames from the video image as an initial clustering center.
- A2 Calculate the similarity between the video frame in the video image and the initial cluster center. Specifically, the similarity between the facial expression in the video frame and the facial expression in the video frame as the initial clustering center is calculated.
- A3 Cluster the video frames in the video image according to the calculated similarity and a preset minimum similarity. Specifically, if the calculated similarity is not less than the preset minimum similarity, the video frame is clustered with the initial clustering center; conversely, if the calculated similarity is less than the preset minimum similarity, it is not clustered with that center. Further, all video frames whose calculated similarity is less than the preset minimum similarity are clustered separately into their own cluster.
- A4 Re-select the cluster center from the clustered video frames and repeat the clustering until the cluster center converges.
- cluster center convergence means that the video frame image as the cluster center is no longer changed.
- Each video frame image is marked with a time stamp, and whether the cluster center has converged can be determined by checking whether the time stamp of the video frame image serving as the cluster center has changed. If that time stamp does not change, the cluster center is considered to have converged; if it changes, the cluster center is considered not yet converged.
- A5 Determine the finally determined cluster center as a key frame video image of the video image.
- the key frame video image is determined by clustering the video frames in the video image, and redundant video frames are eliminated to improve the efficiency of image analysis.
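Steps A1 to A5 above can be sketched as a small K-means-style routine. This is a minimal illustrative sketch, not the patented implementation: it assumes each video frame has been reduced to an expression feature vector (for example, action-unit intensities), uses cosine similarity as the similarity measure, and all function and parameter names are hypothetical.

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two expression feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def cluster_key_frames(frames, k=2, min_sim=0.9, max_iter=50):
    """frames: list of (timestamp, feature_vector) pairs.
    Returns the timestamps of the final cluster centers (key frames)."""
    centers = frames[:k]  # A1: a specified number of frames as initial centers
    for _ in range(max_iter):
        clusters = {c[0]: [c] for c in centers}
        outliers = []  # frames too dissimilar to every center (A3)
        for f in frames:
            # A2: similarity between the frame and each cluster center
            best_sim, best_c = max(
                (cosine_sim(f[1], c[1]), c) for c in centers)
            if best_sim >= min_sim:
                clusters[best_c[0]].append(f)
            else:
                outliers.append(f)
        if outliers:  # dissimilar frames form their own cluster
            clusters['outliers'] = outliers
        # A4: re-select each center as the member closest to the cluster mean
        new_centers = []
        for members in clusters.values():
            dim = len(members[0][1])
            mean = [sum(m[1][i] for m in members) / len(members)
                    for i in range(dim)]
            new_centers.append(max(members,
                                   key=lambda m: cosine_sim(m[1], mean)))
        # convergence: the time stamps of the centers no longer change
        if {c[0] for c in new_centers} == {c[0] for c in centers}:
            break
        centers = new_centers
    return [c[0] for c in centers]  # A5: final centers are the key frames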
- As shown in FIG. 3, each video frame image includes facial action units. Therefore, the above S102 specifically includes:
- B1 Obtain the intensity values of the facial action units in each video frame image according to the video frames in the video image.
- B2 Classify the video frame pictures according to the intensity values of the facial action units, and determine the key frame video images according to the classification result.
- To objectively characterize the applicant's facial expressions, a set of codes is used to describe them; each code is called an action unit (ActionUnit).
- A person's facial expression is represented by a series of action units, and an action-unit number mapping table is established in which each action unit is represented by a predetermined number.
- a surprised expression includes the inner eyebrows raised, the outer eyebrows raised, the upper eyelids raised, and the jaw opened.
- the action unit number mapping table it can be known that the corresponding action unit numbers for these actions are 1, 2, 6, and 18.
- This set of codes describes the surprised expression.
- Action unit recognition can objectively describe a person's facial movements, and can also be used to analyze the emotional state corresponding to an expression.
- a facial expression includes a plurality of action units.
- When, among the plurality of action units, the intensity values of the specified action units are all not less than a preset intensity threshold, the plurality of action units are determined to correspond to that expression.
- the intensity value of the action unit may be reflected by the motion amplitude of the facial organ.
- the intensity value of the action unit may be determined by the difference between the width of the jaw opening and a preset amplitude threshold.
- More than one action unit may be included in the same facial expression. For example, in a surprised expression, the intensity value of each action unit is determined from the difference between the movement amplitude of the corresponding facial organ (inner eyebrow raise, outer eyebrow raise, upper eyelid raise, jaw opening) and its preset amplitude threshold.
- The intensity value of the group of action units is then determined from the sum of the intensity values of the individual action units, so as to determine the facial expression corresponding to the group.
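The amplitude-difference rule above might be sketched as follows. The patent only states that an action unit's intensity is judged from the difference between the movement amplitude and its preset amplitude threshold, and that a group's intensity is the sum of its members; the clamp at zero and the names are assumptions added for illustration.

```python
def au_intensity(amplitude, amp_threshold):
    """Intensity of a single action unit: how far the facial-organ
    movement amplitude exceeds its preset amplitude threshold
    (clamped at zero when the threshold is not reached -- an assumption)."""
    return max(0.0, amplitude - amp_threshold)

def group_intensity(amplitudes, thresholds):
    """Intensity of a group of action units: the sum of the individual
    action-unit intensities, per the description above."""
    return sum(au_intensity(a, t) for a, t in zip(amplitudes, thresholds))
```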
- Facial action units whose intensity values differ within a preset difference range correspond to very similar facial expressions. Therefore, the video frame pictures are classified according to the intensity values of the facial action units, the key frame video images are determined according to the classification results, and redundant video frames are eliminated.
- Further, all video frames in the video image are clustered according to the intensity of the facial action units in each frame, with a set number of video frames randomly selected as initial clustering centers (for 7 clustering centers, for example, 7 video frame images are selected), and key frame pictures are selected according to preset action-unit intensity thresholds obtained in advance from statistical results. For example, a video frame image is considered a key frame video image only if the intensity values of the specified facial action units are all not less than the preset intensity threshold, and the difference between the intensity values of its facial action units and those of the cluster center is not less than the preset intensity difference.
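The key-frame test just described (specified action units at or above the preset intensity threshold, plus a sufficient intensity difference from the cluster center) can be sketched as below; the dictionary representation, the AU numbers, and the threshold values are illustrative assumptions.

```python
def is_key_frame(frame_aus, center_aus, specified_aus,
                 intensity_threshold, min_intensity_diff):
    """frame_aus / center_aus: dicts mapping action-unit number -> intensity.
    A frame counts as a key frame when every specified action unit reaches
    the preset intensity threshold AND the frame's total AU intensity
    differs from the cluster center's by at least the preset difference."""
    if any(frame_aus.get(au, 0.0) < intensity_threshold
           for au in specified_aus):
        return False
    frame_total = sum(frame_aus.values())    # group intensity = sum of AUs
    center_total = sum(center_aus.values())
    return abs(frame_total - center_total) >= min_intensity_diff
```

For instance, with the surprised-expression action units 1, 2, 6, and 18 from the mapping-table example above, a frame whose AU intensities all clear the threshold and which stands far from the cluster center would be kept as a key frame.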
- S103 Determine the fraud probability of the micro expression corresponding to the key frame video image according to the determined key frame video image and the micro expression fraud probability model.
- The micro-expression fraud probability model is used to obtain the fraud probability of the micro-expressions in the key frame video image.
- The micro-expression fraud probability model is pre-trained; machine learning can be used to train it.
- FIG. 4 shows a specific implementation process of training the micro-expression fraud probability model of the risk prediction method provided by the embodiment of the present application, as follows:
- C1 Obtain a set number of sample videos with tags, where the tags include fraud and non-fraud. Since the sample video is labeled with fraud or non-fraud, each video frame image in the sample video is labeled with the same label as the sample video.
- C2 Extract a sample key frame image in each sample video, and the label of the sample key frame image is the same as the label of the sample video to which it belongs.
- C3 Use the extracted sample key frame images as training samples to train an SVM classifier, and determine the trained SVM classifier as the micro-expression fraud probability model.
- SVM (Support Vector Machine) is a common discriminative method: a supervised learning model usually used for pattern recognition, classification, and regression analysis.
- the sample key frame image of each sample video is extracted according to the K-means clustering algorithm, the strength of the action unit in the sample key frame image is determined, and the sample key frame image is used as a training sample to train the SVM classifier, where:
- the data point is a sample key frame image, which is characterized by the intensity value of the facial action unit.
- the label is fraud or non-fraud.
- the optimal parameters of the SVM classifier are determined to generate a micro-expression fraud probability model.
- In the embodiments of the present application, an SVM classifier is trained on the sample key frame images to generate the micro-expression fraud probability model, and the determined key frame video images are input to the micro-expression fraud probability model to determine the fraud probability of the micro-expressions corresponding to the key frame video images.
- Specifically, a hyperplane is trained using the SVM classifier, and the facial expression action units form a point in the feature space. Inputting the intensity values of the facial expression action units in a picture amounts to inputting a point; the distance from that point to the hyperplane of the SVM classifier is computed and input to the sigmoid function to obtain a probability value.
- Each time a key frame video image is input into the micro-expression fraud probability model, a probability value is obtained. If the fraud probability of the key frame video image is greater than a preset probability threshold, the micro-expression in the key frame video image is determined to be a fraud expression; if it is not greater than the preset probability threshold, the micro-expression is determined to be a non-fraud expression. Generally, the preset probability threshold is set to 50%.
- Further, the multiple key frame video images of a video image are input in turn to the micro-expression fraud probability model to obtain probability values; the average of these key frame probability values is then determined as the fraud probability of the video image, and whether the video image is fraudulent is determined by comparing this fraud probability with the preset probability threshold.
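As a rough illustration of S103, the sketch below trains a linear classifier with a Pegasos-style sub-gradient method (standing in for the SVM classifier described above; in practice a library implementation such as scikit-learn's SVC would normally be used), converts a key frame's signed distance from the hyperplane into a fraud probability with the sigmoid function, and averages over the key frames of a video. All data, names, and parameters are illustrative assumptions.

```python
import math
import random

def train_linear_svm(samples, labels, lam=0.1, epochs=300, seed=0):
    """Pegasos-style sub-gradient training of a linear SVM.
    samples: AU-intensity feature vectors; labels: +1 (fraud) / -1 (non-fraud)."""
    rng = random.Random(seed)
    dim = len(samples[0])
    w, b, t = [0.0] * dim, 0.0, 0
    for _ in range(epochs):
        for i in rng.sample(range(len(samples)), len(samples)):
            t += 1
            eta = 1.0 / (lam * t)  # decaying step size
            margin = labels[i] * (
                sum(wj * xj for wj, xj in zip(w, samples[i])) + b)
            w = [wj * (1 - eta * lam) for wj in w]  # regularization shrink
            if margin < 1:  # hinge-loss sub-gradient step on violations
                w = [wj + eta * labels[i] * xj
                     for wj, xj in zip(w, samples[i])]
                b += eta * labels[i]
    return w, b

def fraud_probability(w, b, frame):
    """Signed distance of a key frame's AU-intensity point from the
    hyperplane, squashed by the sigmoid into a fraud probability."""
    d = sum(wj * xj for wj, xj in zip(w, frame)) + b
    return 1.0 / (1.0 + math.exp(-d))

def video_fraud_probability(w, b, key_frames, threshold=0.5):
    """Average the per-key-frame probabilities and compare the result
    with the preset probability threshold (50% by default)."""
    p = sum(fraud_probability(w, b, f) for f in key_frames) / len(key_frames)
    return p, p > threshold
```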
- S104 Perform risk prediction on the repayment ability of the applicant according to the determined fraud probability of the micro-expressions corresponding to the key frame video image.
- In the embodiments of the present application, the credibility of the applicant can be determined from the fraud probability of the micro-expressions corresponding to the key frame video images.
- Credibility is inversely related to fraud probability: the higher the fraud probability, the lower the credibility. Credibility is, in turn, inversely related to repayment risk.
- FIG. 5 shows a specific implementation process of step S104 of the risk prediction method provided by the embodiment of the present application, which is detailed as follows:
- D1 Compare the fraud probability of the micro expression corresponding to the key frame video image with a preset risk probability threshold.
- Using the preset risk probability threshold as a critical point, it is predicted whether the repayment risk of the applicant exceeds a preset risk range: if the fraud probability of the micro-expressions corresponding to the key frame video image is not less than the preset risk probability threshold, the applicant's repayment risk is predicted to exceed the preset risk range; otherwise, it is predicted to be within the preset risk range. Further, the difference between the fraud probability and the preset risk probability threshold is calculated and, together with a preset risk difference, the level of the applicant's repayment risk is determined according to a pre-established difference-level comparison table.
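The comparison in D1, together with the difference-level comparison table mentioned above, might look like the following sketch; the threshold value and the level boundaries are invented for illustration only.

```python
def predict_repayment_risk(fraud_prob, risk_threshold=0.5,
                           level_table=((0.0, 'low'),
                                        (0.1, 'medium'),
                                        (0.3, 'high'))):
    """Compare the fraud probability with the preset risk probability
    threshold (D1). If it is not below the threshold, the repayment risk
    exceeds the preset range, and the difference indexes a pre-established
    difference-level table to grade the risk. Table values are illustrative."""
    if fraud_prob < risk_threshold:
        return 'within preset risk range', None
    diff = fraud_prob - risk_threshold
    level = None
    for lower_bound, name in level_table:  # pick the highest matching level
        if diff >= lower_bound:
            level = name
    return 'exceeds preset risk range', level
```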
- the step S104 further includes:
- Further, measures matching the applicant's repayment risk are retrieved from a measure database established from historical review information, to provide suggestions to the reviewer.
- In the embodiments of the present application, the video frames in the video image are clustered, and the key frame video images are determined either according to the similarity of the facial expressions in the video frames or according to the intensity values of the facial action units in the video frames. The fraud probability of the micro-expressions corresponding to the key frame video images is then determined according to the determined key frame video images and the micro-expression fraud probability model, and finally a risk prediction is made on the applicant's repayment ability according to that fraud probability.
- By analyzing the applicant's micro-expressions, the applicant's fraud probability is evaluated, the applicant's credibility is judged by comparing the fraud probability with the preset risk probability threshold, and risk prediction is performed on the applicant's repayment ability accordingly, providing objective auxiliary judgments for predicting and assessing the applicant's repayment ability and thereby improving both the accuracy and the efficiency of risk prediction.
- FIG. 6 shows a structural block diagram of a risk prediction device provided by an embodiment of the present application. For convenience of explanation, only a part related to the embodiment of the present application is shown.
- the risk prediction device includes a video image acquisition module 61, a key image determination module 62, a fraud probability determination module 63, and a risk prediction module 64, where:
- a video image acquisition module 61 configured to acquire a video image of the applicant during the interview process
- a key image determination module 62 configured to cluster video frames in the video image to determine a key frame video image
- a fraud probability determining module 63 is configured to determine a fraud probability of a micro expression corresponding to the key frame video image according to the determined key frame video image and a micro expression fraud probability model;
- a risk prediction module 64 is configured to perform risk prediction on the repayment ability of the applicant according to the determined fraud probability of the micro-expressions corresponding to the key frame video image.
- the key image determination module 62 includes:
- a center determining submodule configured to select a specified number of video frames from the video image as an initial clustering center
- a similarity calculation submodule configured to calculate a similarity between a video frame in the video image and the initial cluster center
- a clustering submodule configured to cluster video frames in the video image according to the calculated similarity and a preset minimum similarity
- the clustering sub-module is further configured to re-select the cluster center from the clustered video frames and repeat the clustering until the cluster center converges;
- a first image determination sub-module is configured to determine a finally determined cluster center as a key frame video image of the video image.
- the key image determination module 62 includes:
- An intensity value determination sub-module configured to obtain an intensity value of a human face and facial action unit in each frame of the video image according to the video frame in the video image;
- a second image determination sub-module is configured to classify the video frame pictures according to the intensity value of a human face and facial action unit, and determine a key frame video image according to the classification result.
- the risk prediction module 64 includes:
- a probability comparison submodule configured to compare the fraud probability of the micro-expressions corresponding to the key frame video image with a preset risk probability threshold
- a first prediction submodule configured to predict that the repayment risk of the applicant exceeds a preset risk range if the fraud probability of the micro-expressions corresponding to the key frame video image is not less than the preset risk probability threshold;
- the second prediction submodule is configured to predict that the repayment risk of the applicant is within a preset risk range if the fraud probability of the micro-expressions corresponding to the key frame video image is less than the preset risk probability threshold.
- the risk prediction device further includes:
- a sample video obtaining module 71 configured to obtain a set number of sample videos with tags, where the tags include fraud and non-fraud;
- the key image extraction module 72 is configured to extract a sample key frame image in each sample video, and a label of the sample key frame image is the same as a label of the sample video to which the sample key frame image belongs;
- a classification training module 73 is configured to train the SVM classifier using the extracted sample key frame images as training samples, and determine the trained SVM classifier as a micro-expression fraud probability model.
- FIG. 8 is a schematic diagram of a server provided by an embodiment of the present application.
- the server 8 of this embodiment includes: a processor 80, a memory 81, and computer-readable instructions 82 stored in the memory 81 and executable on the processor 80, such as a risk prediction program.
- When the processor 80 executes the computer-readable instructions 82, the steps in the foregoing risk prediction method embodiments are implemented, for example, steps S101 to S104 shown in FIG. 1.
- Alternatively, when the processor 80 executes the computer-readable instructions 82, the functions of each module/unit in the foregoing device embodiments are implemented, for example, the functions of modules 61 to 64 shown in FIG. 6.
- The computer-readable instructions 82 may be divided into one or more modules/units, which are stored in the memory 81 and executed by the processor 80 to complete this application.
- The one or more modules/units may be a series of computer-readable instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution process of the computer-readable instructions 82 in the server 8.
- the server 8 may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
- the server may include, but is not limited to, a processor 80 and a memory 81.
- FIG. 8 is only an example of the server 8 and does not constitute a limitation on the server 8; it may include more or fewer components than shown in the figure, or combine some components, or use different components.
- For example, the server may further include input/output devices, network access devices, a bus, and the like.
- The processor 80 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on.
- a general-purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
- the memory 81 may be an internal storage unit of the server 8, such as a hard disk or a memory of the server 8.
- The memory 81 may also be an external storage device of the server 8, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash memory card, and so on, equipped on the server 8.
- the memory 81 may further include both an internal storage unit of the server 8 and an external storage device.
- the memory 81 is configured to store the computer-readable instructions and other programs and data required by the server.
- the memory 81 may also be used to temporarily store data that has been output or is to be output.
- each functional unit in each embodiment of the present application may be integrated into one processing unit, or each of the units may exist separately physically, or two or more units may be integrated into one unit.
- the above integrated unit may be implemented in the form of hardware or in the form of software functional unit.
- the computer-readable medium may include: any entity or device capable of carrying the computer-readable instructions, a recording medium, a U disk, a mobile hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM, Read-Only Memory), random access memory (RAM, Random Access Memory), electric carrier signals, telecommunication signals, and software distribution media.
Abstract
The present application provides a risk prediction method, device, storage medium, and server, including: acquiring video images of an applicant during an interview; clustering the video frames in the video images to determine key frame video images; determining, according to the determined key frame video images and a micro-expression fraud probability model, the fraud probability of the micro-expressions corresponding to the key frame video images; and performing risk prediction on the applicant's repayment ability according to the determined fraud probability of the micro-expressions corresponding to the key frame video images. By analyzing the applicant's micro-expressions during the interview, the present application evaluates the applicant's fraud probability and predicts the repayment risk accordingly, providing objective auxiliary judgments for predicting and assessing the applicant's repayment ability and thereby improving the accuracy and efficiency of risk prediction.
Description
This application claims priority to Chinese patent application No. CN201810496195.1, filed with the Chinese Patent Office on May 22, 2018 and entitled "Risk prediction method, storage medium and server", the entire contents of which are incorporated herein by reference.
The present application relates to the field of information monitoring, and in particular to a risk prediction method, device, storage medium, and server.
When applying for a bank loan, a credit applicant usually needs to undergo a credit interview, in which a risk forecaster interviews the applicant to verify the relevant information. In existing credit interviews, the applicant generally fills in paper credit materials; the risk forecaster reviews the materials filled in by the applicant and, through the interview, predicts the applicant's ability to repay the loan in the future and proposes risk prevention and control measures.
In fact, predicting the applicant's future repayment ability from the paper credit materials and the risk forecaster's review experience relies mainly on the honesty of the applicant in filling in the materials and on the forecaster's experience. Lacking objective auxiliary judgments, it easily leads to inaccurate predictions of repayment ability, which in turn affects the accuracy of the proposed risk prevention and control measures.
In summary, existing risk prediction approaches rely mainly on the honesty of the credit applicant and the experience-based evaluation of the risk forecaster; lacking objective auxiliary judgments, they suffer from low accuracy in predicting repayment ability.
The embodiments of the present application provide a risk prediction method, device, storage medium, and server to solve the problem that existing risk prediction approaches, relying mainly on the honesty of the credit applicant and the experience-based evaluation of the risk forecaster and lacking objective auxiliary judgments, easily suffer from low accuracy and low efficiency of risk prediction.
A first aspect of the embodiments of the present application provides a risk prediction method, including:
acquiring video images of an applicant during an interview;
clustering the video frames in the video images to determine key frame video images;
determining, according to the determined key frame video images and a micro-expression fraud probability model, the fraud probability of the micro-expressions corresponding to the key frame video images;
performing risk prediction on the applicant's repayment ability according to the determined fraud probability of the micro-expressions corresponding to the key frame video images.
A second aspect of the embodiments of the present application provides a risk prediction device, including:
a video image acquisition module for acquiring video images of the applicant during the interview;
a key image determination module for clustering the video frames in the video images to determine key frame video images;
a fraud probability determination module for determining, according to the determined key frame video images and the micro-expression fraud probability model, the fraud probability of the micro-expressions corresponding to the key frame video images;
a risk prediction module for performing risk prediction on the applicant's repayment ability according to the determined fraud probability of the micro-expressions corresponding to the key frame video images.
A third aspect of the embodiments of the present application provides a server, including a memory and a processor, where the memory stores computer-readable instructions executable on the processor, and the processor implements the following steps when executing the computer-readable instructions:
acquiring video images of the applicant during the interview;
clustering the video frames in the video images to determine key frame video images;
determining, according to the determined key frame video images and the micro-expression fraud probability model, the fraud probability of the micro-expressions corresponding to the key frame video images;
performing risk prediction on the applicant's repayment ability according to the determined fraud probability of the micro-expressions corresponding to the key frame video images.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing computer-readable instructions that, when executed by a processor, implement the following steps:
acquiring video images of the applicant during the interview;
clustering the video frames in the video images to determine key frame video images;
determining, according to the determined key frame video images and the micro-expression fraud probability model, the fraud probability of the micro-expressions corresponding to the key frame video images;
performing risk prediction on the applicant's repayment ability according to the determined fraud probability of the micro-expressions corresponding to the key frame video images.
In the embodiments of the present application, video images of the applicant during the interview are acquired, the video frames in the video images are clustered to determine key frame video images, the fraud probability of the micro-expressions corresponding to the key frame video images is determined according to the determined key frame video images and the micro-expression fraud probability model, and finally risk prediction is performed on the applicant's repayment ability according to the determined fraud probability. By analyzing the applicant's micro-expressions during the interview, this solution evaluates the applicant's fraud probability and predicts the repayment risk accordingly, providing objective auxiliary judgments for predicting and assessing the applicant's repayment ability and thereby improving both the accuracy and the efficiency of risk prediction.
To explain the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is an implementation flowchart of the risk prediction method provided by an embodiment of the present application;
FIG. 2 is a specific implementation flowchart of step S102 of the risk prediction method provided by an embodiment of the present application;
FIG. 3 is another specific implementation flowchart of step S102 of the risk prediction method provided by an embodiment of the present application;
FIG. 4 is a specific implementation flowchart of training the micro-expression fraud probability model in the risk prediction method provided by an embodiment of the present application;
FIG. 5 is a specific implementation flowchart of step S104 of the risk prediction method provided by an embodiment of the present application;
FIG. 6 is a structural block diagram of the risk prediction device provided by an embodiment of the present application;
FIG. 7 is a structural block diagram of the risk prediction device provided by another embodiment of the present application;
FIG. 8 is a schematic diagram of the server provided by an embodiment of the present application.
To make the objectives, features, and advantages of the present application more obvious and understandable, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
Embodiment 1
FIG. 1 shows the implementation flow of the risk prediction method provided by an embodiment of the present application; the flow includes steps S101 to S104, whose specific implementation principles are as follows:
S101: Acquire video images of the applicant during the interview.
In this embodiment, during the applicant's interview, a camera captures the applicant's performance, in particular the applicant's facial expressions; the video images therefore include expression images of the applicant. In fact, many of the applicant's expressions flash by, i.e., micro-expressions. A micro-expression is a very fast expression lasting only 1/25 to 1/5 s: a brief, involuntary facial expression shown when people try to suppress or hide their true emotions. Since the applicant is expressionless or shows other common expressions most of the time while answering a question, and useful information often appears only in micro-expressions that may flash by, the applicant's performance during the interview is video-recorded and the user's micro-expressions are analyzed, so that possible micro-expressions are not missed and the accuracy of risk prediction is improved.
Further, since the applicant may answer more than one question during the interview, the complete video image of the interview can be divided into multiple video images according to the duration of each question and answer, and micro-expression analysis is performed separately on each of them; that is, the video image of the interview is divided into several sub-video images by content or duration.
S102: Cluster the video frames in the video images to determine key frame video images.
Useful information often appears only in micro-expressions that may flash by; within a video image, the duration for which an expression appears strongly affects the analysis of that expression. Therefore, determining the key frame video images in the video image and eliminating redundant frames removes the influence of video duration on micro-expression analysis. In this embodiment, the key frame video images are determined by clustering the video frames, which avoids analyzing every frame and improves the efficiency of analysis.
Specifically, in this embodiment, clustering is performed according to the facial expressions in the video frames. The principles of clustering are: the facial expressions in the video frame images within a cluster tend to be similar to each other (high similarity), while different facial expressions tend to be dissimilar (low similarity). High and low similarity are judged against a set similarity threshold: if the similarity is not lower than the preset similarity threshold, it is considered high and the expressions tend to be similar; if it is below the preset threshold, it is considered low and the expressions tend to be dissimilar.
As an embodiment of the present application, as shown in Fig. 2, the above S102 specifically includes:
A1: Select a specified number of video frames from the video image as initial cluster centres.
A2: Calculate the similarity between the video frames in the video image and the initial cluster centres. Specifically, calculate the similarity between the facial expression in a video frame and the facial expression in a frame serving as an initial cluster centre.
A3: Cluster the video frames in the video image according to the calculated similarity and a preset minimum similarity. Specifically, if the calculated similarity is not less than the preset minimum similarity, the frame is clustered with the initial cluster centre; otherwise, it is not. Further, all frames whose calculated similarity is below the preset similarity are clustered separately into a cluster of their own.
A4: Reselect cluster centres from the clustered video frames and repeat the clustering until the cluster centres converge. Convergence means that the frames serving as cluster centres no longer change. Specifically, every video frame carries a timestamp, so convergence can be judged from whether the timestamps of the centre frames change: if the timestamps of the centre frames do not change, the centres are deemed to have converged; if they change, the centres have not yet converged.
A5: Determine the finally determined cluster centres as the key-frame video images of the video image.
In the embodiments of the present application, clustering the video frames to determine key-frame video images discards redundant frames and improves the efficiency of the image analysis.
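The clustering steps A1 to A5 above can be sketched as follows. This is a minimal illustration, not the patent's implementation: cosine similarity over per-frame expression feature vectors stands in for the unspecified similarity measure, centres are re-picked as cluster medoids, and `extract_keyframes`, its parameters, and the toy feature vectors are all assumptions.

```python
import math

def cosine_sim(a, b):
    """Similarity between two per-frame expression feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def extract_keyframes(frames, k, min_sim, max_iter=100):
    """Cluster frames (feature vectors) and return the indices of the
    converged cluster-centre frames as key frames (steps A1-A5)."""
    centers = list(range(min(k, len(frames))))  # A1: initial centres (first k frames here)
    for _ in range(max_iter):
        clusters = {c: [] for c in centers}
        leftovers = []  # frames below the minimum similarity to every centre
        for i, f in enumerate(frames):
            best_sim, best_c = max((cosine_sim(f, frames[c]), c) for c in centers)
            if best_sim >= min_sim:  # A3: cluster only at or above the minimum similarity
                clusters[best_c].append(i)
            else:
                leftovers.append(i)
        new_centers = []
        for members in clusters.values():
            if members:  # A4: re-pick each centre as the cluster medoid
                new_centers.append(max(
                    members,
                    key=lambda i: sum(cosine_sim(frames[i], frames[j]) for j in members)))
        if leftovers:  # dissimilar frames seed their own cluster
            new_centers.append(leftovers[0])
        if set(new_centers) == set(centers):  # converged: centre frames no longer change
            break
        centers = new_centers
    return sorted(centers)  # A5: the final centres are the key frames
```

With two groups of identical toy frames, the two surviving centres are one frame from each group.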
As an embodiment of the present application, as shown in Fig. 3, every video frame contains facial action units, so the above S102 specifically includes:
B1: Obtain, from the video frames in the video image, the intensity values of the facial action units in each video frame.
B2: Classify the video frame pictures according to the intensity values of the facial action units, and determine the key-frame video images according to the classification result.
Specifically, to characterize the applicant's facial expression objectively, a set of codes is used to describe the expression. Each code is called an action unit (Action Unit), a facial expression is represented by a series of action units, and an action-unit number mapping table is established in which each action unit is denoted by a predefined number. For example, a surprised expression includes an inner brow raise, an outer brow raise, an upper eyelid raise, and a jaw drop; according to the mapping table these movements correspond to action unit numbers 1, 2, 6, and 18 respectively, and this group of codes describes the surprised expression. Action unit recognition objectively describes facial movements and can also be used to analyse the emotional state corresponding to an expression. An action unit is considered established only when its intensity value is not less than a preset intensity threshold. Specifically, a facial expression comprises several action units, and those action units are deemed to correspond to the expression only when the intensity values of all the specified action units are not less than the preset intensity threshold. The intensity value of an action unit is reflected by the range of motion of the facial feature; for example, the intensity of the jaw-drop unit is judged from the difference between the jaw-opening amplitude and a preset amplitude threshold. Further, a single facial expression may include more than one action unit: in a surprised expression, the intensity of each action unit is determined from the difference between the amplitude of the corresponding facial movement (inner brow raise, outer brow raise, upper eyelid raise, jaw drop, and so on) and its preset amplitude threshold. The intensity of the group of action units is the sum of the intensities of its members, which is then used to determine the facial expression the group corresponds to.
In the embodiments of the present application, facial action units whose intensity differences fall within a preset difference range correspond to highly similar facial expressions. The video frame pictures are therefore classified by action-unit intensity, the key-frame video images are determined from the classification result, and redundant frames are discarded.
Further, all the video frames in the video image are clustered according to the intensity of the facial action units in each frame: a set number of frames is drawn at random as initial cluster centres (for 7 cluster centres, 7 frames are chosen), and key-frame pictures are filtered using preset action-unit intensity thresholds obtained beforehand from statistics. For example, a frame is considered a key-frame video image only if the intensity values of all its specified facial action units are not less than the preset intensity threshold, and the difference between the frame's action-unit intensity and the cluster centre's intensity is not less than a preset intensity difference.
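The action-unit bookkeeping described above can be illustrated with a small helper. The AU numbers follow the example given in the description (1, 2, 6, 18 for surprise); the helper name and the dictionary layout are assumptions, and the full mapping table is not reproduced here.

```python
# AU numbers for "surprise" as listed in the description (inner brow raise,
# outer brow raise, upper eyelid raise, jaw drop); illustrative only.
SURPRISE_AUS = (1, 2, 6, 18)

def expression_intensity(au_intensities, required_aus, per_au_threshold):
    """Sum the intensities of an expression's AU group, or return None when
    any specified AU falls below the preset intensity threshold (the group
    then does not count as this expression)."""
    values = [au_intensities.get(au, 0.0) for au in required_aus]
    if any(v < per_au_threshold for v in values):
        return None
    return sum(values)
```

For instance, with all four AUs above a 0.5 threshold the group intensity is the sum of the four values; raising the threshold above the weakest AU disqualifies the group.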
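The key-frame filter just described (all specified AUs at or above the preset intensity threshold, plus a sufficient intensity gap from the cluster centre) might look like this; the function and parameter names are illustrative, not from the patent:

```python
def is_keyframe(frame_aus, center_aus, specified_aus, intensity_threshold, min_center_diff):
    """A frame is kept as a key frame only if every specified AU reaches the
    preset intensity threshold AND its overall AU intensity differs from the
    cluster centre's by at least the preset gap (otherwise it is redundant)."""
    if any(frame_aus.get(au, 0.0) < intensity_threshold for au in specified_aus):
        return False
    diff = abs(sum(frame_aus.values()) - sum(center_aus.values()))
    return diff >= min_center_diff
```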
S103: Determine, according to the determined key-frame video images and the micro-expression fraud probability model, the fraud probability of the micro-expression corresponding to the key-frame video image.
In the embodiments of the present application, the micro-expression fraud probability model is used to obtain the fraud probability of the micro-expressions in the key-frame video images. The model is trained in advance, and machine learning may be used to train it.
As an embodiment of the present application, Fig. 4 shows the specific implementation flow of training the micro-expression fraud probability model in the risk prediction method provided by an embodiment of the present application, detailed as follows:
C1: Obtain a set number of labelled sample videos, the labels comprising fraud and non-fraud. Since each sample video carries a fraud or non-fraud label, every video frame in it carries the same label as the sample video.
C2: Extract sample key-frame images from each sample video; the label of a sample key-frame image is the same as that of the sample video it belongs to.
C3: Train an SVM classifier with the extracted sample key-frame images as training samples, and determine the trained SVM classifier as the micro-expression fraud probability model. An SVM (Support Vector Machine) classifier is a common discriminative method; in machine learning it is a supervised learning model commonly used for pattern recognition, classification, and regression analysis.
Specifically, the sample key-frame images of each sample video are extracted with the K-means clustering algorithm and the action-unit intensities in the sample key-frame images are determined. The sample key-frame images are then used as training samples to train the SVM classifier, where a data point is one sample key-frame image, the features are the intensity values of the facial action units, and the label is fraud or non-fraud. The optimal parameters of the SVM classifier are determined through repeated training, yielding the micro-expression fraud probability model.
In the embodiments of the present application, the SVM classifier is trained on the sample key-frame images to generate the micro-expression fraud probability model; the determined key-frame video images are then input into the model to determine the fraud probability of the micro-expressions they contain.
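As a rough sketch of this training step: the patent trains an SVM on per-keyframe AU-intensity vectors labelled fraud/non-fraud. To keep the sketch dependency-free, a simple perceptron-style linear classifier stands in for the SVM here; the feature and label layout matches the description, while the function names and training scheme are assumptions.

```python
def train_linear_classifier(samples, labels, epochs=100, lr=0.1):
    """Fit a linear decision boundary on per-keyframe AU-intensity vectors.
    labels: 1 = fraud, 0 = non-fraud.  Returns (weights, bias)."""
    w, b = [0.0] * len(samples[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            if err:  # perceptron update on misclassified samples
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
    return w, b

def decision_value(w, b, x):
    """Signed score of a keyframe's AU vector relative to the boundary,
    analogous to the distance-to-hyperplane used with the SVM."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b
```

On linearly separable toy data, fraud-labelled vectors end up with a positive score and non-fraud vectors with a non-positive one.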
Illustratively, the SVM classifier is trained to produce a hyperplane, and each facial-expression action-unit vector is a point in the feature space. The action-unit intensity values of the face in an image are input as a point, the distance from that point to the SVM hyperplane is computed, and the distance is fed into the sigmoid function to obtain a probability value, where the sigmoid function is S(x) = 1 / (1 + e^(-x)) and x is the distance from the point represented by the action units to the SVM-trained hyperplane. If the probability value is greater than 50%, the key-frame image is judged fraudulent; if it is not greater than 50%, the key-frame image is judged non-fraudulent.
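The probability step is direct to express: the signed distance from the key frame's action-unit point to the hyperplane is passed through the sigmoid S(x) = 1/(1 + e^(-x)) and compared with 50%. Function names are illustrative.

```python
import math

def fraud_probability(distance):
    """S(x) = 1 / (1 + e^(-x)), where x is the signed distance from the
    keyframe's action-unit point to the trained hyperplane."""
    return 1.0 / (1.0 + math.exp(-distance))

def keyframe_is_fraud(distance):
    """A probability strictly above 50% judges the key frame fraudulent."""
    return fraud_probability(distance) > 0.5
```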
In the embodiments of the present application, each video image input into the micro-expression fraud probability model yields one probability value. If the fraud probability of a key-frame video image is greater than a preset probability threshold, the micro-expression in that key-frame video image is judged a fraudulent expression; if it is not greater than the preset probability threshold, the micro-expression is judged non-fraudulent. Generally, the preset probability threshold is set to 50%.
The same applicant may have more than one video image from the interview, and more than one key-frame video image may be determined for the same video image. The multiple key-frame video images of a video image are input into the micro-expression fraud probability model in turn to obtain their probability values, the probability values of these key-frame pictures are averaged, and the average is determined as the fraud probability of that video image. Whether the video image is fraudulent is then judged by comparing this fraud probability with the preset probability threshold.
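The per-video aggregation described above (average the key-frame probabilities, then compare with the preset probability threshold) can be sketched as:

```python
def video_fraud_probability(keyframe_probs):
    """A video's fraud probability is the mean of its key frames' probabilities."""
    return sum(keyframe_probs) / len(keyframe_probs)

def video_is_fraud(keyframe_probs, threshold=0.5):
    """Compare the averaged probability against the preset probability threshold."""
    return video_fraud_probability(keyframe_probs) > threshold
```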
S104: Perform a risk prediction on the applicant's repayment ability according to the determined fraud probability of the micro-expression corresponding to the key-frame video image.
In the embodiments of the present application, the applicant's honesty-reliability can be determined from the fraud probability of the micro-expressions in the key-frame video images. Reliability is inversely related to fraud probability: the higher the fraud probability, the lower the reliability. Reliability is also inversely related to repayment risk: the lower the reliability, the higher the repayment risk.
As an embodiment of the present application, Fig. 5 shows the specific implementation flow of step S104 of the risk prediction method provided by an embodiment of the present application, detailed as follows:
D1: Compare the fraud probability of the micro-expression corresponding to the key-frame video image with a preset risk probability threshold.
D2: If the fraud probability of the micro-expression corresponding to the key-frame video image is not less than the preset risk probability threshold, predict that the applicant's repayment risk exceeds a preset risk range.
D3: If the fraud probability of the micro-expression corresponding to the key-frame video image is less than the preset risk probability threshold, predict that the applicant's repayment risk is within the preset risk range.
Specifically, the preset risk probability threshold serves as the critical point for predicting whether the applicant's repayment risk exceeds the preset risk range. Further, the difference between the fraud probability and the preset risk probability threshold is calculated and compared against preset risk difference values, and the grade of the applicant's repayment risk is judged from a pre-established difference-to-grade lookup table.
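Steps D1 to D3 together with the difference-to-grade lookup might be sketched as follows; the grade table here is hypothetical, since the patent builds its own table from historical review data and does not reproduce it.

```python
# Hypothetical difference-to-grade lookup table (minimum threshold-difference
# mapped to a risk grade); the patent's actual table is not given.
RISK_GRADES = ((0.3, "high"), (0.1, "medium"), (0.0, "low"))

def predict_repayment_risk(fraud_prob, risk_threshold=0.5):
    """D1-D3 plus the grade lookup: below the threshold the risk stays within
    the preset range; otherwise the threshold difference picks a risk grade."""
    if fraud_prob < risk_threshold:
        return "within preset risk range", None
    diff = fraud_prob - risk_threshold
    for min_diff, grade in RISK_GRADES:
        if diff >= min_diff:
            return "exceeds preset risk range", grade
    return "exceeds preset risk range", "low"  # unreachable with a 0.0 floor
```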
Optionally, as an embodiment of the present application, step S104 further includes:
D4: If the applicant's repayment risk is predicted to exceed the preset risk range, propose corresponding risk prevention and control measures.
Specifically, if the applicant's repayment risk is predicted to exceed the preset risk range, measures matching the applicant's repayment risk are retrieved from a measure library built from historical review information, providing suggestions for the reviewer.
In the embodiments of the present application, a video image of the applicant during the credit interview is acquired, and its video frames are clustered, either by the similarity of the facial expressions in the frames or by the intensity values of the facial action units in the frames, to determine key-frame video images. The fraud probability of the micro-expression corresponding to each key-frame video image is then determined from the key-frame video images and the micro-expression fraud probability model, and finally a risk prediction of the applicant's repayment ability is made from that fraud probability. By analysing the applicant's micro-expressions during the interview, this scheme evaluates the applicant's fraud probability, judges the applicant's honesty-reliability from the comparison of the fraud probability with the preset risk probability threshold, and predicts the repayment risk from that reliability, providing an objective auxiliary judgment for assessing the applicant's repayment ability and thereby improving both the accuracy and the efficiency of risk prediction.
It should be understood that the step numbers in the above embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
Embodiment 2
Corresponding to the risk prediction method described in the embodiments above, Fig. 6 shows a structural block diagram of the risk prediction apparatus provided by an embodiment of the present application; for ease of description, only the parts relevant to the embodiments of the present application are shown.
Referring to Fig. 6, the risk prediction apparatus comprises: a video image acquisition module 61, a key image determination module 62, a fraud probability determination module 63, and a risk prediction module 64, wherein:
the video image acquisition module 61 is configured to acquire a video image of an applicant during a credit interview;
the key image determination module 62 is configured to cluster the video frames in the video image to determine key-frame video images;
the fraud probability determination module 63 is configured to determine, according to the determined key-frame video images and a micro-expression fraud probability model, the fraud probability of the micro-expression corresponding to the key-frame video image;
the risk prediction module 64 is configured to perform a risk prediction on the applicant's repayment ability according to the determined fraud probability of the micro-expression corresponding to the key-frame video image.
Optionally, the key image determination module 62 comprises:
a centre determination submodule, configured to select a specified number of video frames from the video image as initial cluster centres;
a similarity calculation submodule, configured to calculate the similarity between the video frames in the video image and the initial cluster centres;
a clustering submodule, configured to cluster the video frames in the video image according to the calculated similarity and a preset minimum similarity;
the clustering submodule, further configured to reselect cluster centres from the clustered video frames and repeat the clustering until the cluster centres converge;
a first image determination submodule, configured to determine the finally determined cluster centres as the key-frame video images of the video image.
Optionally, the key image determination module 62 comprises:
an intensity value determination submodule, configured to obtain, from the video frames in the video image, the intensity values of the facial action units in each video frame;
a second image determination submodule, configured to classify the video frame pictures according to the intensity values of the facial action units and determine the key-frame video images according to the classification result.
Optionally, the risk prediction module 64 comprises:
a probability comparison submodule, configured to compare the fraud probability of the micro-expression corresponding to the key-frame video image with a preset risk probability threshold;
a first prediction submodule, configured to predict that the applicant's repayment risk exceeds a preset risk range if the fraud probability of the micro-expression corresponding to the key-frame video image is not less than the preset risk probability threshold;
a second prediction submodule, configured to predict that the applicant's repayment risk is within the preset risk range if the fraud probability of the micro-expression corresponding to the key-frame video image is less than the preset risk probability threshold.
Optionally, as shown in Fig. 7, the risk prediction apparatus further comprises:
a sample video acquisition module 71, configured to obtain a set number of labelled sample videos, the labels comprising fraud and non-fraud;
a key image extraction module 72, configured to extract sample key-frame images from each sample video, the label of a sample key-frame image being the same as that of the sample video it belongs to;
a classification training module 73, configured to train an SVM classifier with the extracted sample key-frame images as training samples and determine the trained SVM classifier as the micro-expression fraud probability model.
In the embodiments of the present application, a video image of the applicant during the credit interview is acquired, and the video frames in the video image are clustered to determine key-frame video images. The fraud probability of the micro-expression corresponding to each key-frame video image is then determined from the key-frame video images and a micro-expression fraud probability model, and finally a risk prediction of the applicant's repayment ability is made from that fraud probability. By analysing the applicant's micro-expressions during the interview, this scheme evaluates the applicant's fraud probability and predicts the repayment risk accordingly, providing an objective auxiliary judgment for assessing the applicant's repayment ability and thereby improving both the accuracy and the efficiency of risk prediction.
Embodiment 3
Fig. 8 is a schematic diagram of the server provided by an embodiment of the present application. As shown in Fig. 8, the server 8 of this embodiment comprises: a processor 80, a memory 81, and computer-readable instructions 82, such as a risk prediction program, stored in the memory 81 and executable on the processor 80. When executing the computer-readable instructions 82, the processor 80 implements the steps in each of the risk prediction method embodiments above, for example steps 101 to 104 shown in Fig. 1; alternatively, it implements the functions of the modules/units in each of the apparatus embodiments above, for example the functions of modules 61 to 64 shown in Fig. 6.
Exemplarily, the computer-readable instructions 82 may be divided into one or more modules/units, which are stored in the memory 81 and executed by the processor 80 to carry out the present application. The one or more modules/units may be a series of computer-readable instruction segments capable of accomplishing specific functions, the segments being used to describe the execution of the computer-readable instructions 82 in the server 8.
The server 8 may be a computing device such as a desktop computer, a notebook, a palmtop computer, or a cloud server. The server may include, but is not limited to, the processor 80 and the memory 81. Those skilled in the art will understand that Fig. 8 is merely an example of the server 8 and does not constitute a limitation of it: the server may include more or fewer components than shown, combine certain components, or use different components; for example, it may also include input/output devices, network access devices, buses, and the like.
The processor 80 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 81 may be an internal storage unit of the server 8, such as its hard disk or main memory. The memory 81 may also be an external storage device of the server 8, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the server 8. Further, the memory 81 may include both an internal storage unit and an external storage device of the server 8. The memory 81 is used to store the computer-readable instructions and the other programs and data required by the server, and may also be used to temporarily store data that has been or will be output.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, may exist physically separately, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
The computer-readable medium may include any entity or apparatus capable of carrying the computer-readable instructions, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so on. It should be noted that the content of the computer-readable medium may be expanded or reduced as appropriate according to the requirements of legislation and patent practice in the relevant jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunications signals.
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of their technical features replaced with equivalents; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all fall within the scope of protection of the present application.
Claims (20)
- A risk prediction method, characterized by comprising: acquiring a video image of an applicant during a credit interview; clustering the video frames in the video image to determine key-frame video images; determining, according to the determined key-frame video images and a micro-expression fraud probability model, the fraud probability of the micro-expression corresponding to the key-frame video image; and performing a risk prediction on the applicant's repayment ability according to the determined fraud probability of the micro-expression corresponding to the key-frame video image.
- The method according to claim 1, characterized in that the step of clustering the video frames in the video image to determine key-frame video images comprises: selecting a specified number of video frames from the video image as initial cluster centres; calculating the similarity between the video frames in the video image and the initial cluster centres; clustering the video frames in the video image according to the calculated similarity and a preset minimum similarity; reselecting cluster centres from the clustered video frames and repeating the clustering until the cluster centres converge; and determining the finally determined cluster centres as the key-frame video images of the video image.
- The method according to claim 1, characterized in that the step of clustering the video frames in the video image to determine key-frame video images comprises: obtaining, from the video frames in the video image, the intensity values of the facial action units in each video frame; and classifying the video frame pictures according to the intensity values of the facial action units and determining the key-frame video images according to the classification result.
- The method according to claim 1, characterized in that, before the step of determining, according to the determined key-frame video images and the micro-expression fraud probability model, the fraud probability of the micro-expression corresponding to the key-frame video image, the method comprises: obtaining a set number of labelled sample videos, the labels comprising fraud and non-fraud; extracting sample key-frame images from each sample video, the label of a sample key-frame image being the same as that of the sample video it belongs to; and training an SVM classifier with the extracted sample key-frame images as training samples and determining the trained SVM classifier as the micro-expression fraud probability model.
- The method according to any one of claims 1 to 4, characterized in that the step of performing risk prediction according to the determined fraud probability of the micro-expression corresponding to the key-frame video image comprises: comparing the fraud probability of the micro-expression corresponding to the key-frame video image with a preset risk probability threshold; if the fraud probability of the micro-expression corresponding to the key-frame video image is not less than the preset risk probability threshold, predicting that the applicant's repayment risk exceeds a preset risk range; and if the fraud probability of the micro-expression corresponding to the key-frame video image is less than the preset risk probability threshold, predicting that the applicant's repayment risk is within the preset risk range.
- A risk prediction apparatus, characterized by comprising: a video image acquisition module, configured to acquire a video image of an applicant during a credit interview; a key image determination module, configured to cluster the video frames in the video image to determine key-frame video images; a fraud probability determination module, configured to determine, according to the determined key-frame video images and a micro-expression fraud probability model, the fraud probability of the micro-expression corresponding to the key-frame video image; and a risk prediction module, configured to perform a risk prediction on the applicant's repayment ability according to the determined fraud probability of the micro-expression corresponding to the key-frame video image.
- The risk prediction apparatus according to claim 6, characterized in that the key image determination module comprises: a centre determination submodule, configured to select a specified number of video frames from the video image as initial cluster centres; a similarity calculation submodule, configured to calculate the similarity between the video frames in the video image and the initial cluster centres; a clustering submodule, configured to cluster the video frames in the video image according to the calculated similarity and a preset minimum similarity, and further configured to reselect cluster centres from the clustered video frames and repeat the clustering until the cluster centres converge; and a first image determination submodule, configured to determine the finally determined cluster centres as the key-frame video images of the video image.
- The risk prediction apparatus according to claim 6, characterized in that the key image determination module comprises: an intensity value determination submodule, configured to obtain, from the video frames in the video image, the intensity values of the facial action units in each video frame; and a second image determination submodule, configured to classify the video frame pictures according to the intensity values of the facial action units and determine the key-frame video images according to the classification result.
- The risk prediction apparatus according to claim 6, characterized in that the risk prediction apparatus further comprises: a sample video acquisition module, configured to obtain a set number of labelled sample videos, the labels comprising fraud and non-fraud; a key image extraction module, configured to extract sample key-frame images from each sample video, the label of a sample key-frame image being the same as that of the sample video it belongs to; and a classification training module, configured to train an SVM classifier with the extracted sample key-frame images as training samples and determine the trained SVM classifier as the micro-expression fraud probability model.
- The risk prediction apparatus according to any one of claims 6 to 9, characterized in that the risk prediction module comprises: a probability comparison submodule, configured to compare the fraud probability of the micro-expression corresponding to the key-frame video image with a preset risk probability threshold; a first prediction submodule, configured to predict that the applicant's repayment risk exceeds a preset risk range if the fraud probability of the micro-expression corresponding to the key-frame video image is not less than the preset risk probability threshold; and a second prediction submodule, configured to predict that the applicant's repayment risk is within the preset risk range if the fraud probability of the micro-expression corresponding to the key-frame video image is less than the preset risk probability threshold.
- A computer-readable storage medium storing computer-readable instructions, characterized in that, when executed by a processor, the computer-readable instructions implement the following steps: acquiring a video image of an applicant during a credit interview; clustering the video frames in the video image to determine key-frame video images; determining, according to the determined key-frame video images and a micro-expression fraud probability model, the fraud probability of the micro-expression corresponding to the key-frame video image; and performing a risk prediction on the applicant's repayment ability according to the determined fraud probability of the micro-expression corresponding to the key-frame video image.
- The computer-readable storage medium according to claim 11, characterized in that the step of clustering the video frames in the video image to determine key-frame video images comprises: selecting a specified number of video frames from the video image as initial cluster centres; calculating the similarity between the video frames in the video image and the initial cluster centres; clustering the video frames in the video image according to the calculated similarity and a preset minimum similarity; reselecting cluster centres from the clustered video frames and repeating the clustering until the cluster centres converge; and determining the finally determined cluster centres as the key-frame video images of the video image.
- The computer-readable storage medium according to claim 11, characterized in that the step of clustering the video frames in the video image to determine key-frame video images comprises: obtaining, from the video frames in the video image, the intensity values of the facial action units in each video frame; and classifying the video frame pictures according to the intensity values of the facial action units and determining the key-frame video images according to the classification result.
- The computer-readable storage medium according to claim 11, characterized in that, before the step of determining, according to the determined key-frame video images and the micro-expression fraud probability model, the fraud probability of the micro-expression corresponding to the key-frame video image, the steps comprise: obtaining a set number of labelled sample videos, the labels comprising fraud and non-fraud; extracting sample key-frame images from each sample video, the label of a sample key-frame image being the same as that of the sample video it belongs to; and training an SVM classifier with the extracted sample key-frame images as training samples and determining the trained SVM classifier as the micro-expression fraud probability model.
- The computer-readable storage medium according to any one of claims 11 to 14, characterized in that the step of performing risk prediction according to the determined fraud probability of the micro-expression corresponding to the key-frame video image comprises: comparing the fraud probability of the micro-expression corresponding to the key-frame video image with a preset risk probability threshold; if the fraud probability of the micro-expression corresponding to the key-frame video image is not less than the preset risk probability threshold, predicting that the applicant's repayment risk exceeds a preset risk range; and if the fraud probability of the micro-expression corresponding to the key-frame video image is less than the preset risk probability threshold, predicting that the applicant's repayment risk is within the preset risk range.
- A server comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, characterized in that the processor implements the following steps when executing the computer-readable instructions: acquiring a video image of an applicant during a credit interview; clustering the video frames in the video image to determine key-frame video images; determining, according to the determined key-frame video images and a micro-expression fraud probability model, the fraud probability of the micro-expression corresponding to the key-frame video image; and performing a risk prediction on the applicant's repayment ability according to the determined fraud probability of the micro-expression corresponding to the key-frame video image.
- The server according to claim 16, characterized in that the step of clustering the video frames in the video image to determine key-frame video images comprises: selecting a specified number of video frames from the video image as initial cluster centres; calculating the similarity between the video frames in the video image and the initial cluster centres; clustering the video frames in the video image according to the calculated similarity and a preset minimum similarity; reselecting cluster centres from the clustered video frames and repeating the clustering until the cluster centres converge; and determining the finally determined cluster centres as the key-frame video images of the video image.
- The server according to claim 16, characterized in that the step of clustering the video frames in the video image to determine key-frame video images comprises: obtaining, from the video frames in the video image, the intensity values of the facial action units in each video frame; and classifying the video frame pictures according to the intensity values of the facial action units and determining the key-frame video images according to the classification result.
- The server according to claim 16, characterized in that, before the step of determining, according to the determined key-frame video images and the micro-expression fraud probability model, the fraud probability of the micro-expression corresponding to the key-frame video image, the steps comprise: obtaining a set number of labelled sample videos, the labels comprising fraud and non-fraud; extracting sample key-frame images from each sample video, the label of a sample key-frame image being the same as that of the sample video it belongs to; and training an SVM classifier with the extracted sample key-frame images as training samples and determining the trained SVM classifier as the micro-expression fraud probability model.
- The server according to any one of claims 16 to 19, characterized in that the step of performing risk prediction according to the determined fraud probability of the micro-expression corresponding to the key-frame video image comprises: comparing the fraud probability of the micro-expression corresponding to the key-frame video image with a preset risk probability threshold; if the fraud probability of the micro-expression corresponding to the key-frame video image is not less than the preset risk probability threshold, predicting that the applicant's repayment risk exceeds a preset risk range; and if the fraud probability of the micro-expression corresponding to the key-frame video image is less than the preset risk probability threshold, predicting that the applicant's repayment risk is within the preset risk range.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810496195.1 | 2018-05-22 | ||
CN201810496195.1A CN108734570A (zh) | 2018-05-22 | 2018-11-02 | Risk prediction method, storage medium, and server
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019223139A1 true WO2019223139A1 (zh) | 2019-11-28 |
Family
ID=63937780
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/101579 WO2019223139A1 (zh) | 2018-05-22 | 2018-08-21 | Risk prediction method, apparatus, storage medium, and server
Country Status (3)
Country | Link |
---|---|
CN (1) | CN108734570A (zh) |
TW (1) | TWI731297B (zh) |
WO (1) | WO2019223139A1 (zh) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111325185A (zh) * | 2020-03-20 | 2020-06-23 | 上海看看智能科技有限公司 | Face anti-fraud method and system |
CN111582757A (zh) * | 2020-05-20 | 2020-08-25 | 深圳前海微众银行股份有限公司 | Fraud risk analysis method, apparatus, device, and computer-readable storage medium |
CN112215700A (zh) * | 2020-10-13 | 2021-01-12 | 中国银行股份有限公司 | Review method and apparatus for credit interviews |
CN112348318A (zh) * | 2020-10-19 | 2021-02-09 | 深圳前海微众银行股份有限公司 | Training and application method and apparatus for a supply chain risk prediction model |
CN112381036A (zh) * | 2020-11-26 | 2021-02-19 | 厦门大学 | Micro-expression and macro-expression segment recognition method for criminal investigation |
CN113657440A (zh) * | 2021-07-08 | 2021-11-16 | 同盾科技有限公司 | Reject inference method and apparatus based on user feature clustering |
CN117132391A (zh) * | 2023-10-16 | 2023-11-28 | 杭银消费金融股份有限公司 | Credit approval method and system based on human-computer interaction |
Families Citing this family (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109711297A (zh) * | 2018-12-14 | 2019-05-03 | 深圳壹账通智能科技有限公司 | Risk identification method and apparatus based on facial images, computer device, and storage medium |
CN109584050A (zh) * | 2018-12-14 | 2019-04-05 | 深圳壹账通智能科技有限公司 | User risk level analysis method and apparatus based on micro-expression recognition |
CN109829359A (zh) * | 2018-12-15 | 2019-05-31 | 深圳壹账通智能科技有限公司 | Monitoring method and apparatus for unmanned stores, computer device, and storage medium |
CN109766772A (zh) * | 2018-12-18 | 2019-05-17 | 深圳壹账通智能科技有限公司 | Risk control method and apparatus, computer device, and storage medium |
CN109816518A (zh) * | 2019-01-04 | 2019-05-28 | 深圳壹账通智能科技有限公司 | Interview-verification result acquisition method and apparatus, computer device, and readable storage medium |
CN109729383B (zh) * | 2019-01-04 | 2021-11-02 | 深圳壹账通智能科技有限公司 | Dual-recording video quality detection method and apparatus, computer device, and storage medium |
CN109800703A (zh) * | 2019-01-17 | 2019-05-24 | 深圳壹账通智能科技有限公司 | Micro-expression-based risk review method and apparatus, computer device, and storage medium |
CN109858411A (zh) * | 2019-01-18 | 2019-06-07 | 深圳壹账通智能科技有限公司 | Artificial-intelligence-based case adjudication method, apparatus, and computer device |
CN109919001A (zh) * | 2019-01-23 | 2019-06-21 | 深圳壹账通智能科技有限公司 | Emotion-recognition-based customer service monitoring method, apparatus, device, and storage medium |
CN110222554A (zh) * | 2019-04-16 | 2019-09-10 | 深圳壹账通智能科技有限公司 | Fraud identification method and apparatus, electronic device, and storage medium |
CN110197295A (zh) * | 2019-04-23 | 2019-09-03 | 深圳壹账通智能科技有限公司 | Method and related apparatus for predicting risk in financial product purchases |
CN111860554B (zh) * | 2019-04-28 | 2023-06-30 | 杭州海康威视数字技术股份有限公司 | Risk monitoring method and apparatus, storage medium, and electronic device |
CN110223158B (zh) * | 2019-05-21 | 2023-08-18 | 平安银行股份有限公司 | Risk user identification method and apparatus, storage medium, and server |
CN110503563B (zh) * | 2019-07-05 | 2023-07-21 | 中国平安人寿保险股份有限公司 | Risk control method and system |
CN111080874B (zh) * | 2019-12-31 | 2022-06-03 | 中国银行股份有限公司 | Control method and apparatus for vault security doors based on facial images |
CN111768286B (zh) * | 2020-05-14 | 2024-02-20 | 北京旷视科技有限公司 | Risk prediction method, apparatus, device, and storage medium |
CN111667359A (zh) * | 2020-06-19 | 2020-09-15 | 上海印闪网络科技有限公司 | Real-time-video-based information review method |
CN112001785A (zh) * | 2020-07-21 | 2020-11-27 | 小花网络科技(深圳)有限公司 | Image-recognition-based online lending fraud identification method and system |
CN112541411A (zh) * | 2020-11-30 | 2021-03-23 | 中国工商银行股份有限公司 | Online video anti-fraud identification method and apparatus |
CN113283978B (zh) * | 2021-05-06 | 2024-05-10 | 北京思图场景数据科技服务有限公司 | Financial risk assessment method based on biological foundations, behavioral features, and business features |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140105467A1 (en) * | 2005-09-28 | 2014-04-17 | Facedouble, Inc. | Image Classification And Information Retrieval Over Wireless Digital Networks And The Internet |
CN105893920A (zh) * | 2015-01-26 | 2016-08-24 | 阿里巴巴集团控股有限公司 | Face liveness detection method and apparatus |
CN107704834A (zh) * | 2017-10-13 | 2018-02-16 | 上海壹账通金融科技有限公司 | Micro-expression interview assistance method, apparatus, and storage medium |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103258204B (zh) * | 2012-02-21 | 2016-12-14 | 中国科学院心理研究所 | Automatic micro-expression recognition method based on Gabor and EOH features |
CN103065122A (zh) * | 2012-12-21 | 2013-04-24 | 西北工业大学 | Facial expression recognition method based on combined facial action unit features |
CN107292218A (zh) * | 2016-04-01 | 2017-10-24 | 中兴通讯股份有限公司 | Expression recognition method and apparatus |
CN105913046A (zh) * | 2016-05-06 | 2016-08-31 | 姜振宇 | Micro-expression recognition apparatus and method |
CN106529453A (zh) * | 2016-10-28 | 2017-03-22 | 深圳市唯特视科技有限公司 | Expression-based lie detection method combining enhanced patches and multi-label learning |
- 2018-05-22: CN application CN201810496195.1A filed; publication CN108734570A (zh), not active (withdrawn)
- 2018-08-21: WO application PCT/CN2018/101579 filed; publication WO2019223139A1 (zh), active (application filing)
- 2019-01-25: TW application TW108102852A filed; publication TWI731297B (zh), active
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111325185A (zh) * | 2020-03-20 | 2020-06-23 | 上海看看智能科技有限公司 | Face anti-fraud method and system |
CN111325185B (zh) * | 2020-03-20 | 2023-06-23 | 上海看看智能科技有限公司 | Face anti-fraud method and system |
CN111582757A (zh) * | 2020-05-20 | 2020-08-25 | 深圳前海微众银行股份有限公司 | Fraud risk analysis method, apparatus, device, and computer-readable storage medium |
CN111582757B (zh) * | 2020-05-20 | 2024-04-30 | 深圳前海微众银行股份有限公司 | Fraud risk analysis method, apparatus, device, and computer-readable storage medium |
CN112215700A (zh) * | 2020-10-13 | 2021-01-12 | 中国银行股份有限公司 | Review method and apparatus for credit interviews |
CN112348318A (zh) * | 2020-10-19 | 2021-02-09 | 深圳前海微众银行股份有限公司 | Training and application method and apparatus for a supply chain risk prediction model |
CN112348318B (zh) * | 2020-10-19 | 2024-04-23 | 深圳前海微众银行股份有限公司 | Training and application method and apparatus for a supply chain risk prediction model |
CN112381036A (zh) * | 2020-11-26 | 2021-02-19 | 厦门大学 | Micro-expression and macro-expression segment recognition method for criminal investigation |
CN113657440A (zh) * | 2021-07-08 | 2021-11-16 | 同盾科技有限公司 | Reject inference method and apparatus based on user feature clustering |
CN117132391A (zh) * | 2023-10-16 | 2023-11-28 | 杭银消费金融股份有限公司 | Credit approval method and system based on human-computer interaction |
Also Published As
Publication number | Publication date |
---|---|
TWI731297B (zh) | 2021-06-21 |
CN108734570A (zh) | 2018-11-02 |
TW202004637A (zh) | 2020-01-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
TWI731297B (zh) | Risk prediction method, storage medium, and server | |
CN110147726B (zh) | Service quality inspection method and apparatus, storage medium, and electronic apparatus | |
TWI773180B (zh) | Computing system and method for calculating the authenticity of a human user | |
CN108717663B (zh) | Micro-expression-based face-to-face signing fraud judgment method, apparatus, device, and medium | |
WO2020024395A1 (zh) | Fatigue driving detection method and apparatus, computer device, and storage medium | |
CN111046959A (zh) | Model training method, apparatus, device, and storage medium | |
JP2022141931A (ja) | Training method and apparatus for a liveness detection model, liveness detection method and apparatus, electronic device, storage medium, and computer program | |
CN109033953A (zh) | Training method, device, and storage medium for a multi-task learning deep network | |
CN113706502B (zh) | Face image quality assessment method and apparatus | |
US20210174104A1 (en) | Finger vein comparison method, computer equipment, and storage medium | |
CN116863522A (zh) | Acne grading method, apparatus, device, and medium | |
CN112884326A (zh) | Multimodal video interview evaluation method, apparatus, and storage medium | |
RU2768797C1 (ru) | Method and system for detecting synthetically altered face images in video | |
CN112613341A (zh) | Training method and apparatus, fingerprint recognition method and apparatus, and electronic device | |
CN116580442A (zh) | Micro-expression recognition method, apparatus, device, and medium based on separable convolution | |
WO2021217907A1 (zh) | Interview fraud auxiliary identification method and apparatus, electronic device, and storage medium | |
CN115661885A (zh) | Student psychological state analysis method and apparatus based on expression recognition | |
CN113269190B (zh) | Artificial-intelligence-based data classification method, apparatus, computer device, and medium | |
CN112580538B (zh) | Customer service staff scheduling method, apparatus, device, and storage medium | |
CN111898473B (zh) | Real-time driver state monitoring method based on deep learning | |
Schaefer et al. | Scleroderma capillary pattern identification using texture descriptors and ensemble classification | |
CN114049676A (zh) | Fatigue state detection method, apparatus, device, and storage medium | |
CN111275035B (zh) | Method and system for identifying background information | |
TWI844284B (zh) | Training method for cross-domain classifier and electronic device | |
CN114881994B (zh) | Product defect detection method, apparatus, and storage medium | |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18919570; Country of ref document: EP; Kind code of ref document: A1
 | NENP | Non-entry into the national phase | Ref country code: DE
 | 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 20/04/2021)
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 18919570; Country of ref document: EP; Kind code of ref document: A1