WO2019085331A1 - Fraud possibility analysis method, device and storage medium - Google Patents

Fraud possibility analysis method, device and storage medium (欺诈可能性分析方法、装置及存储介质)

Info

Publication number
WO2019085331A1
WO2019085331A1 PCT/CN2018/076122 CN2018076122W WO2019085331A1 WO 2019085331 A1 WO2019085331 A1 WO 2019085331A1 CN 2018076122 W CN2018076122 W CN 2018076122W WO 2019085331 A1 WO2019085331 A1 WO 2019085331A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
sample
fraud
feature
analyzed
Prior art date
Application number
PCT/CN2018/076122
Other languages
English (en)
French (fr)
Inventor
陈林
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Publication of WO2019085331A1 publication Critical patent/WO2019085331A1/zh

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 - Facial expression recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/06 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/70 - Multimodal biometrics, e.g. combining information from different biometric modalities

Definitions

  • the present application relates to the field of information processing technologies, and in particular, to a fraud possibility analysis method, apparatus, and storage medium.
  • the present application provides a fraud possibility analysis method, the method comprising:
  • Sample preparation step: collecting facial videos of persons of a predetermined duration as samples, and assigning a fraud label to each sample;
  • Sample feature extraction step: extracting the image features and audio features of each sample, and combining them to obtain the video features of each sample;
  • Network construction step: setting the number of layers of the neural network and the number of neurons in each layer according to the sequence length of each sample and the dimension of the video features;
  • Network training step: defining a Softmax loss function, training the neural network with the fraud labels and video features of the samples as sample data, outputting the fraud probability and no-fraud probability of each sample, updating the training parameters of the neural network at each training iteration, and taking the training parameters that minimize the Softmax loss function as the final parameters to obtain a fraud possibility analysis model; and
  • Model application step: capturing a facial video of a predetermined duration of the object to be analyzed, and analyzing that facial video with the fraud possibility analysis model to obtain an analysis result of the fraud possibility of the object to be analyzed.
  • The present application also provides a computing device comprising a memory and a processor, the memory including a fraud possibility analysis program.
  • The computing device is connected, directly or indirectly, to a camera device, and the camera device transmits the captured facial video of a person in conversation to the computing device.
  • When the processor of the computing device executes the fraud possibility analysis program in the memory, the following steps are implemented:
  • Sample preparation step: collecting facial videos of persons of a predetermined duration as samples, and assigning a fraud label to each sample;
  • Sample feature extraction step: extracting the image features and audio features of each sample, and combining them to obtain the video features of each sample;
  • Network construction step: setting the number of layers of the neural network and the number of neurons in each layer according to the sequence length of each sample and the dimension of the video features;
  • Network training step: defining a Softmax loss function, training the neural network with the fraud labels and video features of the samples as sample data, outputting the fraud probability and no-fraud probability of each sample, updating the training parameters of the neural network at each training iteration, and taking the training parameters that minimize the Softmax loss function as the final parameters to obtain a fraud possibility analysis model; and
  • Model application step: capturing a facial video of a predetermined duration of the object to be analyzed, and analyzing that facial video with the fraud possibility analysis model to obtain an analysis result of the fraud possibility of the object to be analyzed.
  • In addition, the present application further provides a computer-readable storage medium including a fraud possibility analysis program which, when executed by a processor, implements the following steps:
  • Sample preparation step: collecting facial videos of persons of a predetermined duration as samples, and assigning a fraud label to each sample;
  • Sample feature extraction step: extracting the image features and audio features of each sample, and combining them to obtain the video features of each sample;
  • Network construction step: setting the number of layers of the neural network and the number of neurons in each layer according to the sequence length of each sample and the dimension of the video features;
  • Network training step: defining a Softmax loss function, training the neural network with the fraud labels and video features of the samples as sample data, outputting the fraud probability and no-fraud probability of each sample, updating the training parameters of the neural network at each training iteration, and taking the training parameters that minimize the Softmax loss function as the final parameters to obtain a fraud possibility analysis model; and
  • Model application step: capturing a facial video of a predetermined duration of the object to be analyzed, and analyzing that facial video with the fraud possibility analysis model to obtain an analysis result of the fraud possibility of the object to be analyzed.
  • The fraud possibility analysis method, device and storage medium train a neural network on the facial videos of a large number of persons, update the training parameters of the neural network according to the Softmax loss function, and take the training parameters after the last update as the final parameters to obtain a fraud possibility analysis model. Then, a facial video of a predetermined duration of the object to be analyzed is captured, its audio features and image features are extracted and combined into the video feature of the video, and that video feature is input into the trained fraud possibility analysis model to obtain the analysis result of the fraud possibility of the object to be analyzed.
  • With this application, whether a person is suspected of fraud can be judged objectively and effectively, which also reduces costs and saves time.
  • FIG. 1 is an application environment diagram of a first preferred embodiment of a fraud possibility analysis method according to the present application.
  • FIG. 2 is an application environment diagram of a second preferred embodiment of the fraud possibility analysis method of the present application.
  • FIG. 3 is a program block diagram of the fraud possibility analysis program in FIGS. 1 and 2.
  • FIG. 4 is a flow chart of a preferred embodiment of a fraud possibility analysis method of the present application.
  • Referring to FIG. 1, an application environment diagram of the first preferred embodiment of the fraud possibility analysis method of the present application is shown.
  • The camera device 3 is connected to the computing device 1 via the network 2; the camera device 3 captures the facial video of a person in conversation and transmits it to the computing device 1 via the network 2, and the computing device 1 analyzes the video with the fraud possibility analysis program 10 provided by the present application, outputting the person's fraud probability and no-fraud probability for reference.
  • the computing device 1 may be a terminal device having a storage and computing function, such as a server, a smart phone, a tablet computer, a portable computer, a desktop computer, or the like.
  • the computing device 1 includes a memory 11, a processor 12, a network interface 13, and a communication bus 14.
  • The camera device 3 is installed in a specific place, such as an office or a monitored area, to capture the facial video of a person in conversation, and then transmits the captured video to the memory 11 through the network 2.
  • the network interface 13 may include a standard wired interface, a wireless interface (such as a WI-FI interface).
  • Communication bus 14 is used to implement connection communication between these components.
  • the memory 11 includes at least one type of readable storage medium.
  • the at least one type of readable storage medium may be a non-volatile storage medium such as a flash memory, a hard disk, a multimedia card, a card type memory, or the like.
  • the readable storage medium may be an internal storage unit of the computing device 1, such as a hard disk of the computing device 1.
  • The readable storage medium may also be an external memory 11 of the computing device 1, such as a plug-in hard disk provided on the computing device 1, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash Card, or the like.
  • The memory 11 stores the program code of the fraud possibility analysis program 10, the conversation videos captured by the camera device 3, the data used when the processor 12 executes the program code of the fraud possibility analysis program 10, and the data finally output, and so on.
  • In some embodiments, the processor 12 may be a Central Processing Unit (CPU), a microprocessor or another data processing chip.
  • FIG. 1 shows only the computing device 1 with components 11-14, but it should be understood that not all of the illustrated components need to be implemented, and more or fewer components may be implemented instead.
  • Optionally, the computing device 1 may further include a user interface.
  • The user interface may include an input unit such as a keyboard, a voice input device such as a microphone or another device with voice recognition capability, and a voice output device such as a speaker or headphones.
  • the user interface may also include a standard wired interface and a wireless interface.
  • the computing device 1 may also include a display.
  • In some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch display, or the like.
  • the display is used to display information processed by the computing device 1 and a visualized user interface.
  • the computing device 1 further comprises a touch sensor.
  • the area provided by the touch sensor for the user to perform a touch operation is referred to as a touch area.
  • the touch sensor described herein may be a resistive touch sensor, a capacitive touch sensor, or the like.
  • the touch sensor includes not only a contact type touch sensor but also a proximity type touch sensor or the like.
  • the touch sensor may be a single sensor or a plurality of sensors arranged, for example, in an array.
  • A user, such as a psychological counselor, can start the fraud possibility analysis program 10 by touch.
  • The computing device 1 may also include radio frequency (RF) circuits, sensors, audio circuits and the like, which are not described in detail here.
  • Referring to FIG. 2, it is an application environment diagram of the second preferred embodiment of the fraud possibility analysis method of the present application.
  • The object to be analyzed carries out the fraud possibility analysis process through the terminal 3: the camera device 30 of the terminal 3 captures the facial video of the object to be analyzed and transmits it to the computing device 1 via the network 2, and the processor 12 of the computing device 1 executes the program code of the fraud possibility analysis program 10 stored in the memory 11, analyzes the audio part and the video frames of the video, and outputs the fraud probability and no-fraud probability of the object to be analyzed, for reference by the object to be analyzed or a reviewer.
  • The terminal 3 can be a terminal device with storage and computing capabilities, such as a smartphone, tablet computer, portable computer or desktop computer.
  • When executed by the processor 12, the fraud possibility analysis program 10 of FIGS. 1 and 2 implements the following steps:
  • Sample preparation step: collecting facial videos of persons of a predetermined duration as samples, and assigning a fraud label to each sample;
  • Sample feature extraction step: extracting the image features and audio features of each sample, and combining them to obtain the video features of each sample;
  • Network construction step: setting the number of layers of the neural network and the number of neurons in each layer according to the sequence length of each sample and the dimension of the video features;
  • Network training step: defining a Softmax loss function, training the neural network with the fraud labels and video features of the samples as sample data, outputting the fraud probability and no-fraud probability of each sample, updating the training parameters of the neural network at each training iteration, and taking the training parameters that minimize the Softmax loss function as the final parameters to obtain a fraud possibility analysis model; and
  • Model application step: capturing a facial video of a predetermined duration of the object to be analyzed, and analyzing that facial video with the fraud possibility analysis model to obtain an analysis result of the fraud possibility of the object to be analyzed.
  • Referring to FIG. 3, it is a program module diagram of the fraud possibility analysis program 10 in FIGS. 1 and 2.
  • the fraud possibility analysis program 10 is divided into a plurality of modules, which are stored in the memory 11 and executed by the processor 12 to complete the present application.
  • a module as referred to in this application refers to a series of computer program instructions that are capable of performing a particular function.
  • the fraud possibility analysis program 10 can be divided into: an acquisition module 110, an extraction module 120, a training module 130, and a prediction module 140.
  • the obtaining module 110 is configured to acquire a facial video of a predetermined duration when the character is in a conversation.
  • The video may be acquired by the camera device 3 of FIG. 1 or the camera device 30 of FIG. 2, or it may be facial videos with obvious fraudulent behavior and fraud-free facial videos selected from online sources or a video library.
  • A fraud label is assigned to each sample video used for neural network training.
  • The fraud label indicates whether the person in the sample video is suspected of fraud; for example, 1 indicates suspected fraud and 0 indicates no suspicion of fraud.
  • The extraction module 120 is configured to extract the audio features and image features of the videos and combine them to obtain the video feature of each video. The videos acquired by the acquisition module 110 are decoded and pre-processed to obtain the audio part and video frames of each video; feature extraction is performed on the audio part and the video frames respectively to obtain the audio features and image features of each video; and the audio features and image features are combined to obtain the video feature of each video.
  • When the extraction module 120 extracts the image features of a video, the HOG features, LBP features and the like of video frames that have been normalized, de-noised and otherwise pre-processed may be used as image features, or the feature vectors of the video frames may be extracted directly with a convolutional neural network.
  • When the extraction module 120 extracts the audio features, the amplitude values of the audio portion of the video may be used as audio features. For example, assuming that the predetermined duration of the video is 3 minutes and the audio sampling rate is 8000 Hz, 8000*60*3 amplitude values are extracted from the audio portion of the 3-minute video as its audio features.
  • When the extraction module 120 combines the image features and audio features, the dimension of the combined video feature is the sum of the image feature dimension of each frame and the dimension of the corresponding audio features.
  • Following the example above, if the audio sampling rate of the facial video is 8000 Hz and the video sampling rate is 20 Hz, reading one frame takes 50 ms and 50 ms corresponds to 400 audio amplitude values; if the image feature dimension of each frame is k1, the dimension of the corresponding audio features is k2 = 400, and the dimension of the combined video feature is k = k1 + k2.
  • the training module 130 is configured to optimize the constructed neural network by iterative training to obtain a trained fraud possibility analysis model.
  • The video frames and audio frames of the facial video of a person in conversation are arranged in chronological order, so the present application uses a Long Short-Term Memory (LSTM) network, a type of recurrent neural network.
  • When building the LSTM, the network shape is defined according to the sequence length of the acquired videos and the dimension of the combined video feature, and the number of LSTM layers and the number of neurons in each layer are set.
  • For example, assuming the predetermined duration of the video is 3 minutes, the video sampling rate is 20 Hz and the combined video feature has dimension k, the sequence length of each video is recorded as 3*60*20 = 3600, and the shape of the LSTM is expressed as follows (see the sketch below):
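  • A minimal runnable form of this declaration with the tflearn deep learning library, mirroring the code listing in the description; the import, the example value of k, and return_seq=True on the first stacked LSTM layer are additions not shown in the original listing:

import tflearn

k = 1000                                              # example combined feature dimension (k1 + k2)
net = tflearn.input_data(shape=[None, 3*60*20, k])    # sequence length 3600, k features per time step
net = tflearn.lstm(net, 128, return_seq=True)         # first hidden layer, 128 units, keeps the full sequence
net = tflearn.lstm(net, 128)                          # second hidden layer, 128 units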
  • After the LSTM and the Softmax loss function have been built, the training parameters are set. Assuming that the number of iterations is 100, the gradient optimization algorithm is adam and the validation set ratio is 0.1, the code for training the LSTM model with the tflearn deep learning library is as follows (see the sketch below):
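  • A runnable form of that training call (a sketch: the two-neuron softmax output layer required by the network construction step is added here as an assumption, since the listing in the description omits it, and the tensorboard parameter name is corrected):

net = tflearn.fully_connected(net, 2, activation='softmax')  # 2 outputs: fraud / no fraud (implied by the text)
net = tflearn.regression(net, optimizer='adam', loss='categorical_crossentropy', name='output1')
model = tflearn.DNN(net, tensorboard_verbose=2)
model.fit(X, Y, n_epoch=100, validation_set=0.1, snapshot_step=100)  # X: video features, Y: one-hot fraud labels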
  • The training module 130 trains the LSTM with the fraud label of each sample and the combined video features, updates the training parameters of the LSTM at each training iteration so as to minimize the Softmax loss function, and takes the training parameters after the last update as the final parameters to obtain the fraud possibility analysis model.
  • the analysis module 140 is configured to analyze the fraud possibility of the character.
  • The acquisition module 110 acquires a facial video of a predetermined duration of the object to be analyzed, the extraction module 120 extracts the image features and audio features of the video and combines them into the video feature of the video, and the analysis module 140 inputs the video feature into the fraud possibility analysis model trained by the training module 130, which outputs the fraud probability and no-fraud probability of the object to be analyzed.
  • Referring to FIG. 4, it is a flowchart of the preferred embodiment of the fraud possibility analysis method of the present application.
  • the computing device 1 is started, and the processor 12 executes the fraud possibility analysis program 10 stored in the memory 11 to implement the following steps:
  • In step S10, the acquisition module 110 is used to collect facial videos of a predetermined duration of persons in conversation and to assign a fraud label to each video.
  • The videos may be acquired by the camera device 3 of FIG. 1 or the camera device 30 of FIG. 2, or they may be facial videos of conversations with obvious fraudulent behavior and normal fraud-free videos selected from online sources or a video library.
  • In step S20, the audio features and image features of each video are extracted by the extraction module 120 and combined to obtain the video feature of each video.
  • the image feature may be an underlying feature such as a HOG feature or an LBP feature of the video frame, or may be a feature vector of a video frame directly extracted by the convolutional neural network.
  • the audio feature may be a set of amplitude values of audio corresponding to each frame of image.
  • the dimension of the video feature is the sum of the image feature dimension of the video frame and the corresponding audio feature dimension.
  • Step S30: a neural network is constructed according to the sequence length of the videos of the predetermined duration and the dimension of the video features.
  • The number of layers of the neural network and the number of neurons in each layer are set according to the sequence length of the facial videos of the predetermined duration acquired by the acquisition module 110 and the dimension of the video features extracted and combined by the extraction module 120; because the output of the neural network is the fraud probability and no-fraud probability of a person, the number of neurons in the classifier serving as the output layer of the network is 2.
  • Step S40: the neural network is trained according to the video feature and fraud label of each video to obtain a trained fraud possibility analysis model.
  • With the fraud labels of the sample videos acquired by the acquisition module 110 and the video features extracted and combined by the extraction module 120 as sample data, the neural network is trained iteratively; the training parameters of the neural network are updated at each training iteration, and the training parameters that minimize the Softmax loss function are taken as the final parameters to obtain the trained fraud possibility analysis model.
  • In step S50, the acquisition module 110 is used to capture a facial video of a predetermined duration of the object to be analyzed.
  • This facial video is acquired by the camera device 3 of FIG. 1 or the camera device 30 of FIG. 2.
  • In step S60, the image features and audio features of the video of the object to be analyzed are extracted by the extraction module 120 and combined to obtain the video feature of the video of the object to be analyzed.
  • For the specific process of feature extraction and combination, please refer to the detailed description of the extraction module 120 and step S20.
  • Step S70: the video feature is input into the fraud possibility analysis model to obtain the fraud possibility analysis result of the object to be analyzed.
  • The video feature of the object to be analyzed obtained by the extraction module 120 is input into the trained fraud possibility analysis model, which outputs the fraud probability value and no-fraud probability value of the object to be analyzed; the output with the larger probability value is taken as the analysis result of whether the object to be analyzed is committing fraud.
  • An embodiment of the present application further provides a computer-readable storage medium, which may be any one of, or any combination of, a hard disk, a multimedia card, an SD card, a flash memory card, an SMC, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a portable compact disc read-only memory (CD-ROM), a USB memory, and the like.
  • The computer-readable storage medium includes sample videos and the fraud possibility analysis program 10, and when the fraud possibility analysis program 10 is executed by a processor, the following operations are performed:
  • Sample preparation step: collecting facial videos of persons of a predetermined duration as samples, and assigning a fraud label to each sample;
  • Sample feature extraction step: extracting the image features and audio features of each sample, and combining them to obtain the video features of each sample;
  • Network construction step: setting the number of layers of the neural network and the number of neurons in each layer according to the sequence length of each sample and the dimension of the video features;
  • Network training step: defining a Softmax loss function, training the neural network with the fraud labels and video features of the samples as sample data, outputting the fraud probability and no-fraud probability of each sample, updating the training parameters of the neural network at each training iteration, and taking the training parameters that minimize the Softmax loss function as the final parameters to obtain a fraud possibility analysis model; and
  • Model application step: capturing a facial video of a predetermined duration of the object to be analyzed, and analyzing that facial video with the fraud possibility analysis model to obtain an analysis result of the fraud possibility of the object to be analyzed.
  • the specific implementation manner of the computer readable storage medium of the present application is substantially the same as the foregoing fraud possibility analysis method and the specific implementation manner of the computing device 1, and details are not described herein again.
  • The technical solution of the present application can be embodied as a software product stored in a storage medium (such as a ROM/RAM, magnetic disk or optical disc) that includes a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the various embodiments of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Neurology (AREA)
  • Image Analysis (AREA)

Abstract

The present application provides a fraud possibility analysis method, device and computer-readable storage medium. The method comprises the following steps: collecting sample videos and assigning fraud labels; extracting the image features and audio features of each sample video and combining them to obtain the video features of each sample video; constructing a neural network according to the sequence length of the sample videos and the dimension of the video features; training the neural network with the video features and fraud label of each sample video and optimizing the training parameters to obtain a fraud possibility analysis model; capturing a facial video of a predetermined duration of an object to be analyzed; extracting the image features and audio features of that video and combining them into its video feature; and inputting that video feature into the fraud possibility analysis model, which outputs the fraud probability and no-fraud probability of the object to be analyzed, the output with the larger probability value being taken as the analysis result of whether the object to be analyzed is committing fraud. With the present application, whether a person is suspected of fraud can be judged objectively.

Description

Fraud possibility analysis method, device and storage medium
Claim of priority
This application claims priority to the Chinese patent application filed with the China Patent Office on November 2, 2017, with application number 201711061172.X and entitled "Fraud possibility analysis method, device and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of information processing technology, and in particular to a fraud possibility analysis method, device and storage medium.
Background Art
At present, fraud analysis of a person is generally carried out through face-to-face interviews. It depends heavily on the experience and judgement of the analyst, consumes a great deal of time and manpower, and the results are often neither accurate nor objective. Specialized instruments are also used to judge whether a suspect is committing fraud by measuring indicators such as respiration, pulse and skin conductance, but such instruments are usually expensive and can easily infringe on personal rights.
Summary of the Invention
In view of the above, it is necessary to provide a fraud possibility analysis method, device and storage medium that judge objectively and accurately whether a person is suspected of fraud by analyzing the person's facial video.
To achieve the above object, the present application provides a fraud possibility analysis method, the method comprising:
Sample preparation step: collecting facial videos of persons of a predetermined duration as samples, and assigning a fraud label to each sample;
Sample feature extraction step: extracting the image features and audio features of each sample, and combining them to obtain the video features of each sample;
Network construction step: setting the number of layers of a neural network and the number of neurons in each layer according to the sequence length of each sample and the dimension of the video features;
Network training step: defining a Softmax loss function, training the neural network with the fraud labels and video features of the samples as sample data, outputting the fraud probability and no-fraud probability of each sample, updating the training parameters of the neural network at each training iteration, and taking the training parameters that minimize the Softmax loss function as the final parameters to obtain a fraud possibility analysis model; and
Model application step: capturing a facial video of a predetermined duration of the object to be analyzed, and analyzing the facial video of the object to be analyzed with the fraud possibility analysis model to obtain an analysis result of the fraud possibility of the object to be analyzed.
The present application also provides a computing device comprising a memory and a processor, the memory including a fraud possibility analysis program. The computing device is connected, directly or indirectly, to a camera device, and the camera device transmits the captured facial video of a person in conversation to the computing device. When the processor of the computing device executes the fraud possibility analysis program in the memory, the following steps are implemented:
Sample preparation step: collecting facial videos of persons of a predetermined duration as samples, and assigning a fraud label to each sample;
Sample feature extraction step: extracting the image features and audio features of each sample, and combining them to obtain the video features of each sample;
Network construction step: setting the number of layers of a neural network and the number of neurons in each layer according to the sequence length of each sample and the dimension of the video features;
Network training step: defining a Softmax loss function, training the neural network with the fraud labels and video features of the samples as sample data, outputting the fraud probability and no-fraud probability of each sample, updating the training parameters of the neural network at each training iteration, and taking the training parameters that minimize the Softmax loss function as the final parameters to obtain a fraud possibility analysis model; and
Model application step: capturing a facial video of a predetermined duration of the object to be analyzed, and analyzing the facial video of the object to be analyzed with the fraud possibility analysis model to obtain an analysis result of the fraud possibility of the object to be analyzed.
In addition, to achieve the above object, the present application further provides a computer-readable storage medium including a fraud possibility analysis program which, when executed by a processor, implements the following steps:
Sample preparation step: collecting facial videos of persons of a predetermined duration as samples, and assigning a fraud label to each sample;
Sample feature extraction step: extracting the image features and audio features of each sample, and combining them to obtain the video features of each sample;
Network construction step: setting the number of layers of a neural network and the number of neurons in each layer according to the sequence length of each sample and the dimension of the video features;
Network training step: defining a Softmax loss function, training the neural network with the fraud labels and video features of the samples as sample data, outputting the fraud probability and no-fraud probability of each sample, updating the training parameters of the neural network at each training iteration, and taking the training parameters that minimize the Softmax loss function as the final parameters to obtain a fraud possibility analysis model; and
Model application step: capturing a facial video of a predetermined duration of the object to be analyzed, and analyzing the facial video of the object to be analyzed with the fraud possibility analysis model to obtain an analysis result of the fraud possibility of the object to be analyzed.
The fraud possibility analysis method, device and storage medium provided by the present application train a neural network on facial videos of a large number of persons, update the training parameters of the neural network according to the Softmax loss function, and take the training parameters after the last update as the final parameters to obtain a fraud possibility analysis model. Afterwards, a facial video of a predetermined duration is captured while the object to be analyzed is in conversation, the audio features and image features of the video are extracted and combined into the video feature of the video, and that video feature is input into the trained fraud possibility analysis model to obtain the analysis result of the fraud possibility of the object to be analyzed. With the present application, whether a person is suspected of fraud can be judged objectively and effectively, while costs are reduced and time is saved.
Brief Description of the Drawings
FIG. 1 is a diagram of the application environment of a first preferred embodiment of the fraud possibility analysis method of the present application.
FIG. 2 is a diagram of the application environment of a second preferred embodiment of the fraud possibility analysis method of the present application.
FIG. 3 is a program module diagram of the fraud possibility analysis program in FIG. 1 and FIG. 2.
FIG. 4 is a flowchart of a preferred embodiment of the fraud possibility analysis method of the present application.
The realization of the objects, functional features and advantages of the present application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed Description of the Embodiments
The principles and spirit of the present application are described below with reference to several specific embodiments. It should be understood that the specific embodiments described here are only intended to explain the present application and are not intended to limit it.
Referring to FIG. 1, it is a diagram of the application environment of the first preferred embodiment of the fraud possibility analysis method of the present application. In this embodiment, a camera device 3 is connected to a computing device 1 through a network 2. The camera device 3 captures the facial video of a person in conversation and transmits it to the computing device 1 through the network 2, and the computing device 1 analyzes the video with the fraud possibility analysis program 10 provided by the present application, outputting the person's fraud probability and no-fraud probability for reference.
The computing device 1 may be a terminal device with storage and computing capabilities, such as a server, smartphone, tablet computer, portable computer or desktop computer.
The computing device 1 includes a memory 11, a processor 12, a network interface 13 and a communication bus 14.
The camera device 3 is installed in a particular place, such as an office or a monitored area, to capture the facial video of a person in conversation, and then transmits the captured video to the memory 11 through the network 2. The network interface 13 may include a standard wired interface and a wireless interface (such as a WI-FI interface). The communication bus 14 is used to implement connection and communication between these components.
The memory 11 includes at least one type of readable storage medium. The at least one type of readable storage medium may be a non-volatile storage medium such as a flash memory, hard disk, multimedia card or card-type memory. In some embodiments, the readable storage medium may be an internal storage unit of the computing device 1, for example the hard disk of the computing device 1. In other embodiments, the readable storage medium may also be an external memory 11 of the computing device 1, for example a plug-in hard disk, Smart Media Card (SMC), Secure Digital (SD) card or Flash Card provided on the computing device 1.
In this embodiment, the memory 11 stores the program code of the fraud possibility analysis program 10, the conversation videos captured by the camera device 3, the data used when the processor 12 executes the program code of the fraud possibility analysis program 10, and the data finally output, and so on.
In some embodiments, the processor 12 may be a Central Processing Unit (CPU), a microprocessor or another data processing chip.
FIG. 1 shows only the computing device 1 with components 11-14, but it should be understood that not all of the illustrated components need to be implemented, and more or fewer components may be implemented instead.
Optionally, the computing device 1 may further include a user interface. The user interface may include an input unit such as a keyboard, a voice input device such as a microphone or another device with voice recognition capability, and a voice output device such as a speaker or headphones. Optionally, the user interface may also include a standard wired interface and a wireless interface.
Optionally, the computing device 1 may further include a display. In some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch display, or the like. The display is used to show the information processed by the computing device 1 and a visualized user interface.
Optionally, the computing device 1 further includes a touch sensor. The area provided by the touch sensor for the user to perform touch operations is referred to as the touch area. The touch sensor described here may be a resistive touch sensor, a capacitive touch sensor, or the like. Moreover, the touch sensor includes not only contact touch sensors but also proximity touch sensors and the like. Furthermore, the touch sensor may be a single sensor or a plurality of sensors arranged, for example, in an array. A user, for example a psychological counselor, can start the fraud possibility analysis program 10 by touch.
The computing device 1 may further include a radio frequency (RF) circuit, sensors, an audio circuit and so on, which are not described in detail here.
Referring to FIG. 2, it is a diagram of the application environment of the second preferred embodiment of the fraud possibility analysis method of the present application. The object to be analyzed carries out the fraud possibility analysis process through a terminal 3. The camera device 30 of the terminal 3 captures the facial video of the object to be analyzed and transmits it to the computing device 1 through the network 2. The processor 12 of the computing device 1 executes the program code of the fraud possibility analysis program 10 stored in the memory 11, analyzes the audio part and the video frames of the video, and outputs the fraud probability and no-fraud probability of the object to be analyzed, for reference by the object to be analyzed, a reviewer, and so on.
For the components of the computing device 1 in FIG. 2, such as the memory 11, processor 12, network interface 13 and communication bus 14 shown in the figure, as well as components not shown, please refer to the description of FIG. 1.
The terminal 3 may be a terminal device with storage and computing capabilities, such as a smartphone, tablet computer, portable computer or desktop computer.
When executed by the processor 12, the fraud possibility analysis program 10 in FIG. 1 and FIG. 2 implements the following steps:
Sample preparation step: collecting facial videos of persons of a predetermined duration as samples, and assigning a fraud label to each sample;
Sample feature extraction step: extracting the image features and audio features of each sample, and combining them to obtain the video features of each sample;
Network construction step: setting the number of layers of a neural network and the number of neurons in each layer according to the sequence length of each sample and the dimension of the video features;
Network training step: defining a Softmax loss function, training the neural network with the fraud labels and video features of the samples as sample data, outputting the fraud probability and no-fraud probability of each sample, updating the training parameters of the neural network at each training iteration, and taking the training parameters that minimize the Softmax loss function as the final parameters to obtain a fraud possibility analysis model; and
Model application step: capturing a facial video of a predetermined duration of the object to be analyzed, and analyzing the facial video of the object to be analyzed with the fraud possibility analysis model to obtain an analysis result of the fraud possibility of the object to be analyzed.
For a detailed description of the above steps, please refer to the following description of FIG. 3, the program module diagram of the fraud possibility analysis program 10, and FIG. 4, the flowchart of the preferred embodiment of the fraud possibility analysis method.
Referring to FIG. 3, it is a program module diagram of the fraud possibility analysis program 10 in FIG. 1 and FIG. 2. In this embodiment, the fraud possibility analysis program 10 is divided into a plurality of modules, which are stored in the memory 11 and executed by the processor 12 to accomplish the present application. A module referred to in the present application is a series of computer program instruction segments capable of performing a particular function.
The fraud possibility analysis program 10 may be divided into an acquisition module 110, an extraction module 120, a training module 130 and a prediction module 140.
The acquisition module 110 is used to acquire a facial video of a predetermined duration of a person in conversation. The video may be acquired by the camera device 3 of FIG. 1 or the camera device 30 of FIG. 2, or it may be facial videos with obvious fraudulent behavior and fraud-free facial videos selected from online sources or a video library. A fraud label is assigned to each sample video used for neural network training; the fraud label indicates whether the person in the sample video is suspected of fraud, for example 1 indicates suspected fraud and 0 indicates no suspicion of fraud.
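Since the training code given later selects tflearn's categorical_crossentropy loss, the 0/1 fraud labels would need to be one-hot encoded before training; a minimal sketch (the variable names and example values are illustrative, not from the patent):

import numpy as np
from tflearn.data_utils import to_categorical

labels = np.array([1, 0, 1, 0])       # 1 = suspected fraud, 0 = no fraud (example labels)
Y = to_categorical(labels, 2)         # one-hot targets for the two-class softmax output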
The extraction module 120 is used to extract the audio features and image features of the videos and combine the audio features and image features to obtain the video feature of each video. The videos acquired by the acquisition module 110 are decoded and pre-processed to obtain the audio part and video frames of each video; feature extraction is performed on the audio part and the video frames respectively to obtain the audio features and image features of each video; and the audio features and image features are combined to obtain the video feature of each video.
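The patent does not name the decoding tools; as one possible sketch, the video frames could be read with OpenCV and the audio track demuxed to an 8000 Hz mono WAV file with ffmpeg (both tool choices and the file name are assumptions):

import subprocess
import cv2

def decode_video(video_path, wav_path='sample_audio.wav'):
    # demux the audio part to 8000 Hz mono PCM (matches the sampling rate used in the example below)
    subprocess.run(['ffmpeg', '-y', '-i', video_path, '-ar', '8000', '-ac', '1', wav_path], check=True)
    frames = []
    cap = cv2.VideoCapture(video_path)   # video frames, e.g. sampled at 20 fps
    ok, frame = cap.read()
    while ok:
        frames.append(frame)
        ok, frame = cap.read()
    cap.release()
    return frames, wav_path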
When the extraction module 120 extracts the image features of a video, the HOG features, LBP features or the like of video frames that have been normalized, de-noised and otherwise pre-processed may be used as image features, or the feature vectors of the video frames may be extracted directly with a convolutional neural network.
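A sketch of per-frame image feature extraction with HOG after normalization and noise removal, using OpenCV and scikit-image (these libraries and the parameter values are illustrative assumptions; the patent only names HOG/LBP features or CNN feature vectors):

import cv2
from skimage.feature import hog

def frame_image_feature(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, (64, 64))               # normalize the frame size (example value)
    gray = cv2.GaussianBlur(gray, (3, 3), 0)        # simple noise removal
    return hog(gray, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))              # k1-dimensional HOG vector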
When the extraction module 120 extracts the audio features of a video, the amplitude values of the audio part of the video may be used as the audio features. For example, assuming the predetermined duration of the video is 3 minutes and the audio sampling rate is 8000 Hz, 8000*60*3 amplitude values are extracted from the audio part of the 3-minute video as its audio features.
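A sketch of extracting these amplitude values, assuming the audio part has already been demuxed to an 8000 Hz 16-bit mono WAV file as in the decoding sketch above (the file name is illustrative):

import wave
import numpy as np

with wave.open('sample_audio.wav', 'rb') as wf:
    samples = np.frombuffer(wf.readframes(wf.getnframes()), dtype=np.int16)
audio_feature = samples[:8000 * 60 * 3]    # 1,440,000 amplitude values for a 3-minute video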
When the extraction module 120 combines the image features and audio features, the dimension of the combined video feature is the sum of the image feature dimension of each frame and the dimension of the corresponding audio features. Following the example above, assuming the audio sampling rate of the facial video of the person in conversation is 8000 Hz and the video sampling rate is 20 Hz, reading one frame takes 50 ms and 50 ms corresponds to 400 audio amplitude values; if the image feature dimension of each frame of the video is k1, the dimension of the corresponding audio features is k2 = 400, and the dimension of the combined video feature is k = k1 + k2.
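A sketch of the per-frame combination just described: the k1-dimensional image feature of frame i is concatenated with the 400 amplitude values of its 50 ms window, giving a feature of dimension k = k1 + 400 per time step (function and variable names are illustrative):

import numpy as np

SAMPLES_PER_FRAME = 8000 // 20                       # 400 amplitude values per 50 ms frame

def combine_features(frame_features, samples):
    steps = []
    for i, img_feat in enumerate(frame_features):    # img_feat: k1-dimensional image feature of frame i
        audio_slice = samples[i * SAMPLES_PER_FRAME:(i + 1) * SAMPLES_PER_FRAME]
        steps.append(np.concatenate([img_feat, audio_slice]))   # k = k1 + 400
    return np.stack(steps)                           # shape: (sequence length, k), e.g. (3600, k)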
The training module 130 is used to optimize the constructed neural network through iterative training to obtain a trained fraud possibility analysis model. The video frames and audio frames of the facial video of a person in conversation are arranged in chronological order, so the present application adopts the Long Short-Term Memory (LSTM) network, a type of recurrent neural network.
When building the LSTM, the network shape is first defined according to the sequence length of the facial videos of the predetermined duration acquired by the acquisition module 110 and the dimension of the video feature obtained by the extraction module 120, and the number of LSTM layers and the number of neurons in each LSTM layer are set. Following the example above, assuming the predetermined duration of the video is 3 minutes, the video sampling rate is 20 Hz and the dimension of the combined video feature is k, the sequence length of each video is recorded as 3*60*20, and the shape of the LSTM can be expressed in code of the tflearn deep learning library as follows:
net=tflearn.input_data(shape=[None,3*60*20,k])
Then two hidden layers are built, each with 128 neural units, expressed in tflearn code as follows:
net=tflearn.lstm(net,128,return_seq=True)  # return_seq=True so that the second stacked LSTM layer receives the full sequence
net=tflearn.lstm(net,128)
Next, the Softmax loss function is defined by the following formula:
[Softmax loss function formula, rendered as image PCTCN2018076122-appb-000001 in the original publication]
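The formula itself appears only as an image in the published text. Since the training code below selects tflearn's categorical_crossentropy loss, it presumably corresponds to the standard softmax cross-entropy over the two classes, which for m samples with parameters θ can be written as (this reconstruction is an assumption, not the patent's own formula):

J(\theta) = -\frac{1}{m} \sum_{j=1}^{m} \sum_{c \in \{0,1\}} \mathbf{1}\{y_j = c\} \log p_c(X_j; \theta)

where p_c(X_j; θ) is the softmax output of the network for class c (no fraud / fraud) on sample X_j.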
After the LSTM and the Softmax loss function have been built, the training parameters are set. Assuming the number of iterations is 100, the gradient optimization algorithm is adam and the validation set ratio is 0.1, the LSTM model training is expressed in code of the tflearn deep learning library as follows:
net=tflearn.regression(net,optimizer='adam',loss='categorical_crossentropy',name='output1')
model=tflearn.DNN(net,tensorboard_verbose=2)
model.fit(X,Y,n_epoch=100,validation_set=0.1,snapshot_step=100)
The training module 130 trains the LSTM with the fraud labels of the samples and the combined video features, updates the training parameters of the LSTM at each training iteration so as to minimize the Softmax loss function, and takes the training parameters after the last update as the final parameters to obtain the fraud possibility analysis model.
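The description does not say how the final parameters are persisted; one minimal possibility is tflearn's built-in save and load methods (the file name is illustrative):

model.save('fraud_model.tflearn')    # persist the parameters after the last update
# ... later, in the model application step, the same network is rebuilt and the weights reloaded:
model.load('fraud_model.tflearn')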
The analysis module 140 is used to analyze the fraud possibility of a person. The acquisition module 110 acquires a facial video of a predetermined duration of the object to be analyzed, the extraction module 120 extracts the image features and audio features of the video and combines them into the video feature of the video, and the analysis module 140 inputs the video feature into the fraud possibility analysis model trained by the training module 130, which outputs the fraud probability and no-fraud probability of the object to be analyzed.
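A sketch of this prediction step with the tflearn model trained above (variable names are illustrative; the class order follows the one-hot encoding used for the labels, here 0 = no fraud, 1 = fraud):

probs = model.predict([video_feature])    # video_feature: (3600, k) array for the object to be analyzed
no_fraud_prob, fraud_prob = probs[0]      # the two softmax outputs
result = 'suspected fraud' if fraud_prob > no_fraud_prob else 'no fraud'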
Referring to FIG. 4, it is a flowchart of the preferred embodiment of the fraud possibility analysis method of the present application. Using the architecture shown in FIG. 1 or FIG. 2, the computing device 1 is started, and the processor 12 executes the fraud possibility analysis program 10 stored in the memory 11 to implement the following steps:
Step S10: the acquisition module 110 is used to collect facial videos of a predetermined duration of persons in conversation and to assign a fraud label to each video. The videos may be acquired by the camera device 3 of FIG. 1 or the camera device 30 of FIG. 2, or they may be facial videos of conversations with obvious fraudulent behavior and normal fraud-free videos selected from online sources or a video library.
Step S20: the extraction module 120 is used to extract the audio features and image features of each video and combine them to obtain the video feature of each video. The image features may be low-level features of the video frames such as HOG features or LBP features, or they may be feature vectors of the video frames extracted directly with a convolutional neural network. The audio features may be the set of amplitude values of the audio corresponding to each frame. The dimension of the video feature is the sum of the image feature dimension of a video frame and the dimension of the corresponding audio features.
Step S30: a neural network is constructed according to the sequence length of the videos of the predetermined duration and the dimension of the video features. The number of layers of the neural network and the number of neurons in each layer are set according to the sequence length of the facial videos of the predetermined duration acquired by the acquisition module 110 and the dimension of the video features extracted and combined by the extraction module 120; because the output of the neural network is the fraud probability and no-fraud probability of a person, the number of neurons in the classifier serving as the output layer of the network is 2.
Step S40: the neural network is trained according to the video feature and fraud label of each video to obtain a trained fraud possibility analysis model. With the fraud labels of the sample videos acquired by the acquisition module 110 and the video features extracted and combined by the extraction module 120 as sample data, the neural network is trained iteratively; the training parameters of the neural network are updated at each training iteration, and the training parameters that minimize the Softmax loss function are taken as the final parameters to obtain the trained fraud possibility analysis model.
Step S50: the acquisition module 110 is used to capture a facial video of a predetermined duration of the object to be analyzed. This facial video is acquired by the camera device 3 of FIG. 1 or the camera device 30 of FIG. 2.
Step S60: the extraction module 120 is used to extract the image features and audio features of the video of the object to be analyzed and combine them to obtain the video feature of the video of the object to be analyzed. For the specific process of feature extraction and combination, please refer to the detailed description of the extraction module 120 and step S20.
Step S70: the video feature is input into the fraud possibility analysis model to obtain the fraud possibility analysis result of the object to be analyzed. The video feature of the object to be analyzed obtained by the extraction module 120 is input into the trained fraud possibility analysis model, which outputs the fraud probability value and no-fraud probability value of the object to be analyzed; the output with the larger probability value is taken as the analysis result of whether the object to be analyzed is committing fraud.
In addition, an embodiment of the present application further provides a computer-readable storage medium, which may be any one of, or any combination of several of, a hard disk, multimedia card, SD card, flash memory card, SMC, read-only memory (ROM), erasable programmable read-only memory (EPROM), portable compact disc read-only memory (CD-ROM), USB memory and so on. The computer-readable storage medium includes sample videos and the fraud possibility analysis program 10, and when the fraud possibility analysis program 10 is executed by a processor, the following operations are implemented:
Sample preparation step: collecting facial videos of persons of a predetermined duration as samples, and assigning a fraud label to each sample;
Sample feature extraction step: extracting the image features and audio features of each sample, and combining them to obtain the video features of each sample;
Network construction step: setting the number of layers of a neural network and the number of neurons in each layer according to the sequence length of each sample and the dimension of the video features;
Network training step: defining a Softmax loss function, training the neural network with the fraud labels and video features of the samples as sample data, outputting the fraud probability and no-fraud probability of each sample, updating the training parameters of the neural network at each training iteration, and taking the training parameters that minimize the Softmax loss function as the final parameters to obtain a fraud possibility analysis model; and
Model application step: capturing a facial video of a predetermined duration of the object to be analyzed, and analyzing the facial video of the object to be analyzed with the fraud possibility analysis model to obtain an analysis result of the fraud possibility of the object to be analyzed.
The specific implementation of the computer-readable storage medium of the present application is substantially the same as the specific implementations of the fraud possibility analysis method and the computing device 1 described above, and is not repeated here.
It should be noted that, in this document, the terms "comprise", "include" or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, device, article or method that includes a series of elements includes not only those elements but also other elements not explicitly listed, or also includes elements inherent to such a process, device, article or method. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the existence of other identical elements in the process, device, article or method that includes that element.
The serial numbers of the above embodiments of the present application are for description only and do not indicate the relative merits of the embodiments. From the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus the necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium as described above (such as a ROM/RAM, magnetic disk or optical disc) and includes a number of instructions for causing a terminal device (which may be a mobile phone, computer, server, network device or the like) to execute the methods described in the embodiments of the present application.
The above are only preferred embodiments of the present application and do not thereby limit the scope of the patent of the present application. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present application, or any direct or indirect application in other related technical fields, is likewise included within the scope of patent protection of the present application.

Claims (20)

  1. A fraud possibility analysis method, characterized in that the method comprises:
    a sample preparation step: collecting facial videos of persons of a predetermined duration as samples, and assigning a fraud label to each sample;
    a sample feature extraction step: extracting the image features and audio features of each sample, and combining them to obtain the video features of each sample;
    a network construction step: setting the number of layers of a neural network and the number of neurons in each layer according to the sequence length of each sample and the dimension of the video features;
    a network training step: defining a Softmax loss function, training the neural network with the fraud labels and video features of the samples as sample data, outputting the fraud probability and no-fraud probability of each sample, updating the training parameters of the neural network at each training iteration, and taking the training parameters that minimize the Softmax loss function as the final parameters to obtain a fraud possibility analysis model; and
    a model application step: capturing a facial video of a predetermined duration of an object to be analyzed, and analyzing the facial video of the object to be analyzed with the fraud possibility analysis model to obtain an analysis result of the fraud possibility of the object to be analyzed.
  2. The fraud possibility analysis method according to claim 1, characterized in that the sample feature extraction step comprises:
    decoding and pre-processing each sample to obtain the video frames and audio part of each sample;
    performing feature extraction on the video frames of each sample to obtain the image features of each sample; and
    performing feature extraction on the audio part of each sample to obtain the audio features of each sample.
  3. The fraud possibility analysis method according to claim 2, characterized in that the image features are the HOG features or LBP features of the video frames of each sample, or feature vectors of the video frames extracted directly with a convolutional neural network.
  4. The fraud possibility analysis method according to claim 1, characterized in that the dimension of the video features is the sum of the dimension of the image features and the dimension of the corresponding audio features.
  5. The fraud possibility analysis method according to claim 1, characterized in that the formula of the Softmax loss function is as follows:
    [Softmax loss function formula, rendered as image PCTCN2018076122-appb-100001 in the original publication]
    where θ denotes the training parameters of the neural network, X_j denotes the j-th sample, and y_j denotes the fraud probability of the j-th sample.
  6. The fraud possibility analysis method according to claim 1, characterized in that the training parameters in the network training step include the number of iterations.
  7. The fraud possibility analysis method according to claim 1, characterized in that the model application step further comprises:
    decoding and pre-processing the video of the object to be analyzed to obtain the audio part and video frames of the video of the object to be analyzed;
    performing feature extraction on the video frames of the video of the object to be analyzed to obtain the image features of the video of the object to be analyzed;
    performing feature extraction on the audio part of the video of the object to be analyzed to obtain the audio features of the video of the object to be analyzed;
    combining the image features and audio features of the video of the object to be analyzed to obtain the video feature of the video of the object to be analyzed; and
    inputting the video feature into the trained fraud possibility analysis model, which outputs the fraud probability and no-fraud probability of the object to be analyzed.
  8. A computing device comprising a memory and a processor, characterized in that the memory includes a fraud possibility analysis program which, when executed by the processor, implements the following steps:
    a sample preparation step: collecting facial videos of persons of a predetermined duration as samples, and assigning a fraud label to each sample;
    a sample feature extraction step: extracting the image features and audio features of each sample, and combining them to obtain the video features of each sample;
    a network construction step: setting the number of layers of a neural network and the number of neurons in each layer according to the sequence length of each sample and the dimension of the video features;
    a network training step: defining a Softmax loss function, training the neural network with the fraud labels and video features of the samples as sample data, outputting the fraud probability and no-fraud probability of each sample, updating the training parameters of the neural network at each training iteration, and taking the training parameters that minimize the Softmax loss function as the final parameters to obtain a fraud possibility analysis model; and
    a model application step: capturing a facial video of a predetermined duration of an object to be analyzed, and analyzing the facial video of the object to be analyzed with the fraud possibility analysis model to obtain an analysis result of the fraud possibility of the object to be analyzed.
  9. The computing device according to claim 8, characterized in that the sample feature extraction step comprises:
    decoding and pre-processing each sample to obtain the video frames and audio part of each sample;
    performing feature extraction on the video frames of each sample to obtain the image features of each sample; and
    performing feature extraction on the audio part of each sample to obtain the audio features of each sample.
  10. The fraud possibility analysis method according to claim 9, characterized in that the image features are the HOG features or LBP features of the video frames of each sample, or feature vectors of the video frames extracted directly with a convolutional neural network.
  11. The computing device according to claim 8, characterized in that the dimension of the video features is the sum of the dimension of the image features and the dimension of the corresponding audio features.
  12. The computing device according to claim 8, characterized in that the formula of the Softmax loss function is as follows:
    [Softmax loss function formula, rendered as image PCTCN2018076122-appb-100002 in the original publication]
    where θ denotes the training parameters of the neural network, X_j denotes the j-th sample, and y_j denotes the fraud probability of the j-th sample.
  13. The computing device according to claim 8, characterized in that the training parameters in the network training step include the number of iterations.
  14. The computing device according to claim 8, characterized in that the model application step further comprises:
    decoding and pre-processing the video of the object to be analyzed to obtain the audio part and video frames of the video of the object to be analyzed;
    performing feature extraction on the video frames of the video of the object to be analyzed to obtain the image features of the video of the object to be analyzed;
    performing feature extraction on the audio part of the video of the object to be analyzed to obtain the audio features of the video of the object to be analyzed;
    combining the image features and audio features of the video of the object to be analyzed to obtain the video feature of the video of the object to be analyzed; and
    inputting the video feature into the trained fraud possibility analysis model, which outputs the fraud probability and no-fraud probability of the object to be analyzed.
  15. A computer-readable storage medium, characterized in that the computer-readable storage medium includes a fraud possibility analysis program which, when executed by a processor, implements the following steps:
    a sample preparation step: collecting facial videos of persons of a predetermined duration as samples, and assigning a fraud label to each sample;
    a sample feature extraction step: extracting the image features and audio features of each sample, and combining them to obtain the video features of each sample;
    a network construction step: setting the number of layers of a neural network and the number of neurons in each layer according to the sequence length of each sample and the dimension of the video features;
    a network training step: defining a Softmax loss function, training the neural network with the fraud labels and video features of the samples as sample data, outputting the fraud probability and no-fraud probability of each sample, updating the training parameters of the neural network at each training iteration, and taking the training parameters that minimize the Softmax loss function as the final parameters to obtain a fraud possibility analysis model; and
    a model application step: capturing a facial video of a predetermined duration of an object to be analyzed, and analyzing the facial video of the object to be analyzed with the fraud possibility analysis model to obtain an analysis result of the fraud possibility of the object to be analyzed.
  16. The medium according to claim 15, characterized in that the sample feature extraction step comprises:
    decoding and pre-processing each sample to obtain the video frames and audio part of each sample;
    performing feature extraction on the video frames of each sample to obtain the image features of each sample; and
    performing feature extraction on the audio part of each sample to obtain the audio features of each sample.
  17. The medium according to claim 15, characterized in that the dimension of the video features is the sum of the dimension of the image features and the dimension of the corresponding audio features.
  18. The medium according to claim 15, characterized in that the formula of the Softmax loss function is as follows:
    [Softmax loss function formula, rendered as image PCTCN2018076122-appb-100003 in the original publication]
    where θ denotes the training parameters of the neural network, X_j denotes the j-th sample, and y_j denotes the fraud probability of the j-th sample.
  19. The medium according to claim 15, characterized in that the training parameters in the network training step include the number of iterations.
  20. The medium according to claim 15, characterized in that the model application step further comprises:
    decoding and pre-processing the video of the object to be analyzed to obtain the audio part and video frames of the video of the object to be analyzed;
    performing feature extraction on the video frames of the video of the object to be analyzed to obtain the image features of the video of the object to be analyzed;
    performing feature extraction on the audio part of the video of the object to be analyzed to obtain the audio features of the video of the object to be analyzed;
    combining the image features and audio features of the video of the object to be analyzed to obtain the video feature of the video of the object to be analyzed; and
    inputting the video feature into the trained fraud possibility analysis model, which outputs the fraud probability and no-fraud probability of the object to be analyzed.
PCT/CN2018/076122 2017-11-02 2018-02-10 Fraud possibility analysis method, device and storage medium WO2019085331A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711061172.X 2017-11-02
CN201711061172.XA CN108038413A (zh) 2017-11-02 2018-05-15 Fraud possibility analysis method, device and storage medium

Publications (1)

Publication Number Publication Date
WO2019085331A1 true WO2019085331A1 (zh) 2019-05-09

Family

ID=62092695

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/076122 WO2019085331A1 (zh) 2018-02-10 2019-05-09 Fraud possibility analysis method, device and storage medium

Country Status (2)

Country Link
CN (1) CN108038413A (zh)
WO (1) WO2019085331A1 (zh)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110705585A (zh) * 2019-08-22 2020-01-17 深圳壹账通智能科技有限公司 Network fraud identification method and apparatus, computer device, and storage medium
CN111540375A (zh) * 2020-04-29 2020-08-14 全球能源互联网研究院有限公司 Training method for an audio separation model, and audio signal separation method and apparatus
CN112926623A (zh) * 2021-01-22 2021-06-08 北京有竹居网络技术有限公司 Method, apparatus, medium and electronic device for identifying synthesized video
CN113630495A (zh) * 2020-05-07 2021-11-09 中国电信股份有限公司 Fraud-related order prediction model training method and apparatus, and order prediction method and apparatus
US11244050B2 * 2018-12-03 2022-02-08 Mayachitra, Inc. Malware classification and detection using audio descriptors
CN114549026A (zh) * 2022-04-26 2022-05-27 浙江鹏信信息科技股份有限公司 Unknown fraud identification method and system based on algorithm component library analysis

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108776932A (zh) * 2018-05-22 2018-11-09 深圳壹账通智能科技有限公司 Method for determining a user's investment type, storage medium and server
CN109284371B (zh) * 2018-09-03 2023-04-18 平安证券股份有限公司 Anti-fraud method, electronic device and computer-readable storage medium
CN109344908B (zh) * 2018-10-30 2020-04-28 北京字节跳动网络技术有限公司 Method and apparatus for generating a model
CN111382623B (zh) * 2018-12-28 2023-06-23 广州市百果园信息技术有限公司 Live-stream review method, apparatus, server and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6826300B2 (en) * 2001-05-31 2004-11-30 George Mason University Feature based classification
CN105956572A (zh) * 2016-05-15 2016-09-21 北京工业大学 Live face detection method based on a convolutional neural network
CN107007257A (zh) * 2017-03-17 2017-08-04 深圳大学 Automatic rating method and apparatus for facial unnaturalness
CN107103266A (zh) * 2016-02-23 2017-08-29 中国科学院声学研究所 Training of a two-dimensional face spoofing detection classifier and face spoofing detection method

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104545950A (zh) * 2014-12-23 2015-04-29 上海博康智能信息技术有限公司 Non-contact lie detection method and lie detection system
CN105160318B (zh) * 2015-08-31 2018-11-09 北京旷视科技有限公司 Facial-expression-based lie detection method and system
CN106909896B (zh) * 2017-02-17 2020-06-30 竹间智能科技(上海)有限公司 Human-computer interaction system based on personality and interpersonal relationship recognition, and working method thereof
CN106901758B (zh) * 2017-02-23 2019-10-25 南京工程学院 Speech confidence evaluation method based on a convolutional neural network
CN107133481A (zh) * 2017-05-22 2017-09-05 西北工业大学 Multimodal depression estimation and classification method based on DCNN-DNN and PV-SVM
CN107256392A (zh) * 2017-06-05 2017-10-17 南京邮电大学 Comprehensive emotion recognition method combining image and speech

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6826300B2 (en) * 2001-05-31 2004-11-30 George Mason University Feature based classification
CN107103266A (zh) * 2016-02-23 2017-08-29 中国科学院声学研究所 Training of a two-dimensional face spoofing detection classifier and face spoofing detection method
CN105956572A (zh) * 2016-05-15 2016-09-21 北京工业大学 Live face detection method based on a convolutional neural network
CN107007257A (zh) * 2017-03-17 2017-08-04 深圳大学 Automatic rating method and apparatus for facial unnaturalness

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11244050B2 (en) * 2018-12-03 2022-02-08 Mayachitra, Inc. Malware classification and detection using audio descriptors
US20220114256A1 (en) * 2018-12-03 2022-04-14 Mayachitra, Inc. Malware classification and detection using audio descriptors
CN110705585A (zh) * 2019-08-22 2020-01-17 深圳壹账通智能科技有限公司 Network fraud identification method and apparatus, computer device, and storage medium
CN111540375A (zh) * 2020-04-29 2020-08-14 全球能源互联网研究院有限公司 Training method for an audio separation model, and audio signal separation method and apparatus
CN111540375B (zh) * 2020-04-29 2023-04-28 全球能源互联网研究院有限公司 Training method for an audio separation model, and audio signal separation method and apparatus
CN113630495A (zh) * 2020-05-07 2021-11-09 中国电信股份有限公司 Fraud-related order prediction model training method and apparatus, and order prediction method and apparatus
CN113630495B (zh) * 2020-05-07 2022-08-02 中国电信股份有限公司 Fraud-related order prediction model training method and apparatus, and order prediction method and apparatus
CN112926623A (zh) * 2021-01-22 2021-06-08 北京有竹居网络技术有限公司 Method, apparatus, medium and electronic device for identifying synthesized video
CN112926623B (zh) * 2021-01-22 2024-01-26 北京有竹居网络技术有限公司 Method, apparatus, medium and electronic device for identifying synthesized video
CN114549026A (zh) * 2022-04-26 2022-05-27 浙江鹏信信息科技股份有限公司 Unknown fraud identification method and system based on algorithm component library analysis

Also Published As

Publication number Publication date
CN108038413A (zh) 2018-05-15

Similar Documents

Publication Publication Date Title
WO2019085331A1 (zh) Fraud possibility analysis method, device and storage medium
WO2019085329A1 (zh) Personality analysis method based on a recurrent neural network, apparatus and storage medium
WO2019085330A1 (zh) Personality analysis method, apparatus and storage medium
WO2019104890A1 (zh) Fraud identification method combining audio analysis and video analysis, apparatus and storage medium
Mason et al. An investigation of biometric authentication in the healthcare environment
WO2021000678A1 (zh) Enterprise credit review method, apparatus, device and computer-readable storage medium
WO2019200781A1 (zh) Receipt recognition method, apparatus and storage medium
WO2019109526A1 (zh) Age recognition method for face images, apparatus and storage medium
CN107239666B (zh) Method and system for de-identifying medical image data
WO2019071903A1 (zh) Micro-expression interview assistance method, apparatus and storage medium
US20210398416A1 (en) Systems and methods for a hand hygiene compliance checking system with explainable feedback
CN107958230B (zh) Facial expression recognition method and apparatus
CN112395979B (zh) Image-based health state recognition method, apparatus, device and storage medium
TWI712980B (zh) Claim information extraction method and apparatus, and electronic device
CN108549848B (zh) Method and apparatus for outputting information
US11641352B2 (en) Apparatus, method and computer program product for biometric recognition
WO2019109530A1 (zh) Emotion recognition method, apparatus and storage medium
CN112509690A (zh) Method, apparatus, device and storage medium for quality control
US20230410222A1 (en) Information processing apparatus, control method, and program
CN111738199B (zh) Image information verification method, apparatus, computing device and medium
WO2021051602A1 (zh) Face recognition method, system, apparatus and storage medium based on lip-reading passwords
CN110874570A (zh) Facial recognition method, apparatus, device and computer-readable storage medium
US20170277423A1 (en) Information processing method and electronic device
CN110393539B (zh) Psychological abnormality detection method, apparatus, storage medium and electronic device
CN113673318B (zh) Action detection method, apparatus, computer device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18874173

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25/09/2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18874173

Country of ref document: EP

Kind code of ref document: A1