WO2021022795A1 - 人脸识别过程中的欺诈行为检测方法、装置及设备 - Google Patents


Info

Publication number
WO2021022795A1
WO2021022795A1 (application PCT/CN2020/072048)
Authority
WO
WIPO (PCT)
Prior art keywords
information
bypass
decision result
decision
face recognition
Prior art date
Application number
PCT/CN2020/072048
Other languages
English (en)
French (fr)
Inventor
曹佳炯
Original Assignee
创新先进技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 创新先进技术有限公司 filed Critical 创新先进技术有限公司
Priority to US16/804,635 priority Critical patent/US10936715B1/en
Publication of WO2021022795A1 publication Critical patent/WO2021022795A1/zh
Priority to US17/188,881 priority patent/US11182475B2/en

Classifications

    • G06F 21/32: User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G06F 16/2474: Sequence data queries, e.g. querying versioned data
    • G06F 18/2411: Classification based on the proximity to a decision surface, e.g. support vector machines
    • G06F 18/24323: Tree-organised classifiers
    • G06F 18/254: Fusion of classification results, e.g. of results related to same input data
    • G06F 21/55: Detecting local intrusion or implementing counter-measures
    • G06N 20/10: Machine learning using kernel methods, e.g. support vector machines [SVM]
    • G06N 20/20: Ensemble learning
    • G06N 5/01: Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
    • G06N 5/04: Inference or reasoning models
    • G06V 10/764: Image or video recognition using classification, e.g. of video objects
    • G06V 10/809: Fusion of classification results, e.g. where the classifiers operate on the same input data
    • G06V 40/166: Face detection; localisation; normalisation using acquisition arrangements
    • G06V 40/172: Face classification, e.g. identification
    • G06V 40/40: Spoof detection, e.g. liveness detection
    • G06V 40/45: Detection of the body part being alive
    • G06F 2221/034: Test or assess a computer or a system

Definitions

  • One or more embodiments of this specification relate to the field of computer technology, and in particular to a method, apparatus, and device for detecting fraud in the face recognition process.
  • Face recognition is a landmark technology and application of the artificial-intelligence era. At present, in payment, travel, campus, and other scenarios, face recognition has greatly improved production efficiency and user experience. However, fraud sometimes occurs in which a non-living substitute, such as a photo or video, is used in place of a real, live person for face recognition; such fraud is also called a liveness attack. For current face recognition technology, fraud is the main security risk. Therefore, how to effectively detect fraud during face recognition has become a problem to be solved.
  • In conventional technology, image information (including but not limited to light) and deep learning algorithms are usually used to detect whether fraud is currently present.
  • However, because image quality is easily affected by environmental factors such as illumination and background, and the imaging of human faces is also easily affected by expression and pose, the performance of traditional detection methods is unstable.
  • In addition, because deep learning algorithms rely heavily on training data, once the training data differs significantly from the real scene, the performance of the detection method also decreases.
  • One or more embodiments of this specification describe a fraud detection method, apparatus, and device for the face recognition process, which can improve the accuracy of fraud detection.
  • a fraud detection method in the face recognition process including:
  • bypass information includes device information used by the user and user behavior information
  • bypass decision result is used to predict the probability of fraud in the face recognition
  • a fraud detection device in the process of face recognition including:
  • the receiving unit is used to receive the user's face recognition request
  • a collection unit configured to collect bypass information of the user in the process of processing the face recognition request received by the receiving unit; the bypass information includes device information used by the user and user behavior information;
  • An input unit configured to input the bypass information collected by the collection unit into at least one decision model to obtain a bypass decision result; the bypass decision result is used to predict the probability of fraud in the face recognition;
  • the determining unit is configured to determine whether there is a fraudulent behavior in the face recognition at least based on the bypass decision result.
  • a fraud detection device in the process of face recognition including:
  • a memory; one or more processors; and
  • One or more programs wherein the one or more programs are stored in the memory and are configured to be executed by the one or more processors, and the following steps are implemented when the programs are executed by the processor:
  • bypass information includes device information used by the user and user behavior information
  • bypass decision result is used to predict the probability of fraud in the face recognition
  • One or more embodiments of this specification provide a fraud detection method, apparatus, and device for the face recognition process, in which a user's face recognition request is received.
  • the user's bypass information is collected.
  • the bypass information includes device information used by the user and user behavior information.
  • the result of the bypass decision is used to predict the probability of fraud in the face recognition.
  • Figure 1 is a schematic diagram of the face recognition system provided in this specification
  • FIG. 2 is a flowchart of a method for detecting fraud in the face recognition process provided by an embodiment of this specification
  • FIG. 3 is a schematic diagram of a fraud detection device in a face recognition process provided by an embodiment of this specification
  • Fig. 4 is a schematic diagram of a fraud detection device in a face recognition process provided by an embodiment of this specification.
  • FIG. 1 is a schematic diagram of the face recognition system provided in this specification.
  • the face recognition system may include a receiving module 102 and a detecting module 104.
  • the receiving module 102 is configured to receive a face recognition request of the user.
  • the detection module 104 is used to detect fraud based on the bypass information of the user.
  • the bypass information here refers to types of information other than image information (described later).
  • of course, in practical applications, to improve the accuracy of fraud detection, the user's image information can also be combined, which is not limited in this specification.
  • Fig. 2 is a flowchart of a fraud detection method in the face recognition process provided by an embodiment of this specification.
  • the execution subject of the method may be a device with processing capability: a server, a system, or an apparatus; for example, it may be the face recognition system in FIG. 1.
  • the method may specifically include:
  • Step 202 Receive a face recognition request from the user.
  • the aforementioned face recognition request may be received after the user operates a button for triggering the face recognition process.
  • Step 204 In the process of processing the face recognition request, collect bypass information of the user.
  • the bypass information may include device information used by the user and user behavior information.
  • the device information may include at least one of the location sensor information of the device and the temperature sensor information of the device.
  • the user behavior information includes at least one of the user's operation history information before issuing the face recognition request, the time-consuming information of the user in the face recognition process, and the distance information of the user from the device.
  • for the various types of information above, two categories can be distinguished according to whether the information changes during the face recognition process: constant information (Iconst) and timing information (Iseq). The constant information refers to information that does not change during the face recognition process. It may include at least one of time-consuming information and operation history information. The time-consuming information may include collection time (Tc) and recognition time (Tz).
  • the collection time (Tc) may refer to the total time consumed in collecting the user's face information during the face recognition process.
  • the recognition time (Tz) may refer to the total time consumed in recognizing the collected face information through a face recognition algorithm. Both can be obtained using a timer.
  • the operation history information may also be referred to as remote procedure call (RPC) information, which may be information about a series of operations performed by the user before performing face recognition, and is usually used to reflect the user's behavior habits.
  • a series of operations here include, but are not limited to, clicks on advertisements, personal information query, password modification, etc.
  • the user's operation history information can be obtained by instrumenting the page with tracking points ("buried points"). Specifically, tracking points can be set in advance for the user's click or drag operations on a page; then, when the user performs a click or drag operation on the page, the corresponding tracking point is triggered to collect information about the operation performed by the user.
  • t0 is the start collection time
  • t1 is the end collection time
  • t2 is the start recognition time
  • t3 is the end recognition time.
  • acquisition time and recognition time are the same as those described above, and will not be repeated here.
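As an illustrative aside (not part of the patent text), the two elapsed-time values Tc and Tz can be derived from the four timestamps defined above:

```python
def elapsed_times(t0, t1, t2, t3):
    """Compute collection time Tc and recognition time Tz (e.g. in ms).

    t0/t1 bound the face-collection phase and t2/t3 the recognition
    phase, following the timestamp definitions in the text.
    """
    tc = t1 - t0  # total time spent collecting the user's face information
    tz = t3 - t2  # total time spent recognizing the collected information
    return tc, tz

# Collection ran from 0 ms to 1200 ms, recognition from 1200 ms to 1500 ms
tc, tz = elapsed_times(0, 1200, 1200, 1500)
print(tc, tz)  # -> 1200 300
```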
  • since RPC information relates to a series of click operations, it has a corresponding length.
  • RPC information of any length can be collected. Taking the collection of 10 clicks of the user before face recognition as an example, the collected RPC information can be expressed as:
  • RPC10 = [r0, r1, ..., r9]
  • the length of the collected RPC information may be 10, and r0 to r9 represent 10 consecutive click operations.
  • the above timing information may refer to information that changes over time during the face recognition process, and may include at least one of the position sensor information of the device (L), the temperature sensor information of the device (W), and the distance information of the user from the device (D).
  • the position sensor information here can be determined by acquiring global positioning system (GPS) data, and the temperature sensor information can be determined by acquiring temperature sensor data.
  • x, y and z are the GPS coordinates of the user equipment.
  • w is the ambient temperature of the device.
  • the distance information from the user to the device can be expressed as:
  • d is the average distance between the user's face and the device.
  • d can be determined by accessing the data of the distance sensor.
  • the collection can be performed at multiple times according to a predetermined time interval.
  • the timing information collected at a certain moment can be expressed as:
  • the timing information collected at multiple times may form a second data sequence. Taking a predetermined time interval of 100 ms and continuous collection for 1 s as an example, 10 pieces of timing information can be obtained, which form the following second data sequence:
  • based on the first data sequence and the second data sequence, a feature vector representing the bypass information can be formed.
  • the feature vector can finally be expressed as:
  • expressing the bypass information as a feature vector facilitates its subsequent input into the decision model.
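The assembly of the first data sequence (constant information) and the second data sequence (timing information) into a single feature vector can be sketched as follows. This is an illustrative sketch only; the exact field layout and ordering are assumptions, not the patent's implementation:

```python
def build_feature_vector(tc, tz, rpc, timing_samples):
    """Flatten bypass information into one feature vector.

    tc, tz         -- collection / recognition times (constant information)
    rpc            -- operation-history codes, e.g. the 10 clicks [r0..r9]
    timing_samples -- per-moment samples [x, y, z, w, d] collected at a
                      fixed interval (e.g. every 100 ms for 1 s)
    """
    first_sequence = [tc, tz] + list(rpc)                       # constant info
    second_sequence = [v for sample in timing_samples for v in sample]
    return first_sequence + second_sequence

rpc = list(range(10))                        # 10 click operations r0..r9
samples = [[1.0, 2.0, 3.0, 25.0, 0.4]] * 10  # 10 samples of [x, y, z, w, d]
vec = build_feature_vector(1200, 300, rpc, samples)
print(len(vec))  # 2 + 10 + 10 * 5 = 62
```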
  • bypass information mentioned above may not be fully available in some scenarios.
  • the temperature sensor information may not be obtained.
  • a predefined average value can be used for filling.
  • the predefined average value here can be calculated based on a large amount of temperature data. It is understandable that by performing the preprocessing operation of filling missing values on the bypass information that has not been obtained, the negative impact caused by the missing values can be reduced.
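Filling a missing sensor reading with a predefined average, as described above, might look like the following sketch; the None-for-missing convention is an assumption for illustration:

```python
def fill_missing(values, default_avg):
    """Replace missing readings (None) with a predefined average value.

    default_avg would be precomputed offline from a large amount of
    historical sensor data, as the text describes.
    """
    return [default_avg if v is None else v for v in values]

# Temperature sensor unavailable for the first two samples on this device
print(fill_missing([None, None, 22.5], 21.0))  # -> [21.0, 21.0, 22.5]
```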
  • the types of bypass information listed above in this specification are only examples. In actual applications, other bypass information can also be obtained, such as the network status of the device, which is not limited in this specification.
  • Step 206 Input the bypass information into at least one decision model to obtain the bypass decision result.
  • the result of the bypass decision is used to predict the probability of fraud in the face recognition.
  • the fraudulent behavior here includes, for example, the malicious behavior of using photos or videos to replace a living body for face recognition.
  • the aforementioned at least one decision model may include a decision tree and/or a support vector machine (Support Vector Machine, SVM).
  • deep learning networks can also be included.
  • the output result is the above decision result.
  • the output results of multiple decision models can be fused to obtain the aforementioned decision result.
  • the determination process of the aforementioned decision result may be: input bypass information into the decision tree to obtain the first decision result. Input the bypass information into the SVM to obtain the second decision result.
  • the first decision result and the second decision result here may be the probability values respectively output by the two decision models.
  • the first decision result and the second decision result are integrated (for example, average or weighted average) to obtain the above decision result.
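A minimal sketch of the two-model fusion described above, assuming each trained model is available as a callable that maps the bypass feature vector to a fraud probability in [0, 1] (the callables below are stand-ins, not a real decision-tree or SVM implementation):

```python
def bypass_decision(features, tree_predict, svm_predict, weights=(0.5, 0.5)):
    """Fuse decision-tree and SVM outputs into one bypass decision result.

    With equal weights this is a plain average; unequal weights give the
    weighted average mentioned in the text.
    """
    p1 = tree_predict(features)  # first decision result
    p2 = svm_predict(features)   # second decision result
    w1, w2 = weights
    return (w1 * p1 + w2 * p2) / (w1 + w2)

# Stub models standing in for a trained decision tree and SVM
result = bypass_decision([0.0], lambda x: 0.75, lambda x: 0.25)
print(result)  # -> 0.5
```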
  • this solution can also perform the following steps:
  • the image decision result which is determined based on the user's image information and other decision models.
  • the other decision model here can be, for example, a neural network.
  • the bypass decision result and the image decision result are merged to obtain the final decision result. Fusion here includes but is not limited to weighted average.
  • the fusion process can be: determining the respective weights of the bypass decision result and the image decision result. Based on the determined weight, a weighted average is performed on the bypass decision result and the image decision result to obtain the final decision result.
  • bypass decision result and image decision result can be fused based on the following formula.
  • P is the final decision result
  • Pp is the bypass decision result
  • Pt is the image decision result
  • a is the weight corresponding to the bypass decision result.
  • the respective weights of the bypass decision result and the image decision result can be preset.
  • the weight corresponding to the image decision result can be preset, for example, set to 1.
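The fusion formula itself is not reproduced in this excerpt, but given the variable definitions above (P, Pp, Pt, a) and the image weight preset to 1, a weighted average would look like the sketch below. The normalization by (1 + a) is an assumption made so that P stays a probability; it is not confirmed by the text:

```python
def final_decision(pp, pt, a):
    """Weighted average of bypass result Pp and image result Pt.

    The image decision result's weight is preset to 1, so only the
    bypass weight a needs to be chosen. Dividing by (1 + a) keeps the
    fused value in [0, 1] (an assumption, see above).
    """
    return (a * pp + 1.0 * pt) / (1.0 + a)

p = final_decision(pp=0.9, pt=0.5, a=1.0)  # equal weighting of both results
is_fraud = p > 0.6                         # compare against a threshold T
```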
  • it can be specifically determined by the following steps: sampling N values from a predetermined value range. Based on the preset test set, the accuracy test is performed on the N values to obtain the respective accuracy rates. Select the value with the highest accuracy from N values. The selected value is determined as the weight corresponding to the bypass decision result.
  • N is a positive integer.
  • sampling can be performed every 0.01 to obtain 50 candidate values of a.
  • an accuracy test is performed on a pre-collected test set, and finally the candidate value corresponding to the highest accuracy is selected as the final a value.
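The weight search can be sketched as a small grid search. The text states the 0.01 step and the 50 candidates; the concrete range [0, 0.5) and the (pp, pt, label) test-set format are assumptions for illustration:

```python
def search_weight(test_set, threshold, n=50, step=0.01):
    """Pick the bypass weight a with the highest accuracy on a test set.

    test_set  -- list of (pp, pt, is_fraud) triples
    threshold -- decision threshold T applied to the fused probability
    Candidates are a = 0, step, 2*step, ..., (n - 1)*step.
    """
    def accuracy(a):
        hits = sum(((a * pp + pt) / (1.0 + a) > threshold) == label
                   for pp, pt, label in test_set)
        return hits / len(test_set)

    candidates = [i * step for i in range(n)]
    return max(candidates, key=accuracy)  # first candidate with best accuracy

# Toy test set: the bypass probability pp separates the two classes
data = [(0.9, 0.5, True), (0.1, 0.5, False), (0.8, 0.5, True)]
best_a = search_weight(data, threshold=0.5)
```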
  • Step 208 Determine whether there is a fraudulent behavior in the face recognition at least based on the result of the bypass decision.
  • when the image decision result is also obtained, whether there is fraud in the face recognition can be determined based on the final decision result. Since the bypass information and the image information complement each other well, detecting fraud based on the final decision result can improve the performance and stability of the detection method.
  • the specific judging process may be: judging whether Pp or P is greater than the threshold T, and if so, it is determined that there is a fraud in the face recognition; otherwise, there is no fraud.
  • step 202 is performed by the receiving module 102 in FIG. 1, and the steps 204 to 208 are performed by the detection module 104.
  • the fraud detection method provided in the embodiments of this specification can detect fraud based on bypass information. Because bypass information is not easily affected by environmental factors, that is, it is relatively stable, the stability of the fraud detection method can be greatly improved.
  • an embodiment of this specification also provides a fraud detection device in the face recognition process.
  • the device may include:
  • the receiving unit 302 is configured to receive a face recognition request of the user.
  • the collecting unit 304 is configured to collect the bypass information of the user in the process of processing the face recognition request received by the receiving unit 302.
  • the bypass information may include device information used by the user and user behavior information.
  • the device information may include at least one of the location sensor information of the device and the temperature sensor information of the device.
  • the user behavior information may include at least one of operation history information of the user before issuing the face recognition request, time-consuming information of the user in the face recognition process, and distance information of the user from the device.
  • the collection unit 304 may be specifically used for:
  • the constant information in the bypass information is formed into a first data sequence, where the constant information may include at least one of time-consuming information and operation history information.
  • the timing information in the bypass information is collected at multiple times at predetermined time intervals, and the timing information collected at multiple times is formed into a second data sequence, where the timing information may include position sensor information, temperature sensor information, and distance information At least one of.
  • a feature vector for representing bypass information is formed.
  • the input unit 306 is configured to input the bypass information collected by the collecting unit 304 into at least one decision model to obtain the bypass decision result.
  • the result of the bypass decision is used to predict the probability of fraud in the face recognition.
  • the at least one decision model here may include a decision tree and/or a support vector machine SVM.
  • the input unit 306 may be specifically used to:
  • the first decision result and the second decision result are integrated to obtain the bypass decision result.
  • the determining unit 308 is configured to determine whether there is fraud in the face recognition at least based on the result of the bypass decision.
  • the determining unit 308 may be specifically used for:
  • bypass decision result and the image decision result are merged to obtain the final decision result.
  • the determining unit 308 may also be specifically used for:
  • a weighted average is performed on the bypass decision result and the image decision result to obtain the final decision result.
  • the determining unit 308 may also be specifically used for:
  • the accuracy test is performed on the N values to obtain the respective accuracy rates.
  • the selected value is determined as the weight corresponding to the bypass decision result.
  • the device may further include:
  • the preprocessing unit (not shown in the figure) is used for preprocessing the bypass information to remove noise of the bypass information.
  • the preprocessing may at least include filtering processing and/or missing value filling.
  • the input unit 306 may be specifically used for:
  • the device may further include:
  • the obtaining unit (not shown in the figure) is used to obtain the image decision result, which is determined based on the user's image information and other decision models.
  • the receiving unit 302 receives the face recognition request of the user.
  • the collection unit 304 collects the bypass information of the user in the process of processing the face recognition request.
  • the bypass information may include device information used by the user and user behavior information.
  • the input unit 306 inputs the bypass information into at least one decision model to obtain the bypass decision result.
  • the bypass decision result is used to predict the probability of fraud in the face recognition, and the determining unit 308 determines whether there is a fraud in the face recognition at least based on the result of the bypass decision. Therefore, the accuracy of fraud detection can be greatly improved.
  • the functions of the receiving unit 302 described above may be implemented by the receiving module 102 in FIG. 1, and the functions of the collecting unit 304, the input unit 306, and the determining unit 308 may be implemented by the detecting module 104 in FIG. 1.
  • the embodiment of this specification also provides a fraud detection device in the face recognition process.
  • the device may include: a memory 402, one or more processors 404, and one or more programs.
  • the one or more programs are stored in the memory 402 and are configured to be executed by one or more processors 404, and the following steps are implemented when the programs are executed by the processor 404:
  • the bypass information includes device information used by the user and user behavior information.
  • the result of the bypass decision is used to predict the probability of fraud in the face recognition.
  • the fraud detection device in the face recognition process provided in an embodiment of this specification can greatly improve the accuracy of fraud detection.
  • the steps of the method or algorithm described in combination with the disclosure of this specification may be implemented in a hardware manner, or may be implemented in a manner in which a processor executes software instructions.
  • Software instructions can be composed of corresponding software modules, which can be stored in RAM, flash memory, ROM, EPROM, EEPROM, registers, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium known in the art.
  • An exemplary storage medium is coupled to the processor, so that the processor can read information from the storage medium and can write information to the storage medium.
  • the storage medium may also be an integral part of the processor.
  • the processor and storage medium may be located in the ASIC.
  • the ASIC may be located in the server.
  • the processor and the storage medium may also exist as discrete components in the server.
  • the functions described in the present invention can be implemented by hardware, software, firmware or any combination thereof.
  • these functions can be stored in a computer-readable medium or transmitted as one or more instructions or codes on the computer-readable medium.
  • the computer-readable medium includes a computer storage medium and a communication medium, where the communication medium includes any medium that facilitates the transfer of a computer program from one place to another.
  • the storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.


Abstract

The embodiments of this specification provide a fraud detection method, apparatus, and device in the face recognition process. In the detection method, a user's face recognition request is received. In the process of processing the face recognition request, the user's bypass information is collected. The bypass information includes device information used by the user and user behavior information. The bypass information is input into at least one decision model to obtain a bypass decision result. The bypass decision result is used to predict the probability of fraud in this face recognition. At least based on the bypass decision result, it is determined whether fraud exists in this face recognition.

Description

Fraud detection method, apparatus, and device in the face recognition process

Technical Field

One or more embodiments of this specification relate to the field of computer technology, and in particular to a fraud detection method, apparatus, and device in the face recognition process.

Background

Face recognition is a landmark technology and application of the artificial-intelligence era. At present, in payment, travel, campus, and other scenarios, face recognition has greatly improved production efficiency and user experience. However, fraud sometimes occurs in which a non-living substitute is used in place of a real, live person for face recognition, for example using a photo or video instead of a real person; such fraud is also called a liveness attack. For current face recognition technology, fraud is the main security risk. Therefore, how to effectively detect fraud during face recognition has become a problem to be solved.

In conventional technology, image information (including but not limited to light) and deep learning algorithms are usually used to detect whether fraud is currently present. However, because image quality is easily affected by environmental factors such as illumination and background, and facial imaging is also easily affected by expression and pose, the performance of conventional detection methods is unstable. In addition, because deep learning algorithms rely heavily on training data, once the training data differs significantly from the real scene, the performance of the detection method also degrades.

Therefore, a more accurate and more stable fraud detection method needs to be provided.
Summary

One or more embodiments of this specification describe a method, apparatus, and device for detecting fraudulent behavior during facial recognition, which can improve the accuracy of fraud detection.

In a first aspect, a method for detecting fraudulent behavior during facial recognition is provided, comprising:

receiving a facial recognition request from a user;

collecting bypass information of the user while processing the facial recognition request, the bypass information comprising information about the device used by the user and user behavior information;

inputting the bypass information into at least one decision model to obtain a bypass decision result, the bypass decision result being used to predict the probability that fraudulent behavior is present in this facial recognition attempt; and

determining, based at least on the bypass decision result, whether fraudulent behavior is present in this facial recognition attempt.

In a second aspect, an apparatus for detecting fraudulent behavior during facial recognition is provided, comprising:

a receiving unit, configured to receive a facial recognition request from a user;

a collection unit, configured to collect bypass information of the user while the facial recognition request received by the receiving unit is being processed, the bypass information comprising information about the device used by the user and user behavior information;

an input unit, configured to input the bypass information collected by the collection unit into at least one decision model to obtain a bypass decision result, the bypass decision result being used to predict the probability that fraudulent behavior is present in this facial recognition attempt; and

a determination unit, configured to determine, based at least on the bypass decision result, whether fraudulent behavior is present in this facial recognition attempt.

In a third aspect, a device for detecting fraudulent behavior during facial recognition is provided, comprising:

a memory;

one or more processors; and

one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, and when executed by the processors implement the following steps:

receiving a facial recognition request from a user;

collecting bypass information of the user while processing the facial recognition request, the bypass information comprising information about the device used by the user and user behavior information;

inputting the bypass information into at least one decision model to obtain a bypass decision result, the bypass decision result being used to predict the probability that fraudulent behavior is present in this facial recognition attempt; and

determining, based at least on the bypass decision result, whether fraudulent behavior is present in this facial recognition attempt.

In the method, apparatus, and device for detecting fraudulent behavior during facial recognition provided by one or more embodiments of this specification, a facial recognition request from a user is received; the user's bypass information, including information about the device used by the user and user behavior information, is collected while the request is being processed; the bypass information is input into at least one decision model to obtain a bypass decision result, which predicts the probability that fraudulent behavior is present in this facial recognition attempt; and whether fraudulent behavior is present is determined based at least on the bypass decision result. That is, the solution provided by this specification can detect fraudulent behavior based on bypass information; since bypass information is not easily affected by environmental factors, i.e., it is relatively stable, the stability of the fraud detection method provided by this specification can be greatly improved.
Brief Description of the Drawings

To describe the technical solutions of the embodiments of this specification more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of this specification, and those of ordinary skill in the art can derive other drawings from them without creative effort.

Fig. 1 is a schematic diagram of the facial recognition system provided by this specification;

Fig. 2 is a flowchart of a method for detecting fraudulent behavior during facial recognition provided by an embodiment of this specification;

Fig. 3 is a schematic diagram of an apparatus for detecting fraudulent behavior during facial recognition provided by an embodiment of this specification;

Fig. 4 is a schematic diagram of a device for detecting fraudulent behavior during facial recognition provided by an embodiment of this specification.

Detailed Description

The solutions provided by this specification are described below with reference to the drawings.

Fig. 1 is a schematic diagram of the facial recognition system provided by this specification. As shown in Fig. 1, the facial recognition system may include a receiving module 102 and a detection module 104. The receiving module 102 is configured to receive a facial recognition request from a user. The detection module 104 is configured to detect fraudulent behavior based on the user's bypass information. Bypass information here refers to types of information other than image information (described later). Of course, in practical applications, the user's image information may also be combined to improve the accuracy of fraud detection; this specification does not limit this.

The detection process for the above fraudulent behavior is described in detail below with reference to the drawings.
Fig. 2 is a flowchart of a method for detecting fraudulent behavior during facial recognition provided by an embodiment of this specification. The method may be executed by a device with processing capability: a server, a system, or an apparatus, for example, the facial recognition system in Fig. 1. As shown in Fig. 2, the method may specifically include:

Step 202: receive a facial recognition request from a user.

In one example, the facial recognition request may be received after the user operates a button that triggers the facial recognition process.

Step 204: collect bypass information of the user while processing the facial recognition request.

The bypass information may include information about the device used by the user and user behavior information. The device information may include at least one of the device's position sensor information and the device's temperature sensor information. The user behavior information includes at least one of the user's operation history before issuing the facial recognition request, timing information of the facial recognition process, and the user's distance from the device.

It should be noted that each of the above types of information can be divided into two categories according to whether it changes during the facial recognition process: constant information (I_const) and time-series information (I_seq). The two kinds of bypass information are described in detail below.

The constant information refers to information that does not change during the facial recognition process. It may include at least one of the timing information and the operation history information. The timing information may include the capture time (T_c) and the recognition time (T_z). The capture time refers to the total time spent capturing the user's facial information during the facial recognition process. The recognition time refers to the total time the facial recognition algorithm spends recognizing the captured facial information. Both can be obtained with a timer.

The operation history information, also called remote procedure call (RPC) information, is information about the series of operations the user performed before facial recognition, and typically reflects the user's behavioral habits. This series of operations includes, but is not limited to, ad clicks, personal information queries, and password changes. In one implementation, the user's operation history can be obtained by instrumenting the page with tracking points. Specifically, tracking points can be set in the page in advance for operations such as clicks or drags; afterwards, when the user performs such an operation on the page, the corresponding tracking point is triggered to record information about the operation performed.
The representation of each type of constant information is described below.

The capture time and recognition time can respectively be expressed as:

T_c = t_1 - t_0

T_z = t_3 - t_2

where t_0 is the capture start time and t_1 the capture end time; t_2 is the recognition start time and t_3 the recognition end time. The definitions of capture time and recognition time are as described above and are not repeated here.

Since the RPC information relates to a series of click operations, it has a corresponding length. In theory, RPC information of any length can be collected. Taking the user's last 10 click operations before facial recognition as an example, the collected RPC information can be expressed as:

RPC_10 = [r_0, r_1, ..., r_9]

In this example, the length of the collected RPC information is 10, and r_0 through r_9 represent 10 consecutive click operations.

It should be noted that, after collection, the constant information can form the following first data sequence:

I_const = [T_c, T_z, RPC_10] = [T_c, T_z, r_0, r_1, ..., r_9]
The above describes the constant information in the bypass information; the time-series information is described below.

The time-series information refers to information that changes over time during the facial recognition process. It may include at least one of the device's position sensor information (L), the device's temperature sensor information (W), and the user's distance from the device (D). The position sensor information can be determined from Global Positioning System (GPS) data, and the temperature sensor information from temperature sensor readings.

The position sensor information and temperature sensor information can respectively be expressed as:

L = [x, y, z]

W = w

where x, y, and z are the GPS coordinates of the user's device, and w is the device's ambient temperature.

It should be noted that, based on the device's position sensor information and temperature sensor information, this specification can monitor whether the device is being used in an abnormal location or abnormal environment.

In addition, the user's distance from the device can be expressed as:

D = d

where d is the average distance between the user's face and the device. For devices with a distance sensor, d can be determined by reading the distance sensor's data.

Since the time-series information changes over time, it can be sampled at multiple instants at a predetermined time interval. The time-series information sampled at one instant can be expressed as:

I_seq = [x, y, z, w, d]

It should be noted that the time-series information sampled at multiple instants can form a second data sequence. Taking a predetermined interval of 100 ms and a total sampling duration of 1 s as an example, 10 time-series samples are obtained, which form the following second data sequence:

S_seq = [I_seq^(1), I_seq^(2), ..., I_seq^(10)]

It can be understood that the first data sequence and the second data sequence together form a feature vector representing the bypass information, which can finally be expressed as:

I = [I_seq, I_const]

Representing the bypass information as a feature vector makes it convenient to subsequently input it into the decision models.
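The assembly of the two data sequences into a single feature vector can be sketched as follows. This is an illustrative reconstruction, not the patent's actual implementation; the function names, the specific timings, and the sample values are all assumptions.

```python
import numpy as np

def build_constant_info(t0, t1, t2, t3, rpc_clicks):
    """First data sequence: I_const = [T_c, T_z, r_0, ..., r_9]."""
    T_c = t1 - t0  # capture time
    T_z = t3 - t2  # recognition time
    assert len(rpc_clicks) == 10, "this example uses the last 10 click operations"
    return np.array([T_c, T_z, *rpc_clicks], dtype=float)

def build_sequence_info(samples):
    """Second data sequence: flatten per-instant readings [x, y, z, w, d]
    sampled at a fixed interval (e.g., every 100 ms over 1 s)."""
    return np.array(samples, dtype=float).ravel()

def build_feature_vector(const_info, seq_info):
    """Final feature vector I = [I_seq, I_const]."""
    return np.concatenate([seq_info, const_info])

# Example: 10 identical sampled instants (gps_x, gps_y, gps_z, temp, distance)
# plus the constant block; all numbers are made up for illustration.
seq = build_sequence_info([(120.1, 30.2, 4.0, 26.5, 0.35)] * 10)
const = build_constant_info(0.0, 1.2, 1.2, 1.5, [3, 1, 4, 1, 5, 9, 2, 6, 5, 3])
features = build_feature_vector(const, seq)
```

With 10 samples of 5 readings plus 12 constant entries, the resulting vector has 62 dimensions.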
In addition, in some scenarios not all of the above kinds of bypass information may be obtainable. For example, when the device has no temperature sensor, the temperature sensor information cannot be obtained. Bypass information that cannot be obtained can be filled in with a predefined average; this predefined average can be computed from a large amount of temperature data. It can be understood that preprocessing the missing bypass information by filling in missing values reduces the negative impact that missing values would otherwise have.

It should be noted that, in practical applications, other preprocessing operations, such as filtering, may also be performed on the bypass information to remove noise; this specification does not elaborate on this further.
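The two preprocessing steps just mentioned, missing-value filling with predefined averages and noise filtering, can be sketched like this. The specific average values and the choice of a moving-average filter are assumptions for illustration; the patent does not specify them.

```python
import numpy as np

# Assumed predefined averages, e.g., computed offline from large datasets.
PREDEFINED_MEANS = {"temperature": 25.0, "distance": 0.4}

def fill_missing(value, key):
    """Replace an unavailable reading (represented as None) with its
    predefined average so the feature vector has no gaps."""
    return PREDEFINED_MEANS[key] if value is None else value

def moving_average(series, window=3):
    """Denoise a 1-D time series with a simple centered moving average
    (one possible filtering operation; others would also work)."""
    kernel = np.ones(window) / window
    return np.convolve(series, kernel, mode="same")

temps = [fill_missing(v, "temperature") for v in [26.0, None, 27.0, None]]
smoothed = moving_average(np.array([0.3, 0.9, 0.3, 0.9, 0.3]), window=3)
```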
It should also be noted that the kinds of bypass information listed above are merely illustrative; in practical applications, other bypass information, such as the device's network conditions, may also be obtained, and this specification does not limit this.

Step 206: input the bypass information into at least one decision model to obtain a bypass decision result.

The bypass decision result is used to predict the probability that fraudulent behavior is present in this facial recognition attempt. Fraudulent behavior here includes, for example, the malicious use of a photo or video in place of a live person for facial recognition.

The at least one decision model may include a decision tree and/or a support vector machine (SVM), and may also include a deep learning network or the like. The decision tree and/or SVM can be trained on a large amount of labeled bypass information.

It can be understood that when there is a single decision model, e.g., only a decision tree (or only an SVM), the output obtained after inputting the bypass information into the decision tree (or SVM) is the bypass decision result. When there are multiple decision models, their outputs can be fused to obtain the bypass decision result.

Taking two decision models, a decision tree and an SVM, as an example, the bypass decision result can be determined as follows: input the bypass information into the decision tree to obtain a first decision result; input the bypass information into the SVM to obtain a second decision result. The first and second decision results may be probability values output by the two models. The first and second decision results are then combined (e.g., by averaging or weighted averaging) to obtain the bypass decision result.
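A minimal sketch of this two-model setup, using scikit-learn stand-ins for the decision tree and SVM. The training data here is random and purely illustrative; the real models would be trained on labeled bypass information, and plain averaging is only one of the combination options the text mentions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 62))    # 62-dim bypass feature vectors (synthetic)
y_train = rng.integers(0, 2, size=200)  # 1 = fraud, 0 = genuine (synthetic labels)

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)
svm = SVC(probability=True, random_state=0).fit(X_train, y_train)

def bypass_decision(x):
    """Average the first decision result (tree) and the second decision
    result (SVM) into the bypass decision result."""
    p1 = tree.predict_proba(x.reshape(1, -1))[0, 1]
    p2 = svm.predict_proba(x.reshape(1, -1))[0, 1]
    return (p1 + p2) / 2

p = bypass_decision(X_train[0])
```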
It should be noted that, to further improve the accuracy of fraud detection, this solution may also perform the following steps:

Obtain an image decision result, which is determined based on the user's image information and another decision model, such as a neural network. Fuse the bypass decision result and the image decision result to obtain a final decision result. The fusion includes, but is not limited to, weighted averaging.

Taking weighted averaging as an example, the fusion process can be: determine the weights corresponding respectively to the bypass decision result and the image decision result; then, based on the determined weights, compute a weighted average of the bypass decision result and the image decision result to obtain the final decision result.

In one example, the bypass decision result and the image decision result can be fused based on the following formula:

P = (p_t + a * p_p) / (1 + a)

where P is the final decision result, p_p is the bypass decision result, p_t is the image decision result, and a is the weight corresponding to the bypass decision result.

In one implementation, the weights corresponding respectively to the bypass decision result and the image decision result can be set in advance.

In another implementation, the weight of the image decision result can be preset, e.g., to 1, while the weight of the bypass decision result is determined as follows: sample N values from a predetermined value range; test the accuracy of the N values on a preset test set to obtain the accuracy corresponding to each; select the value with the highest accuracy from the N values; and determine the selected value as the weight corresponding to the bypass decision result, where N is a positive integer.

For example, candidate values of a can be sampled from the range [0, 0.5] at intervals of 0.01, yielding about 50 candidates. Each candidate is tested for accuracy on a pre-collected test set, and the candidate with the highest accuracy is selected as the final value of a.
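The weight-selection procedure above can be sketched as a small grid search. The decision-result arrays and labels below are synthetic stand-ins; in practice p_t and p_p would come from the image model and the bypass models on a real test set.

```python
import numpy as np

def fuse(p_t, p_p, a):
    """Final decision result P = (p_t + a * p_p) / (1 + a)."""
    return (p_t + a * p_p) / (1.0 + a)

def select_weight(p_t, p_p, labels, threshold=0.5):
    """Sample candidate a values from [0, 0.5] at 0.01 intervals, score
    each by fused accuracy on the test set, and keep the best one."""
    candidates = np.arange(0.0, 0.5 + 1e-9, 0.01)
    accuracies = [((fuse(p_t, p_p, a) > threshold).astype(int) == labels).mean()
                  for a in candidates]
    return candidates[int(np.argmax(accuracies))]

p_t = np.array([0.9, 0.2, 0.8, 0.1])   # image decision results (synthetic)
p_p = np.array([0.7, 0.4, 0.9, 0.3])   # bypass decision results (synthetic)
labels = np.array([1, 0, 1, 0])        # 1 = fraud
best_a = select_weight(p_t, p_p, labels)
```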
Step 208: determine, based at least on the bypass decision result, whether fraudulent behavior is present in this facial recognition attempt.

It can be understood that when an image decision result is also obtained, whether fraudulent behavior is present in this facial recognition attempt can be determined based on the final decision result. Since the bypass information and the image information complement each other well, detecting fraud based on the final decision result improves the performance and stability of the detection method.

The specific judgment process can be: determine whether p_p or P is greater than a threshold T; if so, fraudulent behavior is present in this facial recognition attempt; otherwise, it is not.
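The threshold judgment above is a one-liner; the value of T is not specified in the text, so the 0.5 below is an assumed placeholder.

```python
def is_fraud(p, threshold=0.5):
    """Flag the attempt as fraudulent when the bypass decision result p_p
    (or the final decision result P) exceeds the threshold T.
    threshold=0.5 is illustrative, not a value from the source."""
    return p > threshold

flag = is_fraud(0.73)
```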
It should be noted that step 202 above is performed by the receiving module 102 in Fig. 1, and steps 204 to 208 are performed by the detection module 104.

In summary, the fraud detection method provided by the embodiments of this specification can detect fraudulent behavior based on bypass information. Since bypass information is not easily affected by environmental factors, i.e., it is relatively stable, the stability of the fraud detection method can be greatly improved.
Corresponding to the above method for detecting fraudulent behavior during facial recognition, an embodiment of this specification further provides an apparatus for detecting fraudulent behavior during facial recognition. As shown in Fig. 3, the apparatus may include:

a receiving unit 302, configured to receive a facial recognition request from a user;

a collection unit 304, configured to collect bypass information of the user while the facial recognition request received by the receiving unit 302 is being processed, where the bypass information may include information about the device used by the user and user behavior information.

The device information may include at least one of the device's position sensor information and the device's temperature sensor information. The user behavior information may include at least one of the user's operation history before issuing the facial recognition request, timing information of the facial recognition process, and the user's distance from the device.

The collection unit 304 may specifically be configured to:

form the constant information in the bypass information into a first data sequence, where the constant information may include at least one of the timing information and the operation history information;

sample the time-series information in the bypass information at multiple instants at a predetermined time interval and form the time-series information sampled at the multiple instants into a second data sequence, where the time-series information may include at least one of the position sensor information, the temperature sensor information, and the distance information; and

form, based on the first data sequence and the second data sequence, a feature vector representing the bypass information.

The apparatus further includes an input unit 306, configured to input the bypass information collected by the collection unit 304 into at least one decision model to obtain a bypass decision result, the bypass decision result being used to predict the probability that fraudulent behavior is present in this facial recognition attempt.

The at least one decision model may include a decision tree and/or a support vector machine (SVM).

When the at least one decision model includes a decision tree and an SVM, the input unit 306 may specifically be configured to:

input the bypass information into the decision tree to obtain a first decision result;

input the bypass information into the SVM to obtain a second decision result; and

combine the first decision result and the second decision result to obtain the bypass decision result.

The apparatus further includes a determination unit 308, configured to determine, based at least on the bypass decision result, whether fraudulent behavior is present in this facial recognition attempt.

The determination unit 308 may specifically be configured to:

fuse the bypass decision result and the image decision result to obtain a final decision result; and

determine, based on the final decision result, whether fraudulent behavior is present in this facial recognition attempt.

The determination unit 308 may further specifically be configured to:

determine the weights corresponding respectively to the bypass decision result and the image decision result; and

compute, based on the determined weights, a weighted average of the bypass decision result and the image decision result to obtain the final decision result.

The determination unit 308 may further specifically be configured to:

sample N values from a predetermined value range;

test the accuracy of the N values on a preset test set to obtain the accuracy corresponding to each;

select the value with the highest accuracy from the N values; and

determine the selected value as the weight corresponding to the bypass decision result.

Optionally, the apparatus may further include:

a preprocessing unit (not shown), configured to preprocess the bypass information to remove noise from the bypass information.

The preprocessing may include at least filtering and/or missing-value filling.

The input unit 306 may then specifically be configured to:

input the preprocessed bypass information into at least one decision model to obtain the bypass decision result.

Optionally, the apparatus may further include:

an acquisition unit (not shown), configured to obtain an image decision result, the image decision result being determined based on the user's image information and another decision model.

The functions of the functional modules of the apparatus in the above embodiments can be implemented through the steps of the above method embodiments; therefore, the specific working process of the apparatus provided by an embodiment of this specification is not repeated here.

In the apparatus for detecting fraudulent behavior during facial recognition provided by an embodiment of this specification, the receiving unit 302 receives a facial recognition request from a user; the collection unit 304 collects the user's bypass information, which may include information about the device used by the user and user behavior information, while the facial recognition request is being processed; the input unit 306 inputs the bypass information into at least one decision model to obtain a bypass decision result, which is used to predict the probability that fraudulent behavior is present in this facial recognition attempt; and the determination unit 308 determines, based at least on the bypass decision result, whether fraudulent behavior is present. This can greatly improve the accuracy of fraud detection.

It should be noted that the function of the receiving unit 302 can be implemented by the receiving module 102 in Fig. 1, and the functions of the collection unit 304, the input unit 306, and the determination unit 308 can be implemented by the detection module 104 in Fig. 1.
Corresponding to the above method for detecting fraudulent behavior during facial recognition, an embodiment of this specification further provides a device for detecting fraudulent behavior during facial recognition. As shown in Fig. 4, the device may include a memory 402, one or more processors 404, and one or more programs, where the one or more programs are stored in the memory 402 and configured to be executed by the one or more processors 404, and when executed by the processors 404 implement the following steps:

receiving a facial recognition request from a user;

collecting bypass information of the user while processing the facial recognition request, the bypass information including information about the device used by the user and user behavior information;

inputting the bypass information into at least one decision model to obtain a bypass decision result, the bypass decision result being used to predict the probability that fraudulent behavior is present in this facial recognition attempt; and

determining, based at least on the bypass decision result, whether fraudulent behavior is present in this facial recognition attempt.

The device for detecting fraudulent behavior during facial recognition provided by an embodiment of this specification can greatly improve the accuracy of fraud detection.

The embodiments in this specification are described in a progressive manner; for identical or similar parts of the embodiments, refer to one another, and each embodiment focuses on its differences from the others. In particular, since the device embodiment is basically similar to the method embodiment, it is described briefly; for related parts, refer to the description of the method embodiment.

The steps of the methods or algorithms described in this disclosure can be implemented in hardware or by a processor executing software instructions. The software instructions can consist of corresponding software modules, which can be stored in RAM, flash memory, ROM, EPROM, EEPROM, registers, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium well known in the art. An exemplary storage medium is coupled to the processor so that the processor can read information from, and write information to, the storage medium. Of course, the storage medium can also be a component of the processor. The processor and the storage medium can reside in an ASIC, and the ASIC can reside in a server. Of course, the processor and the storage medium can also exist in the server as discrete components.

Those skilled in the art should appreciate that, in one or more of the above examples, the functions described in the present invention can be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions can be stored in, or transmitted as one or more instructions or code on, a computer-readable medium. Computer-readable media include computer storage media and communication media, where communication media include any medium that facilitates the transfer of a computer program from one place to another. A storage medium can be any available medium accessible by a general-purpose or special-purpose computer.

Specific embodiments of this specification have been described above. Other embodiments fall within the scope of the appended claims. In some cases, the actions or steps recited in the claims can be performed in an order different from that in the embodiments and still achieve the desired results. In addition, the processes depicted in the drawings do not necessarily require the particular order shown, or a sequential order, to achieve the desired results. In some implementations, multitasking and parallel processing are also possible or may be advantageous.

The specific implementations described above further explain the objectives, technical solutions, and beneficial effects of this specification in detail. It should be understood that the above are merely specific implementations of this specification and are not intended to limit its scope of protection; any modification, equivalent replacement, improvement, etc. made on the basis of the technical solutions of this specification shall fall within its scope of protection.

Claims (19)

  1. A method for detecting fraudulent behavior during facial recognition, comprising:
    receiving a facial recognition request from a user;
    collecting bypass information of the user while processing the facial recognition request, the bypass information comprising information about the device used by the user and user behavior information;
    inputting the bypass information into at least one decision model to obtain a bypass decision result, the bypass decision result being used to predict the probability that fraudulent behavior is present in this facial recognition attempt; and
    determining, based at least on the bypass decision result, whether fraudulent behavior is present in this facial recognition attempt.
  2. The method according to claim 1, wherein the device information comprises at least one of position sensor information of the device and temperature sensor information of the device; and the user behavior information comprises at least one of operation history information of the user before issuing the facial recognition request, timing information of the user during the facial recognition process, and distance information between the user and the device.
  3. The method according to claim 2, wherein collecting the bypass information of the user comprises:
    forming constant information in the bypass information into a first data sequence, wherein the constant information comprises at least one of the timing information and the operation history information;
    sampling time-series information in the bypass information at multiple instants at a predetermined time interval and forming the time-series information sampled at the multiple instants into a second data sequence, wherein the time-series information comprises at least one of the position sensor information, the temperature sensor information, and the distance information; and
    forming, based on the first data sequence and the second data sequence, a feature vector representing the bypass information.
  4. The method according to claim 1, wherein the at least one decision model comprises a decision tree or a support vector machine (SVM).
  5. The method according to claim 1, wherein the at least one decision model comprises a decision tree and a support vector machine (SVM), and inputting the bypass information into the at least one decision model to obtain the bypass decision result comprises:
    inputting the bypass information into the decision tree to obtain a first decision result;
    inputting the bypass information into the SVM to obtain a second decision result; and
    combining the first decision result and the second decision result to obtain the bypass decision result.
  6. The method according to claim 1, further comprising, before inputting the bypass information into the at least one decision model to obtain the bypass decision result:
    preprocessing the bypass information to remove noise from the bypass information;
    the preprocessing comprising at least filtering and/or missing-value filling;
    wherein inputting the bypass information into the at least one decision model to obtain the bypass decision result comprises:
    inputting the preprocessed bypass information into at least one decision model to obtain the bypass decision result.
  7. The method according to claim 1, further comprising:
    obtaining an image decision result, the image decision result being determined based on image information of the user and another decision model;
    wherein determining, based at least on the bypass decision result, whether fraudulent behavior is present in this facial recognition attempt comprises:
    fusing the bypass decision result and the image decision result to obtain a final decision result; and
    determining, based on the final decision result, whether fraudulent behavior is present in this facial recognition attempt.
  8. The method according to claim 7, wherein fusing the bypass decision result and the image decision result to obtain the final decision result comprises:
    determining weights corresponding respectively to the bypass decision result and the image decision result; and
    computing, based on the determined weights, a weighted average of the bypass decision result and the image decision result to obtain the final decision result.
  9. The method according to claim 8, wherein determining the weight corresponding to the bypass decision result comprises:
    sampling N values from a predetermined value range;
    testing the accuracy of the N values on a preset test set to obtain an accuracy corresponding to each value;
    selecting, from the N values, the value with the highest accuracy; and
    determining the selected value as the weight corresponding to the bypass decision result.
  10. An apparatus for detecting fraudulent behavior during facial recognition, comprising:
    a receiving unit, configured to receive a facial recognition request from a user;
    a collection unit, configured to collect bypass information of the user while the facial recognition request received by the receiving unit is being processed, the bypass information comprising information about the device used by the user and user behavior information;
    an input unit, configured to input the bypass information collected by the collection unit into at least one decision model to obtain a bypass decision result, the bypass decision result being used to predict the probability that fraudulent behavior is present in this facial recognition attempt; and
    a determination unit, configured to determine, based at least on the bypass decision result, whether fraudulent behavior is present in this facial recognition attempt.
  11. The apparatus according to claim 10, wherein the device information comprises at least one of position sensor information of the device and temperature sensor information of the device; and the user behavior information comprises at least one of operation history information of the user before issuing the facial recognition request, timing information of the user during the facial recognition process, and distance information between the user and the device.
  12. The apparatus according to claim 11, wherein the collection unit is specifically configured to:
    form constant information in the bypass information into a first data sequence, wherein the constant information comprises at least one of the timing information and the operation history information;
    sample time-series information in the bypass information at multiple instants at a predetermined time interval and form the time-series information sampled at the multiple instants into a second data sequence, wherein the time-series information comprises at least one of the position sensor information, the temperature sensor information, and the distance information; and
    form, based on the first data sequence and the second data sequence, a feature vector representing the bypass information.
  13. The apparatus according to claim 10, wherein the at least one decision model comprises a decision tree or a support vector machine (SVM).
  14. The apparatus according to claim 10, wherein the at least one decision model comprises a decision tree and a support vector machine (SVM); and
    the input unit is specifically configured to:
    input the bypass information into the decision tree to obtain a first decision result;
    input the bypass information into the SVM to obtain a second decision result; and
    combine the first decision result and the second decision result to obtain the bypass decision result.
  15. The apparatus according to claim 10, further comprising:
    a preprocessing unit, configured to preprocess the bypass information to remove noise from the bypass information;
    the preprocessing comprising at least filtering and/or missing-value filling;
    wherein the input unit is specifically configured to:
    input the preprocessed bypass information into at least one decision model to obtain the bypass decision result.
  16. The apparatus according to claim 10, further comprising:
    an acquisition unit, configured to obtain an image decision result, the image decision result being determined based on image information of the user and another decision model;
    wherein the determination unit is specifically configured to:
    fuse the bypass decision result and the image decision result to obtain a final decision result; and
    determine, based on the final decision result, whether fraudulent behavior is present in this facial recognition attempt.
  17. The apparatus according to claim 16, wherein the determination unit is further configured to:
    determine weights corresponding respectively to the bypass decision result and the image decision result; and
    compute, based on the determined weights, a weighted average of the bypass decision result and the image decision result to obtain the final decision result.
  18. The apparatus according to claim 17, wherein the determination unit is further configured to:
    sample N values from a predetermined value range;
    test the accuracy of the N values on a preset test set to obtain an accuracy corresponding to each value;
    select, from the N values, the value with the highest accuracy; and
    determine the selected value as the weight corresponding to the bypass decision result.
  19. A device for detecting fraudulent behavior during facial recognition, comprising:
    a memory;
    one or more processors; and
    one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, and when executed by the processors implement the following steps:
    receiving a facial recognition request from a user;
    collecting bypass information of the user while processing the facial recognition request, the bypass information comprising information about the device used by the user and user behavior information;
    inputting the bypass information into at least one decision model to obtain a bypass decision result, the bypass decision result being used to predict the probability that fraudulent behavior is present in this facial recognition attempt; and
    determining, based at least on the bypass decision result, whether fraudulent behavior is present in this facial recognition attempt.
PCT/CN2020/072048 2019-08-06 2020-01-14 人脸识别过程中的欺诈行为检测方法、装置及设备 WO2021022795A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US16/804,635 US10936715B1 (en) 2019-08-06 2020-02-28 Detecting fraudulent facial recognition
US17/188,881 US11182475B2 (en) 2019-08-06 2021-03-01 Detecting fraudulent facial recognition

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910722542.2A CN110532895B (zh) 2019-08-06 2019-08-06 人脸识别过程中的欺诈行为检测方法、装置及设备
CN201910722542.2 2019-08-06

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/804,635 Continuation US10936715B1 (en) 2019-08-06 2020-02-28 Detecting fraudulent facial recognition

Publications (1)

Publication Number Publication Date
WO2021022795A1 true WO2021022795A1 (zh) 2021-02-11

Family

ID=68662143

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/072048 WO2021022795A1 (zh) 2019-08-06 2020-01-14 人脸识别过程中的欺诈行为检测方法、装置及设备

Country Status (3)

Country Link
US (2) US10936715B1 (zh)
CN (1) CN110532895B (zh)
WO (1) WO2021022795A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110532895B (zh) * 2019-08-06 2020-10-23 创新先进技术有限公司 人脸识别过程中的欺诈行为检测方法、装置及设备
CN111325185B (zh) * 2020-03-20 2023-06-23 上海看看智能科技有限公司 人脸防欺诈方法及系统

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160125404A1 (en) * 2014-10-31 2016-05-05 Xerox Corporation Face recognition business model and method for identifying perpetrators of atm fraud
CN107832669A (zh) * 2017-10-11 2018-03-23 广东欧珀移动通信有限公司 人脸检测方法及相关产品
CN108376239A (zh) * 2018-01-25 2018-08-07 努比亚技术有限公司 一种人脸识别方法、移动终端及存储介质
CN109446981A (zh) * 2018-10-25 2019-03-08 腾讯科技(深圳)有限公司 一种脸部活体检测、身份认证方法及装置
CN109784015A (zh) * 2018-12-27 2019-05-21 腾讯科技(深圳)有限公司 一种身份鉴别方法及装置
CN110532895A (zh) * 2019-08-06 2019-12-03 阿里巴巴集团控股有限公司 人脸识别过程中的欺诈行为检测方法、装置及设备

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110063108A1 (en) * 2009-09-16 2011-03-17 Seiko Epson Corporation Store Surveillance System, Alarm Device, Control Method for a Store Surveillance System, and a Program
CN103530543B (zh) * 2013-10-30 2017-11-14 无锡赛思汇智科技有限公司 一种基于行为特征的用户识别方法及系统
US10803160B2 (en) * 2014-08-28 2020-10-13 Facetec, Inc. Method to verify and identify blockchain with user question data
CN104657610B (zh) * 2015-02-13 2017-11-17 南京邮电大学 一种信息物理融合系统时序逻辑鲁棒性评估方法
CN105096420A (zh) * 2015-07-31 2015-11-25 北京旷视科技有限公司 门禁系统以及用于其的数据处理方法
CN105138981A (zh) * 2015-08-20 2015-12-09 北京旷视科技有限公司 活体检测系统和方法
CN105512632B (zh) * 2015-12-09 2019-04-05 北京旷视科技有限公司 活体检测方法及装置
US10210518B2 (en) * 2016-04-13 2019-02-19 Abdullah Abdulaziz I. Alnajem Risk-link authentication for optimizing decisions of multi-factor authentications
US9774824B1 (en) * 2016-07-18 2017-09-26 Cisco Technology, Inc. System, method, and logic for managing virtual conferences involving multiple endpoints
US10769635B2 (en) * 2016-08-05 2020-09-08 Nok Nok Labs, Inc. Authentication techniques including speech and/or lip movement analysis
CN108875497B (zh) * 2017-10-27 2021-04-27 北京旷视科技有限公司 活体检测的方法、装置及计算机存储介质
CN108023876B (zh) * 2017-11-20 2021-07-30 西安电子科技大学 基于可持续性集成学习的入侵检测方法及入侵检测系统
CN107977559A (zh) * 2017-11-22 2018-05-01 杨晓艳 一种身份认证方法、装置、设备和计算机可读存储介质
CN109271915B (zh) * 2018-09-07 2021-10-08 北京市商汤科技开发有限公司 防伪检测方法和装置、电子设备、存储介质
CN109272398B (zh) * 2018-09-11 2020-05-08 北京芯盾时代科技有限公司 一种操作请求处理系统
US10860874B2 (en) * 2018-12-21 2020-12-08 Oath Inc. Biometric based self-sovereign information management

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160125404A1 (en) * 2014-10-31 2016-05-05 Xerox Corporation Face recognition business model and method for identifying perpetrators of atm fraud
CN107832669A (zh) * 2017-10-11 2018-03-23 广东欧珀移动通信有限公司 人脸检测方法及相关产品
CN108376239A (zh) * 2018-01-25 2018-08-07 努比亚技术有限公司 一种人脸识别方法、移动终端及存储介质
CN109446981A (zh) * 2018-10-25 2019-03-08 腾讯科技(深圳)有限公司 一种脸部活体检测、身份认证方法及装置
CN109784015A (zh) * 2018-12-27 2019-05-21 腾讯科技(深圳)有限公司 一种身份鉴别方法及装置
CN110532895A (zh) * 2019-08-06 2019-12-03 阿里巴巴集团控股有限公司 人脸识别过程中的欺诈行为检测方法、装置及设备

Also Published As

Publication number Publication date
CN110532895A (zh) 2019-12-03
US20210042406A1 (en) 2021-02-11
CN110532895B (zh) 2020-10-23
US20210182384A1 (en) 2021-06-17
US11182475B2 (en) 2021-11-23
US10936715B1 (en) 2021-03-02

Similar Documents

Publication Publication Date Title
CN108009528B (zh) 基于Triplet Loss的人脸认证方法、装置、计算机设备和存储介质
CN109284733B (zh) 一种基于yolo和多任务卷积神经网络的导购消极行为监控方法
US9202121B2 (en) Liveness detection
WO2018121690A1 (zh) 对象属性检测、神经网络训练、区域检测方法和装置
CN111062239A (zh) 人体目标检测方法、装置、计算机设备及存储介质
CN112364827B (zh) 人脸识别方法、装置、计算机设备和存储介质
WO2021022795A1 (zh) 人脸识别过程中的欺诈行为检测方法、装置及设备
CN111027481A (zh) 基于人体关键点检测的行为分析方法及装置
CN105224947A (zh) 分类器训练方法和系统
JP2022540101A (ja) ポジショニング方法及び装置、電子機器、コンピュータ読み取り可能な記憶媒体
WO2022048572A1 (zh) 目标识别方法、装置和电子设备
JP2019192082A (ja) 学習用サーバ、不足学習用画像収集支援システム、及び不足学習用画像推定プログラム
CN111476160A (zh) 损失函数优化方法、模型训练方法、目标检测方法及介质
CN113516144A (zh) 目标检测方法及装置、计算设备
CN112668438A (zh) 红外视频时序行为定位方法、装置、设备及存储介质
CN112183356A (zh) 驾驶行为检测方法、设备及可读存储介质
CN111881740A (zh) 人脸识别方法、装置、电子设备及介质
CN113780145A (zh) 精子形态检测方法、装置、计算机设备和存储介质
CN113743455A (zh) 目标检索方法、装置、电子设备及存储介质
CN117095436A (zh) 企业员工信息智能管理系统及其方法
CN112508135B (zh) 模型训练方法、行人属性预测方法、装置及设备
CN106446837B (zh) 一种基于运动历史图像的挥手检测方法
RU2694140C1 (ru) Способ идентификации человека в режиме одновременной работы группы видеокамер
JP2018142137A (ja) 情報処理装置、情報処理方法、及びプログラム
CN110472680B (zh) 目标分类方法、装置和计算机可读存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20849237

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20849237

Country of ref document: EP

Kind code of ref document: A1