WO2019113776A1 - Face and voiceprint-based payment authentication method, and terminal - Google Patents


Info

Publication number
WO2019113776A1
WO2019113776A1 · PCT/CN2017/115617 · CN2017115617W
Authority
WO
WIPO (PCT)
Prior art keywords
information
face
voiceprint
payment authentication
feature parameter
Prior art date
Application number
PCT/CN2017/115617
Other languages
French (fr)
Chinese (zh)
Inventor
张炽成
唐超旬
Original Assignee
福建联迪商用设备有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 福建联迪商用设备有限公司 filed Critical 福建联迪商用设备有限公司
Priority to PCT/CN2017/115617 priority Critical patent/WO2019113776A1/en
Priority to CN201780002078.9A priority patent/CN108124488A/en
Publication of WO2019113776A1 publication Critical patent/WO2019113776A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00Payment architectures, schemes or protocols
    • G06Q20/38Payment protocols; Details thereof
    • G06Q20/40Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
    • G06Q20/401Transaction verification
    • G06Q20/4014Identity check for transactions
    • G06Q20/40145Biometric identity checks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00Payment architectures, schemes or protocols
    • G06Q20/38Payment protocols; Details thereof
    • G06Q20/382Payment protocols; Details thereof insuring higher security of transaction
    • G06Q20/3823Payment protocols; Details thereof insuring higher security of transaction combining multiple encryption tools for a transaction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00Speaker identification or verification techniques
    • G10L17/22Interactive procedures; Man-machine interfaces

Definitions

  • the present invention relates to the field of electronic payment technologies, and in particular, to a payment authentication method and a terminal based on a face and a voiceprint.
  • Current payment authentication methods mainly perform payment authentication through fingerprint or face recognition, which has the following disadvantages. Biometrics are easily stolen: fingerprint information is easily lifted when the trader touches objects and carries no liveness information, while facial image information is inherently public and is easily captured by video or photography; and stolen biometrics are easy to exploit.
  • Stolen fingerprint and facial information can be used to attack payment devices through fingerprint-mold fabrication and image synthesis respectively, thereby achieving fraudulent payment.
  • The technical problem to be solved by the present invention is as follows: the present invention provides a face and voiceprint based payment authentication method and terminal, which improve the security of payment authentication.
  • the present invention provides a payment authentication method based on a face and a voiceprint, comprising the following steps:
  • S1: determining, according to the face information, whether the face image information contained in the face information includes high-frequency information greater than a preset first threshold; and determining, according to the voiceprint information, whether the voiceprint information includes a high-frequency component greater than a preset second threshold;
  • the present invention also provides a face and voiceprint based payment authentication terminal comprising a memory, a processor, and a computer program stored on the memory and operable on the processor, the processor implementing the program to implement the following step:
  • S1: determining, according to the face information, whether the face image information contained in the face information includes high-frequency information greater than a preset first threshold; and determining, according to the voiceprint information, whether the voiceprint information includes a high-frequency component greater than a preset second threshold;
  • the invention provides a payment authentication method and a terminal based on a face and a voiceprint.
  • By determining whether there is high-frequency information greater than a preset threshold in the face image information, authentication attacks using computer-synthesized image information can be prevented (in an image synthesized by a computer, a large number of abrupt spatial transitions occur at the edges of the face, eyes, and mouth, which correspond to a large amount of high-frequency information in the frequency domain); at the same time, the voiceprint is checked.
  • The above method adds high-frequency detection to face and voiceprint recognition to prevent face or recording synthesis attacks, and combines face recognition with voiceprints, which are relatively difficult to disguise, for payment authentication; this effectively prevents impersonation attacks and makes payment more secure.
  • FIG. 1 is a schematic diagram showing main steps of a face and voiceprint based payment authentication method according to an embodiment of the present invention
  • FIG. 2 is a schematic structural diagram of a face authentication and voiceprint based payment authentication terminal according to an embodiment of the present invention
  • the present invention provides a payment authentication method based on face and voiceprint, which includes the following steps:
  • S1: determining, according to the face information, whether the face image information contained in the face information includes high-frequency information greater than a preset first threshold; and determining, according to the voiceprint information, whether the voiceprint information includes a high-frequency component greater than a preset second threshold;
  • The present invention provides a payment authentication method based on a face and a voiceprint. By determining whether there is high-frequency information greater than a preset threshold in the face image information, authentication attacks using computer-synthesized image information can be prevented (in an image synthesized by a computer, a large number of abrupt spatial transitions occur at the edges of the face, eyes, and mouth, which correspond to a large amount of high-frequency information in the frequency domain). By determining whether there is a high-frequency component greater than a preset second threshold in the voiceprint information, payment authentication using spliced recording information can be prevented (spliced recordings contain high-frequency components at the splice points). The method thus adds high-frequency detection to face and voiceprint recognition to prevent face or recording synthesis attacks, and combines face recognition with voiceprints, which are relatively difficult to disguise, for payment authentication; this effectively prevents impersonation attacks and makes payment more secure.
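As an illustration of this frequency-domain check, the sketch below is an assumption about how such a detector could be built (the patent fixes neither the transform nor the threshold value): it measures the fraction of 2-D FFT energy above a radial cutoff and flags images whose high-frequency share exceeds the preset first threshold.

```python
import numpy as np

def high_freq_energy_ratio(image: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy above `cutoff` (a fraction of Nyquist).

    Computer-synthesized faces tend to show abrupt spatial transitions at
    face/eye/mouth edges, i.e. unusually strong high-frequency content.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(image.astype(float)))
    power = np.abs(spectrum) ** 2
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    # Normalized radial frequency of each spectral bin (0 = DC, ~1 = Nyquist).
    radius = np.sqrt(((yy - h // 2) / (h / 2)) ** 2 + ((xx - w // 2) / (w / 2)) ** 2)
    total = power.sum()
    return float(power[radius > cutoff].sum() / total) if total > 0 else 0.0

def looks_synthesized(image: np.ndarray, first_threshold: float = 0.5) -> bool:
    """Reject when high-frequency energy exceeds the preset first threshold."""
    return high_freq_energy_ratio(image) > first_threshold
```

A smooth natural gradient scores far lower than a noise-like synthetic texture, which is exactly the separation the check relies on.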
  • the method further includes:
  • S02: collecting the voiceprint information simultaneously while collecting the face information;
  • the face information includes face video information and face image information;
  • S03 determining whether the face motion in the face video information is consistent with the specified action, and determining whether the text corresponding to the voiceprint information is consistent with the text corresponding to the specified text information;
  • S04: if they are all consistent, step S1 is performed; otherwise, the payment authentication fails.
  • The collected face information is the face video information and face image information captured by the camera device while the user performs the corresponding face action according to the displayed specified action information; the collected voiceprint information is obtained by the user reading aloud the displayed specified text information. Since the user does not know the specified action information and the specified text information in advance, the above verification method improves the security of payment verification.
  • Between S03 and S04, the method further includes:
  • If they are synchronized, it is determined whether the first text information corresponding to the lip language information is consistent with the specified text information.
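The lip/audio synchronization check could be sketched as follows. The per-frame "lip opening" and "audio energy" signals, the correlation measure, and the 0.5 threshold are all illustrative assumptions, not details from the patent:

```python
import numpy as np

def lip_audio_sync_score(lip_opening: np.ndarray, audio_energy: np.ndarray) -> float:
    """Pearson correlation between per-frame lip opening and audio energy.

    Both signals are assumed to share one value per video frame (the audio
    energy averaged over each frame). A genuine, synchronized utterance
    yields a strong positive correlation.
    """
    lip = (lip_opening - lip_opening.mean()) / (lip_opening.std() + 1e-9)
    aud = (audio_energy - audio_energy.mean()) / (audio_energy.std() + 1e-9)
    return float(np.mean(lip * aud))

def is_synchronized(lip_opening, audio_energy, threshold: float = 0.5) -> bool:
    """Reject the authentication attempt when lips and audio do not co-vary."""
    return lip_audio_sync_score(np.asarray(lip_opening, float),
                                np.asarray(audio_energy, float)) >= threshold
```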
  • Between S02 and S03, the method further includes:
  • the face information and the voiceprint information are separately subjected to noise reduction and filtering processing.
  • the accuracy of data processing can be improved by the above method.
  • the face specified action information and the specified text information requiring voice input are randomly generated, and the specified action information and the specified text information are displayed.
  • If the face information or the voiceprint information fails to be collected within a preset time, new specified action information and new specified text information are randomly generated and displayed, and the face information and the voiceprint information are re-collected.
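A minimal sketch of the random challenge generation described above; the action and phrase pools are hypothetical placeholders, since the patent only specifies that the prompts are randomly generated:

```python
import secrets

# Hypothetical challenge pools -- the patent only states that the specified
# action and the text to be read aloud are randomly generated.
ACTIONS = ["blink", "open mouth", "turn head left", "turn head right", "nod"]
PHRASES = ["confirm this purchase", "authorize the transaction",
           "pay for my order now", "today is a sunny day"]

def generate_challenge() -> dict:
    """Randomly pick a specified face action and a phrase to read aloud.

    `secrets` (rather than `random`) is used so an attacker cannot predict
    the challenge and prepare a replay in advance.
    """
    return {"action": secrets.choice(ACTIONS), "text": secrets.choice(PHRASES)}
```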
  • the method further includes:
  • By the above method, a first mathematical model of the voiceprint information under coercion and a second mathematical model of the face image information under coercion can be established, so that the user's state can subsequently be determined accurately, increasing the security of payment.
  • the method further includes:
  • Based on the second mathematical model and the voiceprint feature parameters, it is determined whether the user is in a coerced state.
  • S2 is specifically:
  • The face feature parameters and the voiceprint feature parameters are encrypted during transmission, which prevents user data from being stolen and causing economic loss to the user. At the same time, significance analysis can accurately determine whether the face feature parameters match the feature parameters corresponding to the pre-stored face information, and whether the voiceprint feature parameters match the feature parameters corresponding to the pre-stored voiceprint information. This double verification improves the security of payment authentication.
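The "significance analysis" comparing uploaded feature parameters against pre-stored templates is not specified further in the patent; one common realization is a similarity threshold, sketched here with cosine similarity as an assumption:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two feature vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def features_match(probe, enrolled, significance: float = 0.9) -> bool:
    """Accept only when the probe features are 'significantly' close to the
    enrolled template. Both the face check and the voiceprint check would
    each have to pass for the double verification described above."""
    return cosine_similarity(np.asarray(probe, float),
                             np.asarray(enrolled, float)) >= significance
```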
  • S1 is specifically:
  • the method further includes:
  • Storing the location information of the transaction during the transaction process makes the transaction location traceable.
  • The present invention provides a face and voiceprint based payment authentication terminal, comprising a memory 1, a processor 2, and a computer program stored on the memory 1 and operable on the processor 2, wherein the processor 2 implements the following steps when executing the program:
  • S1: determining, according to the face information, whether the face image information contained in the face information includes high-frequency information greater than a preset first threshold; and determining, according to the voiceprint information, whether the voiceprint information includes a high-frequency component greater than a preset second threshold;
  • the above-mentioned face and voiceprint-based payment authentication terminal further includes:
  • S02: collecting the voiceprint information simultaneously while collecting the face information;
  • the face information includes face video information and face image information;
  • S03 determining whether the face motion in the face video information is consistent with the specified action, and determining whether the text corresponding to the voiceprint information is consistent with the text corresponding to the specified text information;
  • S04: if they are all consistent, step S1 is performed; otherwise, the payment authentication fails.
  • In the above face and voiceprint based payment authentication terminal, between S03 and S04, the following is further included:
  • If they are synchronized, it is determined whether the first text information corresponding to the lip language information is consistent with the specified text information.
  • In the above face and voiceprint based payment authentication terminal, between S02 and S03, the following is further included:
  • the face information and the voiceprint information are separately subjected to noise reduction and filtering processing.
  • In the above face and voiceprint based payment authentication terminal, displaying in S01 the face specified action information required for payment verification and the specified text information requiring voice input is specifically:
  • the face specified action information and the specified text information requiring voice input are randomly generated, and the specified action information and the specified text information are displayed.
  • In the above face and voiceprint based payment authentication terminal, if the face information or the voiceprint information fails to be collected within a preset time, new specified action information and new specified text information are randomly generated and displayed, and the face information and the voiceprint information are re-collected.
  • the above-mentioned face and voiceprint-based payment authentication terminal further includes:
  • In the above face and voiceprint based payment authentication terminal, before S2, the following is further included:
  • Based on the second mathematical model and the voiceprint feature parameters, it is determined whether the user is in a coerced state.
  • the S2 is specifically:
  • the above-mentioned face and voiceprint-based payment authentication terminal further includes:
  • a first embodiment of the present invention is:
  • the invention provides a payment authentication method based on face and voiceprint, comprising the following steps:
  • Step S0: randomly generating face specified action information and specified text information requiring voice input, and displaying the specified action information and specified text information; collecting face information while simultaneously collecting voiceprint information, the face information including face video information and face image information; after performing noise reduction and filtering on the face information and the voiceprint information respectively, determining whether the face motion in the face video information is consistent with the specified action, and determining whether the text corresponding to the voiceprint information is consistent with the text corresponding to the specified text information; determining whether the lip information in the face video information is synchronized with the audio information in the voiceprint information: if not synchronized, the payment authentication fails; if synchronized, determining whether the first text information corresponding to the lip information is consistent with the specified text information; if all checks are consistent, step S1 is performed, otherwise the payment authentication fails;
  • If the face information or the voiceprint information fails to be collected within a preset time, step S0 is re-executed;
  • S1: determining, according to the face information, whether the face image information contained in the face information includes high-frequency information greater than a preset first threshold; and determining, according to the voiceprint information, whether the voiceprint information includes a high-frequency component greater than a preset second threshold;
  • Embodiment 2 of the present invention is:
  • the invention provides a payment authentication method based on face and voiceprint, comprising the following steps:
  • Reading the second characteristic parameter corresponding to each sample (voiceprint information), and fitting all the second characteristic parameters with a deep-learning convolutional neural network (that is, defining the neural network, collecting raw data, performing classification training, correcting, and outputting results) to obtain a second mathematical model relating the coerced state to the second characteristic parameter;
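The patent fits a deep convolutional neural network here. As a minimal runnable stand-in for the define/collect/train/correct/output loop, the sketch below trains a tiny logistic-regression "mathematical model" mapping feature parameters to a coercion probability; the learning rate, epoch count, and decision threshold are illustrative assumptions:

```python
import numpy as np

def fit_stress_model(X: np.ndarray, y: np.ndarray, lr: float = 0.1,
                     epochs: int = 500):
    """Fit a logistic-regression model mapping feature parameters to
    P(coerced). Gradient descent plays the role of the patent's
    'classification training and correction' steps; the real system
    would use a deep CNN instead."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # forward pass
        w -= lr * (X.T @ (p - y)) / n           # correction (gradient step)
        b -= lr * float(np.mean(p - y))
    return w, b

def is_coerced(features, w, b, threshold: float = 0.5) -> bool:
    """Output step: apply the fitted model to new feature parameters."""
    p = 1.0 / (1.0 + np.exp(-(np.asarray(features, float) @ w + b)))
    return bool(p > threshold)
```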
  • The preset transaction terminal and the server first perform mutual authentication. If the authentication fails, the payment authentication fails and the transaction is terminated.
  • If the authentication succeeds, the current location information of the transaction terminal is acquired; the current location information is encrypted to obtain location encryption information; and the location encryption information is transmitted to a server, so that the server saves the location encryption information in preset security log information;
  • The face specified action information and the specified text information requiring voice input are randomly generated, and the specified action information and the specified text information are displayed; the voiceprint information is collected simultaneously while the face information is collected; the face information includes the face video information and the face image information. After performing noise reduction and filtering on the face information and the voiceprint information, it is determined whether the face motion in the face video information is consistent with the specified action, and whether the text corresponding to the voiceprint information is consistent with the text corresponding to the specified text information. It is then determined whether the lip information in the face video information is synchronized with the audio information in the voiceprint information: if not synchronized, the payment authentication fails; if synchronized, it is determined whether the first text information corresponding to the lip information is consistent with the specified text information. If there is any inconsistency, the payment authentication fails; otherwise, the following steps are performed:
  • the third embodiment of the present invention is:
  • The present invention provides a face and voiceprint based payment authentication terminal comprising a memory, a processor, and a computer program stored on the memory and operable on the processor, wherein the processor implements the following steps when executing the program:
  • Step S0: randomly generating face specified action information and specified text information requiring voice input, and displaying the specified action information and specified text information; collecting face information while simultaneously collecting voiceprint information, the face information including face video information and face image information; after performing noise reduction and filtering on the face information and the voiceprint information respectively, determining whether the face motion in the face video information is consistent with the specified action, and determining whether the text corresponding to the voiceprint information is consistent with the text corresponding to the specified text information; determining whether the lip information in the face video information is synchronized with the audio information in the voiceprint information: if not synchronized, the payment authentication fails; if synchronized, determining whether the first text information corresponding to the lip information is consistent with the specified text information; if all checks are consistent, step S1 is performed, otherwise the payment authentication fails;
  • If the face information or the voiceprint information fails to be collected within a preset time, step S0 is re-executed;
  • S1: determining, according to the face information, whether the face image information contained in the face information includes high-frequency information greater than a preset first threshold; and determining, according to the voiceprint information, whether the voiceprint information includes a high-frequency component greater than a preset second threshold;
  • Embodiment 4 of the present invention is:
  • The present invention provides a face and voiceprint based payment authentication terminal comprising a memory, a processor, and a computer program stored on the memory and operable on the processor, wherein the processor implements the following steps when executing the program:
  • Reading the second characteristic parameter corresponding to each sample (voiceprint information), and fitting all the second characteristic parameters with a deep-learning convolutional neural network (that is, defining the neural network, collecting raw data, performing classification training, correcting, and outputting results) to obtain a second mathematical model relating the coerced state to the second characteristic parameter;
  • The preset transaction terminal and the server first perform mutual authentication. If the authentication fails, the payment authentication fails and the transaction is terminated.
  • If the authentication succeeds, the current location information of the transaction terminal is acquired; the current location information is encrypted to obtain location encryption information; and the location encryption information is transmitted to a server, so that the server saves the location encryption information in preset security log information;
  • The face specified action information and the specified text information requiring voice input are randomly generated, and the specified action information and the specified text information are displayed; the voiceprint information is collected simultaneously while the face information is collected; the face information includes the face video information and the face image information. After performing noise reduction and filtering on the face information and the voiceprint information, it is determined whether the face motion in the face video information is consistent with the specified action, and whether the text corresponding to the voiceprint information is consistent with the text corresponding to the specified text information. It is then determined whether the lip information in the face video information is synchronized with the audio information in the voiceprint information: if not synchronized, the payment authentication fails; if synchronized, it is determined whether the first text information corresponding to the lip information is consistent with the specified text information. If there is any inconsistency, the payment authentication fails; otherwise, the following steps are performed:
  • Embodiment 5 of the present invention is:
  • The invention provides a POS machine comprising an MCU (microcontroller unit), a camera, a microphone, and a liquid crystal screen, wherein the MCU is electrically connected to the camera, the microphone, and the liquid crystal screen respectively;
  • Before leaving the factory, the POS software undergoes extensive machine-learning training on tens of thousands of samples of face information under normal emotion and under coercion, as well as tens of thousands of samples of normal voice information and voice information under coercion.
  • By reading the specific parameters of each training sample and fitting all training samples with a deep-learning convolutional neural network (the main steps being: defining the neural network, collecting raw data, performing classification training, correcting, and outputting results), the recognition software derives the relationship between these parameters and emotions, so that the face recognition and voiceprint recognition software can identify whether the source of the information is under coercion.
  • The POS and the transaction backend authenticate each other. If authentication fails, the POS has no transaction authority and the transaction ends; if authentication succeeds, the POS has transaction authority, and encrypted communication between the POS and the transaction backend is enabled.
  • The wireless module then encrypts and uploads the current base-station location, which is saved as security log content in the transaction backend.
  • The internal MCU of the POS machine randomly generates written text information and, via the LCD screen, prompts the trader to read the corresponding text into the microphone; the MCU also randomly generates a specified face action (such as blinking, opening the mouth, or turning the head), prompts the user via the LCD screen, and collects the face information of the specified action through the camera.
  • The camera collects the trader's face information while the microphone collects the trader's voice information. While the sound information is being collected, face information continues to be collected for lip-language analysis.
  • the MCU preprocesses the face information, including noise reduction and normalization processing.
  • The MCU checks the legality of the face information, including checking whether the face action is consistent with the prompt, whether there is high-frequency information exceeding the threshold, whether the trader is free of nervousness or fear, whether the lip language is consistent with the prompt information, and whether the lip language is synchronized with the recorded information. If any check fails, the information is judged illegal and rejected, and the transaction is terminated.
  • the MCU calculates the feature values of the face information, including the geometric features of the eyes, nose, mouth, and the like of the face.
  • The MCU encrypts the face information feature values and transmits them to the transaction backend.
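The patent does not name a cipher. The sketch below is a stdlib-only illustration of "encrypt the feature values before upload" using a hash-derived keystream plus an HMAC tag; it is NOT a secure construction, and a real terminal would use a vetted AEAD cipher such as AES-GCM through a proper cryptographic library:

```python
import hashlib
import hmac
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a keystream by hashing key||nonce||counter (a toy CTR mode)."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def encrypt_features(key: bytes, feature_bytes: bytes) -> bytes:
    """Illustrative only: XOR with a hash-derived keystream, then append an
    HMAC tag so the backend can detect tampering in transit."""
    nonce = os.urandom(16)
    ks = _keystream(key, nonce, len(feature_bytes))
    ct = bytes(a ^ b for a, b in zip(feature_bytes, ks))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def decrypt_features(key: bytes, blob: bytes) -> bytes:
    """Verify the tag, then reverse the XOR to recover the feature bytes."""
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("tampered ciphertext")
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))
```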
  • The transaction backend compares the uploaded face feature values with the cardholder's face information reserved at the bank and decides whether to allow the transaction according to the significance of the match: if the significance is insufficient, face recognition fails and the POS is notified to end the transaction; if the significance is sufficient, face recognition succeeds and the POS is notified that the transaction may proceed.
  • the MCU preprocesses the sound information, including noise reduction and the like.
  • The MCU checks the legality of the sound, including checking whether the sound content is consistent with the prompt, whether there is high-frequency information exceeding the threshold, and whether the speaker is free of nervousness or fear. If any check fails, the information is judged illegal and rejected, and the transaction is terminated.
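The audio-side high-frequency check could be realized as a spectral energy ratio, sketched below; the 6 kHz cutoff is an illustrative assumption, not a value from the patent:

```python
import numpy as np

def audio_high_freq_ratio(signal: np.ndarray, sample_rate: int,
                          cutoff_hz: float = 6000.0) -> float:
    """Fraction of spectral energy above `cutoff_hz`.

    Splice points in stitched recordings create waveform discontinuities,
    which show up as extra broadband high-frequency energy; an unusually
    high ratio is grounds for rejection.
    """
    spectrum = np.fft.rfft(signal.astype(float))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    power = np.abs(spectrum) ** 2
    total = power.sum()
    return float(power[freqs > cutoff_hz].sum() / total) if total > 0 else 0.0
```

A clean tone has essentially no energy above the cutoff, while the same tone with a splice-style discontinuity leaks measurably more.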
  • The MCU calculates the voiceprint feature information, namely MFCCs (Mel-Frequency Cepstral Coefficients).
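A compact, numpy-only MFCC extraction consistent with the step above; the frame length, hop, filter count, and FFT size are common defaults (25 ms frames with a 10 ms hop at 16 kHz), not values taken from the patent:

```python
import numpy as np

def mfcc(signal, sample_rate, n_filters=26, n_coeffs=13,
         frame_len=400, hop=160, n_fft=512):
    """Minimal MFCC pipeline: frame, window, power spectrum, mel filterbank,
    log, DCT-II. Returns an (n_frames, n_coeffs) array."""
    signal = np.asarray(signal, float)
    # Frame the signal and apply a Hamming window.
    n_frames = 1 + (len(signal) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hamming(frame_len)
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft

    # Triangular mel filterbank between 0 Hz and Nyquist.
    hz_to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_to_hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(0.0, hz_to_mel(sample_rate / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sample_rate).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fbank[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)

    log_energies = np.log(np.maximum(power @ fbank.T, 1e-10))
    # DCT-II decorrelates the log mel energies into cepstral coefficients.
    n = np.arange(n_filters)
    k = np.arange(n_coeffs)[:, None]
    dct_mat = np.cos(np.pi * k * (2 * n + 1) / (2.0 * n_filters))
    return log_energies @ dct_mat.T
```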
  • The MCU encrypts the voiceprint feature values and transmits them to the transaction backend.
  • The transaction backend compares the uploaded voiceprint feature values with the cardholder's voiceprint information reserved at the bank: if the significance is insufficient, voiceprint recognition fails and the POS is notified to end the transaction; if the significance is sufficient, voiceprint recognition succeeds, the backend transaction is performed, and the POS is informed of the transaction result.
  • The POS machine then presents the transaction result to the trader.
  • the present invention provides a method and a terminal for payment authentication based on a face and a voiceprint.
  • Payment authentication using spliced recording information can be prevented (spliced recordings contain high-frequency components at the splice points); in a computer-synthesized image, a large number of abrupt spatial transitions occur at the edges of the face, eyes, and mouth, corresponding to a large amount of high-frequency information in the frequency domain.
  • Based on the face information and the voiceprint information, it can be effectively determined whether the user is under coercion, making payment more secure and reliable.
  • the combination of face and voiceprint can greatly reduce the risk of misappropriation and enhance transaction security and reliability.
  • The cardholder's face information and voiceprint information are encrypted before being transmitted to the server, and only one-way uplink transmission of the feature parameters is permitted, avoiding leakage of sensitive information.
  • The above method adds high-frequency detection to face and voiceprint recognition to prevent face or recording synthesis attacks, and combines face recognition with voiceprints, which are relatively difficult to disguise, for payment authentication; this effectively prevents impersonation attacks and makes payment more secure.


Abstract

Provided are a face and voiceprint-based payment authentication method and a terminal. The method comprises the following steps: according to the face information, determining whether the face image information included in the face information contains high-frequency information greater than a preset first threshold; according to the voiceprint information, determining whether the voiceprint information contains a high-frequency component greater than a preset second threshold; and, if both determinations are negative, performing payment authentication according to the face image information and the voiceprint information. According to the present invention, the face image information and voiceprint information can be used to determine effectively whether a user is acting under coercion, making the payment process more secure and reliable. The method combines face and voiceprint recognition, greatly reducing the risk of unauthorized payments and enhancing transaction security and reliability.

Description

Payment Authentication Method and Terminal Based on Face and Voiceprint

Technical Field

The present invention relates to the field of electronic payment technologies, and in particular to a face and voiceprint-based payment authentication method and terminal.

Background

With the continuous development of Internet technology, online shopping through smart mobile terminals has become an indispensable part of daily life and has greatly improved its convenience. Since online shopping involves users' sensitive information, a relatively secure payment authentication method is required when shopping and paying online. Current payment authentication mainly relies on fingerprint or face recognition, which has the following drawbacks. Biometric features are easily stolen: fingerprint information is easily lifted from objects the payer has touched and carries no liveness information, while facial image information is inherently public and easily captured from video or photographs. Stolen biometrics are easily used in attacks: stolen fingerprints and facial information can be used to attack payment devices through fingerprint molds and image synthesis respectively, enabling fraudulent payments.
Technical Problem

The technical problem to be solved by the present invention is to provide a face and voiceprint-based payment authentication method and terminal that improve the security of payment authentication.

Technical Solution

To solve the above technical problem, the present invention provides a face and voiceprint-based payment authentication method, comprising the following steps:

S1: According to the face information, determine whether the face image information included in the face information contains high-frequency information greater than a preset first threshold; and according to the voiceprint information, determine whether the voiceprint information contains a high-frequency component greater than a preset second threshold.

S2: If both determinations are negative, perform payment authentication according to the face image information and the voiceprint information.

The present invention also provides a face and voiceprint-based payment authentication terminal, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the following steps when executing the program:

S1: According to the face information, determine whether the face image information included in the face information contains high-frequency information greater than a preset first threshold; and according to the voiceprint information, determine whether the voiceprint information contains a high-frequency component greater than a preset second threshold.

S2: If both determinations are negative, perform payment authentication according to the face image information and the voiceprint information.
Beneficial Effects

The beneficial effects of the invention are as follows:

The present invention provides a face and voiceprint-based payment authentication method and terminal. By determining whether the face image information contains high-frequency information greater than a preset threshold, attacks on payment authentication using computer-synthesized images can be prevented (in a computer-synthesized image, many abrupt spatial transitions occur where regions are stitched together, such as at the edges of the face, eyes, and mouth, which correspond to a large amount of high-frequency information in the frequency domain). Likewise, determining whether the voiceprint information contains a high-frequency component greater than a preset second threshold prevents payment authentication using spliced recordings, since a stitched recording contains high-frequency components at the splice points. The method thus adds high-frequency detection to face and voiceprint recognition to defeat synthesized-face and spliced-recording attacks, and combines face recognition with the comparatively hard-to-forge voiceprint for payment authentication, effectively preventing impersonation attacks and making payment safer.
Brief Description of the Drawings

FIG. 1 is a schematic diagram of the main steps of a face and voiceprint-based payment authentication method according to an embodiment of the present invention;

FIG. 2 is a schematic structural diagram of a face and voiceprint-based payment authentication terminal according to an embodiment of the present invention.

Reference numerals: 1. memory; 2. processor.
Embodiments of the Invention

Referring to FIG. 1, the present invention provides a face and voiceprint-based payment authentication method, comprising the following steps:

S1: According to the face information, determine whether the face image information included in the face information contains high-frequency information greater than a preset first threshold; and according to the voiceprint information, determine whether the voiceprint information contains a high-frequency component greater than a preset second threshold.

S2: If both determinations are negative, perform payment authentication according to the face image information and the voiceprint information.

As can be seen from the above, by determining whether the face image information contains high-frequency information greater than a preset threshold, attacks on payment authentication using computer-synthesized images can be prevented (in a computer-synthesized image, many abrupt spatial transitions occur where regions are stitched together, such as at the edges of the face, eyes, and mouth, which correspond to a large amount of high-frequency information in the frequency domain). Likewise, determining whether the voiceprint information contains a high-frequency component greater than a preset second threshold prevents payment authentication using spliced recordings, since a stitched recording contains high-frequency components at the splice points. The method thus adds high-frequency detection to face and voiceprint recognition to defeat synthesized-face and spliced-recording attacks, and combines face recognition with the comparatively hard-to-forge voiceprint for payment authentication, effectively preventing impersonation attacks and making payment safer.
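The image-side check described above can be sketched as follows. This is a minimal illustration of the idea, not the patent's implementation: it measures the fraction of 2-D spectral energy beyond an assumed radial cutoff, which rises sharply when a hard splice seam is present. The numpy usage, cutoff value, and test patches are all assumptions.

```python
import numpy as np

def high_freq_ratio_image(img, cutoff=0.25):
    """Fraction of 2-D spectral energy beyond a normalized radial cutoff.
    A composited face tends to raise this ratio, since hard stitching
    seams introduce abrupt spatial jumps (i.e. high-frequency energy)."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spec.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # radial distance from the spectrum centre, normalized so Nyquist is ~0.5
    r = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)
    return float(spec[r > cutoff].sum() / spec.sum())

# a smooth "natural" patch vs. the same patch with a hard splice seam
smooth = np.outer(np.hanning(64), np.hanning(64))
spliced = smooth.copy()
spliced[:, 32:] = 1.0 - spliced[:, 32:]   # abrupt seam down the middle
assert high_freq_ratio_image(spliced) > high_freq_ratio_image(smooth)
```

An actual detector would compare this ratio (or the peak high-frequency magnitude) against the preset first threshold tuned on genuine camera captures.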
Further, before S1 the method also comprises:

S01: Display the facial action the user must perform for payment verification and the designated text the user must speak.

S02: Collect the face information and, at the same time, the voiceprint information; the face information includes face video information and face image information.

S03: Determine whether the facial action in the face video information matches the designated action, and whether the text corresponding to the voiceprint information matches the designated text.

S04: If both match, execute step S1; otherwise payment authentication fails.

As can be seen from the above, the collected face information is the face video and face image captured by the camera while the user performs the displayed designated action, and the collected voiceprint information is obtained from the user's spoken input of the displayed designated text. Since the user does not know the designated action or text in advance, this verification scheme improves the security of payment verification.
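The challenge step (S01) can be sketched as below. The action and phrase pools, their wording, and the use of Python's `secrets` module are all assumptions for illustration; the point is only that the challenge is unpredictable to the payer.

```python
import secrets

# hypothetical challenge pools; a real deployment would use much larger sets
ACTIONS = ["blink twice", "turn head left", "open mouth", "nod slowly"]
PHRASES = ["confirm payment one five nine", "voice check seven two four"]

def generate_challenge():
    """Pick an unpredictable action + phrase (step S01), so a video or
    recording replayed from an earlier session cannot match."""
    return {"action": secrets.choice(ACTIONS), "phrase": secrets.choice(PHRASES)}

challenge = generate_challenge()
print(challenge["action"], "/", challenge["phrase"])
```

Using a cryptographic source (`secrets`) rather than `random` matters here: the challenge must not be predictable from earlier sessions.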
Further, between S03 and S04 the method also comprises:

Determine whether the lip movements in the face video information are synchronized with the audio in the voiceprint information; if not, payment authentication fails.

If they are synchronized, determine whether the first text corresponding to the lip movements matches the designated text.

As can be seen from the above, this ensures that the lip movements are synchronized with the audio and that they correspond to the designated text, making payment verification more secure and reliable.
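One simple way to approximate the synchronization check is to correlate a per-frame mouth-opening measurement with the audio loudness envelope. This is only a sketch of the idea (production systems use learned audio-visual models); the signals and the correlation threshold are synthetic assumptions.

```python
import numpy as np

def lips_audio_synchronized(mouth_openness, audio_envelope, min_corr=0.5):
    """Pearson correlation between per-frame mouth opening and the audio
    loudness envelope; below min_corr the streams are treated as unsynced."""
    m = (mouth_openness - mouth_openness.mean()) / (mouth_openness.std() + 1e-9)
    a = (audio_envelope - audio_envelope.mean()) / (audio_envelope.std() + 1e-9)
    return float(np.mean(m * a)) >= min_corr

t = np.linspace(0.0, 2.0 * np.pi, 50)
burst = np.exp(-4.0 * (t - 2.0) ** 2)                 # one synthetic speech burst
assert lips_audio_synchronized(burst, burst)          # aligned streams pass
assert not lips_audio_synchronized(burst, np.roll(burst, 25))  # shifted streams fail
```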
Further, between S02 and S03 the method also comprises:

Perform noise reduction and filtering on the face information and the voiceprint information respectively.

As can be seen from the above, this improves the accuracy of the subsequent data processing.
Further, displaying the designated facial action and the designated spoken text in S01 is specifically:

Randomly generate the designated facial action information and the designated text to be spoken, and display them.

As can be seen from the above, this prevents an attacker from reusing face information captured during a previous verification, and from replaying a recording of the speech the user entered during a previous payment, ensuring the security of the payment.
Further, if collection of the face information or voiceprint information fails within a preset time, randomly display new designated action information and new designated text, and re-collect the face information and voiceprint information.

As can be seen from the above, this prevents the payment authentication information from being stolen, further improving payment security.
Further, before S2 the method also comprises:

Acquire multiple samples of first face image information captured under coercion, and compute a first feature parameter for each sample.

Fit all the first feature parameters to obtain a first mathematical model relating the coerced state to the first feature parameter.

Acquire multiple samples of first voiceprint information captured under coercion, and compute a second feature parameter for each sample.

Fit all the second feature parameters to obtain a second mathematical model relating the coerced state to the second feature parameter.

As can be seen from the above, this builds a first mathematical model of face image information under coercion and a second mathematical model of voiceprint information under coercion, so that the user's state can later be determined accurately, improving payment security.
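The patent leaves the fitting method open here (a later embodiment mentions a deep-learning convolutional neural network). As a stand-in, the sketch below fits a tiny logistic-regression model over synthetic "feature parameters" — the features, data, and optimizer are all assumptions chosen only to illustrate the idea of mapping feature parameters to a coerced/not-coerced decision.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_coercion_model(X, y, lr=0.5, steps=500):
    """Logistic-regression stand-in for the patent's model fit:
    X is (n, d) feature parameters, y is 1 for coerced samples, 0 otherwise."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(coerced)
        g = p - y                                # gradient of log-loss
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def predict_coerced(w, b, x):
    return 1.0 / (1.0 + np.exp(-(x @ w + b))) > 0.5

# synthetic data: coerced samples have elevated tension-related features
X = np.vstack([rng.normal(0, 1, (40, 3)), rng.normal(3, 1, (40, 3))])
y = np.r_[np.zeros(40), np.ones(40)]
w, b = fit_coercion_model(X, y)
assert predict_coerced(w, b, np.array([3.0, 3.0, 3.0]))
assert not predict_coerced(w, b, np.array([0.0, 0.0, 0.0]))
```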
Further, before S2 the method also comprises:

Compute the face feature parameter of the face image information in the acquired face information.

Compute the voiceprint feature parameter of the acquired voiceprint information.

According to the first mathematical model and the face feature parameter, determine whether the user is under coercion.

According to the second mathematical model and the voiceprint feature parameter, determine whether the user is under coercion.

As can be seen from the above, this accurately determines whether the user is acting under coercion, preventing an attacker from forcing the user to authorize a payment and thereby causing the user heavy losses.
Further, S2 is specifically:

If both determinations in S1 are negative, encrypt the face feature parameter and the voiceprint feature parameter and send them to the server, so that the server performs a significance analysis between the face feature parameter and the first face feature parameter corresponding to the pre-stored face information, and between the voiceprint feature parameter and the first voiceprint feature parameter corresponding to the pre-stored voiceprint information, obtaining a significance analysis result.

According to the significance analysis result, determine whether payment authentication passes.

As can be seen from the above, encrypting the face and voiceprint feature parameters during transmission prevents user data from being stolen and causing economic loss; meanwhile, the significance analysis accurately determines whether the face feature parameter matches the feature parameter of the pre-stored face information, and whether the voiceprint feature parameter matches that of the pre-stored voiceprint information. This double verification improves the security of payment authentication.
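A sketch of the upload-and-compare step: the features are encrypted before leaving the terminal, and the server scores similarity against the enrolled features. Everything here is illustrative, not the patent's scheme — the XOR keystream stands in for a real authenticated cipher (e.g. AES-GCM), and cosine similarity stands in for the unspecified "significance analysis".

```python
import hashlib
import math

def encrypt_features(features, key):
    """Illustration only: XOR with a SHA-256-derived keystream. A production
    terminal would use an authenticated cipher such as AES-GCM instead."""
    raw = b"".join(int(f * 1000).to_bytes(4, "big", signed=True) for f in features)
    stream = b""
    counter = 0
    while len(stream) < len(raw):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(raw, stream))

def cosine_similarity(u, v):
    """Stand-in 'significance analysis': compare the uploaded feature
    parameters with the ones enrolled at the bank."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

key = b"session-key"
live = [0.12, 0.80, 0.33]       # hypothetical live feature parameters
enrolled = [0.11, 0.79, 0.35]   # hypothetical enrolled feature parameters
assert encrypt_features(live, key) != encrypt_features(live, b"other-key")
assert cosine_similarity(live, enrolled) > 0.95   # same person: high similarity
```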
Further, S1 is specifically:

Compute the image frequency-domain information corresponding to the face image information, and determine whether the image frequency-domain information contains high-frequency information greater than a preset threshold.

According to the voiceprint information, compute a set of high-frequency components, and determine whether the set contains a high-frequency component greater than a preset threshold.

As can be seen from the above, this accurately determines whether the voiceprint information contains high-frequency components, preventing synthesized recordings from being used for payment verification, and whether the face image information contains high-frequency information, preventing computer-synthesized face images from being used for payment verification, thereby improving payment security.
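The voiceprint-side check can be sketched the same way in one dimension: measure the fraction of spectral energy above a cutoff frequency and compare it with the preset second threshold. The cutoff frequency, sample rate, and test signals are assumptions; a splice is simulated here as an abrupt phase jump mid-recording.

```python
import numpy as np

def high_freq_ratio_audio(audio, sample_rate, cutoff_hz=7000.0):
    """Fraction of spectral energy above cutoff_hz; abrupt splice points in a
    stitched recording inject broadband (high-frequency) components."""
    spec = np.abs(np.fft.rfft(audio)) ** 2
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sample_rate)
    return float(spec[freqs > cutoff_hz].sum() / spec.sum())

sr = 16000
t = np.arange(sr) / sr
clean = np.sin(2 * np.pi * 220 * t)            # smooth 220 Hz tone
spliced = clean.copy()
spliced[sr // 2:] = np.sin(2 * np.pi * 220 * t[sr // 2:] + np.pi)  # splice jump
assert high_freq_ratio_audio(spliced, sr) > high_freq_ratio_audio(clean, sr)
```

A detector would flag the recording when this ratio (or the largest high-frequency component) exceeds the preset second threshold.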
Further, before S1 the method also comprises:

Perform an authentication check between the preset transaction terminal and the server; if authentication fails, payment authentication fails and the transaction ends.

If authentication succeeds, acquire the current location information of the transaction terminal.

Encrypt the current location information to obtain location-encrypted information.

Send the location-encrypted information to the server, so that the server saves it in preset security log information.

As can be seen from the above, storing the location of each transaction makes the transaction location traceable.
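The traceability step can be sketched as follows. The log layout, field names, and the HMAC tamper-evidence tag are assumptions (the patent only requires that the encrypted location be stored in a security log); key management and the transport encryption itself are out of scope here.

```python
import hashlib
import hmac
import json
import time

def log_transaction_location(security_log, lat, lon, key):
    """Append the terminal's position to the security log with an HMAC tag,
    so later tampering with a stored entry is detectable."""
    entry = {"ts": int(time.time()), "lat": round(lat, 5), "lon": round(lon, 5)}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["tag"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    security_log.append(entry)
    return entry

log = []
entry = log_transaction_location(log, 26.07450, 119.29650, b"server-secret")
assert log[-1] is entry and len(entry["tag"]) == 64   # SHA-256 hex tag
```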
Referring to FIG. 2, the present invention provides a face and voiceprint-based payment authentication terminal, comprising a memory 1, a processor 2, and a computer program stored in the memory 1 and executable on the processor 2, wherein the processor 2 implements the following steps when executing the program:

S1: According to the face information, determine whether the face image information included in the face information contains high-frequency information greater than a preset first threshold; and according to the voiceprint information, determine whether the voiceprint information contains a high-frequency component greater than a preset second threshold.

S2: If both determinations are negative, perform payment authentication according to the face image information and the voiceprint information.
Further, in the above face and voiceprint-based payment authentication terminal, before S1 the steps also comprise:

S01: Display the facial action the user must perform for payment verification and the designated text the user must speak.

S02: Collect the face information and, at the same time, the voiceprint information; the face information includes face video information and face image information.

S03: Determine whether the facial action in the face video information matches the designated action, and whether the text corresponding to the voiceprint information matches the designated text.

S04: If both match, execute step S1; otherwise payment authentication fails.

Further, between S03 and S04 the steps also comprise:

Determine whether the lip movements in the face video information are synchronized with the audio in the voiceprint information; if not, payment authentication fails.

If they are synchronized, determine whether the first text corresponding to the lip movements matches the designated text.

Further, between S02 and S03 the steps also comprise:

Perform noise reduction and filtering on the face information and the voiceprint information respectively.

Further, displaying the designated facial action and the designated spoken text in S01 is specifically:

Randomly generate the designated facial action information and the designated text to be spoken, and display them.

Further, if collection of the face information or voiceprint information fails within a preset time, randomly display new designated action information and new designated text, and re-collect the face information and voiceprint information.

Further, before S1 the steps also comprise:

Acquire multiple samples of first face image information captured under coercion, and compute a first feature parameter for each sample.

Fit all the first feature parameters to obtain a first mathematical model relating the coerced state to the first feature parameter.

Acquire multiple samples of first voiceprint information captured under coercion, and compute a second feature parameter for each sample.

Fit all the second feature parameters to obtain a second mathematical model relating the coerced state to the second feature parameter.

Further, before S2 the steps also comprise:

Compute the face feature parameter of the face image information in the acquired face information.

Compute the voiceprint feature parameter of the acquired voiceprint information.

According to the first mathematical model and the face feature parameter, determine whether the user is under coercion.

According to the second mathematical model and the voiceprint feature parameter, determine whether the user is under coercion.

Further, S2 is specifically:

If both determinations in S1 are negative, encrypt the face feature parameter and the voiceprint feature parameter and send them to the server, so that the server performs a significance analysis between the face feature parameter and the first face feature parameter corresponding to the pre-stored face information, and between the voiceprint feature parameter and the first voiceprint feature parameter corresponding to the pre-stored voiceprint information, obtaining a significance analysis result.

According to the significance analysis result, determine whether payment authentication passes.

Further, before S1 the steps also comprise:

Perform an authentication check between the preset transaction terminal and the server; if authentication fails, payment authentication fails and the transaction ends.

If authentication succeeds, acquire the current location information of the transaction terminal.

Encrypt the current location information to obtain location-encrypted information.

Send the location-encrypted information to the server, so that the server saves it in preset security log information.
Referring to FIG. 1, Embodiment 1 of the present invention is as follows.

The present invention provides a face and voiceprint-based payment authentication method, comprising the following steps:

S0: Randomly generate the designated facial action information and the designated text to be spoken, and display them. Collect the face information and, at the same time, the voiceprint information; the face information includes face video information and face image information. After performing noise reduction and filtering on the face and voiceprint information respectively, determine whether the facial action in the face video information matches the designated action, and whether the text corresponding to the voiceprint information matches the designated text. Determine whether the lip movements in the face video information are synchronized with the audio in the voiceprint information; if not, payment authentication fails. If they are synchronized, determine whether the first text corresponding to the lip movements matches the designated text. If all checks pass, execute step S1; otherwise payment authentication fails.

If collection of the face information or voiceprint information fails within a preset time, re-execute step S0.

S1: According to the face information, determine whether the face image information included in the face information contains high-frequency information greater than a preset first threshold; and according to the voiceprint information, determine whether the voiceprint information contains a high-frequency component greater than a preset second threshold.

S2: If both determinations are negative, perform payment authentication according to the face image information and the voiceprint information.
本发明的实施例二为:Embodiment 2 of the present invention is:
本发明提供了一种基于人脸和声纹的支付认证方法,包括以下步骤:The invention provides a payment authentication method based on face and voiceprint, comprising the following steps:
获取多份处于被胁迫状态的第一人脸图像信息,计算得到每一份第一人脸图像信息的第一特征参数;拟合所有的第一特征参数,得到被胁迫状态与第一特征参数之间的第一数学模型;Acquiring a plurality of first face image information in a state of being stressed, calculating a first feature parameter of each first face image information; fitting all the first feature parameters to obtaining a stressed state and a first feature parameter The first mathematical model between;
其中“拟合所有的第一特征参数,得到被胁迫状态与第一特征参数之间的第一数学模型”具体为:Wherein "fitting all the first characteristic parameters to obtain a first mathematical model between the stressed state and the first characteristic parameter" is specifically:
读取每一份样本(人脸图像信息)对应的第一特征参数,通过深度学习卷积神经网络的方法对所有的第一特征参数进行拟合,即定义神经网络、收集原始数据、分类训练、校正、输出结果,得到被胁迫状态与第一特征参数之间的第一数学模型;Reading the first feature parameter corresponding to each sample (face image information), fitting all the first feature parameters by deep learning convolutional neural network, ie defining neural network, collecting original data, classifying training And correcting, outputting the result, obtaining a first mathematical model between the stressed state and the first characteristic parameter;
获取多份处于被胁迫状态的第一声纹信息,计算得到每一份第一声纹信息的第二特征参数;拟合所有的第二特征参数,得到被胁迫状态与特征参数之间的第二数学模型;Acquiring a plurality of first voiceprint information in a state of being stressed, calculating a second feature parameter of each first voiceprint information; fitting all second feature parameters to obtain a relationship between the stress state and the feature parameter Second mathematical model;
其中“拟合所有的第二特征参数,得到被胁迫状态与第一特征参数之间的第二数学模型”具体为:Wherein "fitting all the second characteristic parameters to obtain a second mathematical model between the stressed state and the first characteristic parameter" is specifically:
读取每一份样本(声纹信息)对应的第二特征参数,通过深度学习卷积神经网络的方法对所有的第二特征参数进行拟合,即定义神经网络、收集原始数据、分类训练、校正、输出结果,得到被胁迫状态与第二特征参数之间的第二数学模型;Reading the second characteristic parameter corresponding to each sample (soundprint information), fitting all the second characteristic parameters by deep learning convolutional neural network, ie defining neural network, collecting original data, classifying training, Correcting and outputting a result, obtaining a second mathematical model between the stressed state and the second characteristic parameter;
At payment time, an authentication judgment is first performed between the preset transaction terminal and the server; if the authentication fails, the payment authentication fails and the transaction is terminated.
If the authentication succeeds, the current location information of the transaction terminal is acquired; the current location information is encrypted to obtain location-encrypted information; and the location-encrypted information is sent to the server, so that the server saves it in preset security log information.
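The patent does not name a cipher for the location upload. The following standard-library sketch illustrates the encrypt-then-authenticate pattern with a toy SHA-256 keystream and an HMAC tag; the pre-shared key, record fields, and message format are all hypothetical, and a production terminal would use a vetted scheme such as AES-GCM instead:

```python
import hashlib
import hmac
import json
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy counter-mode keystream derived from SHA-256 (illustration only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_location(key: bytes, location: dict) -> dict:
    """Encrypt-then-MAC a location record before the one-way uplink."""
    plaintext = json.dumps(location, sort_keys=True).encode()
    nonce = os.urandom(16)
    cipher = bytes(p ^ k for p, k in zip(plaintext,
                                         _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + cipher, hashlib.sha256).hexdigest()
    return {"nonce": nonce.hex(), "cipher": cipher.hex(), "tag": tag}

def decrypt_location(key: bytes, msg: dict) -> dict:
    """Server side: verify the tag, then decrypt into the security log."""
    nonce, cipher = bytes.fromhex(msg["nonce"]), bytes.fromhex(msg["cipher"])
    expected = hmac.new(key, nonce + cipher, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, msg["tag"]):
        raise ValueError("tampered location record")
    plaintext = bytes(c ^ k for c, k in zip(cipher,
                                            _keystream(key, nonce, len(cipher))))
    return json.loads(plaintext)

key = b"terminal-shared-key-demo"   # hypothetical pre-shared key
record = {"cell_id": "460-00-1234-5678", "lat": 26.08, "lon": 119.30}
sealed = encrypt_location(key, record)
```

The tag check on the server side also gives the security log some integrity protection, which the patent's plain "encrypt and save" wording leaves implicit.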
After the authentication succeeds, face-action information and text information to be read aloud are randomly generated, and the specified action information and specified text information are displayed. Face information and voiceprint information are collected simultaneously, the face information including face video information and face image information. After noise reduction and filtering are applied to the face information and the voiceprint information respectively, it is determined whether the face action in the face video information matches the specified action, and whether the text corresponding to the voiceprint information matches the text corresponding to the specified text information. It is then determined whether the lip-reading information in the face video information is synchronized with the audio information in the voiceprint information; if not synchronized, the payment authentication fails. If synchronized, it is determined whether the first text information corresponding to the lip-reading information matches the specified text information; if any inconsistency exists, the payment authentication fails, otherwise the following steps are performed:
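The sequence of liveness checks above can be sketched as a single decision function. The boolean inputs stand in for the actual video, audio, and lip-reading analyses, which the patent does not specify in detail:

```python
def liveness_check(action_matches, spoken_text, lip_text, prompt_text,
                   lip_audio_sync):
    """Apply the checks in the order given in the flow above:
    action match, spoken text vs prompt, lip/audio synchrony,
    lip-read text vs prompt.  Returns (passed, reason)."""
    if not action_matches:
        return False, "face action does not match the prompted action"
    if spoken_text != prompt_text:
        return False, "spoken text does not match the prompted text"
    if not lip_audio_sync:
        return False, "lip movement is not synchronized with the audio"
    if lip_text != prompt_text:
        return False, "lip-read text does not match the prompted text"
    return True, "liveness checks passed"
```

Any single failed check aborts the payment authentication, matching the "if any inconsistency exists, the payment authentication fails" rule.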
According to the face information, determine whether the face image information included in the face information contains high-frequency information exceeding a preset first threshold; and, according to the voiceprint information, determine whether the voiceprint information contains high-frequency components exceeding a preset second threshold.
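The patent only states that excess high-frequency energy above a preset threshold flags synthesis or splicing; it does not fix the transform or the threshold. A minimal one-dimensional sketch of the idea, using a direct DFT on a short window (the cutoff fraction and threshold are illustrative choices, not values from the patent):

```python
import cmath
import math

def high_freq_ratio(signal, cutoff_fraction=0.6):
    """Fraction of spectral energy above `cutoff_fraction` of the analyzed
    band, computed with a direct DFT (O(n^2), fine for a short window)."""
    n = len(signal)
    spectrum = [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)) for k in range(n // 2)]
    energy = [abs(c) ** 2 for c in spectrum[1:]]   # skip the DC bin
    cut = int(len(energy) * cutoff_fraction)
    total = sum(energy)
    return sum(energy[cut:]) / total if total else 0.0

def has_excess_high_freq(signal, threshold=0.2):
    """Flag a window whose high-frequency energy share exceeds the preset
    threshold, as the patent's first/second threshold checks do."""
    return high_freq_ratio(signal) > threshold

# A smooth low-frequency tone vs. a frame dominated by high-frequency content.
smooth = [math.sin(2 * math.pi * 3 * t / 128) for t in range(128)]
harsh  = [math.sin(2 * math.pi * 50 * t / 128) for t in range(128)]
```

For images the same check would be applied to a 2-D spectrum of regions around the face, eye, and mouth edges, where the patent says synthesis artifacts concentrate.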
Calculate the face feature parameters of the face image information in the acquired face information, and calculate the voiceprint feature parameters of the acquired voiceprint information; according to the first mathematical model and the face feature parameters, determine whether the user is in a coerced state; according to the second mathematical model and the voiceprint feature parameters, determine whether the user is in a coerced state.
If all of the above determinations are negative, encrypt the face feature parameters and the voiceprint feature parameters and send them to the server, so that the server performs a significance analysis of the face feature parameters against the first face feature parameters corresponding to pre-stored face information, and of the voiceprint feature parameters against the first voiceprint feature parameters corresponding to pre-stored voiceprint information, obtaining significance analysis results.
According to the significance analysis results, determine whether the payment authentication passes.
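The patent leaves "significance analysis" unspecified. One common way to realize a match-against-template decision is a similarity score with a threshold per modality; the sketch below uses cosine similarity and hypothetical thresholds purely for illustration:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def significance_decision(face_feat, stored_face, voice_feat, stored_voice,
                          face_threshold=0.9, voice_threshold=0.9):
    """Pass payment authentication only when BOTH modalities are
    sufficiently close to their enrolled templates."""
    face_ok = cosine_similarity(face_feat, stored_face) >= face_threshold
    voice_ok = cosine_similarity(voice_feat, stored_voice) >= voice_threshold
    return face_ok and voice_ok

enrolled_face  = [1.0, 2.0, 3.0, 4.0]   # hypothetical pre-stored template
enrolled_voice = [0.5, 0.1, 0.9, 0.3]
```

Requiring both modalities to pass is what gives the combined scheme its claimed resistance to single-modality forgery.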
Referring to FIG. 2, a third embodiment of the present invention is as follows:
The present invention provides a face- and voiceprint-based payment authentication terminal, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the following steps when executing the program:
S0: randomly generate face-action information and text information requiring voice input, and display the specified action information and specified text information; collect face information and voiceprint information simultaneously, the face information including face video information and face image information; after applying noise reduction and filtering to the face information and the voiceprint information respectively, determine whether the face action in the face video information matches the specified action, and whether the text corresponding to the voiceprint information matches the text corresponding to the specified text information; determine whether the lip-reading information in the face video information is synchronized with the audio information in the voiceprint information, and if not synchronized, the payment authentication fails; if synchronized, determine whether the first text information corresponding to the lip-reading information matches the specified text information; if all checks pass, perform step S1, otherwise the payment authentication fails;
wherein, if the collection of the face information and the voiceprint information fails within a preset time, step S0 is performed again;
S1: according to the face information, determine whether the face image information included in the face information contains high-frequency information exceeding a preset first threshold; and, according to the voiceprint information, determine whether the voiceprint information contains high-frequency components exceeding a preset second threshold;
S2: if neither is present, perform payment authentication according to the face image information and the voiceprint information.
A fourth embodiment of the present invention is as follows:
The present invention provides a face- and voiceprint-based payment authentication terminal, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the following steps when executing the program:
Acquire multiple pieces of first face image information captured in a coerced state, and calculate a first feature parameter for each piece of first face image information; fit all of the first feature parameters to obtain a first mathematical model between the coerced state and the first feature parameter.
Here, "fitting all of the first feature parameters to obtain a first mathematical model between the coerced state and the first feature parameter" is specifically:
reading the first feature parameter corresponding to each sample (a piece of face image information), and fitting all of the first feature parameters by a deep-learning convolutional neural network, that is, defining the neural network, collecting the raw data, performing classification training, correcting, and outputting the result, thereby obtaining the first mathematical model between the coerced state and the first feature parameter.
Acquire multiple pieces of first voiceprint information captured in a coerced state, and calculate a second feature parameter for each piece of first voiceprint information; fit all of the second feature parameters to obtain a second mathematical model between the coerced state and the second feature parameter.
Here, "fitting all of the second feature parameters to obtain a second mathematical model between the coerced state and the second feature parameter" is specifically:
reading the second feature parameter corresponding to each sample (a piece of voiceprint information), and fitting all of the second feature parameters by a deep-learning convolutional neural network, that is, defining the neural network, collecting the raw data, performing classification training, correcting, and outputting the result, thereby obtaining the second mathematical model between the coerced state and the second feature parameter.
At payment time, an authentication judgment is first performed between the preset transaction terminal and the server; if the authentication fails, the payment authentication fails and the transaction is terminated.
If the authentication succeeds, the current location information of the transaction terminal is acquired; the current location information is encrypted to obtain location-encrypted information; and the location-encrypted information is sent to the server, so that the server saves it in preset security log information.
After the authentication succeeds, face-action information and text information to be read aloud are randomly generated, and the specified action information and specified text information are displayed. Face information and voiceprint information are collected simultaneously, the face information including face video information and face image information. After noise reduction and filtering are applied to the face information and the voiceprint information respectively, it is determined whether the face action in the face video information matches the specified action, and whether the text corresponding to the voiceprint information matches the text corresponding to the specified text information. It is then determined whether the lip-reading information in the face video information is synchronized with the audio information in the voiceprint information; if not synchronized, the payment authentication fails. If synchronized, it is determined whether the first text information corresponding to the lip-reading information matches the specified text information; if any inconsistency exists, the payment authentication fails, otherwise the following steps are performed:
According to the face information, determine whether the face image information included in the face information contains high-frequency information exceeding a preset first threshold; and, according to the voiceprint information, determine whether the voiceprint information contains high-frequency components exceeding a preset second threshold.
Calculate the face feature parameters of the face image information in the acquired face information, and calculate the voiceprint feature parameters of the acquired voiceprint information; according to the first mathematical model and the face feature parameters, determine whether the user is in a coerced state; according to the second mathematical model and the voiceprint feature parameters, determine whether the user is in a coerced state.
If all of the above determinations are negative, encrypt the face feature parameters and the voiceprint feature parameters and send them to the server, so that the server performs a significance analysis of the face feature parameters against the first face feature parameters corresponding to pre-stored face information, and of the voiceprint feature parameters against the first voiceprint feature parameters corresponding to pre-stored voiceprint information, obtaining significance analysis results.
According to the significance analysis results, determine whether the payment authentication passes.
A fifth embodiment of the present invention is as follows:
The present invention provides a POS terminal, comprising an MCU (microcontroller unit), a camera, a microphone, and a liquid crystal display (LCD), the MCU being electrically connected to the camera, the microphone, and the LCD respectively. The workflow is:
1) Before leaving the factory, the POS software undergoes extensive machine-learning training on tens of thousands of samples of face information under normal emotion and under coercion, and tens of thousands of samples of normal voice and voice under coercion. The recognition software reads specific parameters from each training sample and fits all training samples into a computational model by a deep-learning convolutional neural network (the main steps being: defining the neural network, collecting the raw data, classification training, correction, and outputting the result), deriving the relationship between these parameters and emotional state, so that the face-recognition and voiceprint-recognition software can identify whether the source of the information is under coercion.
2) The POS authenticates with the transaction back end: if the authentication fails, the POS has no transaction authority and the transaction is terminated; if the authentication succeeds, the POS has transaction authority and encrypted communication between the POS and the back end is enabled. If the POS has a built-in wireless module, the current base-station location is encrypted and uploaded at this point, and saved as part of the security log at the transaction back end.
3) The MCU inside the POS randomly generates the specified text information and prompts the customer via the LCD to read it into the microphone; at the same time, the MCU randomly generates a specified face action (such as blinking, opening the mouth, or turning the head) and prompts the user via the LCD so that the camera can capture face information of the specified action.
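The challenge generation in step 3) can be sketched as follows; the action list mirrors the examples given above (blinking, opening the mouth, turning the head), while the six-digit phrase format is an illustrative choice not fixed by the patent:

```python
import random
import string

ACTIONS = ["blink", "open mouth", "turn head left", "turn head right"]

def generate_challenge(rng=random):
    """Pick a random face action and a random digit string for the user to
    perform and read aloud.  Randomizing both per transaction makes a
    pre-recorded video or audio clip unlikely to match the prompt."""
    action = rng.choice(ACTIONS)
    phrase = "".join(rng.choice(string.digits) for _ in range(6))
    return action, phrase

action, phrase = generate_challenge()
```

The MCU would display both values on the LCD and hold them for the consistency checks in step 6).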
4) The camera captures the customer's face information while the microphone captures the customer's voice. Face information continues to be captured while the voice is recorded, so that lip-reading computation can be performed.
5) The MCU preprocesses the face information, including noise reduction and normalization.
6) The MCU checks the legality of the face information, including whether the face action matches the prompt, whether high-frequency information exceeding the threshold is present, whether the user shows no signs of tension or fear, whether the lip movements match the prompted text, and whether the lip movements are synchronized with the recording. If any check fails, the information is judged illegal, rejected, and the transaction is terminated.
7) The MCU calculates face feature values, including geometric features of the eyes, nose, mouth, and so on.
8) The MCU encrypts the face feature values and transmits them to the transaction back end.
9) The transaction back end performs a significance analysis between the uploaded face feature values and the cardholder face information reserved at the bank, and decides from the result whether to allow the transaction: if the significance is insufficient, face recognition has failed and the POS is told to terminate the transaction; if the significance is clear, face recognition has succeeded and the POS is told to allow the transaction.
10) The MCU preprocesses the voice information, including noise reduction.
11) The MCU checks the legality of the voice, including whether the spoken content matches the prompt, whether high-frequency information exceeding the threshold is present, and whether the emotion shows no tension or fear. If any check fails, the information is judged illegal, rejected, and the transaction is terminated.
12) The MCU calculates voiceprint feature information, namely MFCC (Mel-Frequency Cepstral Coefficients).
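The patent names MFCC but does not give an implementation. The sketch below is a compact, standard-library illustration of the usual pipeline (window, power spectrum, triangular mel filterbank, log, DCT-II) at toy sizes; a production system would use an FFT library and conventional parameters (e.g. 26 filters, 13 coefficients), and the frame length, filter count, and bin mapping here are simplifying assumptions:

```python
import cmath
import math

def hz_to_mel(f):
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc_frame(frame, sample_rate, n_filters=10, n_coeffs=5):
    """MFCC of a single frame via a direct DFT (toy sizes only)."""
    n = len(frame)
    windowed = [s * (0.54 - 0.46 * math.cos(2 * math.pi * t / (n - 1)))
                for t, s in enumerate(frame)]              # Hamming window
    n_bins = n // 2 + 1
    power = [abs(sum(windowed[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                     for t in range(n))) ** 2 / n for k in range(n_bins)]
    # Filter centers evenly spaced on the mel scale, mapped back to DFT bins.
    mel_top = hz_to_mel(sample_rate / 2)
    mel_pts = [i * mel_top / (n_filters + 1) for i in range(n_filters + 2)]
    bin_pts = [min(n_bins - 1,
                   round(mel_to_hz(m) / (sample_rate / 2) * (n_bins - 1)))
               for m in mel_pts]
    log_energies = []
    for i in range(1, n_filters + 1):
        lo, mid, hi = bin_pts[i - 1], bin_pts[i], bin_pts[i + 1]
        e = 0.0
        for k in range(lo, hi + 1):            # triangular filter weights
            if k < mid and mid > lo:
                w = (k - lo) / (mid - lo)
            elif k >= mid and hi > mid:
                w = (hi - k) / (hi - mid)
            else:
                w = 0.0
            e += w * power[k]
        log_energies.append(math.log(e + 1e-12))
    # DCT-II of the log filterbank energies -> cepstral coefficients.
    return [sum(le * math.cos(math.pi * c * (m + 0.5) / n_filters)
                for m, le in enumerate(log_energies))
            for c in range(n_coeffs)]

frame = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(64)]  # 440 Hz tone
coeffs = mfcc_frame(frame, 8000)
```

In practice the MCU would compute these coefficients per frame over the whole utterance and upload the resulting feature matrix, not a single frame's vector.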
13) The MCU encrypts the voiceprint feature values and transmits them to the transaction back end.
14) The transaction back end performs a significance analysis between the uploaded voiceprint feature values and the cardholder voiceprint information reserved at the bank: if the significance is insufficient, voiceprint recognition has failed and the POS is told to terminate the transaction; if the significance is clear, voiceprint recognition has succeeded, the back-end transaction is performed, and the result is reported to the POS.
15) The POS presents the transaction result to the customer.
In summary, the present invention provides a face- and voiceprint-based payment authentication method and terminal. By determining whether the voiceprint information contains high-frequency components exceeding a preset second threshold, payment authentication using spliced recordings can be prevented, since spliced recordings contain high-frequency components at the splice points. In computer-synthesized images, numerous spatial-domain discontinuities appear where the synthesis is stitched together, such as at the edges of the face, eyes, and mouth, corresponding to a large amount of high-frequency information in the frequency domain; therefore, by determining whether the face image information contains high-frequency information exceeding a preset threshold, attacks on payment authentication using computer-synthesized images can be prevented. Meanwhile, by analyzing the face information and the voiceprint information, the present invention can effectively determine whether the user is in a coerced state, making payment more secure and reliable; combining face and voiceprint recognition greatly reduces the risk of fraudulent use and strengthens transaction security and reliability. The cardholder's face information and voiceprint information, as sensitive information, are transmitted to the server in encrypted form, and only one-way uplink transmission of the encrypted feature parameters is allowed, avoiding leakage of sensitive information.
The above method adds high-frequency detection to face and voiceprint recognition to prevent face- or recording-synthesis attacks, and combines face recognition with the relatively hard-to-forge voiceprint for payment authentication, which can effectively defeat impersonation attacks and make payment safer.
The above description is merely an embodiment of the present invention and does not therefore limit the patent scope of the present invention; any equivalent transformation made using the contents of the specification and drawings of the present invention, whether applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present invention.

Claims (20)

  1. A face- and voiceprint-based payment authentication method, characterized by comprising the following steps:
    S1: according to face information, determining whether the face image information included in the face information contains high-frequency information exceeding a preset first threshold; and, according to voiceprint information, determining whether the voiceprint information contains high-frequency components exceeding a preset second threshold;
    S2: if neither is present, performing payment authentication according to the face image information and the voiceprint information.
  2. The face- and voiceprint-based payment authentication method according to claim 1, characterized in that, before S1, the method further comprises:
    S01: displaying the specified face-action information required for payment verification and the specified text information requiring voice input;
    S02: collecting face information and, at the same time, collecting voiceprint information, the face information including face video information and face image information;
    S03: determining whether the face action in the face video information matches the specified action, and whether the text corresponding to the voiceprint information matches the text corresponding to the specified text information;
    S04: if both match, performing step S1; otherwise, the payment authentication fails.
  3. The face- and voiceprint-based payment authentication method according to claim 2, characterized in that, between S03 and S04, the method further comprises:
    determining whether the lip-reading information in the face video information is synchronized with the audio information in the voiceprint information; if not synchronized, the payment authentication fails;
    if synchronized, determining whether the first text information corresponding to the lip-reading information matches the specified text information.
  4. The face- and voiceprint-based payment authentication method according to claim 2, characterized in that, between S02 and S03, the method further comprises:
    performing noise reduction and filtering on the face information and the voiceprint information respectively.
  5. The face- and voiceprint-based payment authentication method according to claim 2, characterized in that displaying, in S01, the specified face-action information required for payment verification and the specified text information requiring voice input is specifically:
    randomly generating the face-action information and the specified text information requiring voice input, and displaying the specified action information and the specified text information.
  6. The face- and voiceprint-based payment authentication method according to claim 5, characterized in that, if the collection of the face information or the voiceprint information fails within a preset time, new specified action information and new specified text information are randomly displayed, and the face information and the voiceprint information are collected again.
  7. The face- and voiceprint-based payment authentication method according to claim 1, characterized in that, before S1, the method further comprises:
    acquiring multiple pieces of first face image information captured in a coerced state, and calculating a first feature parameter of each piece of first face image information;
    fitting all of the first feature parameters to obtain a first mathematical model between the coerced state and the first feature parameter;
    acquiring multiple pieces of first voiceprint information captured in a coerced state, and calculating a second feature parameter of each piece of first voiceprint information;
    fitting all of the second feature parameters to obtain a second mathematical model between the coerced state and the second feature parameter.
  8. The face- and voiceprint-based payment authentication method according to claim 7, characterized in that, before S2, the method further comprises:
    calculating face feature parameters of the face image information in the acquired face information;
    calculating voiceprint feature parameters of the acquired voiceprint information;
    determining, according to the first mathematical model and the face feature parameters, whether the user is in a coerced state;
    determining, according to the second mathematical model and the voiceprint feature parameters, whether the user is in a coerced state.
  9. The face- and voiceprint-based payment authentication method according to claim 8, characterized in that S2 is specifically:
    if all of the above determinations are negative, encrypting the face feature parameters and the voiceprint feature parameters and sending them to a server, so that the server performs a significance analysis of the face feature parameters against first face feature parameters corresponding to pre-stored face information, and of the voiceprint feature parameters against first voiceprint feature parameters corresponding to pre-stored voiceprint information, to obtain significance analysis results;
    determining, according to the significance analysis results, whether the payment authentication passes.
  10. The face- and voiceprint-based payment authentication method according to claim 1, characterized in that, before S1, the method further comprises:
    performing an authentication judgment between a preset transaction terminal and a server; if the authentication fails, the payment authentication fails and the transaction is terminated;
    if the authentication succeeds, acquiring current location information of the transaction terminal;
    encrypting the current location information to obtain location-encrypted information;
    sending the location-encrypted information to the server, so that the server saves the location-encrypted information in preset security log information.
  11. 一种基于人脸和声纹的支付认证终端,包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,其特征在于,所述处理器执行所述程序时实现以下步骤:A face and voiceprint based payment authentication terminal includes a memory, a processor, and a computer program stored on the memory and operable on the processor, wherein the processor implements the following steps when executing the program :
    S1:根据人脸信息,判断人脸信息中所包括的人脸图像信息中是否存在大于预设第一阈值的高频信息;以及根据声纹信息,判断声纹信息中是否存在大于预设第二阈值的高频分量;S1: determining, according to the face information, whether there is high frequency information greater than a preset first threshold in the face image information included in the face information; and determining, according to the voiceprint information, whether the voiceprint information is greater than a preset number a high frequency component of the second threshold;
    S2:若均否,则根据人脸图像信息及声纹信息进行支付认证。S2: If no, the payment authentication is performed according to the face image information and the voiceprint information.
  12. 根据权利要求11所述的一种基于人脸和声纹的支付认证终端,其特征在于,所述S1之前还包括:The face authentication and voiceprint-based payment authentication terminal according to claim 11, wherein the S1 before:
    S01:显示支付验证时所需要的脸部指定动作信息及需要语音输入的指定文字信息;S01: displaying face specified action information required for payment verification and designated text information requiring voice input;
    S02:采集人脸信息的同时,同时声纹信息;所述人脸信息包括人脸视频信息及人脸图像信息;S02: Simultaneously collecting voice information while simultaneously generating voice information; the face information includes face video information and face image information;
    S03:判断人脸视频信息中的人脸动作是否与指定动作一致,以及判断声纹信息对应的文字与指定文字信息对应的文字是否一致;S03: determining whether the face motion in the face video information is consistent with the specified action, and determining whether the text corresponding to the voiceprint information is consistent with the text corresponding to the specified text information;
    S04:若均一致,则执行步骤S1,否则支付认证失败。S04: If they are all consistent, step S1 is performed, otherwise the payment authentication fails.
  13. 根据权利要求12所述的一种基于人脸和声纹的支付认证终端,其特征在于,所述S03和S04之间还包括:The face authentication and voiceprint-based payment authentication terminal according to claim 12, wherein the S03 and the S04 further comprise:
    判断所述人脸视频信息中唇语信息与声纹信息中的音频信息是否同步,若不同步,则支付认证失败;Determining whether the audio information in the lip language information and the voiceprint information in the face video information is synchronized, and if not, the payment authentication fails;
    若同步,则判断唇语信息对应的第一文字信息与指定文字信息是否一致。If the synchronization is performed, it is determined whether the first text information corresponding to the lip language information is consistent with the specified text information.
  14. The face- and voiceprint-based payment authentication terminal according to claim 12, wherein between S02 and S03, the method further comprises:
    performing noise reduction and filtering on the face information and the voiceprint information respectively.
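The claim does not name a filtering technique; for a one-dimensional signal such as the audio track, a moving-average filter is about the simplest possible stand-in (a real terminal would likely use spectral denoising for audio and frame-level image denoising for video):

```python
def moving_average(signal, window=3):
    """Simple moving-average filter as a stand-in for the claim's
    noise-reduction/filtering step; edge samples use a truncated
    window so the output has the same length as the input."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out
```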
  15. The face- and voiceprint-based payment authentication terminal according to claim 12, wherein displaying, in S01, the specified facial action information required for payment verification and the specified text information to be entered by voice specifically comprises:
    randomly generating the specified facial action information and the specified text information to be entered by voice, and displaying the specified action information and the specified text information.
  16. The face- and voiceprint-based payment authentication terminal according to claim 15, wherein if collection of the face information or the voiceprint information fails within a preset time, new specified action information and new specified text information are randomly displayed, and the face information and the voiceprint information are collected again.
  17. The face- and voiceprint-based payment authentication terminal according to claim 11, wherein before S1, the method further comprises:
    acquiring multiple pieces of first face image information captured under duress, and calculating a first feature parameter of each piece of first face image information;
    fitting all of the first feature parameters to obtain a first mathematical model between the duress state and the first feature parameter;
    acquiring multiple pieces of first voiceprint information captured under duress, and calculating a second feature parameter of each piece of first voiceprint information;
    fitting all of the second feature parameters to obtain a second mathematical model between the duress state and the second feature parameter.
  18. The face- and voiceprint-based payment authentication terminal according to claim 17, wherein before S2, the method further comprises:
    calculating a face feature parameter of the face image information in the acquired face information;
    calculating a voiceprint feature parameter of the acquired voiceprint information;
    determining, according to the first mathematical model and the face feature parameter, whether the user is under duress;
    determining, according to the second mathematical model and the voiceprint feature parameter, whether the user is under duress.
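Claims 17 and 18 together describe fitting a model on feature parameters observed under duress, then scoring a live sample against it. The claims only require "a mathematical model" obtained by fitting, so the Gaussian (mean, standard deviation) with a z-score test below is one illustrative choice, not the patented method:

```python
import math

def fit_duress_model(duress_samples):
    """Claim 17: fit the feature parameters observed under duress.
    A Gaussian summarized by (mean, std) is an assumed model form."""
    n = len(duress_samples)
    mean = sum(duress_samples) / n
    var = sum((x - mean) ** 2 for x in duress_samples) / n
    return mean, max(math.sqrt(var), 1e-9)  # avoid a zero std

def is_under_duress(model, feature_param, z_threshold=2.0):
    """Claim 18: a live feature parameter falling within z_threshold
    standard deviations of the duress distribution is judged as duress.
    The threshold is an illustrative assumption."""
    mean, std = model
    return abs(feature_param - mean) / std <= z_threshold
```

Per claim 18 the judgment is made twice, once with the face model and once with the voiceprint model; the "if both determinations are negative" branch of claim 19 fires only when neither indicates duress.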
  19. The face- and voiceprint-based payment authentication terminal according to claim 18, wherein S2 is specifically:
    if neither determination indicates duress, encrypting the face feature parameter and the voiceprint feature parameter and sending them to a server, so that the server performs a significance analysis between the face feature parameter and a first face feature parameter corresponding to pre-stored face information, and between the voiceprint feature parameter and a first voiceprint feature parameter corresponding to pre-stored voiceprint information, to obtain a significance analysis result;
    determining, according to the significance analysis result, whether the payment authentication passes.
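The claim calls for a server-side "significance analysis" of the live feature parameters against the pre-stored templates without detailing it. A common stand-in for such a comparison is a distance threshold on feature vectors, sketched below; the thresholds and the Euclidean metric are assumptions, not the claimed analysis:

```python
def feature_distance(live, enrolled):
    """Euclidean distance between a live feature vector and the
    pre-stored (enrolled) template."""
    return sum((x - y) ** 2 for x, y in zip(live, enrolled)) ** 0.5

def payment_authenticated(face_live, face_enrolled, voice_live, voice_enrolled,
                          face_threshold=0.5, voice_threshold=0.5):
    """Stand-in for the server-side 'significance analysis' result:
    both modalities must match their pre-stored feature parameters.
    Thresholds here are illustrative assumptions."""
    return (feature_distance(face_live, face_enrolled) <= face_threshold
            and feature_distance(voice_live, voice_enrolled) <= voice_threshold)
```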
  20. The face- and voiceprint-based payment authentication terminal according to claim 11, wherein before S1, the method further comprises:
    performing an authentication check between a preset transaction terminal and a server; if the authentication fails, the payment authentication fails and the transaction is ended;
    if the authentication succeeds, acquiring current location information of the transaction terminal;
    encrypting the current location information to obtain location-encrypted information;
    sending the location-encrypted information to the server, so that the server saves the location-encrypted information in preset security log information.
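The shape of this step — serialize the terminal's position, encrypt it, send it, and have the server recover it before logging — can be sketched as below. The claim does not name a cipher; the SHA-256 counter keystream here is a toy stand-in chosen only to keep the sketch self-contained, and a real terminal would use an authenticated cipher such as AES-GCM:

```python
import hashlib
import json
import time

def _keystream(key: bytes, n: int) -> bytes:
    """Toy SHA-256 counter keystream; NOT production cryptography."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt_location(key: bytes, lat: float, lon: float) -> bytes:
    """Serialize the terminal's current position (with a timestamp)
    and XOR it with the keystream, yielding the location-encrypted
    information to be sent to the server."""
    plaintext = json.dumps({"lat": lat, "lon": lon, "ts": int(time.time())}).encode()
    ks = _keystream(key, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

def decrypt_location(key: bytes, ciphertext: bytes) -> dict:
    """Server side: recover the position before writing it into the
    preset security log."""
    ks = _keystream(key, len(ciphertext))
    return json.loads(bytes(c ^ k for c, k in zip(ciphertext, ks)).decode())
```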
PCT/CN2017/115617 2017-12-12 2017-12-12 Face and voiceprint-based payment authentication method, and terminal WO2019113776A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2017/115617 WO2019113776A1 (en) 2017-12-12 2017-12-12 Face and voiceprint-based payment authentication method, and terminal
CN201780002078.9A CN108124488A (en) 2017-12-12 2017-12-12 A kind of payment authentication method and terminal based on face and vocal print

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/115617 WO2019113776A1 (en) 2017-12-12 2017-12-12 Face and voiceprint-based payment authentication method, and terminal

Publications (1)

Publication Number Publication Date
WO2019113776A1 true WO2019113776A1 (en) 2019-06-20

Family

ID=62233644

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/115617 WO2019113776A1 (en) 2017-12-12 2017-12-12 Face and voiceprint-based payment authentication method, and terminal

Country Status (2)

Country Link
CN (1) CN108124488A (en)
WO (1) WO2019113776A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112766973A (en) * 2021-01-19 2021-05-07 湖南校智付网络科技有限公司 Face payment terminal
CN115171312A (en) * 2022-06-28 2022-10-11 重庆京东方智慧科技有限公司 Image processing method, device, equipment, monitoring system and storage medium

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108510364A (en) * 2018-03-30 2018-09-07 杭州法奈昇科技有限公司 Big data intelligent shopping guide system based on voiceprint identification
CN108805678A (en) * 2018-06-14 2018-11-13 安徽鼎龙网络传媒有限公司 A kind of micro- scene management backstage wechat store synthesis measuring system
CN109214820B (en) * 2018-07-06 2021-12-21 厦门快商通信息技术有限公司 Merchant money collection system and method based on audio and video combination
CN108846676B (en) * 2018-08-02 2023-07-11 平安科技(深圳)有限公司 Biological feature auxiliary payment method, device, computer equipment and storage medium
CN109359982B (en) * 2018-09-02 2020-11-27 珠海横琴现联盛科技发展有限公司 Payment information confirmation method combining face and voiceprint recognition
CN109784175A (en) * 2018-12-14 2019-05-21 深圳壹账通智能科技有限公司 Abnormal behaviour people recognition methods, equipment and storage medium based on micro- Expression Recognition
CN109977646B (en) * 2019-04-01 2021-10-12 杭州城市大数据运营有限公司 Intelligent safety verification method
CN110363148A (en) * 2019-07-16 2019-10-22 中用科技有限公司 A kind of method of face vocal print feature fusion verifying
CN110688641A (en) * 2019-09-30 2020-01-14 联想(北京)有限公司 Information processing method and electronic equipment
CN111861495A (en) * 2020-08-06 2020-10-30 中国银行股份有限公司 Transfer processing method and device
CN112150740B (en) * 2020-09-10 2022-02-22 福建创识科技股份有限公司 Non-inductive secure payment system and method
CN112733636A (en) * 2020-12-29 2021-04-30 北京旷视科技有限公司 Living body detection method, living body detection device, living body detection apparatus, and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6219639B1 (en) * 1998-04-28 2001-04-17 International Business Machines Corporation Method and apparatus for recognizing identity of individuals employing synchronized biometrics
CN104680375A (en) * 2015-02-28 2015-06-03 优化科技(苏州)有限公司 Identification verifying system for living human body for electronic payment
CN105119872A (en) * 2015-02-13 2015-12-02 腾讯科技(深圳)有限公司 Identity verification method, client, and service platform
CN105426723A (en) * 2015-11-20 2016-03-23 北京得意音通技术有限责任公司 Voiceprint identification, face identification and synchronous in-vivo detection-based identity authentication method and system
CN105718874A (en) * 2016-01-18 2016-06-29 北京天诚盛业科技有限公司 Method and device of in-vivo detection and authentication
CN108194488A (en) * 2017-12-28 2018-06-22 宁波群力紧固件制造有限公司 A kind of anti-derotation screw

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004021748A (en) * 2002-06-18 2004-01-22 Nec Corp Method for notifying authentication information, authentication system and information terminal device
US20070038868A1 (en) * 2005-08-15 2007-02-15 Top Digital Co., Ltd. Voiceprint-lock system for electronic data
CN101999900B (en) * 2009-08-28 2013-04-17 南京壹进制信息技术有限公司 Living body detecting method and system applied to human face recognition
CN102110303B (en) * 2011-03-10 2012-07-04 西安电子科技大学 Method for compounding face fake portrait\fake photo based on support vector return
CN104143078B (en) * 2013-05-09 2016-08-24 腾讯科技(深圳)有限公司 Living body faces recognition methods, device and equipment
US9990555B2 (en) * 2015-04-30 2018-06-05 Beijing Kuangshi Technology Co., Ltd. Video detection method, video detection system and computer program product
CN105320947B (en) * 2015-11-04 2019-03-01 博宏信息技术有限公司 A kind of human face in-vivo detection method based on illumination component
CN106156730B (en) * 2016-06-30 2019-03-15 腾讯科技(深圳)有限公司 A kind of synthetic method and device of facial image
CN106782565A (en) * 2016-11-29 2017-05-31 重庆重智机器人研究院有限公司 A kind of vocal print feature recognition methods and system
CN106982426A (en) * 2017-03-30 2017-07-25 广东微模式软件股份有限公司 A kind of method and system for remotely realizing old card system of real name


Also Published As

Publication number Publication date
CN108124488A (en) 2018-06-05

Similar Documents

Publication Publication Date Title
WO2019113776A1 (en) Face and voiceprint-based payment authentication method, and terminal
Crouse et al. Continuous authentication of mobile user: Fusion of face image and inertial measurement unit data
CN108804884B (en) Identity authentication method, identity authentication device and computer storage medium
JP4578244B2 (en) Method for performing secure electronic transactions using portable data storage media
US20210166241A1 (en) Methods, apparatuses, storage mediums and terminal devices for authentication
KR102210775B1 (en) Using the ability to speak as a human interactive proof
WO2019114376A1 (en) Document verification method, device, electronic device, and storage medium
US11769152B2 (en) Verifying user identities during transactions using identification tokens that include user face data
US10068224B2 (en) Near field authentication through communication of enclosed content sound waves
US20180247314A1 (en) Voice filter system
CN103310339A (en) Identity recognition device and method as well as payment system and method
CN113168437A (en) Voice authentication
US20190065874A1 (en) System and method of authentication using image of a user
CN106911630A (en) Terminal and the authentication method and system of identity identifying method, terminal and authentication center
KR20220061919A (en) Method and server for providing service of disital signature based on face recognition
KR20220136963A (en) System and method for non-face-to-face identification kyc solution having excellent security
US11044250B2 (en) Biometric one touch system
CN114422144A (en) Method, system, equipment and storage medium for improving reliability of chain certificate of scene certificate block
CN117853103A (en) Payment system activation method based on intelligent bracelet
CN117688533A (en) Electronic signature method, electronic signature verification method and system based on artificial intelligence
WO2019113765A1 (en) Face and electrocardiogram-based payment authentication method and terminal
JPWO2009051250A1 (en) Registration device, authentication device, registration method, and authentication method
TWM623959U (en) Identification authentication device
CN107959669B (en) Password verification method for handheld mobile communication device
US8886952B1 (en) Method of controlling a transaction

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17934621

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17934621

Country of ref document: EP

Kind code of ref document: A1