WO2019113776A1 - Face and voiceprint recognition-based payment authentication method and terminal - Google Patents

Face and voiceprint recognition-based payment authentication method and terminal

Info

Publication number
WO2019113776A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
face
voiceprint
payment authentication
feature parameter
Prior art date
Application number
PCT/CN2017/115617
Other languages
English (en)
Chinese (zh)
Inventor
张炽成
唐超旬
Original Assignee
福建联迪商用设备有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 福建联迪商用设备有限公司
Priority to PCT/CN2017/115617 priority Critical patent/WO2019113776A1/fr
Priority to CN201780002078.9A priority patent/CN108124488A/zh
Publication of WO2019113776A1 publication Critical patent/WO2019113776A1/fr


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00Payment architectures, schemes or protocols
    • G06Q20/38Payment protocols; Details thereof
    • G06Q20/40Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
    • G06Q20/401Transaction verification
    • G06Q20/4014Identity check for transactions
    • G06Q20/40145Biometric identity checks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00Payment architectures, schemes or protocols
    • G06Q20/38Payment protocols; Details thereof
    • G06Q20/382Payment protocols; Details thereof insuring higher security of transaction
    • G06Q20/3823Payment protocols; Details thereof insuring higher security of transaction combining multiple encryption tools for a transaction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00Speaker identification or verification techniques
    • G10L17/22Interactive procedures; Man-machine interfaces

Definitions

  • the present invention relates to the field of electronic payment technologies, and in particular, to a payment authentication method and a terminal based on a face and a voiceprint.
  • Current payment authentication methods mainly rely on fingerprint or face recognition, which has the following disadvantages. Biometrics are easy to steal: fingerprint information is left behind whenever the payer touches an object and is non-living information, while facial image information is inherently public and can easily be captured by video or photography.
  • Stolen biometrics are also easy to exploit: stolen fingerprint and facial information can be used to attack the payment device through fingerprint reproduction and image synthesis techniques respectively, thereby accomplishing the theft.
  • The technical problem to be solved by the present invention is addressed by providing a face and voiceprint based payment authentication method and terminal that improve the security of payment authentication.
  • the present invention provides a payment authentication method based on a face and a voiceprint, comprising the following steps:
  • S1: determining, according to the face information, whether the face image information contained in the face information includes high-frequency information greater than a preset first threshold; and determining, according to the voiceprint information, whether the voiceprint information contains a high-frequency component greater than a preset second threshold;
  • The present invention also provides a face and voiceprint based payment authentication terminal comprising a memory, a processor, and a computer program stored on the memory and operable on the processor, the processor implementing the following steps when executing the program:
  • S1: determining, according to the face information, whether the face image information contained in the face information includes high-frequency information greater than a preset first threshold; and determining, according to the voiceprint information, whether the voiceprint information contains a high-frequency component greater than a preset second threshold;
  • the invention provides a payment authentication method and a terminal based on a face and a voiceprint.
  • By determining whether the face image information contains high-frequency information greater than a preset threshold, authentication attacks using computer-synthesized image information can be prevented (an image synthesized by a computer exhibits a large number of spatial jumps at the edges of the face, eyes and mouth, which correspond to a large amount of high-frequency information in the frequency domain); at the same time, the voiceprint information is checked in the same way.
  • The above method adds high-frequency detection to face and voiceprint recognition to prevent face or recording synthesis attacks, and combines face recognition with voiceprint recognition, which is relatively difficult to fake, for payment authentication; this effectively avoids impersonation attacks and makes payment more secure.
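As a rough illustration of the image-side high-frequency check described above, the sketch below measures how much spectral energy of a grayscale face image lies above a radial frequency cutoff of a 2-D FFT. The cutoff and the "first threshold" value are illustrative assumptions, not values specified in the patent.

```python
import numpy as np

def high_freq_ratio(gray_image: np.ndarray, cutoff: float = 0.25) -> float:
    """Return the fraction of spectral energy above a normalized radial frequency cutoff."""
    spectrum = np.fft.fftshift(np.fft.fft2(gray_image))
    power = np.abs(spectrum) ** 2
    h, w = gray_image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalized radial distance from the spectrum centre (0 = DC, 1 = Nyquist corner).
    radius = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2)) / np.sqrt(2)
    return float(power[radius > cutoff].sum() / power.sum())

def looks_synthesized(gray_image: np.ndarray, first_threshold: float = 0.15) -> bool:
    # Computer-synthesized composites show spatial jumps at face/eye/mouth edges,
    # i.e. excess high-frequency energy, so an unusually large ratio fails the check.
    return high_freq_ratio(gray_image) > first_threshold
```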
  • FIG. 1 is a schematic diagram showing main steps of a face and voiceprint based payment authentication method according to an embodiment of the present invention
  • FIG. 2 is a schematic structural diagram of a face and voiceprint based payment authentication terminal according to an embodiment of the present invention
  • the present invention provides a payment authentication method based on face and voiceprint, which includes the following steps:
  • S1: determining, according to the face information, whether the face image information contained in the face information includes high-frequency information greater than a preset first threshold; and determining, according to the voiceprint information, whether the voiceprint information contains a high-frequency component greater than a preset second threshold;
  • The present invention provides a payment authentication method based on a face and a voiceprint. By determining whether the face image information contains high-frequency information greater than a preset threshold, authentication attacks using computer-synthesized images can be prevented (an image synthesized by a computer exhibits a large number of spatial jumps at the edges of the face, eyes and mouth, which correspond to a large amount of high-frequency information in the frequency domain); by determining whether the voiceprint information contains a high-frequency component greater than a preset second threshold, payment authentication with spliced recordings can be prevented (spliced recording information contains high-frequency components at the splice points). The method thus adds high-frequency detection to face and voiceprint recognition to prevent face or recording synthesis attacks, and combines face recognition with the relatively hard-to-fake voiceprint for payment authentication, effectively avoiding impersonation attacks and making payment more secure.
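For the voiceprint side, spliced recordings tend to show abrupt high-frequency bursts at the joins. The following hedged sketch flags frames whose high-frequency energy spikes far above the recording's median; the 4 kHz band edge and the "second threshold" factor are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import stft

def has_splice_bursts(audio: np.ndarray, fs: int, second_threshold: float = 6.0) -> bool:
    """Return True if any short-time frame shows an anomalous high-frequency energy burst."""
    freqs, times, Z = stft(audio, fs=fs, nperseg=512)
    power = np.abs(Z) ** 2
    band = freqs >= 4000                     # high-frequency band of interest (assumed)
    hf_energy = power[band].sum(axis=0)      # high-frequency energy per frame
    baseline = np.median(hf_energy) + 1e-12
    # A frame whose high-frequency energy greatly exceeds the median suggests a splice point.
    return bool(np.any(hf_energy > second_threshold * baseline))
```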
  • the method further includes:
  • S02: collecting face information and, at the same time, collecting voiceprint information;
  • the face information includes face video information and face image information;
  • S03 determining whether the face motion in the face video information is consistent with the specified action, and determining whether the text corresponding to the voiceprint information is consistent with the text corresponding to the specified text information;
  • S04: if they are all consistent, step S1 is performed; otherwise the payment authentication fails.
  • The collected face information is the face video information and face image information captured by the camera device while the user performs the corresponding face action according to the displayed specified action information; the collected voiceprint information is the voiceprint information obtained when the user reads aloud the displayed specified text information. Since the user does not know the specified action information and the specified text information in advance, the above verification method improves the security of the payment verification.
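A minimal sketch of this challenge flow is given below, assuming upstream recognizers have already produced an action label from the face video and a transcript from the audio; the helper names and the action list are hypothetical, not part of the patent.

```python
import random

FACE_ACTIONS = ["blink", "open_mouth", "turn_head"]

def issue_challenge(phrases):
    """S01-style step: randomly pick a face action and a phrase the user must read aloud."""
    return random.choice(FACE_ACTIONS), random.choice(phrases)

def challenge_passed(prompted_action, recognized_action, prompted_text, transcribed_text):
    """S03/S04-style check: both the performed action and the spoken text must match the prompt."""
    action_ok = recognized_action == prompted_action
    text_ok = transcribed_text.strip().lower() == prompted_text.strip().lower()
    return action_ok and text_ok  # False means the payment authentication fails
```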
  • Between S03 and S04, the method further includes:
  • determining whether the lip information in the face video information is synchronized with the audio information in the voiceprint information; if not synchronized, the payment authentication fails; if synchronized, determining whether the first text information corresponding to the lip language information is consistent with the specified text information.
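One simple way to approximate the lip/audio synchrony check is to correlate a per-frame mouth-opening series (from the face video) with the audio envelope resampled to the same frame rate. The correlation floor below is an illustrative assumption, not a value from the patent.

```python
import numpy as np

def lips_and_audio_synchronized(mouth_opening: np.ndarray,
                                audio_envelope: np.ndarray,
                                min_corr: float = 0.5) -> bool:
    """Return True if mouth movement and audio loudness rise and fall together."""
    n = min(len(mouth_opening), len(audio_envelope))
    a = mouth_opening[:n] - mouth_opening[:n].mean()
    b = audio_envelope[:n] - audio_envelope[:n].mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
    return float(np.dot(a, b) / denom) >= min_corr
```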
  • Between S02 and S03, the method further includes:
  • the face information and the voiceprint information are separately subjected to noise reduction and filtering processing.
  • the accuracy of data processing can be improved by the above method.
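A hedged preprocessing sketch follows: Gaussian denoising for the face image and a band-pass filter for the voice signal before the consistency checks. The kernel size and the 300–3400 Hz speech band are assumptions, not values specified in the patent.

```python
import cv2
import numpy as np
from scipy.signal import butter, filtfilt

def denoise_face(gray_image: np.ndarray) -> np.ndarray:
    # Light Gaussian denoising before comparing the face action with the prompt.
    return cv2.GaussianBlur(gray_image, (5, 5), 0)

def bandpass_voice(audio: np.ndarray, fs: int,
                   low: float = 300.0, high: float = 3400.0) -> np.ndarray:
    # Keep the speech band and suppress out-of-band noise before the text/voiceprint checks.
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, audio)
```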
  • the face specified action information and the specified text information requiring voice input are randomly generated, and the specified action information and the specified text information are displayed.
  • If the face information or the voiceprint information fails to be collected within a preset time, new specified action information and new specified text information are randomly generated and displayed, and the face information and the voiceprint information are re-acquired.
  • the method further includes:
  • The first mathematical model of the voiceprint information under the stress state and the second mathematical model of the face image information under the stress state can be established by the above method, so that the state of the user can be accurately determined subsequently, increasing the security of payment.
  • the method further includes:
  • According to the second mathematical model and the voiceprint feature parameter, it is determined whether the user is in a state of being coerced.
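A minimal sketch of this coercion check, assuming the trained mathematical models are available as binary classifiers with a scikit-learn-style `predict_proba` API; the fusion rule and threshold are illustrative choices.

```python
import numpy as np

def user_is_coerced(face_model, voice_model,
                    face_features: np.ndarray, voice_features: np.ndarray,
                    threshold: float = 0.5) -> bool:
    """Score both modalities against their stress-state models and fuse the results."""
    p_face = face_model.predict_proba(face_features.reshape(1, -1))[0, 1]
    p_voice = voice_model.predict_proba(voice_features.reshape(1, -1))[0, 1]
    # If either modality suggests duress, the transaction is refused.
    return max(p_face, p_voice) >= threshold
```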
  • S2 is specifically:
  • The face feature parameters and the voiceprint feature parameters are encrypted during transmission, which prevents the user data from being stolen and causing economic loss to the user; at the same time, the significance analysis can accurately determine whether the face feature parameters match the feature parameters corresponding to the pre-stored face information, and whether the voiceprint feature parameters match the feature parameters corresponding to the pre-stored voiceprint information. This double verification improves the security of payment authentication.
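The sketch below illustrates encrypting feature parameters before upload and comparing them against pre-stored templates. AES-GCM (via the `cryptography` package, 128/192/256-bit key) and a cosine-similarity threshold standing in for the "significance analysis" are assumptions; the patent does not name a cipher or a matching metric.

```python
import os
import numpy as np
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_features(features: np.ndarray, key: bytes) -> tuple:
    """Encrypt a feature vector for uplink transmission; key must be 16/24/32 bytes."""
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, features.astype(np.float32).tobytes(), None)
    return nonce, ciphertext

def features_match(uploaded: np.ndarray, stored: np.ndarray, min_sim: float = 0.85) -> bool:
    """Cosine similarity as a stand-in for the significance analysis on the server side."""
    sim = float(np.dot(uploaded, stored) /
                (np.linalg.norm(uploaded) * np.linalg.norm(stored) + 1e-12))
    return sim >= min_sim
```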
  • S1 is specifically:
  • the method further includes:
  • Storing the location information of the transaction during the transaction process makes the transaction location traceable.
  • The present invention provides a face and voiceprint based payment authentication terminal, comprising a memory 1, a processor 2, and a computer program stored on the memory 1 and operable on the processor 2, characterized in that the processor 2 implements the following steps when executing the program:
  • S1: determining, according to the face information, whether the face image information contained in the face information includes high-frequency information greater than a preset first threshold; and determining, according to the voiceprint information, whether the voiceprint information contains a high-frequency component greater than a preset second threshold;
  • the above-mentioned face and voiceprint-based payment authentication terminal further includes:
  • S02: collecting face information and, at the same time, collecting voiceprint information;
  • the face information includes face video information and face image information;
  • S03 determining whether the face motion in the face video information is consistent with the specified action, and determining whether the text corresponding to the voiceprint information is consistent with the text corresponding to the specified text information;
  • S04: if they are all consistent, step S1 is performed; otherwise the payment authentication fails.
  • the above-mentioned face and voiceprint-based payment authentication terminal further includes: between S03 and S04:
  • determining whether the lip information in the face video information is synchronized with the audio information in the voiceprint information; if not synchronized, the payment authentication fails; if synchronized, determining whether the first text information corresponding to the lip language information is consistent with the specified text information.
  • the above-mentioned face and voiceprint-based payment authentication terminal further includes: between S02 and S03:
  • the face information and the voiceprint information are separately subjected to noise reduction and filtering processing.
  • In the above-mentioned face and voiceprint-based payment authentication terminal, displaying in S01 the face specified action information required for payment verification and the specified text information requiring voice input is specifically:
  • the face specified action information and the specified text information requiring voice input are randomly generated, and the specified action information and the specified text information are displayed.
  • The face and voiceprint based payment authentication terminal is configured to randomly generate and display new specified action information and new specified text information if the face information or the voiceprint information fails to be collected within a preset time, and to re-collect the face information and the voiceprint information.
  • the above-mentioned face and voiceprint-based payment authentication terminal further includes:
  • The above-mentioned face and voiceprint-based payment authentication terminal, before S2, further includes:
  • According to the second mathematical model and the voiceprint feature parameter, it is determined whether the user is in a state of being coerced.
  • the S2 is specifically:
  • the above-mentioned face and voiceprint-based payment authentication terminal further includes:
  • a first embodiment of the present invention is:
  • the invention provides a payment authentication method based on face and voiceprint, comprising the following steps:
  • Step S0: randomly generating face specified action information and specified text information requiring voice input, and displaying the specified action information and specified text information; collecting face information and, at the same time, collecting voiceprint information, the face information including face video information and face image information; after performing noise reduction and filtering processing on the face information and the voiceprint information respectively, determining whether the face motion in the face video information is consistent with the specified action, and determining whether the text corresponding to the voiceprint information is consistent with the text corresponding to the specified text information; determining whether the lip information in the face video information is synchronized with the audio information in the voiceprint information; if not synchronized, the payment authentication fails; if synchronized, determining whether the first text information corresponding to the lip language information is consistent with the specified text information; if all checks are consistent, step S1 is performed, otherwise the payment authentication fails;
  • If the face information or the voiceprint information fails to be collected within a preset time, step S0 is re-executed.
  • S1: determining, according to the face information, whether the face image information contained in the face information includes high-frequency information greater than a preset first threshold; and determining, according to the voiceprint information, whether the voiceprint information contains a high-frequency component greater than a preset second threshold;
  • Embodiment 2 of the present invention is:
  • the invention provides a payment authentication method based on face and voiceprint, comprising the following steps:
  • Reading the second characteristic parameter corresponding to each sample (voiceprint information), and fitting all the second characteristic parameters with a deep-learning convolutional neural network (i.e. defining the neural network, collecting the original data, performing classification training, correcting, and outputting a result), thereby obtaining a second mathematical model between the coerced state and the second characteristic parameter;
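A hedged sketch of the "define network, collect data, train, correct, output" loop follows, using a small 1-D convolutional network over voiceprint feature frames. The architecture, input sizes, and hyperparameters are assumptions for illustration, not the patent's model.

```python
import torch
import torch.nn as nn

class StressNet(nn.Module):
    """Tiny 1-D CNN mapping voiceprint feature frames to normal/coerced classes."""
    def __init__(self, n_features: int = 13):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(64, 2)          # classes: 0 = normal, 1 = coerced

    def forward(self, x):                   # x: (batch, n_features, n_frames)
        return self.fc(self.conv(x).squeeze(-1))

def train_stress_model(model, loader, epochs: int = 10):
    """'Classification training' plus 'correction' via gradient descent."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for features, labels in loader:     # labels: 0 = normal, 1 = coerced
            opt.zero_grad()
            loss = loss_fn(model(features), labels)
            loss.backward()
            opt.step()
    return model
```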
  • The preset transaction terminal and the server first perform authentication; if the authentication fails, the payment authentication fails and the transaction is terminated.
  • If the authentication succeeds, acquiring current location information of the transaction terminal, encrypting the current location information to obtain location encryption information, and transmitting the location encryption information to the server, so that the server saves the location encryption information in preset security log information;
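A minimal sketch of this location-traceability step is shown below: the terminal encrypts its current location and the server appends it to a security log. The key handling, payload fields and log format are assumptions for illustration.

```python
import json
import os
import time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_location(latitude: float, longitude: float, key: bytes) -> dict:
    """Produce the 'location encryption information' sent to the server."""
    nonce = os.urandom(12)
    payload = json.dumps({"lat": latitude, "lon": longitude, "ts": time.time()}).encode()
    return {"nonce": nonce.hex(),
            "ciphertext": AESGCM(key).encrypt(nonce, payload, None).hex()}

def append_to_security_log(entry: dict, path: str = "security_log.jsonl") -> None:
    """Server side: keep encrypted location entries so the transaction location is traceable."""
    with open(path, "a") as log:
        log.write(json.dumps(entry) + "\n")
```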
  • The face specified action information and the specified text information requiring voice input are randomly generated, and the specified action information and the specified text information are displayed; while the face information is collected, the voiceprint information is collected simultaneously; the face information includes the face video information and the face image information. After performing noise reduction and filtering processing on the face information and the voiceprint information, it is determined whether the face motion in the face video information is consistent with the specified action, and whether the text corresponding to the voiceprint information is consistent with the text corresponding to the specified text information; it is determined whether the lip information in the face video information is synchronized with the audio information in the voiceprint information; if not synchronized, the payment authentication fails; if synchronized, it is determined whether the first text information corresponding to the lip language information is consistent with the specified text information; if any check is inconsistent, the payment authentication fails, otherwise the following steps are performed:
  • the third embodiment of the present invention is:
  • The present invention provides a face and voiceprint based payment authentication terminal comprising a memory, a processor, and a computer program stored on the memory and operable on the processor, wherein the processor implements the following steps when executing the program:
  • Step S0: randomly generating face specified action information and specified text information requiring voice input, and displaying the specified action information and specified text information; collecting face information and, at the same time, collecting voiceprint information, the face information including face video information and face image information; after performing noise reduction and filtering processing on the face information and the voiceprint information respectively, determining whether the face motion in the face video information is consistent with the specified action, and determining whether the text corresponding to the voiceprint information is consistent with the text corresponding to the specified text information; determining whether the lip information in the face video information is synchronized with the audio information in the voiceprint information; if not synchronized, the payment authentication fails; if synchronized, determining whether the first text information corresponding to the lip language information is consistent with the specified text information; if all checks are consistent, step S1 is performed, otherwise the payment authentication fails;
  • If the face information or the voiceprint information fails to be collected within a preset time, step S0 is re-executed.
  • S1: determining, according to the face information, whether the face image information contained in the face information includes high-frequency information greater than a preset first threshold; and determining, according to the voiceprint information, whether the voiceprint information contains a high-frequency component greater than a preset second threshold;
  • Embodiment 4 of the present invention is:
  • The present invention provides a face and voiceprint based payment authentication terminal comprising a memory, a processor, and a computer program stored on the memory and operable on the processor, wherein the processor implements the following steps when executing the program:
  • Reading the second characteristic parameter corresponding to each sample (voiceprint information), and fitting all the second characteristic parameters with a deep-learning convolutional neural network (i.e. defining the neural network, collecting the original data, performing classification training, correcting, and outputting a result), thereby obtaining a second mathematical model between the coerced state and the second characteristic parameter;
  • The preset transaction terminal and the server first perform authentication; if the authentication fails, the payment authentication fails and the transaction is terminated.
  • If the authentication succeeds, acquiring current location information of the transaction terminal, encrypting the current location information to obtain location encryption information, and transmitting the location encryption information to the server, so that the server saves the location encryption information in preset security log information;
  • The face specified action information and the specified text information requiring voice input are randomly generated, and the specified action information and the specified text information are displayed; while the face information is collected, the voiceprint information is collected simultaneously; the face information includes the face video information and the face image information. After performing noise reduction and filtering processing on the face information and the voiceprint information, it is determined whether the face motion in the face video information is consistent with the specified action, and whether the text corresponding to the voiceprint information is consistent with the text corresponding to the specified text information; it is determined whether the lip information in the face video information is synchronized with the audio information in the voiceprint information; if not synchronized, the payment authentication fails; if synchronized, it is determined whether the first text information corresponding to the lip language information is consistent with the specified text information; if any check is inconsistent, the payment authentication fails, otherwise the following steps are performed:
  • Embodiment 5 of the present invention is:
  • the invention provides a POS machine, comprising an MCU (micro control module), a camera, a microphone and a liquid crystal screen, wherein the MCU is electrically connected to the camera, the microphone and the liquid crystal screen respectively;
  • MCU (micro control module): electrically connected to the camera, the microphone and the liquid crystal screen respectively;
  • Before leaving the factory, the POS software undergoes extensive machine learning training, using tens of thousands of face information samples under normal emotions and under coercion, as well as tens of thousands of normal voice samples and voice samples under stress.
  • The recognition software reads the characteristic parameters of each training sample and fits all training samples into a calculation formula using a deep-learning convolutional neural network (the main steps are: defining the neural network, collecting the original data, performing classification training, correcting, and outputting results); the relationship between these parameters and emotions is thereby derived, so that the face recognition and voiceprint recognition software can identify whether the source of the information is under coercion.
  • The POS and the transaction background first authenticate each other. If the authentication fails, the POS has no transaction authority and the transaction ends; if the authentication succeeds, the POS has transaction authority, and encrypted communication between the POS and the transaction background is enabled.
  • the wireless module encrypts and uploads the current base station location at this time, and saves it as the content of the security log in the transaction background.
  • The internal MCU of the POS machine randomly generates written text information and, through the LCD screen, prompts the trader to read the corresponding text into the microphone; the MCU also randomly generates a specified face action (such as blinking, opening the mouth or turning the head) and, through the LCD screen, prompts the user so that the face information of the specified action can be collected by the camera.
  • the camera collects the face information of the trader, and at the same time, the microphone collects the voice information of the trader. While collecting sound information, face information is still collected for lip language calculation.
  • the MCU preprocesses the face information, including noise reduction and normalization processing.
  • The MCU checks the legality of the face information, including whether the face action is consistent with the prompt, whether high-frequency information exceeding the threshold is present, whether the trader is not in a nervous or fearful state, whether the lip language is consistent with the prompted information, and whether the lip language is synchronized with the recorded audio; if any check fails, the information is judged illegal, it is rejected, and the transaction is terminated.
  • the MCU calculates the feature values of the face information, including the geometric features of the eyes, nose, mouth, and the like of the face.
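A sketch of such geometric feature values follows, assuming facial landmarks (eyes, nose tip, mouth corners) have already been located by an upstream detector; which distances to use and the normalization are illustrative choices, not the patent's specification.

```python
import numpy as np

def geometric_face_features(landmarks: dict) -> np.ndarray:
    """landmarks: e.g. {'left_eye': (x, y), 'right_eye': (x, y), 'nose_tip': (x, y),
    'mouth_left': (x, y), 'mouth_right': (x, y)} in pixel coordinates."""
    p = {k: np.asarray(v, dtype=float) for k, v in landmarks.items()}
    eye_dist = np.linalg.norm(p["left_eye"] - p["right_eye"])          # scale reference
    mouth_width = np.linalg.norm(p["mouth_left"] - p["mouth_right"])
    nose_to_mouth = np.linalg.norm(p["nose_tip"] - (p["mouth_left"] + p["mouth_right"]) / 2)
    eyes_to_nose = np.linalg.norm((p["left_eye"] + p["right_eye"]) / 2 - p["nose_tip"])
    # Ratios are scale-invariant, so they survive changes in camera distance.
    return np.array([mouth_width / eye_dist,
                     nose_to_mouth / eye_dist,
                     eyes_to_nose / eye_dist])
```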
  • the MCU encrypts the face information feature value and transmits it to the transaction background.
  • The transaction background performs a significance analysis between the uploaded face feature values and the cardholder's face information reserved at the bank, and decides whether to allow the transaction according to the result: if the significance is insufficient, the face recognition fails and the POS is notified to end the transaction; if the significance is sufficient, the face recognition succeeds and the POS is notified that the transaction may proceed.
  • the MCU preprocesses the sound information, including noise reduction and the like.
  • The MCU checks the legality of the sound, including whether the sound content is consistent with the prompt, whether there is high-frequency information exceeding the threshold, and whether the speaker is not in a nervous or fearful mood; if any check fails, the information is judged illegal and rejected, and the transaction ends.
  • the MCU calculates the voiceprint feature information, that is, MFCC (Mel Frequency Cepstrum Coefficients).
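A short sketch of the MFCC feature step using librosa; 13 coefficients averaged over time is a common but illustrative choice, not one mandated by the patent.

```python
import librosa
import numpy as np

def voiceprint_mfcc(path: str, n_mfcc: int = 13) -> np.ndarray:
    """Load an utterance and return a compact MFCC-based voiceprint vector."""
    audio, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)   # shape: (n_mfcc, frames)
    return mfcc.mean(axis=1)                                     # time-averaged coefficients
```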
  • the MCU encrypts the voiceprint feature value and transmits it to the transaction background.
  • The transaction background analyzes the uploaded voiceprint feature value against the cardholder's voiceprint information reserved at the bank: if the significance is insufficient, the voiceprint recognition fails and the POS is notified to end the transaction; if the significance is sufficient, the voiceprint recognition succeeds, the background transaction is performed, and the POS is informed of the transaction result.
  • The POS machine then presents the transaction result to the trader.
  • the present invention provides a method and a terminal for payment authentication based on a face and a voiceprint.
  • Spliced recording information can be prevented from being used for payment authentication (spliced recordings contain high-frequency components at the splice points); in a computer-synthesized image, a large number of spatial jumps occur at the edges of the face, eyes and mouth, which appear as a large amount of high-frequency information in the frequency domain.
  • Based on the face information and the voiceprint information, it can be effectively determined whether the user is in a state of coercion, making payment more secure and reliable.
  • the combination of face and voiceprint can greatly reduce the risk of misappropriation and enhance transaction security and reliability.
  • The cardholder's face information and voiceprint information are encrypted before being transmitted to the server, and only one-way uplink transmission of the characteristic parameters is allowed, avoiding leakage of sensitive information.
  • The above method adds high-frequency detection to face and voiceprint recognition to prevent face or recording synthesis attacks, and combines face recognition with voiceprints, which are relatively difficult to fake, for payment authentication, effectively avoiding impersonation attacks and making payment safer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Accounting & Taxation (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Security & Cryptography (AREA)
  • Finance (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Acoustics & Sound (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The present invention relates to a payment authentication method based on face and voiceprint recognition, and a terminal. The method comprises the following steps: determining, according to face information, whether the face image information contained in the face information includes high-frequency information greater than a preset first threshold; determining, according to voiceprint information, whether the voiceprint information contains a high-frequency component greater than a preset second threshold; and if both determination results are negative, performing payment authentication according to the face image information and the voiceprint information. According to the present invention, whether a user is in a coerced state can be effectively determined from the face image information and the voiceprint information, ensuring a more secure and reliable payment process. The method combines face and voiceprint recognition techniques, thereby greatly reducing the risk of unauthorized payments and improving transaction security and reliability.
PCT/CN2017/115617 2017-12-12 2017-12-12 Face and voiceprint recognition-based payment authentication method and terminal WO2019113776A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2017/115617 WO2019113776A1 (fr) 2017-12-12 2017-12-12 Face and voiceprint recognition-based payment authentication method and terminal
CN201780002078.9A CN108124488A (zh) 2017-12-12 2017-12-12 Payment authentication method and terminal based on a face and a voiceprint

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/115617 WO2019113776A1 (fr) 2017-12-12 2017-12-12 Face and voiceprint recognition-based payment authentication method and terminal

Publications (1)

Publication Number Publication Date
WO2019113776A1 true WO2019113776A1 (fr) 2019-06-20

Family

ID=62233644

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/115617 WO2019113776A1 (fr) 2017-12-12 2017-12-12 Face and voiceprint recognition-based payment authentication method and terminal

Country Status (2)

Country Link
CN (1) CN108124488A (fr)
WO (1) WO2019113776A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112766973A (zh) * 2021-01-19 2021-05-07 湖南校智付网络科技有限公司 人脸支付终端
CN115171312A (zh) * 2022-06-28 2022-10-11 重庆京东方智慧科技有限公司 图像处理方法、装置、设备、监控系统及存储介质

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108510364A (zh) * 2018-03-30 2018-09-07 杭州法奈昇科技有限公司 基于声波纹身份识别的大数据智能导购系统
CN108805678A (zh) * 2018-06-14 2018-11-13 安徽鼎龙网络传媒有限公司 一种微场景管理后台微信商城综合测定系统
CN109214820B (zh) * 2018-07-06 2021-12-21 厦门快商通信息技术有限公司 一种基于音视频结合的商户收款系统及方法
CN108846676B (zh) * 2018-08-02 2023-07-11 平安科技(深圳)有限公司 生物特征辅助支付方法、装置、计算机设备及存储介质
CN109359982B (zh) * 2018-09-02 2020-11-27 珠海横琴现联盛科技发展有限公司 结合人脸与声纹识别的支付信息确认方法
CN109784175A (zh) * 2018-12-14 2019-05-21 深圳壹账通智能科技有限公司 基于微表情识别的异常行为人识别方法、设备和存储介质
CN109977646B (zh) * 2019-04-01 2021-10-12 杭州城市大数据运营有限公司 一种智能安全核验方法
CN110363148A (zh) * 2019-07-16 2019-10-22 中用科技有限公司 一种人脸声纹特征融合验证的方法
CN110688641A (zh) * 2019-09-30 2020-01-14 联想(北京)有限公司 信息处理方法及电子设备
CN111861495A (zh) * 2020-08-06 2020-10-30 中国银行股份有限公司 转账处理方法及装置
CN112150740B (zh) * 2020-09-10 2022-02-22 福建创识科技股份有限公司 无感安全支付系统和方法
CN112733636A (zh) * 2020-12-29 2021-04-30 北京旷视科技有限公司 活体检测方法、装置、设备和存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6219639B1 (en) * 1998-04-28 2001-04-17 International Business Machines Corporation Method and apparatus for recognizing identity of individuals employing synchronized biometrics
CN104680375A (zh) * 2015-02-28 2015-06-03 优化科技(苏州)有限公司 电子支付真人活体身份验证系统
CN105119872A (zh) * 2015-02-13 2015-12-02 腾讯科技(深圳)有限公司 身份验证方法、客户端和服务平台
CN105426723A (zh) * 2015-11-20 2016-03-23 北京得意音通技术有限责任公司 基于声纹识别、人脸识别以及同步活体检测的身份认证方法及系统
CN105718874A (zh) * 2016-01-18 2016-06-29 北京天诚盛业科技有限公司 活体检测及认证的方法和装置
CN108194488A (zh) * 2017-12-28 2018-06-22 宁波群力紧固件制造有限公司 一种防反旋螺钉

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004021748A (ja) * 2002-06-18 2004-01-22 Nec Corp 認証情報通知方法及び認証システム並びに情報端末装置
US20070038868A1 (en) * 2005-08-15 2007-02-15 Top Digital Co., Ltd. Voiceprint-lock system for electronic data
CN101999900B (zh) * 2009-08-28 2013-04-17 南京壹进制信息技术有限公司 一种应用于人脸识别的活体检测方法及系统
CN102110303B (zh) * 2011-03-10 2012-07-04 西安电子科技大学 基于支撑向量回归的人脸伪照片合成方法
CN104143078B (zh) * 2013-05-09 2016-08-24 腾讯科技(深圳)有限公司 活体人脸识别方法、装置和设备
US9990555B2 (en) * 2015-04-30 2018-06-05 Beijing Kuangshi Technology Co., Ltd. Video detection method, video detection system and computer program product
CN105320947B (zh) * 2015-11-04 2019-03-01 博宏信息技术有限公司 一种基于光照成分的人脸活体检测方法
CN106156730B (zh) * 2016-06-30 2019-03-15 腾讯科技(深圳)有限公司 一种人脸图像的合成方法和装置
CN106782565A (zh) * 2016-11-29 2017-05-31 重庆重智机器人研究院有限公司 一种声纹特征识别方法及系统
CN106982426A (zh) * 2017-03-30 2017-07-25 广东微模式软件股份有限公司 一种远程实现旧卡实名制的方法与系统

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6219639B1 (en) * 1998-04-28 2001-04-17 International Business Machines Corporation Method and apparatus for recognizing identity of individuals employing synchronized biometrics
CN105119872A (zh) * 2015-02-13 2015-12-02 腾讯科技(深圳)有限公司 身份验证方法、客户端和服务平台
CN104680375A (zh) * 2015-02-28 2015-06-03 优化科技(苏州)有限公司 电子支付真人活体身份验证系统
CN105426723A (zh) * 2015-11-20 2016-03-23 北京得意音通技术有限责任公司 基于声纹识别、人脸识别以及同步活体检测的身份认证方法及系统
CN105718874A (zh) * 2016-01-18 2016-06-29 北京天诚盛业科技有限公司 活体检测及认证的方法和装置
CN108194488A (zh) * 2017-12-28 2018-06-22 宁波群力紧固件制造有限公司 一种防反旋螺钉

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112766973A (zh) * 2021-01-19 2021-05-07 湖南校智付网络科技有限公司 人脸支付终端
CN115171312A (zh) * 2022-06-28 2022-10-11 重庆京东方智慧科技有限公司 图像处理方法、装置、设备、监控系统及存储介质

Also Published As

Publication number Publication date
CN108124488A (zh) 2018-06-05

Similar Documents

Publication Publication Date Title
WO2019113776A1 (fr) Face and voiceprint recognition-based payment authentication method and terminal
Crouse et al. Continuous authentication of mobile user: Fusion of face image and inertial measurement unit data
CN108804884B (zh) 身份认证的方法、装置及计算机存储介质
JP4578244B2 (ja) 携帯型データ記憶媒体を使って安全な電子取引を実行する方法
US20210166241A1 (en) Methods, apparatuses, storage mediums and terminal devices for authentication
US10360555B2 (en) Near field authentication through communication of enclosed content sound waves
KR102210775B1 (ko) 인적 상호 증명으로서 말하는 능력을 이용하는 기법
WO2019114376A1 (fr) Document verification method, apparatus, electronic device and storage medium
US20180247314A1 (en) Voice filter system
CN103310339A (zh) 身份识别装置和方法以及支付系统和方法
US11769152B2 (en) Verifying user identities during transactions using identification tokens that include user face data
CN113168437A (zh) 声音认证
CN106911630A (zh) 终端及身份认证方法、终端和认证中心的认证方法及系统
KR20220061919A (ko) 안면인식 기반 전자서명 서비스 제공 방법 및 서버
KR20220136963A (ko) 보안성이 우수한 비대면 본인인증 시스템 및 그 방법
US11044250B2 (en) Biometric one touch system
CN117688533A (zh) 一种基于人工智能的电子签章方法、电子验章方法及系统
WO2019113765A1 (fr) Face and electrocardiogram-based payment authentication method and terminal
JPWO2009051250A1 (ja) 登録装置、認証装置、登録方法及び認証方法
CN114422144A (zh) 一种提升场景证书区块链存证可信度的方法、系统、设备及存储介质
TWM623959U (zh) 身分驗證設備
CN107959669B (zh) 手持行动通讯装置的密码验证方法
US8886952B1 (en) Method of controlling a transaction
CN108108975A (zh) 一种基于心电图和声纹的支付认证方法及终端
CN109299945B (zh) 一种基于生物识别算法的身份验证的方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17934621

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17934621

Country of ref document: EP

Kind code of ref document: A1