CN108124488A - Payment authentication method and terminal based on face and voiceprint - Google Patents

Payment authentication method and terminal based on face and voiceprint

Info

Publication number
CN108124488A
CN108124488A (application CN201780002078.9A)
Authority
CN
China
Prior art keywords
information
face
voiceprint
payment authentication
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201780002078.9A
Other languages
Chinese (zh)
Inventor
张炽成
唐超旬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujian Landi Commercial Equipment Co Ltd
Original Assignee
Fujian Landi Commercial Equipment Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujian Landi Commercial Equipment Co Ltd
Publication of CN108124488A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 20/00 Payment architectures, schemes or protocols
    • G06Q 20/38 Payment protocols; details thereof
    • G06Q 20/40 Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; review and approval of payers, e.g. check credit lines or negative lists
    • G06Q 20/401 Transaction verification
    • G06Q 20/4014 Identity check for transactions
    • G06Q 20/40145 Biometric identity checks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 20/00 Payment architectures, schemes or protocols
    • G06Q 20/38 Payment protocols; details thereof
    • G06Q 20/382 Payment protocols; details thereof insuring higher security of transaction
    • G06Q 20/3823 Payment protocols; details thereof insuring higher security of transaction combining multiple encryption tools for a transaction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 17/00 Speaker identification or verification
    • G10L 17/22 Interactive procedures; Man-machine interfaces

Abstract

The present invention provides a payment authentication method and terminal based on face and voiceprint. The method comprises the following steps: according to collected face information, judging whether the face image information contained in the face information includes high-frequency information exceeding a preset first threshold; and, according to collected voiceprint information, judging whether the voiceprint information includes high-frequency components exceeding a preset second threshold; if not, performing payment authentication according to the face image information and the voiceprint information. By analysing the face image information and the voiceprint information, the invention can effectively judge whether the user is under duress, making payment safer and more reliable; combining face recognition with voiceprint recognition greatly reduces the risk of fraudulent use and strengthens transaction security and reliability.

Description

Payment authentication method and terminal based on face and voiceprint
Technical field
The present invention relates to the technical field of electronic payment, and in particular to a payment authentication method and terminal based on face and voiceprint.
Background art
With the continuous development of Internet technology, online shopping through intelligent mobile terminals has become an essential part of daily life and has greatly facilitated it. Since online shopping involves sensitive user information, a more secure payment authentication method is required at the time of payment. Current payment authentication is mainly performed by fingerprint or face recognition, which has the following disadvantages. Biometric features are easily stolen: fingerprint information is readily lifted when the transacting party touches objects, and it is non-living-body information; face image information is inherently public and can easily be captured from video or photographs. Stolen biometrics are then easy to use in attacks: stolen fingerprints and face information can be used to attack payment devices by making fingerprint moulds or synthesizing images, thereby achieving fraudulent payment.
Summary of the invention
The technical problem to be solved by the present invention is to provide a payment authentication method and terminal based on face and voiceprint that improve the security of payment authentication.
In order to solve the above technical problem, the present invention provides a payment authentication method based on face and voiceprint, comprising the following steps:
S1: according to the face information, judging whether the face image information contained in the face information includes high-frequency information exceeding a preset first threshold; and, according to the voiceprint information, judging whether the voiceprint information includes high-frequency components exceeding a preset second threshold;
S2: if not, performing payment authentication according to the face image information and the voiceprint information.
The present invention also provides a payment authentication terminal based on face and voiceprint, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the following steps:
S1: according to the face information, judging whether the face image information contained in the face information includes high-frequency information exceeding a preset first threshold; and, according to the voiceprint information, judging whether the voiceprint information includes high-frequency components exceeding a preset second threshold;
S2: if not, performing payment authentication according to the face image information and the voiceprint information.
The beneficial effects of the present invention are as follows:
The present invention provides a payment authentication method and terminal based on face and voiceprint. By judging whether the face image information contains high-frequency information exceeding a preset threshold, attacks on payment authentication using computer-synthesized images can be prevented (in a computer-synthesized image, the stitched regions such as the face contour, eye edges and mouth edges exhibit abrupt spatial-domain transitions, which correspond to large amounts of high-frequency information in the frequency domain). Likewise, by judging whether the voiceprint information contains high-frequency components exceeding a preset second threshold, payment authentication using spliced recordings can be prevented (a spliced recording contains high-frequency components at the splice points). In this way, high-frequency detection is added to face and voiceprint recognition to defeat synthesized-face and spliced-recording attacks, and face recognition is combined with voiceprint recognition, which is comparatively difficult to forge, for payment authentication; counterfeiting attacks are thereby effectively avoided and payment is made safer.
Description of the drawings
Fig. 1 is a schematic diagram of the main steps of a payment authentication method based on face and voiceprint according to an embodiment of the present invention;
Fig. 2 is a structural diagram of a payment authentication terminal based on face and voiceprint according to an embodiment of the present invention;
Reference numerals:
1: memory; 2: processor.
Detailed description of the embodiments
To describe the technical content, objects and effects of the present invention in detail, the following explanation is given with reference to the embodiments and the accompanying drawings.
Referring to Fig. 1, the present invention provides a payment authentication method based on face and voiceprint, comprising the following steps:
S1: according to the face information, judging whether the face image information contained in the face information includes high-frequency information exceeding a preset first threshold; and, according to the voiceprint information, judging whether the voiceprint information includes high-frequency components exceeding a preset second threshold;
S2: if not, performing payment authentication according to the face image information and the voiceprint information.
As can be seen from the above description, the present invention provides a payment authentication method based on face and voiceprint. By judging whether the face image information contains high-frequency information exceeding a preset threshold, attacks on payment authentication using computer-synthesized images can be prevented (in a computer-synthesized image, the stitched regions such as the face contour, eye edges and mouth edges exhibit abrupt spatial-domain transitions, which correspond to large amounts of high-frequency information in the frequency domain). Likewise, by judging whether the voiceprint information contains high-frequency components exceeding a preset second threshold, payment authentication using spliced recordings can be prevented (a spliced recording contains high-frequency components at the splice points). In this way, high-frequency detection is added to face and voiceprint recognition to defeat synthesized-face and spliced-recording attacks, and face recognition is combined with voiceprint recognition, which is comparatively difficult to forge, for payment authentication; counterfeiting attacks are thereby effectively avoided and payment is made safer.
Further, before S1 the method further comprises:
S01: displaying the required facial action information and the specified text to be input by voice for payment verification;
S02: collecting the face information and the voiceprint information simultaneously, the face information including face video information and face image information;
S03: judging whether the facial action in the face video information is consistent with the required action, and judging whether the words corresponding to the voiceprint information are consistent with the words of the specified text;
S04: if consistent, performing step S1; otherwise payment authentication fails.
As can be seen from the above description, the collected face information is the face video information and face image information captured by the camera while the user performs the corresponding facial action according to the displayed required action information, and the collected voiceprint information is the voiceprint obtained when the user reads out the displayed specified text by voice. Because the user does not know the specified action information and specified text in advance, this verification mode improves the security of payment verification.
Further, between S03 and S04 the method further comprises:
judging whether the lip-reading information in the face video information is synchronized with the audio information in the voiceprint information; if not synchronized, payment authentication fails;
if synchronized, judging whether the first text corresponding to the lip-reading information is consistent with the specified text.
As can be seen from the above description, the above method ensures that the lip-reading information is synchronized with the audio information and corresponds to the specified text, making payment verification safer and more reliable; a simple way of checking this synchronization is sketched below.
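For illustration only (the patent does not specify the synchronization algorithm), one rough way to check lip/audio synchronization is to correlate a per-frame mouth-opening measurement extracted from the face video with the audio energy envelope resampled to the video frame rate; the 0.5 correlation threshold below is an assumed value.

```python
import numpy as np

def lips_synchronised(mouth_openings: np.ndarray, audio: np.ndarray,
                      sample_rate: int, fps: float, min_corr: float = 0.5) -> bool:
    """mouth_openings: one mouth-opening value per video frame; audio: mono samples."""
    samples_per_frame = int(sample_rate / fps)
    n_frames = min(len(mouth_openings), len(audio) // samples_per_frame)
    # Mean absolute amplitude per video frame approximates the audio energy envelope.
    energy = np.array([np.abs(audio[i * samples_per_frame:(i + 1) * samples_per_frame]).mean()
                       for i in range(n_frames)])
    corr = np.corrcoef(mouth_openings[:n_frames], energy)[0, 1]
    return bool(corr >= min_corr)
```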
Further, between S02 and S03 the method further comprises:
performing noise reduction and filtering on the face information and the voiceprint information respectively.
As can be seen from the above description, the above method improves the accuracy of data processing.
Further, displaying the required facial action information and the specified text to be input by voice in S01 is specifically:
randomly generating the facial action information and the specified text to be input by voice, and displaying the required action information and the specified text.
As can be seen from the above description, the above method prevents a criminal from replaying the face information captured during a previous verification, and prevents a criminal from using a recording of the voice the user entered during payment, thereby ensuring the security of payment.
Further, if collection of the face information or the voiceprint information fails within a preset time, new required action information and new specified text are randomly displayed, and the face information and voiceprint information are collected again.
As can be seen from the above description, the above method prevents the payment authentication information from being stolen and further improves the security of payment.
Further, before S2 the method further comprises:
obtaining multiple copies of first face image information captured under duress, and calculating the first feature parameters of each copy;
fitting all the first feature parameters to obtain a first mathematical model relating the duress state to the first feature parameters;
obtaining multiple copies of first voiceprint information captured under duress, and calculating the second feature parameters of each copy;
fitting all the second feature parameters to obtain a second mathematical model relating the duress state to the second feature parameters.
As can be seen from the above description, the above method establishes a first mathematical model of face image information under duress and a second mathematical model of voiceprint information under duress, so that the user's state can subsequently be judged accurately, improving the security of payment; a minimal fitting sketch is given below.
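The embodiments below describe fitting these models with a deep-learning neural network. A minimal sketch of that idea follows, assuming the feature parameters are already extracted as fixed-length vectors; the network size, optimizer and training loop are illustrative assumptions, not the patented implementation. The same routine can be run once on face feature parameters (first model) and once on voiceprint feature parameters (second model).

```python
import torch
import torch.nn as nn

class DuressModel(nn.Module):
    """Maps a feature-parameter vector to a probability of being under duress."""
    def __init__(self, n_features: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

def fit_duress_model(features: torch.Tensor, labels: torch.Tensor,
                     epochs: int = 50) -> DuressModel:
    """features: (N, n_features) sample vectors; labels: (N, 1), 1.0 = captured under duress."""
    model = DuressModel(features.shape[1])
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCELoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(features), labels)
        loss.backward()
        optimizer.step()
    return model
```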
Further, before S2 the method further comprises:
calculating the face feature parameters of the face image information in the collected face information;
calculating the voiceprint feature parameters of the collected voiceprint information;
judging, according to the first mathematical model and the face feature parameters, whether the user is under duress;
judging, according to the second mathematical model and the voiceprint feature parameters, whether the user is under duress.
As can be seen from the above description, the above method accurately determines whether the user is under duress, preventing a criminal from misusing the user's information while the user is coerced and causing the user heavy losses; applying the fitted models is sketched below.
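For illustration only, the two fitted models from the earlier sketch can be applied to the feature parameters extracted from the current transaction as follows; the 0.5 decision threshold and the rule of flagging duress if either model fires are assumptions.

```python
import torch

def user_under_duress(face_model, voice_model,
                      face_features: torch.Tensor, voice_features: torch.Tensor) -> bool:
    """face_model/voice_model: fitted DuressModel instances from the earlier sketch."""
    with torch.no_grad():
        p_face = face_model(face_features.unsqueeze(0)).item()
        p_voice = voice_model(voice_features.unsqueeze(0)).item()
    return p_face >= 0.5 or p_voice >= 0.5
```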
Further, S2 is specifically:
if not, encrypting the face feature parameters and the voiceprint feature parameters and sending them to a server, so that the server performs significance analysis between the face feature parameters and the first face feature parameters corresponding to the pre-stored face information, and between the voiceprint feature parameters and the first voiceprint feature parameters corresponding to the pre-stored voiceprint information, to obtain significance analysis results;
judging, according to the significance analysis results, whether payment authentication passes.
As can be seen from the above description, encrypting the face feature parameters and the voiceprint feature parameters during transmission prevents user data from being stolen and causing economic loss to the user; meanwhile, significance analysis accurately judges whether the face feature parameters match the feature parameters corresponding to the pre-stored face information, and whether the voiceprint feature parameters match the feature parameters corresponding to the pre-stored voiceprint information. This double verification improves the security of payment authentication; a simple server-side comparison is sketched below.
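The patent does not specify how the significance analysis is computed. Purely as an illustration, a cosine-similarity comparison between the uploaded feature parameters and the enrolled ones, with an assumed acceptance threshold of 0.8, could look like this.

```python
import numpy as np

def significance(uploaded: np.ndarray, enrolled: np.ndarray) -> float:
    """Cosine similarity between an uploaded feature vector and the enrolled one."""
    return float(np.dot(uploaded, enrolled) /
                 (np.linalg.norm(uploaded) * np.linalg.norm(enrolled)))

def payment_authenticated(face_feat, enrolled_face, voice_feat, enrolled_voice,
                          threshold: float = 0.8) -> bool:
    """Both the face and the voiceprint comparisons must be significant for authentication to pass."""
    return (significance(face_feat, enrolled_face) >= threshold and
            significance(voice_feat, enrolled_voice) >= threshold)
```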
Further, S1 is specifically:
calculating the image frequency-domain information corresponding to the face image information, and judging whether the image frequency-domain information contains high-frequency information exceeding the preset threshold;
calculating a set of high-frequency components from the voiceprint information, and judging whether the set contains high-frequency components exceeding the preset threshold.
As can be seen from the above description, the above method accurately determines whether the voiceprint information contains high-frequency components, preventing a synthesized recording from being used for payment verification, and accurately determines whether the face image information contains high-frequency information, preventing a computer-synthesized face image from being used for payment verification, thereby improving the security of payment; a frequency-domain sketch follows.
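A minimal sketch of this frequency-domain check is given below. It assumes NumPy FFTs, a radial cutoff of 0.25 cycles/pixel for the image, a 6 kHz cutoff for the audio, and placeholder values standing in for the first and second thresholds; none of these figures come from the patent.

```python
import numpy as np

def image_high_freq_ratio(gray_image: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of 2-D spectral energy above a radial frequency cutoff (0..0.5 cycles/pixel)."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray_image)))
    h, w = gray_image.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(w))[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)
    return float(spectrum[radius > cutoff].sum() / spectrum.sum())

def audio_high_freq_ratio(signal: np.ndarray, sample_rate: int,
                          cutoff_hz: float = 6000.0) -> float:
    """Fraction of spectral energy above cutoff_hz in a mono audio signal."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return float(spectrum[freqs > cutoff_hz].sum() / spectrum.sum())

def passes_high_freq_check(gray_image, signal, sample_rate,
                           first_threshold: float = 0.15,
                           second_threshold: float = 0.10) -> bool:
    """True when neither the image nor the recording shows suspicious high-frequency energy."""
    return (image_high_freq_ratio(gray_image) <= first_threshold and
            audio_high_freq_ratio(signal, sample_rate) <= second_threshold)
```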
Further, before S1 the method further comprises:
performing mutual authentication between the preset transaction terminal and the server; if authentication fails, payment authentication fails and the transaction ends;
if authentication succeeds, obtaining the current location information of the transaction terminal;
encrypting the current location information to obtain encrypted location information;
sending the encrypted location information to the server, so that the server stores the encrypted location information in preset security log information.
As can be seen from the above description, storing the location information during the transaction makes the transaction location traceable; an encryption sketch follows.
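For illustration, the location could be encrypted with a symmetric cipher before upload. Fernet (an AES-based scheme from the Python cryptography package) and the way the key is provisioned are assumptions, since the patent does not name a cipher.

```python
from cryptography.fernet import Fernet

def encrypt_location(location: str, key: bytes) -> bytes:
    """Encrypt a location string so that only the server's security log can read it."""
    return Fernet(key).encrypt(location.encode("utf-8"))

# Usage sketch: in practice the key would be provisioned to the terminal in advance.
key = Fernet.generate_key()
token = encrypt_location("lat=24.48,lon=118.09", key)  # sent to the server's security log
```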
Referring to Fig. 2, the present invention provides a payment authentication terminal based on face and voiceprint, comprising a memory 1, a processor 2 and a computer program stored in the memory 1 and executable on the processor 2, wherein the processor 2, when executing the program, implements the following steps:
S1: according to the face information, judging whether the face image information contained in the face information includes high-frequency information exceeding a preset first threshold; and, according to the voiceprint information, judging whether the voiceprint information includes high-frequency components exceeding a preset second threshold;
S2: if not, performing payment authentication according to the face image information and the voiceprint information.
Further, in the payment authentication terminal based on face and voiceprint, before S1 the program further implements:
S01: displaying the required facial action information and the specified text to be input by voice for payment verification;
S02: collecting the face information and the voiceprint information simultaneously, the face information including face video information and face image information;
S03: judging whether the facial action in the face video information is consistent with the required action, and judging whether the words corresponding to the voiceprint information are consistent with the words of the specified text;
S04: if consistent, performing step S1; otherwise payment authentication fails.
Further, in the payment authentication terminal based on face and voiceprint, between S03 and S04 the program further implements:
judging whether the lip-reading information in the face video information is synchronized with the audio information in the voiceprint information; if not synchronized, payment authentication fails;
if synchronized, judging whether the first text corresponding to the lip-reading information is consistent with the specified text.
Further, in the payment authentication terminal based on face and voiceprint, between S02 and S03 the program further implements:
performing noise reduction and filtering on the face information and the voiceprint information respectively.
Further, in the payment authentication terminal based on face and voiceprint, displaying the required facial action information and the specified text to be input by voice in S01 is specifically:
randomly generating the facial action information and the specified text to be input by voice, and displaying the required action information and the specified text.
Further, in the payment authentication terminal based on face and voiceprint, if collection of the face information or the voiceprint information fails within a preset time, new required action information and new specified text are randomly displayed, and the face information and voiceprint information are collected again.
Further, in the payment authentication terminal based on face and voiceprint, before S1 the program further implements:
obtaining multiple copies of first face image information captured under duress, and calculating the first feature parameters of each copy;
fitting all the first feature parameters to obtain a first mathematical model relating the duress state to the first feature parameters;
obtaining multiple copies of first voiceprint information captured under duress, and calculating the second feature parameters of each copy;
fitting all the second feature parameters to obtain a second mathematical model relating the duress state to the second feature parameters.
Further, in the payment authentication terminal based on face and voiceprint, before S2 the program further implements:
calculating the face feature parameters of the face image information in the collected face information;
calculating the voiceprint feature parameters of the collected voiceprint information;
judging, according to the first mathematical model and the face feature parameters, whether the user is under duress;
judging, according to the second mathematical model and the voiceprint feature parameters, whether the user is under duress.
Further, in the payment authentication terminal based on face and voiceprint, S2 is specifically:
if not, encrypting the face feature parameters and the voiceprint feature parameters and sending them to the server, so that the server performs significance analysis between the face feature parameters and the first face feature parameters corresponding to the pre-stored face information, and between the voiceprint feature parameters and the first voiceprint feature parameters corresponding to the pre-stored voiceprint information, to obtain significance analysis results;
judging, according to the significance analysis results, whether payment authentication passes.
Further, in the payment authentication terminal based on face and voiceprint, before S1 the program further implements:
performing mutual authentication between the preset transaction terminal and the server; if authentication fails, payment authentication fails and the transaction ends;
if authentication succeeds, obtaining the current location information of the transaction terminal;
encrypting the current location information to obtain encrypted location information;
sending the encrypted location information to the server, so that the server stores the encrypted location information in preset security log information.
Referring to Fig. 1, embodiment one of the present invention is as follows:
The present invention provides a payment authentication method based on face and voiceprint, comprising the following steps:
S0: randomly generating the facial action information and the specified text to be input by voice, and displaying the specified action information and the specified text; collecting the face information and the voiceprint information simultaneously, the face information including face video information and face image information; after performing noise reduction and filtering on the face information and the voiceprint information respectively, judging whether the facial action in the face video information is consistent with the required action and whether the words corresponding to the voiceprint information are consistent with the words of the specified text; judging whether the lip-reading information in the face video information is synchronized with the audio information in the voiceprint information, and if not synchronized, payment authentication fails; if synchronized, judging whether the first text corresponding to the lip-reading information is consistent with the specified text; if consistent, performing step S1, otherwise payment authentication fails;
wherein, if collection of the face information and voiceprint information fails within the preset time, step S0 is re-executed;
S1: according to the face information, judging whether the face image information contained in the face information includes high-frequency information exceeding a preset first threshold; and, according to the voiceprint information, judging whether the voiceprint information includes high-frequency components exceeding a preset second threshold;
S2: if not, performing payment authentication according to the face image information and the voiceprint information.
Embodiment two of the present invention is as follows:
The present invention provides a payment authentication method based on face and voiceprint, comprising the following steps:
obtaining multiple copies of first face image information captured under duress, and calculating the first feature parameters of each copy; fitting all the first feature parameters to obtain a first mathematical model relating the duress state to the first feature parameters;
wherein "fitting all the first feature parameters to obtain a first mathematical model relating the duress state to the first feature parameters" is specifically:
reading the first feature parameters corresponding to each sample (face image information), and fitting all the first feature parameters by a deep-learning convolutional neural network method, that is, defining the neural network, collecting the raw data, performing classification training and correction, and outputting the result, to obtain the first mathematical model relating the duress state to the first feature parameters;
obtaining multiple copies of first voiceprint information captured under duress, and calculating the second feature parameters of each copy; fitting all the second feature parameters to obtain a second mathematical model relating the duress state to the second feature parameters;
wherein "fitting all the second feature parameters to obtain a second mathematical model relating the duress state to the second feature parameters" is specifically:
reading the second feature parameters corresponding to each sample (voiceprint information), and fitting all the second feature parameters by a deep-learning convolutional neural network method, that is, defining the neural network, collecting the raw data, performing classification training and correction, and outputting the result, to obtain the second mathematical model relating the duress state to the second feature parameters;
at the time of payment, first performing mutual authentication between the preset transaction terminal and the server; if authentication fails, payment authentication fails and the transaction ends;
if authentication succeeds, obtaining the current location information of the transaction terminal; encrypting the current location information to obtain encrypted location information; sending the encrypted location information to the server, so that the server stores the encrypted location information in preset security log information;
after authentication succeeds, randomly generating the facial action information and the specified text to be input by voice, and displaying the required action information and the specified text; collecting the face information and the voiceprint information simultaneously, the face information including face video information and face image information; after performing noise reduction and filtering on the face information and the voiceprint information respectively, judging whether the facial action in the face video information is consistent with the required action and whether the words corresponding to the voiceprint information are consistent with the words of the specified text; judging whether the lip-reading information in the face video information is synchronized with the audio information in the voiceprint information, and if not synchronized, payment authentication fails; if synchronized, judging whether the first text corresponding to the lip-reading information is consistent with the specified text; if any check is inconsistent, payment authentication fails, otherwise the following steps are performed:
according to the face information, judging whether the face image information contained in the face information includes high-frequency information exceeding a preset first threshold; and, according to the voiceprint information, judging whether the voiceprint information includes high-frequency components exceeding a preset second threshold;
calculating the face feature parameters of the face image information in the collected face information; calculating the voiceprint feature parameters of the collected voiceprint information; judging, according to the first mathematical model and the face feature parameters, whether the user is under duress; judging, according to the second mathematical model and the voiceprint feature parameters, whether the user is under duress;
if not, encrypting the face feature parameters and the voiceprint feature parameters and sending them to the server, so that the server performs significance analysis between the face feature parameters and the first face feature parameters corresponding to the pre-stored face information, and between the voiceprint feature parameters and the first voiceprint feature parameters corresponding to the pre-stored voiceprint information, to obtain significance analysis results;
judging, according to the significance analysis results, whether payment authentication passes.
Referring to Fig. 2, embodiment three of the present invention is as follows:
The present invention provides a payment authentication terminal based on face and voiceprint, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the following steps:
S0: randomly generating the facial action information and the specified text to be input by voice, and displaying the specified action information and the specified text; collecting the face information and the voiceprint information simultaneously, the face information including face video information and face image information; after performing noise reduction and filtering on the face information and the voiceprint information respectively, judging whether the facial action in the face video information is consistent with the required action and whether the words corresponding to the voiceprint information are consistent with the words of the specified text; judging whether the lip-reading information in the face video information is synchronized with the audio information in the voiceprint information, and if not synchronized, payment authentication fails; if synchronized, judging whether the first text corresponding to the lip-reading information is consistent with the specified text; if consistent, performing step S1, otherwise payment authentication fails;
wherein, if collection of the face information and voiceprint information fails within the preset time, step S0 is re-executed;
S1: according to the face information, judging whether the face image information contained in the face information includes high-frequency information exceeding a preset first threshold; and, according to the voiceprint information, judging whether the voiceprint information includes high-frequency components exceeding a preset second threshold;
S2: if not, performing payment authentication according to the face image information and the voiceprint information.
Embodiment four of the present invention is as follows:
The present invention provides a payment authentication terminal based on face and voiceprint, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the following steps:
obtaining multiple copies of first face image information captured under duress, and calculating the first feature parameters of each copy; fitting all the first feature parameters to obtain a first mathematical model relating the duress state to the first feature parameters;
wherein "fitting all the first feature parameters to obtain a first mathematical model relating the duress state to the first feature parameters" is specifically:
reading the first feature parameters corresponding to each sample (face image information), and fitting all the first feature parameters by a deep-learning convolutional neural network method, that is, defining the neural network, collecting the raw data, performing classification training and correction, and outputting the result, to obtain the first mathematical model relating the duress state to the first feature parameters;
obtaining multiple copies of first voiceprint information captured under duress, and calculating the second feature parameters of each copy; fitting all the second feature parameters to obtain a second mathematical model relating the duress state to the second feature parameters;
wherein "fitting all the second feature parameters to obtain a second mathematical model relating the duress state to the second feature parameters" is specifically:
reading the second feature parameters corresponding to each sample (voiceprint information), and fitting all the second feature parameters by a deep-learning convolutional neural network method, that is, defining the neural network, collecting the raw data, performing classification training and correction, and outputting the result, to obtain the second mathematical model relating the duress state to the second feature parameters;
at the time of payment, first performing mutual authentication between the preset transaction terminal and the server; if authentication fails, payment authentication fails and the transaction ends;
if authentication succeeds, obtaining the current location information of the transaction terminal; encrypting the current location information to obtain encrypted location information; sending the encrypted location information to the server, so that the server stores the encrypted location information in preset security log information;
after authentication succeeds, randomly generating the facial action information and the specified text to be input by voice, and displaying the required action information and the specified text; collecting the face information and the voiceprint information simultaneously, the face information including face video information and face image information; after performing noise reduction and filtering on the face information and the voiceprint information respectively, judging whether the facial action in the face video information is consistent with the required action and whether the words corresponding to the voiceprint information are consistent with the words of the specified text; judging whether the lip-reading information in the face video information is synchronized with the audio information in the voiceprint information, and if not synchronized, payment authentication fails; if synchronized, judging whether the first text corresponding to the lip-reading information is consistent with the specified text; if any check is inconsistent, payment authentication fails, otherwise the following steps are performed:
according to the face information, judging whether the face image information contained in the face information includes high-frequency information exceeding a preset first threshold; and, according to the voiceprint information, judging whether the voiceprint information includes high-frequency components exceeding a preset second threshold;
calculating the face feature parameters of the face image information in the collected face information; calculating the voiceprint feature parameters of the collected voiceprint information; judging, according to the first mathematical model and the face feature parameters, whether the user is under duress; judging, according to the second mathematical model and the voiceprint feature parameters, whether the user is under duress;
if not, encrypting the face feature parameters and the voiceprint feature parameters and sending them to the server, so that the server performs significance analysis between the face feature parameters and the first face feature parameters corresponding to the pre-stored face information, and between the voiceprint feature parameters and the first voiceprint feature parameters corresponding to the pre-stored voiceprint information, to obtain significance analysis results;
judging, according to the significance analysis results, whether payment authentication passes.
Embodiment five of the present invention is as follows:
The present invention provides a POS machine comprising an MCU (micro-control module), a camera, a microphone and a liquid crystal display, the MCU being electrically connected to the camera, the microphone and the liquid crystal display respectively.
1) Before leaving the factory, the POS software is trained by a large amount of machine learning, using tens of thousands of face information samples under normal mood and under duress and tens of thousands of voice information samples under normal conditions and under duress. The recognition software reads the special parameters of each training sample and fits all training samples into a calculation formula by the deep-learning convolutional neural network method (the main steps are: defining the neural network, collecting the raw data, performing classification training and correction, and outputting the result), thereby obtaining the relation between these parameters and the mood, so that the face and voiceprint recognition software can identify whether the source of the information is under duress.
2) The POS authenticates with the transaction back end. If authentication fails, the POS has no transaction privilege and the transaction ends; if authentication succeeds, the POS has transaction privilege and encrypted communication between the POS and the transaction back end is opened. If the POS machine has a built-in wireless module, the current base-station position is encrypted and uploaded at this point and preserved as part of the security log of the transaction back end.
3) The MCU inside the POS machine randomly generates the specified text and prompts the transacting party via the liquid crystal display to read out the corresponding text into the microphone; at the same time the MCU randomly generates a required facial action (such as blinking, opening the mouth or turning the head) and prompts the user via the liquid crystal display, collecting the face information of the required action through the camera.
4) The camera collects the face information of the transacting party while the microphone collects the transacting party's voice information; face information continues to be collected while the voice information is captured, so that lip-reading calculation can be performed.
5) The MCU pre-processes the face information, including noise reduction, normalization and the like.
6) The MCU checks the legitimacy of the face information, including checking whether the facial action is consistent with the prompt, whether there is high-frequency information exceeding the threshold, whether the user shows a nervous or fearful mood, whether the lip reading is consistent with the prompted text, and whether the lip reading is synchronized with the recorded audio; if any check is not passed, the information is judged invalid, the request is refused and the transaction ends.
7) The MCU calculates the feature values of the face information, including the geometric features of the face and of the eyes, nose, mouth and so on.
8) The MCU encrypts the face information feature values and transmits them to the transaction back end.
9) The transaction back end performs significance analysis between the uploaded face information feature values and the cardholder's face information reserved at the bank, and decides whether to allow the transaction according to the analysis result: if the significance is insufficient, face recognition has failed and the POS is told to end the transaction; if the significance is clear, face recognition has succeeded and the POS is told that the transaction may proceed.
10) The MCU pre-processes the voice information, including noise reduction and the like.
11) The MCU checks the legitimacy of the voice, including checking whether the voice content is consistent with the prompt, whether there is high-frequency information exceeding the threshold, and whether the mood is free of nervousness and fear; if any check is not passed, the information is judged invalid, the request is refused and the transaction ends.
12) The MCU calculates the voiceprint feature information, i.e. MFCC (Mel Frequency Cepstrum Coefficients); a brief extraction sketch is given after this list.
13) The MCU encrypts the voiceprint feature values and transmits them to the transaction back end.
14) The transaction back end performs significance analysis between the uploaded voiceprint feature values and the cardholder's voiceprint information reserved at the bank: if the significance is insufficient, voiceprint recognition has failed and the POS is told to end the transaction; if the significance is clear, voiceprint recognition has succeeded, the back-end transaction is carried out, and the transaction result is reported to the POS.
15) The POS machine shows the transaction result to the transacting party.
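As a minimal sketch of the MFCC extraction in step 12, assuming librosa as the implementation, 13 coefficients and averaging over frames to obtain a fixed-length vector (none of these choices are specified in the patent):

```python
import librosa
import numpy as np

def voiceprint_features(wav_path: str, n_mfcc: int = 13) -> np.ndarray:
    """Load a recording and return a fixed-length MFCC-based voiceprint feature vector."""
    signal, sr = librosa.load(wav_path, sr=None, mono=True)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)  # shape: (n_mfcc, frames)
    return mfcc.mean(axis=1)  # average over frames for a simple per-utterance vector
```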
In summary, the present invention provides a payment authentication method and terminal based on face and voiceprint. By judging whether the voiceprint information contains high-frequency components exceeding a preset second threshold, payment authentication using spliced recordings can be prevented (a spliced recording contains high-frequency components at the splice points). In a computer-synthesized image, the stitched regions such as the face contour, eye edges and mouth edges exhibit abrupt spatial-domain transitions, which correspond to large amounts of high-frequency information in the frequency domain; therefore, by judging whether the face image information contains high-frequency information exceeding a preset threshold, attacks on payment authentication using computer-synthesized images can be prevented. At the same time, by analysing both the face information and the voiceprint information, the invention can effectively judge whether the user is under duress, making payment safer and more reliable. The combination of face and voiceprint recognition greatly reduces the risk of fraudulent use and strengthens transaction security and reliability. The cardholder's face information and voiceprint information, being sensitive information, are transmitted to the server in encrypted form, and only unidirectional uplink of the encrypted feature parameters is allowed, avoiding leakage of sensitive information. The above method adds high-frequency detection to face and voiceprint recognition to defeat synthesized-face and spliced-recording attacks, and combines face recognition with voiceprint recognition, which is comparatively difficult to forge, for payment authentication; counterfeiting attacks are thereby effectively avoided and payment is made safer.
The foregoing is merely embodiments of the present invention and does not limit the scope of the invention. Any equivalent transformation made using the contents of the description and the accompanying drawings of the present invention, whether used directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (20)

1. A payment authentication method based on face and voiceprint, characterized by comprising the following steps:
S1: according to face information, judging whether the face image information contained in the face information includes high-frequency information exceeding a preset first threshold; and, according to voiceprint information, judging whether the voiceprint information includes high-frequency components exceeding a preset second threshold;
S2: if not, performing payment authentication according to the face image information and the voiceprint information.
2. The payment authentication method based on face and voiceprint according to claim 1, characterized in that before S1 the method further comprises:
S01: displaying the required facial action information and the specified text to be input by voice for payment verification;
S02: collecting the face information and the voiceprint information simultaneously, the face information including face video information and face image information;
S03: judging whether the facial action in the face video information is consistent with the required action, and judging whether the words corresponding to the voiceprint information are consistent with the words of the specified text;
S04: if consistent, performing step S1; otherwise payment authentication fails.
3. The payment authentication method based on face and voiceprint according to claim 2, characterized in that between S03 and S04 the method further comprises:
judging whether the lip-reading information in the face video information is synchronized with the audio information in the voiceprint information; if not synchronized, payment authentication fails;
if synchronized, judging whether the first text corresponding to the lip-reading information is consistent with the specified text.
4. The payment authentication method based on face and voiceprint according to claim 2, characterized in that between S02 and S03 the method further comprises:
performing noise reduction and filtering on the face information and the voiceprint information respectively.
5. The payment authentication method based on face and voiceprint according to claim 2, characterized in that displaying the required facial action information and the specified text to be input by voice in S01 is specifically:
randomly generating the facial action information and the specified text to be input by voice, and displaying the required action information and the specified text.
6. The payment authentication method based on face and voiceprint according to claim 5, characterized in that if collection of the face information or the voiceprint information fails within a preset time, new required action information and new specified text are randomly displayed, and the face information and voiceprint information are collected again.
7. The payment authentication method based on face and voiceprint according to claim 1, characterized in that before S1 the method further comprises:
obtaining multiple copies of first face image information captured under duress, and calculating the first feature parameters of each copy;
fitting all the first feature parameters to obtain a first mathematical model relating the duress state to the first feature parameters;
obtaining multiple copies of first voiceprint information captured under duress, and calculating the second feature parameters of each copy;
fitting all the second feature parameters to obtain a second mathematical model relating the duress state to the second feature parameters.
8. The payment authentication method based on face and voiceprint according to claim 7, characterized in that before S2 the method further comprises:
calculating the face feature parameters of the face image information in the collected face information;
calculating the voiceprint feature parameters of the collected voiceprint information;
judging, according to the first mathematical model and the face feature parameters, whether the user is under duress;
judging, according to the second mathematical model and the voiceprint feature parameters, whether the user is under duress.
9. The payment authentication method based on face and voiceprint according to claim 8, characterized in that S2 is specifically:
if not, encrypting the face feature parameters and the voiceprint feature parameters and sending them to a server, so that the server performs significance analysis between the face feature parameters and the first face feature parameters corresponding to the pre-stored face information, and between the voiceprint feature parameters and the first voiceprint feature parameters corresponding to the pre-stored voiceprint information, to obtain significance analysis results;
judging, according to the significance analysis results, whether payment authentication passes.
10. The payment authentication method based on face and voiceprint according to claim 1, characterized in that before S1 the method further comprises:
performing mutual authentication between a preset transaction terminal and a server; if authentication fails, payment authentication fails and the transaction ends;
if authentication succeeds, obtaining the current location information of the transaction terminal;
encrypting the current location information to obtain encrypted location information;
sending the encrypted location information to the server, so that the server stores the encrypted location information in preset security log information.
11. a kind of payment authentication terminal based on face and vocal print, including memory, processor and storage on a memory and can The computer program run on a processor, which is characterized in that the processor realizes following steps when performing described program:
S1:According to face information, judge to whether there is to be more than in human face image information included in face information to preset first The high-frequency information of threshold value;And according to voiceprint, judge to whether there is the high frequency for being more than default second threshold in voiceprint Component;
S2:If no, payment authentication is carried out according to human face image information and voiceprint.
A kind of 12. payment authentication terminal based on face and vocal print according to claim 11, which is characterized in that the S1 It further includes before:
S01:Required face's required movement information and the specified word information of phonetic entry is needed when showing payment verification;
S02:While gathering face information, while voiceprint;The face information includes face video information and face figure As information;
S03:Judge whether the human face action in face video information is consistent with required movement and judges that voiceprint is corresponding Whether word word corresponding with specified word information is consistent;
S04:If consistent, step S1 is performed, otherwise payment authentication fails.
13. a kind of payment authentication terminal based on face and vocal print according to claim 12, which is characterized in that described It is further included between S03 and S04:
Judge whether lip reading information is synchronous with the audio-frequency information in voiceprint in the face video information, if asynchronous, Payment authentication fails;
If synchronous, judge whether corresponding first text information of lip reading information and specified word information are consistent.
14. a kind of payment authentication terminal based on face and vocal print according to claim 12, which is characterized in that described It is further included between S02 and S03:
Noise reduction and filtering process are carried out respectively to the face information and voiceprint.
15. a kind of payment authentication terminal based on face and vocal print according to claim 12, which is characterized in that described Required face's required movement information during payment verification is shown in S01 and needs the specified word information of phonetic entry specific For:
Random generation face's required movement information and the specified word information for needing phonetic entry, show the required movement information And specified word information.
16. a kind of payment authentication terminal based on face and vocal print according to claim 15, which is characterized in that if pre- If face information or voiceprint acquisition failure in the time, then the new required movement information of random display and new specified word are believed Breath, and resurvey face information and voiceprint.
A kind of 17. payment authentication terminal based on face and vocal print according to claim 11, which is characterized in that the S1 It further includes before:
It obtains more parts to be in by the first human face image information of stress state, the first face image information of every portion is calculated Fisrt feature parameter;
All fisrt feature parameters are fitted, are obtained by the first mathematical model between stress state and fisrt feature parameter;
It obtains more parts and is in the second feature that every a first voiceprint by the first voiceprint of stress state, is calculated Parameter;
All second feature parameters are fitted, are obtained by the second mathematical model between stress state and characteristic parameter.
18. The payment authentication terminal based on face and voiceprint according to claim 17, characterized in that, before S2, the following is further included:
calculating a face feature parameter of the face image information in the collected face information;
calculating a voiceprint feature parameter of the collected voiceprint information;
judging, according to the first mathematical model and the face feature parameter, whether the user is under duress;
judging, according to the second mathematical model and the voiceprint feature parameter, whether the user is under duress.
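Continuing the sketch after claim 17, the duress judgment of claim 18 can then be approximated by thresholding the Mahalanobis distance of the live features to each fitted duress model; the distance threshold is an assumed value.

```python
import numpy as np

def under_duress(face_model: dict, voice_model: dict,
                 face_feature: np.ndarray, voice_feature: np.ndarray,
                 max_distance: float = 3.0) -> bool:
    """Flag duress when either modality lies close to its fitted duress distribution."""
    def distance(model: dict, x: np.ndarray) -> float:
        d = x - model["mean"]
        return float(np.sqrt(d @ model["cov_inv"] @ d))
    return (distance(face_model, face_feature) < max_distance or
            distance(voice_model, voice_feature) < max_distance)
```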
19. The payment authentication terminal based on face and voiceprint according to claim 18, characterized in that S2 is specifically:
if not, encrypting the face feature parameter and the voiceprint feature parameter and sending them to a server, so that the server performs significance analysis between the face feature parameter and a first face feature parameter corresponding to pre-stored face information, and between the voiceprint feature parameter and a first voiceprint feature parameter corresponding to pre-stored voiceprint information, to obtain significance analysis results;
judging, according to the significance analysis results, whether the payment authentication passes.
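A hedged sketch of claim 19, assuming the `cryptography` package for AES-GCM transport encryption and reading "significance analysis" as a paired t-test between the live and the enrolled feature vectors; the actual encryption scheme and statistical test are not specified in this text.

```python
import json
import os
import numpy as np
from scipy.stats import ttest_rel
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_features(key: bytes, face_feat, voice_feat) -> bytes:
    """Terminal side: AES-GCM-encrypt both feature vectors for transport."""
    payload = json.dumps({"face": list(map(float, face_feat)),
                          "voice": list(map(float, voice_feat))}).encode()
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, payload, None)

def server_check(key: bytes, blob: bytes, stored_face, stored_voice,
                 alpha: float = 0.05) -> bool:
    """Server side: decrypt and test for a significant difference from enrollment."""
    nonce, ct = blob[:12], blob[12:]
    data = json.loads(AESGCM(key).decrypt(nonce, ct, None))
    _, p_face = ttest_rel(data["face"], stored_face)
    _, p_voice = ttest_rel(data["voice"], stored_voice)
    def no_significant_difference(p):
        return np.isnan(p) or p >= alpha  # identical vectors give nan; treat as a match
    return no_significant_difference(p_face) and no_significant_difference(p_voice)
```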
20. The payment authentication terminal based on face and voiceprint according to claim 11, characterized in that, before S1, the following is further included:
performing mutual authentication between a preset transaction terminal and a server; if the authentication fails, the payment authentication fails and the transaction is terminated;
if the authentication succeeds, obtaining current location information of the transaction terminal;
encrypting the current location information to obtain location encryption information;
sending the location encryption information to the server, so that the server stores the location encryption information in preset security log information.
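A sketch of the pre-checks in claim 20 with assumed primitives: an HMAC challenge-response standing in for terminal/server mutual authentication, and AES-GCM for the location ciphertext appended to a security log; the claims do not name specific algorithms.

```python
import hashlib
import hmac
import json
import os
import time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def respond(secret: bytes, challenge: bytes) -> bytes:
    """Answer a challenge with an HMAC over it; both sides hold a pre-shared secret."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def mutual_authenticate(terminal_secret: bytes, server_secret: bytes) -> bool:
    """Each side verifies the other's response to a fresh random challenge."""
    server_challenge, terminal_challenge = os.urandom(16), os.urandom(16)
    terminal_ok = hmac.compare_digest(respond(terminal_secret, server_challenge),
                                      respond(server_secret, server_challenge))
    server_ok = hmac.compare_digest(respond(server_secret, terminal_challenge),
                                    respond(terminal_secret, terminal_challenge))
    return terminal_ok and server_ok

def log_location(key: bytes, latitude: float, longitude: float,
                 security_log: list) -> None:
    """Encrypt the terminal's current location and append it to the security log."""
    nonce = os.urandom(12)
    plaintext = json.dumps({"lat": latitude, "lon": longitude,
                            "ts": time.time()}).encode()
    security_log.append(nonce + AESGCM(key).encrypt(nonce, plaintext, None))
```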
CN201780002078.9A 2017-12-12 2017-12-12 A kind of payment authentication method and terminal based on face and vocal print Pending CN108124488A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/115617 WO2019113776A1 (en) 2017-12-12 2017-12-12 Face and voiceprint-based payment authentication method, and terminal

Publications (1)

Publication Number Publication Date
CN108124488A true CN108124488A (en) 2018-06-05

Family

ID=62233644

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201780002078.9A Pending CN108124488A (en) 2017-12-12 2017-12-12 A kind of payment authentication method and terminal based on face and vocal print

Country Status (2)

Country Link
CN (1) CN108124488A (en)
WO (1) WO2019113776A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112766973A (en) * 2021-01-19 2021-05-07 湖南校智付网络科技有限公司 Face payment terminal
CN115171312A (en) * 2022-06-28 2022-10-11 重庆京东方智慧科技有限公司 Image processing method, device, equipment, monitoring system and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6219639B1 (en) * 1998-04-28 2001-04-17 International Business Machines Corporation Method and apparatus for recognizing identity of individuals employing synchronized biometrics
CN106850648B (en) * 2015-02-13 2020-10-16 腾讯科技(深圳)有限公司 Identity verification method, client and service platform
CN104680375A (en) * 2015-02-28 2015-06-03 优化科技(苏州)有限公司 Identification verifying system for living human body for electronic payment
CN108194488A (en) * 2017-12-28 2018-06-22 宁波群力紧固件制造有限公司 A kind of anti-derotation screw

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004021748A (en) * 2002-06-18 2004-01-22 Nec Corp Method for notifying authentication information, authentication system and information terminal device
US20070038868A1 (en) * 2005-08-15 2007-02-15 Top Digital Co., Ltd. Voiceprint-lock system for electronic data
CN101999900A (en) * 2009-08-28 2011-04-06 南京壹进制信息技术有限公司 Living body detecting method and system applied to human face recognition
CN102110303A (en) * 2011-03-10 2011-06-29 西安电子科技大学 Method for compounding face fake portrait\fake photo based on support vector return
CN104143078A (en) * 2013-05-09 2014-11-12 腾讯科技(深圳)有限公司 Living body face recognition method and device and equipment
CN105518710A (en) * 2015-04-30 2016-04-20 北京旷视科技有限公司 Video detecting method, video detecting system and computer program product
CN105320947A (en) * 2015-11-04 2016-02-10 博宏信息技术有限公司 Face in-vivo detection method based on illumination component
CN105426723A (en) * 2015-11-20 2016-03-23 北京得意音通技术有限责任公司 Voiceprint identification, face identification and synchronous in-vivo detection-based identity authentication method and system
CN105718874A (en) * 2016-01-18 2016-06-29 北京天诚盛业科技有限公司 Method and device of in-vivo detection and authentication
CN106156730A (en) * 2016-06-30 2016-11-23 腾讯科技(深圳)有限公司 The synthetic method of a kind of facial image and device
CN106782565A (en) * 2016-11-29 2017-05-31 重庆重智机器人研究院有限公司 A kind of vocal print feature recognition methods and system
CN106982426A (en) * 2017-03-30 2017-07-25 广东微模式软件股份有限公司 A kind of method and system for remotely realizing old card system of real name

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108510364A (en) * 2018-03-30 2018-09-07 杭州法奈昇科技有限公司 Big data intelligent shopping guide system based on voiceprint identification
CN108805678A (en) * 2018-06-14 2018-11-13 安徽鼎龙网络传媒有限公司 A kind of micro- scene management backstage wechat store synthesis measuring system
CN109214820B (en) * 2018-07-06 2021-12-21 厦门快商通信息技术有限公司 Merchant money collection system and method based on audio and video combination
CN109214820A (en) * 2018-07-06 2019-01-15 厦门快商通信息技术有限公司 A kind of trade company's cash collecting system and method based on audio-video combination
WO2020024398A1 (en) * 2018-08-02 2020-02-06 平安科技(深圳)有限公司 Biometrics-assisted payment method and apparatus, and computer device and storage medium
CN108846676A (en) * 2018-08-02 2018-11-20 平安科技(深圳)有限公司 Biological characteristic assistant payment method, device, computer equipment and storage medium
CN109359982A (en) * 2018-09-02 2019-02-19 珠海横琴现联盛科技发展有限公司 In conjunction with the payment information confirmation method of face and Application on Voiceprint Recognition
CN109784175A (en) * 2018-12-14 2019-05-21 深圳壹账通智能科技有限公司 Abnormal behaviour people recognition methods, equipment and storage medium based on micro- Expression Recognition
CN109977646A (en) * 2019-04-01 2019-07-05 杭州城市大数据运营有限公司 A kind of intelligent and safe checking method
CN110363148A (en) * 2019-07-16 2019-10-22 中用科技有限公司 A kind of method of face vocal print feature fusion verifying
CN110688641A (en) * 2019-09-30 2020-01-14 联想(北京)有限公司 Information processing method and electronic equipment
CN111861495A (en) * 2020-08-06 2020-10-30 中国银行股份有限公司 Transfer processing method and device
CN112150740A (en) * 2020-09-10 2020-12-29 福建创识科技股份有限公司 Non-inductive secure payment system and method
WO2022142521A1 (en) * 2020-12-29 2022-07-07 北京旷视科技有限公司 Liveness detection method and apparatus, device, and storage medium

Also Published As

Publication number Publication date
WO2019113776A1 (en) 2019-06-20

Similar Documents

Publication Publication Date Title
CN108124488A (en) A kind of payment authentication method and terminal based on face and vocal print
US11100205B2 (en) Secure automated teller machine (ATM) and method thereof
CN108804884B (en) Identity authentication method, identity authentication device and computer storage medium
CN109218269A (en) Identity authentication method, device, equipment and data processing method
CN107977776A (en) Information processing method, device, server and computer-readable recording medium
CN105512535A (en) User authentication method and user authentication device
CN110459204A (en) Audio recognition method, device, storage medium and electronic equipment
US20060242691A1 (en) Method for carrying out a secure electronic transaction using a portable data support
CN109660509A (en) Login method, device, system and storage medium based on recognition of face
CN106850648A (en) Auth method, client and service platform
US20210342850A1 (en) Verifying user identities during transactions using identification tokens that include user face data
CN105550928A (en) System and method of network remote account opening for commercial bank
US11682236B2 (en) Iris authentication device, iris authentication method and recording medium
CN109146492A (en) A kind of device and method of vehicle end mobile payment
CN106911630A (en) Terminal and the authentication method and system of identity identifying method, terminal and authentication center
CN110392041A (en) Electronic authorization method, apparatus, storage equipment and storage medium
Yampolskiy Mimicry attack on strategy-based behavioral biometric
CN116453196B (en) Face recognition method and system
Goel et al. Securing biometric framework with cryptanalysis
CN108108975A (en) A kind of payment authentication method and terminal based on electrocardiogram and vocal print
CN115906028A (en) User identity verification method and device and self-service terminal
CN108401458A (en) A kind of payment authentication method and terminal based on face and electrocardiogram
CN205377891U (en) Identity authentication device and system
CN107742214A (en) A kind of method of payment and payment system based on face recognition
Sanchez-Reillo Securing information and operations in a smart card through biometrics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180605