CN106384595A - Voice password based payment platform login method and device - Google Patents


Info

Publication number
CN106384595A
CN106384595A (application CN201610703600.3A)
Authority
CN
China
Prior art keywords
audio signal
feature point
frame
point pairs
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610703600.3A
Other languages
Chinese (zh)
Other versions
CN106384595B (en)
Inventor
陈勇
何清素
申海娟
王俊生
沙彦柱
崔九鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BEIJING HUITONG JINCAI INFORMATION TECHNOLOGY Co Ltd
Original Assignee
BEIJING HUITONG JINCAI INFORMATION TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BEIJING HUITONG JINCAI INFORMATION TECHNOLOGY Co Ltd filed Critical BEIJING HUITONG JINCAI INFORMATION TECHNOLOGY Co Ltd
Priority to CN201610703600.3A
Publication of CN106384595A
Application granted
Publication of CN106384595B
Legal status: Active (granted)


Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00: Speaker identification or verification
    • G10L17/02: Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
    • G10L17/06: Decision making techniques; Pattern matching strategies
    • G10L17/08: Use of distortion metrics or a particular distance between probe pattern and reference templates
    • G10L17/22: Interactive procedures; Man-machine interfaces
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00: Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/32: including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials
    • H04L9/3226: using a predetermined code, e.g. password, passphrase or PIN
    • H04L9/3231: Biological data, e.g. fingerprint, voice or retina

Abstract

The invention discloses a payment platform login method and device based on a voice password. The login method comprises the steps of: receiving the account and audio signal input by a user, decomposing the audio signal, and building a feature point pair model for each frame; querying a preset user information table to obtain the audio signals that contain one or more of the newly built feature point pair models; matching the feature point pair models of each retrieved audio signal against all newly built feature point pair models; if the matching succeeds, obtaining from the user information table the account information corresponding to the matched audio signal and judging whether it is identical to the account information input by the user, logging in to the payment platform if it is identical and failing the login otherwise; and failing the login directly if the matching is unsuccessful. The method and device thereby address the poor security, inconvenience and inefficiency of existing payment passwords.

Description

Payment platform login method and device based on a voice password
Technical field
The present invention relates to the field of communication technology, and in particular to a payment platform login method and device based on a voice password.
Background technology
With the continuous development of information technology, quick payment has become popular in large and medium-sized cities because it is fast and convenient, its main advantage being speed. Payment security, however, has become a focus of public concern. Existing security measures such as fixed numeric passwords, passwords combining digits and letters, and dynamic passwords all carry the risk that a hacker or Trojan steals the user's account password, so the user's property or data can be stolen. Dynamic passwords, in addition, are time-consuming and must be used in real time, and are neither convenient nor quick.
Content of the invention
In view of this, an object of the present invention is to propose a payment platform login method and device based on a voice password, so as to solve the problems that existing payment passwords are insecure, inconvenient and inefficient.
The payment platform login method based on a voice password provided by the present invention for the above purpose comprises the steps of:
receiving the account and audio signal input by a user, decomposing the audio signal, and building a feature point pair model for each frame;
querying a preset user information table to obtain the audio signals that contain one or more of the newly built feature point pair models;
matching the feature point pair models of each retrieved audio signal against all newly built feature point pair models; if the matching succeeds, obtaining from the user information table the account information corresponding to the matched audio signal and judging whether it is identical to the account information input by the user, logging in to the payment platform if it is identical and failing the login otherwise; and failing the login directly if the matching is unsuccessful.
In some embodiments of the invention, after the audio signal input by the user is received, the method further comprises:
sampling the received audio signal and converting the sampled audio analog signal into an audio digital signal;
framing and windowing the audio digital signal according to a preset time threshold;
performing time-frequency processing on the audio digital signal of each framed and windowed frame;
extracting the feature points of the processed time-frequency spectrum, and building the feature point pair models.
In some embodiments of the invention, building the feature point pair models comprises:
taking the peak curve of the first frame of the audio signal as the initial threshold curve, where the threshold curve of each frame after the first is obtained by multiplying the threshold curve of the previous frame by the peak curve of the current frame and then by an attenuation coefficient;
extracting the screened peak points according to the threshold curve of each frame, where every peak point in a frame is compared with the corresponding point of that frame's threshold curve: if the peak point is lower than the corresponding point of the threshold curve it is discarded, and if it is higher it is retained;
selecting the first several screened peak points of each frame as the feature points of that frame;
pairing, in turn, each feature point of the previous frame with the feature points in the region of the following frame.
In some embodiments of the invention, matching the feature point pair models of each retrieved audio signal against all newly built feature point pair models comprises:
putting all feature point pair models of each audio signal into one-to-one correspondence with the newly built feature point pair models according to the content information of the feature point pairs;
calculating the time differences between corresponding feature point pair models, sorting the time differences in ascending order, and obtaining the minimum time difference;
counting, for each audio signal, the number of times the minimum time difference occurs;
judging whether this count is greater than or equal to a preset minimum count threshold: if so, the audio signal matches the newly built feature point pair models successfully, otherwise the matching fails; alternatively, directly extracting the audio signal with the largest number of minimum time differences as the audio signal that matches the newly built feature point pair models.
In some embodiments of the invention, before the account and audio signal input by the user are received, the method further comprises:
popping up a floating layer, displaying prompt information and starting a monitoring process;
obtaining, when an input audio signal is detected, the duration since the monitoring process was started;
judging whether the obtained duration is greater than a preset duration threshold, and failing this login if it is.
In another aspect, the present invention also provides a payment platform login device based on a voice password, comprising:
a login information receiving unit for receiving the account and audio signal input by a user;
an audio signal processing unit for decomposing the audio signal and building a feature point pair model for each frame;
a query unit for querying a preset user information table to obtain the audio signals that contain one or more of the newly built feature point pair models;
a matching unit for matching the feature point pair models of each retrieved audio signal against all newly built feature point pair models; if the matching succeeds, obtaining from the user information table the account information corresponding to the matched audio signal and judging whether it is identical to the account information input by the user, logging in to the payment platform if it is identical and failing the login otherwise; and failing the login directly if the matching is unsuccessful.
In some embodiments of the invention, the audio signal processing unit is further configured to:
sample the received audio signal and convert the sampled audio analog signal into an audio digital signal;
frame and window the audio digital signal according to a preset time threshold;
perform time-frequency processing on the audio digital signal of each framed and windowed frame;
extract the feature points of the processed time-frequency spectrum, and build the feature point pair models.
In some embodiments of the invention, the audio signal processing unit builds the feature point pair models by:
taking the peak curve of the first frame of the audio signal as the initial threshold curve, where the threshold curve of each frame after the first is obtained by multiplying the threshold curve of the previous frame by the peak curve of the current frame and then by an attenuation coefficient;
extracting the screened peak points according to the threshold curve of each frame, where every peak point in a frame is compared with the corresponding point of that frame's threshold curve: if the peak point is lower than the corresponding point of the threshold curve it is discarded, and if it is higher it is retained;
selecting the first several screened peak points of each frame as the feature points of that frame;
pairing, in turn, each feature point of the previous frame with the feature points in the region of the following frame.
In some embodiments of the invention, the matching unit matches the feature point pair models of each retrieved audio signal against the newly built feature point pair models by:
putting all feature point pair models of each audio signal into one-to-one correspondence with the newly built feature point pair models according to the content information of the feature point pairs;
calculating the time differences between corresponding feature point pair models, sorting the time differences in ascending order, and obtaining the minimum time difference;
counting, for each audio signal, the number of times the minimum time difference occurs;
judging whether this count is greater than or equal to a preset minimum count threshold: if so, the audio signal matches the newly built feature point pair models successfully, otherwise the matching fails; alternatively, directly extracting the audio signal with the largest number of minimum time differences as the audio signal that matches the newly built feature point pair models.
In some embodiments of the invention, before receiving the account and audio signal input by the user, the login information receiving unit is further configured to:
pop up a floating layer, display prompt information and start a monitoring process;
obtain, when an input audio signal is detected, the duration since the monitoring process was started;
judge whether the obtained duration is greater than a preset duration threshold, and fail this login if it is.
As can be seen from the above, the payment platform login method and device based on a voice password provided by the present invention decompose the received audio signal and build a feature point pair model for each frame; query a preset user information table to obtain the audio signals that contain one or more of the newly built feature point pair models; match the feature point pair models of each retrieved audio signal against all newly built feature point pair models; if the matching succeeds, obtain from the user information table the account information corresponding to the matched audio signal and judge whether it is identical to the account information input by the user, logging in to the payment platform if it is identical and failing the login otherwise; and fail the login directly if the matching is unsuccessful. The payment platform login method and device based on a voice password of the present invention can thereby overcome the defects of existing payment passwords and make payment faster, more convenient and safer.
Brief description of the drawings
Fig. 1 is a schematic flowchart of the payment platform login method based on a voice password in a first embodiment of the present invention;
Fig. 2 is a schematic flowchart of the payment platform login method based on a voice password in a reference embodiment of the present invention;
Fig. 3 is a schematic structural diagram of the payment platform login device based on a voice password in an embodiment of the present invention.
Specific embodiment
To make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to specific embodiments and the accompanying drawings.
Referring to the flow shown in Fig. 1, which is a schematic flowchart of the payment platform login method based on a voice password in the first embodiment of the present invention, the payment platform login method based on a voice password comprises:
Step 101: receive the account and audio signal input by the user.
Step 102: decompose the audio signal and build a feature point pair model for each frame.
Step 103: query a preset user information table to obtain the audio signals that contain one or more of the newly built feature point pair models.
Step 104: match the feature point pair models of each retrieved audio signal against all newly built feature point pair models; if the matching succeeds, execute step 105, otherwise execute step 106.
Step 105: obtain from the user information table the account information corresponding to the matched audio signal, and judge whether it is identical to the account information input by the user; if identical, log in to the payment platform; if not, execute step 106.
Step 106: the login fails.
In a preferred embodiment, after the account and audio signal input by the user are received in step 101, the received audio signal may be sampled, preferably at an 8 kHz sample rate. The sampled audio analog signal is converted into an audio digital signal, which is then framed and windowed according to a preset time threshold; preferably the time threshold is 10 s, i.e. one audio digital signal segment is divided every 10 s. The audio digital signal of each framed and windowed frame is then subjected to time-frequency processing, preferably using the fast Fourier transform (FFT), which yields the time and frequency characteristics of the audio digital signal of each frame. Finally, the feature points of the processed time-frequency spectrum are extracted and the feature point pair models are built.
As a reference embodiment, referring to Fig. 2, the payment platform login method based on a voice password may specifically adopt the following steps:
Step 201: receive the account and audio signal input by the user.
The user inputs audio information according to the prompt information, for example: any song of your favourite star; the time and place you first met your partner; your parents' dates of birth; and so on. Preferably, the audio information input by the user can be received through a microphone.
Step 202: sample the received audio signal.
The frequency range of an ordinary human voice signal is 300 Hz to 3.4 kHz. According to the Nyquist sampling theorem, only when the sampling frequency is higher than twice the highest frequency of the voice signal can the discrete analog signal uniquely represent the voice signal and be restored to the original sound. Preferably, an 8 kHz sample rate is used in this embodiment to sample the received audio information.
Step 203: convert the sampled audio analog signal into an audio digital signal.
In one embodiment, the sampled audio analog signal can be converted into an audio digital signal by an analog-to-digital converter.
Step 204: frame and window the audio digital signal according to a preset time threshold.
In one embodiment, the required voice duration is 10 s, i.e. the preset time threshold is 10 s and one audio digital signal segment is divided every 10 s.
Step 205: perform time-frequency processing on the audio digital signal of each framed and windowed frame. In a specific implementation, this is done with the fast Fourier transform (FFT), which yields the time and frequency characteristics of the audio digital signal of each frame.
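As an editorial illustration of steps 202 to 205 (not part of the original disclosure), the segmentation, framing, windowing and FFT can be sketched as follows in Python. The 8 kHz sample rate and the 10 s segment threshold come from the embodiment; the frame length, hop size and window type are assumptions the patent leaves open.

```python
import numpy as np

FS = 8000          # 8 kHz sample rate (step 202)
SEGMENT_S = 10     # 10 s time threshold (step 204)
FRAME_LEN = 1024   # assumed frame length; not fixed by the patent
HOP_LEN = 512      # assumed hop size; not fixed by the patent

def segments(signal: np.ndarray):
    """Cut the digitised signal into 10 s pieces (step 204)."""
    step = FS * SEGMENT_S
    for start in range(0, len(signal), step):
        yield signal[start:start + step]

def spectrogram(segment: np.ndarray) -> np.ndarray:
    """Frame, window and FFT one segment (steps 204-205); returns the
    magnitude spectrum of every frame as an (n_frames, n_bins) array."""
    window = np.hanning(FRAME_LEN)  # assumed window type
    n_frames = max(1 + (len(segment) - FRAME_LEN) // HOP_LEN, 0)
    if n_frames == 0:
        return np.empty((0, FRAME_LEN // 2 + 1))
    frames = np.stack([
        segment[i * HOP_LEN: i * HOP_LEN + FRAME_LEN] * window
        for i in range(n_frames)
    ])
    return np.abs(np.fft.rfft(frames, axis=1))
```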
Step 206: extract the feature points of the processed time-frequency spectrum and build the feature point pair models. The specific implementation process includes:
Step one: take the peak curve of the first frame of the audio signal as the initial threshold curve.
Step two: after the first frame, the threshold curve of each frame is obtained by multiplying the threshold curve of the previous frame by the peak curve of the current frame and then by an attenuation coefficient, where the attenuation coefficient is 0.98.
Step three: extract the screened peak points according to the threshold curve of each frame. Every peak point in a frame is compared with the corresponding point of that frame's threshold curve: if the peak point is lower than the corresponding point of the threshold curve it is discarded, and if it is higher it is retained.
Step four: select the first several screened peak points of each frame as the feature points of that frame. In one embodiment, the first 5 screened peak points of each frame are selected.
Step five: pair, in turn, each feature point of the previous frame with the feature points in the region of the following frame. Each pair of feature points is described by one 32-bit long integer whose bits are composed of the frequencies of the two feature points and the time difference between them.
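A rough Python sketch of steps one to five follows, again as an editorial illustration rather than the patent's own implementation. The threshold update follows the text of step two literally, the top-5 selection follows step four, and the pairing of each point with the points of the immediately following frame, as well as the 12+12+8 bit layout of the 32-bit descriptor, are assumptions (the patent only states that the descriptor encodes the two frequencies and the time difference).

```python
import numpy as np

DECAY = 0.98   # attenuation coefficient given in step two
TOP_N = 5      # first 5 screened peaks per frame, per step four

def frame_feature_points(spec: np.ndarray):
    """spec: (n_frames, n_bins) magnitude spectrogram.
    Applies the decaying threshold curve of steps one to three and keeps
    the top-N surviving peaks of each frame (step four)."""
    threshold = spec[0].copy()  # first frame's peak curve (step one)
    points = []
    for t, frame in enumerate(spec):
        if t > 0:
            # step two, taken literally: previous threshold x current curve x decay
            threshold = threshold * frame * DECAY
        # step three screening; ">=" is used so the first frame is not empty
        keep = np.where(frame >= threshold)[0]
        keep = keep[np.argsort(frame[keep])[::-1][:TOP_N]]
        points.append(keep)
    return points

def pack_pair(f1: int, f2: int, dt: int) -> int:
    """32-bit descriptor of one pair: two frequencies plus their time
    difference (step five). The 12+12+8 bit split is an assumption."""
    return ((f1 & 0xFFF) << 20) | ((f2 & 0xFFF) << 8) | (dt & 0xFF)

def feature_point_pairs(points):
    """Pair each feature point of a frame with the feature points of the
    following frame (step five); returns (descriptor, frame index) tuples."""
    pairs = []
    for t in range(len(points) - 1):
        for f1 in points[t]:
            for f2 in points[t + 1]:
                pairs.append((pack_pair(int(f1), int(f2), 1), t))
    return pairs
```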
Step 207: query the user information table to obtain the audio signals that contain one or more of the newly built feature point pair models.
Step 208: match the feature point pair models of each retrieved audio signal against all newly built feature point pair models; if the matching succeeds, execute step 209, otherwise execute step 210.
Preferably, a hash inverted index can be used to match all retrieved feature point pair models against the newly built feature point pair models. The specific implementation process includes:
Step one: put all feature point pair models of each audio signal into one-to-one correspondence with the newly built feature point pair models according to the content information of the feature point pairs. For example, if the content of the feature point pair models of a stored audio signal is the syllable sequence "I / am / the little / prince" and the content of the newly built feature point pair models is "I / am / the little / princess", then the models of the audio signal for "I", "am" and "the little" are put into one-to-one correspondence with the newly built models for "I", "am" and "the little".
Step two: calculate the time differences between corresponding feature point pair models, sort the time differences in ascending order, and obtain the minimum time difference.
Step three: count, for each audio signal, the number of times the minimum time difference occurs.
Step four: judge whether this count is greater than or equal to a preset minimum count threshold; if so, the audio signal matches the newly built feature point pair models successfully, otherwise the matching fails. Alternatively, directly extract the audio signal with the largest number of minimum time differences as the audio signal that matches the newly built feature point pair models.
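The hash inverted index and the count-based decision could be sketched as follows; this is an illustrative variant, not the patent's code. The index is keyed by the 32-bit pair descriptor, and the sketch counts the most frequent stored-minus-query time offset per candidate (a common fingerprint-matching formulation of steps two to four, whereas the text literally counts occurrences of the minimum time difference). The default count threshold is an assumption.

```python
from collections import defaultdict

MIN_COUNT = 20   # assumed minimum count threshold; the patent leaves the value open

def build_index(stored):
    """stored: {account: [(descriptor, frame_time), ...]} for registered users.
    Hash inverted index: 32-bit descriptor -> [(account, stored_time), ...]."""
    index = defaultdict(list)
    for account, pairs in stored.items():
        for code, t in pairs:
            index[code].append((account, t))
    return index

def match(index, query_pairs, min_count=MIN_COUNT):
    """query_pairs: (descriptor, frame_time) tuples from the newly input audio.
    For every candidate signal, count how often each stored-minus-query time
    difference occurs and keep the best count (steps two and three); accept
    the best candidate only if its count reaches the threshold (step four)."""
    offset_counts = defaultdict(lambda: defaultdict(int))
    for code, q_t in query_pairs:
        for account, s_t in index.get(code, ()):
            offset_counts[account][s_t - q_t] += 1
    best_account, best_count = None, 0
    for account, offsets in offset_counts.items():
        count = max(offsets.values())
        if count > best_count:
            best_account, best_count = account, count
    return best_account if best_count >= min_count else None
```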
Step 209: obtain from the user information table the account information corresponding to the matched audio signal, and judge whether it is identical to the account information input by the user; if identical, log in to the payment platform; if not, execute step 210.
Step 210: the login fails.
It should also be noted that when a user has not yet registered (the user does not exist in the user information table), steps 201 to 206 can be performed and the user's account and the corresponding feature point pair models can then be stored in the preset user information table. In other words, the login process consists of steps 201 to 210, while the registration process consists of steps 201 to 206 followed by storing the user's account and the corresponding feature point pair models in the preset user information table.
As a further example, when step 209 does not find account information in the preset user information table identical to the account information input by the user, a floating layer can pop up prompting registration of a new account. When an instruction to register a new account is received, the mapping between this account and the feature point pair models can be established in the preset user information table. Preferably, the mapping between the new account and the feature point pair models can be stored directly when the new account is registered. Alternatively, the prompt information used at login can be set first, the audio information answering this prompt can then be collected, the feature point pair models of the collected audio information can be obtained according to steps 202 to 206, and the new account can be stored in the user information table together with the feature point pair models obtained from this collection.
In a preferred embodiment, when the prompt information used at login is set, a duration threshold for answering the prompt information can also be set. The specific implementation of this duration threshold includes:
Step one: pop up a floating layer, display the prompt information and start a monitoring process. The prompt information is the question for which an audio signal needs to be input, for example: "What is your favourite book recently?"
Step two: when an input audio signal is detected, obtain the duration since the monitoring process was started.
Step three: judge whether the obtained duration is greater than the preset duration threshold; if it is greater, this login fails; if it is less than or equal to the threshold, execute steps 202 to 210.
Preferably, the number of times the prompt information may be answered can also be set, for example three times, i.e. the user is given three chances to input the audio information. In addition, if the payment platform has not been logged in to successfully after the set number of answers, the account can be locked to prevent theft. At the same time, an unlocking procedure for the account can be designed, for example identity verification (which may be uploading an identity card). Preferably, the mapping between accounts and user information can also be stored in the preset user information table, and the user information can be compared when the account is unlocked.
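A minimal sketch of the prompt, timeout and retry policy described above (three attempts, a per-answer duration threshold, and locking the account after repeated failures); the function names and the 30-second threshold are illustrative assumptions, not part of the patent.

```python
import time

MAX_ATTEMPTS = 3         # three chances to answer, as suggested above
ANSWER_TIMEOUT_S = 30.0  # assumed duration threshold; the patent leaves the value open

def login_with_prompt(show_prompt, record_answer, verify, lock_account, account):
    """show_prompt() displays the floating layer and question and starts the
    monitoring process; record_answer() blocks until audio input is detected
    and returns it; verify(audio) runs steps 202 to 210 for this account."""
    for _ in range(MAX_ATTEMPTS):
        show_prompt()
        started = time.monotonic()
        audio = record_answer()
        if time.monotonic() - started > ANSWER_TIMEOUT_S:
            continue                 # answer took too long: this attempt fails
        if verify(audio):
            return True              # logged in to the payment platform
    lock_account(account)            # lock the account after repeated failures
    return False
```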
In another aspect of the present invention, a payment platform login device based on a voice password is also provided. As shown in Fig. 3, the payment platform login device based on a voice password comprises a login information receiving unit 301, an audio signal processing unit 302, a query unit 303 and a matching unit 304 that are connected in sequence. The login information receiving unit 301 receives the account and audio signal input by the user; the audio signal processing unit 302 decomposes the audio signal and builds a feature point pair model for each frame; the query unit 303 queries a preset user information table to obtain the audio signals that contain one or more of the newly built feature point pair models; and the matching unit 304 matches the feature point pair models of each retrieved audio signal against all newly built feature point pair models. If the matching succeeds, the account information corresponding to the matched audio signal is obtained from the user information table and compared with the account information input by the user; if identical, the payment platform is logged in to, otherwise the login fails; if the matching is unsuccessful, the login fails directly.
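The chain of the four units of Fig. 3 could be sketched as the following composition; the class and method names are placeholders for illustration only.

```python
class PaymentPlatformLoginDevice:
    """Chains the four units of Fig. 3; names are placeholders, not the patent's."""

    def __init__(self, receiver, processor, query_unit, matcher):
        self.receiver = receiver      # login information receiving unit 301
        self.processor = processor    # audio signal processing unit 302
        self.query_unit = query_unit  # query unit 303
        self.matcher = matcher        # matching unit 304

    def login(self) -> bool:
        account, audio = self.receiver.receive()
        pairs = self.processor.build_feature_point_pairs(audio)
        candidates = self.query_unit.lookup(pairs)   # preset user information table
        matched_account = self.matcher.match(candidates, pairs)
        # log in only when the matched signal's account equals the input account
        return matched_account is not None and matched_account == account
```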
In a preferred embodiment, the audio signal processing unit 302 can process the received audio signal as follows. The received audio signal can be sampled, preferably at an 8 kHz sample rate. The sampled audio analog signal is converted into an audio digital signal, which is then framed and windowed according to a preset time threshold, preferably 10 s, i.e. one audio digital signal segment is divided every 10 s. The audio digital signal of each framed and windowed frame is then subjected to time-frequency processing, preferably using the fast Fourier transform (FFT), which yields the time and frequency characteristics of the audio digital signal of each frame. Finally, the feature points of the processed time-frequency spectrum are extracted and the feature point pair models are built.
In a further embodiment, when the audio signal processing unit 302 builds the feature point pair models, the specific implementation process includes:
Step one: take the peak curve of the first frame of the audio signal as the initial threshold curve.
Step two: after the first frame, the threshold curve of each frame is obtained by multiplying the threshold curve of the previous frame by the peak curve of the current frame and then by an attenuation coefficient, where the attenuation coefficient is 0.98.
Step three: extract the screened peak points according to the threshold curve of each frame. Every peak point in a frame is compared with the corresponding point of that frame's threshold curve: if the peak point is lower than the corresponding point of the threshold curve it is discarded, and if it is higher it is retained.
Step four: select the first several screened peak points of each frame as the feature points of that frame. In one embodiment, the first 5 screened peak points of each frame are selected.
Step five: pair, in turn, each feature point of the previous frame with the feature points in the region of the following frame. Each pair of feature points is described by one 32-bit long integer whose bits are composed of the frequencies of the two feature points and the time difference between them.
In another embodiment of this device, when the matching unit 304 matches the feature point pair models of each retrieved audio signal against the newly built feature point pair models, the specific implementation process includes:
Step one: put all feature point pair models of each audio signal into one-to-one correspondence with the newly built feature point pair models according to the content information of the feature point pairs.
Step two: calculate the time differences between corresponding feature point pair models, sort the time differences in ascending order, and obtain the minimum time difference.
Step three: count, for each audio signal, the number of times the minimum time difference occurs.
Step four: judge whether this count is greater than or equal to a preset minimum count threshold; if so, the audio signal matches the newly built feature point pair models successfully, otherwise the matching fails. Alternatively, directly extract the audio signal with the largest number of minimum time differences as the audio signal that matches the newly built feature point pair models.
In addition, as a reference embodiment, when the login information receiving unit 301 sets the prompt information used at login, a duration threshold for answering the prompt information can also be set. The specific implementation of this duration threshold includes:
Step one: pop up a floating layer, display the prompt information and start a monitoring process. The prompt information is the question for which an audio signal needs to be input, for example: "What is your favourite book recently?"
Step two: when an input audio signal is detected, obtain the duration since the monitoring process was started.
Step three: judge whether the obtained duration is greater than the preset duration threshold; if it is greater, this login fails.
It should be noted that the specific implementation of the payment platform login device based on a voice password of the present invention has already been described in detail in the payment platform login method based on a voice password above, so the duplicated content is not repeated here.
In summary, the payment platform login method and device based on a voice password provided by the present invention are mainly used for the security protection of quick payment. A voice password can effectively guard against network threats such as hackers and Trojans stealing the user's account password and counterfeit websites, which lead to the loss of the user's property or data, and thus ensures the security of the user's payment. Moreover, the present invention is convenient, quick and highly secure, has broad applicability and great promotion value, and the whole payment platform login method and device based on a voice password are compact and easy to control.
Those of ordinary skill in the art should understand that the discussion of any of the above embodiments is merely exemplary and is not intended to imply that the scope of the present disclosure (including the claims) is limited to these examples. Within the spirit of the present invention, the technical features of the above embodiments or of different embodiments may also be combined, the steps may be implemented in any order, and many other variations of the different aspects of the present invention as described above exist which, for the sake of brevity, are not provided in detail.
In addition, to simplify the description and discussion, and in order not to obscure the present invention, well-known power/ground connections to integrated circuit (IC) chips and other components may or may not be shown in the accompanying drawings. Furthermore, devices may be shown in block diagram form in order to avoid obscuring the present invention, which also takes into account the fact that the details of the implementation of these block diagram arrangements depend highly on the platform on which the present invention is to be implemented (i.e. these details should be well within the understanding of those skilled in the art). Where specific details (for example, circuits) are set forth to describe exemplary embodiments of the present invention, it will be apparent to those skilled in the art that the present invention can be implemented without these specific details or with variations of them. Therefore, these descriptions should be regarded as illustrative rather than restrictive.
Although the present invention has been described in conjunction with specific embodiments thereof, many alternatives, modifications and variations of these embodiments will be apparent to those of ordinary skill in the art in light of the foregoing description. For example, other memory architectures (for example, dynamic RAM (DRAM)) may use the embodiments discussed.
The embodiments of the present invention are intended to cover all such alternatives, modifications and variations that fall within the broad scope of the appended claims. Therefore, any omissions, modifications, equivalent replacements, improvements and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (10)

1. A payment platform login method based on a voice password, characterised in that it comprises the steps of:
receiving the account and audio signal input by a user, decomposing the audio signal, and building a feature point pair model for each frame;
querying a preset user information table to obtain the audio signals that contain one or more of the newly built feature point pair models;
matching the feature point pair models of each retrieved audio signal against all newly built feature point pair models; if the matching succeeds, obtaining from the user information table the account information corresponding to the matched audio signal and judging whether it is identical to the account information input by the user, logging in to the payment platform if it is identical and failing the login otherwise; and failing the login directly if the matching is unsuccessful.
2. The method according to claim 1, characterised in that after the audio signal input by the user is received, the method further comprises:
sampling the received audio signal and converting the sampled audio analog signal into an audio digital signal;
framing and windowing the audio digital signal according to a preset time threshold;
performing time-frequency processing on the audio digital signal of each framed and windowed frame;
extracting the feature points of the processed time-frequency spectrum, and building the feature point pair models.
3. The method according to claim 2, characterised in that building the feature point pair models comprises:
taking the peak curve of the first frame of the audio signal as the initial threshold curve, where the threshold curve of each frame after the first is obtained by multiplying the threshold curve of the previous frame by the peak curve of the current frame and then by an attenuation coefficient;
extracting the screened peak points according to the threshold curve of each frame, where every peak point in a frame is compared with the corresponding point of that frame's threshold curve: if the peak point is lower than the corresponding point of the threshold curve it is discarded, and if it is higher it is retained;
selecting the first several screened peak points of each frame as the feature points of that frame;
pairing, in turn, each feature point of the previous frame with the feature points in the region of the following frame.
4. The method according to claim 1, characterised in that matching the feature point pair models of each retrieved audio signal against all newly built feature point pair models comprises:
putting all feature point pair models of each audio signal into one-to-one correspondence with the newly built feature point pair models according to the content information of the feature point pairs;
calculating the time differences between corresponding feature point pair models, sorting the time differences in ascending order, and obtaining the minimum time difference;
counting, for each audio signal, the number of times the minimum time difference occurs;
judging whether this count is greater than or equal to a preset minimum count threshold: if so, the audio signal matches the newly built feature point pair models successfully, otherwise the matching fails; alternatively, directly extracting the audio signal with the largest number of minimum time differences as the audio signal that matches the newly built feature point pair models.
5. The method according to any one of claims 1 to 4, characterised in that before the account and audio signal input by the user are received, the method further comprises:
popping up a floating layer, displaying prompt information and starting a monitoring process;
obtaining, when an input audio signal is detected, the duration since the monitoring process was started;
judging whether the obtained duration is greater than a preset duration threshold, and failing this login if it is.
6. A payment platform login device based on a voice password, characterised in that it comprises:
a login information receiving unit for receiving the account and audio signal input by a user;
an audio signal processing unit for decomposing the audio signal and building a feature point pair model for each frame;
a query unit for querying a preset user information table to obtain the audio signals that contain one or more of the newly built feature point pair models;
a matching unit for matching the feature point pair models of each retrieved audio signal against all newly built feature point pair models; if the matching succeeds, obtaining from the user information table the account information corresponding to the matched audio signal and judging whether it is identical to the account information input by the user, logging in to the payment platform if it is identical and failing the login otherwise; and failing the login directly if the matching is unsuccessful.
7. The device according to claim 6, characterised in that the audio signal processing unit is further configured to:
sample the received audio signal and convert the sampled audio analog signal into an audio digital signal;
frame and window the audio digital signal according to a preset time threshold;
perform time-frequency processing on the audio digital signal of each framed and windowed frame;
extract the feature points of the processed time-frequency spectrum, and build the feature point pair models.
8. The device according to claim 7, characterised in that the audio signal processing unit builds the feature point pair models by:
taking the peak curve of the first frame of the audio signal as the initial threshold curve, where the threshold curve of each frame after the first is obtained by multiplying the threshold curve of the previous frame by the peak curve of the current frame and then by an attenuation coefficient;
extracting the screened peak points according to the threshold curve of each frame, where every peak point in a frame is compared with the corresponding point of that frame's threshold curve: if the peak point is lower than the corresponding point of the threshold curve it is discarded, and if it is higher it is retained;
selecting the first several screened peak points of each frame as the feature points of that frame;
pairing, in turn, each feature point of the previous frame with the feature points in the region of the following frame.
9. The device according to claim 6, characterised in that the matching unit matches the feature point pair models of each retrieved audio signal against the newly built feature point pair models by:
putting all feature point pair models of each audio signal into one-to-one correspondence with the newly built feature point pair models according to the content information of the feature point pairs;
calculating the time differences between corresponding feature point pair models, sorting the time differences in ascending order, and obtaining the minimum time difference;
counting, for each audio signal, the number of times the minimum time difference occurs;
judging whether this count is greater than or equal to a preset minimum count threshold: if so, the audio signal matches the newly built feature point pair models successfully, otherwise the matching fails; alternatively, directly extracting the audio signal with the largest number of minimum time differences as the audio signal that matches the newly built feature point pair models.
10. The device according to any one of claims 6 to 9, characterised in that before receiving the account and audio signal input by the user, the login information receiving unit is further configured to:
pop up a floating layer, display prompt information and start a monitoring process;
obtain, when an input audio signal is detected, the duration since the monitoring process was started;
judge whether the obtained duration is greater than a preset duration threshold, and fail this login if it is.
CN201610703600.3A 2016-08-22 2016-08-22 A kind of payment platform login method and device based on speech cipher Active CN106384595B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610703600.3A CN106384595B (en) 2016-08-22 2016-08-22 A kind of payment platform login method and device based on speech cipher

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610703600.3A CN106384595B (en) 2016-08-22 2016-08-22 A kind of payment platform login method and device based on speech cipher

Publications (2)

Publication Number Publication Date
CN106384595A true CN106384595A (en) 2017-02-08
CN106384595B CN106384595B (en) 2019-04-02

Family

ID=57916850

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610703600.3A Active CN106384595B (en) 2016-08-22 2016-08-22 A kind of payment platform login method and device based on speech cipher

Country Status (1)

Country Link
CN (1) CN106384595B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108768977A (en) * 2018-05-17 2018-11-06 东莞市华睿电子科技有限公司 A kind of terminal system login method based on speech verification
CN108876983A (en) * 2018-05-17 2018-11-23 东莞市华睿电子科技有限公司 A kind of unlocking method of safety box with function of intelligent lock
CN111883141A (en) * 2020-07-27 2020-11-03 李林林 Text semi-correlation voiceprint recognition method and system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101772015A (en) * 2008-12-29 2010-07-07 卢中江 Method for starting up mobile terminal through voice password
CN102098159A (en) * 2010-07-28 2011-06-15 胡旭光 Secret key device and method for mobile phone
WO2011146531A9 (en) * 2010-05-18 2012-07-12 Acea Biosciences, Inc Data analysis of impedance-based cardiomyocyte-beating signals as detected on real-time cell analysis (rtca) cardio instruments
CN103685185A (en) * 2012-09-14 2014-03-26 上海掌门科技有限公司 Mobile equipment voiceprint registration and authentication method and system
CN104022879A (en) * 2014-05-29 2014-09-03 金蝶软件(中国)有限公司 Voice security verification method and apparatus

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101772015A (en) * 2008-12-29 2010-07-07 卢中江 Method for starting up mobile terminal through voice password
WO2011146531A9 (en) * 2010-05-18 2012-07-12 Acea Biosciences, Inc Data analysis of impedance-based cardiomyocyte-beating signals as detected on real-time cell analysis (rtca) cardio instruments
CN102098159A (en) * 2010-07-28 2011-06-15 胡旭光 Secret key device and method for mobile phone
CN103685185A (en) * 2012-09-14 2014-03-26 上海掌门科技有限公司 Mobile equipment voiceprint registration and authentication method and system
CN104022879A (en) * 2014-05-29 2014-09-03 金蝶软件(中国)有限公司 Voice security verification method and apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
孙立峰 等: "《和谐人机环境2006》", 30 June 2007, 清华大学出版社 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108768977A (en) * 2018-05-17 2018-11-06 东莞市华睿电子科技有限公司 A kind of terminal system login method based on speech verification
CN108876983A (en) * 2018-05-17 2018-11-23 东莞市华睿电子科技有限公司 A kind of unlocking method of safety box with function of intelligent lock
CN111883141A (en) * 2020-07-27 2020-11-03 李林林 Text semi-correlation voiceprint recognition method and system
CN111883141B (en) * 2020-07-27 2022-02-25 重庆金宝保信息技术服务有限公司 Text semi-correlation voiceprint recognition method and system

Also Published As

Publication number Publication date
CN106384595B (en) 2019-04-02

Similar Documents

Publication Publication Date Title
CN106373575B (en) User voiceprint model construction method, device and system
US8812319B2 (en) Dynamic pass phrase security system (DPSS)
CN102254559A (en) Identity authentication system and method based on vocal print
CN105913850B (en) Text correlation vocal print method of password authentication
CN103685185B (en) Mobile equipment voiceprint registration, the method and system of certification
WO2018149209A1 (en) Voice recognition method, electronic device, and computer storage medium
CN108074310A (en) Voice interactive method and intelligent lock administration system based on sound identification module
CN105869641A (en) Speech recognition device and speech recognition method
CN101441869A (en) Method and terminal for speech recognition of terminal user identification
CN107886958A (en) A kind of express delivery cabinet pickup method and device based on vocal print
US20100049526A1 (en) System and method for auditory captchas
CN106790054A (en) Interactive authentication system and method based on recognition of face and Application on Voiceprint Recognition
CN107862005A (en) User view recognition methods and device
CN101685635A (en) Identity authentication system and method
CN102413100A (en) Voice-print authentication system having voice-print password picture prompting function and realization method thereof
CN102752453A (en) Mobile phone unlocking method based on voice recognition
CN103325037A (en) Mobile payment safety verification method based on voice recognition
CN108062464A (en) Terminal control method and system based on Application on Voiceprint Recognition
CN104468522A (en) Voiceprint authentication method and device
CN106384595A (en) Voice password based payment platform login method and device
CN102413101A (en) Voice-print authentication system having voice-print password voice prompting function and realization method thereof
CN102567534B (en) Interactive product user generated content intercepting system and intercepting method for the same
CN107451131A (en) A kind of audio recognition method and device
CN103078828A (en) Cloud-model voice authentication system
WO2017059679A1 (en) Account processing method and apparatus

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant