CN109036435A - Authentication and recognition methods based on voiceprint - Google Patents

Authentication and recognition methods based on voiceprint

Info

Publication number
CN109036435A
Authority
CN
China
Prior art keywords
voiceprint
user
vocal print
classification
library
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810928479.3A
Other languages
Chinese (zh)
Other versions
CN109036435B (en)
Inventor
余伟
赵静芝
李家虎
施文杰
胡发泽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Ping An Comprehensive Financial Services Co Ltd Shanghai Branch
Original Assignee
Shenzhen Ping An Comprehensive Financial Services Co Ltd Shanghai Branch
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Ping An Comprehensive Financial Services Co Ltd Shanghai Branch
Priority to CN201810928479.3A
Publication of CN109036435A
Application granted
Publication of CN109036435B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 17/00: Speaker identification or verification
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 17/00: Speaker identification or verification
    • G10L 17/04: Training, enrolment or model building
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 21/00: Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L 21/02: Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L 21/0208: Noise filtering

Abstract

The present invention discloses an identity authentication and recognition method based on voiceprint information, comprising the following steps. A voiceprint registration step obtains the user's voiceprint information and associates it with classification information and the user's personal information. A voiceprint storage step stores the voiceprint information into the corresponding classified voiceprint library according to the classification information, each classified voiceprint library having its own classification features. A classification weight calculation step, when identity authentication or recognition is performed on the user under a specified business scenario, calculates the classification weight of each classified voiceprint library according to the business scenario. A voiceprint comparison step obtains the user's current voiceprint and searches each classified library for voiceprint information that matches the current voiceprint. A voiceprint recognition step calculates a voiceprint value from each matching voiceprint and the classification weight of the classified voiceprint library that stores it, authenticates the voiceprint with the highest voiceprint value as the user's identity, and retrieves the personal information of the user associated with that voiceprint.

Description

Authentication and recognition methods based on voiceprint
Technical field
The present invention relates to the field of identity authentication and recognition technology, and more specifically to identity authentication and recognition technology based on voiceprint information.
Background art
With the development of internet technology and communication technology, more and more business is handled over the network or over communication networks such as the telephone. The same is true in the financial field: many credit operations that were traditionally handled in person are moving online. Online processing means handling credit business remotely over the internet or mobile networks, through video, audio, online forms and other non-face-to-face means.
On the one hand, online processing simplifies procedures, makes business easier to handle and improves efficiency; on the other hand, because the approval procedure is simplified, it also introduces additional business risk. Online approval in particular usually follows a fixed process: although customer data can be audited from multiple angles and related parties contacted for confirmation by risk-control staff, most of the review is a fixed program. Unscrupulous intermediaries can exploit loopholes in online risk-control auditing: to push through unqualified clients, or to earn higher service fees, they package and disguise customer data and impersonate contacts or colleagues in order to pass the risk-control audit.
Conventional business-logic verification processes find it hard to defend against such deliberate, low-cost disguise. A verification method that is more distinctive and harder to crack is therefore needed to raise the level of audit verification.
Summary of the invention
The present invention aims to propose a method for identity authentication and recognition based on voiceprint information. A voiceprint is a biological characteristic that is difficult to forge or package, and therefore offers higher reliability.
According to an embodiment of the present invention, an identity authentication and recognition method based on voiceprint information is proposed, comprising the following steps:
a voiceprint registration step: obtaining the user's voiceprint information and associating the voiceprint information with classification information and the user's personal information;
a voiceprint storage step: storing the voiceprint information into the corresponding classified voiceprint library according to the classification information, each classified voiceprint library having its own classification features;
a classification weight calculation step: when identity authentication or recognition is performed on a user under a specified business scenario, calculating the classification weight of each classified voiceprint library according to the business scenario, the classification weight being determined by the degree of association between the classification features of the classified voiceprint library and the business scenario;
a voiceprint comparison step: obtaining the user's current voiceprint and searching each classified library for voiceprint information that matches the current voiceprint;
a voiceprint recognition step: calculating a voiceprint value according to each matching voiceprint and the classification weight of the classified voiceprint library storing it, authenticating the voiceprint with the highest voiceprint value as the user's identity, and obtaining the personal information of the user associated with that voiceprint.
In one embodiment, the voiceprint-based identity authentication and recognition method further includes:
a business-logic authentication step: performing business-logic authentication of the user's identity under the current business scenario according to the personal information of the user associated with the voiceprint.
In one embodiment, the classification information includes a business category and a region category.
In one embodiment, the user's voiceprint information is obtained through one of the following channels:
obtaining the voiceprint information from the call audio during a telephone call with the user;
obtaining the voiceprint information from the voice stream of the audio signal of the video platform or client during a video call with the user.
In one embodiment, the voiceprint registration step includes:
obtaining the user's voice stream;
voice stream segmentation: segmenting the voice stream according to a minimum recognition duration;
voice denoising: removing noise unrelated to the speech;
speech feature extraction: extracting speech features from the voice stream segments;
voiceprint model building: building a voiceprint model from the speech features, the voiceprint model yielding the voiceprint information.
In one embodiment, the voiceprint comparison step includes:
obtaining the user's current voice stream;
voice stream segmentation: segmenting the current voice stream according to the minimum recognition duration;
voice denoising: removing noise unrelated to the speech;
speech feature extraction: extracting current speech features from the voice stream segments, the current speech features constituting the user's current voiceprint;
voiceprint feature comparison: comparing the current speech features with the voiceprint models to find the matching voiceprint information.
In one embodiment, when the speech features and the current speech features are extracted, voice quality correction is performed according to the channel from which the voice stream was acquired.
By using voiceprint information, a biological characteristic that is difficult to forge or package, and by combining voiceprint verification with business-logic verification as a dual identity check under a specific business scenario, the present invention can effectively improve the recognition rate and verification reliability, and has broad application prospects in user identity authentication and recognition.
Brief description of the drawings
The above and other features, properties and advantages of the present invention will become more apparent from the following description taken in conjunction with the accompanying drawings and embodiments, in which identical reference numerals always denote identical features, and in which:
Fig. 1 is a flowchart of the identity authentication and recognition method based on voiceprint information according to an embodiment of the present invention.
Specific embodiments
Referring to Fig. 1, Fig. 1 shows a flowchart of the voiceprint-based identity authentication and recognition method according to an embodiment of the present invention. The method comprises the following steps:
101. Voiceprint registration step: obtain the user's voiceprint information and associate it with classification information and the user's personal information. In one embodiment, the classification information includes a business category and a region category; that is, while the voiceprint is being acquired, the business category associated with it is also obtained, for example whether the voiceprint is associated with insurance business, securities trading or credit business. The region information can be obtained from the location through which the user accesses the service: if the user accesses by fixed-line or mobile telephone, the caller's geographic location can be obtained; if the user accesses over the internet, the location can be determined from the IP address. This geographic location serves as the region information. For voiceprint registration, the user's voiceprint can be acquired through one of the following channels: from the call audio when the user calls in; from the audio signal during a video call with the user; or by actively calling the user and capturing the voiceprint from the call audio. The first two channels, capturing the voiceprint from an inbound call or a video call, are commonly used in normal business: the user's voiceprint is registered at the same time the user is being served. Actively calling the user to obtain a voiceprint is usually reserved for identifying high-risk users: once certain users have been determined to be high risk, they are actively called so that their voiceprints can be registered for future recognition and early warning.
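As an illustration of the association made during registration, the sketch below stores each voiceprint together with its classification information and the user's personal information; the Python representation and the field names are assumptions for illustration, not part of the patent.

```python
from dataclasses import dataclass, field

@dataclass
class VoiceprintRecord:
    user_id: str
    voiceprint_model: object      # e.g. the model built in the enrollment sketch below
    business_category: str        # "insurance", "securities", "credit", "blacklist", ...
    region: str                   # derived from the caller's location or IP geolocation
    personal_info: dict = field(default_factory=dict)   # name, ID number, contacts, ...
```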
In one embodiment, the voiceprint registration step is carried out as follows:
obtain the user's voice stream;
voice stream segmentation: segment the voice stream according to a minimum recognition duration;
voice denoising: remove noise unrelated to the speech;
speech feature extraction: extract speech features from the voice stream segments;
voiceprint model building: build a voiceprint model from the speech features; the voiceprint model yields the voiceprint information.
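A minimal sketch of this enrollment pipeline, assuming librosa for MFCC extraction and a scikit-learn Gaussian mixture as the voiceprint model; the segment length, the energy-gate denoising and the number of mixture components are illustrative assumptions rather than values taken from the patent.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

MIN_RECOGNITION_SEC = 3.0   # assumed "minimum recognition duration"

def segment_stream(audio: np.ndarray, sr: int) -> list:
    """Split the voice stream into segments of the minimum recognition duration."""
    step = int(MIN_RECOGNITION_SEC * sr)
    return [audio[i:i + step] for i in range(0, len(audio) - step + 1, step)]

def denoise(segment: np.ndarray) -> np.ndarray:
    """Crude energy gate standing in for real denoising: mute low-energy samples."""
    gate = 0.02 * np.max(np.abs(segment))
    return np.where(np.abs(segment) < gate, 0.0, segment)

def extract_features(segment: np.ndarray, sr: int) -> np.ndarray:
    """Speech features: MFCC frames, shape (frames, 20)."""
    return librosa.feature.mfcc(y=segment, sr=sr, n_mfcc=20).T

def enroll(audio: np.ndarray, sr: int) -> GaussianMixture:
    """Build the voiceprint model from all feature frames of the user's voice stream."""
    frames = np.vstack([extract_features(denoise(s), sr)
                        for s in segment_stream(audio, sr)])
    model = GaussianMixture(n_components=16, covariance_type="diag", random_state=0)
    model.fit(frames)
    return model
```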
Because many different channels may be used to talk with the user, for example fixed-line telephone, mobile phone, video terminal or internet communication software, the loss in the audio data differs between channels, and the signal strength and stability of mobile networks also differ between locations, so the audio data suffers differing degrees of loss. When speech features are extracted, loss in the audio data causes differences in the features: even audio data of the same person obtained through different channels may produce different speech features, leading to matching errors. To eliminate the error caused by the channel, the present invention applies voice quality correction according to the channel from which the voice stream was acquired when extracting speech features. Specifically, a correction model is provided for each common channel, such as fixed-line telephone, mobile phone, video terminal and internet communication software; audio obtained from such a channel is corrected according to its correction model, and speech feature extraction and voiceprint model building are performed on the corrected audio data. This improves the consistency of the voiceprint models, so that audio data of the same person obtained through different channels yields the same speech features.
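A sketch of the per-channel correction idea, under the assumption that each channel's "correction model" can be represented as per-coefficient gain factors learned offline, followed by mean/variance normalisation; the channel names and gain curves are invented placeholders, not values from the patent.

```python
import numpy as np

# Hypothetical correction curves, one gain per MFCC coefficient (assumed values).
CHANNEL_CORRECTION = {
    "landline": np.linspace(1.2, 0.8, 20),
    "mobile":   np.linspace(1.1, 0.9, 20),
    "video":    np.ones(20),
    "internet": np.linspace(0.9, 1.1, 20),
}

def correct_features(features: np.ndarray, channel: str) -> np.ndarray:
    """Apply the channel's correction curve to each feature frame, then normalise
    so that models built from different channels stay comparable."""
    gains = CHANNEL_CORRECTION.get(channel, np.ones(features.shape[1]))
    corrected = features * gains
    return (corrected - corrected.mean(axis=0)) / (corrected.std(axis=0) + 1e-8)
```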
102. Voiceprint storage step: store the voiceprint information into the corresponding classified voiceprint library according to the classification information; each classified voiceprint library has its own classification features. In the present invention, voiceprint data is stored in classified voiceprint libraries, and the libraries are classified by business attribute and region attribute; in other words, the business attribute and the region attribute are the classification features of a classified voiceprint library. In the voiceprint registration step, the voiceprint information is associated with a business category and a region category. In the voiceprint storage step, the voiceprint information is saved, according to its business and region categories, into the classified voiceprint library with the corresponding business attribute and region attribute. Because the voiceprint information is saved in a classified way, each classified voiceprint library holds a relatively small amount of data, which improves lookup efficiency: a library with less data has the advantage both in matching speed and in matching accuracy. If all voiceprint information were stored in a single database, every match would have to traverse all the data in that database; clearly, the larger the database, the lower the matching efficiency. By storing voiceprints into separate classified libraries, multiple libraries can perform the matching operation in parallel, and each library holds less data, so the matching efficiency is high and the matching accuracy is also higher.
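A sketch of classified voiceprint libraries keyed by (business category, region), with lookups across all libraries run in parallel; the in-memory dictionary and thread pool are illustrative stand-ins for whatever storage and parallelism the actual system uses.

```python
from concurrent.futures import ThreadPoolExecutor

class ClassifiedVoiceprintLibraries:
    def __init__(self):
        # (business, region) -> list of (user_id, voiceprint model)
        self.libraries = {}

    def store(self, business, region, user_id, model):
        """Save a voiceprint into the library whose classification features match."""
        self.libraries.setdefault((business, region), []).append((user_id, model))

    def search_all(self, score_fn, current_features):
        """Match the current features against every classified library in parallel;
        score_fn(model, features) returns a 0-100 similarity."""
        def search_one(key):
            return key, [(uid, score_fn(m, current_features)) for uid, m in self.libraries[key]]
        with ThreadPoolExecutor() as pool:
            return dict(pool.map(search_one, list(self.libraries)))
```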
103. Classification weight calculation step: when identity authentication or recognition is performed on a user under a specified business scenario, calculate the classification weight of each classified voiceprint library according to the business scenario; the classification weight is determined by the degree of association between the library's classification features and the business scenario. The classification weight is obtained by jointly considering the degree of association between each feature value of the classification features and the corresponding feature value of the business scenario. For example, suppose the business scenario is a call whose region information is Shanghai and whose content is an enquiry about insurance business. The business categories in the libraries' classification features could then be assigned the following weights: insurance 100, securities 50, credit 50. The region categories could be assigned the following weights: Shanghai 100, Suzhou 80, Hangzhou 70, Nanjing 60, with the weight decreasing as the geographic distance of the region increases. Combining the two, the classification weights of the classified voiceprint libraries are calculated as: insurance-Shanghai library 100, insurance-Suzhou library 80, insurance-Hangzhou library 70, insurance-Nanjing library 60, securities-Shanghai library 50, securities-Suzhou library 40, securities-Hangzhou library 35, and so on. It should be understood that this only illustrates the principle for calculating classification weights and is not intended as a limitation.
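The weight assignment of this worked example can be sketched as follows: the scenario's business category and region each map a library's classification features to a relevance score, and the two scores are combined into one classification weight. The tables repeat the example values from the text; multiplying and rescaling reproduces the example (for instance, the securities-Suzhou library gets 50 x 80 / 100 = 40). The combination rule itself is an assumption chosen to match the example.

```python
# Relevance of each library business category to the scenario's business (example values).
BUSINESS_RELEVANCE = {"insurance": {"insurance": 100, "securities": 50, "credit": 50}}
# Relevance of each library region to the caller's region, decreasing with distance (example values).
REGION_RELEVANCE = {"Shanghai": {"Shanghai": 100, "Suzhou": 80, "Hangzhou": 70, "Nanjing": 60}}

def classified_weight(lib_business, lib_region, scenario_business, scenario_region):
    """Combine business relevance and regional proximity into one library weight."""
    b = BUSINESS_RELEVANCE.get(scenario_business, {}).get(lib_business, 0)
    r = REGION_RELEVANCE.get(scenario_region, {}).get(lib_region, 0)
    return b * r / 100.0   # insurance-Shanghai -> 100, insurance-Suzhou -> 80, securities-Suzhou -> 40
```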
104. Voiceprint comparison step: obtain the user's current voiceprint and search each classified library for voiceprint information that matches the current voiceprint. In one embodiment, the voiceprint comparison step includes:
obtaining the user's current voice stream;
voice stream segmentation: segmenting the current voice stream according to the minimum recognition duration;
voice denoising: removing noise unrelated to the speech;
speech feature extraction: extracting current speech features from the voice stream segments, the current speech features constituting the user's current voiceprint;
voiceprint feature comparison: comparing the current speech features with the voiceprint models to find the matching voiceprint information.
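A sketch of the comparison against a stored voiceprint model, reusing the enrollment sketch above; mapping the Gaussian-mixture log-likelihood onto a 0-100 similarity scale is an assumption made only so the score fits the threshold and weighting arithmetic that follows.

```python
import numpy as np

def similarity_score(model, current_features: np.ndarray,
                     lo: float = -60.0, hi: float = -20.0) -> float:
    """Average per-frame log-likelihood under the voiceprint model, rescaled to 0-100."""
    ll = float(np.mean(model.score_samples(current_features)))
    return float(np.clip((ll - lo) / (hi - lo) * 100.0, 0.0, 100.0))
```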
Likewise, because many different channels may be used to talk with the user, such as fixed-line telephone, mobile phone, video terminal or internet communication software, the audio data loss differs between channels, and the signal strength and stability of mobile networks also vary with location, so the audio data suffers differing degrees of loss. To eliminate the error caused by the channel, the audio data is likewise corrected according to the channel's correction model when the current speech features are extracted; the current speech features are extracted from the corrected audio data before being compared with the voiceprint models, so that audio data of the same person obtained through different channels yields the same current speech features.
It should be noted that a voiceprint "match" means that the similarity between the current speech features and the speech features in a voiceprint model exceeds a certain threshold. For example, if the threshold is set to 60, every voiceprint model whose similarity is greater than 60 is regarded as a match. The voiceprint comparison step may therefore yield several matching voiceprints, but each of them carries a corresponding similarity value, for example 60, 70 or 80.
105. Voiceprint recognition step: calculate a voiceprint value from each matching voiceprint and the classification weight of the classified voiceprint library that stores it, authenticate the voiceprint with the highest voiceprint value as the user's identity, and retrieve the personal information of the user associated with that voiceprint.
In the classification weight calculation step 103, the classification weight of each classified voiceprint library has been computed; in the voiceprint comparison step 104, the similarity value of each matching voiceprint has been computed. From the classification weight and the similarity value, the final voiceprint value of each matching voiceprint is obtained. For example, suppose the similarity of the matching voiceprint in the insurance-Shanghai library is 60, the similarity of the voiceprint in the insurance-Suzhou library is 90, and the similarity of the voiceprint in the securities-Shanghai library is 80. Combining these with the classification weights of the libraries, the voiceprint values are: insurance-Shanghai voiceprint 60, insurance-Suzhou voiceprint 72, securities-Shanghai voiceprint 40.
The voiceprint with the highest voiceprint value is the voiceprint in the insurance-Suzhou library, with a value of 72, so the voiceprint in the insurance-Suzhou library is authenticated as matching the user. The personal information of the user registered with that voiceprint in the voiceprint registration step 101 is then retrieved. The personal information retrieved here can be used in further steps, for example for business-logic authentication.
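The worked example above as a sketch: candidates whose similarity clears the threshold are scaled by their library's classification weight, and the highest voiceprint value wins. The numbers are the ones from the text; the user identifiers are placeholders.

```python
THRESHOLD = 60
weights = {("insurance", "Shanghai"): 100, ("insurance", "Suzhou"): 80, ("securities", "Shanghai"): 50}
candidates = {                                   # (business, region, user_id) -> similarity
    ("insurance", "Shanghai", "user_a"): 60,
    ("insurance", "Suzhou",   "user_b"): 90,
    ("securities", "Shanghai", "user_c"): 80,
}

values = {key: sim * weights[key[:2]] / 100.0
          for key, sim in candidates.items() if sim >= THRESHOLD}
best = max(values, key=values.get)    # ("insurance", "Suzhou", "user_b") with voiceprint value 72
```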
In the embodiment shown in Fig. 1, the method further includes the following step:
106. Business-logic authentication step: according to the personal information of the user associated with the voiceprint, perform business-logic authentication of the user's identity under the current business scenario. Through the preceding steps, a user identity based on voiceprint information has already been obtained; because that identity is derived from biometric recognition, it has relatively high credibility. On this basis, a second, business-logic authentication is performed in combination with the current business scenario.
For example, if the current business scenario is identity verification of a calling user, the business-logic authentication may ask the user to provide the associated personal information and compare what the user provides with the personal information obtained through the voiceprint, as a second, business-logic check. Business-logic authentication can run in parallel with voiceprint recognition: the business-logic check is performed manually by the agent, while the voiceprint recognition is completed automatically by the back-end server according to the steps described above; the two results are then cross-checked.
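A sketch of this cross-check: the details the caller states to the agent are compared field by field with the personal information retrieved for the best-matching voiceprint. The field names and the all-provided-fields-must-match rule are assumptions for illustration.

```python
def service_logic_check(stated: dict, registered: dict,
                        required=("name", "id_number", "phone")) -> bool:
    """Pass only if every required field the caller provided matches the registered record."""
    return all(stated.get(f) == registered.get(f) for f in required if f in stated)
```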
Several typical business scenarios are described below.
User identity verification scenario
A user calls in from a mobile phone to enquire about insurance business; the phone's home-location information is Shanghai. The user directly states an identity, so the business scenario at this point is user identity verification. After the user is connected to an agent, the agent carries out business-logic verification with the user and asks the user to provide relevant personal information. At the same time, the user's voice stream is collected during the call. Current speech features are obtained from the voice stream and then matched in each voiceprint library. Combining the libraries' classification weights with the matching similarities, the voiceprint with the highest voiceprint value is found. That voiceprint is stored in the insurance-Shanghai library, whose classification information has the business category "insurance" and the region category "Shanghai". The personal information of the user registered with that voiceprint in the voiceprint registration step is then retrieved. The personal information obtained by voiceprint matching is checked against the personal information the user provided during business-logic verification: if the two are consistent, the identity verification passes; if they are inconsistent, it fails.
User identity recognition scenario
A user calls in from a mobile phone to enquire about credit business; the phone's home-location information is Shanghai. The user is unwilling to state an identity, so the business scenario at this point is user identity recognition. After the user is connected to an agent, the agent holds an ordinary conversation with the user and answers the user's questions, for example routine questions about credit business. At the same time, the user's voice stream is collected during the call. Current speech features are obtained from the voice stream and then matched in each voiceprint library. Combining the libraries' classification weights with the matching similarities, the voiceprint with the highest voiceprint value is found. That voiceprint is stored in the blacklist-Shanghai library, whose classification information has the business category "blacklist" and the region category "Shanghai". The personal information of the user registered with that voiceprint in the voiceprint registration step is then retrieved and provided to the agent, together with a prompt that the user is a blacklisted user.
Acquiring an unscrupulous intermediary's voiceprint
After information about an unscrupulous intermediary has been obtained through other channels, the intermediary's phone or other terminal can be actively called, by a dedicated agent or under software control, in order to obtain the intermediary's voiceprint information for later recognition. Once the call is connected, a dialogue is held with the user to obtain a voice stream, and the user's voice stream is collected. The speech features resolved by the voiceprint model constitute the voiceprint information, which is then saved into the blacklist library. If the intermediary's region information can also be obtained, the voiceprint model is further saved into the blacklist library of the corresponding region category. If this intermediary later calls in for an enquiry, or poses as a reference or contact for one of its clients, the identity can be recognised in time and the agent reminded that the voiceprint is a blacklist voiceprint.
By using voiceprint information, a biological characteristic that is difficult to forge or package, and by combining voiceprint verification with business-logic verification as a dual identity check under a specific business scenario, the present invention can effectively improve the recognition rate and verification reliability, and has broad application prospects in user identity authentication and recognition.
The above embodiments are provided to enable those skilled in the art to implement or use the present invention. Those skilled in the art may make various modifications or variations to the above embodiments without departing from the inventive idea of the present invention; the scope of protection of the present invention is therefore not limited by the above embodiments and should cover the maximum scope consistent with the inventive features recited in the claims.

Claims (7)

1. An identity authentication and recognition method based on voiceprint information, characterized by comprising:
a voiceprint registration step: obtaining a user's voiceprint information and associating the voiceprint information with classification information and the user's personal information;
a voiceprint storage step: storing the voiceprint information into a corresponding classified voiceprint library according to the classification information, each classified voiceprint library having its own classification features;
a classification weight calculation step: when identity authentication or recognition is performed on a user under a specified business scenario, calculating the classification weight of each classified voiceprint library according to the business scenario, the classification weight being determined by the degree of association between the classification features of the classified voiceprint library and the business scenario;
a voiceprint comparison step: obtaining the user's current voiceprint and searching each classified library for voiceprint information that matches the current voiceprint;
a voiceprint recognition step: calculating a voiceprint value according to each matching voiceprint and the classification weight of the classified voiceprint library storing it, authenticating the voiceprint with the highest voiceprint value as the user's identity, and obtaining the personal information of the user associated with that voiceprint.
2. The voiceprint-based identity authentication and recognition method of claim 1, characterized by further comprising:
a business-logic authentication step: performing business-logic authentication of the user's identity under the current business scenario according to the personal information of the user associated with the voiceprint.
3. The voiceprint-based identity authentication and recognition method of claim 1, characterized in that the classification information includes a business category and a region category.
4. The voiceprint-based identity authentication and recognition method of claim 1, characterized in that the user's voiceprint information is obtained through one of the following channels:
obtaining the voiceprint information from the call audio during a telephone call with the user;
obtaining the voiceprint information from the voice stream of the audio signal of the video platform or client during a video call with the user.
5. The voiceprint-based identity authentication and recognition method of claim 1, characterized in that the voiceprint registration step includes:
obtaining the user's voice stream;
voice stream segmentation: segmenting the voice stream according to a minimum recognition duration;
voice denoising: removing noise unrelated to the speech;
speech feature extraction: extracting speech features from the voice stream segments;
voiceprint model building: building a voiceprint model from the speech features, the voiceprint model yielding the voiceprint information.
6. The voiceprint-based identity authentication and recognition method of claim 5, characterized in that the voiceprint comparison step includes:
obtaining the user's current voice stream;
voice stream segmentation: segmenting the current voice stream according to the minimum recognition duration;
voice denoising: removing noise unrelated to the speech;
speech feature extraction: extracting current speech features from the voice stream segments, the current speech features constituting the user's current voiceprint;
voiceprint feature comparison: comparing the current speech features with the voiceprint features of other users through the voiceprint models to find the matching voiceprint information.
7. The voiceprint-based identity authentication and recognition method of claim 5 or 6, characterized in that, when the speech features and the current speech features are extracted, voice quality correction is performed according to the channel from which the voice stream was acquired.
CN201810928479.3A · Priority date: 2018-08-15 · Filing date: 2018-08-15 · Identity authentication and identification method based on voiceprint information · Active · Granted as CN109036435B (en)

Priority Applications (1)

Application Number: CN201810928479.3A (granted as CN109036435B) · Priority Date: 2018-08-15 · Filing Date: 2018-08-15 · Title: Identity authentication and identification method based on voiceprint information

Applications Claiming Priority (1)

Application Number: CN201810928479.3A (granted as CN109036435B) · Priority Date: 2018-08-15 · Filing Date: 2018-08-15 · Title: Identity authentication and identification method based on voiceprint information

Publications (2)

Publication Number Publication Date
CN109036435A (en) 2018-12-18
CN109036435B (en) 2022-12-20

Family

ID=64631362

Family Applications (1)

Application Number: CN201810928479.3A · Status: Active · Granted publication: CN109036435B (en) · Title: Identity authentication and identification method based on voiceprint information · Priority Date: 2018-08-15 · Filing Date: 2018-08-15

Country Status (1)

Country Link
CN (1) CN109036435B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110619880A (en) * 2019-10-14 2019-12-27 百可录(北京)科技有限公司 Voiceprint processing system and user identification method
CN111429920A (en) * 2020-03-30 2020-07-17 北京奇艺世纪科技有限公司 User distinguishing method, user behavior library determining method, device and equipment
CN111554303A (en) * 2020-05-09 2020-08-18 福建星网视易信息系统有限公司 User identity recognition method and storage medium in song singing process
CN111680589A (en) * 2020-05-26 2020-09-18 天津市微卡科技有限公司 Cognitive method for robot to finish face recognition based on voiceprint authentication
CN111833882A (en) * 2019-03-28 2020-10-27 阿里巴巴集团控股有限公司 Voiceprint information management method, device and system, computing equipment and storage medium
CN111833068A (en) * 2020-07-31 2020-10-27 重庆富民银行股份有限公司 Identity verification system and method based on voiceprint recognition
CN112002332A (en) * 2020-08-28 2020-11-27 北京捷通华声科技股份有限公司 Voice verification method and device and processor
CN112466310A (en) * 2020-10-15 2021-03-09 讯飞智元信息科技有限公司 Deep learning voiceprint recognition method and device, electronic equipment and storage medium
CN113409763A (en) * 2021-07-20 2021-09-17 北京声智科技有限公司 Voice correction method and device and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102035649A (en) * 2009-09-29 2011-04-27 国际商业机器公司 Authentication method and device
US8234494B1 (en) * 2005-12-21 2012-07-31 At&T Intellectual Property Ii, L.P. Speaker-verification digital signatures
CN102810311A (en) * 2011-06-01 2012-12-05 株式会社理光 Speaker estimation method and speaker estimation equipment
CN105279282A (en) * 2015-11-19 2016-01-27 北京锐安科技有限公司 Identity relationship database generating method and identity relationship database generating device
CN106469261A (en) * 2015-08-21 2017-03-01 阿里巴巴集团控股有限公司 A kind of auth method and device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8234494B1 (en) * 2005-12-21 2012-07-31 At&T Intellectual Property Ii, L.P. Speaker-verification digital signatures
CN102035649A (en) * 2009-09-29 2011-04-27 国际商业机器公司 Authentication method and device
CN102810311A (en) * 2011-06-01 2012-12-05 株式会社理光 Speaker estimation method and speaker estimation equipment
CN106469261A (en) * 2015-08-21 2017-03-01 阿里巴巴集团控股有限公司 A kind of auth method and device
CN105279282A (en) * 2015-11-19 2016-01-27 北京锐安科技有限公司 Identity relationship database generating method and identity relationship database generating device

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111833882A (en) * 2019-03-28 2020-10-27 阿里巴巴集团控股有限公司 Voiceprint information management method, device and system, computing equipment and storage medium
CN110619880A (en) * 2019-10-14 2019-12-27 百可录(北京)科技有限公司 Voiceprint processing system and user identification method
CN111429920A (en) * 2020-03-30 2020-07-17 北京奇艺世纪科技有限公司 User distinguishing method, user behavior library determining method, device and equipment
CN111429920B (en) * 2020-03-30 2024-01-23 北京奇艺世纪科技有限公司 User distinguishing method, user behavior library determining method, device and equipment
CN111554303A (en) * 2020-05-09 2020-08-18 福建星网视易信息系统有限公司 User identity recognition method and storage medium in song singing process
CN111554303B (en) * 2020-05-09 2023-06-02 福建星网视易信息系统有限公司 User identity recognition method and storage medium in song singing process
CN111680589A (en) * 2020-05-26 2020-09-18 天津市微卡科技有限公司 Cognitive method for robot to finish face recognition based on voiceprint authentication
CN111833068A (en) * 2020-07-31 2020-10-27 重庆富民银行股份有限公司 Identity verification system and method based on voiceprint recognition
CN112002332A (en) * 2020-08-28 2020-11-27 北京捷通华声科技股份有限公司 Voice verification method and device and processor
CN112466310A (en) * 2020-10-15 2021-03-09 讯飞智元信息科技有限公司 Deep learning voiceprint recognition method and device, electronic equipment and storage medium
CN113409763A (en) * 2021-07-20 2021-09-17 北京声智科技有限公司 Voice correction method and device and electronic equipment
CN113409763B (en) * 2021-07-20 2022-10-25 北京声智科技有限公司 Voice correction method and device and electronic equipment

Also Published As

Publication number Publication date
CN109036435B (en) 2022-12-20

Similar Documents

Publication Publication Date Title
CN109036435A (en) Authentication and recognition methods based on voiceprint
US10685657B2 (en) Biometrics platform
CN106373575B (en) User voiceprint model construction method, device and system
CN106550155B Method and system for screening, classifying and intercepting fraud samples among suspicious numbers
US9503571B2 (en) Systems, methods, and media for determining fraud patterns and creating fraud behavioral models
US11714793B2 (en) Systems and methods for providing searchable customer call indexes
CN110533288A (en) Business handling process detection method, device, computer equipment and storage medium
US8219404B2 (en) Method and apparatus for recognizing a speaker in lawful interception systems
US9258425B2 (en) Method and system for speaker verification
JP2023511104A (en) A Robust Spoofing Detection System Using Deep Residual Neural Networks
US11943383B2 (en) Methods and systems for automatic discovery of fraudulent calls using speaker recognition
US20120072453A1 (en) Systems, methods, and media for determining fraud patterns and creating fraud behavioral models
CN113794805A (en) Detection method and detection system for GOIP fraud telephone
CN112511696A (en) System and method for identifying bad content of call center AI engine
CN105991809A (en) Method and device for storing temporary contact
CN112818316B (en) Voiceprint-based identity recognition and application method, device and equipment
CN113744742B (en) Role identification method, device and system under dialogue scene
CN109510904A (en) The detection method and system of call center's outgoing call recording
CN114726635A (en) Authority verification method, device, electronic equipment and medium
CN113452847A (en) Crank call identification method and related device
Ramasubramanian Speaker spotting: Automatic telephony surveillance for homeland security
CN110909333A (en) Bank customer service system based on voiceprint technology and operation method
CN116665708A (en) Illegal service operation detection system and method thereof
TWI778234B (en) Speaker verification system
JP2001306094A (en) System and method for voice authentication

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant