CN109036435B - Identity authentication and identification method based on voiceprint information - Google Patents

Identity authentication and identification method based on voiceprint information

Info

Publication number
CN109036435B
CN109036435B (application CN201810928479.3A)
Authority
CN
China
Prior art keywords
voiceprint
information
classification
user
voice
Prior art date
Legal status
Active
Application number
CN201810928479.3A
Other languages
Chinese (zh)
Other versions
CN109036435A (en)
Inventor
余伟
赵静芝
李家虎
施文杰
胡发泽
Current Assignee
Shenzhen Ping An Comprehensive Financial Services Co Ltd Shanghai Branch
Original Assignee
Shenzhen Ping An Comprehensive Financial Services Co Ltd Shanghai Branch
Priority date
Filing date
Publication date
Application filed by Shenzhen Ping An Comprehensive Financial Services Co Ltd Shanghai Branch filed Critical Shenzhen Ping An Comprehensive Financial Services Co Ltd Shanghai Branch
Priority to CN201810928479.3A priority Critical patent/CN109036435B/en
Publication of CN109036435A publication Critical patent/CN109036435A/en
Application granted granted Critical
Publication of CN109036435B publication Critical patent/CN109036435B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00 - Speaker identification or verification
    • G10L17/04 - Training, enrolment or model building
    • G10L21/00 - Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 - Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208 - Noise filtering

Abstract

The invention discloses an identity authentication and identification method based on voiceprint information, which comprises the following steps. A voiceprint registration step: acquire the voiceprint information of the user and associate it with the classification information and personal information of the user. A voiceprint storage step: store the voiceprint information in the corresponding classified voiceprint library according to the classification information, each classified voiceprint library having its own classification features. A classification weight calculation step: when authenticating and identifying the user in a specified service scenario, calculate the classification weight of each classified voiceprint library according to the service scenario. A voiceprint comparison step: acquire the current voiceprint of the user and search each classified library for voiceprint information matching the current voiceprint. A voiceprint authentication and identification step: calculate a voiceprint authentication value from each matched voiceprint record and the classification weight of the classified voiceprint library storing it, authenticate the voiceprint information with the highest voiceprint authentication value as the identity of the user, and acquire the personal information of the user associated with that voiceprint information.

Description

Identity authentication and identification method based on voiceprint information
Technical Field
The invention relates to the technical field of identity authentication and identification, in particular to an identity authentication and identification technology based on voiceprint information.
Background
With the development of internet and communication technology, more and more services are moving onto the network or being handled over communication networks such as the telephone. In the financial field, for example, many credit services have shifted from traditional offline channels to online ones. Online handling includes conducting remote credit business over the internet or a mobile network through video, audio, online form filling, and the like.
On the one hand, online business handling simplifies procedures, makes services more convenient, and improves working efficiency; on the other hand, the simplified review process brings additional business risk. In particular, online review usually follows a fixed workflow: although risk-control review is performed through multi-angle examination of customer data, contacting related persons for confirmation, and so on, most of these procedures are fixed. Dishonest intermediaries can exploit loopholes in online risk-control review to package and disguise the data of unqualified customers, or of customers paying extra service fees, and to impersonate their contacts or colleagues, so that those customers pass the review.
Conventional business-logic verification processes have difficulty detecting such deliberate disguise, so a more distinctive, harder-to-crack approach is needed to raise the level of verification and review.
Disclosure of Invention
The invention aims to provide an identity authentication and identification method based on voiceprint information. Because voiceprint information is a biometric characteristic that is difficult to forge or package, the method offers high reliability.
According to an embodiment of the present invention, an identity authentication and identification method based on voiceprint information is provided, which includes the following steps:
a voiceprint registration step, namely acquiring voiceprint information of the user and associating the voiceprint information with classification information and personal information of the user;
a voiceprint storage step, namely storing the voiceprint information in the corresponding classified voiceprint library according to the classification information, each classified voiceprint library having its own classification features;
a classification weight calculation step, namely calculating the classification weight of each classified voiceprint library according to the service scenario when authenticating and identifying the user in a specified service scenario, wherein the classification weight is determined by the degree of association between the classification features of the classified voiceprint library and the service scenario;
a voiceprint comparison step, namely acquiring the current voiceprint of the user and searching each classified library for voiceprint information matching the current voiceprint;
and a voiceprint authentication and identification step, namely calculating a voiceprint authentication value from the matched voiceprint information and the classification weight of the classified voiceprint library storing that information, authenticating the voiceprint information with the highest voiceprint authentication value as the identity of the user, and acquiring the personal information of the user associated with that voiceprint information.
In one embodiment, the identity authentication and identification method based on voiceprint information further includes:
and a service logic authentication identification step, namely performing service logic authentication on the identity of the user in the current service scene according to the personal information of the user associated with the voiceprint information.
In one embodiment, the classification information includes a service classification and an attribution classification.
In one embodiment, the voiceprint information of the user is obtained by one of the following ways:
obtaining voiceprint information through telephone voice when communicating with a user telephone;
and acquiring the voiceprint information through the audio signal of the video platform or the voice stream of the client when the user has a video call.
In one embodiment, the voiceprint registration step comprises:
acquiring a voice stream of a user;
carrying out voice stream segmentation processing, and segmenting the voice stream according to the minimum recognition duration;
voice noise reduction processing for removing noise irrelevant to voice;
extracting voice features, namely extracting the voice features in the voice stream segments;
and establishing a voiceprint model, namely building the voiceprint model from the voice features, the voiceprint information being derived from the voiceprint model.
In one embodiment, the voiceprint comparison step comprises:
acquiring a current voice stream of a user;
performing voice stream segmentation processing, and segmenting the current voice stream according to the minimum recognition duration;
voice noise reduction processing for removing noise irrelevant to voice;
extracting voice features, namely extracting the current voice features from the voice stream segments, wherein the current voice features constitute the current voiceprint of the user;
and comparing the voiceprint characteristics, namely comparing the current voice characteristics with the voiceprint model, and searching matched voiceprint information.
In one embodiment, when the voice feature and the current voice feature are extracted, the voice quality is corrected according to the acquisition channel of the voice stream.
The invention exploits the biometric nature of voiceprint information, which is difficult to forge or package, and combines it with a specific service scenario to perform dual identity verification based on both the voiceprint information and the business logic. This effectively improves the recognition rate and verification reliability, giving the method broad application prospects in user identity authentication and identification.
Drawings
The above and other features, properties and advantages of the present invention will become more apparent from the following description of the embodiments with reference to the accompanying drawings in which like reference numerals denote like features throughout the several views, wherein:
fig. 1 discloses a flow chart of an identity authentication and identification method based on voiceprint information according to an embodiment of the invention.
Detailed Description
Referring to fig. 1, fig. 1 discloses a flowchart of an identity authentication and identification method based on voiceprint information according to an embodiment of the invention. The identity authentication and identification method based on voiceprint information comprises the following steps:
101. A voiceprint registration step, namely acquiring the voiceprint information of the user and associating it with the classification information and personal information of the user. In one embodiment, the classification information includes a service classification and an attribution classification. That is, the voiceprint information is obtained together with the service classification it is associated with, for example insurance, securities, or credit services. The attribution information may be obtained from the user's access location: when the user accesses by fixed-line or mobile telephone, the geographic location of the caller can be obtained; when the user accesses over an internet channel, the geographic location can be determined from the IP address. The geographic location serves as the attribution information. For voiceprint registration, the voiceprint information of the user may be obtained in one of the following ways: through the voice call when the user calls in; through the audio signal during a video call with the user; or by actively calling the user and obtaining the voiceprint information through the voice call. The first two ways are usually used in normal business: while the user is accessing a service, the user's voiceprint information is acquired and registered at the same time.
The third way, actively calling the user and acquiring voiceprint information through the voice call, is generally used for identification of high-risk users: once a user is determined to be high-risk, that user is actively called so that his or her voiceprint information can be registered for identification and early warning in the future.
In one embodiment, the voiceprint registration step is performed as follows:
acquiring a voice stream of a user;
carrying out voice stream segmentation processing, and segmenting the voice stream according to the minimum recognition duration;
voice noise reduction processing for removing noise irrelevant to voice;
extracting voice features, namely extracting the voice features in the voice stream segments;
and establishing a voiceprint model, namely building the voiceprint model from the voice features, the voiceprint information being derived from the voiceprint model.
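As a rough sketch of the registration pipeline above (all function and parameter names here, such as `register_voiceprint` and `MIN_RECOGNITION_SEC`, are illustrative assumptions rather than anything specified by the patent):

```python
# Hypothetical sketch of the voiceprint registration pipeline: segment the
# voice stream by a minimum recognition duration, denoise each segment,
# extract voice features, and collect them into a "voiceprint model".
# The denoising and feature-extraction bodies are trivial stand-ins.

MIN_RECOGNITION_SEC = 2.0  # assumed minimum recognition duration per segment

def segment_stream(samples, sample_rate, min_sec=MIN_RECOGNITION_SEC):
    """Split a voice stream into segments of the minimum recognition duration."""
    seg_len = int(min_sec * sample_rate)
    return [samples[i:i + seg_len]
            for i in range(0, len(samples), seg_len)
            if len(samples[i:i + seg_len]) == seg_len]

def denoise(segment):
    """Placeholder for removing noise unrelated to the speech."""
    return segment  # a real system would filter the signal here

def extract_features(segment):
    """Placeholder for voice-feature extraction (e.g. spectral features)."""
    return [sum(segment) / len(segment)]  # trivially simple stand-in

def register_voiceprint(samples, sample_rate):
    """Segment, denoise, extract features, and build the voiceprint model."""
    feats = [extract_features(denoise(s))
             for s in segment_stream(samples, sample_rate)]
    return {"model": feats}
```

The voiceprint comparison step (step 104 below) reuses the same segmentation, denoising, and feature extraction before matching.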
Since a call with a user may go through any of several channels (fixed-line telephone, mobile phone, video terminal, internet communication software, and so on), audio data loss differs between channels; mobile-network signal strength and stability also differ between regions, causing further differences in loss. When voice features are extracted, this loss can alter them: even audio obtained from the same person over different channels may yield different voice features, producing matching errors. To eliminate channel-induced errors, voice quality is corrected according to the acquisition channel of the voice stream before feature extraction. Specifically, the invention provides correction models for the commonly used channels (fixed-line telephone, mobile phone, video terminal, internet communication software, and the like); audio data obtained from a channel is corrected by the corresponding model, voice features are extracted from the corrected audio data, and the voiceprint model is built from those features. This improves the consistency of the voiceprint models, so that audio obtained from the same person over different channels yields the same voice features.
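The channel-correction idea can be illustrated with a toy model in which each channel attenuates the signal by a known factor and the correction model simply inverts that loss. The patent does not specify the form of its correction models, so the numbers and structure below are purely hypothetical:

```python
# Toy per-channel correction: each acquisition channel is assumed to attenuate
# the audio by a known factor, and the correction model inverts that loss so
# the same speaker yields the same corrected audio on every channel.
# The loss factors are invented for illustration.

CHANNEL_LOSS = {"fixed_line": 0.90, "mobile": 0.80, "video": 0.95, "internet": 0.85}

def correct_audio(samples, channel):
    """Apply the (hypothetical) correction model for the given channel."""
    loss = CHANNEL_LOSS[channel]
    return [v / loss for v in samples]

# The same clean speech received over two different channels corrects back to
# numerically identical audio, so feature extraction sees consistent input.
clean = [0.5, -0.25, 0.125]
via_mobile = [v * CHANNEL_LOSS["mobile"] for v in clean]
via_fixed = [v * CHANNEL_LOSS["fixed_line"] for v in clean]
```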
102. A voiceprint storage step, namely storing the voiceprint information in the corresponding classified voiceprint library according to the classification information, each classified voiceprint library having its own classification features. In the invention, voiceprint data is stored in classified voiceprint libraries organized by service attribute and attribution attribute; accordingly, the service attribute and attribution attribute are the classification features of each library. In the voiceprint registration step, the voiceprint information was associated with a service classification and an attribution classification; in the storage step, it is stored in the classified voiceprint library whose service and attribution attributes correspond. Storing voiceprints by class keeps the data volume of each classified library relatively small, which benefits search and matching efficiency: a small library has advantages in both matching speed and matching accuracy. If all voiceprint information were stored in a single database, every record in that database would have to be traversed during matching, and clearly the larger the database, the lower the matching efficiency. With voiceprints stored separately by class, the multiple classified libraries are matched in parallel, and because each database is small, matching is both fast and accurate.
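A minimal sketch of classified storage, keying each library by its (service, attribution) classification features so that libraries stay small and can be searched in parallel. The names and record layout are illustrative assumptions:

```python
# Hypothetical classified voiceprint storage: one library per
# (service classification, attribution classification) pair.

from collections import defaultdict

libraries = defaultdict(list)  # (service, attribution) -> list of records

def store_voiceprint(voiceprint, service, attribution, personal_info):
    """Store a registered voiceprint in the matching classified library."""
    libraries[(service, attribution)].append(
        {"voiceprint": voiceprint, "personal_info": personal_info})

store_voiceprint([0.1, 0.2], "insurance", "Shanghai", {"name": "user A"})
store_voiceprint([0.3, 0.1], "insurance", "Suzhou", {"name": "user B"})
```

At comparison time, each of these small libraries can be searched independently (and in parallel), as the paragraph above describes.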
103. A classification weight calculation step, namely calculating the classification weight of each classified voiceprint library according to the service scenario when authenticating and identifying the user in a specified service scenario, wherein the classification weight is determined by the degree of association between the classification features of the classified voiceprint library and the service scenario. The classification weight is computed by jointly considering the association between each feature value of the classification features and each feature value of the service scenario. For example, suppose the service scenario is as follows: the incoming call's location information is Shanghai, and the call concerns insurance services. The service classes in the classification features of the voiceprint libraries are then weighted as follows: insurance 100, securities 50, credit 50. The attribution classes are weighted as follows: Shanghai 100, Suzhou 80, Hangzhou 70, Nanjing 60. The attribution classes are organized by geographic location, and the closer the location, the higher the weight. Combining these, the classification weight of each classified voiceprint library works out to: insurance-Shanghai library 100, insurance-Suzhou library 80, insurance-Hangzhou library 70, insurance-Nanjing library 60, securities-Shanghai library 50, securities-Suzhou library 40, securities-Hangzhou library 35, and so on. It should be noted that this only illustrates the principle of the weight calculation and is not a limitation.
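The worked example above is consistent with combining the two weights as service weight × attribution weight / 100. The patent only illustrates the resulting numbers, so that formula is an inference, sketched here:

```python
# Classification-weight example from the text, under the inferred formula
# weight = service_weight * attribution_weight / 100. The weight tables
# reproduce the example numbers for a Shanghai caller asking about insurance.

SERVICE_WEIGHT = {"insurance": 100, "securities": 50, "credit": 50}
ATTRIBUTION_WEIGHT = {"Shanghai": 100, "Suzhou": 80, "Hangzhou": 70, "Nanjing": 60}

def classification_weight(service, attribution):
    """Combined weight of the (service, attribution) classified library."""
    return SERVICE_WEIGHT[service] * ATTRIBUTION_WEIGHT[attribution] // 100

# e.g. insurance-Suzhou -> 80, securities-Hangzhou -> 35, as in the example
```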
104. And a voiceprint comparison step, namely acquiring the current voiceprint of the user, and searching voiceprint information matched with the current voiceprint in each classification library. In one embodiment, the voiceprint comparison step comprises:
acquiring a current voice stream of a user;
performing voice stream segmentation processing, and segmenting the current voice stream according to the minimum recognition duration;
voice noise reduction processing for removing noise irrelevant to voice;
extracting voice features, namely extracting the current voice features from the voice stream segments, wherein the current voice features constitute the current voiceprint of the user;
and comparing the voiceprint characteristics, namely comparing the current voice characteristics with the voiceprint model and searching matched voiceprint information.
Similarly, since the call with the user may go through any of several channels (fixed-line telephone, mobile phone, video terminal, internet communication software, and so on), audio data loss differs between channels, and regional differences in mobile-network signal strength and stability cause further loss differences. To eliminate channel-induced errors, the current voice features are likewise corrected according to the correction model: the current voice features are extracted from the corrected audio data and compared against the voiceprint models, so that audio obtained from the same person over different channels yields the same current voice features.
A voiceprint match means that the similarity between the current voice features and the voice features of a voiceprint model exceeds a set threshold. For example, with the threshold set to 60, every voiceprint model with similarity greater than 60 is considered a match. The voiceprint comparison step may therefore return several matching voiceprint records, each carrying its own similarity value, such as 60, 70, or 80.
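Threshold-based matching as described above can be sketched as follows; the `similarity` function is a trivial stand-in, since the patent does not specify how similarity is computed:

```python
# Sketch of threshold-based voiceprint matching: every stored voiceprint
# whose similarity to the current voice features exceeds the threshold
# counts as a match and keeps its similarity value.

THRESHOLD = 60

def similarity(current, stored):
    """Toy similarity score: 100 minus scaled mean absolute difference."""
    diff = sum(abs(a - b) for a, b in zip(current, stored)) / len(stored)
    return max(0.0, 100.0 - 100.0 * diff)

def find_matches(current, library):
    """Return (record, similarity) for stored voiceprints above threshold."""
    scored = [(rec, similarity(current, rec["voiceprint"])) for rec in library]
    return [(rec, s) for rec, s in scored if s > THRESHOLD]
```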
105. And a voiceprint authentication and identification step, namely calculating a voiceprint authentication value according to the matched voiceprint information and the classification weight of a classification voiceprint library storing the voiceprint information, authenticating the voiceprint information with the highest voiceprint authentication value as the identity of the user, and acquiring the personal information of the user associated with the voiceprint information.
In the classification weight calculation step (step 103), the classification weight of each classified voiceprint library was computed; in step 104, the similarity value of each matched voiceprint record was computed. The final voiceprint authentication value of each voiceprint is obtained from the classification weight and the similarity value. For example, suppose the similarity of the matched voiceprint in the insurance-Shanghai library is 60, that in the insurance-Suzhou library is 90, and that in the securities-Shanghai library is 80. Combining these with the classification weight of each classified voiceprint library, the voiceprint authentication values are: insurance-Shanghai voiceprint 60, insurance-Suzhou voiceprint 72, securities-Shanghai voiceprint 40.
The highest voiceprint authentication value, 72, belongs to the voiceprint in the insurance-Suzhou library, so that voiceprint information is authenticated as matching the user. The personal information of the user associated with that voiceprint information during the voiceprint registration step (step 101) is then retrieved. The retrieved personal information may be used in further steps, such as business-logic authentication.
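Reproducing the example, under the assumption (inferred from the numbers, which it matches exactly) that the voiceprint authentication value is similarity × classification weight / 100:

```python
# Authentication-value example from the text, assuming
# auth_value = similarity * classification_weight / 100.

weights = {"insurance-Shanghai": 100, "insurance-Suzhou": 80,
           "securities-Shanghai": 50}
similarities = {"insurance-Shanghai": 60, "insurance-Suzhou": 90,
                "securities-Shanghai": 80}

auth_values = {lib: similarities[lib] * weights[lib] // 100 for lib in weights}
best_library = max(auth_values, key=auth_values.get)
# auth_values: insurance-Shanghai 60, insurance-Suzhou 72,
# securities-Shanghai 40; best_library is 'insurance-Suzhou'
```

This shows why the Suzhou match wins despite Shanghai's higher classification weight: its much higher similarity (90 vs 60) outweighs the lower weight.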
In the embodiment shown in fig. 1, the identity authentication and identification method based on voiceprint information further includes the following steps:
106. A service logic authentication and identification step, namely performing service logic authentication on the identity of the user in the current service scenario according to the personal information of the user associated with the voiceprint information. Through the foregoing steps, the identity of the user has been obtained from voiceprint information; because that identity rests on biometric recognition, it carries a high degree of confidence. On this basis, dual authentication via business logic is performed in combination with the current service scenario.
For example, suppose the current business scenario is verifying the identity of an accessing user. Business-logic authentication may ask the user to provide relevant personal information, which is then compared with the personal information obtained from the voiceprint as a second, business-logic check. Business-logic authentication can run in parallel with voiceprint authentication: the former is completed manually by an agent, while the latter is completed automatically by a background server following the steps above. The two results are then cross-checked.
Several exemplary service scenarios are described below.
User identity verification scenario
The user calls in on a mobile phone to ask about insurance services; the phone's attribution information is Shanghai. The user states his or her identity directly, so the service scenario is verifying the user's identity. Once the user is connected to an agent, the agent performs business-logic verification with the user, asking for relevant personal information. Meanwhile, the user's voice stream is collected during the conversation. The current voice features are obtained from the voice stream and matched against each voiceprint library, and the voiceprint with the highest authentication value is found by combining each library's classification weight with the matching similarity. Suppose that voiceprint is stored in the insurance-Shanghai library, whose classification information has service class insurance and attribution class Shanghai. The personal information of the user associated with the voiceprint at registration is then retrieved and checked against the personal information the user provided during business-logic verification: if they are consistent, the user passes identity verification; if not, identity verification fails.
User identity identification scenario
The user calls in on a mobile phone to ask about credit services; the phone's attribution information is Shanghai. The user is unwilling to state his or her identity directly, so the service scenario is identifying the user. Once the user is connected, the agent holds a normal dialogue with the user, answering routine questions about the credit service. Meanwhile, the user's voice stream is collected during the conversation. The current voice features are obtained from the voice stream and matched against each voiceprint library, and the voiceprint with the highest authentication value is found by combining each library's classification weight with the matching similarity. Suppose that voiceprint is stored in the blacklist-Shanghai library, whose classification information has service class blacklist and attribution class Shanghai. The personal information of the user associated with the voiceprint at registration is then retrieved and provided to the agent, and the agent is alerted that the user is a blacklisted user.
Bad intermediary voiceprint acquisition
After information about a bad intermediary is obtained through other channels, the intermediary's telephone or other terminal is actively called, operated by a dedicated agent or by software, in order to acquire the intermediary's voiceprint information for future identification. Once connected, a dialogue is conducted to obtain a voice stream, and the voice stream is collected. The voice features are extracted through the voiceprint model as voiceprint information, which is then stored in a blacklist library. If the intermediary's attribution information is available, the voiceprint model is further stored in the blacklist library classified by attribution. If the bad intermediary later calls in for consultation or acts as a certifier for a client, his or her identity can be recognized in time, with a reminder that the matched voiceprint is blacklisted voiceprint information.
The invention exploits the biometric nature of voiceprint information, which is difficult to forge or package, and combines it with a specific service scenario to perform dual identity verification based on both the voiceprint information and the business logic. This effectively improves the recognition rate and verification reliability, giving the method broad application prospects in user identity authentication and identification.
The embodiments described above are provided to enable persons skilled in the art to make or use the invention. Persons skilled in the art may make modifications or variations to these embodiments without departing from the inventive concept of the present invention; therefore, the scope of protection of the present invention is not limited by the embodiments described above, but should be accorded the widest scope consistent with the innovative features set forth in the claims.

Claims (7)

1. An identity authentication and identification method based on voiceprint information is characterized by comprising the following steps:
a voiceprint registration step, in which voiceprint information of a user is acquired and is associated with classification information and personal information of the user;
a voiceprint storage step, in which the voiceprint information is stored into corresponding classified voiceprint libraries according to the classification information, and each classified voiceprint library has respective classification characteristics;
a classification weight calculation step, namely calculating the respective classification weight of each classification voiceprint library according to a service scene when the identity authentication and identification are carried out on a user under the appointed service scene, wherein the classification weight is determined according to the association degree of the classification feature of the classification voiceprint library and the service scene;
a voiceprint comparison step, namely acquiring the current voiceprint of the user, and searching voiceprint information matched with the current voiceprint in each classification library;
and a voiceprint authentication and identification step, namely calculating a voiceprint authentication value according to the matched voiceprint information and the classification weight of a classification voiceprint library storing the voiceprint information, authenticating the voiceprint information with the highest voiceprint authentication value as the identity of the user, and acquiring the personal information of the user associated with the voiceprint information.
2. The voiceprint information based identity authentication and identification method of claim 1, further comprising:
a service logic authentication and identification step, in which service logic authentication of the user's identity is performed in the current service scenario according to the personal information of the user associated with the voiceprint information.
3. The voiceprint information based identity authentication and identification method of claim 1, wherein the classification information comprises a service classification and a home-location classification.
4. The identity authentication and identification method based on voiceprint information of claim 1, wherein the voiceprint information of the user is obtained in one of the following ways:
obtaining the voiceprint information from telephone voice during a telephone call with the user; or
obtaining the voiceprint information from the audio signal of a video platform or the voice stream of a client during a video call with the user.
5. The voiceprint information based identity authentication and identification method according to claim 1, wherein the voiceprint registration step comprises:
acquiring a voice stream of the user;
performing voice stream segmentation, in which the voice stream is segmented according to a minimum recognition duration;
performing voice noise reduction to remove noise unrelated to the speech;
extracting voice features from the voice stream segments; and
establishing a voiceprint model according to the voice features, the voiceprint model being used to analyze the voiceprint information.
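The registration pipeline of claim 5 can be sketched end to end. The segment length, the amplitude-threshold noise-reduction rule, and the mean-amplitude "feature" below are toy assumptions; a real system would use something like MFCC features and a trained speaker model.

```python
# Minimal sketch of the voiceprint registration pipeline of claim 5:
# segment the voice stream, denoise each segment, extract features,
# and keep the features as the stored voiceprint model.

MIN_RECOGNITION_DURATION = 3  # seconds per segment (assumed value)
SAMPLE_RATE = 10              # samples per second (toy value)

def segment(voice_stream, min_duration=MIN_RECOGNITION_DURATION):
    """Split the voice stream into chunks of min_duration seconds."""
    size = min_duration * SAMPLE_RATE
    return [voice_stream[i:i + size] for i in range(0, len(voice_stream), size)]

def denoise(samples, threshold=0.05):
    """Drop low-amplitude samples, treated here as speech-unrelated noise."""
    return [s for s in samples if abs(s) >= threshold]

def extract_features(samples):
    """Toy feature: mean absolute amplitude of the cleaned segment."""
    return sum(abs(s) for s in samples) / len(samples)

def build_voiceprint_model(voice_stream):
    """Registration: segment, denoise, extract features per segment."""
    features = []
    for seg in segment(voice_stream):
        cleaned = denoise(seg)
        if cleaned:
            features.append(extract_features(cleaned))
    return features  # the stored voiceprint information

stream = [0.2, 0.01, -0.4, 0.3] * 20  # 80 samples = 8 seconds of toy audio
model = build_voiceprint_model(stream)
print(len(model))  # one feature value per segment
```

The same segmentation, noise reduction, and feature extraction stages are reused for the current voice stream in the comparison step of claim 6, so registered and live voiceprints stay directly comparable.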
6. The identity authentication and identification method based on voiceprint information according to claim 5, wherein the voiceprint comparison step comprises:
acquiring the current voice stream of the user;
performing voice stream segmentation, in which the current voice stream is segmented according to the minimum recognition duration;
performing voice noise reduction to remove noise unrelated to the speech;
extracting the current voice features from the voice stream segments, the current voice features being the current voiceprint of the user; and
comparing voiceprint features, in which the current voice features are compared with the voiceprint features of other persons through the voiceprint model to search for matching voiceprint information.
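The comparison of claim 6 can be sketched as a nearest-neighbour search over registered feature vectors. The distance metric (mean absolute difference) and the match tolerance are assumptions; production speaker-verification systems typically score cosine similarity or PLDA over learned embeddings.

```python
# Illustrative sketch of the voiceprint comparison step of claim 6:
# the current voice features are compared against every registered
# voiceprint and the closest match within a tolerance is returned.

registered = {                      # user ID -> stored feature vector (toy data)
    "user_a": [0.30, 0.28, 0.31],
    "user_b": [0.10, 0.12, 0.09],
}

def distance(a, b):
    """Mean absolute difference between two equal-length feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def find_match(current_features, registered, max_distance=0.05):
    """Return the registered user closest to the current voiceprint."""
    best_user = min(registered,
                    key=lambda u: distance(current_features, registered[u]))
    if distance(current_features, registered[best_user]) <= max_distance:
        return best_user
    return None  # no voiceprint matched within tolerance

print(find_match([0.29, 0.30, 0.30], registered))
```

In the full method, each library's matches found here feed the weighted authentication of claim 1 rather than being accepted directly.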
7. The identity authentication and identification method based on voiceprint information according to claim 5 or 6, wherein, when the voice features and the current voice features are extracted, voice quality correction is performed according to the acquisition channel of the voice stream.
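The channel-dependent correction of claim 7 can be sketched as a per-channel adjustment applied after feature extraction, so that telephone, video-platform, and client voiceprints remain comparable. The gain factors below are invented for illustration; the patent does not disclose concrete correction values.

```python
# Hypothetical sketch of the voice quality correction of claim 7:
# extracted features are scaled according to the acquisition channel
# of the voice stream. All factor values are assumptions.

CHANNEL_GAIN = {
    "telephone": 1.25,            # narrow-band phone audio attenuates the signal
    "video_platform": 1.00,
    "client_voice_stream": 1.10,
}

def correct_features(features, channel):
    """Scale extracted voice features by the channel's correction factor."""
    gain = CHANNEL_GAIN.get(channel, 1.0)  # unknown channels pass through
    return [f * gain for f in features]

print(correct_features([0.2, 0.4], "telephone"))
```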
CN201810928479.3A 2018-08-15 2018-08-15 Identity authentication and identification method based on voiceprint information Active CN109036435B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810928479.3A CN109036435B (en) 2018-08-15 2018-08-15 Identity authentication and identification method based on voiceprint information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810928479.3A CN109036435B (en) 2018-08-15 2018-08-15 Identity authentication and identification method based on voiceprint information

Publications (2)

Publication Number Publication Date
CN109036435A (en) 2018-12-18
CN109036435B (en) 2022-12-20

Family

ID=64631362

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810928479.3A Active CN109036435B (en) 2018-08-15 2018-08-15 Identity authentication and identification method based on voiceprint information

Country Status (1)

Country Link
CN (1) CN109036435B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111833882A (en) * 2019-03-28 2020-10-27 阿里巴巴集团控股有限公司 Voiceprint information management method, device and system, computing equipment and storage medium
CN110619880A (en) * 2019-10-14 2019-12-27 百可录(北京)科技有限公司 Voiceprint processing system and user identification method
CN111429920B (en) * 2020-03-30 2024-01-23 北京奇艺世纪科技有限公司 User distinguishing method, user behavior library determining method, device and equipment
CN111554303B (en) * 2020-05-09 2023-06-02 福建星网视易信息系统有限公司 User identity recognition method and storage medium in song singing process
CN111680589A (en) * 2020-05-26 2020-09-18 天津市微卡科技有限公司 Cognitive method for robot to finish face recognition based on voiceprint authentication
CN111833068A (en) * 2020-07-31 2020-10-27 重庆富民银行股份有限公司 Identity verification system and method based on voiceprint recognition
CN112002332A (en) * 2020-08-28 2020-11-27 北京捷通华声科技股份有限公司 Voice verification method and device and processor
CN112466310A (en) * 2020-10-15 2021-03-09 讯飞智元信息科技有限公司 Deep learning voiceprint recognition method and device, electronic equipment and storage medium
CN113409763B (en) * 2021-07-20 2022-10-25 北京声智科技有限公司 Voice correction method and device and electronic equipment

Citations (5)

Publication number Priority date Publication date Assignee Title
CN102035649A (en) * 2009-09-29 2011-04-27 国际商业机器公司 Authentication method and device
US8234494B1 (en) * 2005-12-21 2012-07-31 At&T Intellectual Property Ii, L.P. Speaker-verification digital signatures
CN102810311A (en) * 2011-06-01 2012-12-05 株式会社理光 Speaker estimation method and speaker estimation equipment
CN105279282A (en) * 2015-11-19 2016-01-27 北京锐安科技有限公司 Identity relationship database generating method and identity relationship database generating device
CN106469261A (en) * 2015-08-21 2017-03-01 阿里巴巴集团控股有限公司 A kind of auth method and device


Also Published As

Publication number Publication date
CN109036435A (en) 2018-12-18

Similar Documents

Publication Publication Date Title
CN109036435B (en) Identity authentication and identification method based on voiceprint information
US10410636B2 (en) Methods and system for reducing false positive voice print matching
US11842740B2 (en) Seamless authentication and enrollment
US8924285B2 (en) Building whitelists comprising voiceprints not associated with fraud and screening calls using a combination of a whitelist and blacklist
US9734831B2 (en) Utilizing voice biometrics
US20210152897A1 (en) Call classification through analysis of dtmf events
US20120284026A1 (en) Speaker verification system
US7212613B2 (en) System and method for telephonic voice authentication
KR102220962B1 (en) Identity recognition method and device
US20060106605A1 (en) Biometric record management
AU2017266971A1 (en) Identity authentication method and apparatus
US8406383B2 (en) Voice authentication for call control
JP2001503156A (en) Speaker identification method
CN109005104B (en) Instant messaging method, device, server and storage medium
CN113794805A (en) Detection method and detection system for GOIP fraud telephone
CN107464328A (en) Unlocking method, device, storage medium and the smart lock of smart lock
CN112818316B (en) Voiceprint-based identity recognition and application method, device and equipment
CN117252429A (en) Risk user identification method and device, storage medium and electronic equipment
CN110602326B (en) Suspicious incoming call identification method and suspicious incoming call identification system
CN111833068A (en) Identity verification system and method based on voiceprint recognition
CN114971627A (en) Data monitoring system and method based on computer network
CN116665708A (en) Illegal service operation detection system and method thereof
CN114819980A (en) Payment transaction risk control method and device, electronic equipment and storage medium
CN117459941A (en) Overseas fraud recognition method, device, equipment and readable storage medium
CN116881887A (en) Application program login method, device, equipment, storage medium and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant