WO2018149077A1 - Voiceprint recognition method, apparatus, storage medium and background server - Google Patents

Voiceprint recognition method, apparatus, storage medium and background server

Info

Publication number
WO2018149077A1
WO2018149077A1 (PCT/CN2017/090046)
Authority
WO
WIPO (PCT)
Prior art keywords
test
voiceprint feature
voiceprint
target
user
Prior art date
Application number
PCT/CN2017/090046
Other languages
English (en)
French (fr)
Inventor
王健宗
郭卉
宋继程
肖京
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Priority to SG11201803895RA
Priority to JP2018514332A
Priority to AU2017341161A
Priority to EP17857669.0A
Priority to US15/772,801
Priority to KR1020187015547A
Publication of WO2018149077A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 User authentication
    • G06F21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00 Speaker identification or verification techniques
    • G10L17/02 Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/28 Constructional details of speech recognition systems
    • G10L15/30 Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00 Speaker identification or verification techniques
    • G10L17/04 Training, enrolment or model building
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00 Speaker identification or verification techniques
    • G10L17/06 Decision making techniques; Pattern matching strategies
    • G10L17/08 Use of distortion metrics or a particular distance between probe pattern and reference templates
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/27 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
    • G10L25/39 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique using genetic algorithms

Definitions

  • The present invention relates to the field of biometric identity recognition, and more particularly to a voiceprint recognition method, apparatus, storage medium, and background server.
  • Voiceprint Recognition is an identity recognition technique that identifies a speaker from the speaker's biometric characteristics carried in the voice. Because voiceprint recognition is safe, reliable, and convenient, it can be used in almost all security protection fields and personalized applications that require identity recognition; for example, as the business volume of financial institutions such as banks, securities firms, and insurers keeps expanding, a large number of identity recognition needs arise. Compared with traditional identity recognition techniques, the advantages of voiceprint recognition are that voiceprint extraction is simple and low-cost, and each person's voiceprint features differ from everyone else's, are unique, and are difficult to forge or counterfeit. However, the existing voiceprint recognition process takes a long time: when a large number of speech recognition requests are processed, some of them are easily lost because processing takes too long, which hinders the application of voiceprint recognition technology.
  • In view of the defects of the prior art, the technical problem to be solved by the present invention is to provide a voiceprint recognition method, apparatus, storage medium, and background server that improve the processing efficiency of large numbers of speech recognition requests and shorten processing time.
  • a voiceprint recognition method, comprising:
  • the client collects the user's test voice and sends a speech recognition request to the background server, the speech recognition request including the user ID and the test voice;
  • the background server receives the speech recognition request and determines the speech recognition request to be processed using a message queue and an asynchronous mechanism;
  • the background server acquires the target voiceprint feature corresponding to the user ID of the speech recognition request to be processed, and acquires the test voiceprint feature corresponding to that request's test voice;
  • the background server determines, according to the target voiceprint feature and the test voiceprint feature, whether they correspond to the same user, and outputs the determination result to the client;
  • the client receives and displays the determination result.
  • The invention also provides a voiceprint recognition apparatus, comprising:
  • a client, configured to collect the user's test voice and send a speech recognition request to the background server, the speech recognition request including a user ID and the test voice;
  • a background server, configured to receive the speech recognition request and determine the speech recognition request to be processed using a message queue and an asynchronous mechanism;
  • a background server, configured to acquire the target voiceprint feature corresponding to the user ID of the speech recognition request to be processed, and acquire the test voiceprint feature corresponding to that request's test voice;
  • a background server, configured to determine, according to the target voiceprint feature and the test voiceprint feature, whether they correspond to the same user, and output the determination result to the client;
  • the client, configured to receive and display the determination result.
  • The present invention also provides a background server comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps performed by the background server in the voiceprint recognition method described above.
  • The present invention also provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements the steps performed by the background server in the voiceprint recognition method described above.
  • Compared with the prior art, the present invention has the following advantages: the background server acquires the corresponding target voiceprint feature based on the user ID in the speech recognition request to be processed, acquires the test voiceprint feature based on the test voice in that request, and compares the target voiceprint feature with the test voiceprint feature to determine whether their speakers are the same user, achieving fast speech recognition and improving speech recognition efficiency.
  • In addition, the background server uses a message queue and an asynchronous mechanism to determine the speech recognition request to be processed, which improves the processing efficiency for large numbers of speech recognition requests and prevents some requests from being lost because processing takes too long.
  • FIG. 1 is a flow chart of the voiceprint recognition method in Embodiment 1 of the present invention.
  • FIG. 2 is a schematic block diagram of a voiceprint recognition apparatus in Embodiment 2 of the present invention.
  • FIG. 3 is a schematic diagram of a background server according to an embodiment of the present invention.
  • Fig. 1 shows a flow chart of the voiceprint recognition method in this embodiment.
  • the voiceprint recognition method can be applied on the client and the background server to identify the test voice collected by the client.
  • the voiceprint recognition method includes the following steps:
  • S10: The client collects the user's test voice and sends a speech recognition request to the background server, the speech recognition request including the user ID and the test voice.
  • the client includes a terminal connected to the background server, such as a smart phone, a notebook, a desktop computer, etc., and the client has a microphone for collecting test voice or an external microphone interface.
  • the user ID is used to uniquely identify the user identity.
  • the test voice is associated with the user ID, and is used to determine the user corresponding to the test voice.
  • The client samples and records the user's voice, obtains the test voice in wav audio format, forms a speech recognition request from the test voice and the user ID, and sends the speech recognition request to the background server.
  • Further, when the client is a mobile phone, the test voice is collected using multiple threads; when the client is a web page, the test voice is collected using Ajax asynchronous refresh, so that communication with the background server does not interrupt the user's operation, which speeds up the collection of test requests. Ajax (Asynchronous JavaScript and XML) is a web application development method that uses client-side scripts to exchange data with a web server.
  • S20: The background server receives the speech recognition request and determines the speech recognition request to be processed using a message queue and an asynchronous mechanism.
  • The background server receives speech recognition requests sent by at least one client and places them in a message queue to wait. The background server uses an asynchronous mechanism to schedule the requests in the message queue, so that when processing each message the sender and receiver are independent of each other and need not wait for the other's response. Scheduling the requests with a message queue and an asynchronous mechanism lets the background server accept a large number of speech recognition requests at the same time and prevents one slow pending request from causing many others to be lost. The message queue and asynchronous mechanism can also be used to build a distributed system on the background server, which improves peak processing capacity and flexibility for speech recognition requests, reduces coupling between processes, and ensures that every speech recognition request is handled.
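  • As an illustration only (not part of the patent text), a minimal sketch of the queue-and-worker pattern described above might look like the following; the names recognition_queue and handle_request are hypothetical.

```python
import queue
import threading

# Requests wait in a message queue; workers consume them asynchronously,
# so accepting new requests never blocks on recognition of earlier ones.
recognition_queue = queue.Queue()

def handle_request(request):
    # Placeholder for: look up the target voiceprint by user ID,
    # extract the test voiceprint, compare, and return a result.
    print(f"processing request for user {request['user_id']}")

def worker():
    while True:
        request = recognition_queue.get()   # blocks until a request arrives
        try:
            handle_request(request)
        finally:
            recognition_queue.task_done()   # mark this message as processed

# A small pool of asynchronous workers drains the queue.
for _ in range(4):
    threading.Thread(target=worker, daemon=True).start()

recognition_queue.put({"user_id": "alice", "test_voice": b"...wav bytes..."})
recognition_queue.join()
```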
  • S30: The background server acquires the target voiceprint feature corresponding to the user ID of the speech recognition request to be processed, and acquires the test voiceprint feature corresponding to that request's test voice.
  • Specifically, the target voiceprint feature is the voiceprint feature of the user corresponding to the user ID, stored in advance on the background server. The test voiceprint feature is the voiceprint feature corresponding to the test voice in the speech recognition request. A voiceprint is the spectrum of sound waves, displayed by electro-acoustic instruments, that carries speech information. Correspondingly, voiceprint features include but are not limited to acoustic features related to the anatomy of the human vocal mechanism (such as spectrum, cepstrum, formants, pitch, reflection coefficients, and so on), nasal sounds, deep-breath sounds, hoarseness, laughter, and the like.
  • In this embodiment, the target voiceprint feature and the test voiceprint feature are preferably I-vector (identity vector) features. Any I-vector feature can be obtained with the I-vector algorithm, a method for estimating hidden variables that represents a segment of speech with a fixed-length, low-dimensional vector. During I-vector feature extraction the within-class and between-class variances are not considered separately; instead both are modeled in a single subspace, the total variability space, so the model can be trained in an unsupervised way. Language-independent information can be removed in the total variability space, which reduces dimensionality and denoises while preserving as much language-related acoustic information as possible.
  • Further, step S30 specifically includes the following steps:
  • S31: Query the voiceprint feature library according to the user ID of the speech recognition request to be processed, to acquire the target voiceprint feature corresponding to that user ID.
  • Specifically, at least one set of user IDs and the target voiceprint features associated with them is pre-stored in the voiceprint feature library, so that the corresponding target voiceprint feature can be looked up from the user ID in the pending speech recognition request.
  • S32: Process the test voice of the speech recognition request to be processed with the Gaussian mixture model-universal background model, to acquire the test voiceprint feature corresponding to that test voice.
  • The Gaussian mixture model-universal background model (GMM-UBM) is a speaker-independent, high-order GMM, adaptively trained on the speaker's training speech: the speaker's own voice reveals pronunciation conditions not covered by the model, which are approximated by a speaker-independent distribution of speech features, giving the model a high recognition rate.
  • Specifically, the background server places each received speech recognition request into the message queue to wait. When a process becomes idle, the pending speech recognition request is taken from the message queue and handed to the background Servlet container for processing. The Servlet container creates an HttpRequest object into which the incoming information is encapsulated, creates an HttpResponse object, passes HttpRequest and HttpResponse as parameters to the HttpServlet object, and calls the HttpServlet object's service method, inside which the Gaussian mixture model-universal background model is invoked to process the test voice and obtain the test voiceprint feature.
  • S40: The background server determines, according to the target voiceprint feature and the test voiceprint feature, whether they correspond to the same user, and outputs the determination result to the client.
  • Since the target voiceprint feature is the voiceprint feature pre-stored in the voiceprint feature library in association with the user ID, and the test voiceprint feature is the voiceprint feature of the test voice collected by the client for that user ID, the two can be judged to belong to the same user when they are identical or their similarity reaches a preset threshold, and the determination result (same user or not the same user) is output to the client.
  • Further, step S40 specifically includes the following steps:
  • S41: Use the PLDA algorithm to reduce the dimensionality of the target voiceprint feature and the test voiceprint feature respectively, obtaining a target dimension-reduction value and a test dimension-reduction value.
  • The PLDA (Probabilistic Linear Discriminant Analysis) algorithm is a channel compensation algorithm. PLDA operates on I-vector features: because an I-vector contains both speaker-difference information and channel-difference information, and only the speaker information is of interest, channel compensation is required. The channel compensation capability of the PLDA algorithm is better than that of the LDA algorithm.
  • The PLDA algorithm specifically includes the following steps: (1) initialize μ and W; (2) compute w using Figure PCTCN2017090046-appb-000001; (3) re-estimate W using Figure PCTCN2017090046-appb-000002, and return to the step of computing w using Figure PCTCN2017090046-appb-000003, until w is below a specified threshold; where μ is the mean voiceprint vector, W is the between-class distance, w is the voiceprint feature, and i is the number of iterations.
  • S42: Use a cosine measure function to compute a cosine measure of the target dimension-reduction value and the test dimension-reduction value, obtaining a cosine measure value.
  • Specifically, the cosine measure function is given by Figure PCTCN2017090046-appb-000004, where wtrain is the target voiceprint feature, wtest is the test voiceprint feature, and t is time. The cosine measure function provides a simple measure of the distance between the target voiceprint feature and the test voiceprint feature; when the two features can be expanded in a specified finite-dimensional space, the cosine measure function is easy to compute and its effect is direct and effective.
  • S43: Determine whether the cosine measure value is greater than the similarity threshold; if so, the two correspond to the same user; if not, they do not. Specifically, if score(wtrain, wtest) > K, the speaker of the target voiceprint feature and the speaker of the test voiceprint feature are the same user; conversely, if score(wtrain, wtest) ≤ K, they are not the same user, where K is the similarity threshold and may be a constant greater than 50%.
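  • For illustration (not from the patent text), a minimal numpy sketch of cosine scoring with a threshold, assuming the two voiceprints are already fixed-length vectors; the names and the 0.6 threshold are hypothetical, since the patent only says the threshold may exceed 50%.

```python
import numpy as np

def cosine_score(w_train: np.ndarray, w_test: np.ndarray) -> float:
    # Cosine similarity: inner product normalized by the vector lengths.
    return float(np.dot(w_train, w_test) /
                 (np.linalg.norm(w_train) * np.linalg.norm(w_test)))

K = 0.6  # assumed similarity threshold

w_train = np.random.rand(400)                   # enrolled (target) voiceprint
w_test = w_train + 0.05 * np.random.rand(400)   # probe (test) voiceprint
same_user = cosine_score(w_train, w_test) > K
print("same user" if same_user else "not the same user")
```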
  • S50: The client receives and displays the determination result.
  • The determination result is either that the speaker of the test voiceprint feature corresponding to the test voice and the speaker of the target voiceprint feature saved in the voiceprint feature library are the same user, or that they are not the same user.
  • In the voiceprint recognition method provided by the present invention, the background server acquires the corresponding target voiceprint feature based on the user ID in the speech recognition request to be processed, acquires the test voiceprint feature based on that request's test voice, and compares the two to determine whether their speakers are the same user, achieving fast speech recognition and improving speech recognition efficiency.
  • In addition, the background server uses a message queue and an asynchronous mechanism to determine the speech recognition request to be processed, which improves the processing efficiency for large numbers of speech recognition requests and prevents some requests from being lost because processing takes too long.
  • In a specific embodiment, the voiceprint recognition method further includes the following steps:
  • S51: Perform MFCC feature extraction on the training speech to obtain MFCC acoustic features.
  • MFCC stands for Mel Frequency Cepstrum Coefficients.
  • The process of MFCC feature extraction on the training speech is as follows: pre-emphasize, frame, and window the training speech; for each short-time analysis window, obtain the corresponding spectrum by FFT (Fast Fourier Transform); pass the spectrum through a Mel filter bank to obtain the Mel spectrum; perform cepstral analysis on the Mel spectrum (take the logarithm and then an inverse transform, which in practice is realized by the DCT discrete cosine transform, keeping the 2nd through 13th DCT coefficients as the MFCC coefficients) to obtain the Mel frequency cepstral coefficients (MFCC) and hence the MFCC acoustic features.
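  • As a non-authoritative sketch of this pipeline: librosa performs the framing, windowing, FFT, Mel filtering, log, and DCT internally, so the whole chain collapses into a few calls; the file name is hypothetical.

```python
import librosa

# Load training speech (librosa resamples to 22.05 kHz by default).
y, sr = librosa.load("training_speech.wav")

# Pre-emphasis boosts high frequencies before spectral analysis.
y = librosa.effects.preemphasis(y)

# 13 MFCCs per frame: framing/windowing, FFT, Mel filter bank,
# log, and DCT are all handled inside this one call.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
print(mfcc.shape)  # (13, number_of_frames)
```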
  • S52: Perform voice activity detection on the MFCC acoustic features and estimate the Gaussian mixture model parameters.
  • Voice activity detection uses a Voice Activity Detection (VAD) algorithm to distinguish speech from noise based on their different characteristics, detecting the speech signal segments and noise signal segments in the continuously sampled digital signal; the MFCC acoustic features of the speech signal segments are then used to estimate the parameter set of the Gaussian Mixture Model (GMM).
  • Specifically, the voice activity detection algorithm computes speech feature parameters such as short-time energy, short-time zero-crossing rate, and short-time autocorrelation, removes silence and non-speech signals, and keeps the non-silent speech signal for estimating the Gaussian mixture model parameters. In this embodiment, the zeroth-order, first-order, and second-order quantities of the MFCC acoustic features are used to estimate the parameters of the Gaussian mixture model.
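  • A minimal short-time-energy VAD sketch, for illustration only (a real VAD would also use the zero-crossing rate and autocorrelation mentioned above; the frame length and relative threshold are assumptions):

```python
import numpy as np

def energy_vad(signal: np.ndarray, frame_len: int = 400,
               threshold_ratio: float = 0.1) -> np.ndarray:
    """Return a boolean mask of voiced frames based on short-time energy."""
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    energy = (frames ** 2).sum(axis=1)          # short-time energy per frame
    threshold = threshold_ratio * energy.max()  # assumed relative threshold
    return energy > threshold

signal = np.random.randn(16000)  # stand-in for one second of 16 kHz audio
voiced = energy_vad(signal)
print(f"{voiced.sum()} of {voiced.size} frames kept as speech")
```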
  • S53: Train the universal background model with the Gaussian mixture model parameters to obtain the Gaussian mixture model-universal background model.
  • In this embodiment, factor analysis is performed on the Gaussian mixture model parameters through the universal background model to obtain the Gaussian mixture model-universal background model.
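  • For illustration, a UBM is commonly fitted as a large GMM over features pooled from many background speakers. A minimal sketch using scikit-learn follows; the component count and the random data are placeholders, and this omits the adaptation and factor analysis a full GMM-UBM system performs on top.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Pooled MFCC frames from many background speakers (placeholder data):
# rows are frames, columns are the 13 MFCC coefficients.
background_features = np.random.randn(5000, 13)

# The UBM is a speaker-independent, high-order GMM fitted on pooled data.
ubm = GaussianMixture(n_components=64, covariance_type="diag", max_iter=100)
ubm.fit(background_features)

# Per-utterance statistics under the UBM are the starting point for
# speaker-specific modeling (e.g., adaptation or i-vector extraction).
print(ubm.score(background_features[:100]))  # average log-likelihood
```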
  • Specifically, the factor analysis algorithm of the universal background model is s = m + Tw, where m is the mean voice, i.e. the mean vector; T is the voiceprint-space mapping matrix; and w is the voiceprint difference vector, i.e. the I-vector feature. The factor analysis algorithm is applied to the acoustic features represented by the Gaussian mixture model, separating the mean vector of the acoustic features from the voiceprint difference vector (the residual) to obtain the I-vector feature. This factor analysis can separate out the voiceprint difference vectors between different voices, making it easier to extract what is voiceprint-specific between them.
  • S54: Receive a voiceprint registration request, the voiceprint registration request including a user ID and target training speech.
  • In this embodiment, the client receives the voiceprint registration request input by the user and sends it to the server, and the server receives the voiceprint registration request.
  • S55: Perform feature extraction on the target training speech with the Gaussian mixture model-universal background model to obtain the target voiceprint feature.
  • Specifically, the server uses the trained Gaussian mixture model-universal background model to extract features from the target training speech: MFCC feature extraction is first performed on the target training speech to obtain the target MFCC acoustic features, voice activity detection is then performed on them, and the MFCC acoustic features that pass voice activity detection are fed into the trained Gaussian mixture model-universal background model for feature extraction to obtain the target voiceprint feature.
  • S56: Store the user ID and the target voiceprint feature in the voiceprint feature library.
  • In this embodiment, the user ID in the voiceprint registration request and the target voiceprint feature obtained from the target training speech are stored in the voiceprint feature library, so that when user identity recognition is required, the corresponding target voiceprint feature can be retrieved by user ID.
  • In this specific embodiment, MFCC feature extraction and voice activity detection are performed on the training speech, the Gaussian mixture model parameters are estimated, and the universal background model is trained with those parameters to obtain the trained Gaussian mixture model-universal background model, which has the advantage of a high recognition rate. A voiceprint registration request is then received, the target training speech in it is passed through the trained Gaussian mixture model-universal background model for feature extraction to obtain the target voiceprint feature, and the target voiceprint feature and the user ID are stored in the voiceprint feature library. During speech recognition, the corresponding target voiceprint feature can then be obtained from the user ID in the pending speech recognition request and compared with the test voiceprint feature to determine whether the speakers of the two are the same user, achieving the speech recognition effect.
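  • Tying the registration and recognition flows together, a toy in-memory version of the voiceprint feature library might look like this (illustrative only; extract_voiceprint is a hypothetical stand-in for the MFCC/VAD/GMM-UBM pipeline above):

```python
import numpy as np

voiceprint_library: dict[str, np.ndarray] = {}  # user ID -> target voiceprint

def extract_voiceprint(speech: np.ndarray) -> np.ndarray:
    # Stand-in for MFCC extraction + VAD + GMM-UBM/I-vector extraction.
    return speech[:400] / (np.linalg.norm(speech[:400]) + 1e-9)

def register(user_id: str, training_speech: np.ndarray) -> None:
    voiceprint_library[user_id] = extract_voiceprint(training_speech)

def verify(user_id: str, test_speech: np.ndarray, K: float = 0.6) -> bool:
    target = voiceprint_library[user_id]
    test = extract_voiceprint(test_speech)
    return float(np.dot(target, test)) > K  # cosine score on unit vectors

speech = np.random.randn(16000)
register("alice", speech)
print(verify("alice", speech + 0.01 * np.random.randn(16000)))
```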
  • FIG. 2 shows a schematic block diagram of the voiceprint recognition apparatus in Embodiment 2. The voiceprint recognition apparatus includes a client and a background server, and can perform identity recognition on the test voice collected by the client.
  • the voiceprint recognition apparatus includes a client 10 and a background server 20.
  • the client 10 is configured to collect test voices of the user, and send a voice recognition request to the background server, where the voice recognition request includes a user ID and a test voice.
  • the client 10 includes a terminal connected to the background server, such as a smart phone, a notebook, a desktop computer, etc., and the client has a microphone for collecting test voice or an external microphone interface.
  • the user ID is used to uniquely identify the user identity.
  • the test voice is associated with the user ID, and is used to determine the user corresponding to the test voice.
  • The client samples and records the user's voice, obtains the test voice in wav audio format, forms a speech recognition request from the test voice and the user ID, and sends the speech recognition request to the background server.
  • Further, when the client is a mobile phone, the test voice is collected using multiple threads; when the client is a web page, the test voice is collected using Ajax asynchronous refresh, so that communication with the background server does not interrupt the user's operation, which speeds up the collection of test requests. Ajax (Asynchronous JavaScript and XML) is a web application development method that uses client-side scripts to exchange data with a web server.
  • The background server 20 is configured to receive the speech recognition request and determine the speech recognition request to be processed using a message queue and an asynchronous mechanism.
  • The background server 20 receives speech recognition requests sent by at least one client and places them in a message queue to wait. The background server uses an asynchronous mechanism to schedule the requests in the message queue, so that when processing each message the sender and receiver are independent of each other and need not wait for the other's response. Scheduling the requests this way lets the background server accept a large number of speech recognition requests at the same time and prevents one slow pending request from causing many others to be lost. The message queue and asynchronous mechanism can also be used to build a distributed system on the background server, which improves peak processing capacity and flexibility for speech recognition requests, reduces coupling between processes, and ensures that every speech recognition request is handled.
  • the background server 20 is configured to acquire a target voiceprint feature corresponding to the user ID of the voice recognition request to be processed, and acquire a test voiceprint feature corresponding to the test voice of the voice recognition request to be processed.
  • the target voiceprint feature is a voiceprint feature of the user corresponding to the user ID stored in advance in the background server.
  • The test voiceprint feature is the voiceprint feature corresponding to the test voice in the speech recognition request. A voiceprint is the spectrum of sound waves, displayed by electro-acoustic instruments, that carries speech information. Correspondingly, voiceprint features include but are not limited to acoustic features related to the anatomy of the human vocal mechanism (such as spectrum, cepstrum, formants, pitch, reflection coefficients, and so on), nasal sounds, deep-breath sounds, hoarseness, laughter, and the like.
  • In this embodiment, the target voiceprint feature and the test voiceprint feature are preferably I-vector (identity vector) features. Any I-vector feature can be obtained with the I-vector algorithm, a method for estimating hidden variables that represents a segment of speech with a fixed-length, low-dimensional vector. During I-vector feature extraction the within-class and between-class variances are not considered separately; instead both are modeled in a single subspace, the total variability space, so the model can be trained in an unsupervised way. Language-independent information can be removed in the total variability space, which reduces dimensionality and denoises while preserving as much language-related acoustic information as possible.
  • the background server 20 includes a feature query unit 211 and a feature processing unit 212.
  • the feature query unit 211 is configured to query the voiceprint feature library according to the user ID of the voice recognition request to be processed to acquire the target voiceprint feature corresponding to the user ID of the voice recognition request to be processed.
  • At least one set of user IDs and target voiceprint features associated with the user IDs are pre-stored in the voiceprint feature library to facilitate searching for corresponding target voiceprint features based on the user IDs in the pending voice recognition request.
  • The feature processing unit 212 is configured to process the test voice of the speech recognition request to be processed with the Gaussian mixture model-universal background model, to acquire the test voiceprint feature corresponding to that test voice.
  • The Gaussian mixture model-universal background model (GMM-UBM) is a speaker-independent, high-order GMM, adaptively trained on the speaker's training speech: the speaker's own voice reveals pronunciation conditions not covered by the model, which are approximated by a speaker-independent distribution of speech features, giving the model a high recognition rate.
  • Specifically, the background server 20 places each received speech recognition request into the message queue to wait. When a process becomes idle, the pending speech recognition request is taken from the message queue and handed to the background Servlet container for processing. The Servlet container creates an HttpRequest object into which the incoming information is encapsulated, creates an HttpResponse object, passes HttpRequest and HttpResponse as parameters to the HttpServlet object, and calls the HttpServlet object's service method, inside which the Gaussian mixture model-universal background model is invoked to process the test voice and obtain the test voiceprint feature.
  • The background server 20 is configured to determine, according to the target voiceprint feature and the test voiceprint feature, whether they correspond to the same user, and to output the determination result to the client.
  • Since the target voiceprint feature is the voiceprint feature pre-stored in the voiceprint feature library in association with the user ID, and the test voiceprint feature is the voiceprint feature of the test voice collected by the client for that user ID, the two can be judged to belong to the same user when they are identical or their similarity reaches a preset threshold, and the determination result (same user or not the same user) is output to the client.
  • the background server 20 specifically includes a feature dimension reduction unit 221, a cosine measure processing unit 222, and a user recognition determination unit 223.
  • the feature dimension reduction unit 221 is configured to perform dimension reduction on the target voiceprint feature and the test voiceprint feature by using the PLDA algorithm, and obtain the target dimension reduction value and the test dimension reduction value.
  • The PLDA (Probabilistic Linear Discriminant Analysis) algorithm is a channel compensation algorithm. PLDA operates on I-vector features: because an I-vector contains both speaker-difference information and channel-difference information, and only the speaker information is of interest, channel compensation is required. The channel compensation capability of the PLDA algorithm is better than that of the LDA algorithm.
  • The PLDA algorithm specifically includes the following steps: (1) initialize μ and W; (2) compute w using Figure PCTCN2017090046-appb-000005; (3) re-estimate W using Figure PCTCN2017090046-appb-000006, and return to the step of computing w using Figure PCTCN2017090046-appb-000007, until w is below a specified threshold; where μ is the mean voiceprint vector, W is the between-class distance, w is the voiceprint feature, and i is the number of iterations.
  • The cosine measure processing unit 222 is configured to use a cosine measure function to compute a cosine measure of the target dimension-reduction value and the test dimension-reduction value, obtaining a cosine measure value.
  • Specifically, the cosine measure function is given by Figure PCTCN2017090046-appb-000008, where wtrain is the target voiceprint feature, wtest is the test voiceprint feature, and t is time. The cosine measure function provides a simple measure of the distance between the target voiceprint feature and the test voiceprint feature; when the two features can be expanded in a specified finite-dimensional space, the cosine measure function is easy to compute and its effect is direct and effective.
  • The user identification determining unit 223 is configured to determine whether the cosine measure value is greater than the similarity threshold; if so, the two correspond to the same user; if not, they do not. Specifically, if score(wtrain, wtest) > K, the speaker of the target voiceprint feature and the speaker of the test voiceprint feature are the same user; conversely, if score(wtrain, wtest) ≤ K, they are not the same user, where K is the similarity threshold and may be a constant greater than 50%.
  • The client 10 is configured to receive and display the determination result.
  • The determination result is either that the speaker of the test voiceprint feature corresponding to the test voice and the speaker of the target voiceprint feature saved in the voiceprint feature library are the same user, or that they are not the same user.
  • In the voiceprint recognition apparatus provided by the present invention, the background server acquires the corresponding target voiceprint feature based on the user ID in the speech recognition request to be processed, acquires the test voiceprint feature based on that request's test voice, and compares the two to determine whether their speakers are the same user, achieving fast speech recognition and improving speech recognition efficiency.
  • In addition, the background server uses a message queue and an asynchronous mechanism to determine the speech recognition request to be processed, which improves the processing efficiency for large numbers of speech recognition requests and prevents some requests from being lost because processing takes too long.
  • In a specific embodiment, the voiceprint recognition apparatus further includes an acoustic feature extraction unit 231, a voice activity detection unit 232, a model training unit 233, a registration voice receiving unit 234, a target voiceprint feature acquisition unit 235, and a target voiceprint feature storage unit 236.
  • The acoustic feature extraction unit 231 is configured to perform MFCC feature extraction on the training speech to obtain MFCC acoustic features.
  • MFCC stands for Mel Frequency Cepstrum Coefficients. The process of MFCC feature extraction on the training speech is as follows: pre-emphasize, frame, and window the training speech; for each short-time analysis window, obtain the corresponding spectrum by FFT (Fast Fourier Transform); pass the spectrum through a Mel filter bank to obtain the Mel spectrum; perform cepstral analysis on the Mel spectrum (take the logarithm and then an inverse transform, which in practice is realized by the DCT discrete cosine transform, keeping the 2nd through 13th DCT coefficients as the MFCC coefficients) to obtain the Mel frequency cepstral coefficients (MFCC) and hence the MFCC acoustic features.
  • The voice activity detection unit 232 is configured to perform voice activity detection on the MFCC acoustic features and estimate the Gaussian mixture model parameters.
  • Voice activity detection uses a Voice Activity Detection (VAD) algorithm to distinguish speech from noise based on their different characteristics, detecting the speech signal segments and noise signal segments in the continuously sampled digital signal; the MFCC acoustic features of the speech signal segments serve as the data for estimating the parameter set of the Gaussian Mixture Model (GMM). Specifically, the voice activity detection algorithm computes speech feature parameters such as short-time energy, short-time zero-crossing rate, and short-time autocorrelation, removes silence and non-speech signals, and keeps the non-silent speech signal for estimating the Gaussian mixture model parameters. In this embodiment, the zeroth-order, first-order, and second-order quantities of the MFCC acoustic features of the non-silent speech signal are used to estimate the parameters of the Gaussian mixture model.
  • The model training unit 233 is configured to train the universal background model with the Gaussian mixture model parameters to obtain the Gaussian mixture model-universal background model.
  • In this embodiment, factor analysis is performed on the Gaussian mixture model parameters through the universal background model to obtain the Gaussian mixture model-universal background model. Specifically, the factor analysis algorithm of the universal background model is s = m + Tw, where m is the mean voice, i.e. the mean vector; T is the voiceprint-space mapping matrix; and w is the voiceprint difference vector, i.e. the I-vector feature. The factor analysis algorithm is applied to the acoustic features represented by the Gaussian mixture model, separating the mean vector of the acoustic features from the voiceprint difference vector (the residual) to obtain the I-vector feature. This factor analysis can separate out the voiceprint difference vectors between different voices, making it easier to extract what is voiceprint-specific between them.
  • the registration voice receiving unit 234 is configured to receive a voiceprint registration request, where the voiceprint registration request includes a user ID and a target training voice.
  • the client receives the voiceprint registration request input by the user, and sends the voiceprint registration request to the server, and the server receives the voiceprint registration request.
  • the target voiceprint feature acquiring unit 235 is configured to perform feature extraction on the target training voice by using a Gaussian mixture model-general background model to obtain a target voiceprint feature.
  • the server uses the trained Gaussian mixture model-general background model to perform feature extraction on the target training speech to obtain the target voiceprint feature.
  • Specifically, MFCC feature extraction is first performed on the target training speech to obtain the target MFCC acoustic features, voice activity detection is then performed on them, and the MFCC acoustic features that pass voice activity detection are fed into the trained Gaussian mixture model-universal background model for feature extraction to obtain the target voiceprint feature.
  • the target voiceprint feature storage unit 236 is configured to store the user ID and the target voiceprint feature in the voiceprint feature library.
  • In this embodiment, the user ID in the voiceprint registration request and the target voiceprint feature obtained from the target training speech are stored in the voiceprint feature library, so that when user identity recognition is required, the corresponding target voiceprint feature can be retrieved by user ID.
  • In the voiceprint recognition apparatus of this specific embodiment, MFCC feature extraction and voice activity detection are performed on the training speech, the Gaussian mixture model parameters are estimated, and the universal background model is trained with those parameters to obtain the trained Gaussian mixture model-universal background model, which has the advantage of a high recognition rate. A voiceprint registration request is then received, the target training speech in it is passed through the trained Gaussian mixture model-universal background model for feature extraction to obtain the target voiceprint feature, and the target voiceprint feature and the user ID are stored in the voiceprint feature library. During speech recognition, the corresponding target voiceprint feature can then be obtained from the user ID in the pending speech recognition request and compared with the test voiceprint feature to determine whether the speakers of the two are the same user, achieving the speech recognition effect.
  • FIG. 3 is a schematic diagram of a background server according to an embodiment of the present invention.
  • As shown in FIG. 3, the background server 3 of this embodiment includes a processor 30, a memory 31, and a computer program 32 stored in the memory 31 and executable on the processor 30, for example a program that performs the voiceprint recognition method described above. When the processor 30 executes the computer program 32, the steps in the embodiments of the voiceprint recognition method above are implemented, such as steps S10 to S50 shown in FIG. 1; alternatively, the functions of the modules/units in the apparatus embodiments above are implemented, such as the functions of the units of the background server 20 shown in FIG. 2.
  • Illustratively, the computer program 32 may be divided into one or more modules/units, which are stored in the memory 31 and executed by the processor 30 to carry out the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing particular functions, the instruction segments being used to describe the execution of the computer program 32 in the background server 3.
  • The background server 3 may be a computing device such as a local server or a cloud server, and may include, but is not limited to, the processor 30 and the memory 31. Those skilled in the art will understand that FIG. 3 is merely an example of the background server 3 and does not limit it; the background server may include more or fewer components than shown, combine certain components, or use different components. For example, the background server may further include input/output devices, network access devices, buses, and the like.
  • the processor 30 may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, etc.
  • the general purpose processor may be a microprocessor or the processor or any conventional processor or the like.
  • The memory 31 may be an internal storage unit of the background server 3, such as a hard disk or memory of the background server 3. The memory 31 may also be an external storage device of the background server 3, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card provided on the background server 3. Further, the memory 31 may include both an internal storage unit and an external storage device of the background server 3. The memory 31 is used to store the computer program and the other programs and data required by the background server, and may also be used to temporarily store data that has been output or is to be output.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Computational Linguistics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Business, Economics & Management (AREA)
  • Game Theory and Decision Science (AREA)
  • Telephonic Communication Services (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A voiceprint recognition method and apparatus, a storage medium, and a background server. The voiceprint recognition method comprises: a client collects a user's test voice and sends a speech recognition request, comprising a user ID and the test voice, to a background server (S10); the background server receives the speech recognition request and determines the speech recognition request to be processed using a message queue and an asynchronous mechanism (S20); the background server acquires the target voiceprint feature corresponding to the user ID of the speech recognition request to be processed, and acquires the test voiceprint feature corresponding to that request's test voice (S30); the background server determines, according to the target voiceprint feature and the test voiceprint feature, whether they correspond to the same user, and outputs the determination result to the client (S40); the client receives and displays the determination result (S50). The voiceprint recognition method achieves fast speech recognition and improves speech recognition efficiency.

Description

Voiceprint recognition method, apparatus, storage medium and background server

TECHNICAL FIELD

The present invention relates to the field of biometric identity recognition, and in particular to a voiceprint recognition method, apparatus, storage medium, and background server.

BACKGROUND

Voiceprint Recognition is an identity recognition technique that identifies a speaker from the speaker's biometric characteristics carried in the voice. Because voiceprint recognition is safe and reliable, it can be used in almost all security protection fields and personalized applications that require identity recognition. For example, as the business volume of financial institutions such as banks, securities firms, and insurers keeps expanding, a large number of identity recognition needs arise. Compared with traditional identity recognition techniques, the advantages of voiceprint recognition are that voiceprint extraction is simple and low-cost, and each person's voiceprint features differ from everyone else's, are unique, and are difficult to forge or counterfeit. Because voiceprint recognition is safe, reliable, and convenient, it is widely used wherever identity recognition is required. However, the existing voiceprint recognition process takes a long time; when a large number of speech recognition requests are processed, some of them are easily lost because processing takes too long, which hinders the application of voiceprint recognition technology.
SUMMARY

In view of the defects of the prior art, the technical problem to be solved by the present invention is to provide a voiceprint recognition method, apparatus, storage medium, and background server that improve the processing efficiency of large numbers of speech recognition requests and shorten processing time.

The technical solution adopted by the present invention to solve its technical problem is a voiceprint recognition method, comprising:

a client collects a user's test voice and sends a speech recognition request to a background server, the speech recognition request comprising a user ID and the test voice;

the background server receives the speech recognition request and determines the speech recognition request to be processed using a message queue and an asynchronous mechanism;

the background server acquires the target voiceprint feature corresponding to the user ID of the speech recognition request to be processed, and acquires the test voiceprint feature corresponding to the test voice of the speech recognition request to be processed;

the background server determines, according to the target voiceprint feature and the test voiceprint feature, whether they correspond to the same user, and outputs the determination result to the client;

the client receives and displays the determination result.

The present invention also provides a voiceprint recognition apparatus, comprising:

a client, configured to collect a user's test voice and send a speech recognition request to a background server, the speech recognition request comprising a user ID and the test voice;

a background server, configured to receive the speech recognition request and determine the speech recognition request to be processed using a message queue and an asynchronous mechanism;

a background server, configured to acquire the target voiceprint feature corresponding to the user ID of the speech recognition request to be processed, and acquire the test voiceprint feature corresponding to the test voice of the speech recognition request to be processed;

a background server, configured to determine, according to the target voiceprint feature and the test voiceprint feature, whether they correspond to the same user, and output the determination result to the client;

a client, configured to receive and display the determination result.

The present invention also provides a background server comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps performed by the background server in the voiceprint recognition method described above.

The present invention also provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements the steps performed by the background server in the voiceprint recognition method described above.

Compared with the prior art, the present invention has the following advantages: in the voiceprint recognition method and apparatus provided by the present invention, the background server acquires the corresponding target voiceprint feature based on the user ID in the speech recognition request to be processed, acquires the test voiceprint feature based on the test voice in that request, and compares the target voiceprint feature with the test voiceprint feature to determine whether their speakers are the same user, achieving fast speech recognition and improving speech recognition efficiency. In addition, the background server uses a message queue and an asynchronous mechanism to determine the speech recognition request to be processed, which improves the processing efficiency for large numbers of speech recognition requests and prevents some requests from being lost because processing takes too long.
BRIEF DESCRIPTION OF THE DRAWINGS

The present invention is further described below with reference to the accompanying drawings and embodiments, in which:

FIG. 1 is a flow chart of the voiceprint recognition method in Embodiment 1 of the present invention;

FIG. 2 is a schematic block diagram of the voiceprint recognition apparatus in Embodiment 2 of the present invention;

FIG. 3 is a schematic diagram of the background server according to an embodiment of the present invention.

DETAILED DESCRIPTION

For a clearer understanding of the technical features, objects, and effects of the present invention, specific embodiments of the present invention are described in detail with reference to the accompanying drawings.
Embodiment 1

FIG. 1 shows a flow chart of the voiceprint recognition method in this embodiment. The voiceprint recognition method can be applied on a client and a background server to perform identity recognition on the test voice collected by the client. As shown in FIG. 1, the voiceprint recognition method includes the following steps:

S10: The client collects the user's test voice and sends a speech recognition request to the background server, the speech recognition request including the user ID and the test voice.

The client includes terminals that can communicate with the background server, such as smartphones, notebooks, and desktop computers, and is equipped with a microphone for collecting the test voice or with an external microphone interface. The user ID uniquely identifies the user; in this embodiment the test voice is associated with the user ID and is used to determine the user to whom the test voice corresponds. The client samples and records the user's voice, obtains the test voice in wav audio format, forms a speech recognition request from the test voice and the user ID, and sends the speech recognition request to the background server.

Further, when the client is a mobile phone, the test voice is collected using multiple threads; when the client is a web page, the test voice is collected using Ajax asynchronous refresh, so that communication with the background server does not interrupt the user's operation, which speeds up the collection of test requests. Ajax (Asynchronous JavaScript and XML) is a web application development method that uses client-side scripts to exchange data with a web server.

S20: The background server receives the speech recognition request and determines the speech recognition request to be processed using a message queue and an asynchronous mechanism.

The background server receives speech recognition requests sent by at least one client and places them in a message queue to wait. The background server uses an asynchronous mechanism to schedule the requests in the message queue, so that when processing each message the sender and receiver are independent of each other and need not wait for the other's response. Scheduling the requests with a message queue and an asynchronous mechanism lets the background server accept a large number of speech recognition requests at the same time and prevents one slow pending request from causing many others to be lost. Moreover, the message queue and asynchronous mechanism can also be used to build a distributed system on the background server, which improves peak processing capacity and flexibility for speech recognition requests, reduces coupling between processes, and ensures that every speech recognition request is handled.
S30: The background server acquires the target voiceprint feature corresponding to the user ID of the speech recognition request to be processed, and acquires the test voiceprint feature corresponding to that request's test voice.

Specifically, the target voiceprint feature is the voiceprint feature of the user corresponding to the user ID, stored in advance on the background server. The test voiceprint feature is the voiceprint feature corresponding to the test voice in the speech recognition request. A voiceprint is the spectrum of sound waves, displayed by electro-acoustic instruments, that carries speech information. Correspondingly, voiceprint features include but are not limited to acoustic features related to the anatomy of the human vocal mechanism (such as spectrum, cepstrum, formants, pitch, reflection coefficients, and so on), nasal sounds, deep-breath sounds, hoarseness, laughter, and the like.

In this embodiment, the target voiceprint feature and the test voiceprint feature are preferably I-vector (identity vector) features. Any I-vector feature can be obtained with the I-vector algorithm, a method for estimating hidden variables that represents a segment of speech with a fixed-length, low-dimensional vector. During I-vector feature extraction the within-class and between-class variances are not considered separately; instead both are modeled in a single subspace, the total variability space, so the model can be trained in an unsupervised way. Language-independent information can be removed in the total variability space, which reduces dimensionality and denoises while preserving as much language-related acoustic information as possible.

Further, step S30 specifically includes the following steps:

S31: Query the voiceprint feature library according to the user ID of the speech recognition request to be processed, to acquire the target voiceprint feature corresponding to that user ID.

Specifically, at least one set of user IDs and the target voiceprint features associated with them is pre-stored in the voiceprint feature library, so that the corresponding target voiceprint feature can be looked up from the user ID in the pending speech recognition request.

S32: Process the test voice of the speech recognition request to be processed with the Gaussian mixture model-universal background model, to acquire the test voiceprint feature corresponding to that test voice.

The Gaussian mixture model-universal background model (GMM-UBM) is a speaker-independent, high-order GMM, adaptively trained on the speaker's training speech: the speaker's own voice reveals pronunciation conditions not covered by the model, which are approximated by a speaker-independent distribution of speech features, giving the model a high recognition rate.

Specifically, the background server places each received speech recognition request into the message queue to wait. When a process becomes idle, the pending speech recognition request is taken from the message queue and handed to the background Servlet container for processing. The Servlet container creates an HttpRequest object into which the incoming information is encapsulated, creates an HttpResponse object, passes HttpRequest and HttpResponse as parameters to the HttpServlet object, and calls the HttpServlet object's service method, inside which the Gaussian mixture model-universal background model is invoked to process the test voice and obtain the test voiceprint feature.
S40: The background server determines, according to the target voiceprint feature and the test voiceprint feature, whether they correspond to the same user, and outputs the determination result to the client.

Since the target voiceprint feature is the voiceprint feature pre-stored in the voiceprint feature library in association with the user ID, and the test voiceprint feature is the voiceprint feature of the test voice collected by the client for that user ID, the two can be judged to belong to the same user when they are identical or their similarity reaches a preset threshold, and the determination result (same user or not the same user) is output to the client.

Further, step S40 specifically includes the following steps:

S41: Use the PLDA algorithm to reduce the dimensionality of the target voiceprint feature and the test voiceprint feature respectively, obtaining a target dimension-reduction value and a test dimension-reduction value.

The PLDA (Probabilistic Linear Discriminant Analysis) algorithm is a channel compensation algorithm. PLDA operates on I-vector features: because an I-vector contains both speaker-difference information and channel-difference information, and only the speaker information is of interest, channel compensation is required. The channel compensation capability of the PLDA algorithm is better than that of the LDA algorithm.

The PLDA algorithm specifically includes the following steps:

(1) Initialize μ and W;

(2) Compute w using

Figure PCTCN2017090046-appb-000001

(3) Re-estimate W using

Figure PCTCN2017090046-appb-000002

and return to the step of computing w using

Figure PCTCN2017090046-appb-000003

until w is below a specified threshold;

where μ is the mean voiceprint vector, W is the between-class distance, w is the voiceprint feature, and i is the number of iterations.
S42: Use a cosine measure function to compute a cosine measure of the target dimension-reduction value and the test dimension-reduction value, obtaining a cosine measure value.

Specifically, the cosine measure function is:

Figure PCTCN2017090046-appb-000004

where wtrain is the target voiceprint feature, wtest is the test voiceprint feature, and t is time. The cosine measure function provides a simple measure of the distance between the target voiceprint feature and the test voiceprint feature; when the two features can be expanded in a specified finite-dimensional space, the cosine measure function is easy to compute and its effect is direct and effective.

S43: Determine whether the cosine measure value is greater than the similarity threshold; if so, the two correspond to the same user; if not, they do not.

Specifically, if score(wtrain, wtest) > K, the speaker of the target voiceprint feature and the speaker of the test voiceprint feature are the same user; conversely, if score(wtrain, wtest) ≤ K, they are not the same user. Here K is the similarity threshold and may be a constant greater than 50%.

S50: The client receives and displays the determination result.

The determination result is either that the speaker of the test voiceprint feature corresponding to the test voice and the speaker of the target voiceprint feature saved in the voiceprint feature library are the same user, or that they are not the same user.

In the voiceprint recognition method provided by the present invention, the background server acquires the corresponding target voiceprint feature based on the user ID in the speech recognition request to be processed, acquires the test voiceprint feature based on that request's test voice, and compares the two to determine whether their speakers are the same user, achieving fast speech recognition and improving speech recognition efficiency. In addition, the background server uses a message queue and an asynchronous mechanism to determine the speech recognition request to be processed, which improves the processing efficiency for large numbers of speech recognition requests and prevents some requests from being lost because processing takes too long.
In a specific embodiment, the voiceprint recognition method further includes the following steps:

S51: Perform MFCC feature extraction on the training speech to obtain MFCC acoustic features.

MFCC stands for Mel Frequency Cepstrum Coefficients. The process of MFCC feature extraction on the training speech is: pre-emphasize, frame, and window the training speech; for each short-time analysis window, obtain the corresponding spectrum by FFT (Fast Fourier Transform); pass the spectrum through a Mel filter bank to obtain the Mel spectrum; perform cepstral analysis on the Mel spectrum (take the logarithm and then an inverse transform, which in practice is realized by the DCT discrete cosine transform, keeping the 2nd through 13th DCT coefficients as the MFCC coefficients) to obtain the Mel frequency cepstral coefficients (MFCC) and hence the MFCC acoustic features.

S52: Perform voice activity detection on the MFCC acoustic features and estimate the Gaussian mixture model parameters.

Voice activity detection uses a Voice Activity Detection (VAD) algorithm to distinguish speech from noise based on their different characteristics, detecting the speech signal segments and noise signal segments in the continuously sampled digital signal, and uses the MFCC acoustic features of the speech signal segments to estimate the parameter set of the Gaussian Mixture Model (GMM). Specifically, the voice activity detection algorithm computes speech feature parameters such as short-time energy, short-time zero-crossing rate, and short-time autocorrelation, removes silence and non-speech signals, and keeps the non-silent speech signal for estimating the Gaussian mixture model parameters. In this embodiment, the zeroth-order, first-order, and second-order quantities of the MFCC acoustic features are used to estimate the parameters of the Gaussian mixture model.

S53: Train the universal background model with the Gaussian mixture model parameters to obtain the Gaussian mixture model-universal background model.

In this embodiment, factor analysis is performed on the Gaussian mixture model parameters through the universal background model to obtain the Gaussian mixture model-universal background model. Specifically, the factor analysis algorithm of the universal background model is s = m + Tw, where m is the mean voice, i.e. the mean vector; T is the voiceprint-space mapping matrix; and w is the voiceprint difference vector, i.e. the I-vector feature. The factor analysis algorithm is applied to the acoustic features represented by the Gaussian mixture model, separating the mean vector of the acoustic features from the voiceprint difference vector (the residual) to obtain the I-vector feature. This factor analysis can separate out the voiceprint difference vectors between different voices, making it easier to extract what is voiceprint-specific between them.
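As an illustrative aside (not from the patent): under the simplifying assumptions of a known mapping matrix T and a noiseless model, and ignoring the posterior covariance weighting a full i-vector extractor uses, the voiceprint difference vector w in s = m + Tw can be estimated by least squares. All names and dimensions below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed shapes: supervector s and UBM mean m of dimension 832
# (64 mixtures x 13 MFCCs); T maps a 400-dim i-vector into that space.
m = rng.standard_normal(832)          # UBM mean supervector
T = rng.standard_normal((832, 400))   # voiceprint-space mapping matrix
w_true = rng.standard_normal(400)
s = m + T @ w_true                    # factor analysis model: s = m + Tw

# Least-squares estimate of the voiceprint difference vector w.
w_hat, *_ = np.linalg.lstsq(T, s - m, rcond=None)
print(np.allclose(w_hat, w_true))     # True: exact recovery in this noiseless toy
```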
S54: Receive a voiceprint registration request, the voiceprint registration request including a user ID and target training speech.

In this embodiment, the client receives the voiceprint registration request input by the user and sends it to the server, and the server receives the voiceprint registration request.

S55: Perform feature extraction on the target training speech with the Gaussian mixture model-universal background model to obtain the target voiceprint feature.

Specifically, the server uses the trained Gaussian mixture model-universal background model to extract features from the target training speech: MFCC feature extraction is first performed on the target training speech to obtain the target MFCC acoustic features, voice activity detection is then performed on the target MFCC acoustic features, and the MFCC acoustic features that pass voice activity detection are fed into the trained Gaussian mixture model-universal background model for feature extraction to obtain the target voiceprint feature.

S56: Store the user ID and the target voiceprint feature in the voiceprint feature library.

In this embodiment, the user ID in the voiceprint registration request and the target voiceprint feature obtained from the target training speech are stored in the voiceprint feature library, so that when user identity recognition is required, the corresponding target voiceprint feature can be retrieved by user ID.

In this specific embodiment, MFCC feature extraction and voice activity detection are performed on the training speech, the Gaussian mixture model parameters are estimated, and the universal background model is trained with those parameters to obtain the trained Gaussian mixture model-universal background model, which has the advantage of a high recognition rate. A voiceprint registration request is then received, the target training speech in it is passed through the trained Gaussian mixture model-universal background model for feature extraction to obtain the target voiceprint feature, and the target voiceprint feature and the user ID are stored in the voiceprint feature library, so that during speech recognition the corresponding target voiceprint feature can be obtained from the user ID in the pending speech recognition request and compared with the test voiceprint feature to determine whether the speakers of the two are the same user, achieving the speech recognition effect.
Embodiment 2

FIG. 2 shows a schematic block diagram of the voiceprint recognition apparatus in this embodiment. The voiceprint recognition apparatus includes a client and a background server, and can perform identity recognition on the test voice collected by the client. As shown in FIG. 2, the voiceprint recognition apparatus includes a client 10 and a background server 20.

The client 10 is configured to collect the user's test voice and send a speech recognition request to the background server, the speech recognition request including the user ID and the test voice.

The client 10 includes terminals that can communicate with the background server, such as smartphones, notebooks, and desktop computers, and is equipped with a microphone for collecting the test voice or with an external microphone interface. The user ID uniquely identifies the user; in this embodiment the test voice is associated with the user ID and is used to determine the user to whom the test voice corresponds. The client samples and records the user's voice, obtains the test voice in wav audio format, forms a speech recognition request from the test voice and the user ID, and sends the speech recognition request to the background server.

Further, when the client is a mobile phone, the test voice is collected using multiple threads; when the client is a web page, the test voice is collected using Ajax asynchronous refresh, so that communication with the background server does not interrupt the user's operation, which speeds up the collection of test requests. Ajax (Asynchronous JavaScript and XML) is a web application development method that uses client-side scripts to exchange data with a web server.

The background server 20 is configured to receive the speech recognition request and determine the speech recognition request to be processed using a message queue and an asynchronous mechanism.

The background server 20 receives speech recognition requests sent by at least one client and places them in a message queue to wait. The background server uses an asynchronous mechanism to schedule the requests in the message queue, so that when processing each message the sender and receiver are independent of each other and need not wait for the other's response. Scheduling the requests with a message queue and an asynchronous mechanism lets the background server accept a large number of speech recognition requests at the same time and prevents one slow pending request from causing many others to be lost. Moreover, the message queue and asynchronous mechanism can also be used to build a distributed system on the background server, which improves peak processing capacity and flexibility for speech recognition requests, reduces coupling between processes, and ensures that every speech recognition request is handled.
The background server 20 is configured to acquire the target voiceprint feature corresponding to the user ID of the speech recognition request to be processed, and to acquire the test voiceprint feature corresponding to that request's test voice.

Specifically, the target voiceprint feature is the voiceprint feature of the user corresponding to the user ID, stored in advance on the background server. The test voiceprint feature is the voiceprint feature corresponding to the test voice in the speech recognition request. A voiceprint is the spectrum of sound waves, displayed by electro-acoustic instruments, that carries speech information. Correspondingly, voiceprint features include but are not limited to acoustic features related to the anatomy of the human vocal mechanism (such as spectrum, cepstrum, formants, pitch, reflection coefficients, and so on), nasal sounds, deep-breath sounds, hoarseness, laughter, and the like.

In this embodiment, the target voiceprint feature and the test voiceprint feature are preferably I-vector (identity vector) features. Any I-vector feature can be obtained with the I-vector algorithm, a method for estimating hidden variables that represents a segment of speech with a fixed-length, low-dimensional vector. During I-vector feature extraction the within-class and between-class variances are not considered separately; instead both are modeled in a single subspace, the total variability space, so the model can be trained in an unsupervised way. Language-independent information can be removed in the total variability space, which reduces dimensionality and denoises while preserving as much language-related acoustic information as possible.

Further, the background server 20 includes a feature query unit 211 and a feature processing unit 212.

The feature query unit 211 is configured to query the voiceprint feature library according to the user ID of the speech recognition request to be processed, to acquire the target voiceprint feature corresponding to that user ID.

Specifically, at least one set of user IDs and the target voiceprint features associated with them is pre-stored in the voiceprint feature library, so that the corresponding target voiceprint feature can be looked up from the user ID in the pending speech recognition request.

The feature processing unit 212 is configured to process the test voice of the speech recognition request to be processed with the Gaussian mixture model-universal background model, to acquire the test voiceprint feature corresponding to that test voice.

The Gaussian mixture model-universal background model (GMM-UBM) is a speaker-independent, high-order GMM, adaptively trained on the speaker's training speech: the speaker's own voice reveals pronunciation conditions not covered by the model, which are approximated by a speaker-independent distribution of speech features, giving the model a high recognition rate.

Specifically, the background server 20 places each received speech recognition request into the message queue to wait. When a process becomes idle, the pending speech recognition request is taken from the message queue and handed to the background Servlet container for processing. The Servlet container creates an HttpRequest object into which the incoming information is encapsulated, creates an HttpResponse object, passes HttpRequest and HttpResponse as parameters to the HttpServlet object, and calls the HttpServlet object's service method, inside which the Gaussian mixture model-universal background model is invoked to process the test voice and obtain the test voiceprint feature.
The background server 20 determines, according to the target voiceprint feature and the test voiceprint feature, whether they correspond to the same user, and outputs the determination result to the client.

Since the target voiceprint feature is the voiceprint feature pre-stored in the voiceprint feature library in association with the user ID, and the test voiceprint feature is the voiceprint feature of the test voice collected by the client for that user ID, the two can be judged to belong to the same user when they are identical or their similarity reaches a preset threshold, and the determination result (same user or not the same user) is output to the client.

Further, the background server 20 specifically includes a feature dimension reduction unit 221, a cosine measure processing unit 222, and a user identification determining unit 223.

The feature dimension reduction unit 221 is configured to use the PLDA algorithm to reduce the dimensionality of the target voiceprint feature and the test voiceprint feature respectively, obtaining a target dimension-reduction value and a test dimension-reduction value.

The PLDA (Probabilistic Linear Discriminant Analysis) algorithm is a channel compensation algorithm. PLDA operates on I-vector features: because an I-vector contains both speaker-difference information and channel-difference information, and only the speaker information is of interest, channel compensation is required. The channel compensation capability of the PLDA algorithm is better than that of the LDA algorithm.

The PLDA algorithm specifically includes the following steps:

(1) Initialize μ and W;

(2) Compute w using

Figure PCTCN2017090046-appb-000005

(3) Re-estimate W using

Figure PCTCN2017090046-appb-000006

and return to the step of computing w using

Figure PCTCN2017090046-appb-000007

until w is below a specified threshold;

where μ is the mean voiceprint vector, W is the between-class distance, w is the voiceprint feature, and i is the number of iterations.

The cosine measure processing unit 222 is configured to use a cosine measure function to compute a cosine measure of the target dimension-reduction value and the test dimension-reduction value, obtaining a cosine measure value.

Specifically, the cosine measure function is:

Figure PCTCN2017090046-appb-000008

where wtrain is the target voiceprint feature, wtest is the test voiceprint feature, and t is time. The cosine measure function provides a simple measure of the distance between the target voiceprint feature and the test voiceprint feature; when the two features can be expanded in a specified finite-dimensional space, the cosine measure function is easy to compute and its effect is direct and effective.

The user identification determining unit 223 is configured to determine whether the cosine measure value is greater than the similarity threshold; if so, the two correspond to the same user; if not, they do not.

Specifically, if score(wtrain, wtest) > K, the speaker of the target voiceprint feature and the speaker of the test voiceprint feature are the same user; conversely, if score(wtrain, wtest) ≤ K, they are not the same user. Here K is the similarity threshold and may be a constant greater than 50%.

The client 10 is configured to receive and display the determination result.

The determination result is either that the speaker of the test voiceprint feature corresponding to the test voice and the speaker of the target voiceprint feature saved in the voiceprint feature library are the same user, or that they are not the same user.

In the voiceprint recognition apparatus provided by the present invention, the background server acquires the corresponding target voiceprint feature based on the user ID in the speech recognition request to be processed, acquires the test voiceprint feature based on that request's test voice, and compares the two to determine whether their speakers are the same user, achieving fast speech recognition and improving speech recognition efficiency. In addition, the background server uses a message queue and an asynchronous mechanism to determine the speech recognition request to be processed, which improves the processing efficiency for large numbers of speech recognition requests and prevents some requests from being lost because processing takes too long.
In a specific embodiment, the voiceprint recognition apparatus further includes an acoustic feature extraction unit 231, a voice activity detection unit 232, a model training unit 233, a registration voice receiving unit 234, a target voiceprint feature acquisition unit 235, and a target voiceprint feature storage unit 236.

The acoustic feature extraction unit 231 is configured to perform MFCC feature extraction on the training speech to obtain MFCC acoustic features.

MFCC stands for Mel Frequency Cepstrum Coefficients. The process of MFCC feature extraction on the training speech is: pre-emphasize, frame, and window the training speech; for each short-time analysis window, obtain the corresponding spectrum by FFT (Fast Fourier Transform); pass the spectrum through a Mel filter bank to obtain the Mel spectrum; perform cepstral analysis on the Mel spectrum (take the logarithm and then an inverse transform, which in practice is realized by the DCT discrete cosine transform, keeping the 2nd through 13th DCT coefficients as the MFCC coefficients) to obtain the Mel frequency cepstral coefficients (MFCC) and hence the MFCC acoustic features.

The voice activity detection unit 232 is configured to perform voice activity detection on the MFCC acoustic features and estimate the Gaussian mixture model parameters.

Voice activity detection uses a Voice Activity Detection (VAD) algorithm to distinguish speech from noise based on their different characteristics, detecting the speech signal segments and noise signal segments in the continuously sampled digital signal; the MFCC acoustic features of the speech signal segments serve as the data for estimating the parameter set of the Gaussian Mixture Model (GMM). Specifically, the voice activity detection algorithm computes speech feature parameters such as short-time energy, short-time zero-crossing rate, and short-time autocorrelation, removes silence and non-speech signals, and keeps the non-silent speech signal for estimating the Gaussian mixture model parameters. In this embodiment, the zeroth-order, first-order, and second-order quantities of the MFCC acoustic features of the non-silent speech signal are used to estimate the parameters of the Gaussian mixture model.

The model training unit 233 is configured to train the universal background model with the Gaussian mixture model parameters to obtain the Gaussian mixture model-universal background model.

In this embodiment, factor analysis is performed on the Gaussian mixture model parameters through the universal background model to obtain the Gaussian mixture model-universal background model. Specifically, the factor analysis algorithm of the universal background model is s = m + Tw, where m is the mean voice, i.e. the mean vector; T is the voiceprint-space mapping matrix; and w is the voiceprint difference vector, i.e. the I-vector feature. The factor analysis algorithm is applied to the acoustic features represented by the Gaussian mixture model, separating the mean vector of the acoustic features from the voiceprint difference vector (the residual) to obtain the I-vector feature. This factor analysis can separate out the voiceprint difference vectors between different voices, making it easier to extract what is voiceprint-specific between them.

The registration voice receiving unit 234 is configured to receive a voiceprint registration request, the voiceprint registration request including a user ID and target training speech. In this embodiment, the client receives the voiceprint registration request input by the user and sends it to the server, and the server receives the voiceprint registration request.

The target voiceprint feature acquisition unit 235 is configured to perform feature extraction on the target training speech with the Gaussian mixture model-universal background model to obtain the target voiceprint feature. Specifically, the server uses the trained Gaussian mixture model-universal background model to extract features from the target training speech: MFCC feature extraction is first performed on the target training speech to obtain the target MFCC acoustic features, voice activity detection is then performed on the target MFCC acoustic features, and the MFCC acoustic features that pass voice activity detection are fed into the trained Gaussian mixture model-universal background model for feature extraction to obtain the target voiceprint feature.

The target voiceprint feature storage unit 236 is configured to store the user ID and the target voiceprint feature in the voiceprint feature library. In this embodiment, the user ID in the voiceprint registration request and the target voiceprint feature obtained from the target training speech are stored in the voiceprint feature library, so that when user identity recognition is required, the corresponding target voiceprint feature can be retrieved by user ID.

In the voiceprint recognition apparatus provided by this specific embodiment, MFCC feature extraction and voice activity detection are performed on the training speech, the Gaussian mixture model parameters are estimated, and the universal background model is trained with those parameters to obtain the trained Gaussian mixture model-universal background model, which has the advantage of a high recognition rate. A voiceprint registration request is then received, the target training speech in it is passed through the trained Gaussian mixture model-universal background model for feature extraction to obtain the target voiceprint feature, and the target voiceprint feature and the user ID are stored in the voiceprint feature library, so that during speech recognition the corresponding target voiceprint feature can be obtained from the user ID in the pending speech recognition request and compared with the test voiceprint feature to determine whether the speakers of the two are the same user, achieving the speech recognition effect.
FIG. 3 is a schematic diagram of a background server provided by an embodiment of the present invention. As shown in FIG. 3, the background server 3 of this embodiment includes a processor 30, a memory 31, and a computer program 32 stored in the memory 31 and executable on the processor 30, for example a program that executes the voiceprint recognition method described above. When the processor 30 executes the computer program 32, the steps of the voiceprint recognition method embodiments above are implemented, for example steps S10 to S50 shown in FIG. 1; alternatively, the functions of the modules/units of the device embodiments above are implemented, for example the functions of the units of the background server 20 shown in FIG. 2.

Exemplarily, the computer program 32 may be divided into one or more modules/units, which are stored in the memory 31 and executed by the processor 30 to accomplish the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, the instruction segments describing the execution of the computer program 32 in the background server 3.

The background server 3 may be a computing device such as a local server or a cloud server. The background server may include, but is not limited to, the processor 30 and the memory 31. Those skilled in the art will understand that FIG. 3 is merely an example of the background server 3 and does not limit it; the background server may include more or fewer components than shown, combine certain components, or use different components; for example, it may further include input/output devices, network access devices, buses, and the like.

The processor 30 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on. A general-purpose processor may be a microprocessor, or any conventional processor.

The memory 31 may be an internal storage unit of the background server 3, for example a hard disk or memory of the background server 3. The memory 31 may also be an external storage device of the background server 3, for example a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a Flash Card provided on the background server 3. Further, the memory 31 may include both an internal storage unit and an external storage device of the background server 3. The memory 31 is used to store the computer program and the other programs and data required by the background server, and may also temporarily store data that has been or is to be output.

The present invention has been described through several specific embodiments. Those skilled in the art should understand that various changes and equivalent substitutions may be made to the present invention without departing from its scope, and that various modifications may be made for particular situations without departing from its scope. Therefore, the present invention is not limited to the specific embodiments disclosed, but shall include all implementations falling within the scope of the claims of the present invention.

Claims (20)

  1. A voiceprint recognition method, comprising:
    a client collecting a user's test speech and sending a voice recognition request to a background server, the voice recognition request comprising a user ID and the test speech;
    the background server receiving the voice recognition request and determining pending voice recognition requests using a message queue and an asynchronous mechanism;
    the background server obtaining a target voiceprint feature corresponding to the user ID of the pending voice recognition request, and obtaining a test voiceprint feature corresponding to the test speech of the pending voice recognition request;
    the background server determining, based on the target voiceprint feature and the test voiceprint feature, whether they correspond to the same user, and outputting a determination result to the client;
    the client receiving and displaying the determination result.
  2. The voiceprint recognition method according to claim 1, wherein the background server obtaining the target voiceprint feature corresponding to the user ID of the pending voice recognition request and obtaining the test voiceprint feature corresponding to the test speech of the pending voice recognition request comprises:
    querying a voiceprint feature library according to the user ID of the pending voice recognition request to obtain the target voiceprint feature corresponding to the user ID of the pending voice recognition request;
    processing the test speech of the pending voice recognition request with a Gaussian Mixture Model-Universal Background Model to obtain the test voiceprint feature corresponding to the test speech of the pending voice recognition request.
  3. The voiceprint recognition method according to claim 2, further comprising:
    performing MFCC feature extraction on training speech to obtain MFCC acoustic features;
    performing voice activity detection on the MFCC acoustic features and estimating Gaussian mixture model parameters;
    training a universal background model with the Gaussian mixture model parameters to obtain the Gaussian Mixture Model-Universal Background Model;
    receiving a voiceprint registration request, the voiceprint registration request comprising a user ID and target training speech;
    performing feature extraction on the target training speech with the Gaussian Mixture Model-Universal Background Model to obtain the target voiceprint feature;
    storing the user ID and the target voiceprint feature in the voiceprint feature library.
  4. The voiceprint recognition method according to claim 1, wherein determining, based on the target voiceprint feature and the test voiceprint feature, whether they correspond to the same user comprises:
    reducing the dimensionality of the target voiceprint feature and of the test voiceprint feature with a PLDA algorithm to obtain a target reduced-dimension value and a test reduced-dimension value;
    applying a cosine measure function to the target reduced-dimension value and the test reduced-dimension value to obtain a cosine measure value;
    determining whether the cosine measure value is greater than a similarity threshold; if so, the speakers are the same user; if not, the speakers are not the same user.
  5. The voiceprint recognition method according to claim 4, wherein the PLDA algorithm comprises:
    initializing μ and W;
    computing w using the formula [Figure PCTCN2017090046-appb-100001];
    recomputing W using the formula [Figure PCTCN2017090046-appb-100002], and returning to the step of computing w with [Figure PCTCN2017090046-appb-100003], until w falls below a specified threshold;
    where μ is the mean voiceprint vector, W is the between-class distance, w is the voiceprint feature, and i is the number of iterations;
    and the cosine measure function comprises:
    score(w_train, w_test) = (w_train^t · w_test) / (||w_train|| · ||w_test||)
    where w_train is the target voiceprint feature, w_test is the test voiceprint feature, and the superscript t denotes the transpose.
  6. A voiceprint recognition device, comprising:
    a client configured to collect a user's test speech and send a voice recognition request to a background server, the voice recognition request comprising a user ID and the test speech;
    the background server configured to receive the voice recognition request and determine pending voice recognition requests using a message queue and an asynchronous mechanism;
    the background server configured to obtain a target voiceprint feature corresponding to the user ID of the pending voice recognition request, and to obtain a test voiceprint feature corresponding to the test speech of the pending voice recognition request;
    the background server configured to determine, based on the target voiceprint feature and the test voiceprint feature, whether they correspond to the same user, and to output a determination result to the client;
    the client configured to receive and display the determination result.
  7. The voiceprint recognition device according to claim 6, wherein the background server comprises:
    a feature query unit configured to query a voiceprint feature library according to the user ID of the pending voice recognition request to obtain the target voiceprint feature corresponding to the user ID of the pending voice recognition request;
    a feature processing unit configured to process the test speech of the pending voice recognition request with a Gaussian Mixture Model-Universal Background Model to obtain the test voiceprint feature corresponding to the test speech of the pending voice recognition request.
  8. The voiceprint recognition device according to claim 7, wherein the background server further comprises:
    an acoustic feature extraction unit configured to perform MFCC feature extraction on training speech to obtain MFCC acoustic features;
    a voice activity detection unit configured to perform voice activity detection on the MFCC acoustic features and estimate Gaussian mixture model parameters;
    a model training unit configured to train a universal background model with the Gaussian mixture model parameters to obtain the Gaussian Mixture Model-Universal Background Model;
    a registration speech receiving unit configured to receive a voiceprint registration request, the voiceprint registration request comprising a user ID and target training speech;
    a target voiceprint feature obtaining unit configured to perform feature extraction on the target training speech with the Gaussian Mixture Model-Universal Background Model to obtain the target voiceprint feature;
    a target voiceprint feature storage unit configured to store the user ID and the target voiceprint feature in the voiceprint feature library.
  9. The voiceprint recognition device according to claim 6, wherein the background server comprises:
    a feature dimensionality-reduction unit configured to reduce the dimensionality of the target voiceprint feature and of the test voiceprint feature with a PLDA algorithm to obtain a target reduced-dimension value and a test reduced-dimension value;
    a cosine measure processing unit configured to apply a cosine measure function to the target reduced-dimension value and the test reduced-dimension value to obtain a cosine measure value;
    a user identification determining unit configured to determine whether the cosine measure value is greater than a similarity threshold; if so, the speakers are the same user; if not, the speakers are not the same user.
  10. The voiceprint recognition device according to claim 9, wherein the PLDA algorithm comprises:
    initializing μ and W;
    computing w using the formula [Figure PCTCN2017090046-appb-100005];
    recomputing W using the formula [Figure PCTCN2017090046-appb-100006], and returning to the step of computing w with [Figure PCTCN2017090046-appb-100007], until w falls below a specified threshold;
    where μ is the mean voiceprint vector, W is the between-class distance, w is the voiceprint feature, and i is the number of iterations;
    and the cosine measure function comprises:
    score(w_train, w_test) = (w_train^t · w_test) / (||w_train|| · ||w_test||)
    where w_train is the target voiceprint feature, w_test is the test voiceprint feature, and the superscript t denotes the transpose.
  11. A background server comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the following steps:
    a client collecting a user's test speech and sending a voice recognition request to the background server, the voice recognition request comprising a user ID and the test speech;
    the background server receiving the voice recognition request and determining pending voice recognition requests using a message queue and an asynchronous mechanism;
    the background server obtaining a target voiceprint feature corresponding to the user ID of the pending voice recognition request, and obtaining a test voiceprint feature corresponding to the test speech of the pending voice recognition request;
    the background server determining, based on the target voiceprint feature and the test voiceprint feature, whether they correspond to the same user, and outputting a determination result to the client;
    the client receiving and displaying the determination result.
  12. The background server according to claim 11, wherein the background server obtaining the target voiceprint feature corresponding to the user ID of the pending voice recognition request and obtaining the test voiceprint feature corresponding to the test speech of the pending voice recognition request comprises:
    querying a voiceprint feature library according to the user ID of the pending voice recognition request to obtain the target voiceprint feature corresponding to the user ID of the pending voice recognition request;
    processing the test speech of the pending voice recognition request with a Gaussian Mixture Model-Universal Background Model to obtain the test voiceprint feature corresponding to the test speech of the pending voice recognition request.
  13. The background server according to claim 12, further comprising:
    performing MFCC feature extraction on training speech to obtain MFCC acoustic features;
    performing voice activity detection on the MFCC acoustic features and estimating Gaussian mixture model parameters;
    training a universal background model with the Gaussian mixture model parameters to obtain the Gaussian Mixture Model-Universal Background Model;
    receiving a voiceprint registration request, the voiceprint registration request comprising a user ID and target training speech;
    performing feature extraction on the target training speech with the Gaussian Mixture Model-Universal Background Model to obtain the target voiceprint feature;
    storing the user ID and the target voiceprint feature in the voiceprint feature library.
  14. The background server according to claim 11, wherein determining, based on the target voiceprint feature and the test voiceprint feature, whether they correspond to the same user comprises:
    reducing the dimensionality of the target voiceprint feature and of the test voiceprint feature with a PLDA algorithm to obtain a target reduced-dimension value and a test reduced-dimension value;
    applying a cosine measure function to the target reduced-dimension value and the test reduced-dimension value to obtain a cosine measure value;
    determining whether the cosine measure value is greater than a similarity threshold; if so, the speakers are the same user; if not, the speakers are not the same user.
  15. The background server according to claim 14, wherein the PLDA algorithm comprises:
    initializing μ and W;
    computing w using the formula [Figure PCTCN2017090046-appb-100009];
    recomputing W using the formula [Figure PCTCN2017090046-appb-100010], and returning to the step of computing w, until w falls below a specified threshold;
    where μ is the mean voiceprint vector, W is the between-class distance, w is the voiceprint feature, and i is the number of iterations;
    and the cosine measure function comprises:
    score(w_train, w_test) = (w_train^t · w_test) / (||w_train|| · ||w_test||)
    where w_train is the target voiceprint feature, w_test is the test voiceprint feature, and the superscript t denotes the transpose.
  16. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the following steps:
    a client collecting a user's test speech and sending a voice recognition request to a background server, the voice recognition request comprising a user ID and the test speech;
    the background server receiving the voice recognition request and determining pending voice recognition requests using a message queue and an asynchronous mechanism;
    the background server obtaining a target voiceprint feature corresponding to the user ID of the pending voice recognition request, and obtaining a test voiceprint feature corresponding to the test speech of the pending voice recognition request;
    the background server determining, based on the target voiceprint feature and the test voiceprint feature, whether they correspond to the same user, and outputting a determination result to the client;
    the client receiving and displaying the determination result.
  17. The computer-readable storage medium according to claim 16, wherein the background server obtaining the target voiceprint feature corresponding to the user ID of the pending voice recognition request and obtaining the test voiceprint feature corresponding to the test speech of the pending voice recognition request comprises:
    querying a voiceprint feature library according to the user ID of the pending voice recognition request to obtain the target voiceprint feature corresponding to the user ID of the pending voice recognition request;
    processing the test speech of the pending voice recognition request with a Gaussian Mixture Model-Universal Background Model to obtain the test voiceprint feature corresponding to the test speech of the pending voice recognition request.
  18. The computer-readable storage medium according to claim 17, further comprising:
    performing MFCC feature extraction on training speech to obtain MFCC acoustic features;
    performing voice activity detection on the MFCC acoustic features and estimating Gaussian mixture model parameters;
    training a universal background model with the Gaussian mixture model parameters to obtain the Gaussian Mixture Model-Universal Background Model;
    receiving a voiceprint registration request, the voiceprint registration request comprising a user ID and target training speech;
    performing feature extraction on the target training speech with the Gaussian Mixture Model-Universal Background Model to obtain the target voiceprint feature;
    storing the user ID and the target voiceprint feature in the voiceprint feature library.
  19. The computer-readable storage medium according to claim 16, wherein determining, based on the target voiceprint feature and the test voiceprint feature, whether they correspond to the same user comprises:
    reducing the dimensionality of the target voiceprint feature and of the test voiceprint feature with a PLDA algorithm to obtain a target reduced-dimension value and a test reduced-dimension value;
    applying a cosine measure function to the target reduced-dimension value and the test reduced-dimension value to obtain a cosine measure value;
    determining whether the cosine measure value is greater than a similarity threshold; if so, the speakers are the same user; if not, the speakers are not the same user.
  20. The computer-readable storage medium according to claim 19, wherein the PLDA algorithm comprises:
    initializing μ and W;
    computing w using the formula [Figure PCTCN2017090046-appb-100013];
    recomputing W using the formula [Figure PCTCN2017090046-appb-100014], and returning to the step of computing w with [Figure PCTCN2017090046-appb-100015], until w falls below a specified threshold;
    where μ is the mean voiceprint vector, W is the between-class distance, w is the voiceprint feature, and i is the number of iterations;
    and the cosine measure function comprises:
    score(w_train, w_test) = (w_train^t · w_test) / (||w_train|| · ||w_test||)
    where w_train is the target voiceprint feature, w_test is the test voiceprint feature, and the superscript t denotes the transpose.
PCT/CN2017/090046 2017-02-16 2017-06-26 Voiceprint recognition method, device, storage medium and background server WO2018149077A1 (zh)

Priority Applications (6)

Application Number Priority Date Filing Date Title
SG11201803895RA SG11201803895RA (en) 2017-02-16 2017-06-26 Voiceprint recognition method, device, storage medium and background server
JP2018514332A JP6649474B2 (ja) 2017-02-16 2017-06-26 Voiceprint identification method, device and background server
AU2017341161A AU2017341161A1 (en) 2017-02-16 2017-06-26 Voiceprint recognition method, device, storage medium and background server
EP17857669.0A EP3584790A4 (en) 2017-02-16 2017-06-26 VOICEPRINT RECOGNITION METHOD, DEVICE, STORAGE MEDIUM AND BACKGROUND SERVER
US15/772,801 US10629209B2 (en) 2017-02-16 2017-06-26 Voiceprint recognition method, device, storage medium and background server
KR1020187015547A KR20180104595A (ko) 2017-02-16 2017-06-26 Voiceprint identification method, device, storage medium and backstage server

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710083629.0 2017-02-16
CN201710083629.0A CN106847292B (zh) 2017-02-16 2017-02-16 Voiceprint recognition method and device

Publications (1)

Publication Number Publication Date
WO2018149077A1 true WO2018149077A1 (zh) 2018-08-23

Family

ID=59128377

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/090046 WO2018149077A1 (zh) 2017-02-16 2017-06-26 Voiceprint recognition method, device, storage medium and background server

Country Status (8)

Country Link
US (1) US10629209B2 (zh)
EP (1) EP3584790A4 (zh)
JP (1) JP6649474B2 (zh)
KR (1) KR20180104595A (zh)
CN (1) CN106847292B (zh)
AU (2) AU2017341161A1 (zh)
SG (1) SG11201803895RA (zh)
WO (1) WO2018149077A1 (zh)

Also Published As

Publication number Publication date
CN106847292B (zh) 2018-06-19
EP3584790A4 (en) 2021-01-13
US20190272829A1 (en) 2019-09-05
CN106847292A (zh) 2017-06-13
EP3584790A1 (en) 2019-12-25
AU2017101877A4 (en) 2020-04-23
AU2017341161A1 (en) 2018-08-30
US10629209B2 (en) 2020-04-21
SG11201803895RA (en) 2018-09-27
KR20180104595A (ko) 2018-09-21
JP2019510248A (ja) 2019-04-11
JP6649474B2 (ja) 2020-02-19

Legal Events

ENP (Entry into the national phase): ref document 2018514332, country JP, kind code A
WWE (WIPO information, entry into national phase): ref document 11201803895R, country SG
ENP (Entry into the national phase): ref document 20187015547, country KR, kind code A
ENP (Entry into the national phase): ref document 2017341161, country AU, date of ref document 2017-06-26, kind code A
121 (EPO informed by WIPO that EP was designated in this application): ref document 17857669, country EP, kind code A1
NENP (Non-entry into the national phase): ref country code DE