US20180018973A1 - Speaker verification - Google Patents

Speaker verification

Info

Publication number
US20180018973A1
Authority
US
United States
Prior art keywords
user
vector
language
utterance
verification model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/211,317
Inventor
Ignacio Lopez Moreno
Li Wan
Quan Wang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google LLC
Original Assignee
Google LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Google LLC filed Critical Google LLC
Priority to US15/211,317 priority Critical patent/US20180018973A1/en
Assigned to GOOGLE INC. reassignment GOOGLE INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: WAN, LI, MORENO, IGNACIO LOPEZ, WANG, QUAN
Priority to RU2018112272A priority patent/RU2697736C1/en
Priority to KR1020187009479A priority patent/KR102109874B1/en
Priority to PCT/US2017/040906 priority patent/WO2018013401A1/en
Priority to JP2019500442A priority patent/JP6561219B1/en
Priority to EP18165912.9A priority patent/EP3373294B1/en
Priority to EP17740860.6A priority patent/EP3345181B1/en
Priority to CN201780003481.3A priority patent/CN108140386B/en
Assigned to GOOGLE LLC reassignment GOOGLE LLC CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: GOOGLE INC.
Publication of US20180018973A1 publication Critical patent/US20180018973A1/en
Priority to US15/995,480 priority patent/US10403291B2/en
Priority to US16/557,390 priority patent/US11017784B2/en
Priority to US17/307,704 priority patent/US11594230B2/en

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00 - Speaker identification or verification
    • G10L17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
    • G10L17/06 - Decision making techniques; Pattern matching strategies
    • G10L17/08 - Use of distortion metrics or a particular distance between probe pattern and reference templates
    • G10L17/14 - Use of phonemic categorisation or speech recognition prior to speaker recognition or verification
    • G10L17/18 - Artificial neural networks; Connectionist approaches
    • G10L17/22 - Interactive procedures; Man-machine interfaces
    • G10L17/24 - Interactive procedures; Man-machine interfaces; the user being prompted to utter a password or a predefined phrase

Definitions

  • This specification generally relates to speaker verification.
  • Voice authentication provides an easy way for a user of a user device to gain access to a user device. Voice authentication allows a user to unlock, and access, the user's device without remembering or typing in a passcode. However, the existence of multiple different languages, dialects, accents, and the like presents certain challenges in the field of voice authentication.
  • a speaker verification model is used that facilitates speaker verification regardless of the speaker's language, dialect, or accent.
  • the speaker verification model may be based on a neural network.
  • the neural network may be trained using inputs that include an utterance and a language identifier. Once trained, activations output by a hidden layer of the neural network can be used as a voiceprint, which can be compared to a reference representation on the user's device. A speaker can be authenticated if the voiceprint and the reference representation satisfy a predetermined similarity threshold.
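  • For illustration, the following is a minimal sketch of this verification check in Python (PyTorch), assuming a hypothetical `model` whose forward pass returns the activations of a hidden layer; the threshold value and all names are illustrative assumptions, not part of the specification.

```python
import torch
import torch.nn.functional as F

def verify_speaker(model: torch.nn.Module,
                   input_vector: torch.Tensor,   # derived from the utterance and language identifier
                   reference: torch.Tensor,      # reference representation stored on the device
                   threshold: float = 0.8) -> bool:
    """Compare a hidden-layer voiceprint against a stored reference representation."""
    with torch.no_grad():
        voiceprint = model(input_vector)          # hidden-layer activations used as the voiceprint
    similarity = F.cosine_similarity(voiceprint, reference, dim=-1)
    return bool(similarity.item() >= threshold)   # authenticate if the similarity threshold is satisfied
```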
  • the subject matter of this specification may be embodied in a method to facilitate language-independent speaker verification.
  • the method may include the actions of: receiving, by a user device, audio data representing an utterance of a user; determining a language identifier associated with the user device; providing, to a neural network stored on the user device, a set of input data derived from the audio data and the determined language identifier, the neural network having parameters trained using speech data representing speech in different languages and different dialects; generating, based on output of the neural network produced in response to receiving the set of input data, a speaker representation indicative of characteristics of the voice of the user; determining, based on the speaker representation and a second representation, that the utterance is an utterance of the user; and providing the user access to the user device based on determining that the utterance is an utterance of the user.
  • the set of input data derived from the audio data and the determined language identifier includes a first vector that is derived from the audio data and a second vector that is derived from the determined language identifier.
  • the method may include generating an input vector by concatenating the first vector and the second vector into a single concatenated vector, providing, to the neural network, the generated input vector, and generating, based on output of the neural network produced in response to receiving the input vector, a speaker representation indicative of characteristics of the voice of the user.
  • the method may include generating an input vector by concatenating the outputs of at least two other neural networks that respectively generate outputs based on (i) the first vector, (ii) the second vector, or (iii) both the first vector and the second vector, providing, to the neural network, the generated input vector, and generating, based on output of the neural network produced in response to receiving the input vector, a speaker representation indicative of characteristics of the voice of the user.
  • the method may include generating an input vector based on the first vector and a weighted sum of the second vector, providing, to the neural network, the generated input vector, and generating, based on output of the neural network produced in response to receiving the input vector, a speaker representation indicative of characteristics of the voice of the user.
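  • As a sketch of the three input constructions described above, the following Python (PyTorch) fragment builds an input vector by (1) concatenating the two vectors, (2) concatenating the outputs of two other networks, and (3) combining the first vector with a weighted, projected version of the second vector. The dimensions, subnetworks, projection, and weight value are illustrative assumptions.

```python
import torch
import torch.nn as nn

N, L = 40, 4                                    # illustrative dimensions
audio_vec = torch.randn(N)                      # first vector, derived from the audio data
lang_vec = torch.zeros(L)                       # second vector, derived from the language identifier
lang_vec[0] = 1.0

# (1) Concatenate the first and second vectors into a single input vector.
input_v1 = torch.cat([audio_vec, lang_vec])

# (2) Concatenate the outputs of at least two other neural networks that
#     respectively process the first vector and the second vector.
audio_subnet = nn.Linear(N, 32)
lang_subnet = nn.Linear(L, 8)
input_v2 = torch.cat([audio_subnet(audio_vec), lang_subnet(lang_vec)])

# (3) Combine the first vector with a weighted version of the second vector;
#     a learned projection to a common dimension is assumed here.
lang_proj = nn.Linear(L, N, bias=False)
weight = 0.5
input_v3 = audio_vec + weight * lang_proj(lang_vec)
```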
  • the output of the neural network produced in response to receiving the set of input data includes a set of activations generated by a hidden layer of the neural network.
  • determining, based on the speaker representation and a second representation, that the utterance is an utterance of the user may include determining a distance between the first representation and the second representation.
  • providing the user access to the user device based on determining that the utterance is an utterance of the user may include unlocking the user device.
  • implementations of the subject matter described by this specification include a method for language-independent speaker verification that includes receiving, by a mobile device that implements a language-independent speaker verification model configured to determine whether received audio data likely includes an utterance of one of multiple language-specific hotwords, (i) particular audio data corresponding to a particular utterance of a user, and (ii) data indicating a particular language spoken by the user, and in response to receiving (i) particular audio data corresponding to a particular utterance of a user, and (ii) data indicating a particular language spoken by the user, providing, for output, an indication that the language-independent speaker verification model has determined that the particular audio data likely includes the utterance of a hotword designated for the particular language spoken by the user.
  • providing, for output, the indication may include providing access to a resource of the mobile device.
  • providing, for output, the indication may include unlocking the mobile device.
  • providing, for output, the indication may include waking up the mobile device from a low-power state.
  • providing, for output, the indication comprises providing an indication that the language-independent speaker verification model has determined that the particular audio data includes the utterance of a particular user associated with the mobile device.
  • the language-independent speaker verification model may include a neural network trained without using utterances of the user.
  • the subject matter of this specification provides multiple advantages over conventional methods.
  • the subject matter of the present application provides a speaker verification model that can be easily distributed. Since the speaker verification model is language, dialect, and accent independent, the same speaker verification model can be widely distributed to user devices. This is far more efficient than providing different speaker verification models to different devices based on the language of the device user. It also avoids the need to deploy multiple speaker verification models to the same device and have the user select among them.
  • the speaker verification model provided by the present application demonstrates improved accuracy when using the same model to perform speaker verification independent of speaker language, dialect, or accent. For instance, variations in language, dialect, or accent can result in a particular user pronouncing a predetermined hotword in a different way than other users. This pronunciation difference can cause accuracy problems in conventional systems.
  • the speaker verification model of the present disclosure improves upon this weakness of conventional systems.
  • the speaker verification model provided by the present application also provides ease of updating. For instance, a newly trained model can easily be deployed as part of a routine software update to a user device's operating system. Such updated speaker verification models may be easily trained to account for new languages, dialects, and/or accents as they arise. Alternatively, updates may be created to an existing version of the speaker verification model based on known languages, dialects, and/or accents. Such updated speaker verification models can be universally deployed, without the need to provide particular speaker verification models to specific devices in specific geographic regions.
  • FIG. 1 shows a contextual diagram of an example of a system for using a language-independent speaker verification model to perform speaker verification.
  • FIG. 2 is an example of a system for training a language-independent speaker verification model.
  • FIG. 3 is an example of a conceptual representation of a plurality of respective language identification vectors.
  • FIG. 4 is an example of a system that performs language-independent speaker verification.
  • FIG. 5 is a flowchart of a process for performing language-independent speaker verification.
  • a system provides a language-independent speaker verification model, which can be a model based on a neural network, to a user device.
  • the language-independent speaker verification model is trained, prior to installation on the user device, based on training data that includes (i) utterances from multiple different users and (ii) vectors indicating languages or locations corresponding to the respective utterances.
  • the language-independent speaker verification model may be used to verify the identity of a user of the user device without subsequent training of the language-independent speaker verification model. While the user device may obtain and use utterances of the user to enroll the user, the model itself does not need to be trained based on any utterances of the user of the user device.
  • a “language-independent” speaker verification model refers to a single model that can be used to accurately verify the identities of speakers that speak different languages or dialects. That is, the model is not dependent on or limited to speech being in a specific single language. As a result, rather than using different models for different languages, dialects, or accents, a single language-independent model can be used.
  • the model may be a text-dependent model, trained to identify a speaker based on utterance of a specific word or phrase, e.g., a predetermined hotword or attention word.
  • a language-independent model may be trained to distinguish speakers of different languages based on a single hotword, or based on different hotwords for different languages or locations.
  • the present application obtains information about the language or location of a user and provides the information to the model, allowing the model to create speaker representations, e.g., voiceprints, that better distinguish a user from other users having the same language, dialect, accent, or location.
  • FIG. 1 shows a contextual diagram of an example of a system 100 for using a language-independent speaker verification model to perform identity verification.
  • the system 100 includes a user device 110 , a user device 120 , a network 130 , a server 140 , a neural network 150 , and a speaker verification model 180 .
  • the system 100 includes a server 140 that stores a neural network 150 .
  • the neural network 150 has been trained using speech data representing speech samples in different languages, different dialects, or both.
  • the server 140 generates a speaker verification model 180 based on the neural network 150 .
  • the server 140 transmits a copy of the speaker verification model 180 through a network 130 to a first user device 110 and to a second user device 120 .
  • a copy of the speaker verification model 180 is then stored on each respective user device 110 , 120 .
  • a user may attempt to gain access to the user device 110 using voice authentication. For instance, Joe may utter a predetermined hotword 105 a, or phrase, such as “Ok Google” in English.
  • the audio 105 b corresponding to the predetermined utterance may be detected by a microphone 111 of the user device 110 .
  • the user device 110 may generate a first input to the stored speaker verification model 180 that is derived from the audio 105 b detected by the microphone 111 .
  • the user device 110 may derive a second input to the stored speaker verification model 180 based on the determination that Joe uttered the hotword 105 a, or phrase, in the English language.
  • the user device 110 may determine that Joe uttered the hotword 105 a, or phrase, in the English language by obtaining a language setting of the device.
  • the speaker verification model 180 stored on Joe's user device 110 may then generate, based on processing the first input derived from the audio 105 b and the second input derived from Joe's use of the English language, a voiceprint for Joe. Based on an analysis of the generated voiceprint, the user device 110 may determine that Joe is authorized to access the device 110 .
  • the user device 110 can initiate processing that unlocks user device 110 .
  • the user device 110 may display a message on the graphical user interface 112 that recites, for example, “Speaker Identity Verified” 113 .
  • a speaker of the user device 110 may output an audio greeting 115 that recites “Welcome Joe.”
  • another user, e.g., "Wang," has a user device 120 that also stores a copy of the same speaker verification model 180 .
  • Wang, a fluent speaker of the Chinese language, may attempt to gain access to the user device 120 using voice authentication. For instance, Wang may utter a predetermined hotword 115 a, or phrase, such as "Nǐhǎo Android" in Chinese (roughly translated as "Hello Android" in English).
  • the audio 115 b corresponding to the predetermined utterance may be detected by a microphone 121 of the user device 120 .
  • the user device 120 may derive a second input to the stored speaker verification model 180 based on the determination that Wang uttered the hotword 115 a, or phrase, in the Chinese language.
  • the user device 120 may determine that Wang uttered the hotword 115 a, or phrase, in the Chinese language by obtaining a language setting of the device.
  • the speaker verification model 180 stored on Wang's user device 120 may then generate, based on processing the first input derived from the audio 115 b and the second input derived from Wang's use of the Chinese language, a voiceprint for Wang. Based on an analysis of the generated voiceprint, the user device 120 may determine that Wang is authorized to access the device 120 .
  • the user device 120 can initiate processing that unlocks user device 120 .
  • the user device 120 may display a message on the graphical user interface 122 that recites, for example, "Shuōhuàzhě de shēnfèn yànzhèng" 123 (roughly translated as "Speaker Identity Verified" in English).
  • a speaker of the user device 120 may output an audio greeting 125 that recites "Huānyíng Wang" (roughly translated as "Welcome Wang" in English).
  • a single text-dependent speaker recognition model 180 can be configured to use different predetermined hotwords for different languages or locations.
  • the model 180 can use the same hotword for multiple languages or locations, but the model 180 can generate speaker representations with respect to different variations of the hotword's pronunciation, e.g., due to different languages or regional accents.
  • the model 180 can fine-tune the verification process by inputting an identifier for a language or location to a neural network of the model 180 along with audio information.
  • FIG. 2 is an example of a system 200 for training a language-independent speaker verification model 280 .
  • the system 200 includes a user device 210 , a network 230 , a server 240 , and a neural network 250 .
  • the training of the language-independent speaker verification model 280 occurs via processing performed on server 240 , before the model 280 is distributed to the user device 210 and used to perform speaker recognition. Such training does not require user device 210 to be connected to network 230 .
  • server 240 obtains a set of training utterances 210 a and 210 b.
  • the training utterances may include one or more speech samples that were each respectively uttered by multiple different training speakers, recorded, and stored in a training utterances repository made available to server 240 .
  • Each training utterance 210 a, 210 b may include at least a portion of the audio signal that results when a user utters the training utterance.
  • the neural network 250 may be trained using training utterances that correspond to a predetermined hotword that can be uttered by a user of user device 210 during voice authentication.
  • the training utterances may include utterances from multiple different users who each utter the same hotword in a different language, different dialect, different accent, or the like.
  • multiple sets of training data may be used to train the neural network 250 with each training data set corresponding to a particular keyword utterance in a particular language, dialect, accent, or the like.
  • a single neural network 250 may be trained with a set of training utterances from multiple different users uttering "Ok Google" in U.S. English.
  • the single neural network 250 may similarly be trained with other training data sets that include the hotword “Ok Google” being uttered in different languages, different dialects, different accents, or the like until the neural network 250 has been trained for all known languages, dialects, accents, or the like.
  • the single neural network 250 may be similarly trained with other training data sets that include the hotword “Ok Google” being uttered in different languages, different dialects, different accents, or the like until the neural network 250 has been trained for all languages, dialects, accents or the like in the regions where a speaker verification model based on the neural network 250 will be deployed.
  • a hotword can be a single word or a phrase that includes multiple words.
  • the hotword for each language is fixed during training of the model, so that each user using the model in a particular location uses the same hotword.
  • the audio signals corresponding to the uttered training phrases may be captured and recorded.
  • though the training utterances corresponding to a predetermined hotword provided here include "Ok Google" and "Nǐhǎo Android," the present disclosure need not be so limited. Instead, training utterances corresponding to any predetermined hotword, in any language or any dialect, can be used to train the neural network 250 .
  • the neural network 250 can be easily trained to accommodate all known languages, dialects, accents, or the like.
  • a training speaker may be requested to utter, and record, the same training phrase multiple times in order to generate multiple different training utterances for the same training word or phrase.
  • Training utterances may be obtained, in this manner, using multiple different speakers uttering the training word or phrase in multiple different languages, multiple different dialects, or the like.
  • the system 200 may derive 212 a, 212 b a respective feature vector for each training utterance that corresponds to the acoustic features of the related training utterance.
  • the respective feature vector for each training utterance may include, for example, an N-by-1 vector that is derived from the training utterance and corresponds to acoustic features of the utterance.
  • An N-by-1 vector may be conceptually modeled using a single column of N values.
  • each of the N values in the N-by-1 vector may include a value of either “0” or “1”.
  • the system 200 may also obtain multiple different language IDs 215 a, 215 b.
  • Language IDs may include data that identifies a particular language.
  • the language ID may include a one-hot language vector.
  • Such one-hot language vectors may include an N-by-1 vector where only one feature of the language vector is activated.
  • a particular feature of a language vector may be activated by, for example, setting the feature to a value of “1.”
  • all other features of the one-hot language vector will be deactivated.
  • a feature of a language vector may be deactivated by, for example, setting the feature to “0.”
  • FIG. 3 is an example of a conceptual representation of a plurality of one-hot language vectors 305 , 310 , 315 , 320 .
  • System 200 may associate each one-hot language vector 305, 310, 315, 320 with a particular language. For instance, system 200 may determine that a one-hot language vector with the first feature of the vector activated, as is the case for language identification vector 305, may be associated with the "English" language.
  • system 200 may determine that a one-hot language vector with the second feature of the vector activated, as is the case for language identification vector 310, may be associated with the "Chinese" language. Similar language associations may be made between the language identification vectors 315 and 320 and other languages.
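  • For illustration, a one-hot language vector of the kind shown in FIG. 3 can be produced with a few lines of Python (PyTorch); the particular language codes and their ordering are illustrative assumptions.

```python
import torch

LANGUAGES = ["en-US", "zh-CN", "ko-KR", "pt-BR"]    # hypothetical ordering of supported languages

def one_hot_language_vector(language_id: str) -> torch.Tensor:
    """Return a language vector with exactly one activated ('1') feature."""
    vec = torch.zeros(len(LANGUAGES))
    vec[LANGUAGES.index(language_id)] = 1.0
    return vec

# A vector associated with "English" has its first feature activated,
# as with language identification vector 305:
# one_hot_language_vector("en-US") -> tensor([1., 0., 0., 0.])
```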
  • Training of the speaker verification model 280 may begin by providing sets of training data to the neural network 250 .
  • neural network 250 may be trained using a pair-wise training technique. For instance, a first set of training data 213 a is input into the neural network 250 that includes a training utterance vector 214 a and a second input that includes a language ID 215 a.
  • the language ID 215 a may include, for example, a one-hot language vector that identifies the language or dialect used by the training speaker that provided the training utterance 210 a from which the training utterance vector 214 a was derived.
  • the neural network 250 processes the first set of training data 213 a and generates an output 260 a.
  • a second set of training data 213 b is input into the neural network 250 .
  • the neural network 250 processes the second set of training data 213 b and generates an output 260 b.
  • the outputs 260 a, 260 b are then compared using a comparator 270 .
  • the comparator 270 analyzes the outputs 260 a, 260 b to determine whether the training vectors 214 a, 214 b were derived from training utterances 210 a, 210 b that were uttered by the same speaker.
  • the comparator 270 may determine whether the training vectors 214 a, 214 b were derived from training utterances 210 a, 210 b that were uttered by the same speaker by calculating the distance between the outputs 260 a, 260 b. Such a distance may be calculated, for example, using cosine similarity.
  • the output 272 of the comparison module provides an indication of whether the training utterances 210 a, 210 b were uttered by the same speaker.
  • the output 272 may be a binary value, either a ‘0’ or a ‘1’.
  • a ‘0’ may indicate that the utterances were not from the same speaker.
  • a ‘1’ may indicate that the utterances were from the same speaker.
  • the output 272 may be a value that can be mapped to a binary value such as a ‘0’ or a ‘1.’
  • the output 272 may include a probability that is indicative of whether the training utterances 210 a, 210 b were uttered by the same speaker.
  • the parameters of the neural network 250 may then be adjusted based on the output 272 of the comparison module 270 .
  • the parameters of the neural network 250 may be adjusted automatically based on output 272 .
  • one or more parameters of the neural network may be adjusted manually based on the output 272 .
  • Multiple sets of training data may be processed in this manner until a comparison of the two outputs 260 a, 260 b consistently indicates whether a pair of training vectors such as 214 a, 214 b were derived from utterances 210 a, 210 b that were uttered by the same speaker.
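  • The following Python (PyTorch) fragment sketches this pair-wise training loop under stated assumptions: a hypothetical network `net` maps each set of training data to an output vector, a data loader yields labeled pairs, and the loss (binary cross-entropy over a cosine-similarity score) is one reasonable choice rather than an objective required by the specification.

```python
import torch
import torch.nn.functional as F

def train_pairwise(net, optimizer, pair_loader, epochs=10):
    for _ in range(epochs):
        for x_a, x_b, same_speaker in pair_loader:     # training inputs 213a, 213b and pair label
            out_a = net(x_a)                           # output 260a
            out_b = net(x_b)                           # output 260b
            # Comparator 270: cosine similarity between the outputs, mapped to a
            # probability-like score indicating whether the speakers match (cf. output 272).
            score = torch.sigmoid(F.cosine_similarity(out_a, out_b, dim=-1))
            loss = F.binary_cross_entropy(score, same_speaker.float())
            optimizer.zero_grad()
            loss.backward()                            # gradients used to adjust the network parameters
            optimizer.step()
```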
  • the neural network 250 may include an input layer 252 for inputting sets of training data, multiple hidden layers 254 a, 254 b, 254 c for processing the sets of training data, and an output layer 256 for providing output.
  • Each hidden layer 254 a, 254 b, 254 c may include one or more weights or other parameters. The weights or other parameters of each respective hidden layer 254 a, 254 b, 254 c may be adjusted so that the trained neural network produces the desired target vector corresponding to each set of training data.
  • each hidden layer 254 a, 254 b, 254 c may generate an M-by-1 activation vector.
  • the output of the last hidden layer, such as 254 c, may be provided to the output layer 256, which performs additional computations on the received activation vector in order to generate a neural network output.
  • the neural network 250 may be designated as a trained neural network. For example, the neural network 250 may be trained until the network 250 can distinguish between speech of different speakers, and identify matches between speech of the same speaker, with less than a maximum error rate.
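  • A network with the layer structure described above (input layer 252, hidden layers 254 a-254 c, output layer 256) might be sketched as follows in Python (PyTorch); the layer sizes and ReLU activations are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SpeakerVerificationNet(nn.Module):
    def __init__(self, input_dim=44, hidden_dim=256, output_dim=128):
        super().__init__()
        self.hidden = nn.Sequential(                    # hidden layers 254a, 254b, 254c
            nn.Linear(input_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.output_layer = nn.Linear(hidden_dim, output_dim)   # output layer 256

    def forward(self, x):
        activations = self.hidden(x)                    # M-by-1 activation vector of layer 254c
        return self.output_layer(activations), activations
```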
  • a set of training data such as 213 a that includes a training utterance vector 214 a and a language ID 215 a may be pre-processed before being provided as a training input to a neural network 250 in a variety of different ways.
  • the training utterance vector 214 a and the language ID 215 a such as one-hot language vector may be concatenated.
  • the concatenated vector may be provided as the input to the neural network 250 during training.
  • the system 200 may generate the input to the neural network 250 by concatenating the outputs of at least two other neural networks that have respectively generated outputs based on each respective neural network's processing of the training utterance vector 214 a, the one-hot language vector, or both the training utterance vector 214 a and the one-hot language vector. In such instances, the concatenated output of the two or more other neural networks may be used to train the neural network 250 .
  • the system 200 may generate an input vector based on the training utterance vector 214 a and a weighted sum of the one-hot language vector. Other methods of generating a set of training data based on the training utterance vector 214 a and a one-hot language vector can be used.
  • a portion 258 of the neural network 250 may be obtained once the neural network 250 is designated as trained, and used to generate a speaker verification model 280 .
  • the obtained portion 258 of the neural network 250 may include the input layer 252 of the neural network 250 and one or more hidden layers of the neural network 254 a. In some implementations, however, the obtained portion of the neural network 250 does not include the output layer 256 .
  • the neural network 250 is capable of producing an activation vector as the output of the last hidden layer of the obtained portion 258 that can be used as a voiceprint for a speaker.
  • the voiceprint may be used by a user device to verify the identity of a person who provides an utterance of a hotword to the user device.
  • the server 240 transmits a copy of the speaker verification model 280 through a network 230 to one or more respective user devices such as user device 210 .
  • a copy of the speaker verification model 280 is then stored on each respective user device 210 , and can be used to facilitate language-independent speaker identity verification.
  • the speaker verification model 280 may be pre-installed on the user device 210 , e.g., with an operating system of the user device 210 .
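  • As a sketch of obtaining portion 258 described above and packaging it for distribution, the fragment below keeps the input and hidden layers of a trained network and drops the output layer; the layer sizes and the use of `nn.Sequential` and `torch.save` are illustrative assumptions about how such a model might be packaged.

```python
import torch
import torch.nn as nn

# A trained network 250 with three hidden layers followed by an output layer (sizes illustrative).
trained_net = nn.Sequential(
    nn.Linear(44, 256), nn.ReLU(),     # input layer 252 feeding hidden layer 254a
    nn.Linear(256, 256), nn.ReLU(),    # hidden layer 254b
    nn.Linear(256, 256), nn.ReLU(),    # hidden layer 254c
    nn.Linear(256, 128),               # output layer 256, not included in portion 258
)

# Portion 258: everything except the output layer; the last hidden layer's
# activations serve as the voiceprint on the user device.
verification_model = nn.Sequential(*list(trained_net.children())[:-1])
torch.save(verification_model.state_dict(), "speaker_verification_model.pt")
```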
  • FIG. 4 is an example of a system 400 that performs language-independent speaker identity verification.
  • the system 400 includes a user device 210 , a speaker verification model 280 , a comparison module 440 , and a verification module 450 .
  • a user 402 attempts to access a user device 210 using voice verification.
  • the user device 210 that has previously received, and stored, a speaker verification model 280 provided by the server 240 via network 230 .
  • the user 402 utters a predetermined hotword 410 a, or phrase, such as “Ok Google.”
  • the audio 410 b corresponding to the predetermined hotword 410 a, or phrase, “Ok Google” is detected by a microphone 211 of the user device 210 .
  • the user device 210 may derive 413 an acoustic feature vector 414 from the audio 410 b that represents the acoustic features of the audio 410 b.
  • the system 400 may obtain a language ID 415 that is stored in a language ID storage area of the user device 210 .
  • a language ID may include data that identifies a particular language or dialect associated with the user.
  • the language ID may include a one-hot language vector.
  • the language ID 415 that is stored on any particular user device 210 may be set to a particular language ID from a set of multiple different language IDs corresponding to known languages and dialects in any number of different ways. For instance, a user may select a particular language or dialect when powering on, and configuring, the user device 210 for the first time after purchase of the user device 210 .
  • a corresponding language ID may be selected, and stored in the user device 210 , based on the particular language or dialect selected by the user.
  • a particular language ID may be selected, and stored in the user device 210 , based on the location of the device. For instance, a user device 210 may establish a default setting for the language ID based on the location where the device was first activated, current location of the device, or the like. Alternatively, or in addition, the user device 210 may dynamically detect a particular language or dialect associated with a user based on speech samples obtained from the user. The dynamic detection of the particular language or dialect associated with the user may be determined, for example, when the user utters the predetermined hotword, during speaker authentication. In such instances, a corresponding language ID may be selected, and stored on the user device 210 , based on the language or dialect detected from the user's speech samples.
  • the user may modify a language or dialect setting associated with the user device 210 in order to select a particular language or dialect at any time.
  • a corresponding language ID may be selected, and stored on the user device 210 , based on the user's modification of the user device 210 language or dialect settings.
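  • The following Python sketch shows one way a device might resolve and store its language ID, assuming a hypothetical priority order of explicit user setting, device locale, and a location-based default; the function and setting names are illustrative.

```python
from typing import Optional

SUPPORTED_LANGUAGE_IDS = ["en-US", "zh-CN", "ko-KR", "pt-BR"]   # hypothetical set of known language IDs

def resolve_language_id(user_setting: Optional[str],
                        device_locale: Optional[str],
                        location_default: str = "en-US") -> str:
    """Pick the first supported candidate: user choice, then device locale, then location default."""
    for candidate in (user_setting, device_locale, location_default):
        if candidate in SUPPORTED_LANGUAGE_IDS:
            return candidate
    return location_default

# Example: a user who selected Chinese when configuring the device.
language_id = resolve_language_id(user_setting="zh-CN", device_locale="en-US")
```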
  • the acoustic feature vector 414 and the language ID 415 may be provided as an input to the speech verification model 280 that is based on at least a portion of the trained neural network 250 .
  • the speech verification model 280 may include one or more layers of the trained neural network 250 such as, for example, the input layer 252 and one or more hidden layers 254 a, 254 b, 254 c. In one implementation, however, the speech verification model 280 does not utilize the output layer 256 of the neural network 250 .
  • the acoustic feature vector 414 and the language ID 415 can be provided as input to the speech verification model 280 in a variety of different ways.
  • the acoustic feature vector 414 and the language ID 415 such as one-hot language vector may be concatenated.
  • the concatenated vector may be provided as input to the speech verification model.
  • the system 400 may concatenate the outputs of at least two other neural networks that have respectively generated outputs based on each respective neural network's processing of the acoustic feature vector 414 , the language ID 415 such as a one-hot language vector, or both the acoustic feature vector 414 and the language ID 415 .
  • the concatenated output of the two or more other neural networks may be provided to the speech verification model 280 .
  • the system 400 may generate an input vector based on the acoustic feature vector 414 and a weighted sum of a one-hot language vector being used as a language ID 415 .
  • Other methods of generating input data to the speech verification model 280 based on the acoustic feature vector 414 and language ID 415 can be used.
  • the speech verification model's 280 processing of the provided input data based on the acoustic feature vector 414 and the language ID 415 may result in the generation of a set of activations at one or more hidden layers of the speech verification model's 280 neural network.
  • the speech verification model's 280 processing of the provided input can result in a set of activations being generated at a first hidden layer 254 a, a second hidden layer 254 b, a third hidden layer 254 c, or the like.
  • the system 400 may obtain the activations output by the final hidden layer 254 c of the speech verification model's 280 neural network.
  • the activations output by the final hidden layer 254 c may be used to generate a speaker vector 420 .
  • This speaker vector 420 provides a representation that is indicative of characteristics of the voice of the user.
  • This speaker vector may be referred to as a voiceprint.
  • the voiceprint can be used to uniquely verify the identity of a speaker based on the characteristics of the user's voice.
  • a comparison module 440 may be configured to receive the speaker vector 420 and a reference vector 430 .
  • the reference vector 430 may be a vector that has been derived from a previous user utterance captured by the device, e.g., an utterance provided during enrollment of the user with the device. For instance, at some point in time prior to the user's 402 use of system 400 to unlock the user device 210 using voice authentication, the user 402 may utter a phrase such as "Ok Google" one or more times.
  • the user device 210 can be configured to use a microphone 211 to capture the audio signals that correspond to the user's utterances.
  • the user device 210 can then derive reference feature vector 430 from the audio signals that correspond to at least one of the uttered phrases captured at some point in time prior to the user's 402 use of system 400 to unlock the user device 210 using voice authentication.
  • the reference vector 430 may provide a baseline representation of the characteristics of the user's 402 voice that the generated voiceprint can be compared to.
  • the reference vector 430 may be generated based on the user's 402 utterance of a predetermined hotword, which can be uttered to unlock the phone during voice authorization.
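  • A reference vector of this kind might be derived as sketched below, assuming the user repeats the hotword a few times during enrollment and the resulting speaker vectors are averaged; averaging is an assumption, since the text only requires at least one enrollment utterance, and `extract_speaker_vector` is a hypothetical helper wrapping the speaker verification model.

```python
import torch

def build_reference_vector(enrollment_inputs, extract_speaker_vector):
    """Average per-utterance speaker vectors into a baseline reference representation."""
    vectors = [extract_speaker_vector(x) for x in enrollment_inputs]
    return torch.stack(vectors).mean(dim=0)
```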
  • the comparison module 440 may determine the level of similarity between the speaker vector 420 and the reference vector 430 . In one implementation, the comparison module 440 can calculate a similarity measure between the speaker vector 420 and the reference vector 430 . In some instances, the comparison module 440 can determine whether the similarity measure between the speaker vector 420 and the reference vector 430 exceeds a predetermined threshold. In those instances where the similarity measure exceeds the predetermined threshold, the comparison module 440 may provide output data to the verification module 450 indicating that the similarity measure exceeded the predetermined threshold. Alternatively, the comparison module 440 may determine that the similarity measure does not exceed the predetermined threshold. In such instances, the comparison module 440 may provide output data to the verification module 450 indicating that the similarity measure did not exceed the predetermined threshold.
  • the similarity measure between the speaker vector 420 and the reference vector 430 may be calculated based on a distance between the speaker vector 420 and the reference vector 430 .
  • the comparison module 440 may be configured to determine the distance between the speaker vector 420 and the reference vector 430 .
  • the distance between the speaker vector 420 and the reference vector 430 may be determined, for example, using a cosine function.
  • the cosine function can determine the distance between the speaker vector 420 and the reference vector 430 by measuring the angle between the two vectors.
  • the verification module 450 receives and interprets the output data from the comparison module 440 . Based on that output data, the verification module 450 may determine whether the user 402 who uttered the phrase 410 a from which the speaker vector 420 was derived is the same user who previously uttered the phrase from which the reference vector 430 was derived. If so, the verification module 450 may instruct an application executing on user device 210 to provide user 402 with access to the device 210 .
  • the verification module 450 may provide access to a particular resource on the device, unlock the device, wake the device up from a low power state, or the like.
  • the verification module 450 may determine, based on the output data from the comparison module 440 , that the user who uttered the phrase 410 a is the same user who uttered the phrase from which the reference vector 430 was derived if the output data from the comparison module 440 indicates that the similarity measure exceeds the predetermined threshold. In such instances, the verification module 450 may determine that the user is fully authenticated and authorized to use the user device 210 . Alternatively, the verification module 450 may determine, based on the output data from the comparison module 440 , that it cannot conclude that the user 402 who uttered the phrase 410 a is the same user who uttered the phrase from which the reference vector 430 was derived.
  • the user 402 is not authenticated, and is not provided with access to the device.
  • the system 400 , user device 210 , one or more other applications, or a combination thereof may provide alternative options for accessing the user device 210 .
  • the user device 210 may prompt the user 402 to enter a secret passcode.
  • the user device 210 When a user 402 has been authenticated, by determining that the user 402 who uttered the phrase 410 a is the same user who uttered the phrase from which the reference vector 430 was derived, the user device 210 unlocks and may output a message 460 to the user indicating that the “Speaker's Identity is Verified.”
  • This message may be a text message displayed on a graphical user interface of the user device 210 , an audio message output by a speaker of the user device 210 , a video message displayed on the graphical user interface of the user device 210 , or a combination of one or more of the aforementioned types of messages.
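  • The comparison and verification decision described above might be sketched as follows; the threshold value and the device actions (`unlock_device`, `prompt_for_passcode`) are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def authenticate(speaker_vector: torch.Tensor,
                 reference_vector: torch.Tensor,
                 unlock_device, prompt_for_passcode,
                 threshold: float = 0.8) -> bool:
    # Comparison module 440: cosine similarity between speaker vector 420 and reference vector 430.
    similarity = F.cosine_similarity(speaker_vector, reference_vector, dim=-1)
    if similarity.item() >= threshold:
        unlock_device("Speaker's Identity is Verified")   # verification module 450 grants access
        return True
    prompt_for_passcode()                                  # alternative access option, e.g., a passcode
    return False
```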
  • FIG. 5 is a flowchart of a process 500 for performing language-independent speaker identity verification.
  • the process 500 will be described as being performed by a system.
  • the system 400 discussed above can perform the process 500 to authenticate a user attempting to access a user device 210 .
  • the process 500 may begin when a user device 210 receives 510 a request to perform voice authentication from a user of the device.
  • the user may have to select a button on the user device, perform a gesture on the user interface of the user device, perform a gesture in the air in the line of sight of a camera of the user device, or the like in order to instruct the phone to initiate voice authentication of the user.
  • the user may utter a predetermined hotword, in any language or dialect that can be used to verify the identity of the user.
  • the user device 210 may use a microphone to passively “listen” for the detection of a predetermined uttered hotword, in any language or dialect that may be used to initiate voice authentication of the user.
  • a predetermined hotword may include, for example, "Hello Phone," "Ok Google," "Nǐhǎo Android," or the like.
  • the process can continue at 520 when the system 400 obtains an utterance input by a user of the user device 210 .
  • the utterance may include, for example, a predetermined hotword, in any language or dialect that may be used to initiate voice authentication of the user.
  • the system 400 may derive an acoustic feature vector from the audio signals corresponding to the obtained utterance.
  • the system 400 can determine 530 a language identifier associated with the user device 210 .
  • a language identifier may include data that identifies a particular language or dialect associated with the user.
  • the language identifier may include a one-hot language vector.
  • the language identifier that is stored on any particular user device 210 may be set to a particular language identifier from a pool of multiple different language identifiers corresponding to known languages and dialects in any number of different ways, for example, as described above.
  • the subject matter of the present specification is not limited to only currently known languages or dialects.
  • the speaker verification model can be trained to accommodate new languages, dialects, or accents. When a speaker verification model is re-trained, mappings between languages or locations and identifiers may be adjusted, e.g., to add new locations or languages.
  • the system 400 may provide 540 input data to the speaker verification model based on the acoustic feature vector and the language identifier.
  • the input may be provided to the speaker verification model in a variety of different ways.
  • the acoustic feature vector and the language identifier such as one-hot language vector may be concatenated.
  • the concatenated vector may be provided as input to the speech verification model.
  • the system 400 may concatenate the outputs of at least two other neural networks that have respectively generated outputs based on each respective neural network's processing of the acoustic feature vector, the language identifier such as a one-hot language vector, or both the acoustic feature vector and the language identifier.
  • the concatenated output of the two or more other neural networks may be provided to the speech verification model.
  • the system 400 may generate an input vector based on the acoustic feature vector and a weighted sum of a one-hot language vector being used as a language identifier.
  • Other methods of generating input data to the speech verification model 280 based on the acoustic feature vector and language identifier may be used.
  • the system 400 may generate a speaker representation based on the input provided in 540 .
  • the speaker verification model may include a neural network that processes the input provided in 540 and generates a set of activations at one or more hidden layers.
  • the speaker representation may then be derived from a particular set of activations obtained from at least one hidden layer of the neural network.
  • the activations may be obtained from the last hidden layer of the neural network.
  • the speaker representation may include a feature vector that is indicative of characteristics of the voice of the user.
  • the system 400 may determine whether the speaker of the utterance obtained in stage 520 can access the user device 210 . This determination may be based on, for example, a comparison of the speaker representation to a reference representation.
  • the reference representation may be a feature vector that was derived from a user utterance input into the user device 210 at some point in time prior to the user requesting to access the user device using voice authentication.
  • the comparison of the speaker representation to the reference representation may result in the determination of a similarity measure that is indicative of the similarity between the speaker representation and the reference representation.
  • the similarity measure may include a distance between the speaker representation and the reference representation. In one implementation, the distance may be calculated using a cosine function. If it is determined that the similarity measure exceeds a predetermined threshold, the system 400 may determine to provide 570 the user with access to the user device 210 .
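  • The stages of process 500 might be tied together as in the end-to-end sketch below, which reuses the kinds of helpers sketched earlier (`derive_acoustic_features`, `one_hot_language_vector`, and a truncated verification model); all of these names, and the threshold, are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def process_500(audio, device_state, verification_model,
                derive_acoustic_features, one_hot_language_vector,
                threshold: float = 0.8) -> bool:
    # 520: obtain the utterance and derive an acoustic feature vector from it.
    acoustic_vec = derive_acoustic_features(audio)
    # 530: determine the language identifier associated with the user device.
    lang_vec = one_hot_language_vector(device_state["language_id"])
    # 540: provide input derived from both vectors to the speaker verification model.
    model_input = torch.cat([acoustic_vec, lang_vec])
    # 550: generate the speaker representation from hidden-layer activations.
    with torch.no_grad():
        speaker_rep = verification_model(model_input)
    # 560/570: compare with the reference representation and grant access if similar enough.
    similarity = F.cosine_similarity(speaker_rep, device_state["reference"], dim=-1)
    return bool(similarity.item() >= threshold)
```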
  • Embodiments of the subject matter, the functional operations and the processes described in this specification can be implemented in digital electronic circuitry, in tangibly-embodied computer software or firmware, in computer hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them.
  • Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible nonvolatile program carrier for execution by, or to control the operation of, data processing apparatus.
  • the program instructions can be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus.
  • the computer storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them.
  • data processing apparatus encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers.
  • the apparatus can include special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).
  • the apparatus can also include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
  • a computer program (which may also be referred to or described as a program, software, a software application, a module, a software module, a script, or code) can be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, and it can be deployed in any form, including as a standalone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • a computer program may, but need not, correspond to a file in a file system.
  • a program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code).
  • a computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
  • the processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform functions by operating on input data and generating output.
  • the processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).
  • Computers suitable for the execution of a computer program include, by way of example, general purpose or special purpose microprocessors or both, or any other kind of central processing unit.
  • a central processing unit will receive instructions and data from a read-only memory or a random access memory or both.
  • the essential elements of a computer are a central processing unit for performing or executing instructions and one or more memory devices for storing instructions and data.
  • a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks.
  • a computer need not have such devices.
  • a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device (e.g., a universal serial bus (USB) flash drive), to name just a few.
  • Computer readable media suitable for storing computer program instructions and data include all forms of nonvolatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD-ROM and DVD-ROM disks.
  • the processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
  • To provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer.
  • Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.
  • a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on the user's device in response to requests received from the web browser.
  • Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back end, middleware, or front end components.
  • the components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (“LAN”) and a wide area network (“WAN”), e.g., the Internet.
  • the computing system can include clients and servers.
  • a client and server are generally remote from each other and typically interact through a communication network.
  • the relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.

Abstract

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, to facilitate language-independent speaker verification. In one aspect, a method includes actions of receiving, by a user device, audio data representing an utterance of a user. Other actions may include providing, to a neural network stored on the user device, input data derived from the audio data and a language identifier. The neural network may be trained using speech data representing speech in different languages or dialects. The method may include additional actions of generating, based on output of the neural network produced in response to receiving the set of input data, a speaker representation and determining, based on the speaker representation and a second representation, that the utterance is an utterance of the user. The method may provide the user with access to the user device based on determining that the utterance is an utterance of the user.

Description

    TECHNICAL FIELD
  • This specification generally relates to speaker verification.
  • BACKGROUND
  • Voice authentication provides an easy way for a user of a user device to gain access to the user device. Voice authentication allows a user to unlock, and access, the user's device without having to remember or type in a passcode. However, the existence of multiple different languages, dialects, accents, and the like presents certain challenges in the field of voice authentication.
  • SUMMARY
  • In one implementation, a speaker verification model is used that facilitates speaker verification regardless of the speaker's language, dialect, or accent. The speaker verification model may be based on a neural network. The neural network may be trained using inputs that include an utterance and a language identifier. Once trained, activations output by a hidden layer of the neural network can be used as a voiceprint, which can be compared to a reference representation on the user's device. A speaker can be authenticated if the voiceprint and the reference representation satisfy a predetermined similarity threshold.
  • According to one implementation, the subject matter of this specification may be embodied in a method to facilitate language-independent speaker verification. The method may include the actions of: receiving, by a user device, audio data representing an utterance of a user; determining a language identifier associated with the user device; providing, to a neural network stored on the user device, a set of input data derived from the audio data and the determined language identifier, the neural network having parameters trained using speech data representing speech in different languages and different dialects; generating, based on output of the neural network produced in response to receiving the set of input data, a speaker representation indicative of characteristics of the voice of the user; determining, based on the speaker representation and a second representation, that the utterance is an utterance of the user; and providing the user access to the user device based on determining that the utterance is an utterance of the user.
  • Other versions include corresponding systems, apparatus, and computer programs to perform the actions of methods, encoded on computer storage devices.
  • These and other versions may optionally include one or more of the following features. For instance, in some implementations, the set of input data derived from the audio data and the determined language identifier includes a first vector that is derived from the audio data and a second vector that is derived from the determined language identifier.
  • In some implementations, the method may include generating an input vector by concatenating the first vector and the second vector into a single concatenated vector, providing, to the neural network, the generated input vector, and generating, based on output of the neural network produced in response to receiving the input vector, a speaker representation indicative of characteristics of the voice of the user.
  • In some implementations, the method may include generating an input vector by concatenating the outputs of at least two other neural networks that respectively generate outputs based on (i) the first vector, (ii) the second vector, or (iii) both the first vector and the second vector, providing, to the neural network, the generated input vector, and generating, based on output of the neural network produced in response to receiving the input vector, a speaker representation indicative of characteristics of the voice of the user.
  • In some implementations, the method may include generating an input vector based on the first vector and a weighted sum of the second vector, providing, to the neural network, the generated input vector, and generating, based on output of the neural network produced in response to receiving the input vector, a speaker representation indicative of characteristics of the voice of the user.
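  • For illustration purposes only, the following Python sketch shows one way the three input-combination options described above could be realized. The function names, the sub-network callables, and the weight matrix are hypothetical and are not taken from this specification; the sketch assumes the audio-derived vector and the language identifier vector are one-dimensional NumPy arrays.

        import numpy as np

        def combine_by_concatenation(audio_vec, lang_vec):
            # First option: concatenate the audio-derived vector and the
            # language identifier vector into a single input vector.
            return np.concatenate([audio_vec, lang_vec])

        def combine_subnetwork_outputs(audio_vec, lang_vec, audio_subnet, lang_subnet):
            # Second option: pass each vector through its own (hypothetical)
            # sub-network and concatenate the resulting outputs.
            return np.concatenate([audio_subnet(audio_vec), lang_subnet(lang_vec)])

        def combine_with_weighted_sum(audio_vec, lang_vec, weight_matrix):
            # Third option (one plausible reading): append a weighted combination
            # of the language vector's features to the audio-derived vector.
            return np.concatenate([audio_vec, weight_matrix @ lang_vec])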
  • In some implementations, the output of the neural network produced in response to receiving the set of input data includes a set of activations generated by a hidden layer of the neural network.
  • In some implementations, determining, based on the speaker representation and a second representation, that the utterance is an utterance of the user may include determining a distance between the speaker representation and the second representation.
  • In some implementations, providing the user access to the user device based on determining that the utterance is an utterance of the user may include unlocking the user device.
  • Other implementations of the subject matter described by this specification include a method for language-independent speaker verification that include receiving, by a mobile device that implements a language-independent speaker verification model configured to determine whether received audio data likely includes an utterance of one of multiple language-specific hotwords, (i) particular audio data corresponding to a particular utterance of a user, and (ii) data indicating a particular language spoken by the user, and in response to receiving (i) particular audio data corresponding to a particular utterance of a user, and (ii) data indicating a particular language spoken by the user, providing, for output, an indication that the language-independent speaker verification model has determined that the particular audio data likely includes the utterance of a hotword designated for the particular language spoken by the user.
  • These and other versions may optionally include one or more of the following features. For instance, in one implementation, providing, for output, the indication may include providing access to a resource of the mobile device. Alternatively, or in addition, providing, for output, the indication may include unlocking the mobile device. Alternatively, or in addition, providing, for output, the indication may include waking up the mobile device from a low-power state. Alternatively, or in addition, providing, for output, the indication comprises providing an indication that the language-independent speaker verification model has determined that the particular audio data includes the utterance of a particular user associated with the mobile device.
  • In some implementations, the language-independent speaker verification model may include a neural network trained without using utterances of the user.
  • The subject matter of this specification provides multiple advantages over conventional methods. For instance, the subject matter of the present application provides a speaker verification model that can be easily distributed. Since the speaker verification model is language, dialect, and accent independent, the same speaker verification model can be widely distributed to user devices. This is significantly more efficient than providing different speaker verification models to different devices based on the language of the device user. It also avoids the need to deploy multiple speaker verification models to the same device and have the user select one of them.
  • The speaker verification model provided by the present application demonstrates improved accuracy when using the same model to perform speaker verification independent of speaker language, dialect, or accent. For instance, variations in language, dialect, or accent can result in a particular user pronouncing a predetermined hotword in a different way than other users. This pronunciation difference can cause accuracy problems in conventional systems. The speaker verification model of the present disclosure improves upon this weakness of conventional systems.
  • The speaker verification model provided by the present application also provides ease of updating. For instance, a newly trained model can easily be deployed as part of a routine software update to a user device's operating system. Such updated speaker verification models may be easily trained to account for new languages, dialects, and/or accents as they arise. Alternatively, updates may be created to an existing version of the speaker verification model based on known languages, dialects, and/or accents. Such updated speaker verification models can be universally deployed, without the need to provide particular speaker verification models to specific devices in specific geographic regions.
  • The details of one or more embodiments of the subject matter described in this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages of the subject matter will become apparent from the description, the drawings, and the claims.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 shows a contextual diagram of an example of a system for using a language-independent speaker verification model to perform speaker verification.
  • FIG. 2 is an example of a system for training a language-independent speaker verification model.
  • FIG. 3 is an example of a conceptual representation of a plurality of respective language identification vectors.
  • FIG. 4 is an example of a system that performs language-independent speaker verification.
  • FIG. 5 is a flowchart of a process for performing language-independent speaker verification.
  • DETAILED DESCRIPTION
  • In some implementations, a system provides a language-independent speaker verification model, which can be a model based on a neural network, to a user device. The language-independent speaker verification model is trained, prior to installation on the user device, based on training data that includes (i) utterances from multiple different users and (ii) vectors indicating languages or locations corresponding to the respective utterances. Once installed on the user device, the language-independent speaker verification model may be used to verify the identity of a user of the user device without subsequent training of the language-independent speaker verification model. While the user device may obtain and use utterances of the user to enroll the user, the model itself does not need to be trained based on any utterances of the user of the user device.
  • As used herein, a “language-independent” speaker verification model refers to a single model that can be used to accurately verify the identities of speakers that speak different languages or dialects. That is, the model is not dependent on or limited to speech being in a specific single language. As a result, rather than using different models for different languages, dialects, or accents, a single language-independent model can be used. In some implementations, the model is a text-dependent model trained to identify a speaker based on the utterance of a specific word or phrase, e.g., a predetermined hotword or attention word. A language-independent model may be trained to distinguish speakers of different languages based on a single hotword, or based on different hotwords for different languages or locations. Even when the same hotword is used in different languages or locations, users having different languages, dialects, accents, or locations may pronounce the hotword differently. These variations have decreased the accuracy of prior models, which often improperly attributed variability due to the regional language or accent as a speaker-distinctive characteristic. For example, the rate of false positives in verification may be increased when a prior model interprets general features of a regional accent to be the main distinctive elements of a particular speaker's voice, when in fact the features are actually common to many other users who have a very similar accent. The present application obtains information about the language or location of a user and provides the information to the model, allowing the model to create speaker representations, e.g., voiceprints, that better distinguish a user from other users having the same language, dialect, accent, or location.
  • FIG. 1 shows a contextual diagram of an example of a system 100 for using a language-independent speaker verification model to perform identity verification. The system 100 includes a user device 110, a user device 120, a network 130, a server 140, a neural network 150, and a speaker verification model 180.
  • The system 100 includes a server 140 that stores a neural network 150. The neural network 150 has been trained using speech data representing speech samples in different languages, different dialects, or both. The server 140 generates a speaker verification model 180 based on the neural network 150. Then, server 140 transmits a copy of the speaker verification model 180 through a network 130 to a first user device 110 and to a second user device 120. A copy of the speaker verification model 180 is then stored on each respective user device 110, 120.
  • A user, e.g., “Joe” may attempt to gain access to the user device 110 using voice authentication. For instance, Joe may utter a predetermined hotword 105 a, or phrase, such as “Ok Google” in English. The audio 105 b corresponding to the predetermined utterance may be detected by a microphone 111 of the user device 110. The user device 110 may generate a first input to the stored speaker verification model 180 that is derived from the audio 105 b detected by the microphone 111. In addition, the user device 110 may derive a second input to the stored speaker verification model 180 based on the determination that Joe uttered the hotword 105 a, or phrase, in the English language. The user device 110 may determine that Joe uttered the hotword 105 a, or phrase, in the English language by obtaining a language setting of the device. The speaker verification model 180 stored on Joe's user device 110 may then generate, based on processing the first input derived from the audio 105 b and the second input derived from Joe's use of the English language, a voiceprint for Joe. Based on an analysis of the generated voiceprint, the user device 110 may determine that Joe is authorized to access the device 110. In response to determining that Joe is authorized to access user device 110, the user device 110 can initiate processing that unlocks user device 110. In some instances, the user device 110 may display a message on the graphical user interface 112 that recites, for example, “Speaker Identity Verified” 113. Alternatively, or in addition, when the user device 110 is unlocked, a speaker of the user device 110 may output an audio greeting 115 that recites “Welcome Joe.”
  • In the example of FIG. 1, another user, e.g., “Wang,” has a user device 120 that also stores a copy of the same speaker verification model 180. Wang, a fluent speaker of the Chinese language, may attempt to gain access to the user device 120 using voice authentication. For instance, Wang may utter a predetermined hotword 115 a, or phrase, such as “Nǐ hǎo Android” in Chinese (roughly translated as “Hello Android” in English). The audio 115 b corresponding to the predetermined utterance may be detected by a microphone 121 of the user device 120. The user device 120 may generate a first input to the stored speaker verification model 180 that is derived from the audio 115 b detected by the microphone 121. In addition, the user device 120 may derive a second input to the stored speaker verification model 180 based on the determination that Wang uttered the hotword 115 a, or phrase, in the Chinese language. The user device 120 may determine that Wang uttered the hotword 115 a, or phrase, in the Chinese language by obtaining a language setting of the device. The speaker verification model 180 stored on Wang's user device 120 may then generate, based on processing the first input derived from the audio 115 b and the second input derived from Wang's use of the Chinese language, a voiceprint for Wang. Based on an analysis of the generated voiceprint, the user device 120 may determine that Wang is authorized to access the device 120. In response to determining that Wang is authorized to access user device 120, the user device 120 can initiate processing that unlocks user device 120. In some instances, the user device 120 may display a message on the graphical user interface 122 that recites, for example, “Shuōhuàzhě de shēnfèn yànzhèng” 123 (roughly translated as “Speaker Identity Verified” in English). Alternatively, or in addition, when the user device 120 is unlocked, a speaker of the user device 120 may output an audio greeting 125 that recites “Huānying Wang” (roughly translated as “Welcome Wang” in English).
  • As shown in the example of FIG. 1, a single text-dependent speaker recognition model 180 can be configured to use different predetermined hotwords for different languages or locations. In addition, or as an alternative, the model 180 can use the same hotword for multiple languages or locations, but the model 180 can generate speaker representations with respect to different variations of the hotword's pronunciation, e.g., due to different languages or regional accents. As discussed below, the model 180 can fine-tune the verification process by inputting an identifier for a language or location to a neural network of the model 180 along with audio information.
  • FIG. 2 is an example of a system 200 for training a language-independent speaker verification model 280. The system 200 includes a user device 210, a network 230, a server 240, and a neural network 250. In general, the training of the language-independent speaker verification model 280 occurs via processing performed on server 240, before the model 280 is distributed to the user device 210 and used to perform speaker recognition. Such training does not require user device 210 to be connected to network 230.
  • Before training can begin, server 240 obtains a set of training utterances 210 a and 210 b. The training utterances may include one or more speech samples that were each respectively uttered by multiple different training speakers, recorded, and stored in a training utterances repository made available to server 240. Each training utterance 210 a, 210 b may include at least a portion of the audio signal that results when a user utters the training utterance.
  • To facilitate voice authentication, the neural network 250 may be trained using training utterances that correspond to a predetermined hotword that can be uttered by a user of user device 210 during voice authentication. The training utterances may include utterances from multiple different users who each utter the same hotword in a different language, different dialect, different accent, or the like. In one implementation, multiple sets of training data may be used to train the neural network 250 with each training data set corresponding to a particular keyword utterance in a particular language, dialect, accent, or the like. For instance, a single neural network 250 may be trained with a set of training utterances from multiple different users uttering “Ok Google” in U.S. English, and another set of training data with multiple different users uttering “Ok Google” in British English. In one implementation, the single neural network 250 may similarly be trained with other training data sets that include the hotword “Ok Google” being uttered in different languages, different dialects, different accents, or the like until the neural network 250 has been trained for all known languages, dialects, accents, or the like. Alternatively, the single neural network 250 may be similarly trained with other training data sets that include the hotword “Ok Google” being uttered in different languages, different dialects, different accents, or the like until the neural network 250 has been trained for all languages, dialects, accents or the like in the regions where a speaker verification model based on the neural network 250 will be deployed. As used herein, a hotword can be a single word or a phrase that includes multiple words. In some implementations, the hotword for each language is fixed during training of the model, so that each user using the model in a particular location uses the same hotword.
  • The audio signals corresponding to the uttered training phrases may be captured and recorded. Though the examples of training utterances corresponding to a predetermined hotword provided here include “Ok Google” and “Nǐ hǎo Android,” the present disclosure need not be so limited. Instead, training utterances corresponding to any predetermined hotword, in any language or any dialect, can be used to train the neural network 250. In addition, it is contemplated that the neural network 250 can be easily trained to accommodate all known languages, dialects, accents, or the like.
  • In some instances, a training speaker may be requested to utter, and record, the same training phrase multiple times in order to generate multiple different training utterances for the same training word or phrase. Training utterances may be obtained, in this manner, using multiple different speakers uttering the training word or phrase in multiple different languages, multiple different dialects, or the like. Once the training utterances 210 a, 210 b are obtained, the system 200 may derive 212 a, 212 b a respective feature vector for each training utterance that corresponds to the acoustic features of the related training utterance. The respective feature vector for each training utterance may include, for example, an N-by-1 vector that is derived from the training utterance and corresponds to acoustic features of the utterance. An N-by-1 vector may be conceptually modeled using a single column of N values. In one implementation, each of the N values in the N-by-1 vector may include a value of either “0” or “1”.
  • The system 200 may also obtain multiple different language IDs 215 a, 215 b. Language IDs may include data that identifies a particular language. In one implementation, the language ID may include a one-hot language vector. Such one-hot language vectors may include an N-by-1 vector where only one feature of the language vector is activated. A particular feature of a language vector may be activated by, for example, setting the feature to a value of “1.” Similarly, for any given one-hot language vector, all other features of the one-hot language vector will be deactivated. A feature of a language vector may be deactivated by, for example, setting the feature to “0.”
  • FIG. 3 is an example of a conceptual representation of a plurality of one-hot language vectors 305, 310, 315, 320. In each one-hot language vector 305, 310, 315, 320, only one feature has been activated, while all other features are deactivated. System 200 may associate each one-hot language vector 305, 310, 315, 320 with a particular language. For instance, system 200 may determine that a one-hot language vector with the first feature of the language vector activated, as is the case with language identification vector 305, may be associated with the “English” language. Similarly, system 200 may determine that a one-hot language vector with the second feature of the vector activated, as is the case with language identification vector 310, may be associated with the “Chinese” language. Similar language associations may be made between the language identification vectors 315 and 320 and other languages.
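  • As a minimal sketch, the following Python example shows how such a one-hot language vector could be constructed. The language codes and their positions in the vector are hypothetical; the specification does not prescribe any particular mapping.

        import numpy as np

        # Hypothetical mapping from language codes to positions in the vector.
        LANGUAGE_INDEX = {"en-US": 0, "zh-CN": 1, "en-GB": 2, "es-MX": 3}

        def one_hot_language_vector(language_code):
            # Build an N-by-1 one-hot vector: exactly one feature is set to "1",
            # and all other features are set to "0".
            vec = np.zeros(len(LANGUAGE_INDEX))
            vec[LANGUAGE_INDEX[language_code]] = 1.0
            return vec

        print(one_hot_language_vector("zh-CN"))  # [0. 1. 0. 0.]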
  • Training of the speaker verification model 280 may begin by providing sets of training data to the neural network 250. In one implementation, neural network 250 may be trained using a pair-wise training technique. For instance, a first set of training data 213 a, which includes a training utterance vector 214 a and a second input that includes a language ID 215 a, is input into the neural network 250. The language ID 215 a may include, for example, a one-hot language vector that identifies the language or dialect used by the training speaker that provided the training utterance 210 a from which the training utterance vector 214 a was derived. The neural network 250 processes the first set of training data 213 a and generates an output 260 a. Subsequently, a second set of training data 213 b is input into the neural network 250. The neural network 250 processes the second set of training data 213 b and generates an output 260 b. The outputs 260 a, 260 b are then compared using a comparator 270. The comparator 270 analyzes the outputs 260 a, 260 b to determine whether the training vectors 214 a, 214 b were derived from training utterances 210 a, 210 b that were uttered by the same speaker. In one implementation, the comparator 270 may determine whether the training vectors 214 a, 214 b were derived from training utterances 210 a, 210 b that were uttered by the same speaker by calculating the distance between the outputs 260 a, 260 b. Such a distance may be calculated, for example, using cosine similarity.
  • The output 272 of the comparison module provides an indication of whether the training utterances 210 a, 210 b were uttered by the same speaker. In one implementation, for example, the output 272 may be a binary value, either a ‘0’ or a ‘1’. In such an implementation, a ‘0’ may indicate that the utterances were not from the same speaker. On the other hand, a ‘1’ may indicate that the utterances were from the same speaker. Alternatively, the output 272 may be a value that can be mapped to a binary value such as a ‘0’ or a ‘1.’ For instance, the output 272 may include a probability that is indicative of whether the training utterances 210 a, 210 b were uttered by the same speaker. The parameters of the neural network 250 may then be adjusted based on the output 272 of the comparison module 270. In some implementations, the parameters of the neural network 250 may be adjusted automatically based on output 272. Alternatively, in some implementations, one or more parameters of the neural network may be adjusted manually based on the output 272. Multiple sets of training data may be processed in this manner until a comparison of the two outputs 260 a, 260 b consistently indicates whether a pair of training vectors such as 214 a, 214 b were derived from utterances 210 a, 210 b that were uttered by the same speaker.
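  • As a minimal sketch of the pair-wise comparison described above, the Python example below scores two network outputs with cosine similarity and maps the score to a binary same-speaker indication. The threshold value and function names are hypothetical; the specification does not specify how the comparator maps distances to the output 272.

        import numpy as np

        def cosine_similarity(a, b):
            # Similarity measure used to compare the two network outputs 260a and 260b.
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

        def same_speaker_output(output_a, output_b, threshold=0.8):
            # Map the similarity score to a binary output:
            # 1 -> same speaker, 0 -> different speakers.
            return 1 if cosine_similarity(output_a, output_b) >= threshold else 0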
  • The neural network 250 may include an input layer 252 for inputting sets of training data, multiple hidden layers 254 a, 254 b, 254 c for processing the sets of training data, and an output layer 256 for providing output. Each hidden layer 254 a, 254 b, 254 c may include one or more weights or other parameters. The weights or other parameters of each respective hidden layer 254 a, 254 b, 254 c may be adjusted so that the trained neural network produces the desired target vector corresponding to each set of training data. The output of each hidden layer 254 a, 254 b, 254 c may be an M-by-1 activation vector. The output of the last hidden layer such as 254 c may be provided to the output layer 256, which performs additional computations on the received activation vector in order to generate a neural network output. Once the neural network 250 reaches a desired level of performance, the neural network 250 may be designated as a trained neural network. For example, the neural network 250 may be trained until the network 250 can distinguish between speech of different speakers, and identify matches between speech of the same speaker, with less than a maximum error rate.
  • A set of training data such as 213 a that includes a training utterance vector 214 a and a language ID 215 a may be pre-processed before being provided as a training input to a neural network 250 in a variety of different ways. For instance, the training utterance vector 214 a and the language ID 215 a such as a one-hot language vector may be concatenated. In such instances, the concatenated vector may be provided as the input to the neural network 250 during training. Alternatively, the system 200 may generate the input to the neural network 250 by concatenating the outputs of at least two other neural networks that have respectively generated outputs based on each respective neural network's processing of the training utterance vector 214 a, the one-hot language vector, or both the training utterance vector 214 a and the one-hot language vector. In such instances, the concatenated output of the two or more other neural networks may be used to train the neural network 250. Alternatively, the system 200 may generate an input vector based on the training utterance vector 214 a and a weighted sum of the one-hot language vector. Other methods of generating a set of training data based on the training utterance vector 214 a and a one-hot language vector can be used.
  • A portion 258 of the neural network 250 may be obtained once the neural network 250 is designated as trained, and used to generate a speaker verification model 280. The obtained portion 258 of the neural network 250 may include the input layer 252 of the neural network 250 and one or more hidden layers of the neural network 254 a. In some implementations, however, the obtained portion of the neural network 250 does not include the output layer 256. Once trained, the neural network 250 is capable of producing an activation vector as an output of the last hidden layer of the obtained portion 258 that can be used as a voiceprint for a speaker. The voiceprint may be used by a user device to verify the identity of a person who provides an utterance of a hotword to the user device.
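  • For illustration, the Python sketch below computes a voiceprint by running an input vector through only the retained input and hidden layers of a trained feed-forward network. The ReLU activation, the layer shapes, and the function names are assumptions; the specification states only that the activations of a hidden layer serve as the voiceprint.

        import numpy as np

        def relu(x):
            return np.maximum(x, 0.0)

        def voiceprint(input_vector, hidden_weights, hidden_biases):
            # Forward pass through the retained portion of the trained network
            # (input layer plus hidden layers, with the output layer omitted).
            activation = np.asarray(input_vector, dtype=float)
            for W, b in zip(hidden_weights, hidden_biases):
                activation = relu(W @ activation + b)
            return activation  # M-by-1 activation vector used as the speaker vector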
  • The server 240 transmits a copy of the speaker verification model 280 through a network 230 to one or more respective user devices such as user device 210. A copy of the speaker verification model 280 is then stored on each respective user device, and can be used to facilitate language-independent speaker identity verification. As another example, the speaker verification model 280 may be pre-installed on the user device 210, e.g., with an operating system of the user device 210.
  • FIG. 4 is an example of a system 400 that performs language-independent speaker identity verification. The system 400 includes a user device 210, a speaker verification model 280, a comparison module 440, and a verification module 450.
  • In the example shown in FIG. 4, a user 402 attempts to access a user device 210 using voice verification. The user device 210 has previously received, and stored, a speaker verification model 280 provided by the server 240 via network 230. To access the user device 210 using voice verification, the user 402 utters a predetermined hotword 410 a, or phrase, such as “Ok Google.” The audio 410 b corresponding to the predetermined hotword 410 a, or phrase, “Ok Google” is detected by a microphone 211 of the user device 210. The user device 210 may derive 413 an acoustic feature vector from the audio 410 b that represents the acoustic features of audio 410 b.
  • In addition, the system 400 may obtain a language ID 415 that is stored in a language ID storage area of the user device 210. A language ID may include data that identifies a particular language or dialect associated with the user. In one implementation, the language ID may include a one-hot language vector. The language ID 415 that is stored on any particular user device 210 may be set to a particular language ID from a set of multiple different language IDs corresponding to known languages and dialects in any number of different ways. For instance, a user may select a particular language or dialect when powering on, and configuring, the user device 210 for the first time after purchase of the user device 210. A corresponding language ID may be selected, and stored in the user device 210, based on the particular language or dialect selected by the user.
  • Alternatively, or in addition, a particular language ID may be selected, and stored in the user device 210, based on the location of the device. For instance, a user device 210 may establish a default setting for the language ID based on the location where the device was first activated, the current location of the device, or the like. Alternatively, or in addition, the user device 210 may dynamically detect a particular language or dialect associated with a user based on speech samples obtained from the user. The dynamic detection of the particular language or dialect associated with the user may be determined, for example, when the user utters the predetermined hotword, during speaker authentication. In such instances, a corresponding language ID may be selected, and stored on the user device 210, based on the language or dialect detected from the user's speech samples. Alternatively, or in addition, the user may modify a language or dialect setting associated with the user device 210 in order to select a particular language or dialect at any time. In such instances, a corresponding language ID may be selected, and stored on the user device 210, based on the user's modification of the user device 210 language or dialect settings.
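  • As a rough illustration of how a language ID might be resolved from the device state, the Python sketch below applies one possible priority order. The mapping, the priority order, and the fallback value are hypothetical assumptions, not requirements of this specification.

        # Hypothetical defaults mapping a device's location to a language ID.
        DEFAULT_LANGUAGE_BY_COUNTRY = {"US": "en-US", "CN": "zh-CN", "GB": "en-GB"}

        def resolve_language_id(user_setting=None, detected_language=None, country=None):
            # Assumed priority: explicit user setting, then language detected from
            # the user's speech samples, then a location-based default.
            if user_setting:
                return user_setting
            if detected_language:
                return detected_language
            return DEFAULT_LANGUAGE_BY_COUNTRY.get(country, "en-US")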
  • The acoustic feature vector 414 and the language ID 415 may be provided as an input to the speech verification model 280 that is based on at least a portion of the trained neural network 250. For instance, the speech verification model 280 may include one or more layers of the trained neural network 250 such as, for example, the input layer 252 and one or more hidden layers 254 a, 254 b, 254 c. In one implementation, however, the speech verification model 280 does not utilize the output layer 256 of the neural network 250.
  • The acoustic feature vector 414 and the language ID 415 can be provided as input to the speech verification model 280 in a variety of different ways. For instance, the acoustic feature vector 414 and the language ID 415 such as a one-hot language vector may be concatenated. In such instances, the concatenated vector may be provided as input to the speech verification model. Alternatively, the system 400 may concatenate the outputs of at least two other neural networks that have respectively generated outputs based on each respective neural network's processing of the acoustic feature vector 414, the language ID 415 such as a one-hot language vector, or both the acoustic feature vector 414 and the language ID 415. In such instances, the concatenated output of the two or more other neural networks may be provided to the speech verification model 280. Alternatively, the system 400 may generate an input vector based on the acoustic feature vector 414 and a weighted sum of a one-hot language vector being used as a language ID 415. Other methods of generating input data to the speech verification model 280 based on the acoustic feature vector 414 and language ID 415 can be used.
  • The speech verification model's 280 processing of the provided input data based on the acoustic feature vector 414 and the language ID 415 may result in the generation of a set of activations at one or more hidden layers of the speech verification model's 280 neural network. For instance, the speech verification model's 280 processing of the provided input can result in a set of activations being generated at a first hidden layer 254 a, a second hidden layer 254 b, a third hidden layer 254 c, or the like. In one implementation, the system 400 may obtain the activations output by the final hidden layer 254 c of the speech verification model's 280 neural network. The activations output by the final hidden layer 254 c may be used to generate a speaker vector 420. This speaker vector 420 provides a representation that is indicative of characteristics of the voice of the user. This speaker vector may be referred to as a voiceprint. The voiceprint can be used to uniquely verify the identity of a speaker based on the characteristics of the user's voice.
  • A comparison module 440 may be configured to receive the speaker vector 420 and a reference vector 430. The reference vector 430 may be a vector that has been derived from a previous user utterance captured by the device, e.g., an utterance provided during enrollment of the user with the device. For instance, at some point in time prior to the user's 402 use of system 400 to unlock the user device 210 using voice authentication, the user 402 may utter a phrase such as “Ok Google” one or multiple times. The user device 210 can be configured to use a microphone 211 to capture the audio signals that correspond to the user's utterances. The user device 210 can then derive the reference vector 430 from the audio signals that correspond to at least one of the uttered phrases captured at some point in time prior to the user's 402 use of system 400 to unlock the user device 210 using voice authentication. The reference vector 430 may provide a baseline representation of the characteristics of the user's 402 voice that the generated voiceprint can be compared to. In one implementation, the reference vector 430 may be generated based on the user's 402 utterance of a predetermined hotword, which can be uttered to unlock the phone during voice authorization.
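  • As a minimal sketch, one plausible way to derive the reference vector 430 from several enrollment utterances is to average the speaker vectors produced for those utterances and length-normalize the result, as shown below. Averaging and normalization are assumptions; the specification only requires that the reference vector be derived from prior utterances of the user.

        import numpy as np

        def build_reference_vector(enrollment_speaker_vectors):
            # Average the speaker vectors produced from several enrollment
            # utterances of the hotword, then length-normalize the result.
            mean_vec = np.mean(np.stack(enrollment_speaker_vectors), axis=0)
            return mean_vec / np.linalg.norm(mean_vec)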
  • The comparison module 440 may determine the level of similarity between the speaker vector 420 and the reference vector 430. In one implementation, the comparison module 440 can calculate a similarity measure between the speaker vector 420 and the reference vector 430. In some instances, the comparison module 440 can determine whether the similarity measure between the speaker vector 420 and the reference vector 430 exceeds a predetermined threshold. In those instances where the similarity measure exceeds the predetermined threshold, the comparison module 440 may provide output data to the verification module 450 indicating that the similarity measure exceeded the predetermined threshold. Alternatively, the comparison module 440 may determine that the similarity measure does not exceed the predetermined threshold. In such instances, the comparison module 440 may provide output data to the verification module 450 indicating that the similarity measure did not exceed the predetermined threshold.
  • In some implementations, the similarity measure between the speaker vector 420 and the reference vector 430 may be calculated based on a distance between the speaker vector 420 and the reference vector 430. The comparison module 440 may be configured to determine the distance between the speaker vector 420 and the reference vector 430. In one implementation, the distance between the speaker vector 420 and the reference vector 430 may be determined, for example, using a cosine function. The cosine function can determine the distance between the speaker vector 420 and the reference vector 430 by measuring the angle between the two vectors.
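  • The comparison step can be sketched as follows in Python; the threshold value is illustrative only, and the returned dictionary is simply one hypothetical way to pass output data from the comparison module 440 to the verification module 450.

        import numpy as np

        def compare(speaker_vec, reference_vec, threshold=0.75):
            # Score the two vectors by the cosine of the angle between them and
            # report whether the (assumed) similarity threshold is satisfied.
            cos = float(np.dot(speaker_vec, reference_vec) /
                        (np.linalg.norm(speaker_vec) * np.linalg.norm(reference_vec)))
            return {"similarity": cos, "exceeds_threshold": cos >= threshold}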
  • The verification module 450 receives and interprets the output data from the comparison module 440. Based on the output data received from the comparison module 440, the verification module may determine whether the user 402 that uttered phrase 410 a from which the speaker vector 420 was derived is the same user who previously uttered the phrase from which the reference vector 430 was derived. If it is determined that the user 402 that uttered the phrase 410 a from which the speaker vector 420 was derived is the same user who previously uttered the phrase from which the reference vector 430 was derived, the verification module 450 may instruct an application executing on user device 210 to provide user 402 with access to the device 210. Alternatively, or in addition, upon a determination that the user 402 that uttered the phrase 410 a from which the speaker vector 420 was derived is the same user who previously uttered the phrase from which the reference vector 430 was derived, the verification module 450 may provide access to a particular resource on the device, unlock the device, wake the device up from a low power state, or the like.
  • The verification module 450 may determine, based on the output data from the comparison module 440, that the user who uttered the phrase 410 a is the same user who uttered the phrase from which the reference vector 430 was derived if the output data from the comparison module 440 indicates that the similarity measure exceeds the predetermined threshold. In such instances, the verification module may determine that the user is fully authenticated and authorized to use the user device 210. Alternatively, the verification module 450 may determine, based on the output data from the comparison module 440, that the verification module 450 cannot conclude that the user 402 who uttered the phrase 410 a is the same user who uttered the phrase from which the reference vector 430 was derived. In such instances, the user 402 is not authenticated, and is not provided with access to the device. Instead, the system 400, user device 210, one or more other applications, or a combination thereof may provide alternative options for accessing the user device 210. For instance, the user device 210 may prompt the user 402 to enter a secret passcode.
  • When a user 402 has been authenticated, by determining that the user 402 who uttered the phrase 410 a is the same user who uttered the phrase from which the reference vector 430 was derived, the user device 210 unlocks and may output a message 460 to the user indicating that the “Speaker's Identity is Verified.” This message may be a text message displayed on a graphical user interface of the user device 210, an audio message output by a speaker of the user device 210, a video message displayed on the graphical user interface of the user device 210, or a combination of one or more of the aforementioned types of messages.
  • FIG. 5 is a flowchart of a process 500 for performing language-independent speaker identity verification. For convenience, the process 500 will be described as being performed by a system. For example, the system 400 discussed above can perform the process 500 to authenticate a user attempting to access a user device 210.
  • The process 500 may begin when a user device 210 receives 510 a request to perform voice authentication from a user of the device. In some implementations, the user may have to select a button on the user device, perform a gesture on the user interface of the user device, perform a gesture in the air in the line of sight of a camera of the user device, or the like in order to instruct the phone to initiate voice authentication of the user. In such instances, after the instruction to initiate voice authentication is received, the user may utter a predetermined hotword, in any language or dialect, that can be used to verify the identity of the user. Alternatively, or in addition, the user device 210 may use a microphone to passively “listen” for the detection of a predetermined uttered hotword, in any language or dialect, that may be used to initiate voice authentication of the user. A predetermined hotword may include, for example, “Hello Phone,” “Ok Google,” “Nǐ hǎo Android,” or the like. In some implementations, there is a single fixed hotword for all users in a particular location or all users that speak a particular language.
  • The process can continue at 520 when the system 400 obtains an utterance input by a user of the user device 210. The utterance may include, for example, a predetermined hotword, in any language or dialect that may be used to initiate voice authentication of the user. The system 400 may derive an acoustic feature vector from the audio signals corresponding to the obtained utterance.
  • The system 400 can determine 530 a language identifier associated with the user device 210. A language identifier may include data that identifies a particular language or dialect associated with the user. In one implementation, the language identifier may include a one-hot language vector. The language identifier that is stored on any particular user device 210 may be set to a particular language identifier from a pool of multiple different language identifiers corresponding to known languages and dialects in any number of different ways, for example, as described above. However, the subject matter of the present specification is not limited to only currently known languages or dialects. For instance, the speaker verification model can be trained to accommodate new languages, dialects, or accents. When a speaker verification model is re-trained, mappings between languages or locations and identifiers may be adjusted, e.g., to add new locations or languages.
  • The system 400 may provide 540 input data to the speaker verification model based on the acoustic feature vector and the language identifier. The input may be provided to the speaker verification model in a variety of different ways. For instance, the acoustic feature vector and the language identifier such as a one-hot language vector may be concatenated. In such instances, the concatenated vector may be provided as input to the speech verification model. Alternatively, the system 400 may concatenate the outputs of at least two other neural networks that have respectively generated outputs based on each respective neural network's processing of the acoustic feature vector, the language identifier such as a one-hot language vector, or both the acoustic feature vector and the language identifier. In such instances, the concatenated output of the two or more other neural networks may be provided to the speech verification model. Alternatively, the system 400 may generate an input vector based on the acoustic feature vector and a weighted sum of a one-hot language vector being used as a language identifier. Other methods of generating input data to the speech verification model 280 based on the acoustic feature vector and language identifier may be used.
  • The system 400 may generate 550 a speaker representation based on the input provided in 540. For instance, the speaker verification model may include a neural network that processes the input provided in 540 and generates a set of activations at one or more hidden layers. The speaker representation may then be derived from a particular set of activations obtained from at least one hidden layer of the neural network. In one implementation, the activations may be obtained from the last hidden layer of the neural network. The speaker representation may include a feature vector that is indicative of characteristics of the voice of the user.
  • At 560, the system 400 may determine whether the speaker of the utterance obtained in stage 520 can access the user device 210. This determination may be based on, for example, a comparison of the speaker representation to a reference representation. The reference representation may be a feature vector that was derived from a user utterance input into the user device 210 at some point in time prior to the user requesting to access the user device using voice authentication. The comparison of the speaker representation to the reference representation may result in the determination of a similarity measure that is indicative of the similarity between the speaker representation and the reference representation. The similarity measure may include a distance between the speaker representation and the reference representation. In one implementation, the distance may be calculated using a cosine function. If it is determined that the similarity measure exceeds a predetermined threshold, the system 400 may determine to provide 570 the user with access to the user device 210.
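  • Tying the stages of process 500 together, the sketch below reuses the helper functions from the earlier sketches (one_hot_language_vector, combine_by_concatenation, voiceprint, and compare); the device and model objects, and the threshold, are hypothetical stand-ins rather than elements of this specification.

        def verify_and_unlock(audio_data, device, model, reference_vector, threshold=0.75):
            # Sketch of process 500; every name here is either defined in an
            # earlier sketch or a hypothetical stand-in for device functionality.
            acoustic_features = device.derive_acoustic_feature_vector(audio_data)        # 520
            language_vec = one_hot_language_vector(device.language_setting)              # 530
            model_input = combine_by_concatenation(acoustic_features, language_vec)      # 540
            speaker_vector = voiceprint(model_input, model["weights"], model["biases"])  # 550
            result = compare(speaker_vector, reference_vector, threshold)                # 560
            if result["exceeds_threshold"]:
                device.unlock()                                                          # 570
            return result["exceeds_threshold"]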
  • Embodiments of the subject matter, the functional operations and the processes described in this specification can be implemented in digital electronic circuitry, in tangibly-embodied computer software or firmware, in computer hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible nonvolatile program carrier for execution by, or to control the operation of, data processing apparatus. Alternatively or in addition, the program instructions can be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus. The computer storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them.
  • The term “data processing apparatus” encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit). The apparatus can also include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
  • A computer program (which may also be referred to or described as a program, software, a software application, a module, a software module, a script, or code) can be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, and it can be deployed in any form, including as a standalone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
  • The processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).
  • Computers suitable for the execution of a computer program include, by way of example, general purpose or special purpose microprocessors or both, or any other kind of central processing unit. Generally, a central processing unit will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a central processing unit for performing or executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device (e.g., a universal serial bus (USB) flash drive), to name just a few.
  • Computer readable media suitable for storing computer program instructions and data include all forms of nonvolatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
  • To provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's user device in response to requests received from the web browser.
  • Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (“LAN”) and a wide area network (“WAN”), e.g., the Internet.
  • The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
  • While this specification contains many specific implementation details, these should not be construed as limitations on the scope of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
  • Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
  • Particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. For example, the actions recited in the claims can be performed in a different order and still achieve desirable results. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing may be advantageous. Other steps or stages may be provided, or steps or stages may be eliminated, from the described processes. Accordingly, other implementations are within the scope of the following claims.

Claims (20)

1. A computer-implemented method comprising:
receiving, by a mobile device that implements a language-independent speaker verification model comprising a neural network that is stored on the mobile device and configured to determine whether received audio data likely includes an utterance of one of multiple language-specific hotwords, (i) particular audio data corresponding to a particular utterance of a user, and (ii) data indicating a particular language spoken by the user; and
in response to receiving (i) the particular audio data corresponding to a particular utterance of a user, and (ii) the data indicating a particular language spoken by the user, providing, for output, an indication that the language-independent speaker verification model has determined that the particular audio data likely includes the utterance of a hotword designated for the particular language spoken by the user.
2. The computer-implemented method of claim 1, wherein providing, for output, the indication comprises providing access to a resource of the mobile device.
3. The computer-implemented method of claim 1, wherein providing, for output, the indication comprises unlocking the mobile device.
4. The computer-implemented method of claim 1, wherein providing, for output, the indication comprises waking up the mobile device from a low-power state.
5. The computer-implemented method of claim 1, wherein providing, for output, the indication comprises providing an indication that the language-independent speaker verification model has determined that the particular audio data includes the utterance of a particular user associated with the mobile device.
6. The computer-implemented method of claim 1, wherein the neural network of the language-independent speaker verification model is trained without using utterances of the user.
7. A method comprising:
receiving, by a user device, audio data representing an utterance of a user;
providing, to a language independent speaker verification model comprising a neural network stored on the user device, a set of input data derived from the audio data and a language identifier or location identifier associated with the user device, the neural network having parameters trained using speech data representing speech in different languages or dialects;
generating, based on output of the language independent speaker verification model produced in response to receiving the set of input data, a speaker representation indicative of characteristics of the voice of the user;
determining, based on the speaker representation and a second representation, that the utterance is an utterance of the user; and
providing the user access to the user device based on determining that the utterance is an utterance of the user.
8. The method of claim 7, wherein the set of input data derived from the audio data and the language identifier includes a first vector that is derived from the audio data and a second vector that is derived from the language identifier associated with the user device.
9. The method of claim 8, further comprising:
generating an input vector by concatenating the first vector and the second vector into a single concatenated vector;
providing, to the language independent speaker verification model, the generated input vector; and
generating, based on output of the language independent speaker verification model produced in response to receiving the input vector, a speaker representation indicative of characteristics of the voice of the user.
10. The method of claim 8, further comprising:
generating an input vector by concatenating the outputs of at least two other neural networks that respectively generate outputs based on (i) the first vector, (ii) the second vector, or (iii) both the first vector and the second vector;
providing, to the language independent speaker verification model, the generated input vector; and
generating, based on output of the language independent speaker verification model produced in response to receiving the input vector, a speaker representation indicative of characteristics of the voice of the user.
11. The method of claim 8, further comprising:
generating an input vector based on the weighted sum of the first vector and the second vector;
providing, to the language independent speaker verification model, the generated input vector; and
generating, based on output of the language independent speaker verification model produced in response to receiving the input vector, a speaker representation indicative of characteristics of the voice of the user.
12. The method of claim 7, wherein the output of the language independent speaker verification model produced in response to receiving the set of input data includes a set of activations generated by a hidden layer of the neural network.
13. The method of claim 7, wherein determining, based on the speaker representation and a second representation, that the utterance is an utterance of the user comprises:
determining a distance between the speaker representation and the second representation.
14. The method of claim 7, wherein providing the user access to the user device based on determining that the utterance is an utterance of the user includes unlocking the user device.
15. A system comprising:
one or more computers and one or more storage devices storing instructions that are operable, when executed by the one or more computers, to cause the one or more computers to perform operations comprising:
receiving, by a user device, audio data representing an utterance of a user;
providing, to a language independent speaker verification model comprising a neural network stored on the user device, a set of input data derived from the audio data and a language identifier or location identifier associated with the user device, the neural network having parameters trained using speech data representing speech in different languages or different dialects;
generating, based on output of the language independent speaker verification model produced in response to receiving the set of input data, a speaker representation indicative of characteristics of the voice of the user;
determining, based on the speaker representation and a second representation, that the utterance is an utterance of the user; and
providing the user access to the user device based on determining that the utterance is an utterance of the user.
16. The system of claim 15, wherein the set of input data derived from the audio data and the language identifier includes a first vector that is derived from the audio data and a second vector that is derived from the language identifier associated with the user device.
17. The system of claim 16, further comprising:
generating an input vector by concatenating the first vector and the second vector into a single concatenated vector;
providing, to the language independent speaker verification model, the generated input vector; and
generating, based on output of the language independent speaker verification model produced in response to receiving the input vector, a speaker representation indicative of characteristics of the voice of the user.
18. The system of claim 16, further comprising:
generating an input vector by concatenating the outputs of at least two other neural networks that respectively generate outputs based on (i) the first vector, (ii) the second vector, or (iii) both the first vector and the second vector;
providing, to the language independent speaker verification model, the generated input vector; and
generating, based on output of the language independent speaker verification model produced in response to receiving the input vector, a speaker representation indicative of characteristics of the voice of the user.
19. The system of claim 16, further comprising:
generating an input vector based on the weighted sum of the first vector and the second vector;
providing, to the language independent speaker verification model, the generated input vector; and
generating, based on output of the language independent speaker verification model produced in response to receiving the input vector, a speaker representation indicative of characteristics of the voice of the user.
20. The system of claim 15, wherein the output of the language independent speaker verification model produced in response to receiving the set of input data includes a set of activations generated by a hidden layer of the neural network.
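
The vector-combination and scoring operations recited in claims 9, 11, 12, and 13 can be illustrated with a short sketch. The NumPy example below is a minimal illustration under assumed names and values (audio_vector, language_vector, speaker_representation, the padding scheme, the 0.25 threshold are all hypothetical); it is not the claimed implementation, and the patent does not prescribe this code.

```python
# Illustrative sketch only; names, dimensions, and the threshold are assumptions,
# not the patent's implementation.
import numpy as np

def concatenated_input(audio_vector: np.ndarray, language_vector: np.ndarray) -> np.ndarray:
    """Claim 9: concatenate the audio-derived vector and the language-identifier vector."""
    return np.concatenate([audio_vector, language_vector])

def weighted_sum_input(audio_vector: np.ndarray, language_vector: np.ndarray,
                       w_audio: float = 0.8, w_lang: float = 0.2) -> np.ndarray:
    """Claim 11: combine the two vectors as a weighted sum (zero-padded to equal length)."""
    size = max(audio_vector.size, language_vector.size)
    a = np.pad(audio_vector, (0, size - audio_vector.size))
    l = np.pad(language_vector, (0, size - language_vector.size))
    return w_audio * a + w_lang * l

def speaker_representation(model_weights: np.ndarray, input_vector: np.ndarray) -> np.ndarray:
    """Stand-in for the hidden-layer activations of the on-device network (claims 12, 20)."""
    return np.tanh(model_weights @ input_vector)

def cosine_distance(u: np.ndarray, v: np.ndarray) -> float:
    """Claim 13: distance between the speaker representation and a second (enrolled) representation."""
    return 1.0 - float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    audio_vector = rng.standard_normal(40)        # e.g., statistics derived from the utterance (assumed)
    language_vector = np.eye(8)[2]                # one-hot language identifier (assumed)
    x = concatenated_input(audio_vector, language_vector)
    weights = rng.standard_normal((128, x.size))  # placeholder parameters; a real model is trained
    current = speaker_representation(weights, x)
    enrolled = speaker_representation(weights, x + 0.01 * rng.standard_normal(x.size))
    distance = cosine_distance(current, enrolled)
    print("cosine distance:", distance)
    print("same speaker?", distance < 0.25)       # threshold is illustrative only
```

In this sketch the output of speaker_representation plays the role of the hidden-layer activations referred to in claims 12 and 20, and the cosine distance against a stored enrolled representation stands in for the comparison of the speaker representation with the second representation recited in claim 13.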
US15/211,317 2016-07-15 2016-07-15 Speaker verification Abandoned US20180018973A1 (en)

Priority Applications (11)

Application Number Priority Date Filing Date Title
US15/211,317 US20180018973A1 (en) 2016-07-15 2016-07-15 Speaker verification
CN201780003481.3A CN108140386B (en) 2016-07-15 2017-07-06 Speaker verification
EP17740860.6A EP3345181B1 (en) 2016-07-15 2017-07-06 Speaker verification
KR1020187009479A KR102109874B1 (en) 2016-07-15 2017-07-06 Speaker verification
PCT/US2017/040906 WO2018013401A1 (en) 2016-07-15 2017-07-06 Speaker verification
JP2019500442A JP6561219B1 (en) 2016-07-15 2017-07-06 Speaker verification
EP18165912.9A EP3373294B1 (en) 2016-07-15 2017-07-06 Speaker verification
RU2018112272A RU2697736C1 (en) 2016-07-15 2017-07-06 Speaker verification
US15/995,480 US10403291B2 (en) 2016-07-15 2018-06-01 Improving speaker verification across locations, languages, and/or dialects
US16/557,390 US11017784B2 (en) 2016-07-15 2019-08-30 Speaker verification across locations, languages, and/or dialects
US17/307,704 US11594230B2 (en) 2016-07-15 2021-05-04 Speaker verification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US15/211,317 US20180018973A1 (en) 2016-07-15 2016-07-15 Speaker verification

Related Child Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2017/040906 Continuation WO2018013401A1 (en) 2016-07-15 2017-07-06 Speaker verification

Publications (1)

Publication Number Publication Date
US20180018973A1 true US20180018973A1 (en) 2018-01-18

Family

ID=59366524

Family Applications (4)

Application Number Title Priority Date Filing Date
US15/211,317 Abandoned US20180018973A1 (en) 2016-07-15 2016-07-15 Speaker verification
US15/995,480 Active US10403291B2 (en) 2016-07-15 2018-06-01 Improving speaker verification across locations, languages, and/or dialects
US16/557,390 Active 2036-10-11 US11017784B2 (en) 2016-07-15 2019-08-30 Speaker verification across locations, languages, and/or dialects
US17/307,704 Active 2036-11-08 US11594230B2 (en) 2016-07-15 2021-05-04 Speaker verification

Family Applications After (3)

Application Number Title Priority Date Filing Date
US15/995,480 Active US10403291B2 (en) 2016-07-15 2018-06-01 Improving speaker verification across locations, languages, and/or dialects
US16/557,390 Active 2036-10-11 US11017784B2 (en) 2016-07-15 2019-08-30 Speaker verification across locations, languages, and/or dialects
US17/307,704 Active 2036-11-08 US11594230B2 (en) 2016-07-15 2021-05-04 Speaker verification

Country Status (7)

Country Link
US (4) US20180018973A1 (en)
EP (2) EP3345181B1 (en)
JP (1) JP6561219B1 (en)
KR (1) KR102109874B1 (en)
CN (1) CN108140386B (en)
RU (1) RU2697736C1 (en)
WO (1) WO2018013401A1 (en)

Cited By (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108597525A (en) * 2018-04-25 2018-09-28 四川远鉴科技有限公司 Voice vocal print modeling method and device
US20190156837A1 (en) * 2017-11-23 2019-05-23 Samsung Electronics Co., Ltd. Neural network device for speaker recognition, and method of operation thereof
US20190189111A1 (en) * 2017-12-15 2019-06-20 Mitsubishi Electric Research Laboratories, Inc. Method and Apparatus for Multi-Lingual End-to-End Speech Recognition
WO2019140689A1 (en) 2018-01-22 2019-07-25 Nokia Technologies Oy Privacy-preserving voiceprint authentication apparatus and method
CN110164452A (en) * 2018-10-10 2019-08-23 腾讯科技(深圳)有限公司 A kind of method of Application on Voiceprint Recognition, the method for model training and server
US20190341055A1 (en) * 2018-05-07 2019-11-07 Microsoft Technology Licensing, Llc Voice identification enrollment
WO2019246220A1 (en) * 2018-06-22 2019-12-26 Babblelabs, Inc. Data driven audio enhancement
JP2019219574A (en) * 2018-06-21 2019-12-26 株式会社東芝 Speaker model creation system, recognition system, program and control device
CN110914898A (en) * 2018-05-28 2020-03-24 北京嘀嘀无限科技发展有限公司 System and method for speech recognition
CN111370003A (en) * 2020-02-27 2020-07-03 杭州雄迈集成电路技术股份有限公司 Voiceprint comparison method based on twin neural network
US10783873B1 (en) * 2017-12-15 2020-09-22 Educational Testing Service Native language identification with time delay deep neural networks trained separately on native and non-native english corpora
US10916252B2 (en) 2017-11-10 2021-02-09 Nvidia Corporation Accelerated data transfer for latency reduction and real-time processing
US10930283B2 (en) * 2019-01-28 2021-02-23 Cheng Uei Precision Industry Co., Ltd. Sound recognition device and sound recognition method applied therein
US11017784B2 (en) 2016-07-15 2021-05-25 Google Llc Speaker verification across locations, languages, and/or dialects
US11132992B2 (en) 2019-05-05 2021-09-28 Microsoft Technology Licensing, Llc On-device custom wake word detection
US11158305B2 (en) * 2019-05-05 2021-10-26 Microsoft Technology Licensing, Llc Online verification of custom wake word
US11170788B2 (en) 2018-05-18 2021-11-09 Emotech Ltd. Speaker recognition
US20210375290A1 (en) * 2020-05-26 2021-12-02 Apple Inc. Personalized voices for text messaging
US11222622B2 (en) 2019-05-05 2022-01-11 Microsoft Technology Licensing, Llc Wake word selection assistance architectures and methods
US11276395B1 (en) * 2017-03-10 2022-03-15 Amazon Technologies, Inc. Voice-based parameter assignment for voice-capturing devices
US11289072B2 (en) * 2017-10-23 2022-03-29 Tencent Technology (Shenzhen) Company Limited Object recognition method, computer device, and computer-readable storage medium
US11437046B2 (en) * 2018-10-12 2022-09-06 Samsung Electronics Co., Ltd. Electronic apparatus, controlling method of electronic apparatus and computer readable medium
US20220292134A1 (en) * 2021-03-09 2022-09-15 Qualcomm Incorporated Device operation based on dynamic classifier
US11487821B2 (en) * 2019-04-30 2022-11-01 Walmart Apollo, Llc Systems and methods for processing retail facility-related information requests of retail facility workers
US11545146B2 (en) * 2016-11-10 2023-01-03 Cerence Operating Company Techniques for language independent wake-up word detection
EP4123641A4 (en) * 2020-03-16 2023-08-23 Panasonic Intellectual Property Corporation of America Information transmission device, information reception device, information transmission method, program, and system
US11750962B2 (en) 2020-07-21 2023-09-05 Apple Inc. User identification using headphones
US11790914B2 (en) 2019-06-01 2023-10-17 Apple Inc. Methods and user interfaces for voice-based control of electronic devices
US11809886B2 (en) 2015-11-06 2023-11-07 Apple Inc. Intelligent automated assistant in a messaging environment
US11838579B2 (en) 2014-06-30 2023-12-05 Apple Inc. Intelligent automated assistant for TV user interactions
US11837237B2 (en) 2017-05-12 2023-12-05 Apple Inc. User-specific acoustic models
US11838734B2 (en) 2020-07-20 2023-12-05 Apple Inc. Multi-device audio adjustment coordination
US11862151B2 (en) 2017-05-12 2024-01-02 Apple Inc. Low-latency intelligent automated assistant
US11862186B2 (en) 2013-02-07 2024-01-02 Apple Inc. Voice trigger for a digital assistant
US11893992B2 (en) 2018-09-28 2024-02-06 Apple Inc. Multi-modal inputs for voice commands
US11907436B2 (en) 2018-05-07 2024-02-20 Apple Inc. Raise to speak
US11914848B2 (en) 2020-05-11 2024-02-27 Apple Inc. Providing relevant data items based on context
US11948582B2 (en) * 2019-03-25 2024-04-02 Omilia Natural Language Solutions Ltd. Systems and methods for speaker verification
US11954405B2 (en) 2015-09-08 2024-04-09 Apple Inc. Zero latency digital assistant

Families Citing this family (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11676608B2 (en) * 2021-04-02 2023-06-13 Google Llc Speaker verification using co-location information
CN106469040B (en) * 2015-08-19 2019-06-21 华为终端有限公司 Communication means, server and equipment
CN106251859B (en) * 2016-07-22 2019-05-31 百度在线网络技术(北京)有限公司 Voice recognition processing method and apparatus
AU2017425736A1 (en) * 2017-07-31 2020-01-23 Beijing Didi Infinity Technology And Development Co., Ltd. System and method for language-based service hailing
US11817103B2 (en) 2017-09-15 2023-11-14 Nec Corporation Pattern recognition apparatus, pattern recognition method, and storage medium
CN110634489B (en) * 2018-06-25 2022-01-14 科大讯飞股份有限公司 Voiceprint confirmation method, voiceprint confirmation device, voiceprint confirmation equipment and readable storage medium
KR20200011796A (en) * 2018-07-25 2020-02-04 엘지전자 주식회사 Voice recognition system
CN110874875B (en) * 2018-08-13 2021-01-29 珠海格力电器股份有限公司 Door lock control method and device
US10978059B2 (en) * 2018-09-25 2021-04-13 Google Llc Speaker diarization using speaker embedding(s) and trained generative model
US11144542B2 (en) * 2018-11-01 2021-10-12 Visa International Service Association Natural language processing system
US11031017B2 (en) * 2019-01-08 2021-06-08 Google Llc Fully supervised speaker diarization
US10978069B1 (en) * 2019-03-18 2021-04-13 Amazon Technologies, Inc. Word selection for natural language interface
CN113646835A (en) * 2019-04-05 2021-11-12 谷歌有限责任公司 Joint automatic speech recognition and speaker binarization
US11031013B1 (en) 2019-06-17 2021-06-08 Express Scripts Strategic Development, Inc. Task completion based on speech analysis
CN110400562B (en) * 2019-06-24 2022-03-22 歌尔科技有限公司 Interactive processing method, device, equipment and audio equipment
CN110415679B (en) * 2019-07-25 2021-12-17 北京百度网讯科技有限公司 Voice error correction method, device, equipment and storage medium
CN110379433B (en) * 2019-08-02 2021-10-08 清华大学 Identity authentication method and device, computer equipment and storage medium
EP4086904A1 (en) 2019-12-04 2022-11-09 Google LLC Speaker awareness using speaker dependent speech model(s)
RU2723902C1 (en) * 2020-02-15 2020-06-18 Илья Владимирович Редкокашин Method of verifying voice biometric data
JP7388239B2 (en) * 2020-02-21 2023-11-29 日本電信電話株式会社 Verification device, verification method, and verification program
US11651767B2 (en) 2020-03-03 2023-05-16 International Business Machines Corporation Metric learning of speaker diarization
US11443748B2 (en) * 2020-03-03 2022-09-13 International Business Machines Corporation Metric learning of speaker diarization
US20210287681A1 (en) * 2020-03-16 2021-09-16 Fidelity Information Services, Llc Systems and methods for contactless authentication using voice recognition
KR102277422B1 (en) * 2020-07-24 2021-07-19 이종엽 Voice verification and restriction method of the voice system
US11676572B2 (en) * 2021-03-03 2023-06-13 Google Llc Instantaneous learning in text-to-speech during dialog
US11798562B2 (en) * 2021-05-16 2023-10-24 Google Llc Attentive scoring function for speaker identification
WO2023076653A2 (en) * 2021-11-01 2023-05-04 Pindrop Security, Inc. Cross-lingual speaker recognition

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150199963A1 (en) * 2012-10-23 2015-07-16 Google Inc. Mobile speech recognition hardware accelerator

Family Cites Families (159)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4799262A (en) 1985-06-27 1989-01-17 Kurzweil Applied Intelligence, Inc. Speech recognition
US4868867A (en) 1987-04-06 1989-09-19 Voicecraft Inc. Vector excitation speech or audio coder for transmission or storage
JP2733955B2 (en) 1988-05-18 1998-03-30 日本電気株式会社 Adaptive speech recognition device
US5465318A (en) 1991-03-28 1995-11-07 Kurzweil Applied Intelligence, Inc. Method for generating a speech recognition model for a non-vocabulary utterance
JP2979711B2 (en) 1991-04-24 1999-11-15 日本電気株式会社 Pattern recognition method and standard pattern learning method
US5680508A (en) 1991-05-03 1997-10-21 Itt Corporation Enhancement of speech coding in background noise for low-rate speech coder
EP0576765A1 (en) 1992-06-30 1994-01-05 International Business Machines Corporation Method for coding digital data using vector quantizing techniques and device for implementing said method
US5636325A (en) 1992-11-13 1997-06-03 International Business Machines Corporation Speech synthesis and analysis of dialects
US5627939A (en) 1993-09-03 1997-05-06 Microsoft Corporation Speech recognition system and method employing data compression
US5689616A (en) * 1993-11-19 1997-11-18 Itt Corporation Automatic language identification/verification system
US5509103A (en) 1994-06-03 1996-04-16 Motorola, Inc. Method of training neural networks used for speech recognition
US5542006A (en) 1994-06-21 1996-07-30 Eastman Kodak Company Neural network based character position detector for use in optical character recognition
US5729656A (en) 1994-11-30 1998-03-17 International Business Machines Corporation Reduction of search space in speech recognition using phone boundaries and phone ranking
US5839103A (en) * 1995-06-07 1998-11-17 Rutgers, The State University Of New Jersey Speaker verification system using decision fusion logic
US6067517A (en) 1996-02-02 2000-05-23 International Business Machines Corporation Transcription of speech data with segments from acoustically dissimilar environments
US5729694A (en) 1996-02-06 1998-03-17 The Regents Of The University Of California Speech coding, reconstruction and recognition using acoustics and electromagnetic waves
US5745872A (en) 1996-05-07 1998-04-28 Texas Instruments Incorporated Method and system for compensating speech signals using vector quantization codebook adaptation
US6038528A (en) 1996-07-17 2000-03-14 T-Netix, Inc. Robust speech processing with affine transform replicated data
US6539352B1 (en) * 1996-11-22 2003-03-25 Manish Sharma Subword-based speaker verification with multiple-classifier score fusion weight and threshold adaptation
US6260013B1 (en) 1997-03-14 2001-07-10 Lernout & Hauspie Speech Products N.V. Speech recognition system employing discriminatively trained models
KR100238189B1 (en) 1997-10-16 2000-01-15 윤종용 Multi-language tts device and method
WO1999023643A1 (en) * 1997-11-03 1999-05-14 T-Netix, Inc. Model adaptation system and method for speaker verification
US6188982B1 (en) 1997-12-01 2001-02-13 Industrial Technology Research Institute On-line background noise adaptation of parallel model combination HMM with discriminative learning using weighted HMM for noisy speech recognition
US6397179B2 (en) 1997-12-24 2002-05-28 Nortel Networks Limited Search optimization system and method for continuous speech recognition
US6381569B1 (en) 1998-02-04 2002-04-30 Qualcomm Incorporated Noise-compensated speech recognition templates
US6434520B1 (en) 1999-04-16 2002-08-13 International Business Machines Corporation System and method for indexing and querying audio archives
US6665644B1 (en) 1999-08-10 2003-12-16 International Business Machines Corporation Conversational data mining
GB9927528D0 (en) 1999-11-23 2000-01-19 Ibm Automatic language identification
DE10018134A1 (en) 2000-04-12 2001-10-18 Siemens Ag Determining prosodic markings for text-to-speech systems - using neural network to determine prosodic markings based on linguistic categories such as number, verb, verb particle, pronoun, preposition etc.
US6631348B1 (en) 2000-08-08 2003-10-07 Intel Corporation Dynamic speech recognition pattern switching for enhanced speech recognition accuracy
DE10047172C1 (en) 2000-09-22 2001-11-29 Siemens Ag Speech processing involves comparing output parameters generated and to be generated and deriving change instruction using reduced weight of input parameters with little influence
US6876966B1 (en) 2000-10-16 2005-04-05 Microsoft Corporation Pattern recognition training method and apparatus using inserted noise followed by noise reduction
JP4244514B2 (en) 2000-10-23 2009-03-25 セイコーエプソン株式会社 Speech recognition method and speech recognition apparatus
US7280969B2 (en) 2000-12-07 2007-10-09 International Business Machines Corporation Method and apparatus for producing natural sounding pitch contours in a speech synthesizer
GB2370401A (en) 2000-12-19 2002-06-26 Nokia Mobile Phones Ltd Speech recognition
US7062442B2 (en) 2001-02-23 2006-06-13 Popcatcher Ab Method and arrangement for search and recording of media signals
GB2375673A (en) 2001-05-14 2002-11-20 Salgen Systems Ltd Image compression method using a table of hash values corresponding to motion vectors
GB2375935A (en) 2001-05-22 2002-11-27 Motorola Inc Speech quality indication
GB0113581D0 (en) 2001-06-04 2001-07-25 Hewlett Packard Co Speech synthesis apparatus
US7668718B2 (en) 2001-07-17 2010-02-23 Custom Speech Usa, Inc. Synchronized pattern recognition source data processed by manual or automatic means for creation of shared speaker-dependent speech user profile
US20030033143A1 (en) 2001-08-13 2003-02-13 Hagai Aronowitz Decreasing noise sensitivity in speech processing under adverse conditions
US7571095B2 (en) 2001-08-15 2009-08-04 Sri International Method and apparatus for recognizing speech in a noisy environment
US7043431B2 (en) 2001-08-31 2006-05-09 Nokia Corporation Multilingual speech recognition system using text derived recognition models
US6950796B2 (en) 2001-11-05 2005-09-27 Motorola, Inc. Speech recognition by dynamical noise model adaptation
US7286987B2 (en) 2002-06-28 2007-10-23 Conceptual Speech Llc Multi-phoneme streamer and knowledge representation speech recognition system and method
US20040024585A1 (en) 2002-07-03 2004-02-05 Amit Srivastava Linguistic segmentation of speech
US6756821B2 (en) * 2002-07-23 2004-06-29 Broadcom High speed differential signaling logic gate and applications thereof
JP4352790B2 (en) 2002-10-31 2009-10-28 セイコーエプソン株式会社 Acoustic model creation method, speech recognition device, and vehicle having speech recognition device
US20040111272A1 (en) 2002-12-10 2004-06-10 International Business Machines Corporation Multimodal speech-to-speech language translation and display
US7593842B2 (en) 2002-12-10 2009-09-22 Leslie Rousseau Device and method for translating language
KR100486735B1 (en) 2003-02-28 2005-05-03 삼성전자주식회사 Method of establishing optimum-partitioned classifed neural network and apparatus and method and apparatus for automatic labeling using optimum-partitioned classifed neural network
US7571097B2 (en) 2003-03-13 2009-08-04 Microsoft Corporation Method for training of subspace coded gaussian models
US8849185B2 (en) 2003-04-15 2014-09-30 Ipventure, Inc. Hybrid audio delivery system and method therefor
JP2004325897A (en) 2003-04-25 2004-11-18 Pioneer Electronic Corp Apparatus and method for speech recognition
US7275032B2 (en) 2003-04-25 2007-09-25 Bvoice Corporation Telephone call handling center where operators utilize synthesized voices generated or modified to exhibit or omit prescribed speech characteristics
US7499857B2 (en) 2003-05-15 2009-03-03 Microsoft Corporation Adaptation of compressed acoustic models
US20040260550A1 (en) 2003-06-20 2004-12-23 Burges Chris J.C. Audio processing system and method for classifying speakers in audio data
JP4548646B2 (en) 2003-09-12 2010-09-22 株式会社エヌ・ティ・ティ・ドコモ Noise model noise adaptation system, noise adaptation method, and speech recognition noise adaptation program
US20050144003A1 (en) 2003-12-08 2005-06-30 Nokia Corporation Multi-lingual speech synthesis
FR2865846A1 (en) 2004-02-02 2005-08-05 France Telecom VOICE SYNTHESIS SYSTEM
FR2867598B1 (en) 2004-03-12 2006-05-26 Thales Sa METHOD FOR AUTOMATIC LANGUAGE IDENTIFICATION IN REAL TIME IN AN AUDIO SIGNAL AND DEVICE FOR IMPLEMENTING SAID METHOD
US20050228673A1 (en) 2004-03-30 2005-10-13 Nefian Ara V Techniques for separating and evaluating audio and video source data
FR2868586A1 (en) 2004-03-31 2005-10-07 France Telecom IMPROVED METHOD AND SYSTEM FOR CONVERTING A VOICE SIGNAL
US20050267755A1 (en) 2004-05-27 2005-12-01 Nokia Corporation Arrangement for speech recognition
US7406408B1 (en) 2004-08-24 2008-07-29 The United States Of America As Represented By The Director, National Security Agency Method of recognizing phones in speech of any language
US7418383B2 (en) 2004-09-03 2008-08-26 Microsoft Corporation Noise robust speech recognition with a switching linear dynamic model
WO2006089055A1 (en) 2005-02-15 2006-08-24 Bbn Technologies Corp. Speech analyzing system with adaptive noise codebook
US20060253272A1 (en) 2005-05-06 2006-11-09 International Business Machines Corporation Voice prompts for use in speech-to-speech translation system
CN101176146B (en) 2005-05-18 2011-05-18 松下电器产业株式会社 Speech synthesizer
EP1889255A1 (en) 2005-05-24 2008-02-20 Loquendo S.p.A. Automatic text-independent, language-independent speaker voice-print creation and speaker recognition
US20070088552A1 (en) 2005-10-17 2007-04-19 Nokia Corporation Method and a device for speech recognition
US20070118372A1 (en) 2005-11-23 2007-05-24 General Electric Company System and method for generating closed captions
JP2009518884A (en) 2005-11-29 2009-05-07 グーグル・インコーポレーテッド Mass media social and interactive applications
US7539616B2 (en) * 2006-02-20 2009-05-26 Microsoft Corporation Speaker authentication using adapted background models
US20080004858A1 (en) 2006-06-29 2008-01-03 International Business Machines Corporation Apparatus and method for integrated phrase-based and free-form speech-to-speech translation
US7996222B2 (en) 2006-09-29 2011-08-09 Nokia Corporation Prosody conversion
CN101166017B (en) 2006-10-20 2011-12-07 松下电器产业株式会社 Automatic murmur compensation method and device for sound generation apparatus
US8204739B2 (en) 2008-04-15 2012-06-19 Mobile Technologies, Llc System and methods for maintaining speech-to-speech translation in the field
CA2676380C (en) 2007-01-23 2015-11-24 Infoture, Inc. System and method for detection and analysis of speech
US7848924B2 (en) 2007-04-17 2010-12-07 Nokia Corporation Method, apparatus and computer program product for providing voice conversion using temporal dynamic features
US20080300875A1 (en) 2007-06-04 2008-12-04 Texas Instruments Incorporated Efficient Speech Recognition with Cluster Methods
CN101359473A (en) 2007-07-30 2009-02-04 国际商业机器公司 Auto speech conversion method and apparatus
GB2453366B (en) 2007-10-04 2011-04-06 Toshiba Res Europ Ltd Automatic speech recognition method and apparatus
JP4944241B2 (en) * 2008-03-14 2012-05-30 名古屋油化株式会社 Release sheet and molded product
US8615397B2 (en) 2008-04-04 2013-12-24 Intuit Inc. Identifying audio content using distorted target patterns
CN101562013B (en) * 2008-04-15 2013-05-22 联芯科技有限公司 Method and device for automatically recognizing voice
US8374873B2 (en) 2008-08-12 2013-02-12 Morphism, Llc Training and applying prosody models
US20100057435A1 (en) 2008-08-29 2010-03-04 Kent Justin R System and method for speech-to-speech translation
US8239195B2 (en) 2008-09-23 2012-08-07 Microsoft Corporation Adapting a compressed model for use in speech recognition
US8332223B2 (en) * 2008-10-24 2012-12-11 Nuance Communications, Inc. Speaker verification methods and apparatus
CA2748695C (en) 2008-12-31 2017-11-07 Bce Inc. System and method for unlocking a device
US20100198577A1 (en) 2009-02-03 2010-08-05 Microsoft Corporation State mapping for cross-language speaker adaptation
US8380507B2 (en) 2009-03-09 2013-02-19 Apple Inc. Systems and methods for determining the language to use for speech generated by a text to speech engine
WO2010105089A1 (en) 2009-03-11 2010-09-16 Google Inc. Audio classification for information retrieval using sparse features
US9009039B2 (en) 2009-06-12 2015-04-14 Microsoft Technology Licensing, Llc Noise adaptive training for speech recognition
US20110238407A1 (en) 2009-08-31 2011-09-29 O3 Technologies, Llc Systems and methods for speech-to-speech translation
US8886531B2 (en) 2010-01-13 2014-11-11 Rovi Technologies Corporation Apparatus and method for generating an audio fingerprint and using a two-stage query
US8700394B2 (en) 2010-03-24 2014-04-15 Microsoft Corporation Acoustic model adaptation using splines
US8234111B2 (en) 2010-06-14 2012-07-31 Google Inc. Speech and noise models for speech recognition
US20110313762A1 (en) 2010-06-20 2011-12-22 International Business Machines Corporation Speech output with confidence indication
US8725506B2 (en) 2010-06-30 2014-05-13 Intel Corporation Speech audio processing
ES2540995T3 (en) 2010-08-24 2015-07-15 Veovox Sa System and method to recognize a user voice command in a noisy environment
US8782012B2 (en) 2010-08-27 2014-07-15 International Business Machines Corporation Network analysis
US8972253B2 (en) 2010-09-15 2015-03-03 Microsoft Technology Licensing, Llc Deep belief network for large vocabulary continuous speech recognition
EP2431969B1 (en) 2010-09-15 2013-04-03 Svox AG Speech recognition with small CPU footprint and reduced quantization error
US9318114B2 (en) 2010-11-24 2016-04-19 At&T Intellectual Property I, L.P. System and method for generating challenge utterances for speaker verification
US20120143604A1 (en) 2010-12-07 2012-06-07 Rita Singh Method for Restoring Spectral Components in Denoised Speech Signals
TWI413105B (en) 2010-12-30 2013-10-21 Ind Tech Res Inst Multi-lingual text-to-speech synthesis system and method
US9286886B2 (en) 2011-01-24 2016-03-15 Nuance Communications, Inc. Methods and apparatus for predicting prosody in speech synthesis
US8594993B2 (en) 2011-04-04 2013-11-26 Microsoft Corporation Frame mapping approach for cross-lingual voice transformation
US8260615B1 (en) 2011-04-25 2012-09-04 Google Inc. Cross-lingual initialization of language models
WO2012089288A1 (en) 2011-06-06 2012-07-05 Bridge Mediatech, S.L. Method and system for robust audio hashing
US8768707B2 (en) * 2011-09-27 2014-07-01 Sensory Incorporated Background speech recognition assistant using speaker verification
US9235799B2 (en) 2011-11-26 2016-01-12 Microsoft Technology Licensing, Llc Discriminative pretraining of deep neural networks
CN103562993B (en) * 2011-12-16 2015-05-27 华为技术有限公司 Speaker recognition method and device
CA2806372C (en) 2012-02-16 2016-07-19 Qnx Software Systems Limited System and method for dynamic residual noise shaping
US9042867B2 (en) 2012-02-24 2015-05-26 Agnitio S.L. System and method for speaker recognition on mobile devices
JP5875414B2 (en) 2012-03-07 2016-03-02 International Business Machines Corporation Noise suppression method, program and apparatus
WO2013149123A1 (en) 2012-03-30 2013-10-03 The Ohio State University Monaural speech filter
US9368104B2 (en) 2012-04-30 2016-06-14 Src, Inc. System and method for synthesizing human speech using multiple speakers and context
US20130297299A1 (en) 2012-05-07 2013-11-07 Board Of Trustees Of Michigan State University Sparse Auditory Reproducing Kernel (SPARK) Features for Noise-Robust Speech and Speaker Recognition
US9489950B2 (en) 2012-05-31 2016-11-08 Agency For Science, Technology And Research Method and system for dual scoring for text-dependent speaker verification
US9123338B1 (en) 2012-06-01 2015-09-01 Google Inc. Background audio identification for speech disambiguation
US9704068B2 (en) 2012-06-22 2017-07-11 Google Inc. System and method for labelling aerial images
US9536528B2 (en) * 2012-07-03 2017-01-03 Google Inc. Determining hotword suitability
US9336771B2 (en) 2012-11-01 2016-05-10 Google Inc. Speech recognition using non-parametric models
US9477925B2 (en) 2012-11-20 2016-10-25 Microsoft Technology Licensing, Llc Deep neural networks training for speech and pattern recognition
US9263036B1 (en) 2012-11-29 2016-02-16 Google Inc. System and method for speech recognition using deep recurrent neural networks
US20140156575A1 (en) 2012-11-30 2014-06-05 Nuance Communications, Inc. Method and Apparatus of Processing Data Using Deep Belief Networks Employing Low-Rank Matrix Factorization
US9230550B2 (en) * 2013-01-10 2016-01-05 Sensory, Incorporated Speaker verification and identification using artificial neural network-based sub-phonetic unit discrimination
US9502038B2 (en) * 2013-01-28 2016-11-22 Tencent Technology (Shenzhen) Company Limited Method and device for voiceprint recognition
US9454958B2 (en) 2013-03-07 2016-09-27 Microsoft Technology Licensing, Llc Exploiting heterogeneous data in deep neural network-based speech recognition systems
US9361885B2 (en) 2013-03-12 2016-06-07 Nuance Communications, Inc. Methods and apparatus for detecting a voice command
US9728184B2 (en) 2013-06-18 2017-08-08 Microsoft Technology Licensing, Llc Restructuring deep neural network acoustic models
JP5734354B2 (en) * 2013-06-26 2015-06-17 ファナック株式会社 Tool clamping device
US9311915B2 (en) 2013-07-31 2016-04-12 Google Inc. Context-based speech recognition
US9679258B2 (en) 2013-10-08 2017-06-13 Google Inc. Methods and apparatus for reinforcement learning
US9401148B2 (en) * 2013-11-04 2016-07-26 Google Inc. Speaker verification using neural networks
US9620145B2 (en) 2013-11-01 2017-04-11 Google Inc. Context-dependent state tying using a neural network
US9514753B2 (en) * 2013-11-04 2016-12-06 Google Inc. Speaker identification using hash-based indexing
US9715660B2 (en) 2013-11-04 2017-07-25 Google Inc. Transfer learning for deep neural network based hotword detection
CN104700831B (en) * 2013-12-05 2018-03-06 国际商业机器公司 The method and apparatus for analyzing the phonetic feature of audio file
US8965112B1 (en) 2013-12-09 2015-02-24 Google Inc. Sequence transcription with deep neural networks
US9195656B2 (en) 2013-12-30 2015-11-24 Google Inc. Multilingual prosody generation
US9589564B2 (en) 2014-02-05 2017-03-07 Google Inc. Multiple speech locale-specific hotword classifiers for selection of a speech locale
US20150228277A1 (en) 2014-02-11 2015-08-13 Malaspina Labs (Barbados), Inc. Voiced Sound Pattern Detection
US10102848B2 (en) 2014-02-28 2018-10-16 Google Llc Hotwords presentation framework
US9412358B2 (en) * 2014-05-13 2016-08-09 At&T Intellectual Property I, L.P. System and method for data-driven socially customized models for language generation
US9728185B2 (en) 2014-05-22 2017-08-08 Google Inc. Recognizing speech using neural networks
US20150364129A1 (en) 2014-06-17 2015-12-17 Google Inc. Language Identification
CN104008751A (en) * 2014-06-18 2014-08-27 周婷婷 Speaker recognition method based on BP neural network
CN104168270B (en) * 2014-07-31 2016-01-13 腾讯科技(深圳)有限公司 Auth method, server, client and system
US9378731B2 (en) 2014-09-25 2016-06-28 Google Inc. Acoustic model training corpus selection
US9299347B1 (en) 2014-10-22 2016-03-29 Google Inc. Speech recognition using associative mapping
US9418656B2 (en) * 2014-10-29 2016-08-16 Google Inc. Multi-stage hotword detection
CN104732978B (en) * 2015-03-12 2018-05-08 上海交通大学 The relevant method for distinguishing speek person of text based on combined depth study
EP3067884B1 (en) * 2015-03-13 2019-05-08 Samsung Electronics Co., Ltd. Speech recognition system and speech recognition method thereof
US9978374B2 (en) * 2015-09-04 2018-05-22 Google Llc Neural networks for speaker verification
US20180018973A1 (en) 2016-07-15 2018-01-18 Google Inc. Speaker verification

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150199963A1 (en) * 2012-10-23 2015-07-16 Google Inc. Mobile speech recognition hardware accelerator

Cited By (52)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11862186B2 (en) 2013-02-07 2024-01-02 Apple Inc. Voice trigger for a digital assistant
US11838579B2 (en) 2014-06-30 2023-12-05 Apple Inc. Intelligent automated assistant for TV user interactions
US11954405B2 (en) 2015-09-08 2024-04-09 Apple Inc. Zero latency digital assistant
US11809886B2 (en) 2015-11-06 2023-11-07 Apple Inc. Intelligent automated assistant in a messaging environment
US11594230B2 (en) 2016-07-15 2023-02-28 Google Llc Speaker verification
US11017784B2 (en) 2016-07-15 2021-05-25 Google Llc Speaker verification across locations, languages, and/or dialects
US11545146B2 (en) * 2016-11-10 2023-01-03 Cerence Operating Company Techniques for language independent wake-up word detection
US20230082944A1 (en) * 2016-11-10 2023-03-16 Cerence Operating Company Techniques for language independent wake-up word detection
US11276395B1 (en) * 2017-03-10 2022-03-15 Amazon Technologies, Inc. Voice-based parameter assignment for voice-capturing devices
US11837237B2 (en) 2017-05-12 2023-12-05 Apple Inc. User-specific acoustic models
US11862151B2 (en) 2017-05-12 2024-01-02 Apple Inc. Low-latency intelligent automated assistant
US11289072B2 (en) * 2017-10-23 2022-03-29 Tencent Technology (Shenzhen) Company Limited Object recognition method, computer device, and computer-readable storage medium
US10916252B2 (en) 2017-11-10 2021-02-09 Nvidia Corporation Accelerated data transfer for latency reduction and real-time processing
US20190156837A1 (en) * 2017-11-23 2019-05-23 Samsung Electronics Co., Ltd. Neural network device for speaker recognition, and method of operation thereof
US11094329B2 (en) * 2017-11-23 2021-08-17 Samsung Electronics Co., Ltd. Neural network device for speaker recognition, and method of operation thereof
US10783873B1 (en) * 2017-12-15 2020-09-22 Educational Testing Service Native language identification with time delay deep neural networks trained separately on native and non-native english corpora
US10593321B2 (en) * 2017-12-15 2020-03-17 Mitsubishi Electric Research Laboratories, Inc. Method and apparatus for multi-lingual end-to-end speech recognition
US20190189111A1 (en) * 2017-12-15 2019-06-20 Mitsubishi Electric Research Laboratories, Inc. Method and Apparatus for Multi-Lingual End-to-End Speech Recognition
EP3744152A4 (en) * 2018-01-22 2021-07-21 Nokia Technologies Oy Privacy-preservign voiceprint authentication apparatus and method
WO2019140689A1 (en) 2018-01-22 2019-07-25 Nokia Technologies Oy Privacy-preserving voiceprint authentication apparatus and method
CN111630934A (en) * 2018-01-22 2020-09-04 诺基亚技术有限公司 Voiceprint authentication device and method with privacy protection function
CN108597525A (en) * 2018-04-25 2018-09-28 四川远鉴科技有限公司 Voice vocal print modeling method and device
US11152006B2 (en) * 2018-05-07 2021-10-19 Microsoft Technology Licensing, Llc Voice identification enrollment
US11907436B2 (en) 2018-05-07 2024-02-20 Apple Inc. Raise to speak
US20190341055A1 (en) * 2018-05-07 2019-11-07 Microsoft Technology Licensing, Llc Voice identification enrollment
US11170788B2 (en) 2018-05-18 2021-11-09 Emotech Ltd. Speaker recognition
CN110914898A (en) * 2018-05-28 2020-03-24 北京嘀嘀无限科技发展有限公司 System and method for speech recognition
JP2019219574A (en) * 2018-06-21 2019-12-26 株式会社東芝 Speaker model creation system, recognition system, program and control device
US10991379B2 (en) 2018-06-22 2021-04-27 Babblelabs Llc Data driven audio enhancement
WO2019246220A1 (en) * 2018-06-22 2019-12-26 Babblelabs, Inc. Data driven audio enhancement
US11893992B2 (en) 2018-09-28 2024-02-06 Apple Inc. Multi-modal inputs for voice commands
JP7152514B2 (en) 2018-10-10 2022-10-12 テンセント・テクノロジー・(シェンジェン)・カンパニー・リミテッド Voiceprint identification method, model training method, server, and computer program
JP2021527840A (en) * 2018-10-10 2021-10-14 テンセント・テクノロジー・(シェンジェン)・カンパニー・リミテッド Voiceprint identification methods, model training methods, servers, and computer programs
US11508381B2 (en) 2018-10-10 2022-11-22 Tencent Technology (Shenzhen) Company Limited Voiceprint recognition method, model training method, and server
CN110164452A (en) * 2018-10-10 2019-08-23 腾讯科技(深圳)有限公司 A kind of method of Application on Voiceprint Recognition, the method for model training and server
US11437046B2 (en) * 2018-10-12 2022-09-06 Samsung Electronics Co., Ltd. Electronic apparatus, controlling method of electronic apparatus and computer readable medium
US10930283B2 (en) * 2019-01-28 2021-02-23 Cheng Uei Precision Industry Co., Ltd. Sound recognition device and sound recognition method applied therein
US11948582B2 (en) * 2019-03-25 2024-04-02 Omilia Natural Language Solutions Ltd. Systems and methods for speaker verification
US11487821B2 (en) * 2019-04-30 2022-11-01 Walmart Apollo, Llc Systems and methods for processing retail facility-related information requests of retail facility workers
US11222622B2 (en) 2019-05-05 2022-01-11 Microsoft Technology Licensing, Llc Wake word selection assistance architectures and methods
US11158305B2 (en) * 2019-05-05 2021-10-26 Microsoft Technology Licensing, Llc Online verification of custom wake word
US11132992B2 (en) 2019-05-05 2021-09-28 Microsoft Technology Licensing, Llc On-device custom wake word detection
US11790914B2 (en) 2019-06-01 2023-10-17 Apple Inc. Methods and user interfaces for voice-based control of electronic devices
CN111370003A (en) * 2020-02-27 2020-07-03 杭州雄迈集成电路技术股份有限公司 Voiceprint comparison method based on twin neural network
EP4123641A4 (en) * 2020-03-16 2023-08-23 Panasonic Intellectual Property Corporation of America Information transmission device, information reception device, information transmission method, program, and system
US11914848B2 (en) 2020-05-11 2024-02-27 Apple Inc. Providing relevant data items based on context
US11508380B2 (en) * 2020-05-26 2022-11-22 Apple Inc. Personalized voices for text messaging
US20210375290A1 (en) * 2020-05-26 2021-12-02 Apple Inc. Personalized voices for text messaging
US11838734B2 (en) 2020-07-20 2023-12-05 Apple Inc. Multi-device audio adjustment coordination
US11750962B2 (en) 2020-07-21 2023-09-05 Apple Inc. User identification using headphones
US11776550B2 (en) * 2021-03-09 2023-10-03 Qualcomm Incorporated Device operation based on dynamic classifier
US20220292134A1 (en) * 2021-03-09 2022-09-15 Qualcomm Incorporated Device operation based on dynamic classifier

Also Published As

Publication number Publication date
US11017784B2 (en) 2021-05-25
US20210256981A1 (en) 2021-08-19
WO2018013401A1 (en) 2018-01-18
EP3345181B1 (en) 2019-01-09
US20190385619A1 (en) 2019-12-19
US20180277124A1 (en) 2018-09-27
US11594230B2 (en) 2023-02-28
KR102109874B1 (en) 2020-05-12
US10403291B2 (en) 2019-09-03
JP2019530888A (en) 2019-10-24
EP3345181A1 (en) 2018-07-11
CN108140386A (en) 2018-06-08
CN108140386B (en) 2021-11-23
KR20180050365A (en) 2018-05-14
RU2697736C1 (en) 2019-08-19
EP3373294A1 (en) 2018-09-12
EP3373294B1 (en) 2019-12-18
JP6561219B1 (en) 2019-08-14

Similar Documents

Publication Publication Date Title
US11594230B2 (en) Speaker verification
US11056120B2 (en) Segment-based speaker verification using dynamically generated phrases
EP3690875B1 (en) Training and testing utterance-based frameworks
US10255922B1 (en) Speaker identification using a text-independent model and a text-dependent model
US20170236520A1 (en) Generating Models for Text-Dependent Speaker Verification
US20190318744A1 (en) Voice-based authentication
KR20230116886A (en) Self-supervised speech representation for fake audio detection
KR20230156145A (en) Hybrid multilingual text-dependent and text-independent speaker verification

Legal Events

Date Code Title Description
AS Assignment

Owner name: GOOGLE INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MORENO, IGNACIO LOPEZ;WAN, LI;WANG, QUAN;SIGNING DATES FROM 20160914 TO 20160915;REEL/FRAME:039753/0053

AS Assignment

Owner name: GOOGLE LLC, CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:GOOGLE INC.;REEL/FRAME:044567/0001

Effective date: 20170929

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION