GB2552082A - Voice user interface - Google Patents
Voice user interface
- Publication number
- GB2552082A GB2552082A GB1708954.1A GB201708954A GB2552082A GB 2552082 A GB2552082 A GB 2552082A GB 201708954 A GB201708954 A GB 201708954A GB 2552082 A GB2552082 A GB 2552082A
- Authority
- GB
- United Kingdom
- Prior art keywords
- authentication
- score
- segment
- speech
- authentication score
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
- G10L17/22—Interactive procedures; Man-machine interfaces
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
- G10L17/02—Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
- G10L17/04—Training, enrolment or model building
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
- G10L17/06—Decision making techniques; Pattern matching strategies
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
- G10L17/06—Decision making techniques; Pattern matching strategies
- G10L17/12—Score normalisation
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L2015/088—Word spotting
Abstract
A method of speaker verification comprises: receiving a speech signal 50; dividing the speech signal into segments 52; and, following each segment 54, obtaining an authentication score based on said segment and previously received segments 56, wherein the authentication score represents a probability that the speech signal comes from a specific registered speaker. In response to an authentication request, an authentication result is output based on the authentication score 58. The method may either form a cumulative or running average of frame scores, which may be a weighted average, accrue the frame features over a plurality of frames and compare these features to a model, or collect audio frames into a larger audio sample which may be analysed. The result may be arrived at by comparing the overall authentication result to a threshold. The threshold may vary depending on the level of security required by a separate program. An apparatus to perform the method is also claimed.
Description
(71) Applicant(s): Cirrus Logic International Semiconductor Limited, 7B Nightingale Way, Quartermile, Edinburgh, EH3 9EG, United Kingdom
(51) INT CL: G06F 21/32 (2013.01); G10L 17/06 (2013.01); G10L 17/12 (2013.01)
(56) Documents Cited: US 20150301796 A1; US 20100204993 A1; US 20080255842 A1
(58) Field of Search: INT CL G06F, G10L, H04L; Other: EPODOC, WPI, full text databases
(72) Inventor(s): Carlos Vaquero Aviles-Casco; David Martinez Gonzalez; Ryan Roberts
(74) Agent and/or Address for Service: Haseltine Lake LLP, Redcliff Quay, 120 Redcliff Street, BRISTOL, BS1 6HU, United Kingdom
(54) Title of the Invention: Voice user interface
Abstract Title: Speaker Authentication by frame-by-frame analysis
Figure 3 (drawing sheet 3/5)
VOICE USER INTERFACE
Technical Field
The embodiments described herein relate to a method and system for use in a voice user interface, for example for allowing a user to control the operation of a device using speech.
Background of the invention
Voice user interfaces are provided to allow a user to interact with a system using their voice. One advantage of this, for example in devices such as smartphones, tablet computers and the like, is that it allows the user to operate the device in a hands-free manner.
In one typical system, the user wakes the voice user interface from a low-power standby mode by speaking a trigger phrase. Speech recognition techniques are used to detect that the trigger phrase has been spoken and, separately, a speaker recognition process is used to confirm that the trigger phrase was spoken by a registered user of the device.
The voice user interface may then provide a prompt to the user, to confirm that the system is active, and the user may then speak a command, which can be recognised by the voice user interface using speech recognition techniques.
The voice user interface may then act on that spoken command. For example, if the spoken command asks for publicly available information, the spoken command may be recognised, and used to generate a query to an internet search engine in order to be able to supply that information to the user.
However, in other cases, for example if the spoken command relates to personal information, the level of authentication provided by the speaker recognition process may be considered insufficient for the voice user interface to act on that command. In such cases, the user may be asked to provide an additional form of authentication, for example by entering a PIN or password through a keypad of the device, or by providing additional biometric authentication, such as a fingerprint scan.
This means that the user is no longer able to operate the device in a hands-free manner.
Summary of the invention
According to the embodiments described herein, there is provided a method and a system which reduce or avoid one or more of the disadvantages mentioned above.
According to a first aspect of the invention, there is provided a method of speaker authentication, comprising:
receiving a speech signal;
dividing the speech signal into segments;
following each segment, obtaining an authentication score based on said segment and previously received segments, wherein the authentication score represents a probability that the speech signal comes from a specific registered speaker; and outputting an authentication result based on the authentication score in response to an authentication request.
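Purely as an illustrative sketch (not part of the claimed method), the segment-by-segment scoring loop might be expressed in Python as follows, assuming a hypothetical score_segment() function that returns a per-segment score together with the duration of net speech it used, and a hypothetical authentication_requested() callback:

def authenticate_stream(segments, score_segment, threshold, authentication_requested):
    """Illustrative only: score a speech signal segment by segment.

    segments: an iterable of speech segments (e.g. successive 1 s portions)
    score_segment: hypothetical scorer returning (score, net_speech_seconds)
    authentication_requested: callable returning True when a result is needed
    """
    weighted_sum = 0.0
    total_speech = 0.0
    authentication_score = float("-inf")
    for segment in segments:
        score, speech_seconds = score_segment(segment)
        weighted_sum += score * speech_seconds   # accumulate over this and earlier segments
        total_speech += speech_seconds
        authentication_score = weighted_sum / max(total_speech, 1e-9)
        if authentication_requested():
            # output an authentication result based on the current score
            return authentication_score >= threshold
    return authentication_score >= threshold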
The authentication score may be obtained by comparing features of the speech signal with a model generated during enrolment of the registered speaker.
The speech signal may represent multiple discrete sections of speech.
The first segment may represent a trigger phrase. The method may then comprise performing the steps of obtaining the authentication score and outputting the authentication result in response to detecting that the trigger phrase has been spoken.
The method may comprise, after the trigger phrase, dividing the speech signal into segments of equal lengths. For example, the method may comprise, after the trigger phrase, dividing the speech signal into segments covering equal length periods of time, or may comprise, after the trigger phrase, dividing the speech signal into segments comprising equal durations of net speech.
The method may comprise comparing the authentication score with a first threshold score, and determining a positive authentication result if the authentication score exceeds the first threshold score.
The first threshold score may be set in response to a signal received from a separate process.
The method may comprise receiving the signal from the separate process, and selecting the first threshold score from a plurality of available threshold scores.
The signal received from the separate process may indicate a requested level of security.
The separate process may be a speech recognition process.
The method may comprise comparing the authentication score with a second threshold score, and determining a negative authentication result if the authentication score is below the second threshold score.
The second threshold score may be set in response to a signal received from a separate process.
The method may comprise receiving the signal from the separate process, and selecting the second threshold score from a plurality of available threshold scores.
The signal received from the separate process may indicate a requested level of security.
The separate process may be a speech recognition process.
The method may comprise initiating the method in response to determining that a trigger phrase has been spoken.
The method may comprise receiving the authentication request from a speech recognition process.
The authentication request may request that the authentication result be output when the authentication score exceeds a threshold, or may request that the authentication result be output when the speech signal ends.
The step of, following each segment, obtaining an authentication score based on said segment and previously received segments may comprise:
obtaining a first authentication score based on a first segment; obtaining a respective subsequent authentication score based on each subsequent segment; and obtaining the authentication score based on said segment and previously received segments by merging the first authentication score and the or each subsequent authentication score.
The step of merging the first authentication score and the or each subsequent authentication score may comprise forming a weighted sum of the first authentication score and the or each subsequent authentication score.
The method may comprise forming the weighted sum of the first authentication score and the or each subsequent authentication score by applying weights that depend on respective signal-to-noise ratios applicable to the respective segments, or by applying weights that depend on quantities of speech present in the respective segments.
The method may comprise forming the weighted sum of the first authentication score and the or each subsequent authentication score by disregarding some or all outlier scores. For example, the method may comprise forming the weighted sum of the first authentication score and the or each subsequent authentication score by disregarding low outlier scores while retaining high outlier scores.
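A minimal sketch of one such merging strategy is given below, assuming that per-segment scores and per-segment weights (for example SNR values or durations of net speech) have already been obtained; the outlier rule shown, which drops only low outliers, is just one of the options described above:

import statistics

def merge_scores(scores, weights, outlier_sigma=2.0):
    """Weighted average of per-segment scores, ignoring low outliers.

    scores:  per-segment authentication scores
    weights: e.g. per-segment SNR or seconds of net speech
    """
    mean = statistics.fmean(scores)
    spread = statistics.pstdev(scores) or 1.0
    kept = [(s, w) for s, w in zip(scores, weights)
            if s >= mean - outlier_sigma * spread]   # keep high outliers, drop low ones
    total_weight = sum(w for _, w in kept)
    return sum(s * w for s, w in kept) / max(total_weight, 1e-9)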
The step of, following each segment, obtaining an authentication score based on said segment and previously received segments may comprise:
obtaining a first authentication score based on a first segment of the speech signal; and following each new segment of the speech signal, combining the new segment of the speech signal with the or each previously received segment of the speech signal to form a new combined speech signal; and obtaining an authentication score based on said new combined speech signal.
The step of, following each segment, obtaining an authentication score based on said segment and previously received segments may comprise:
extracting features from each segment;
obtaining a first authentication score based on the extracted features of a first segment of the speech signal; and following each new segment of the speech signal, combining the extracted features of the new segment of the speech signal with the extracted features of the or each previously received segment of the speech signal; and obtaining an authentication score based on said combined extracted features.
The method may comprise, after determining a positive authentication result: starting a timer that runs for a predetermined period of time; and treating the specific registered speaker as authenticated for as long as the timer is running.
The method may comprise restarting the timer if a new positive authentication result is determined while the timer is running.
According to an aspect of the invention, there is provided a device for processing a received signal representing a user’s speech, for performing speaker recognition, wherein the device is configured to:
receive a speech signal;
divide the speech signal into segments;
following each segment, obtain an authentication score based on said segment and previously received segments, wherein the authentication score represents a probability that the speech signal comes from a specific registered speaker; and output an authentication result based on the authentication score in response to an authentication request.
The device may comprise a mobile telephone, an audio player, a video player, a mobile computing platform, a games device, a remote controller device, a toy, a machine, or a home automation controller or a domestic appliance.
The device may be further configured for performing speech recognition on at least a portion of the received signal.
The device may be further configured for transferring at least a portion of the received signal to a remote device for speech recognition, in which case the device may be further configured for receiving a result of the speech recognition.
According to an aspect of the invention, there is provided an integrated circuit device for processing a received signal representing a user’s speech, for performing speaker recognition, wherein the integrated circuit device is configured to:
receive a speech signal;
divide the speech signal into segments;
following each segment, obtain an authentication score based on said segment and previously received segments, wherein the authentication score represents a probability that the speech signal comes from a specific registered speaker; and output an authentication result based on the authentication score in response to an authentication request.
The authentication score may be obtained using at least one user or background model stored in said device.
The invention also provides a non-transitory computer readable storage medium having computer-executable instructions stored thereon that, when executed by processor circuitry, cause the processor circuitry to perform any of the methods set out above.
For a better understanding of the invention, and to show more clearly how it may be carried into effect, reference will now be made, by way of example only, to the accompanying drawings in which:
Figure 1 is a schematic view of an electronic device;
Figure 2 is a further schematic diagram of an electronic device;
Figure 3 is a flow chart, illustrating a method;
Figure 4 is a block diagram, illustrating a processing system; and
Figure 5 is a time history, illustrating the operation of the processing system.
Detailed description
For clarity, it will be noted here that this description refers to speaker recognition and to speech recognition, which are intended to have different meanings. Speaker recognition refers to a technique that provides information about the identity of a person speaking. For example, speaker recognition may determine the identity of a speaker, from amongst a group of previously registered individuals, or may provide information indicating whether a speaker is or is not a particular individual, for the purposes of identification or authentication. Speech recognition refers to a technique for determining the content and/or the meaning of what is spoken, rather than recognising the person speaking.
Figure 1 shows a device in accordance with one aspect of the invention. The device may be any suitable type of device, such as a mobile computing device for example a laptop or tablet computer, a games console, a remote control device, a home automation controller or a domestic appliance including a domestic temperature or lighting control system, a toy, a machine such as a robot, an audio player, a video player, or the like, but in this illustrative example the device is a mobile telephone, and specifically a smartphone 10. The smartphone 10 may, by suitable software, be used as the control interface for controlling any other further device or system.
The smartphone 10 includes a screen 12 for displaying information to a user, a sound inlet 14, for allowing sound to be detected by a microphone, and a jack socket 16, or other port or receptacle, for allowing an accessory to be connected to the device.
Figure 2 is a schematic diagram showing the smartphone 10. In this example, the smartphone 10 includes a microphone 20, which may for example be located close to the sound inlet 14 shown in Figure 1. Electronic signals generated by the microphone 20 are passed to a signal processing block 22, which performs initial signal processing of the signals, for example converting analog signals received from the microphone 20 into digital signals.
The smartphone 10 also includes an accessory interface 24, which may for example be located close to the jack socket 16 shown in Figure 1. The jack socket 16 and the interface 24 may be suitable for allowing a headset accessory to be connected thereto, and signals received from a microphone on such an accessory are also passed to the signal processing block 22, which performs initial signal processing of the signals.
The signal processing block 22 is connected to a processor 26, which performs methods as described herein on the basis of data and program instructions stored in a memory 28. Specifically, the methods described herein can be performed on the processor 26 by executing instructions that are stored in non-transient form in the memory 28, with the program instructions being stored either during manufacture of the device 10 or by upload while the device 10 is in use.
The processor 26 is connected to an interface 30, which is itself connected to an antenna 32, allowing signals to be transmitted and received over an external network to remote devices.
In other examples, the device performing the processes described herein may receive the required input signals in a suitable form, without needing to perform any prior signal detection or signal processing and thus not requiring the device to comprise signal processing block 22.
In some examples, some of the processing described below may be performed on an external device communicated with via an external network, for example a remote computing server or a server in a home network. In other examples, all of the processing described below may be performed in a single device, without requiring the device to comprise any interface to any external device or network.
Figure 3 is a flow chart, illustrating a method of operation of a voice user interface according to one embodiment.
As described in more detail below, the process shown in Figure 3 is performed after a user has registered with the system, for example by providing one or more sets of voice samples that can be used to form one or more models of the user's speech. Typically, the registration or enrolment process requires the user to provide speech inputs, and then uses these speech inputs to form a model of the user's speech, starting from a particular background model defined in a prior development phase. Thus, the background model and the speech inputs are the inputs to the enrolment process that is used to form the model of the user's speech. Subsequently, during verification, as described in more detail below, further speech inputs are compared with the model of the user's speech, and with a background model, in order to provide an output. An output of this comparison may for example be a numerical value indicating a likelihood that the speech inputs received during the verification phase were provided by the same user that provided the speech inputs during enrolment. The numerical value indicative of the likelihood may be for example a log likelihood ratio (LLR) or may be some more indirect indication, for example a metric of distance of extracted features of the speech sample from some one- or multi-dimensional threshold.
The voice user interface may spend the majority of its time in a standby state, in order to save power. A voice activity detection block may be provided, for determining when sounds that are detected by a microphone represent speech. In some embodiments, signals that are received from a microphone are stored. Then, when the voice activity block determines that the sounds that are detected by the microphone represent speech, those stored signals are analysed as described below.
The signal that is determined to represent speech may be sent to a speech recognition block, to determine the content of the speech. The method set out below may be initiated in response to determining that a predetermined trigger phrase has been spoken.
In step 50, the voice user interface receives the speech signal. The speech signal may for example result from an interaction with a virtual assistant. For example, in one typical interaction, a user may first speak a trigger phrase to wake the virtual assistant, and may then speak an initial query, in response to which the virtual assistant provides some information, and the user then speaks a further command. The speech signal may therefore represent multiple discrete sections of speech. In other cases, the speech signal may represent continual speech from the user.
In step 52, the received speech signal is divided into segments. This division may take place as the speech signal is being received.
For example, when the speech signal contains a trigger phrase, plus one or more subsequent spoken commands or queries, the trigger phrase may be treated as the first segment.
The part of the speech signal after the trigger phrase may be divided into segments of equal lengths. More specifically, the part of the speech signal after the trigger phrase may be divided into segments covering equal length periods of time. Alternatively, the part of the speech signal after the trigger phrase may be divided into segments comprising equal durations of net speech.
That is, the speech signal may cover a period of several seconds. In some cases, the user will be speaking for the whole of that period; in other cases, there will be periods when the virtual assistant is providing an output, and there will be periods of silence. In such cases, the part of the speech signal after the trigger phrase may be divided into segments covering equal length periods of time (for example 1 second), even though different segments will contain different amounts of the user's speech. Alternatively, the part of the speech signal after the trigger phrase may be divided into segments comprising equal durations of the user's speech, even though the segments may then cover different durations.
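The two segmentation options can be illustrated by the following sketch, which assumes the signal has already been split into short frames and labelled as speech or non-speech by a voice activity detector; the frame and segment lengths are illustrative only:

def segment_equal_time(frames, frame_len_s=0.01, segment_len_s=1.0):
    """Group frames into segments covering equal periods of time."""
    per_segment = int(segment_len_s / frame_len_s)
    return [frames[i:i + per_segment] for i in range(0, len(frames), per_segment)]

def segment_equal_net_speech(frames, is_speech, frame_len_s=0.01, speech_per_segment_s=1.0):
    """Group frames into segments containing equal durations of net speech."""
    segments, current, speech_time = [], [], 0.0
    for frame, speech_flag in zip(frames, is_speech):
        current.append(frame)
        if speech_flag:
            speech_time += frame_len_s
        if speech_time >= speech_per_segment_s:
            segments.append(current)
            current, speech_time = [], 0.0
    if current:
        segments.append(current)
    return segments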
In step 54, it is recognised that a speech segment has been completed. For example, this may be when one of the equal length periods of time has expired, or when a predetermined duration of the user’s speech has been received.
In step 56, an authentication score is obtained, based on the newly completed speech segment and on previously received segments. The authentication score represents a probability that the speech signal comes from a specific registered speaker.
In step 58, in response to an authentication request, an authentication result is output, based on the authentication score. The authentication request may for example be received from a speech recognition process.
The authentication result may be generated by comparing the authentication score with a threshold score, and determining a positive authentication result if the authentication score exceeds the threshold score. The threshold score may be set according to a received requested level of security. For example, the system may receive a signal from an external system, and may then select the threshold score from a plurality of available threshold scores in response to said requested level of security. The requested level of security may be received from a speech recognition process.
Thus, for example, the speech signal may be sent to a speech recognition system, which determines the content and meaning of the user’s speech. The threshold score may then be determined based on a content of the speech. For example, the threshold score may be selected from a plurality of available threshold scores in response to the content of the speech.
For example, when the speech recognition system recognises that the user’s speech contains a command, the command may be executed only if a positive authentication result is obtained. If the command is a request for information, for example a request for information about flight times between two cities, then a low threshold score may be set, because the consequences of a mistaken decision to accept the speech as being from the enrolled user are not serious. However, if the command is a request to supply personal information, or a request to authorise a financial transaction, for example, then the consequences of a mistaken decision to accept the speech as being from the enrolled user are much more serious, and so a high threshold score may be set, so that a positive authentication result is output only if the system has a high degree of certainty that the speech signal does represent the speech of the enrolled user.
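By way of a sketch only, the selection of a threshold according to the requested level of security might look like the following, where the security levels, example commands and threshold values are entirely hypothetical:

# Purely illustrative threshold values on a log-likelihood-ratio scale.
THRESHOLDS = {
    "low": 0.5,      # e.g. public information queries such as flight times
    "medium": 2.0,   # e.g. reading out calendar entries
    "high": 5.0,     # e.g. personal data or payment authorisation
}

def authentication_result(score, requested_level):
    """Compare the current score with the threshold for the requested security level."""
    return score >= THRESHOLDS[requested_level]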
As mentioned above, the authentication result is output in response to an authentication request. The authentication request may request that the authentication result be output when the authentication score exceeds the threshold score. This may be subject to a time-out, so that the user is rejected and the process ends if the user cannot be authenticated within a predetermined time limit.
Alternatively, the authentication request may request that the authentication result be output immediately.
Alternatively, the authentication request may request that the authentication result be output at some future time, for example when the speech signal ends.
If, after step 56, it is determined that there is no need to output an authentication result, for example, because the authentication score does not yet exceed the threshold score and the speech signal has not yet ended, then the process may return to step 54.
Step 56, which comprises, following each segment, obtaining an authentication score based on said segment and previously received segments, may then comprise: obtaining a first authentication score based on a first segment; obtaining a respective subsequent authentication score based on each subsequent segment; and obtaining the authentication score based on said segment and previously received segments by merging the first authentication score and the or each subsequent authentication score.
Thus, a separate score is obtained for each segment, and the separate scores are merged. For example, the step of merging the first authentication score and the or each subsequent authentication score may comprise forming a weighted sum of the first authentication score and the or each subsequent authentication score. The weighted sum may give different weights to the scores from the different segments based for example on the signal/noise ratios in the respective segments, or on the amounts of the user’s speech in the respective segments, or on other factors.
This process of weighting the scores may also be used to remove the effect of clear outliers, where the score for one segment is very clearly different from the scores obtained for the segments processed previously. For example, all outliers may be given zero weight, i.e. effectively discarded when merging the authentication scores. Alternatively, high authentication scores (i.e. representing a positive authentication decision) may be retained, while outlier low authentication scores may be given zero weight, i.e. effectively discarded when merging the authentication scores. In this case it should be noted that, for example in the case of a change of speaker, a low authentication score may be obtained for one segment, and this may initially be regarded as an outlier and disregarded, but may subsequently be regarded as a typical score and would then be taken fully into account.
Alternatively, a median filter could be applied to the authentication scores to provide stability to the updated score.
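A sketch of the median-filter alternative, assuming a short history of the most recent per-segment scores is retained:

from statistics import median

def filtered_score(score_history, window=3):
    """Median of the most recent per-segment scores, for a more stable update."""
    recent = score_history[-window:]
    return median(recent)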
Alternatively, step 56, which comprises, following each segment, obtaining an authentication score based on said segment and previously received segments, may then comprise: obtaining a first authentication score based on a first segment of the speech signal; and, following each new segment of the speech signal, combining the new segment of the speech signal with the or each previously received segment of the speech signal to form a new combined speech signal; and obtaining an authentication score based on said new combined speech signal.
Thus, a score is obtained for the first segment but, after each new segment, there is not a separate score for that new segment, but only a new score for the whole of the signal to date.
As a further alternative, a first score may be obtained for one or more first segment. Then a separate second score may be obtained for one or more second segment. The first and second scores may then be merged to obtain the overall authentication score. This alternative may for example be useful in a situation in which the first segment of speech is a trigger phrase, while the second segments represent a spoken command. Because the trigger phrase is known in advance, it is possible for the system to be trained by the user speaking that known trigger phrase, allowing text-dependent speaker recognition techniques to be used to obtain the first authentication score. However, there will potentially be a large number of possible commands, making it impractical for the system to be trained by the user speaking those commands. Thus, text-independent speaker recognition techniques will need to be used to obtain the separate second authentication score.
Figure 4 is a block diagram, illustrating a general form of a speaker recognition system 80, for use in a virtual assistant system, in one embodiment. The functions illustrated in Figure 4 may be performed in a processor 26 of a smartphone 10, as shown in Figure 2, or they may be performed in a separate system, for example in a cloud computing environment. In general, computationally-intensive tasks may advantageously be performed remotely in order to save power in the portable device, and similarly tasks that would impose impractical data storage requirements on a portable device may be performed remotely, while less computationally-intensive tasks, and tasks involving private data may advantageously be performed in the user's smartphone. For example, speech recognition may be performed remotely, while speaker recognition is performed in the smartphone itself, though many different assignments of the tasks can be devised.
In this example, it is assumed that a user has enrolled with the system by providing spoken inputs to the system, in order to train the system, during the enrolment phase. This further description then relates to the verification phase, during which it is determined whether a speaker can be taken to be the enrolled user.
In this example, the system 80 comprises a block 82 for determining whether a trigger phrase has been detected. This block will typically contain a buffer that will be continually storing the most recently received part of the speech signal. The buffer should in this case be long enough to store speech of at least the duration of the expected trigger phrase. Then, when the block 82 determines that the trigger phrase has been spoken, without making any determination as to whether it was spoken by the enrolled user, the block 82 sends a signal to a split trigger/command block 84. This signal is extracted from the buffer, and contains the part of the stored speech signal that includes the stored trigger phrase.
The detection of the trigger phrase can use a voice activity detector (VAD) and/or a speech recognition process.
In this illustrated example, it is assumed that the user’s speech contains a trigger phrase (for example “OK phone”, or the like), followed by a command. The split trigger/command block 84 determines when the trigger phrase has ended and the command is about to start.
The start of the command can be identified simply by considering everything that follows the known trigger to be part of the command. Alternatively, a speech recognition process can be used to determine that the user has started an utterance after the trigger.
The trigger/command block 84 sends that portion of the speech signal that represents the trigger phrase to the process trigger block 86, which performs a speaker recognition process on the trigger phrase. Because the trigger phrase is known in advance, it is possible for the speaker recognition system to be trained during enrolment by the user speaking that known trigger phrase. This allows text-dependent or text-constrained speaker recognition techniques to be used by the process trigger block 86 during the verification phase in order to obtain a first authentication score.
The first authentication score indicates a likelihood that the speech inputs received during the verification phase were provided by the same user that provided the speech inputs during enrolment.
An antispoofing method, which attempts to detect attacks such as replayed recordings of an enrolled user or malware attacks, can be included in the process trigger block 86 in order to provide information on the robustness of the first authentication score.
The trigger/command block 84 then streams the portion of the input speech signal that represents the command phrase to the new command segment block 88.
Although the system has been described so far with reference to the use of a trigger phrase, it should be noted that, in other examples, there may be no trigger phrase, and the system may be activated by some other action of the user, such as pressing a button or performing some other form of authentication. In that case, the whole of the speech signal is passed to the new command segment block 88.
The new command segment block 88 also receives information indicating when the virtual assistant is speaking. For example a virtual assistant may speak to a user in response to a command in order to elicit further information from the user, and so the user's speech may in that case be interrupted by the virtual assistant speaking.
The new command segment block 88 divides the received speech signal into segments of the user’s speech, omitting any speech generated by the virtual assistant.
In this example, the user's speech after the trigger phrase is divided into segments of equal lengths. More specifically, in this example, the part of the speech signal after the trigger phrase is divided into segments covering equal length periods of time, for example 1 second. As mentioned above, the new command segment block 88 may divide the received speech signal after the trigger phrase into segments covering equal length periods of time, or into segments comprising equal durations of net speech, for example.
Immediately it is completely received, each new segment of the command is passed to a process command segment block 90. This block performs speaker recognition on the content of the command. Typically, the process of speaker recognition involves extracting relevant features of the speech signal, for example Mel Frequency Cepstral Coefficients (MFCCs), and using these features as the basis for a statistical speaker recognition process.
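As an illustration only, MFCC extraction for one segment might be performed with a general-purpose audio library such as librosa; the use of that library, and the parameter values shown, are assumptions of this sketch and are not specified in this description:

import numpy as np
import librosa   # one commonly used library for MFCC extraction; not part of this disclosure

def extract_mfccs(samples, sample_rate=16000, n_mfcc=20):
    """Return an (n_frames, n_mfcc) array of MFCC feature vectors for one segment."""
    samples = np.asarray(samples, dtype=np.float32)
    mfccs = librosa.feature.mfcc(y=samples, sr=sample_rate, n_mfcc=n_mfcc)
    return mfccs.T   # one feature vector per analysis frame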
When the first segment of the command is received, it is processed by the voice biometrics system in the block 90, in order to obtain a score that represents the probability that the speaker is the previously enrolled speaker. This score is stored in the update command score block 92.
The voice biometrics system in the block 90 will in general be a text-independent or unconstrained system, because it will in general not be known in advance what the user will say. The speaker recognition processing performed in the block 90 could be exactly the same as the system used for processing the trigger phrase in the block 86, or it could share some aspects of that system (for example it could use the same algorithm but use different background models), or it could be completely different.
When the first segment of the command has been processed, some information relating to that segment is extracted.
The information that is extracted is stored in the accumulated scoring information block 94.
In one example, this information comprises only the authentication score for the first segment. Then, when the second segment of the command is received, it is processed by the voice biometrics block 90 completely independently of the first, in order to obtain an authentication score for the second segment.
The update command score block 92 then combines the scores from the first and second segments to obtain an updated score for the command. For example, the combination may be a weighted sum of the individual scores. As one example of this, the scores may be weighted by the duration of the user’s speech in each segment. For example, if the first segment contains 600 milliseconds of the user’s speech and the second segment contains 300 milliseconds of the user’s speech, then the score of the first segment may be given double the weighting of the second segment in the combined score.
In order to allow this weighting to be used, the information that is stored in the accumulated scoring information block 94 includes the duration of the user's speech taken into account in calculating the relevant score.
Then, when the third segment of the command is received, it is also processed by the voice biometrics block 90 completely independently of the first and second segments, in order to obtain an authentication score for the third segment.
The update command score block 92 then combines the score from the third segment with the combined score for the first and second segments to obtain an updated score for the command. Again, the combination may be a weighted sum of the individual scores, and the scores may be weighted by the duration of the user’s speech used in forming each score.
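A running, duration-weighted update of the command score along these lines might be sketched as follows; the worked example mirrors the 600 ms / 300 ms weighting described above, with the score values themselves being illustrative:

class CommandScore:
    """Keeps a running command score, weighting each segment by its speech duration."""

    def __init__(self):
        self.weighted_sum = 0.0
        self.total_speech_s = 0.0

    def update(self, segment_score, speech_duration_s):
        self.weighted_sum += segment_score * speech_duration_s
        self.total_speech_s += speech_duration_s
        return self.weighted_sum / self.total_speech_s

# Worked example: 600 ms of speech scoring 3.0, then 300 ms scoring 1.5
cs = CommandScore()
cs.update(3.0, 0.6)          # -> 3.0
print(cs.update(1.5, 0.3))   # -> (3.0*0.6 + 1.5*0.3) / 0.9 = 2.5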
Further techniques for fusing the scores obtained on the different segments are discussed below, with reference to the fusion block 96. For example, the weightings given to the authentication scores can depend on some quality measure associated with each segment such as a respective signal to noise ratio measured during the segment.
As another example, the information that is stored in the accumulated scoring information block 94 may comprise the whole of the audio signal that is received in the command.
In that case, when a new command segment is received, the process command segment block 90 may perform a new authentication process on the whole of the command audio, including the newly received segment. The update command score block 92 then replaces the previously obtained authentication score by the newly calculated authentication score.
As a further example, the information that is stored in the accumulated scoring information block 94 may comprise the extracted relevant features of the speech signal, such as the MFCCs mentioned previously.
In that case, when a new command segment is received, the process command segment block 90 may perform a new authentication process on the features extracted from the whole of the command audio, including the newly received segment. The update command score block 92 then replaces the previously obtained authentication score by the newly calculated authentication score.
Each time that the authentication score for the command is updated by the update command score block 92, a fusion block 96 calculates a new authentication score, based on the outputs of the process trigger block 86 and the update command score block 92.
In this example, the fusion block 96 thus combines the results of the speaker recognition processes performed on the trigger and the command, in order to obtain a combined authentication score indicating a likelihood that the user is the enrolled user. The combined authentication score indicative of the likelihood may be for example a log likelihood ratio (LLR) or may be some more indirect indication, for example a metric of distance of extracted features of the speech sample from some one- or multidimensional threshold or nominal point or volume in a multi-dimensional speech parameter space.
The combined authentication score may be obtained from the separate authentication scores, i.e. the results of the speaker recognition processes performed on the trigger and the command by any suitable method. For example, the combined authentication score may be a weighted sum SF of the authentication scores ST and SC obtained from the trigger and the command respectively. That is, in general terms:

SF = α·ST + β·SC + γ
The weighting factors α, β, and γ may be constant and determined in advance.
Alternatively, the step of combining the results of the speaker recognition processes performed on the first and second voice segments, to obtain a combined output authentication score, may use quality measures to determine how the results should be combined, in order to improve the reliability of the decision. That is, separate quality measures are obtained for the trigger and command voice segments, and these quality measures are then used as further inputs to the process by which the authentication scores are combined.
These quality measures may for example be based on properties of the trigger phrase and the command. Certain triggers will be more suitable for use in voice biometrics than others because they are longer in duration, or because they contain more phonetic variability and thus they provide more information to differentiate speakers. Certain commands will be more suitable for use in voice biometrics than others for the same reasons. Other aspects, such as the presence of non-stationary noise in either the first and second voice segments may make one voice segment more reliable than the other.
In one embodiment there is defined a set of quality measures, namely a set of quality measures QT for the trigger and a set of quality measures QC for the command, and the values of the weighting factors α, β, and γ are set based on the quality measures. Then a weighted sum SF will be obtained as a function of these quality measures:

SF = α(QT, QC)·ST + β(QT, QC)·SC + γ(QT, QC)

The functions that map the quality measures QT, QC to the weighting factors α, β, and γ are part of the system design and are thus obtained and defined during a development phase, before the system is deployed for user enrolment or verification. The values returned by these functions in use after the development phase will vary from sample to sample as the quality measures QT, QC vary from sample to sample.
The functions may be obtained during the development phase on the basis of exercising the system with a large number of speech samples arranged to have a range of different values of the quality measures.
The form of the functions may be defined before the development phase, with coefficients optimised to provide the best fit. In some embodiments, the functions may not be algebraic functions but may take the form of a look-up table containing coefficients optimised over ranges of values of the quality measures, or fixed values applied to optimised ranges of the quality measures. More generally, a function may be the result of some more complex algorithm characterised by some coefficients and delivering a value dependent on the quality measures.
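As a sketch of the look-up-table form only, the mapping from quality measures to weighting factors might be implemented as below; the quality bins and coefficient values are entirely hypothetical and would in practice be optimised during the development phase:

# Hypothetical coefficients (alpha, beta, gamma) indexed by coarse SNR bins
# for the trigger and the command; real values come from the development phase.
COEFFS = {
    ("high_snr", "high_snr"): (0.5, 0.5, 0.0),
    ("high_snr", "low_snr"):  (0.7, 0.3, 0.0),
    ("low_snr",  "high_snr"): (0.3, 0.7, 0.0),
    ("low_snr",  "low_snr"):  (0.5, 0.5, -1.0),  # penalise when both are noisy
}

def snr_bin(snr_db, threshold_db=15.0):
    return "high_snr" if snr_db >= threshold_db else "low_snr"

def fused_score(s_t, s_c, q_t_snr_db, q_c_snr_db):
    """SF = alpha(Q)*ST + beta(Q)*SC + gamma(Q), with Q reduced to SNR bins."""
    alpha, beta, gamma = COEFFS[(snr_bin(q_t_snr_db), snr_bin(q_c_snr_db))]
    return alpha * s_t + beta * s_c + gamma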
In some embodiments the combined score may be a non-linear combination of the scores ST and SC, which may for example be represented in the form

SF = α(QT, QC, ST)·ST + β(QT, QC, SC)·SC + γ(QT, QC)

where each weighting factor α or β may depend continuously or non-continuously on the respective score.
More generally, the combined score may be any function of the scores, ST and SC, that are obtained from the speaker recognition processes performed on the first and second voice segments, and of the quality measures, QT and QC, that apply to those voice segments. That is:

SF = f(ST, SC, QT, QC)

where f may be any function.

The values of the scores, ST and SC, and of the quality measures, QT and QC, may be applied to a neural network, which then produces a value for the combined score SF.
When determining the weights to be given to the results of the first and second speaker recognition processes, different quality measures can be considered.
One suitable quality measure is the Signal to Noise Ratio (SNR), which may for example be measured in the input trigger and in the input command separately. In the case of non-stationary noise, where the SNR varies rapidly, a higher weight can be given to the result obtained from the input speech segment that has the higher SNR.
Another suitable quality measure is the net-speech measure. As discussed in connection with the illustrated embodiment, the weight that is given to the score obtained from the command can be increased according to the amount of speech in the command. That is, the total length of the fragments in the command that actually contain speech, excluding non-speech segments, is measured, for example in time units such as seconds, and this is used to form the weight to be applied to the command, relative to the weight applied to the trigger.
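The two quality measures might be computed per voice segment roughly as in the following sketch, which assumes that per-frame energies and voice activity decisions are available:

import math

def net_speech_seconds(vad_flags, frame_len_s=0.01):
    """Total duration of frames that actually contain speech."""
    return sum(frame_len_s for flag in vad_flags if flag)

def snr_db(frame_energies, vad_flags):
    """Rough SNR estimate: mean speech-frame energy over mean non-speech-frame energy."""
    speech = [e for e, f in zip(frame_energies, vad_flags) if f]
    noise = [e for e, f in zip(frame_energies, vad_flags) if not f]
    if not speech or not noise:
        return float("inf") if speech else 0.0
    speech_power = sum(speech) / len(speech) + 1e-12
    noise_power = sum(noise) / len(noise) + 1e-12
    return 10.0 * math.log10(speech_power / noise_power)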
The new authentication score generated by the fusion block 96 in response to a new segment of the audio input is transmitted to a decision update block 98, which produces an authentication result. In this example, the authentication result is an authentication flag, having two possible values, namely “user authenticated” and “user not authenticated”. The authentication result is obtained by comparing the authentication score with a threshold value. The threshold value may be fixed, or may depend on some variable criterion. For example, the threshold value may be determined by a security level. For a low security system, or a low security command within a system, a relatively high False Acceptance Rate (FAR) may be tolerable, and so a low threshold may be set. In that case, a relatively low authentication score, representing a relatively low degree of confidence that the user is the enrolled user, may still exceed the threshold. For a high security system, or a high security command within a system, a relatively low False Acceptance Rate (FAR) may be required, and so a high threshold may be set, such that only a high authentication score, representing a high degree of confidence that the user is the enrolled user, will exceed the threshold.
An input indicating the required security level may be received from an external process, for example from a speech recognition process determining the content of the user’s speech, and so this may be used to set the threshold value. The decision update block 98 may store multiple threshold values, with one of these threshold values being chosen in response to a signal received from the external process, and the authentication result then being obtained by comparing the authentication score with the selected threshold value.
In some embodiments, the authentication score may be compared with multiple threshold values, to give multiple provisional authentication results. In this example, the system may include multiple registers to store corresponding authentication flags, indicating the results of comparing the authentication score with the respective threshold values.
The authentication result may then be output in response to an authentication request.
For example, the authentication flag may be consulted by an external process (for example, the Applications Processor, AP, in a mobile device) when authentication is required, or the authentication result can be pushed up by the decision update block 98 after every segment has been processed, or when the authentication result changes, or when a predefined condition to end the process is satisfied. An authentication request may set the conditions for outputting the authentication result.
The decision update block 98 may therefore set the authentication flag based only on the most recent output of the fusion block 96.
In some examples, the decision update block 98 sets the authentication flag in a manner intended to provide additional stability within the proposed system. In this case, whenever the authentication score is above the relevant threshold, the authentication flag is set to “user authenticated”. A timer, referred to as the authentication timer, is started, and runs for a predetermined period of time. The timer is restarted if a new authentication score above the relevant threshold is calculated.
The authentication flag is then maintained in the “user authenticated” state, for the duration of the authentication timer, whatever happens to the authentication score during that period. This guarantees that the user remains authenticated for the duration of the authentication timer, and that silences or extraneous noises that occur after the end of the command do not affect the authentication process. The duration of this timer should be long enough that the authentication flag remains set to “user authenticated” after the end of the command for long enough that an external (possibly remote) speech processing system can interpret the command and can request the authentication flag status over the relevant communication network. However, the duration of the timer should not be set to be so long that other speakers are able to utter command modifications after the command spoken by the enrolled user, and have those command modifications automatically authenticated by default. The timer duration may therefore be as short as a few tens of milliseconds or as long as a few seconds.
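A possible sketch of this timer behaviour, using wall-clock time and a hypothetical timeout value within the range discussed above:

import time

class AuthenticationFlag:
    """Holds the flag in the 'user authenticated' state while the timer runs."""

    def __init__(self, timeout_s=2.0):   # hypothetical duration
        self.timeout_s = timeout_s
        self.expires_at = 0.0

    def on_new_score(self, score, threshold):
        if score >= threshold:
            self.expires_at = time.monotonic() + self.timeout_s   # (re)start the timer

    def is_authenticated(self):
        return time.monotonic() < self.expires_at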
After the decision update block 98 has set the authentication flag, an ending condition block 100 determines whether the process should end or continue.
Thus, every time the decision is updated, a set of ending conditions is checked. The process controlling the virtual assistant may indicate that no more command is expected, and thus that the audio process should finish. In response, the processing of the input audio is ended, and the system returns to the state of waiting for a new trigger. This is typically accompanied by an authentication request from the main process requesting the current authentication flag value. Alternatively, the decision update block 98 could at that moment push the current authentication flag value to the main application.
Figure 5 is a timing diagram, illustrating the operation of the system 80, in one example.
In this example, the user utters a 2 second trigger phrase 120 (“OK Phone”) followed by a 6 second command 122 (“Send a text message to John, saying that I will arrive thirty minutes late”). In other examples, the user may perform a more complex interaction with a virtual assistant system with multiple discrete sections of speech, for example selecting an item to order, and then indicating a delivery address in response to a first query from the virtual assistant, and then authorising a payment method in response to a second query from the virtual assistant. In such a case, the system excludes the speech of the virtual assistant when forming the speech segments for analysis in the voice biometrics processing.
Figure 5 shows the times in minutes and seconds (00:00, 00:01, 00:02, ..., 00:12) along a horizontal line below the spoken words.
Figure 5 also shows the voice biometrics processing.
When the trigger phrase has been completed, it can be detected by a trigger detection system. In some cases the voice biometrics processing system will be activated only when the trigger is detected.
Thus, in this example, the processing of the trigger phrase by the process trigger block in the voice biometrics processing system can start at the time 00:02.
The processing of a block of speech will typically take less time than the duration of the speech. Thus, in this case, the trigger phrase is processed, and the authentication score for the trigger is available at the time 00:03.2.
Figure 5 also shows the evolution of the authentication score and of the authentication result. In this example, the authentication result is obtained by comparing the authentication score 124 with a moderately high threshold value 126.
Until the trigger phrase has been processed, there is no available authentication score, and so the system is not able to produce any authentication result.
When the trigger phrase has been processed, an authentication score will be available, but the trigger phrase contains relatively little information, and so the authentication score is not high enough to exceed the threshold, because it is not possible to be certain based on this limited information that the user is the enrolled user of the system.
Thus, the authentication result is set to “user not authenticated”.
Figure 5 shows the command phrase 122 divided into six segments 128, 130, 132, 134, 136, 138, each with a duration of 1 second.
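One simple way of forming such equal-duration segments, given here only as an assumed sketch (the function name and the 16 kHz sample rate are illustrative), is to slice the sampled signal into fixed-length blocks:

```python
def split_into_segments(samples, sample_rate, segment_seconds=1.0):
    """Divide a sampled speech signal into consecutive segments
    covering equal-length periods of time."""
    segment_len = int(sample_rate * segment_seconds)
    return [samples[i:i + segment_len] for i in range(0, len(samples), segment_len)]

# For example, a 6 second command sampled at 16 kHz would yield six 1 second segments:
# segments = split_into_segments(command_samples, 16000)
```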
As soon as the voice biometrics system has finished processing the trigger phrase, it processes the first segment 128 of the command. This is completed at time 00:03.8, at which point the authentication score can be updated to produce a new combined authentication score.
In this example, the updated authentication score based on the trigger and 1 second of the command does not exceed the threshold value 126. Thus, the authentication result remains set to “user not authenticated”.
The voice biometrics system then processes the second segment 130 of the command. This is completed at time 00:04.7, at which point the authentication score can be updated to produce a new combined authentication score.
In this example, the updated authentication score based on the trigger and 2 seconds of the command does not exceed the threshold value 126. Thus, the authentication result remains set to “user not authenticated”.
The voice biometrics system then processes the third segment 132 of the command. This is completed at time 00:05.7, at which point the authentication score can be updated to produce a new combined authentication score.
In this example, the updated authentication score based on the trigger and 3 seconds of the command does exceed the threshold value 126.
Thus, the authentication result is now set to “user authenticated”. At this point, the system could push the authentication result “user authenticated” as an output, but in this case the ending condition that has been set is that the system should wait for an authentication request.
The voice biometrics system then processes the fourth segment 134 of the command. This is completed at time 00:06.7, at which point the authentication score can be updated to produce a new combined authentication score.
In this example, the updated authentication score based on the trigger and 4 seconds of the command still exceeds the threshold value 126, and so the authentication result remains set to “user authenticated”.
The voice biometrics system then processes the fifth segment 136 of the command. This is completed at time 00:07.7, at which point the authentication score can be updated to produce a new combined authentication score.
In this example, the updated authentication score based on the trigger and 5 seconds of the command still exceeds the threshold value 126, and so the authentication result remains set to “user authenticated”.
Figure 5 also shows that the voice biometrics system then processes the sixth segment 138 of the command. This is completed at time 00:08.7, at which point the authentication score can be updated to produce a new combined authentication score.
In this example, the updated authentication score based on the trigger and the whole 6 seconds of the command still exceeds the threshold value 126, and so the authentication result remains set to “user authenticated”.
However, Figure 5 also shows that, in this example, authentication is requested by an external process at time 00:08.2. For example, this could happen because the external process has recognised that the command 122 has ended.
The system is able to respond immediately at time 00:08.2 with the result “user authenticated”, because the updated authentication score based on the trigger and 5 seconds of the command exceeds the threshold value 126. The system is therefore able to respond with very little latency, because an authentication result had previously been computed and was already available, without needing to complete the voice biometrics processing on the whole of the command.
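The “combined authentication score” referred to in this example could, for instance, be formed as a weighted combination of the per-segment scores, with each weight reflecting how much net speech the corresponding segment contributes (one of the options set out in the claims below). The following sketch shows one such combination; the numerical values are purely illustrative assumptions.

```python
def combine_scores(segment_scores, speech_durations):
    """Combine per-segment authentication scores into a single score,
    weighting each segment by the amount of net speech it contains."""
    if not segment_scores:
        raise ValueError("at least one segment score is required")
    total_speech = sum(speech_durations)
    if total_speech == 0:
        return 0.0
    weighted = sum(score * duration
                   for score, duration in zip(segment_scores, speech_durations))
    return weighted / total_speech

# Loosely following Figure 5: the trigger plus three 1 second command segments.
scores = [0.55, 0.60, 0.70, 0.78]   # per-segment scores (illustrative values only)
durations = [2.0, 1.0, 1.0, 1.0]    # seconds of net speech in each segment
combined = combine_scores(scores, durations)  # a single cumulative score
```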
Figure 5 shows an example in which the authentication result may be either “user authenticated”, or “user not authenticated”, with the “user not authenticated” result typically being output initially, before the system has acquired enough information to authenticate the user with the required degree of certainty.
In other examples, the authentication score may be compared with a first threshold and with a second threshold. In that case, the first threshold may be set to a level that means that, when the first threshold value is exceeded, there is a high degree of certainty that the speaker is the enrolled user, and so the authentication result may indicate that the user is authenticated. The second threshold may be set to a level that means that, if the authentication score is below the second threshold, there is a high degree of certainty that the speaker is not the enrolled user. The authentication result may then indicate this. If the authentication score is between the first and second thresholds, there is uncertainty as to whether the speaker is the enrolled user, and the authentication result may indicate that the user is not yet authenticated.
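A minimal sketch of this two-threshold decision, with assumed names for the three possible results, might be:

```python
from enum import Enum

class AuthResult(Enum):
    AUTHENTICATED = "user authenticated"
    NOT_AUTHENTICATED = "user not authenticated"
    NOT_YET_AUTHENTICATED = "user not yet authenticated"

def decide(score: float, first_threshold: float, second_threshold: float) -> AuthResult:
    """Map an authentication score onto one of three results.
    It is assumed that first_threshold > second_threshold."""
    if score > first_threshold:
        return AuthResult.AUTHENTICATED        # high certainty: speaker is the enrolled user
    if score < second_threshold:
        return AuthResult.NOT_AUTHENTICATED    # high certainty: speaker is not the enrolled user
    return AuthResult.NOT_YET_AUTHENTICATED    # uncertain: more speech is needed
```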
Thus, the process of authenticating the speaker can be performed continuously.
The skilled person will thus recognise that some aspects of the above-described apparatus and methods, for example the calculations performed by the processor, may be embodied as processor control code, for example on a non-volatile carrier medium such as a disk, CD- or DVD-ROM, programmed memory such as read only memory (Firmware), or on a data carrier such as an optical or electrical signal carrier. For many applications, embodiments of the invention will be implemented on a DSP (Digital Signal Processor), ASIC (Application Specific Integrated Circuit) or FPGA (Field Programmable Gate Array). Thus the code may comprise conventional program code or microcode or, for example, code for setting up or controlling an ASIC or FPGA. The code may also comprise code for dynamically configuring re-configurable apparatus such as re-programmable logic gate arrays. Similarly, the code may comprise code for a hardware description language such as Verilog™ or VHDL (Very high speed integrated circuit Hardware Description Language). As the skilled person will appreciate, the code may be distributed between a plurality of coupled components in communication with one another. Where appropriate, the embodiments may also be implemented using code running on a field-(re)programmable analogue array or similar device in order to configure analogue hardware.
Embodiments of the invention may be arranged as part of an audio processing circuit, for instance an audio circuit which may be provided in a host device. A circuit according to an embodiment of the present invention may be implemented as an integrated circuit.
Embodiments may be implemented in a host device, especially a portable and/or battery powered host device such as a mobile telephone, an audio player, a video player, a PDA, a mobile computing platform such as a laptop computer or tablet and/or a games device for example. Embodiments of the invention may also be implemented wholly or partially in accessories attachable to a host device, for example in active speakers or headsets or the like. Embodiments may be implemented in other forms of device such as a remote controller device, a toy, a machine such as a robot, a home automation controller or suchlike.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design many alternative embodiments without departing from the scope of the appended claims. The word “comprising” does not exclude the presence of elements or steps other than those listed in a claim; “a” or “an” does not exclude a plurality; and a single feature or other unit may fulfil the functions of several units recited in the claims. Any reference signs in the claims shall not be construed so as to limit their scope.
Claims (41)
1. A method of speaker authentication, comprising: receiving a speech signal;
dividing the speech signal into segments;
following each segment, obtaining an authentication score based on said segment and previously received segments, wherein the authentication score represents a probability that the speech signal comes from a specific registered speaker; and outputting an authentication result based on the authentication score in response to an authentication request.
2. A method according to claim 1, wherein the authentication score is obtained by comparing features of the speech signal with a model generated during enrolment of the registered speaker.
3. A method according to claim 1 or 2, wherein the speech signal represents multiple discrete sections of speech.
4. A method according to any preceding claim, wherein a first segment represents a trigger phrase.
5. A method according to claim 4, comprising:
performing the steps of obtaining the authentication score and outputting the authentication result in response to detecting that the trigger phrase has been spoken.
6. A method according to any preceding claim, comprising, after the trigger phrase, dividing the speech signal into segments of equal lengths.
7. A method according to claim 6, comprising, after the trigger phrase, dividing the speech signal into segments covering equal length periods of time.
8. A method according to claim 6, comprising, after the trigger phrase, dividing the speech signal into segments comprising equal durations of net speech.
9. A method according to any preceding claim, comprising comparing the authentication score with a first threshold score, and determining a positive authentication result if the authentication score exceeds the first threshold score.
10. A method according to claim 9, wherein the first threshold score is set in response to a signal received from a separate process.
11. A method according to claim 10, comprising receiving the signal from the separate process, and selecting the first threshold score from a plurality of available threshold scores.
12. A method according to claim 10 or 11, wherein the signal received from the separate process indicates a requested level of security.
13. A method according to claim 10, 11 or 12, wherein the separate process is a speech recognition process.
14. A method according to any preceding claim, comprising comparing the authentication score with a second threshold score, and determining a negative authentication result if the authentication score is below the second threshold score.
15. A method according to claim 14, wherein the second threshold score is set in response to a signal received from a separate process.
16. A method according to claim 15, comprising receiving the signal from the separate process, and selecting the second threshold score from a plurality of available threshold scores.
17. A method according to claim 15 or 16, wherein the signal received from the separate process indicates a requested level of security.
18. A method according to claim 15, 16 or 17, wherein the separate process is a speech recognition process.
19. A method according to any preceding claim, comprising initiating the method in response to determining that a trigger phrase has been spoken.
20. A method according to any preceding claim, comprising receiving the authentication request from a speech recognition process.
21. A method according to any preceding claim, wherein the authentication request requests that the authentication result be output when the authentication score exceeds a threshold.
22. A method according to any preceding claim, wherein the authentication request requests that the authentication result be output when the speech signal ends.
23. A method according to any preceding claim, wherein the step of, following each segment, obtaining an authentication score based on said segment and previously received segments comprises:
obtaining a first authentication score based on a first segment; obtaining a respective subsequent authentication score based on each subsequent segment; and obtaining the authentication score based on said segment and previously received segments by merging the first authentication score and the or each subsequent authentication score.
24. A method according to claim 23, wherein the step of merging the first authentication score and the or each subsequent authentication score comprises forming a weighted sum of the first authentication score and the or each subsequent authentication score.
25. A method according to claim 24, comprising forming the weighted sum of the first authentication score and the or each subsequent authentication score by applying weights that depend on respective signal-to-noise ratios applicable to the respective segments.
26. A method according to claim 24, comprising forming the weighted sum of the first authentication score and the or each subsequent authentication score by applying weights that depend on quantities of speech present in the respective segments.
27. A method according to claim 24, comprising forming the weighted sum of the first authentication score and the or each subsequent authentication score by disregarding some or all outlier scores.
28. A method according to claim 27, comprising forming the weighted sum of the first authentication score and the or each subsequent authentication score by disregarding low outlier scores while retaining high outlier scores.
29. A method according to any of claims 1 to 22, wherein the step of, following each segment, obtaining an authentication score based on said segment and previously received segments comprises:
obtaining a first authentication score based on a first segment of the speech signal; and following each new segment of the speech signal, combining the new segment of the speech signal with the or each previously received segment of the speech signal to form a new combined speech signal; and obtaining an authentication score based on said new combined speech signal.
30. A method according to any of claims 1 to 22, wherein the step of, following each segment, obtaining an authentication score based on said segment and previously received segments comprises:
extracting features from each segment;
obtaining a first authentication score based on the extracted features of a first segment of the speech signal; and following each new segment of the speech signal, combining the extracted features of the new segment of the speech signal with the extracted features of the or each previously received segment of the speech signal; and obtaining an authentication score based on said combined extracted features.
31. A method according to any preceding claim, comprising, after determining a positive authentication result:
starting a timer that runs for a predetermined period of time; and treating the specific registered speaker as authenticated for as long as the timer is running.
32. A method according to claim 31, further comprising restarting the timer if a new positive authentication result is determined while the timer is running.
33. A device for processing a received signal representing a user’s speech, for performing speaker recognition, wherein the device is configured to:
receive a speech signal;
divide the speech signal into segments;
following each segment, obtain an authentication score based on said segment and previously received segments, wherein the authentication score represents a probability that the speech signal comes from a specific registered speaker; and output an authentication result based on the authentication score in response to an authentication request.
34. A device as claimed in claim 33, wherein the device comprises a mobile telephone, an audio player, a video player, a mobile computing platform, a games device, a remote controller device, a toy, a machine, a home automation controller, or a domestic appliance.
35. A device as claimed in claim 33 or 34, further configured for performing speech recognition on at least a portion of the received signal.
36. A device as claimed in claim 33, 34 or 35, further configured for transferring at least a portion of the received signal to a remote device for speech recognition.
37. A device as claimed in claim 36, further configured for receiving a result of the speech recognition.
38. An integrated circuit device for processing a received signal representing a user’s speech, for performing speaker recognition, wherein the integrated circuit device is configured to:
receive a speech signal;
divide the speech signal into segments;
following each segment, obtain an authentication score based on said segment and previously received segments, wherein the authentication score represents a probability that the speech signal comes from a specific registered speaker; and output an authentication result based on the authentication score in response to an authentication request.
39. An integrated circuit device as claimed in claim 38, wherein the authentication score is obtained using at least one user or background model stored in said device.
40. A computer program product, comprising a computer-readable tangible medium, and instructions for performing a method according to any one of claims 1 to 32.
41. A non-transitory computer readable storage medium having computer-executable instructions stored thereon that, when executed by processor circuitry, cause the processor circuitry to perform a method according to any one of claims 1 to 32.
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201662346036P | 2016-06-06 | 2016-06-06 | |
US201662418453P | 2016-11-07 | 2016-11-07 | |
US201662429196P | 2016-12-02 | 2016-12-02 | |
US201762486625P | 2017-04-18 | 2017-04-18 |
Publications (2)
Publication Number | Publication Date |
---|---|
GB201708954D0 GB201708954D0 (en) | 2017-07-19 |
GB2552082A true GB2552082A (en) | 2018-01-10 |
Family
ID=59067696
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB1708954.1A Withdrawn GB2552082A (en) | 2016-06-06 | 2017-06-06 | Voice user interface |
GB1821259.7A Active GB2566215B (en) | 2016-06-06 | 2017-06-06 | Voice user interface |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB1821259.7A Active GB2566215B (en) | 2016-06-06 | 2017-06-06 | Voice user interface |
Country Status (5)
Country | Link |
---|---|
US (1) | US11322157B2 (en) |
KR (1) | KR102441863B1 (en) |
CN (1) | CN109313903A (en) |
GB (2) | GB2552082A (en) |
WO (1) | WO2017212235A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2563952A (en) * | 2017-06-29 | 2019-01-02 | Cirrus Logic Int Semiconductor Ltd | Speaker identification |
Families Citing this family (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10373612B2 (en) * | 2016-03-21 | 2019-08-06 | Amazon Technologies, Inc. | Anchored speech detection and speech recognition |
GB2555661A (en) * | 2016-11-07 | 2018-05-09 | Cirrus Logic Int Semiconductor Ltd | Methods and apparatus for biometric authentication in an electronic device |
CN107945806B (en) * | 2017-11-10 | 2022-03-08 | 北京小米移动软件有限公司 | User identification method and device based on sound characteristics |
WO2019182569A1 (en) * | 2018-03-20 | 2019-09-26 | Visa International Service Association | Distributed biometric comparison framework |
WO2019216499A1 (en) * | 2018-05-08 | 2019-11-14 | 엘지전자 주식회사 | Electronic device and control method therefor |
EP3816996B1 (en) * | 2018-06-27 | 2023-03-01 | NEC Corporation | Information processing device, control method, and program |
EP3647993B1 (en) * | 2018-10-29 | 2023-12-13 | Onfido Ltd | Interactive user verification |
WO2020139121A1 (en) * | 2018-12-28 | 2020-07-02 | Ringcentral, Inc., (A Delaware Corporation) | Systems and methods for recognizing a speech of a speaker |
TWI713016B (en) * | 2019-01-03 | 2020-12-11 | 瑞昱半導體股份有限公司 | Speech detection processing system and speech detection method |
GB201906367D0 (en) | 2019-02-28 | 2019-06-19 | Cirrus Logic Int Semiconductor Ltd | Speaker verification |
KR20210143953A (en) | 2019-04-19 | 2021-11-30 | 엘지전자 주식회사 | A non-transitory computer-readable medium storing a multi-device control system and method and components for executing the same |
US10984086B1 (en) * | 2019-10-18 | 2021-04-20 | Motorola Mobility Llc | Methods and systems for fingerprint sensor triggered voice interaction in an electronic device |
US11721346B2 (en) * | 2020-06-10 | 2023-08-08 | Cirrus Logic, Inc. | Authentication device |
US11394698B2 (en) * | 2020-07-29 | 2022-07-19 | Nec Corporation Of America | Multi-party computation (MPC) based authorization |
WO2022040524A1 (en) * | 2020-08-21 | 2022-02-24 | Pindrop Security, Inc. | Improving speaker recognition with quality indicators |
US11315575B1 (en) | 2020-10-13 | 2022-04-26 | Google Llc | Automatic generation and/or use of text-dependent speaker verification features |
EP4064081B1 (en) * | 2021-03-23 | 2023-05-24 | Deutsche Telekom AG | Method and system for identifying and authenticating a user in an ip network |
CN113327618B (en) * | 2021-05-17 | 2024-04-19 | 西安讯飞超脑信息科技有限公司 | Voiceprint discrimination method, voiceprint discrimination device, computer device and storage medium |
CN113327617B (en) * | 2021-05-17 | 2024-04-19 | 西安讯飞超脑信息科技有限公司 | Voiceprint discrimination method, voiceprint discrimination device, computer device and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080255842A1 (en) * | 2005-11-17 | 2008-10-16 | Shaul Simhi | Personalized Voice Activity Detection |
US20100204993A1 (en) * | 2006-12-19 | 2010-08-12 | Robert Vogt | Confidence levels for speaker recognition |
US20150301796A1 (en) * | 2014-04-17 | 2015-10-22 | Qualcomm Incorporated | Speaker verification |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7039951B1 (en) | 2000-06-06 | 2006-05-02 | International Business Machines Corporation | System and method for confidence based incremental access authentication |
EP1202228A1 (en) * | 2000-10-17 | 2002-05-02 | Varette Limited | A user authentication system and process |
GB2388947A (en) * | 2002-05-22 | 2003-11-26 | Domain Dynamics Ltd | Method of voice authentication |
US7212613B2 (en) * | 2003-09-18 | 2007-05-01 | International Business Machines Corporation | System and method for telephonic voice authentication |
US7822605B2 (en) * | 2006-10-19 | 2010-10-26 | Nice Systems Ltd. | Method and apparatus for large population speaker identification in telephone interactions |
US8442824B2 (en) * | 2008-11-26 | 2013-05-14 | Nuance Communications, Inc. | Device, system, and method of liveness detection utilizing voice biometrics |
US8639508B2 (en) * | 2011-02-14 | 2014-01-28 | General Motors Llc | User-specific confidence thresholds for speech recognition |
US8768707B2 (en) * | 2011-09-27 | 2014-07-01 | Sensory Incorporated | Background speech recognition assistant using speaker verification |
US20130144618A1 (en) * | 2011-12-02 | 2013-06-06 | Liang-Che Sun | Methods and electronic devices for speech recognition |
CN102647521B (en) * | 2012-04-05 | 2013-10-09 | 福州博远无线网络科技有限公司 | Method for removing lock of mobile phone screen based on short voice command and voice-print technology |
US9460715B2 (en) * | 2013-03-04 | 2016-10-04 | Amazon Technologies, Inc. | Identification using audio signatures and additional characteristics |
KR20140139982A (en) | 2013-05-28 | 2014-12-08 | 삼성전자주식회사 | Method for executing voice recognition and Electronic device using the same |
US9343068B2 (en) * | 2013-09-16 | 2016-05-17 | Qualcomm Incorporated | Method and apparatus for controlling access to applications having different security levels |
KR102287739B1 (en) * | 2014-10-23 | 2021-08-09 | 주식회사 케이티 | Speaker recognition system through accumulated voice data entered through voice search |
US10432622B2 (en) * | 2016-05-05 | 2019-10-01 | International Business Machines Corporation | Securing biometric data through template distribution |
2017
- 2017-06-06 GB GB1708954.1A patent/GB2552082A/en not_active Withdrawn
- 2017-06-06 CN CN201780034684.9A patent/CN109313903A/en active Pending
- 2017-06-06 US US16/315,277 patent/US11322157B2/en active Active
- 2017-06-06 KR KR1020197000174A patent/KR102441863B1/en active IP Right Grant
- 2017-06-06 WO PCT/GB2017/051621 patent/WO2017212235A1/en active Application Filing
- 2017-06-06 GB GB1821259.7A patent/GB2566215B/en active Active
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2563952A (en) * | 2017-06-29 | 2019-01-02 | Cirrus Logic Int Semiconductor Ltd | Speaker identification |
US11056118B2 (en) | 2017-06-29 | 2021-07-06 | Cirrus Logic, Inc. | Speaker identification |
Also Published As
Publication number | Publication date |
---|---|
KR102441863B1 (en) | 2022-09-08 |
GB201821259D0 (en) | 2019-02-13 |
WO2017212235A1 (en) | 2017-12-14 |
GB2566215A (en) | 2019-03-06 |
KR20190015488A (en) | 2019-02-13 |
GB2566215B (en) | 2022-04-06 |
CN109313903A (en) | 2019-02-05 |
GB201708954D0 (en) | 2017-07-19 |
US11322157B2 (en) | 2022-05-03 |
US20190214022A1 (en) | 2019-07-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11322157B2 (en) | Voice user interface | |
US11056118B2 (en) | Speaker identification | |
US10877727B2 (en) | Combining results from first and second speaker recognition processes | |
CN110140168B (en) | Contextual hotwords | |
US10762899B2 (en) | Speech recognition method and apparatus based on speaker recognition | |
US11037574B2 (en) | Speaker recognition and speaker change detection | |
US9697828B1 (en) | Keyword detection modeling using contextual and environmental information | |
US11437022B2 (en) | Performing speaker change detection and speaker recognition on a trigger phrase | |
GB2608710A (en) | Speaker identification | |
JP7230806B2 (en) | Information processing device and information processing method | |
US11200903B2 (en) | Systems and methods for speaker verification using summarized extracted features | |
US20200201970A1 (en) | Biometric user recognition | |
US10923113B1 (en) | Speechlet recommendation based on updating a confidence value | |
US12080276B2 (en) | Adapting automated speech recognition parameters based on hotword properties | |
WO2019041871A1 (en) | Voice object recognition method and device | |
US20190147887A1 (en) | Audio processing | |
JP2024538771A (en) | Digital signal processor-based continuous conversation | |
CN117935841A (en) | Vehicle-mounted voiceprint awakening method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WAP | Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1) |