EP1399915A2 - Speaker recognition systems - Google Patents

Speaker recognition systems

Info

Publication number
EP1399915A2
Authority
EP
European Patent Office
Prior art keywords
model
enrolment
speaker
speech
models
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
EP02738369A
Other languages
German (de)
English (en)
Other versions
EP1399915B1 (fr)
Inventor
Andrew Thomas Sapeluk
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SPEECH SENTINEL Ltd
Original Assignee
Securivox Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from GBGB0114866.7A (GB0114866D0)
Application filed by Securivox Ltd filed Critical Securivox Ltd
Publication of EP1399915A2
Application granted
Publication of EP1399915B1
Anticipated expiration
Legal status: Expired - Lifetime


Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00: Speaker identification or verification
    • G10L17/02: Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00: Speaker identification or verification
    • G10L17/06: Decision making techniques; Pattern matching strategies
    • G10L17/12: Score normalisation
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00: Speaker identification or verification
    • G10L17/20: Pattern transformations or operations aimed at increasing system robustness, e.g. against channel noise or different working conditions

Definitions

  • The present invention relates to systems, methods and apparatus for performing speaker recognition.
  • Speaker recognition encompasses the related fields of speaker verification and speaker identification.
  • The main objective is to confirm the claimed identity of a speaker from his/her utterances, known as verification, or to recognise the speaker from his/her utterances, known as identification. Both use a person's voice as a biometric measure and assume a unique relationship between the utterance and the person producing the utterance. This unique relationship makes both verification and identification possible.
  • Speaker recognition technology analyses a test utterance and compares it to a known template or model for the person being recognised or verified. The effectiveness of the system depends on the quality of the algorithms used in the process. Speaker recognition systems have many possible applications.
  • Speaker recognition technology may be used to permanently mark an electronic document with a biometric print for every person who views or edits its content. This produces an audit trail identifying all of the users and the times of access and modification. As the user mark is biometric, it is very difficult for the user to dispute the authenticity of the mark.
  • Other biometric measures, such as iris scanning, fingerprinting and facial features, may provide the basis for recognition systems. These measures all require additional recording hardware, whereas speaker recognition can be used with any voice input, such as over a telephone line or using a standard multimedia personal computer with no modification.
  • The techniques can be used in conjunction with other security measures and other biometrics for increased security. From the point of view of a user, the operation of the system is very simple.
  • The speaker authentication server may maintain a set of templates (models) for all currently enrolled persons and a historical record of previously enrolled persons.
  • Speaker recognition systems rely on extracting unique features from a person's speech. This in turn depends on the manner in which human speech is produced using the vocal tract and the nasal tract.
  • The vocal tract and nasal tract can be regarded as two connected pipes, which can resonate in a manner similar to a musical instrument.
  • The resonances produced depend on the diameter and length of the pipes. In the human speech production mechanism, these diameters, and to some extent the lengths of the pipe sections, can be modified by the articulators, typically the positions of the tongue, the jaw, the lips and the soft palate (velum). These resonances in the spectrum are called the formant frequencies. There are normally around four formant frequencies in a typical voice spectrum.
  • All existing speaker recognition systems perform similar computational steps. They operate by creating a template or model for an enrolled speaker.
  • The model is created by two main steps applied to a speech sample, namely spectral analysis and statistical analysis.
  • Subsequent recognition of an input speech sample is performed by modelling the input sample (test utterance) in the same way as during speaker enrolment, and pattern/classification matching of the input model against a database of enrolled speakers.
  • Existing systems vary in the approach taken when performing some or all of these steps.
  • The spectral analysis is either Linear Predictive Coding (LPC)/Cepstral analysis ("LPCC") or FFT/sub-banding.
  • The statistical analysis is typically Hidden Markov Modelling (HMM).
  • An example of a typical time signal representation of a speech utterance divided into frames is illustrated in Fig. 1 of the accompanying drawings.
  • A generic speaker recognition system is shown in block diagram form in Fig. 2 of the accompanying drawings, comprising LPCC spectral analysis, HMM statistical analysis, and score normalisation and speaker classification 16 by thresholding, employing a database 18 of speaker models (the enrolled speaker data-set), before generating a decision as to the identity of the speaker (identification) or the veracity of the speaker's claimed identity (verification).
  • HMM techniques are "black-box" methods, which combine good performance with relative ease of use, but at the expense of transparency. The relative importance of the features extracted by the technique is not visible to the designer.
  • HMM technology uses temporal information to construct the model and is therefore vulnerable to mimics, who impersonate others' voices by temporal variations in pitch etc.
  • The world model/impostor cohort employed by the system cannot easily be optimised for the purpose of testing an utterance by a claimed speaker.
  • A speaker recognition system relies on the fact that when a true speaker utterance is tested against a model for that speaker it will produce a score which is lower than the score produced when an impostor utterance is tested against the same model. This allows an accept/reject threshold to be set. Consecutive tests by the true speaker will not produce identical scores; rather, the scores will form a statistical distribution. However, the mean of the true speaker distribution will be considerably lower than the means of impostor distributions tested against the same model. This is illustrated in Fig. 3, where 25 scores are plotted for each of eight speakers, speaker 1 being the true speaker. It can be seen from Fig. 3 that the scores of some speakers are closer to the true speaker than others and can be problematic.
  • The present invention relates to improved speaker recognition methods and systems which provide improved performance in comparison with conventional systems.
  • The invention provides improvements including, but not limited to: improved spectral analysis, transparency in its statistical analysis, improved modelling, models that can be compared (allowing the data-set structure to be analysed and used to improve system performance), improved classification methods, and the use of statistically independent/partially independent parallel processes to improve system performance.
  • The invention further embraces computer programs for implementing the methods and systems of the invention, data carriers and storage media encoded with such programs, data processing devices and systems adapted to implement the methods and systems, and data processing systems and devices incorporating the methods and systems.
  • Fig. 1 is a time signal representation of an example of a speech utterance divided into frames
  • Fig. 2 is a block diagram of a generic, prior art speaker recognition system
  • Fig. 3 is a plot of speaker recognition score distributions for a number of speakers tested against one of the speakers, obtained using a conventional speaker recognition system
  • Fig. 4 is a block diagram illustrating a first embodiment of the present invention
  • Fig. 5 is a block diagram illustrating a second embodiment of the present invention
  • Fig. 6 is a block diagram illustrating a third embodiment of the present invention
  • Fig. 7 is a block diagram illustrating a further embodiment of a speaker recognition system in accordance with the present invention
  • Fig. 8(a) is a time signal representation of an example of a speech utterance divided into frames and Fig. 8(b) shows the corresponding frequency spectrum and smoothed frequency spectrum of one frame thereof
  • Fig. 9 illustrates the differences between the frequency spectra of two mis-aligned frames
  • Fig. 10 shows the distribution of accumulated frame scores plotted against their frequency of occurrence
  • Fig. 11(a) shows the same accumulated score distributions as Fig. 3, for comparison with Fig. 11(b), which shows corresponding accumulated score distributions obtained using a speaker recognition system in accordance with the present invention
  • Fig. 12 illustrates the results of model against model comparisons as compared with actual test scores, obtained using a system in accordance with the present invention
  • Fig. 13 illustrates the distribution of speaker models used by a system in accordance with the present invention in a two-dimensional representation of a multi-dimensional dataspace
  • Fig. 14 illustrates the use of an impostor cohort as used in a system in accordance with the present invention
  • Fig. 15 is a block diagram illustrating a normalisation process in accordance with one aspect of the present invention
  • Fig. 16 is a block diagram illustrating an example of wide area user authentication system in accordance with the present invention
  • Fig. 17 is a block diagram illustrating the corruption of a speech signal by various noise sources and channel characteristics in the input channel of a speaker recognition system; Figs. 18 and 19 illustrate the effects of noise and channel characteristics on test utterances and enrolment models in a speaker recognition system; and Fig. 20 illustrates a channel normalisation method in accordance with one aspect of the present invention.
  • The present invention includes a number of aspects and features which may be combined in a variety of ways in order to provide improved speaker recognition (verification and/or identification) systems. Certain aspects of the invention are concerned with the manner in which speech samples are modelled during speaker enrolment and during subsequent recognition of input speech samples. Other aspects are concerned with the manner in which input speech models are classified in order to reach a decision regarding the identity of the speaker. A further aspect is concerned with normalising speech signals input to speaker recognition systems (channel normalisation). Still further aspects concern applications of speaker recognition systems.
  • Figs. 4 to 6 illustrate the basic architectures used in systems embodying various aspects of the invention. It will be understood that the inputs to all of the embodiments of the invention described herein are digital signals comprising speech samples which have previously been digitised by any suitable means (not shown), and all of the filters and other modules referred to are digital.
  • A speech sample is input to the system via a channel normalisation module 200 and a filter 24.
  • Alternatively, channel normalisation may be performed at a later stage of processing the speech sample, as shall be discussed further below.
  • Typically, the sample would be divided into a series of frames prior to being input to the filter 24, or at some other point prior to feature extraction.
  • A noise signal 206 may be added to the filtered signal (or could be added prior to the filter 24).
  • The sample data are input to a modelling (feature extraction) module 202, which includes a spectral analysis module 26 and (at least in the case of speech sample data being processed for enrolment purposes) a statistical analysis module 28.
  • The model (feature set) output from the modelling module 202 comprises a set of coefficients representing the smoothed frequency spectrum of the input speech sample.
  • For enrolment, the model is added to a database of enrolled speakers (not shown).
  • For recognition, the model (feature set) is input to a classification module 110, which compares the model (feature set) with models selected from the database of enrolled speakers. On the basis of this comparison, a decision is reached at 204 so as to identify the speaker or to verify the claimed identity of the speaker.
  • The channel normalisation of the input sample and the addition of the noise signal 206 comprise aspects of the invention, as shall be described in more detail below, and are preferred features of all implementations of the invention.
  • Channel normalisation may alternatively be applied following spectral analysis 26 or during the classification process, rather than being applied to the input speech sample prior to processing as shown in Figs. 4 to 6. Novel aspects of the modelling and classification processes in accordance with other aspects of the invention will also be described in more detail below.
  • In Fig. 5, the basic operation of the system is the same as in Fig. 4, except that the output from the modelling module 202 is input to multiple, parallel classification processes 110a, 110b ... 110n, and the outputs from the multiple classification processes are combined in order to reach a final decision, as shall be described in more detail below.
  • In Fig. 6, the basic operation of the system is also the same as in Fig. 4, except that the input sample is processed by multiple, parallel modelling processes 202a, 202b ... 202n (typically providing slightly different feature extraction/modelling, as described further below), possibly via multiple filters 24a, 24b ... 24n (in this case the noise signal 206 is shown being added to the input signal upstream of the filters 24a, 24b ... 24n), and the outputs from the multiple modelling processes are input to the classification module 110, as shall also be described in more detail below.
  • These types of multiple parallel modelling processes are preferably applied to both enrolment sample data and test sample data.
  • Multiple parallel modelling processes may also be combined with multiple parallel classification processes; e.g. the input to each of the parallel classification processes 110a-n in Fig. 5 could be the output from multiple parallel modelling processes as shown in Fig. 6.
  • The spectral analysis modules 26, 26a-n may apply similar spectral analysis methods to those used in conventional speaker recognition systems.
  • The spectral analysis applied by the modules 26a-n is of a type that, for each frame of the sample data, extracts a set of feature vectors (coefficients) representing the smoothed frequency spectrum of the frame.
  • This preferably comprises LPC/Cepstral (LPCC) modelling, producing an increased feature set which models the finer detail of the spectra, but may include variants such as delta cepstral or emphasis/de-emphasis of selected coefficients based on a weighting scheme.
  • Similar coefficients may alternatively be obtained by other means, such as Fast Fourier Transform (FFT) analysis or by use of a filter bank.
  • The complete sample is represented by a matrix consisting of one row of coefficients for each frame of the sample.
  • Typically, these matrices will each have a size of the order of 1000 (frames) x 24 (coefficients).
  • Conventionally, a single first matrix of this type, representing the complete original signal, would be subject to statistical analysis such as HMM.
  • The LP transform effectively produces a set of filter coefficients representing the smoothed frequency spectrum for each frame of the test utterance.
  • The LP filter coefficients are related to Z-plane poles.
  • The Cepstral transform uses a log function to de-emphasise the poles of the smoothed frequency spectrum. It will be understood that other similar or equivalent techniques could be used in the spectral analysis of the speech sample in order to obtain a smoothed frequency spectrum and to de-emphasise the poles thereof.
  • This de-emphasis produces a set of coefficients which, when transformed back into the time domain, are less dynamic and better balanced (the cepstral coefficients are akin to a time signal or impulse response of the LP filter with de-emphasised poles).
  • The log function also transforms multiplicative processes into additive processes.
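  • By way of illustration only (this sketch is not part of the patent disclosure), the LPCC feature extraction described above can be realised along the following lines: LPC coefficients are obtained for a windowed frame by the autocorrelation method (Levinson-Durbin), and the standard LPC-to-cepstrum recursion then yields one row of cepstral coefficients for the frame matrix. The frame length, LPC order and number of cepstral coefficients are assumed values, not taken from the patent.

```python
import numpy as np

def lpc(frame, order):
    """LPC coefficients for one frame via the autocorrelation method
    (Levinson-Durbin).  Returns a with a[0] = 1, i.e. A(z) = 1 + a1*z^-1 + ..."""
    n = len(frame)
    r = np.correlate(frame, frame, mode="full")[n - 1 : n + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1 : 0 : -1])
        k = -acc / err
        prev = a.copy()
        for j in range(1, i):
            a[j] = prev[j] + k * prev[i - j]
        a[i] = k
        err *= 1.0 - k * k
    return a

def lpc_to_cepstrum(a, n_ceps):
    """Cepstral coefficients of the all-pole model 1/A(z) (gain term omitted),
    using the standard LPC-to-cepstrum recursion."""
    p = len(a) - 1
    c = np.zeros(n_ceps + 1)
    for n in range(1, n_ceps + 1):
        s = sum(k * c[k] * a[n - k] for k in range(max(1, n - p), n))
        c[n] = (-a[n] if n <= p else 0.0) - s / n
    return c[1:]

# one Hamming-windowed frame (white noise here as a stand-in for real speech)
frame = np.hamming(256) * np.random.default_rng(0).standard_normal(256)
row = lpc_to_cepstrum(lpc(frame, order=12), n_ceps=24)  # one 24-coefficient row
```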
  • The model derived from the speech sample may be regarded as a set of feature vectors based on the frequency content of the sample signal.
  • The order of the vector is important. If the order is too low, then some important information may not be modelled.
  • In practice, the order of the feature extractor (e.g. the number of poles of an LP filter) may be selected to be greater than the expected order of the signal.
  • Poles which match resonances in the signal give good results, whilst the other resulting coefficients of the feature vector will model spurious aspects of the signal.
  • In that case, the distance measure computed may be unduly influenced by the values of those coefficients which are modelling spurious aspects of the signal.
  • The distance measure (score) which is returned will thus be inaccurate, possibly giving a poor score for a frame which in reality is a good match.
  • To address this, a known noise signal may be added to the sample; the same noise signal would be used during enrolment of speakers and in subsequent use of the system.
  • The addition of the known noise signal has the effect of forcing the "extra" coefficients (above the number actually required) to model a known function and hence to give consistent results which are less problematic during model/test vector comparison. This is particularly relevant for suppressing the effect of noise (channel noise and other noise) during "silences" in the speech sample data. This problem may also be addressed as a consequence of the use of massively overlapping sample frames, discussed below.
  • Models generated by speaker recognition systems thus comprise a plurality of feature sets (vectors corresponding to sets of coefficients) representing a plurality of frames.
  • Fig. 8(a) shows a time signal divided into frames, and Fig. 8(b) shows the corresponding frequency spectrum and smoothed frequency spectrum of one of the frames of Fig. 8(a).
  • The systems then perform further transformations and analysis (such as Cepstral transformation, Vector Quantisation, Hidden Markov Modelling (HMM) and Dynamic Time Warping (DTW)) to obtain the desired result.
  • Frame boundaries can be allocated in many ways, but are usually measured from an arbitrary starting point estimated to be the starting point of the useful speech signal.
  • HMM and DTW are used when comparing two or more utterances such as when building models or when comparing models with test utterances.
  • The HMM/DTW compensation is generally done at a point in the system following spectral analysis, using whatever coefficient set is used to represent the content of a frame, and does not refer to the original time signal.
  • The alignment precision is thus limited to the size of a frame.
  • These techniques assume that the alignment of a particular frame will be within a fixed region of an utterance, which is within a few frames of where it is expected to lie.
  • This approach derives from speech recognition methods (e.g. speech-to-text conversion), where it is used to estimate a phonetic sequence from a series of frames. The present applicants believe that this approach is inappropriate for speaker recognition, for the following reasons.
  • The conventional approach relies on the temporal sequence of the frames and bases speaker verification on spectral characteristics derived from temporally adjacent frames.
  • The present enrolment modelling process involves the use of very large frame overlaps, akin to convolution, to avoid problems arising from frame alignment between models (discussed at A. above) and to improve the quality of the model obtained.
  • This technique is applied during speaker enrolment in order to obtain a model, preferably based on repeated utterances of the enrolment phrase.
  • Preferably, the frame overlap is selected to be at least 80%; more preferably it is in the range 80% to 90%, and it may be as high as 95% (see the illustrative sketch below).
  • Each utterance employed in the reference model generated by the enrolment process is represented by a matrix (typically having a size of the order of 1000 frames by 24 coefficients, as previously described).
  • A clustering or averaging technique such as Vector Quantisation (described further below) is then used to reduce the data to produce the reference model for the speaker.
  • This model does not depend on the temporal order of the frames, addressing the problems described at B. above.
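  • By way of illustration only (the frame length and exact overlap are assumed values, not taken from the patent), massively overlapping frames can be generated as follows; with a 90% overlap the hop between successive frames is only a tenth of the frame length, so the resulting matrix represents almost every possible frame alignment:

```python
import numpy as np

def overlapping_frames(signal, frame_len=256, overlap=0.90):
    """Slice a digitised utterance into massively overlapping, Hamming-windowed
    frames.  Returns a (n_frames x frame_len) matrix ready for LPCC analysis."""
    hop = max(1, int(frame_len * (1.0 - overlap)))   # e.g. 25 samples at 90%
    window = np.hamming(frame_len)
    starts = range(0, len(signal) - frame_len + 1, hop)
    return np.stack([window * signal[s : s + frame_len] for s in starts])

# a one-second utterance at 8 kHz yields (8000 - 256) // 25 + 1 = 310 frames
frames = overlapping_frames(np.random.default_rng(0).standard_normal(8000))
```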
  • Preferred embodiments of the present invention combine the massive overlapping of frames described above with Vector Quantisation or the like as described below. This provides a mode of operation which is quite different from conventional HMM/DTW systems. In such conventional systems, all frames are considered equally valid and are used to derive a final "score" for thresholding into a yes/no decision, generally by accumulating scores derived by comparing and matching individual frames. The validity of the scores obtained is limited by the accuracy of the frame alignments.
  • The reference (enrolment) models represent a large number of possible frame alignments. Rather than matching individual frames of a test utterance with individual frames of the reference models and deriving scores for each matched pair of frames, this allows all frames of the test utterance to be compared and scored against every frame of the reference model, giving a statistical distribution of the frequency of occurrence of frame score values. "Good" frame matches will yield low scores and "poor" frame matches will yield high scores (or the converse, depending on the scoring scheme).
  • A test utterance frame tested against a large number of reference models will result in a normal distribution, as illustrated in Fig. 10. Most frame scores will lie close to the mean and within a few standard deviations therefrom.
  • For the true speaker, the score distributions will include "best matches" between accurately aligned corresponding frames of the test utterance and reference models.
  • The distribution will thus include a higher incidence of very low scores. This ultimately results in "true speaker" scores being consistently low, due to some parts of the utterance being easily identified as originating from the true speaker, while other parts, less obviously from the true speaker, are classified as being from the general population. Impostor frames will not produce low scores and will be classified as being from the general population.
  • In other words, the reference models comprise sets of coefficients derived for a plurality of massively overlapping frames, and a test utterance is tested by comparing all of the frames of the test utterance with all of the frames of the relevant reference models and analysing the distribution of frame scores obtained therefrom (a sketch of this all-frames-against-all-frames scoring follows).
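  • The following sketch is illustrative only; Euclidean distance is used as a stand-in for whatever frame score the system actually computes:

```python
import numpy as np

def frame_scores(test_frames, model_frames):
    """Score every row (frame) of the test matrix against every row of a
    reference model.  The histogram of the returned matrix is the frame-score
    distribution; genuine speakers contribute an excess of very low scores."""
    diff = test_frames[:, None, :] - model_frames[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

scores = frame_scores(np.random.rand(300, 24), np.random.rand(64, 24))
best = scores.min(axis=1)                      # best-match score per test frame
hist, edges = np.histogram(scores, bins=50)    # distribution as in Fig. 10
```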
  • The massive overlapping of frames applied to speech samples for enrolment purposes may also be applied to input utterances during subsequent speaker recognition, but this is not necessary.
  • The invention uses clustering or averaging techniques, such as Vector Quantisation, applied by the modules 28, 28a-n in a manner that differs from the statistical analysis techniques used in conventional speaker recognition systems.
  • The system of the present invention uses a Vector Quantisation (VQ) technique in processing the enrolment sample data output from the spectral analysis modules 26, 26a-n.
  • This is a simplified technique compared with statistical analysis techniques such as HMM employed in many prior art systems, resulting in transparent modelling that provides models in a form which allows model-against-model comparisons in the subsequent classification stage.
  • VQ as deployed in the present invention does not use temporal information, making the system resistant to impostors.
  • The VQ process effectively compresses the LPCC output data by identifying clusters of data points, determining average values for each cluster, and discarding data which do not clearly belong to any cluster. This results in a set of second matrices of second coefficients, representing the LPCC data of the set of first matrices, but of reduced size (typically, for example, 64 x 24 as compared with 1000 x 24).
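  • As an illustrative sketch only (a plain k-means implementation with an assumed codebook size; the patent does not prescribe a particular VQ algorithm), the clustering step that reduces a 1000 x 24 frame matrix to a 64 x 24 reference model might look like this:

```python
import numpy as np

def vq_codebook(frames, n_centres=64, n_iter=25, seed=0):
    """Compress a (n_frames x n_coeffs) LPCC matrix into a VQ codebook: each
    codebook row ends up as the mean of one cluster of frame vectors."""
    rng = np.random.default_rng(seed)
    centres = frames[rng.choice(len(frames), n_centres, replace=False)]
    for _ in range(n_iter):
        dist = ((frames[:, None, :] - centres[None, :, :]) ** 2).sum(axis=-1)
        labels = dist.argmin(axis=1)          # nearest centre for every frame
        for k in range(n_centres):
            members = frames[labels == k]
            if len(members):                  # leave empty clusters untouched
                centres[k] = members.mean(axis=0)
    return centres

model = vq_codebook(np.random.rand(1000, 24))   # 64 x 24 reference model
```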
  • It is assumed that only the spectral magnitude is useful and that the phase may be disregarded. This is known to apply to human hearing, and if it were not applied to a verifier the system would exhibit undesirable phase-related problems, such as sensitivity to the distance of the microphone from the speaker.
  • It is further assumed that the spectral information of a speech sample can be regarded as consisting of two parts, a static part ss(ω) and a dynamic part sd(ω), and that the processes are multiplicative. It is also assumed that the dynamic part is significantly larger than the static part.
  • s(ω) = ss(ω) x sd(ω)
  • Because the static part is fixed, it is the more useful as a biometric, as it will be related to the static characteristics of the vocal tract. This relates the measure to some fixed physical characteristic, as opposed to sd(ω), which is related to the dynamics of the speech.
  • The complete extraction of ss(ω) would give a biometric which exhibits the properties of a physical biometric, i.e. it cannot be changed at will and does not deteriorate over time.
  • The exclusive use of sd(ω) will give a biometric which exhibits the properties of a behavioural biometric, i.e. it can be changed at will and will deteriorate over time.
  • A mixture of the two should exhibit intermediate properties, but as sd(ω) is much larger than ss(ω) it is more likely that a combination will exhibit the properties of sd(ω), i.e. behavioural.
  • In theory, the assumption is that the time signal exists from -∞ to +∞, which clearly is not physically possible.
  • In practice, all spectral estimates of a signal will be made using a window, which exists for a finite period of time.
  • The window can either be rectangular or shaped by a function (such as a Hamming window).
  • Fig. 1 shows a time signal with the frames indicated.
  • The frames can be shaped using an alternative window.
  • The major effect of windowing is a spreading of the characteristic of a particular frequency to its neighbours, a kind of spectral averaging. This effect is caused by the main lobe; in addition, the side lobes produce spectral oscillations, which are periodic in the spectrum.
  • The present system later extracts the all-pole Linear Prediction coefficients, which have the intended effect of spectral smoothing, so the extra smoothing caused by the windowing is not seen as a major issue.
  • The periodic side lobe effects might be troublesome if the window size were inadvertently changed. This can, however, be avoided by good housekeeping.
  • For real-world conditions it cannot be assumed that N (the number of frames) would be large in the sense that the frames have independent spectral characteristics. It is important to remember that N would need to be large under two conditions: 1. during model creation; and 2. during a verification event. Failure to comply during either would potentially cause a system failure (error); however, a failure in 1 is the more serious, as it would remain a potential source of error until the model is updated, whereas a problem in 2 is a single-instance event.
  • Since U(ω) cannot be guaranteed to converge to white noise, what can be done to cope with the situation? It can be observed that: 1. U(ω) will be a variable quantity; 2. the smoothed spectrum Usm(ω) tends to 1; and 3. U(ω) is the truncated sum of the speech frames, the number of which would ideally tend to infinity.
  • Cepstral analysis consists of the chain: time domain → frequency domain → log(spectrum) → time domain.
  • Cepstral transformation has been used in speech analysis in many forms.
  • Ideally, the length of the speech signal would be long enough that the dynamic part was completely random and its mean would tend to zero. This would leave the static part cs(t) as our biometric measure.
  • Cepstral coefficients are such that they decay with increasing time and have the appearance of an impulse response for stable systems. This means that the dynamic range of each coefficient is different, and the coefficients are in general in descending order.
  • The errors e1 and e2 above are average model construction errors; the actual errors arise on a frame-by-frame basis and will have a distribution about the mean. This distribution could be modelled in a number of ways, the simplest being by use of a standard clustering technique such as k-means.
  • k-means clustering is also known in other forms as Vector Quantisation (VQ) and is a major part of the Self-Organising Map (SOM), also known as the Kohonen Artificial Neural Network.
  • The FRR and FAR are largely decoupled: the FRR is fixed by the quality of the model produced and the FAR is fixed by the cohort size. It is also worth observing that to halve the error rate we need to double the cohort size; e.g. for 99% accuracy the cohort is 50, for 99.5% accuracy the cohort is 100, and for 99.75% accuracy the cohort is 200. As the cohort increases the computational load increases, and in fact doubles for each halving of the error rate. As the cohort increases to very large numbers the decoupling of the FRR and FAR will break down and the FRR will begin to increase (the scaling is illustrated in the sketch below).
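  • The cohort-size scaling can be checked with a toy simulation (illustrative only; it assumes a random impostor is equally likely to be matched to any of the N cohort members, so the FAR is roughly 1/N and halves each time the cohort doubles, consistent with the figures quoted above):

```python
import numpy as np

rng = np.random.default_rng(1)
for cohort_size in (50, 100, 200):
    # an impostor trial is falsely accepted only when the best-matching model
    # out of `cohort_size` equally likely candidates is the claimed identity
    matches = rng.integers(0, cohort_size, size=200_000)
    far = np.mean(matches == 0)
    print(f"cohort {cohort_size:3d}: FAR ~ {far:.4f} (expected {1 / cohort_size:.4f})")
```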
  • The approach, in accordance with one aspect of the invention, is to use parallel processes (also discussed elsewhere in the present description) which exhibit slightly different impostor characteristics and are thus partially statistically independent with respect to the identifier strategy.
  • The idea is to take a core identifier which exhibits zero or approximately zero FRR and which has a FAR that is set by the cohort size.
  • The front-end processing of this core identifier is then modified slightly to reorder the distances of the cohort member models from the true speaker model. This is done while maintaining the FRR at approximately zero, and can be achieved by altering the spectral shaping filters 24a-24n (see Fig. 7), or by altering the transformed coefficients, such as by using delta-cepstra etc.
  • The scheme for matching a test sample against a claimed identity may require a successful match for each process, or may require a predetermined proportion of successful matches.
  • The combined use of massive sample frame overlaps and Vector Quantisation (or equivalent) in building enrolment models in accordance with the present invention provides particular advantages.
  • The massive overlapping is applied at the time of constructing the models, although it could also be applied at the time of testing an utterance.
  • The technique involves using a massive frame overlap, typically 80-90%, to generate a large number of possible alignments; the frames generated by the alignments are then transformed into representative coefficients using the LPCC transformation to produce a matrix of coefficients representing all of the alignments. This avoids conventional problems of frame alignment.
  • The matrix is typically of the size no_of_frames by LPCC_order, for example 1000 x 24.
  • The combination of VQ and massive frame overlapping produces an operation mode which is different from that of conventional systems based upon HMM/DTW.
  • In HMM/DTW, all frames are considered to be equally valid and are used to form a final score for thresholding into a yes/no decision.
  • In the present system, every row (frame) of the test sample data is tested against every row of the enrolment model data for the claimed speaker and the associated impostor cohort. For each row of the test sample data, a best match can be found with one row of the enrolment model, yielding a test score for the test sample against each of the relevant enrolment models. The test sample is matched to the enrolment model that gives the best score. If the match is with the claimed identity, the test speaker is accepted. If the match is with an impostor, the test speaker is rejected.
  • The present system uses LPCC and VQ modelling (or similar/equivalent spectral analysis and clustering techniques), together with massive overlapping of the sample frames, to produce the reference models for each enrolled speaker, which are stored in the database.
  • An input test utterance is subjected to similar spectral analysis to obtain an input test model which can be tested against the enrolled speaker data-set.
  • This approach can be applied so as to obtain a very low False Rejection Rate (FRR), substantially equal to zero. The significance of this is discussed further below.
  • Referring to Fig. 7, one preferred embodiment of a speaker recognition system employing parallel modelling processes in accordance with one aspect of the invention comprises an input channel 100 for inputting a signal representing a speech sample to the system, a channel normalisation process 200 as described elsewhere, a plurality of parallel signal processing channels 102a, 102b ... 102n, a classification module 110 and an output channel 112.
  • The system further includes an enrolled speaker data-set 114; i.e. a database of speech models obtained from speakers enrolled to use the system.
  • The speech sample data is processed in parallel by each of the processing channels 102a-n, the output from each of the processing channels is input to the classification module 110, which communicates with the database 114 of enrolled speaker data, and a decision as to the identity of the source of the test utterance is output via the output channel 112.
  • Each of the processing channels 102a-n comprises, in series, a spectral shaping filter 24a-n, an (optional) added noise input 206a-n, as described elsewhere, a spectral analysis module 26a-n and a statistical analysis module 28a-n.
  • The outputs from each of the statistical analysis modules 28a-n are input to the classification module 110.
  • The spectral shaping filters 24a-n comprise a bank of filters which together divide the utterance signal into a plurality of overlapping frequency bands, each of which is then processed in parallel by the subsequent modules 26a-n and 28a-n.
  • The number of processing channels, and hence the number of frequency bands, may vary, with more channels providing more detail in the subsequent analysis of the input data.
  • Preferably, at least two channels are employed, and more preferably at least four channels.
  • The filters 24a-n preferably constitute a low-pass, band-pass or high-pass filter bank.
  • The bandwidth of the base filter 24a is selected such that the False Rejection Rate (FRR) resulting from subsequent analysis of the output from the first channel 102a is zero or as close as possible to zero.
  • The subsequent filters 24b-n have incrementally increasing bandwidths that incrementally pass more of the signal from the input channel 100.
  • The FRR for the output from each channel 102a-n is thus maintained close to zero, whilst the different channel outputs have slightly different False Acceptance (FA) characteristics (a sketch of such a filter bank follows).
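  • A minimal sketch of such a filter bank is given below (illustrative only: the cut-off frequencies, filter order and sample rate are assumed, and the patent does not mandate Butterworth filters):

```python
import numpy as np
from scipy.signal import butter, lfilter

def parallel_channels(speech, fs=8000, cutoffs=(1000, 2000, 3000, 3800)):
    """Split a speech signal into parallel low-pass channels of incrementally
    increasing bandwidth; each output is then modelled and classified
    independently, giving slightly different false-acceptance behaviour."""
    outputs = []
    for fc in cutoffs:
        b, a = butter(4, fc / (fs / 2), btype="low")
        outputs.append(lfilter(b, a, speech))
    return outputs

channels = parallel_channels(np.random.default_rng(0).standard_normal(8000))
```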
  • Each enrolment model may include data-sets for each of a plurality of enrolment utterances. For each enrolment utterance, there will be a matrix of data representing the output of each of the parallel modelling processes. Each of these matrices represents the clustered/averaged spectral feature vectors. Test sample data is subject to the same parallel spectral analysis processes, but without clustering/averaging, so that the test model data comprises a matrix representing the spectral analysis data for each of the parallel modelling processes. When a test model is tested against an enrolment model, the test matrix representing a particular modelling process is tested against enrolment matrices generated by the same modelling process.
  • Preferably, each enrolment model is associated with an impostor cohort. That is, for the reference model of each enrolled speaker ("subject"), there is an impostor cohort comprising a predetermined number of reference models of other enrolled speakers, specific to that subject and having a known and predictable relationship to the subject's reference model. These predictable relationships enable the performance of the system to be improved.
  • Fig. 11(a) shows the results obtained by a conventional speaker recognition system, similar to Fig. 3, comparing scores for an input utterance tested against reference data for eight speakers.
  • Speaker 1 is the true speaker, but the scores for some of the other speakers are sufficiently close to reduce significantly the degree of confidence that the system has identified the correct speaker.
  • Fig. 11(b) shows equivalent results obtained using a system in accordance with the present invention. It can be seen that the results for speaker 1 are much more clearly distinguished from the results of all of the other speakers 2 to 8.
  • The speaker modelling method employed in the preferred embodiments of the present invention is inherently simpler (and, in strict mathematical terms, cruder) than conventional techniques such as HMM and possible alternatives such as Gaussian mixture models, but has been found to be much more effective in practice.
  • Further, the temporal nature of HMM makes it susceptible to mimics, a problem which is avoided by the present invention.
  • Moreover, the models of the present invention are ideally suited to enabling analysis of the structure of the enrolled speaker data-set by model-against-model testing.
  • VQ modelling involves choosing the size of the model, i.e. choosing the number of coefficients ("centres"). Once this has been done, the positions of the centres can be moved around until they give the best fit to all of the enrolment data vectors. This effectively means allocating a centre to a cluster of enrolment vectors, so each centre in the model represents a cluster of information important to the speaker identity.
  • Fig. 12 illustrates the results of testing reference models for speakers 2 to 8 against the reference model for speaker 1.
  • The ellipses show the model-against-model results, whilst the stars show actual scores for speaker utterances tested against model 1. It can be seen that the model-against-model tests can be used to predict the actual performance of a particular speaker against a particular reference model.
  • The model-against-model results tend to lie at the bottom of the actual score distributions and therefore indicate how well a particular impostor will perform against model 1. This basic approach of using model-against-model tests to predict actual performance is known as such.
  • A further advantage of model-against-model testing is the ability to predict the performance of a test utterance against some or, if need be, all of the enrolled speaker models. This enables a virtually unlimited number of test patterns to be used to confirm an identity, which is not possible with conventional systems.
  • In particular, model-against-model test results may be used to assemble a specific impostor cohort for use with each reference model.
  • This allows accurate score normalisation and also allows each model to be effectively "guarded" against impostors by using a statistically variable grouping which is selected for each enrolled speaker.
  • Each reference model can be regarded as a point in a multi-dimensional dataspace, so that "distances" between models can be calculated (a sketch of one such model-to-model distance follows). Fig. 13 illustrates this idea in two dimensions for clarity, where each star represents a model and the two-dimensional distance represents the distance between models.
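  • One way to compute such a distance is sketched below (illustrative only: a symmetrised average nearest-centre distance between two VQ codebooks, a common codebook distance used here as a stand-in for the patent's unspecified measure):

```python
import numpy as np

def model_distance(model_a, model_b):
    """Symmetrised average nearest-centre distance between two VQ codebooks."""
    d = np.sqrt(((model_a[:, None, :] - model_b[None, :, :]) ** 2).sum(axis=-1))
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# pairwise distances place every enrolled model as a point in "speaker space"
models = [np.random.rand(64, 24) for _ in range(8)]
dist = [[model_distance(a, b) for b in models] for a in models]
```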
  • Fig. 14 shows, in a similar manner to Fig. 13, a subject model represented by a circle, members of an impostor cohort represented by stars, and a score for an impostor claiming to be the subject, represented by an "x".
  • In this example, the impostor score is sufficiently close to the subject model to cause recognition problems.
  • However, because the relationships between the subject model and the cohort models are known, this information can be used to distinguish the impostor x from the true subject, by testing the impostor against the models of the cohort members as well as against the true subject model. That is, it can be seen that the impostor utterance x is closer to some of the cohort members than would be expected for the true subject, and further away from others than expected. This indicates an impostor event and results in the impostor utterance being rejected as a match for the true subject.
  • The database of enrolled speakers (the "speaker space") is partitioned; e.g. so that each speaker enrolled in the system is assigned to a cohort comprising a fixed number N of enrolled speakers, as described above.
  • The speaker classification module of the system (e.g. the module 110 in the system of Fig. 4) operates such that the input test utterance is compared with all of the members of the cohort associated with the identity claimed by the speaker, and the test utterance is classified as corresponding to that member of the cohort which provides the best match. That is, the test utterance is always matched to one member of the cohort, and will never be deemed not to match any member of the cohort. If the cohort member to which the utterance is matched corresponds to the claimed identity, then the claimed identity is accepted as true. If the utterance is matched to any other member of the cohort, then the claimed identity is rejected as false (a sketch of this decision rule is given below).
  • Because the cohort is of a fixed size N, the system is scalable to any size of population while maintaining a fixed and predictable error rate. That is, the accuracy of the system is based on the size of the cohort and is independent of the size of the general population, making the system scalable to very large populations. Accuracy can be improved by increasing the cohort size, as long as the false rejection rate does not increase significantly.
  • If desired, thresholds could still be used to reduce false acceptances; i.e. once a test utterance has been matched to the claimed identity using the foregoing strategy, thresholds could be applied to determine whether the match is close enough to be finally accepted.
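  • The closed-set cohort decision described above can be sketched as follows (illustrative only: the mean best-match frame score is an assumed scoring rule, and the optional threshold corresponds to the final bullet above):

```python
import numpy as np

def verify(test_frames, claimed_model, cohort_models, threshold=None):
    """Accept only if the claimed identity's model is the best match among
    the claimed model and its fixed-size impostor cohort."""
    def score(model):
        d = np.sqrt(((test_frames[:, None, :] - model[None, :, :]) ** 2).sum(-1))
        return d.min(axis=1).mean()      # mean best-match frame score

    claimed = score(claimed_model)
    if any(score(m) < claimed for m in cohort_models):
        return False                     # matched an impostor cohort member
    return threshold is None or claimed < threshold
```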
  • The selection of an impostor cohort associated with a particular enrolment model may involve the use of algorithms, so that the members of the impostor cohort have a particular relationship with the enrolment model in question. In principle, this may provide a degree of optimisation in the classification process. However, it has been found that a randomly selected impostor cohort performs equally well for most practical purposes. The most important point is that the cohort size should be predetermined in order to give predictable performance.
  • The impostor cohort for a particular enrolment model may be selected at the time of enrolment or at the time of testing a test utterance.
  • The performance of a speaker recognition system in accordance with the invention may be improved by the use of multiple parallel classification processes. Generally speaking, such processes will be statistically independent or partially independent. This approach will provide multiple classification results which can be combined to derive a final result, as illustrated in Fig. 5.
  • A further problem encountered with conventional speaker recognition systems is that system performance may be affected by differences between the speech sampling systems used for initial enrolment and subsequent recognition. Such differences arise from different transducers (microphones), soundcards etc.
  • These difficulties can be obviated or mitigated by normalising speech samples on the basis of a normalisation characteristic which is obtained and stored for each sampling system (or, possibly, each type of sampling system) used to input speech samples to the recognition system.
  • Alternatively, the normalisation characteristic can be estimated "on the fly" when a speech sample is being input to the system.
  • The normalisation characteristic(s) can then be applied to all input speech samples, so that reference models and test scores are independent of the characteristics of particular sampling systems.
  • Alternatively, a normalisation process can be applied at the time of testing test sample data against enrolment sample data.
  • A normalisation characteristic is effectively a transfer function of the sampling system and can be derived, for example, by inputting a known reference signal to the sampling system and processing the sampled reference signal through the speech recognition system. The resulting output from the recognition system can then be stored and used to normalise speech samples subsequently input through the same sampling system or the same type of sampling system.
  • Alternatively, as illustrated in Fig. 15, a speech signal S(f) which has been modified by the transfer function C(f) of an input channel 300 can be normalised on the fly by inputting the modified speech signal S(f)*C(f) to an estimating module 302, which estimates the transfer function C(f) of the channel 300, and to a normalisation module 304, and applying the inverse of the estimated transfer function 1/C(f) to the normalisation module, so that the output from the normalisation module closely approximates the input signal S(f).
  • The estimator module 302 creates a digital filter with the spectral characteristics of the channel 300, and the inverse of this filter is used to normalise the signal.
  • The inverse filter can be calculated by determining the all-pole filter which represents the spectral quality of a sample frame. The filter coefficients are then smoothed over the frames to remove as much of the signal as possible, leaving the spectrum of the channel C(f). The estimate of the channel spectrum is then used to produce the inverse filter 1/C(f) (a sketch of this idea, in spectral-division form, follows below).
  • This basic approach can be enhanced to smooth the positions of the poles of the filters obtained for the frames, with intelligent cancellation of the poles to remove those which are known not to be concerned with the channel characteristics.
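  • A spectral-division form of this idea is sketched below (illustrative only: instead of an explicit all-pole fit, the per-frame log spectra are averaged over the utterance, on the assumption that the varying speech content averages out while the fixed channel response remains):

```python
import numpy as np

def channel_normalise(frames, eps=1e-10):
    """Estimate the stationary channel characteristic C(f) as the mean per-frame
    log spectrum, then divide it out of every frame (the inverse filter 1/C(f)
    implemented as spectral division)."""
    log_spec = np.log(np.abs(np.fft.rfft(frames, axis=1)) + eps)
    channel = log_spec.mean(axis=0)          # estimate of log |C(f)|
    return np.exp(log_spec - channel)        # normalised magnitude spectra
```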
  • The normalisation process can be applied to the speech sample prior to processing by the speaker recognition system, or to the spectral data, or to the model generated by the system.
  • A preferred method of channel normalisation is applied to the test model data and the relevant enrolment models at the time of testing the test sample against the enrolment models.
  • The unwanted channel characteristic can be estimated and removed. In practice the removal can be achieved in the time domain, the frequency domain or a combination of the two. Both achieve the same effect, that is, to estimate cc(ω), the spectrum of the unwanted channel, and remove it using some form of inverse filter or spectral division, recovering the underlying speech spectrum s(ω).
  • An alternative implementation is to model the channel characteristic as a filter, most likely in the all-pole form.
  • Fig. 17 illustrates various sources of corruption of a speech sample in a speaker recognition system.
  • The input speech signal s(t) is altered by environmental background noise b(t), the recording device bandwidth r(t), electrical noise and channel crosstalk t(t), and the transmission channel bandwidth c(t), so that the signal input to the recognition system is an altered signal v(t).
  • The cohort models are selected from the database of enrolled speakers, and the test speaker can either be the true speaker or an impostor.
  • The removal (reduction) of the channel characteristics using the silence model as described above requires suitable channel noise and perfect detection of the silence parts of the utterance. As these cannot be guaranteed, their effects need to be mitigated (for instance, if the silence includes some speech, we will include some of the claimed identity speaker's static speech and inadvertently remove it). Fortunately, both can be dealt with in one simple modification to the process: the cohort models should all be referred to the same silence model.
  • Fig. 19 shows the Cepstral coefficients of the test utterance together with the claimed identity model and the cohort models 1 to m being input to the classifier 110.
  • a "silence model” or “normalisation model” 400 derived from the claimed identity enrolment data is used to normalise each of these before input to the classifier, so that the actual inputs to the classifier are a normalised test utterance, normalised claimed identity model and normalised cohort models.
  • the normalisation model 400 is based on data from periods of silence in the claimed identity enrolment sample as discussed above, but it could be derived from the complete claimed identity enrolment sample.
  • the normalisation model comprises a single row of Cepstral coefficients, each of which is the mean value of one column (or selected members of one column) of Cepstral coefficients from the claimed identity model. These mean values are used to replace the mean values of each of the sets of input data. That is, taking the test utterance as an example, the mean value of each column of the test utterance Cepstral coefficients is subtracted from each individual member of that column and the corresponding mean value from the normalisation model is added to each individual member of the column. A similar operation is applied to the claimed identity model and each of the cohort models.
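  • The mean-replacement operation just described reduces to a few lines (illustrative only; the normalisation model is the single row of column means drawn from the claimed identity model):

```python
import numpy as np

def refer_to_model(ceps, norm_model):
    """Subtract each column's own mean from a Cepstral-coefficient matrix and
    add the corresponding mean from the normalisation (silence) model."""
    return ceps - ceps.mean(axis=0) + norm_model   # norm_model: one row of means

# the same operation is applied to the test utterance, the claimed identity
# model and every cohort model before they reach the classifier 110
```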
  • In principle, the normalisation model could be derived from the claimed identity model, from the test utterance or from any of the cohort models. It is preferable for the model to be derived from either the claimed identity model or the test utterance, and most preferable for it to be derived from the claimed identity model.
  • The normalisation model could be derived from the "raw" enrolment sample Cepstral coefficients or from the final model after Vector Quantisation. That is, it could be derived at the time of enrolment and stored along with the enrolment model, or it could be calculated when needed as part of the verification process. Generally, it is preferred that a normalisation model is calculated for each enrolled speaker at the time of enrolment and stored as part of the enrolled speaker database.
  • Speaker recognition systems in accordance with the invention provide improved real world performance for a number of reasons.
  • The modelling techniques employed significantly improve separation between true speakers and impostors.
  • This improved modelling makes the system less sensitive to real world problems such as changes of sound system (voice sampling system) and changes of speaker characteristics (due to, for example, colds etc.).
  • The modelling technique is non-temporal in nature, so that it is less susceptible to temporal voice changes, thereby providing longer persistence of speaker models.
  • Filter pre-processing allows the models to be used for variable bandwidth conditions; e.g. models created using high fidelity sampling systems such as multimedia PCs will work with input received via reduced bandwidth input channels such as telephony systems.
  • The invention thus provides the basis for flexible, reliable and simple voice recognition systems operating on a local or wide area basis and employing a variety of communications/input channels.
  • Fig. 16 illustrates one example of a wide area system operating over local networks and via the Internet, to authenticate users of a database system server 400, connected to a local network 402, such as an Ethernet network, and, via a router 404, to the Internet 406.
  • A speaker authentication system server 408, implementing a speaker recognition system in accordance with the present invention, is connected to the local network for the purpose of authenticating users of the database 400. Users of the system may of course be connected directly to the local network 402.
  • Users at sites such as 410 and 412 may access the system via desktop or laptop computers 414, 416 equipped with microphones and connected to other local networks which are in turn connected to the Internet 406.
  • Other users such as 418, 420, 422 may access the system by dial-up modem connections via the public switched telephone network 424 and Internet Service Providers 426.
  • The algorithms employed by speaker recognition systems in accordance with the invention may be implemented as computer programs using any suitable programming language such as C or C++, and executable programs may take any required form, including stand-alone applications on any hardware/operating system platform, embedded code in DSP chips etc. (hardware/firmware implementations), or be incorporated into operating systems (e.g. as MS Windows DLLs).
  • User interfaces, for purposes of both system enrolment and subsequent system access, and speech sampling may be implemented using, for example, ActiveX/Java components or the like.
  • The system is applicable to other terminal devices, including palmtop devices, WAP-enabled mobile phones etc., via cabled and/or wireless data/telecommunications networks.
Speaker recognition systems having the degree of flexibility and reliability provided by the present invention have numerous applications. One particular example, in accordance with a further aspect of the present invention, is in providing an audit trail of users accessing and/or modifying digital information such as documents or database records. Such transactions can be recorded, providing information regarding the date/time and identity of the user, as is well known in the art. Conventional systems, however, do not normally verify or authenticate the identity of the user. Speaker recognition, preferably using a speaker recognition system in accordance with the present invention, may be used to verify the identity of a user whenever required; e.g. when opening and/or editing and/or saving a digital document, database record or the like. The document or record itself may be marked with data relating to the speaker verification procedure, or such data may be recorded in a separate audit trail, providing a verified record of access to and modification of the protected document or record. Unauthorised users identified by the system will be denied access or prevented from performing actions which are monitored by the system. A minimal sketch of such an audit record follows.
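The sketch below illustrates the audit-trail idea just described, assuming a plain tab-separated log file; the record format, field names and storage are illustrative assumptions, as the description leaves them open. Each monitored action (open/edit/save) triggers a speaker verification whose outcome is logged with the date/time, identity and document concerned.

```cpp
#include <ctime>
#include <fstream>
#include <string>

struct AuditEvent {
    std::time_t when;        // date/time of the transaction
    std::string userId;      // claimed identity of the user
    std::string action;      // e.g. "open", "edit", "save"
    std::string documentId;  // the protected document or record
    bool verified;           // outcome of the speaker verification
    double score;            // verification score stored with the event
};

// Append one event to the audit trail. Callers deny the action itself when
// verified is false, in line with the behaviour described above.
void appendToAuditTrail(const std::string& logPath, const AuditEvent& e) {
    std::ofstream log(logPath, std::ios::app);
    log << e.when << '\t' << e.userId << '\t' << e.action << '\t'
        << e.documentId << '\t' << (e.verified ? "VERIFIED" : "REJECTED")
        << '\t' << e.score << '\n';
}
```

A call such as appendToAuditTrail("audit.log", {std::time(nullptr), "asmith", "save", "doc-42", true, 0.93}) would record one verified save; the same data could equally be embedded in the document itself, as the description notes.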
EP02738369A 2001-06-19 2002-06-13 Speaker verification Expired - Lifetime EP1399915B1 (fr)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
GBGB0114866.7A GB0114866D0 (en) 2001-06-19 2001-06-19 Speaker recognition systems
GB0114866 2001-06-19
US30250101P 2001-07-02 2001-07-02
US302501P 2001-07-02
PCT/GB2002/002726 WO2002103680A2 (fr) 2001-06-13 Speaker recognition systems

Publications (2)

Publication Number Publication Date
EP1399915A2 true EP1399915A2 (fr) 2004-03-24
EP1399915B1 EP1399915B1 (fr) 2009-03-18

Family

ID=26246204

Family Applications (1)

Application Number Title Priority Date Filing Date
EP02738369A Expired - Lifetime EP1399915B1 (fr) 2001-06-19 2002-06-13 Speaker verification

Country Status (8)

Country Link
US (1) US20040236573A1 (fr)
EP (1) EP1399915B1 (fr)
CN (1) CN100377209C (fr)
AT (1) ATE426234T1 (fr)
AU (1) AU2002311452B2 (fr)
CA (1) CA2451401A1 (fr)
DE (1) DE60231617D1 (fr)
WO (1) WO2002103680A2 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019145708A1 (fr) * 2018-01-23 2019-08-01 Cirrus Logic International Semiconductor Limited Speaker identification

Families Citing this family (79)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003131683A (ja) * 2001-10-22 2003-05-09 Sony Corp Speech recognition apparatus, speech recognition method, program, and recording medium
EP1565906A1 (fr) * 2002-11-22 2005-08-24 Koninklijke Philips Electronics N.V. Speech recognition device and method
KR100937572B1 (ko) 2004-04-30 2010-01-19 Hillcrest Laboratories, Inc. 3D pointing device and method
US7386448B1 (en) * 2004-06-24 2008-06-10 T-Netix, Inc. Biometric voice authentication
US20060020447A1 (en) * 2004-07-26 2006-01-26 Cousineau Leo E Ontology based method for data capture and knowledge representation
US20060020493A1 (en) * 2004-07-26 2006-01-26 Cousineau Leo E Ontology based method for automatically generating healthcare billing codes from a patient encounter
US20060020466A1 (en) * 2004-07-26 2006-01-26 Cousineau Leo E Ontology based medical patient evaluation method for data capture and knowledge representation
US20060020465A1 (en) * 2004-07-26 2006-01-26 Cousineau Leo E Ontology based system for data capture and knowledge representation
US7363223B2 (en) * 2004-08-13 2008-04-22 International Business Machines Corporation Policy analysis framework for conversational biometrics
US10223934B2 (en) 2004-09-16 2019-03-05 Lena Foundation Systems and methods for expressive language, developmental disorder, and emotion assessment, and contextual feedback
US8078465B2 (en) * 2007-01-23 2011-12-13 Lena Foundation System and method for detection and analysis of speech
US8938390B2 (en) 2007-01-23 2015-01-20 Lena Foundation System and method for expressive language and developmental disorder assessment
US9355651B2 (en) 2004-09-16 2016-05-31 Lena Foundation System and method for expressive language, developmental disorder, and emotion assessment
US9240188B2 (en) 2004-09-16 2016-01-19 Lena Foundation System and method for expressive language, developmental disorder, and emotion assessment
US8137195B2 (en) 2004-11-23 2012-03-20 Hillcrest Laboratories, Inc. Semantic gaming and application transformation
FR2893157B1 (fr) * 2005-11-08 2007-12-21 Thales Sa Decision-aid method for the comparison of biometric data
JP4745094B2 (ja) * 2006-03-20 2011-08-10 Fujitsu Limited Clustering system, clustering method, clustering program, and attribute estimation system using the clustering system
US7809170B2 (en) 2006-08-10 2010-10-05 Louisiana Tech University Foundation, Inc. Method and apparatus for choosing and evaluating sample size for biometric training process
WO2008022157A2 (fr) * 2006-08-15 2008-02-21 Vxv Solutions, Inc. Réglage adaptatif de moteurs biométriques
CN101154380B (zh) * 2006-09-29 2011-01-26 Toshiba Corporation Method and apparatus for enrolment and verification of speaker authentication
US7650281B1 (en) * 2006-10-11 2010-01-19 The U.S. Goverment as Represented By The Director, National Security Agency Method of comparing voice signals that reduces false alarms
CA2676380C (fr) 2007-01-23 2015-11-24 Infoture, Inc. System and method for detection and analysis of speech
US7974977B2 (en) * 2007-05-03 2011-07-05 Microsoft Corporation Spectral clustering using sequential matrix compression
US20090071315A1 (en) * 2007-05-04 2009-03-19 Fortuna Joseph A Music analysis and generation method
US8160866B2 (en) * 2008-04-18 2012-04-17 Tze Fen Li Speech recognition method for both english and chinese
US9247369B2 (en) * 2008-10-06 2016-01-26 Creative Technology Ltd Method for enlarging a location with optimal three-dimensional audio perception
EP2182512A1 (fr) 2008-10-29 2010-05-05 BRITISH TELECOMMUNICATIONS public limited company Speaker verification
US9390420B2 (en) * 2008-12-19 2016-07-12 At&T Intellectual Property I, L.P. System and method for wireless ordering using speech recognition
GB2478780A (en) * 2010-03-18 2011-09-21 Univ Abertay Dundee An adaptive, quantised, biometric method
WO2012075640A1 (fr) * 2010-12-10 2012-06-14 Panasonic Corporation Modelling device and method for speaker recognition, and speaker recognition system
JP2012151663A (ja) * 2011-01-19 2012-08-09 Toshiba Corp Stereophonic sound generating apparatus and stereophonic sound generating method
CN102237089B (zh) * 2011-08-15 2012-11-14 Harbin Institute of Technology Method for reducing the misrecognition rate of a text-independent speaker recognition system
US9147401B2 (en) * 2011-12-21 2015-09-29 Sri International Method and apparatus for speaker-calibrated speaker detection
US9147400B2 (en) * 2011-12-21 2015-09-29 Sri International Method and apparatus for generating speaker-specific spoken passwords
WO2013110125A1 (fr) * 2012-01-24 2013-08-01 Auraya Pty Ltd Voice authentication and speech recognition system and method
US9042867B2 (en) 2012-02-24 2015-05-26 Agnitio S.L. System and method for speaker recognition on mobile devices
US9280984B2 (en) 2012-05-14 2016-03-08 Htc Corporation Noise cancellation method
US9251792B2 (en) 2012-06-15 2016-02-02 Sri International Multi-sample conversational voice verification
US8484022B1 (en) * 2012-07-27 2013-07-09 Google Inc. Adaptive auto-encoders
US8442821B1 (en) 2012-07-27 2013-05-14 Google Inc. Multi-frame prediction for hybrid neural network/hidden Markov models
US9240184B1 (en) 2012-11-15 2016-01-19 Google Inc. Frame-level combination of deep neural network and gaussian mixture models
US10438593B2 (en) * 2015-07-22 2019-10-08 Google Llc Individualized hotword detection models
US10062388B2 (en) * 2015-10-22 2018-08-28 Motorola Mobility Llc Acoustic and surface vibration authentication
GB2552723A (en) 2016-08-03 2018-02-07 Cirrus Logic Int Semiconductor Ltd Speaker recognition
GB2552722A (en) * 2016-08-03 2018-02-07 Cirrus Logic Int Semiconductor Ltd Speaker recognition
WO2018106971A1 (fr) * 2016-12-07 2018-06-14 Interactive Intelligence Group, Inc. System and method for neural network based speaker classification
US20180254054A1 (en) * 2017-03-02 2018-09-06 Otosense Inc. Sound-recognition system based on a sound language and associated annotations
US20180268844A1 (en) * 2017-03-14 2018-09-20 Otosense Inc. Syntactic system for sound recognition
WO2019002831A1 (fr) 2017-06-27 2019-01-03 Cirrus Logic International Semiconductor Limited Detection of replay attack
GB201713697D0 (en) 2017-06-28 2017-10-11 Cirrus Logic Int Semiconductor Ltd Magnetic detection of replay attack
GB2563953A (en) 2017-06-28 2019-01-02 Cirrus Logic Int Semiconductor Ltd Detection of replay attack
GB201801530D0 (en) 2017-07-07 2018-03-14 Cirrus Logic Int Semiconductor Ltd Methods, apparatus and systems for authentication
GB201801532D0 (en) 2017-07-07 2018-03-14 Cirrus Logic Int Semiconductor Ltd Methods, apparatus and systems for audio playback
GB201801527D0 (en) 2017-07-07 2018-03-14 Cirrus Logic Int Semiconductor Ltd Method, apparatus and systems for biometric processes
GB201801528D0 (en) 2017-07-07 2018-03-14 Cirrus Logic Int Semiconductor Ltd Method, apparatus and systems for biometric processes
GB201801526D0 (en) 2017-07-07 2018-03-14 Cirrus Logic Int Semiconductor Ltd Methods, apparatus and systems for authentication
US10325602B2 (en) * 2017-08-02 2019-06-18 Google Llc Neural networks for speaker verification
CN107680599A (zh) * 2017-09-28 2018-02-09 Baidu Online Network Technology (Beijing) Co., Ltd. User attribute recognition method and apparatus, and electronic device
GB201801661D0 (en) 2017-10-13 2018-03-21 Cirrus Logic International Uk Ltd Detection of liveness
GB201804843D0 (en) 2017-11-14 2018-05-09 Cirrus Logic Int Semiconductor Ltd Detection of replay attack
GB2567503A (en) 2017-10-13 2019-04-17 Cirrus Logic Int Semiconductor Ltd Analysing speech signals
GB2580821B (en) * 2017-10-13 2022-11-09 Cirrus Logic Int Semiconductor Ltd Analysing speech signals
GB201801663D0 (en) 2017-10-13 2018-03-21 Cirrus Logic Int Semiconductor Ltd Detection of liveness
GB201803570D0 (en) 2017-10-13 2018-04-18 Cirrus Logic Int Semiconductor Ltd Detection of replay attack
GB201719734D0 (en) * 2017-10-30 2018-01-10 Cirrus Logic Int Semiconductor Ltd Speaker identification
GB201801874D0 (en) 2017-10-13 2018-03-21 Cirrus Logic Int Semiconductor Ltd Improving robustness of speech processing system against ultrasound and dolphin attacks
GB201801664D0 (en) 2017-10-13 2018-03-21 Cirrus Logic Int Semiconductor Ltd Detection of liveness
GB201801659D0 (en) 2017-11-14 2018-03-21 Cirrus Logic Int Semiconductor Ltd Detection of loudspeaker playback
US10482878B2 (en) 2017-11-29 2019-11-19 Nuance Communications, Inc. System and method for speech enhancement in multisource environments
WO2019113477A1 (fr) 2017-12-07 2019-06-13 Lena Foundation Systems and methods for automatic determination of infant cry and distinguishing cry from fussiness
US11475899B2 (en) 2018-01-23 2022-10-18 Cirrus Logic, Inc. Speaker identification
US11735189B2 (en) 2018-01-23 2023-08-22 Cirrus Logic, Inc. Speaker identification
US11264037B2 (en) 2018-01-23 2022-03-01 Cirrus Logic, Inc. Speaker identification
US10529356B2 (en) 2018-05-15 2020-01-07 Cirrus Logic, Inc. Detecting unwanted audio signal components by comparing signals processed with differing linearity
CN108877809B (zh) * 2018-06-29 2020-09-22 Beijing Zhongke Zhijia Technology Co., Ltd. Speaker speech recognition method and device
CN109147798B (zh) * 2018-07-27 2023-06-09 Beijing Sankuai Online Technology Co., Ltd. Speech recognition method and apparatus, electronic device and readable storage medium
US10692490B2 (en) 2018-07-31 2020-06-23 Cirrus Logic, Inc. Detection of replay attack
US10915614B2 (en) 2018-08-31 2021-02-09 Cirrus Logic, Inc. Biometric authentication
US11037574B2 (en) 2018-09-05 2021-06-15 Cirrus Logic, Inc. Speaker recognition and speaker change detection

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5528731A (en) * 1993-11-19 1996-06-18 At&T Corp. Method of accommodating for carbon/electret telephone set variability in automatic speaker verification
US5839103A (en) * 1995-06-07 1998-11-17 Rutgers, The State University Of New Jersey Speaker verification system using decision fusion logic
US6119083A (en) * 1996-02-29 2000-09-12 British Telecommunications Public Limited Company Training process for the classification of a perceptual signal
DE19630109A1 (de) * 1996-07-25 1998-01-29 Siemens Ag Method for speaker verification by a computer using at least one speech signal spoken by a speaker
US6205424B1 (en) * 1996-07-31 2001-03-20 Compaq Computer Corporation Two-staged cohort selection for speaker verification system
WO1998022936A1 (fr) * 1996-11-22 1998-05-28 T-Netix, Inc. Subword-based speaker verification using multiple-classifier fusion, with channel, fusion, model and threshold adaptation
US6246751B1 (en) * 1997-08-11 2001-06-12 International Business Machines Corporation Apparatus and methods for user identification to deny access or service to unauthorized users
US6424946B1 (en) * 1999-04-09 2002-07-23 International Business Machines Corporation Methods and apparatus for unknown speaker labeling using concurrent speech recognition, segmentation, classification and clustering
US6629073B1 (en) * 2000-04-27 2003-09-30 Microsoft Corporation Speech recognition method and apparatus utilizing multi-unit models

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO02103680A3 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019145708A1 (fr) * 2018-01-23 2019-08-01 Cirrus Logic International Semiconductor Limited Speaker identification
GB2583420A (en) * 2018-01-23 2020-10-28 Cirrus Logic Int Semiconductor Ltd Speaker identification
GB2583420B (en) * 2018-01-23 2022-09-14 Cirrus Logic Int Semiconductor Ltd Speaker identification
GB2608710A (en) * 2018-01-23 2023-01-11 Cirrus Logic Int Semiconductor Ltd Speaker identification
GB2608710B (en) * 2018-01-23 2023-05-17 Cirrus Logic Int Semiconductor Ltd Speaker identification

Also Published As

Publication number Publication date
WO2002103680A3 (fr) 2003-03-13
EP1399915B1 (fr) 2009-03-18
CN100377209C (zh) 2008-03-26
WO2002103680A2 (fr) 2002-12-27
CA2451401A1 (fr) 2002-12-27
ATE426234T1 (de) 2009-04-15
US20040236573A1 (en) 2004-11-25
AU2002311452B2 (en) 2008-06-19
DE60231617D1 (de) 2009-04-30
CN1543641A (zh) 2004-11-03

Similar Documents

Publication Publication Date Title
AU2002311452B2 (en) Speaker recognition system
AU2002311452A1 (en) Speaker recognition system
Campbell Speaker recognition: A tutorial
US6539352B1 (en) Subword-based speaker verification with multiple-classifier score fusion weight and threshold adaptation
Furui Recent advances in speaker recognition
US6519561B1 (en) Model adaptation of neural tree networks and other fused models for speaker verification
US8160877B1 (en) Hierarchical real-time speaker recognition for biometric VoIP verification and targeting
EP0870300B1 (fr) Systeme de verification de locuteur
CN112735435A (zh) Open-set voiceprint recognition method capable of internally partitioning unknown classes
Campbell Speaker recognition
Furui Speaker recognition
KR100917419B1 (ko) Speaker recognition system
Omer Joint MFCC-and-vector quantization based text-independent speaker recognition system
Furui Speaker recognition in smart environments
Lotia et al. A review of various score normalization techniques for speaker identification system
Xafopoulos Speaker Verification (an overview)
Melin et al. Voice recognition with neural networks, fuzzy logic and genetic algorithms
Saranya Feature Switching: A new paradigm for speaker recognition and spoof detection
Rosenberg et al. Overview of S
Thakur et al. Speaker Authentication Using GMM-UBM
Vyawahare Speaker recognition: A review
Chao Verbal Information Verification for High-performance Speaker Authentication
Srinivasan A nonlinear mixture autoregressive model for speaker verification
Xafopoulos Speaker Verification
Avinash Exploring features for text-dependent speaker verification in distant speech signals

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20040113

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

AX Request for extension of the european patent

Extension state: AL LT LV MK RO SI

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: SECURIVOX LTD

17Q First examination report despatched

Effective date: 20040420

RTI1 Title (correction)

Free format text: SPEAKER VERIFICATION

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: SPEECH SENTINEL LIMITED

GRAJ Information related to disapproval of communication of intention to grant by the applicant or resumption of examination proceedings by the epo deleted

Free format text: ORIGINAL CODE: EPIDOSDIGR1

GRAL Information related to payment of fee for publishing/printing deleted

Free format text: ORIGINAL CODE: EPIDOSDIGR3

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAL Information related to payment of fee for publishing/printing deleted

Free format text: ORIGINAL CODE: EPIDOSDIGR3

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAF Information related to payment of grant fee modified

Free format text: ORIGINAL CODE: EPIDOSCIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: SPEECH SENTINEL LIMITED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REF Corresponds to:

Ref document number: 60231617

Country of ref document: DE

Date of ref document: 20090430

Kind code of ref document: P

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090318

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090318

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090618

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090318

NLV1 Nl: lapsed or annulled due to failure to fulfill the requirements of art. 29p and 29m of the patents act
PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090318

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090826

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090629

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090318

Ref country code: MC

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20090630

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

26N No opposition filed

Effective date: 20091221

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20090630

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20090630

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: IE

Payment date: 20100614

Year of fee payment: 9

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090619

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090318

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20090613

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090318

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090318

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20110613

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 15

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 16

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 17

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20190612

Year of fee payment: 18

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20190620

Year of fee payment: 18

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20200603

Year of fee payment: 19

REG Reference to a national code

Ref country code: DE

Ref legal event code: R119

Ref document number: 60231617

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200630

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210101

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20210613

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210613