WO2000028524A1 - Method of comparing utterances for security control - Google Patents
- Publication number
- WO2000028524A1 (PCT/US1998/023928)
- Authority
- WO
- WIPO (PCT)
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
- G10L17/06—Decision making techniques; Pattern matching strategies
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/10—Speech classification or search using distance or distortion measures between unknown speech and reference templates
Definitions
- METHOD OF COMPARING UTTERANCES FOR SECURITY CONTROL
- This invention relates generally to electronic security methods which provide for modeling or otherwise comparing human features such as fingerprints, voice patterns, and retina patterns, in order to distinguish between individuals, and, more particularly, to a security method and protocol for modeling and comparing voice utterances to control the operation of a security device.
- Parra, U.S. Pat. No. 5,313,556, describes determining the identity of an individual (known or unknown) from a sonic profile of sounds issued through the oral-nasal passages. The sounds are converted to digital electrical signals and processed into a three-domain format of frequency, amplitude and time samples to produce an array of peaks and valleys constituting the sonic profile of the individual.
- A source or library of sonic profiles of known individuals, in the same format, is then consulted: the relative positions of the peaks and valleys of a known individual's sonic profile are compared with those of the unknown individual, and a utilization signal is provided upon detecting, or failing to detect, a correlation between the sonic profiles.
- Hair et al., U.S. Pat. No. 3,673,331, describes voice verification accomplished at a plurality of spaced-apart facilities, each having a plurality of terminals.
- Multiplexing structure interconnects the terminals through a communications link to a central processing station.
- Analog reproductions of voices transmitted from the terminals are converted into digital signals.
- The digital signals are transformed into the frequency domain at the central processing station.
- Predetermined features of the transformed signals are compared with stored predetermined features of each voice to be verified.
- A verify or non-verify signal is then transmitted to the particular terminal in response to the comparison of the predetermined features.
- Waterbury, U.S. Pat. No. 3,896,266, describes a security card (which may be a credit card) having recorded on it data identifying a personal and non-counterfeitable attribute, such as the voice characteristics, of the authorized holder of the card.
- A card utilization system provides means for comparing the attribute as recorded by these data with the corresponding attribute of the person wishing to use the card, thereby substantially eliminating the possibility of unauthorized card utilization.
- Muroi et al., U.S. Pat. No. 4,833,713, describes a voice or sound recognition system including a microphone for converting a voice into an electrical voice signal, a frequency analyzer for generating a voice pattern in the form of a time-frequency distribution, and a matching unit for matching the voice pattern with registered voice patterns.
- Feix et al., U.S. Pat. No. 4,449,189, describes a method and an apparatus for identifying an individual through a combination of both speech and face recognition.
- The voice signature of an interrogated person uttering a key word into a microphone is compared, in a pattern matcher, with the previously stored voice signature of a known person uttering the same key word to obtain a first similarity score.
- When a key event in the utterance of the key word by the interrogated person occurs, a momentary image of that person's mouth region, onto which a grid pattern has been projected, is optically recorded and compared with the previously stored corresponding momentary image of the same known person to obtain a second similarity score.
- The prior art thus teaches the comparison of voice signatures in the time as well as the frequency domain. However, the prior art does not teach a means for filtering such voice profiles by difference techniques.
- The present invention fulfills these needs and provides further related advantages as described in the following summary.
- The present invention teaches certain benefits in methods which give rise to the objectives described below.
- The present invention is a security method which compares a present verbal utterance with a previously recorded verbal utterance by comparing a frequency-domain representation of the present utterance with previously recorded, multiply repeated utterances of the same material, which together form the basis for comparison.
- The present method approaches the comparison by establishing the energy content of a variety of cells in the frequency domain. Instead of focusing on the ability of an individual to repeat an utterance from one trial to the next, sometimes separated by days, weeks or even longer, the present approach focuses on the variability of the difference between multiple utterances of the same words or phrases.
- The method attempts to determine whether two sounds were produced by the same human voice, in order to discriminate between allowed and non-allowed personnel seeking to operate a secure device. Further, the method may be used to determine what command is being given by the individual, culling out the selected command from a library of such commands all uttered by the same individual.
- An important aspect of the present method is the discrimination between, and/or matching of, a presently uttered verbal word or phrase and the same utterance stored in a library of such utterances.
- Another aspect of the present method is the achievement of high accuracy and fast results in the discrimination and/or matching of verbal utterances by using a difference method for comparison.
- The present method is a non-obvious and highly effective procedure for extremely high-speed comparison of large data sets against a challenge, providing the convenience, for instance, of verbal-only challenges at a secure door used by a large number of individuals with separate verbal access codes, wherein the delay time for approval has been shown to be in the range of a few seconds.
- The method also results in an extraordinary level of discrimination between individuals while providing a high level of "forgiveness" for the normal changes in tone, timbre and volume of the human voice from moment to moment and day to day.
- The discrimination capability of the present method is strong enough for military as well as industrial applications, and the method is inexpensive and simple enough to use to find application in residential and commercial settings as well.
- A further aspect of the present method is the use of testing for minimum energy levels in a set of frequency ranges in the frequency domain, as well as testing for corresponding energy levels that surpass a selected energy-level criterion.
- The present invention provides a step-by-step method for comparing a verbal utterance of a speaker in the present (the challenge utterance) with a previously recorded verbal utterance (the enrollment utterance) to determine the validity of the speaker, i.e., whether the challenge utterance is being produced by the same person as was the enrollment utterance.
- Upon acceptance, an action is authorized, such as opening a lock, dialing a secret phone number, granting access to data or services, etc.
- The method comprises certain steps which are performed in sequence.
- The steps are, first, preparing enrollment data from an utterance from one or more persons; next, challenging the enrollment data with a present utterance from one of those persons; and finally, enabling the security action if the challenge utterance is accepted as being close enough to one of the enrollment data.
- Preparing the enrollment data comprises the steps of converting a verbal utterance, which we shall refer to as an enrollment utterance, into a first electrical signal, as by a microphone or other transducer.
- The electrical signal is transformed into a digital format.
- A fast Fourier transformation of this electrical signal is conducted to produce a frequency-domain representation of the enrollment utterance.
- The frequency-domain representation is then divided into frames of time, e.g., 10 ms each. Frames which show no energy content are deleted.
- A number of samples, represented by "M", are taken for each of N frequency channels to form an M by N sample enrollment matrix Ei.
- M and N are selected as integers of a magnitude necessary for the level of security desired, with larger numbers providing greater security and vice versa.
- This matrix provides cell samples Mi-Ni, where i represents an integer, each characterized by the total energy content within the cell, i.e., a number.
- The method provides for determining whether at least X, an arbitrary number, of the M samples have a selected minimum energy content in at least Y, another arbitrary number, of the N frequency channels. If not, the enrollment utterance is repeated until the criteria X and Y are satisfied, or, after several tries, the enrollment process is aborted. This usually happens only if the speaker cannot provide enough volume in his or her speech to meet the minimum energy criterion, or if the input is not a human voice, so that the necessary spectral content is lacking.
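The enrollment-matrix step above can be sketched as follows. The patent gives no code, so the helper names, the way the spectrum is partitioned into channels, and the concrete X, Y and minimum-energy parameters below are illustrative assumptions, not the inventors' implementation.

```python
def energy_matrix(frames, n_channels):
    """Build an enrollment-style matrix of per-channel energies.

    `frames` is a list of frequency-domain frames (lists of spectral
    magnitudes). Frames with no energy content are deleted, mirroring
    the step described above; each kept frame is partitioned into
    `n_channels` bands whose energies are summed.
    """
    kept = [f for f in frames if sum(f) > 0.0]
    matrix = []
    for frame in kept:
        size = max(1, len(frame) // n_channels)
        row = [sum(frame[c * size:(c + 1) * size]) for c in range(n_channels)]
        matrix.append(row)
    return matrix


def meets_energy_criterion(matrix, x, y, min_energy):
    """True if at least `x` samples reach `min_energy` in at least `y`
    of the channels, i.e. the X/Y acceptance test described above."""
    qualifying = 0
    for row in matrix:
        loud_channels = sum(1 for e in row if e >= min_energy)
        if loud_channels >= y:
            qualifying += 1
    return qualifying >= x
```

If the criterion fails, the caller would prompt the speaker to repeat the enrollment utterance, aborting after a few tries as the method specifies.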
- The method requires forming the difference between each pair of the enrollment matrices Ei, as (E1-E2), (E1-E3), (E2-E3), ... .
- Algorithms are applied such that each individual frame is compared with each other frame and is allowed to slip any number of frames, forward or backward in order to obtain a best match.
- The second part of this step is also critical to the viability of the present method, in that the criterion ES assures us that a recording is not being used as an impostor; i.e., we can be sure that the differences in the matrices are at least as great as would normally be expected from a human voice.
- If a recording is used for each of the enrollments, we find that the variability between them is less than that produced by a live human voice.
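The pairwise-difference step, including the frame "slip" used to find a best match, might look like the following sketch. The absolute-difference distance measure, the slip window size, and the handling of the EB and ES thresholds are assumptions made for illustration; the patent fixes none of these details.

```python
def frame_distance(a, b):
    """Sum of absolute per-channel energy differences between two frames."""
    return sum(abs(x - y) for x, y in zip(a, b))


def matrix_difference(e1, e2, max_slip=2):
    """Total best-match distance between two enrollment matrices.

    Each frame of e1 may 'slip' up to `max_slip` frames forward or
    backward in e2 to find its best-matching frame, compensating for
    timing differences between repetitions of the utterance.
    """
    total = 0.0
    for i, frame in enumerate(e1):
        lo = max(0, i - max_slip)
        hi = min(len(e2), i + max_slip + 1)
        if lo < hi:
            total += min(frame_distance(frame, e2[j]) for j in range(lo, hi))
    return total


def enrollment_consistent(matrices, eb, es, max_slip=2):
    """Check every pair (Ei, Ej): the difference must fall below an
    upper bound EB (same voice, same words) yet above a lower bound ES
    (too little variability suggests a replayed recording)."""
    for i in range(len(matrices)):
        for j in range(i + 1, len(matrices)):
            d = matrix_difference(matrices[i], matrices[j], max_slip)
            if not (es < d < eb):
                return False
    return True
```

Note how passing the same matrix twice fails the ES check: identical "utterances" are exactly what a replayed recording would produce.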
- Challenging the enrollment data comprises the steps of, first, converting a challenge verbal utterance into a second electrical signal, as by a microphone or other transducer.
- This electrical signal is converted into a digital format.
- The digital signal is used to perform a second fast Fourier transformation of the represented electrical signal to produce a frequency-domain representation of the challenge utterance.
- The frequency spectrum is then divided into frames of time, e.g., 10 ms each. Frames which show no energy content are deleted. M samples of the second signal are taken for each of N frequency channels to form an M by N sample challenge matrix C, and it is determined whether at least X of the M samples have a selected minimum energy content in at least Y of the N frequency channels.
- The sum S' = (C-D1) + (C-D2) + (C-D3) + ... is formed.
- S' is accepted as a valid challenge if S' < VB, VB being a selected first verification criterion, and also if each difference between the pairs C and Di is greater than VS, VS being a selected second verification criterion.
- Both VB and VS play roles corresponding to EB and ES in the enrollment procedure and are used for the same reasons. When these criteria are met, the challenge verbal utterance is accepted and the requisite security step is enabled.
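The challenge comparison can be sketched similarly. Here the stored references D1, D2, ... are taken to be the enrollment matrices, a simplified frame-by-frame distance stands in for the full slip-search comparison, and the VB/VS threshold values are illustrative; all of these are assumptions, not the patented implementation.

```python
def frame_distance(a, b):
    """Sum of absolute per-channel energy differences between two frames."""
    return sum(abs(x - y) for x, y in zip(a, b))


def matrix_difference(c, d):
    """Simplified frame-by-frame distance between challenge matrix `c`
    and a stored reference `d` (no slip search, for brevity)."""
    return sum(frame_distance(fa, fb) for fa, fb in zip(c, d))


def verify_challenge(challenge, references, vb, vs):
    """Accept the challenge if the summed difference S' stays under VB
    while every individual difference (C - Di) exceeds VS, the
    anti-recording check described above."""
    diffs = [matrix_difference(challenge, d) for d in references]
    s_prime = sum(diffs)
    return s_prime < vb and all(d > vs for d in diffs)
```

A challenge identical to a stored reference is rejected by the VS test, just as a replayed recording would be during enrollment.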
- Such a system may use the method for a plurality of users at the same time, once an enrollment has been completed for each user.
- A challenge is made by any one of the users, and the challenge method is then carried out against each of the enrollments until either a successful challenge is made or all of the enrollments have been tested without success.
- Each of the data sets may include a definition of a specific security action, so that when a specific match is made between a challenge and an enrollment, the specific action may be carried out in preference to the other actions corresponding to the other enrollments.
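The multi-user dispatch described above reduces to a simple loop over enrollments. The pairing of each enrollment with its security action and the signature of the verification function are assumptions for illustration.

```python
def dispatch(challenge, enrollments, vb, vs, verify):
    """Try each user's enrollment in turn; return the matching user's
    security action, or None if no enrollment accepts the challenge.

    `enrollments` is a sequence of (enrollment_data, action) pairs and
    `verify` is a function like verify_challenge(challenge, data, vb, vs).
    """
    for matrices, action in enrollments:
        if verify(challenge, matrices, vb, vs):
            return action
    return None
```

Because each enrollment carries its own action, a single spoken challenge can both identify the speaker and select the speaker-specific action (e.g., opening a particular door).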
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/US1998/023928 WO2000028524A1 (en) | 1998-11-10 | 1998-11-10 | Method of comparing utterances for security control |
JP2000581634A JP2002529799A (en) | 1998-11-10 | 1998-11-10 | How to compare utterances for security control |
AU13938/99A AU1393899A (en) | 1998-11-10 | 1998-11-10 | Method of comparing utterances for security control |
EP98957754A EP1129447A1 (en) | 1998-11-10 | 1998-11-10 | Method of comparing utterances for security control |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2000028524A1 | 2000-05-18 |
Family
ID=22268265
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US1998/023928 WO2000028524A1 (en) | 1998-11-10 | 1998-11-10 | Method of comparing utterances for security control |
Country Status (4)
Country | Link |
---|---|
EP (1) | EP1129447A1 (en) |
JP (1) | JP2002529799A (en) |
AU (1) | AU1393899A (en) |
WO (1) | WO2000028524A1 (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3673331A (en) * | 1970-01-19 | 1972-06-27 | Texas Instruments Inc | Identity verification by voice signals in the frequency domain |
US3896266A (en) * | 1971-08-09 | 1975-07-22 | Nelson J Waterbury | Credit and other security cards and card utilization systems therefore |
US4833713A (en) * | 1985-09-06 | 1989-05-23 | Ricoh Company, Ltd. | Voice recognition system |
US5216720A (en) * | 1989-05-09 | 1993-06-01 | Texas Instruments Incorporated | Voice verification circuit for validating the identity of telephone calling card customers |
US5293452A (en) * | 1991-07-01 | 1994-03-08 | Texas Instruments Incorporated | Voice log-in using spoken name input |
US5313556A (en) * | 1991-02-22 | 1994-05-17 | Seaway Technologies, Inc. | Acoustic method and apparatus for identifying human sonic sources |
US5339385A (en) * | 1992-07-22 | 1994-08-16 | Itt Corporation | Speaker verifier using nearest-neighbor distance measure |
US5608784A (en) * | 1994-01-24 | 1997-03-04 | Miller; Joel F. | Method of personnel verification using voice recognition |
US5835894A (en) * | 1995-01-19 | 1998-11-10 | Ann Adcock Corporation | Speaker and command verification method |
1998
- 1998-11-10 EP EP98957754A patent/EP1129447A1/en active Pending
- 1998-11-10 AU AU13938/99A patent/AU1393899A/en not_active Abandoned
- 1998-11-10 JP JP2000581634A patent/JP2002529799A/en active Pending
- 1998-11-10 WO PCT/US1998/023928 patent/WO2000028524A1/en not_active Application Discontinuation
Also Published As
Publication number | Publication date |
---|---|
EP1129447A1 (en) | 2001-09-05 |
AU1393899A (en) | 2000-05-29 |
JP2002529799A (en) | 2002-09-10 |
Legal Events
Code | Title | Details
---|---|---
AK | Designated states | Kind code of ref document: A1. Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE GH GM HU ID IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG US UZ VN YU ZW
AL | Designated countries for regional patents | Kind code of ref document: A1. Designated state(s): GH GM KE LS MW SD SZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (PCT application filed before 20040101) |
121 | Ep: the EPO has been informed by WIPO that EP was designated in this application |
ENP | Entry into the national phase | Ref country code: JP. Ref document number: 2000 581634. Kind code of ref document: A. Format of ref document f/p: F
WWE | WIPO information: entry into national phase | Ref document number: 1998957754. Country of ref document: EP
WWP | WIPO information: published in national office | Ref document number: 1998957754. Country of ref document: EP
REG | Reference to national code | Ref country code: DE. Ref legal event code: 8642
WWW | WIPO information: withdrawn in national office | Ref document number: 1998957754. Country of ref document: EP