WO2007001602A2 - Speech recognition system for secure information - Google Patents

Info

Publication number
WO2007001602A2
WO2007001602A2 PCT/US2006/015250
Authority
WO
WIPO (PCT)
Prior art keywords
sub-word
speech units
security
word
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2006/015250
Other languages
English (en)
French (fr)
Other versions
WO2007001602A3 (en)
Inventor
David G. Ollason
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Corp
Original Assignee
Microsoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Corp filed Critical Microsoft Corp
Priority to JP2008518142A priority Critical patent/JP2008544327A/ja
Priority to EP06751084A priority patent/EP1894186A4/en
Publication of WO2007001602A2 publication Critical patent/WO2007001602A2/en
Publication of WO2007001602A3 publication Critical patent/WO2007001602A3/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00 Speaker identification or verification techniques
    • G10L17/22 Interactive procedures; Man-machine interfaces
    • G10L17/24 Interactive procedures; Man-machine interfaces, the user being prompted to utter a password or a predefined phrase
    • G10L15/00 Speech recognition
    • G10L15/04 Segmentation; Word boundary detection
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 Speech or voice analysis techniques specially adapted for particular use
    • G10L25/51 Speech or voice analysis techniques specially adapted for particular use for comparison or discrimination

Definitions

  • automated banking systems may require a secure password or security code to retrieve account information.
  • Such systems may prompt a user to input secret information, such as a birth date or social security number, or other password associated with the user. The system then verifies the user's input or response against a stored record of the secret information or password to verify the authenticity of the user.
  • Embodiments of the present invention relate to a speech recognition system for secure information.
  • the speech recognition system includes a sub-word speech unit recognition component which interfaces with a security system.
  • the sub-word speech unit recognition component receives a speech input utterance, representing a password or secret information, from a user, recognizes the sub-word speech units in the utterance and provides the sub-word speech units to the security system to compare the sub-word speech units against stored information or data.
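As a rough illustration of the arrangement described above, the data crossing the secure interface might be modeled as below. All names and phoneme symbols here are hypothetical sketches, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class SecureRequest:
    user_id: str        # user identification 224 (name, account number, etc.)
    phonemes: list      # sub-word speech units 214 recognized from the utterance

def compare_units(request: SecureRequest, secure_store: dict) -> bool:
    """Stand-in for the security-system comparison: the stored record is
    itself a sequence of sub-word units, so no word-level text is needed."""
    return secure_store.get(request.user_id) == request.phonemes

# The secure store maps user identification to stored sub-word units.
secure_store = {"acct-1001": ["p", "ae", "s", "w", "er", "d"]}
print(compare_units(SecureRequest("acct-1001", ["p", "ae", "s", "w", "er", "d"]), secure_store))
```

Note that the application side only ever holds `phonemes`; the word-level interpretation of the utterance never exists outside the security boundary.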
  • FIG. 1 is a block diagram of one illustrative embodiment of a computing environment in which embodiments of the present invention can be used or implemented.
  • FIG. 2 is a block diagram of an illustrated embodiment of a speech recognition system for secure information.
  • FIG. 3 is a flow chart illustrating one embodiment of authentication of a user input utterance relative to secure information.
  • FIG. 4 is a block diagram illustrating an embodiment for entry of secure information in a security system.
  • FIG. 5 is a flow chart of an illustrated embodiment of steps for entry of secure information in a security system.
  • Embodiments of the present invention relate to sub-word speech recognition for secure information.
  • one illustrative embodiment of a computing environment 100 in which the invention can be implemented will be described with respect to FIG. 1.
  • the computing system environment 100 shown in FIG. 1 is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the invention. Neither should the computing environment 100 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary operating environment 100.
  • the invention is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known computing systems, environments, and/or configurations that may be suitable for use with the invention include, but are not limited to, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor- based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
  • the invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer.
  • program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
  • Those skilled in the art can implement aspects of the present invention as instructions stored on computer readable media based on the description and figures provided herein.
  • the invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.
  • program modules may be located in both local and remote computer storage media including memory storage devices.
  • an exemplary system for implementing the invention includes a general purpose computing device in the form of a computer 110.
  • Components of computer 110 may include, but are not limited to, a processing unit 120, a system memory 130, and a system bus 121 that couples various system components including the system memory to the processing unit 120.
  • the system bus 121 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures.
  • bus architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus also known as Mezzanine bus.
  • Computer 110 typically includes a variety of computer readable media.
  • Computer readable media can be any available media that can be accessed by computer 110 and includes both volatile and nonvolatile media, removable and nonremovable media.
  • Computer readable media may comprise computer storage media and communication media.
  • Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.
  • Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, optical disk storage, magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computer 110.
  • Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
  • modulated data signal means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
  • communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer readable media.
  • the system memory 130 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 131 and random access memory (RAM) 132.
  • A basic input/output system 133 (BIOS), containing the basic routines that help to transfer information between elements within computer 110, such as during start-up, is typically stored in ROM 131.
  • RAM 132 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 120.
  • FIG. 1 illustrates operating system 134, application programs 135, other program modules 136, and program data 137.
  • the computer 110 may also include other removable/non-removable, volatile/nonvolatile computer storage media.
  • FIG. 1 illustrates a hard disk drive 141 that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive 151 that reads from or writes to a removable, nonvolatile magnetic disk 152, and an optical disk drive 155 that reads from or writes to a removable, nonvolatile optical disk 156 such as a CD ROM or other optical media.
  • removable/non-removable, volatile/nonvolatile computer storage media that can be used in the exemplary operating environment include, but are not limited to, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like.
  • the hard disk drive 141 is typically connected to the system bus 121 through a non-removable memory interface such as interface 140, and magnetic disk drive 151 and optical disk drive 155 are typically connected to the system bus 121 by a removable memory interface, such as interface 150.
  • hard disk drive 141 is illustrated as storing operating system 144, application programs 145, other program modules 146, and program data 147. Note that these components can either be the same as or different from operating system 134, application programs 135, other program modules 136, and program data 137. Operating system 144, application programs 145, other program modules 146, and program data 147 are given different numbers here to illustrate that, at a minimum, they are different copies.
  • a user may enter commands and information into the computer 110 through input devices such as a keyboard 162, a microphone 163, and a pointing device 161, such as a mouse, trackball or touch pad.
  • Other input devices may include a joystick, game pad, satellite dish, scanner, or the like.
  • a monitor 191 or other type of display device is also connected to the system bus 121 via an interface, such as a video interface 190.
  • computers may also include other peripheral output devices such as speakers 197 and printer 196, which may be connected through an output peripheral interface 195.
  • the computer 110 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 180.
  • the remote computer 180 may be a personal computer, a hand-held device, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 110.
  • the logical connections depicted in FIG. 1 include a local area network (LAN) 171 and a wide area network (WAN) 173, but may also include other networks.
  • Such networking environments are commonplace in offices, enterprise-wide computer networks, Intranets and the Internet.
  • When used in a LAN networking environment, the computer 110 is connected to the LAN 171 through a network interface or adapter 170. When used in a WAN networking environment, the computer 110 typically includes a modem 172 or other means for establishing communications over the WAN 173, such as the Internet.
  • The modem 172, which may be internal or external, may be connected to the system bus 121 via the user-input interface 160, or other appropriate mechanism.
  • program modules depicted relative to the computer 110, or portions thereof may be stored in the remote memory storage device.
  • FIG. 1 illustrates remote application programs 185 as residing on remote computer 180. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.
  • Embodiments of the present invention relate to a speech recognition system 200 for secure information which has varied applications and is not limited to the specific embodiments shown.
  • the speech recognition system 200 includes application 202 and security system 204.
  • application 202 is illustrated as a telephone or dialog system that has a speech recognition system 206 that, in general, prompts a user 207 with audio prompts 208, receives speech responses 210, and allows the user to perform certain tasks using voice commands and speech responses to prompts.
  • speech recognition system 206 includes a sub-word speech unit recognition component 212. The sub-word speech unit recognition component 212 receives the response or utterance 210 from user 207.
  • Component 212 recognizes, in the input speech utterance or response 210, sub-word speech units 214, such as phonemes.
  • the security system 204 includes a secure database or secure information 220.
  • the database 220 includes sub-word speech units corresponding to security data, such as passwords or security codes.
  • the recognition component 212 interfaces with the security system 204 through a secure interface 222 for authentication of the input speech or utterance 210.
  • Secure interface 222 illustratively is a firewall or other interface that employs a security protocol. The particular interface or protocol is not important for purposes of the present invention other than to say that the data in security system 204 is more secure than that in application 202.
  • the system 200 is used to verify or authenticate a password or security code.
  • the password or code is input by the user 207 in response to prompt 208.
  • the utterance is processed into sub-word speech units 214 by the sub-word speech unit recognition component 212.
  • the application 202 provides the sub-word speech units 214 in addition to a user identification 224, such as the user's name, account number or other identification code, to the security system 204.
  • the security system 204 uses the sub-word speech units 214 and user identification 224 to access stored information indicative of the password or security code corresponding to the received user identification 224.
  • the stored information may be, for example, stored sub-word speech units.
  • Sub-word speech units corresponding to the input speech are compared to stored data or stored sub-word speech units by a speech unit comparator component 225.
  • an authorization message 226 is provided to application 202 through the secure interface 222 that the password is correct. Otherwise, the message 226 indicates that the password is not correct.
  • For the secure information, only sub-word speech units are recognized at application 202 and passed to the security system 204 over secure interface 222. Thus, word-level recognition of the secure information is not available outside of the security system 204, which protects the security of the information.
  • FIG. 3 illustrates in more detail steps for implementing a secure speech recognition embodiment for secure data such as a security code or password.
  • the user 207 accesses the application 202 to perform a task as shown in block 230 and the user 207 is prompted to enter secure information as illustrated by block 232, such as a password or security code.
  • the user 207 utters a response 210 as shown in block 234.
  • the sub-word speech units in uttered response 210 are recognized by the sub-word speech unit recognition component 212 as illustrated by block 236.
  • the sub-word speech units 214 are provided to the security system 204 through the secure interface 222 along with other identifying information 224 as illustrated by step 238.
  • the security system 204 compares sub-word speech units 214 with secure data or information stored in store 220 for the identified user 207.
  • speech unit comparator component 225 retrieves stored sub-word speech units for the secure data or information and compares the stored sub-word speech units to the input sub-word speech units 214 for the input utterance as illustrated by block 240.
  • the stored sub-word speech units and the sub-word speech units for the input speech or utterance are compared to determine if the input utterance matches the stored data or password for the user 207, as illustrated by block 242.
  • If there is a match, the security system 204 sends a message 226 to the application 202 verifying the match as shown in block 248, and the application 202 unlocks the task or information sought by user 207, as shown in block 250. For example, if the sub-word speech units for the input utterance match the sub-word speech units or phonemes for the stored information, the security system can unlock the application 202 so that the user can access otherwise locked information or perform a desired task or tasks.
  • If there is no match, the security system 204 sends a message to the application 202 that there is no match as shown in block 252, and the application 202 remains locked and/or displays an error message to the user 207 as illustrated by block 254.
  • the secure information is never fully recognized outside of security system 204. Instead, only the sub-word speech units corresponding to the secure information are recognized and passed to the security system 204. Thus, word-level grammars for the secure information need not be available outside of the security system 204. For example, if the user is prompted to input the user's mother's maiden name to unlock a bank account of a telephonic banking system, the word level recognition is not available outside of the security system 204.
  • the input utterance of the user's mother's maiden name is recognized as sub-word speech units, and the sub-word speech units are passed to the security system 204 to verify that the user's input utterance matches the data for the user's mother's maiden name stored in the secure database 220.
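The match/no-match decision of FIG. 3 can be sketched as follows. The function and data names are illustrative, not from the patent, and a real comparator would likely tolerate recognition noise rather than require exact equality:

```python
def authenticate(user_id, input_phonemes, secure_db):
    """Compare the recognized sub-word units against the stored units
    (blocks 240/242); only a match or no-match message ever leaves the
    security boundary (blocks 248/252)."""
    stored = secure_db.get(user_id)
    if stored is not None and stored == input_phonemes:
        return "match"      # application 202 unlocks the task (block 250)
    return "no-match"       # application 202 stays locked (block 254)

secure_db = {"user-42": ["m", "ah", "dh", "er"]}
print(authenticate("user-42", ["m", "ah", "dh", "er"], secure_db))   # match
print(authenticate("user-42", ["f", "aa", "dh", "er"], secure_db))   # no-match
```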
  • FIG. 4 illustrates an embodiment for registering with, or enrolling in, system 200.
  • the process involves inputting or creating sub-word speech units identifying the user's secure information for storage in the secure database 220.
  • FIG. 4 shows an embodiment in which the user inputs the information directly into security system 204, although the secure information can be input through application 202 in system 200 of FIG. 2 as well.
  • the secure information can be input to the security system 204 using a speech or audio input device 260 (such as a telephone or other voice dialog system) or alternatively using a non-audible input device 262 such as an alphanumeric keyboard or keypad.
  • the security system 204 provides a security prompt 264 to the user 207 to enter secure information or data, such as for example, the user's mother's maiden name.
  • the user can provide an audio response or utterance or a non-audio response (such as a text response).
  • sub-word speech units in the audio response are recognized by a sub-word speech unit recognizer 268.
  • If the user's response is entered via a non-audible input device 262 (such as in text), a sub-word speech unit generator 270 generates sub-word speech units for the text entry.
  • sub-word speech units are phonemes, and are generated from text by the sub-word speech unit generator 270 using a dictionary or lexicon 272 to identify input words and letter to sound rules 274 to generate the phonemes for the recognized words.
  • the sub-word speech units 271 from the sub-word speech unit generator 270 or the sub-word speech unit recognizer 268 are stored in the secure database 220.
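A toy sketch of that text-to-phoneme path: look each word up in a small lexicon (272), and fall back to naive letter-to-sound rules (274) for words the lexicon does not cover. The lexicon entries and rules below are invented for illustration only:

```python
LEXICON = {"smith": ["s", "m", "ih", "th"]}          # dictionary/lexicon 272

LETTER_TO_SOUND = {                                   # letter-to-sound rules 274 (toy)
    "a": "ae", "e": "eh", "i": "ih", "o": "ow",
    "m": "m", "n": "n", "s": "s", "t": "t",
}

def letters_to_sounds(word):
    """Very naive fallback: treat 'th' as a digraph, then map letter by letter."""
    phonemes, i = [], 0
    while i < len(word):
        if word[i:i + 2] == "th":
            phonemes.append("th")
            i += 2
        else:
            phonemes.append(LETTER_TO_SOUND.get(word[i], word[i]))
            i += 1
    return phonemes

def generate_sub_word_units(text):
    """Stand-in for the sub-word speech unit generator 270."""
    units = []
    for word in text.lower().split():
        units.extend(LEXICON.get(word, letters_to_sounds(word)))
    return units

print(generate_sub_word_units("Smith"))   # ['s', 'm', 'ih', 'th']  (lexicon hit)
print(generate_sub_word_units("mint"))    # ['m', 'ih', 'n', 't']   (letter-to-sound fallback)
```

A production generator would of course use a full pronunciation lexicon and trained letter-to-sound models rather than a hand-written table.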
  • FIG. 5 illustrates steps, in more detail, for inputting secure information into the secure database 220.
  • the user accesses the security system 204 as illustrated by block 280, and the user is prompted with prompt 264 to enter user identification information (e.g. name, telephone number, etc.) to enroll, as shown in block 282. The user is then prompted to enter the secure information (e.g. a password or security code) to be registered.
  • the secure information is entered by the user through an audio input device 260 or non-audible input device 262 as illustrated by block 286.
  • the system determines if the user's response is non-audible (such as text) or speech. If the user's secure information is entered via the audio input device 260, sub-word speech units are recognized for the secure information entered by the user with the sub-word speech unit recognizer 268 as illustrated by block 290. If the user's response is entered as text input, sub-word speech units are generated for the text input or response by the sub-word speech unit generator 270 as illustrated by step 292. Once the sub-word speech units 271 are generated or recognized, the sub-word speech units 271 are stored in the secure database 220 under the user's identification or account, as illustrated by block 294.
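Putting the FIG. 5 branch together (all names here are hypothetical stand-ins): dispatch on the input modality, obtain sub-word units from either the recognizer (268) or the generator (270), then store them under the user's account:

```python
def enroll(user_id, response, secure_db, recognize_audio, generate_from_text):
    """Enrollment sketch: text entries go through the generator (block 292),
    audio entries through the recognizer (block 290); either way the
    resulting sub-word units are stored under the user's account (block 294)."""
    if isinstance(response, str):
        units = generate_from_text(response)
    else:
        units = recognize_audio(response)
    secure_db[user_id] = units
    return units

# Trivial stand-ins for the recognizer (268) and generator (270):
recognize_audio = lambda audio_frames: list(audio_frames)
generate_from_text = lambda text: list(text.lower())

secure_db = {}
enroll("acct-7", "abc", secure_db, recognize_audio, generate_from_text)
print(secure_db["acct-7"])  # ['a', 'b', 'c']
```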

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Telephonic Communication Services (AREA)
  • Storage Device Security (AREA)
PCT/US2006/015250 2005-06-22 2006-04-21 Speech recognition system for secure information Ceased WO2007001602A2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2008518142A JP2008544327A (ja) 2005-06-22 2006-04-21 Speech recognition system for secure information
EP06751084A EP1894186A4 (en) 2005-06-22 2006-04-21 Speech recognition system for secure information

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11/158,830 US20060293898A1 (en) 2005-06-22 2005-06-22 Speech recognition system for secure information
US11/158,830 2005-06-22

Publications (2)

Publication Number Publication Date
WO2007001602A2 true WO2007001602A2 (en) 2007-01-04
WO2007001602A3 WO2007001602A3 (en) 2007-12-13

Family

ID=37568670

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2006/015250 Ceased WO2007001602A2 (en) 2005-06-22 2006-04-21 Speech recognition system for secure information

Country Status (6)

Country Link
US (1) US20060293898A1 (en)
EP (1) EP1894186A4 (en)
JP (1) JP2008544327A (en)
KR (1) KR20080019210A (en)
CN (1) CN101208739A (en)
WO (1) WO2007001602A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103379488A (zh) * 2012-04-26 2013-10-30 国民技术股份有限公司 Secret key device and method of use

Families Citing this family (19)

Publication number Priority date Publication date Assignee Title
US8010367B2 (en) * 2006-12-22 2011-08-30 Nuance Communications, Inc. Spoken free-form passwords for light-weight speaker verification using standard speech recognition engines
CN102254559A (zh) * 2010-05-20 2011-11-23 盛乐信息技术(上海)有限公司 Voiceprint-based identity authentication system and method
US11700412B2 (en) 2019-01-08 2023-07-11 Universal Electronics Inc. Universal voice assistant
US11451618B2 (en) 2014-05-15 2022-09-20 Universal Electronics Inc. Universal voice assistant
US11792185B2 (en) 2019-01-08 2023-10-17 Universal Electronics Inc. Systems and methods for associating services and/or devices with a voice assistant
US8898064B1 (en) * 2012-03-19 2014-11-25 Rawles Llc Identifying candidate passwords from captured audio
CN103077341B (zh) * 2013-01-30 2016-01-20 广东欧珀移动通信有限公司 Application unlocking method and device
KR102140770B1 (ko) * 2013-09-27 2020-08-03 에스케이플래닛 주식회사 User device performing voice-based unlocking, voice-based unlocking method for a user device, and recording medium storing a computer program
US8812320B1 (en) * 2014-04-01 2014-08-19 Google Inc. Segment-based speaker verification using dynamically generated phrases
US11445011B2 (en) 2014-05-15 2022-09-13 Universal Electronics Inc. Universal voice assistant
EP4350558A3 (en) 2014-11-07 2024-06-19 Samsung Electronics Co., Ltd. Speech signal processing method and speech signal processing apparatus
US10319367B2 (en) 2014-11-07 2019-06-11 Samsung Electronics Co., Ltd. Speech signal processing method and speech signal processing apparatus
CN105245729B (zh) * 2015-11-02 2019-02-26 北京奇虎科技有限公司 Mobile terminal message reading method and device
US10909978B2 (en) * 2017-06-28 2021-02-02 Amazon Technologies, Inc. Secure utterance storage
KR102489487B1 (ko) 2017-12-19 2023-01-18 삼성전자주식회사 Electronic device, control method thereof, and computer-readable recording medium
JP2020004192A (ja) * 2018-06-29 Communication device and speech recognition terminal device including the communication device
KR102623727B1 (ko) 2018-10-29 2024-01-11 삼성전자주식회사 Electronic device and control method thereof
US11776539B2 (en) * 2019-01-08 2023-10-03 Universal Electronics Inc. Voice assistant with sound metering capabilities
US11665757B2 (en) 2019-01-08 2023-05-30 Universal Electronics Inc. Universal audio device pairing assistant

Family Cites Families (33)

Publication number Priority date Publication date Assignee Title
IT1160148B (it) * 1983-12-19 1987-03-04 Cselt Centro Studi Lab Telecom Speaker verification device
US5548647A (en) * 1987-04-03 1996-08-20 Texas Instruments Incorporated Fixed text speaker verification method and apparatus
US5127043A (en) * 1990-05-15 1992-06-30 Vcs Industries, Inc. Simultaneous speaker-independent voice recognition and verification over a telephone network
US5430827A (en) * 1993-04-23 1995-07-04 At&T Corp. Password verification system
US5677989A (en) * 1993-04-30 1997-10-14 Lucent Technologies Inc. Speaker verification system and process
US5907597A (en) * 1994-08-05 1999-05-25 Smart Tone Authentication, Inc. Method and system for the secure communication of data
US5774858A (en) * 1995-10-23 1998-06-30 Taubkin; Vladimir L. Speech analysis method of protecting a vehicle from unauthorized accessing and controlling
US5752231A (en) * 1996-02-12 1998-05-12 Texas Instruments Incorporated Method and system for performing speaker verification on a spoken utterance
US6529881B2 (en) * 1996-06-28 2003-03-04 Distributed Software Development, Inc. System and method for identifying an unidentified customer at the point of sale
WO1998023062A1 (en) * 1996-11-22 1998-05-28 T-Netix, Inc. Voice recognition for information system access and transaction processing
US5995927A (en) * 1997-03-14 1999-11-30 Lucent Technologies Inc. Method for performing stochastic matching for use in speaker verification
US5897616A (en) * 1997-06-11 1999-04-27 International Business Machines Corporation Apparatus and methods for speaker verification/identification/classification employing non-acoustic and/or acoustic models and databases
JPH1127750A (ja) * 1997-07-08 1999-01-29 Koorasu Computer Kk Access authentication method, connection control device, and communication system
US6246988B1 (en) * 1998-02-10 2001-06-12 Dsc Telecom L.P. Method and apparatus for accessing a data base via speaker/voice verification
US6243678B1 (en) * 1998-04-07 2001-06-05 Lucent Technologies Inc. Method and system for dynamic speech recognition using free-phone scoring
US6185530B1 (en) * 1998-08-14 2001-02-06 International Business Machines Corporation Apparatus and methods for identifying potential acoustic confusibility among words in a speech recognition system
US6519565B1 (en) * 1998-11-10 2003-02-11 Voice Security Systems, Inc. Method of comparing utterances for security control
US6671672B1 (en) * 1999-03-30 2003-12-30 Nuance Communications Voice authentication system having cognitive recall mechanism for password verification
US6393305B1 (en) * 1999-06-07 2002-05-21 Nokia Mobile Phones Limited Secure wireless communication user identification by voice recognition
US6691089B1 (en) * 1999-09-30 2004-02-10 Mindspeed Technologies Inc. User configurable levels of security for a speaker verification system
JP2001111652A (ja) * 1999-10-12 2001-04-20 Fujitsu Ltd Voice response system using a voice/push-button signal conversion telephone
US6356868B1 (en) * 1999-10-25 2002-03-12 Comverse Network Systems, Inc. Voiceprint identification system
US6401063B1 (en) * 1999-11-09 2002-06-04 Nortel Networks Limited Method and apparatus for use in speaker verification
US7130800B1 (en) * 2001-09-20 2006-10-31 West Corporation Third party verification system
JP4689788B2 (ja) * 2000-03-02 2011-05-25 株式会社アニモ Electronic authentication system, electronic authentication method, and recording medium
EP1209663A1 (de) * 2000-11-27 2002-05-29 Siemens Aktiengesellschaft Access control arrangement and method for access control
US20020128844A1 (en) * 2001-01-24 2002-09-12 Wilson Raymond E. Telephonic certification of electronic death registration
US6985861B2 (en) * 2001-12-12 2006-01-10 Hewlett-Packard Development Company, L.P. Systems and methods for combining subword recognition and whole word recognition of a spoken input
US7194069B1 (en) * 2002-01-04 2007-03-20 Siebel Systems, Inc. System for accessing data via voice
JP2004096204A (ja) * 2002-08-29 2004-03-25 Nippon Telegraph & Telephone East Corp Remote voice control device; personal authentication, data registration, and automatic voice notification methods using the remote voice control device; and remote voice control program
US7224786B2 (en) * 2003-09-11 2007-05-29 Capital One Financial Corporation System and method for detecting unauthorized access using a voice signature
US20050071168A1 (en) * 2003-09-29 2005-03-31 Biing-Hwang Juang Method and apparatus for authenticating a user using verbal information verification
US20060229879A1 (en) * 2005-04-06 2006-10-12 Top Digital Co., Ltd. Voiceprint identification system for e-commerce

Non-Patent Citations (1)

Title
See references of EP1894186A4 *


Also Published As

Publication number Publication date
EP1894186A2 (en) 2008-03-05
JP2008544327A (ja) 2008-12-04
WO2007001602A3 (en) 2007-12-13
EP1894186A4 (en) 2009-05-20
CN101208739A (zh) 2008-06-25
KR20080019210A (ko) 2008-03-03
US20060293898A1 (en) 2006-12-28

Similar Documents

Publication Publication Date Title
US20060293898A1 (en) Speech recognition system for secure information
US6691089B1 (en) User configurable levels of security for a speaker verification system
US7386448B1 (en) Biometric voice authentication
US8010367B2 (en) Spoken free-form passwords for light-weight speaker verification using standard speech recognition engines
US10476872B2 (en) Joint speaker authentication and key phrase identification
US6073101A (en) Text independent speaker recognition for transparent command ambiguity resolution and continuous access control
CN101124623B (zh) Voice authentication system and voice authentication method
EP0983587B1 (en) Speaker verification method using multiple class groups
US5897616A (en) Apparatus and methods for speaker verification/identification/classification employing non-acoustic and/or acoustic models and databases
JP4463526B2 (ja) Voiceprint authentication system
US20180047397A1 (en) Voice print identification portal
JP4173207B2 (ja) System and method for speaker verification of utterances
US6496800B1 (en) Speaker verification system and method using spoken continuous, random length digit string
US6246987B1 (en) System for permitting access to a common resource in response to speaker identification and verification
JP2006505021A (ja) Robust multi-factor authentication for secure application environments
US20140188468A1 (en) Apparatus, system and method for calculating passphrase variability
JP4318475B2 (ja) Speaker authentication device and speaker authentication program
US20130339245A1 (en) Method for Performing Transaction Authorization to an Online System from an Untrusted Computer System
JP7339116B2 (ja) Voice authentication device, voice authentication system, and voice authentication method
KR102604319B1 (ko) Speaker authentication system and method
KR20140076056A (ko) Voice-based CAPTCHA method and apparatus
JP2006235623A (ja) System and method for speaker authentication using short utterance enrollment
CN1522431A (zh) Method and system for non-intrusive speaker verification using behavior models
US11929077B2 (en) Multi-stage speaker enrollment in voice authentication and identification
WO2000007087A1 (en) System of accessing crypted data using user authentication

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 200680018409.X

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application
ENP Entry into the national phase

Ref document number: 2008518142

Country of ref document: JP

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 1020077028302

Country of ref document: KR

WWE Wipo information: entry into national phase

Ref document number: 2006751084

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE