EP2994907A2 - Method and apparatus for training a voice recognition model database - Google Patents

Method and apparatus for training a voice recognition model database

Info

Publication number
EP2994907A2
Authority
EP
European Patent Office
Prior art keywords
recorded
utterance
noise
speech
electronic device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP14725344.7A
Other languages
German (de)
English (en)
Inventor
John R. Meloney
Joel A. Clark
Joseph C. Dwyer
Adrian Schuster
Snehitha Singaraju
Robert A. Zurek
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google Technology Holdings LLC
Original Assignee
Google Technology Holdings LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US14/094,875 (US9275638B2)
Application filed by Google Technology Holdings LLC filed Critical Google Technology Holdings LLC
Publication of EP2994907A2
Current legal status: Withdrawn

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/20 - Speech recognition techniques specially adapted for robustness in adverse environments, e.g. in noise, of stress induced speech
    • G10L 15/06 - Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L 15/063 - Training

Definitions

  • The present disclosure relates to speech recognition and, more particularly, to methods and devices for training voice recognition databases.
  • Speech recognition in such devices is far from perfect, however.
  • A speech recognition engine typically relies on a phoneme or command database to be able to recognize voice utterances.
  • A user may, however, need to "train" the phoneme or command database to recognize his or her speech characteristics: accent, frequently mispronounced words and syllables, tonal characteristics, cadence, etc.
  • The phoneme or command database may not be accurate in all audio environments. For example, the presence of background noise can decrease speech recognition accuracy.
  • FIG. 1 shows a user speaking to an electronic device, which is depicted as a mobile device in the drawing.
  • FIG. 2 shows example components of the electronic device of FIG. 1.
  • FIG. 3 shows an architecture on which various embodiments may be implemented.
  • FIGS. 4-6 show steps that may be carried out according to embodiments of the disclosure.
  • Noise-based voice recognition model database (abbreviated as "VR model database") as used herein refers to a database that functions as a noise-based phoneme database, as a command database, or as both.
  • Various embodiments of the disclosure include manual and automated methods of training VR model databases.
  • The manual embodiments of this disclosure include a directed training methodology in which the electronic device (also referred to as "device") directs the user to perform operations, in response to which the device updates the VR model database.
  • The device may carry out a manual training method during the initial setup of the device, or at any time when the procedure is launched by the user. For example, when the user is in a new type of noise environment, the user may launch the manual method to train the VR model database for that type of noise, and the device may store the new noise in a noise database.
  • The automated embodiments include methods launched by the device without the user's knowledge.
  • The device may launch an automated method based on environmental characteristics, such as when it senses a new type of noise, or in response to the user's actions.
  • Examples of user actions that could launch an automated training method include the user launching a speech recognition session via a button press, gesture trigger, or voice trigger. In these cases, the device would use the user's speech as well as other noises it detects to further train the VR model database. The device could also use the user's speech and detected noise for the speech recognition process itself.
  • The device would launch the automated training process using both the user's utterance from the speech recognition event and the result of that event as the training target.
  • The device trains the VR model database using previously-recorded noises and previously-recorded utterances (retrieved from a noise database and an utterance database, respectively) in addition to live utterances and live noises.
  • The previously-recorded utterances can be obtained in different noise environments and during different use cases of the device.
  • The previously-recorded utterances and noises may be stored in, and retrieved from, an utterance database and a noise database, respectively.
  • The device can store the live utterances and the live noises in an utterance database and a noise database, respectively, for future use.
  • The device can train the VR model database in various ways, any of which, depending on the circumstances, may be used for both the manual and the automated training methodologies.
  • Three methodologies relate to how the composite speech and noise signal is captured for the purpose of training the VR model databases. The first of these methods is based on a composite signal of speech and natural noise captured by the device. The second is based on capturing a composite signal of live speech with noise generated by the device's acoustic output transducer. The third is based on a composite signal that the device generates by mixing speech and noise that it captures live or that it retrieves from memory. This last embodiment can use speech captured in a quiet environment mixed with previously stored noise files, or captured noise mixed with previously stored speech utterances.
  • An electronic device digitally combines a single voice input with each of a series of noise samples.
  • Each noise sample is taken from a different audio environment (e.g., street noise, babble, interior car noise).
  • The voice input / noise sample combinations are used to train the VR model database without the user having to repeat the voice input in each of the different environments.
  • Alternatively, the electronic device transmits the user's voice input to a server that maintains and trains the VR model database.
  • The method is carried out by recording an utterance, digitally combining the recorded utterance with a previously-recorded noise sample, and training a noise-based VR model database based on this digital combination.
  • These steps may be repeated for each previously-recorded noise sample of a set of noise samples (e.g., noise samples of a noise database), and may thus be repeated prior to recording a different utterance. Over time, this process can be repeated so as to continually improve speech recognition.
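  • The digital combination step can be pictured as waveform mixing at a controlled signal-to-noise ratio, as in the minimal sketch below. This is an illustration only, not the disclosure's implementation; the function name, the NumPy dependency, and the SNR parameterization are assumptions.

```python
import numpy as np

def mix_at_snr(utterance: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Digitally combine a recorded utterance with a stored noise sample
    at a target signal-to-noise ratio (in dB)."""
    # Loop (tile) the noise so it covers the whole utterance, then trim.
    reps = int(np.ceil(len(utterance) / len(noise)))
    noise = np.tile(noise, reps)[: len(utterance)]

    # Scale the noise so the speech/noise power ratio matches snr_db.
    speech_power = float(np.mean(utterance ** 2))
    noise_power = float(np.mean(noise ** 2)) + 1e-12  # guard against silence
    target_noise_power = speech_power / (10.0 ** (snr_db / 10.0))
    noise = noise * np.sqrt(target_noise_power / noise_power)

    return utterance + noise
```

Repeating the call once per stored noise sample yields one training example per audio environment from a single recording.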
  • The electronic device can generate an artificial noise environment using predefined noise playback (pink noise, car noise, babble) or no playback (silence), using the speakers on the device.
  • The user speaks both during the playback and without it. This allows the device to identify changes in the user's speech characteristics in quiet versus noisy audio environments.
  • The VR model database can be trained based on this information.
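  • As a rough illustration of how such quiet-versus-noisy shifts might be quantified, the sketch below compares coarse acoustic statistics of the two recordings (speakers tend, for example, to raise their level in noise). The choice of features is an assumption; the disclosure does not specify any.

```python
import numpy as np

def speech_stats(samples: np.ndarray) -> dict:
    """Coarse statistics of one recording of the user's speech."""
    rms = float(np.sqrt(np.mean(samples ** 2)))   # overall level
    signs = np.signbit(samples).astype(np.int8)
    zcr = float(np.mean(np.abs(np.diff(signs))))  # zero-crossing rate (spectral proxy)
    return {"rms": rms, "zcr": zcr}

def characteristic_shift(quiet: np.ndarray, noisy: np.ndarray) -> dict:
    """Per-feature change between speech spoken in quiet and speech
    spoken over noise playback."""
    q, n = speech_stats(quiet), speech_stats(noisy)
    return {k: n[k] - q[k] for k in q}
```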
  • One embodiment involves receiving an utterance via a microphone of an electronic device and, while receiving the utterance, reproducing a previously-recorded noise sample through a speaker of the electronic device.
  • The microphone picks up both the utterance and the previously-recorded noise.
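  • Acoustically combining the two signals amounts to starting noise playback and microphone capture at the same time. Below is a minimal sketch using the third-party sounddevice library; the library choice, sample rate, and function name are assumptions, not part of the disclosure.

```python
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 16000  # Hz; an assumed value

def record_over_noise(noise: np.ndarray, seconds: float) -> np.ndarray:
    """Play a previously-recorded noise sample through the speaker while
    recording from the microphone; the returned capture contains both the
    user's utterance and the reproduced noise."""
    frames = int(seconds * SAMPLE_RATE)
    playback = np.resize(noise, frames).astype(np.float32)  # loop/trim noise
    captured = sd.playrec(playback, samplerate=SAMPLE_RATE, channels=1)
    sd.wait()  # block until the simultaneous playback/recording finishes
    return captured[:, 0]
```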
  • Yet another embodiment involves recording an utterance during a speech-to-text ("STT") command mode, and determining whether the recorded utterance is an STT command. Such a determination may be made based on whether a word-recognition confidence value exceeds a threshold.
  • If the recorded utterance is identified as an STT command, the electronic device performs a function based on the STT command. If the electronic device performs the correct function (i.e., the function associated with the command), then the device trains the noise-based VR model database to associate the utterance with the command.
  • This method may also be repeatedly performed during the STT command mode for the same speech phrase recorded from the same person combined with different noise environments.
  • Noise environments include a home, a car, a street, an office, and a restaurant.
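  • The confidence test described above reduces to a small decision routine, sketched below. Every collaborator (the recognizer, the command executor, the correctness check) is a hypothetical stand-in, and the threshold value is an assumed constant.

```python
CONFIDENCE_THRESHOLD = 0.75  # assumed; the disclosure only requires "a threshold"

def handle_stt_utterance(audio, recognizer, vr_model_db, execute, is_correct):
    """Treat the utterance as an STT command only when recognition
    confidence exceeds the threshold; train on it when the performed
    function turns out to be the correct one."""
    command, confidence = recognizer.best_command(audio)
    if confidence <= CONFIDENCE_THRESHOLD:
        return None                        # not deemed an STT command
    result = execute(command)              # perform the associated function
    if is_correct(result):                 # the correct function was performed
        vr_model_db.train(audio, command)  # associate utterance with command
    return command
```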
  • When using always-on audio (AOA), an electronic device is capable of waking up from a sleep mode upon receiving a trigger command from a user.
  • AOA places additional demands on devices, especially mobile devices.
  • AOA is most effective when the electronic device is able to recognize the user's voice commands accurately and quickly.
  • A user 104 provides voice input (or vocalized information or speech) 106 that is received by a speech recognition-enabled electronic device ("device") 102 by way of a microphone (or other sound receiver) 108.
  • The device 102, which is a mobile device in this example, includes a touch screen display 110 that is able to display visual images and to receive or sense touch-type inputs as provided by way of a user's finger or other touch input device such as a stylus. Notwithstanding the presence of the touch screen display 110, in the embodiment shown in FIG. 1, the device 102 also has a number of discrete keys or buttons 112 that serve as input devices of the device. However, in other embodiments such keys or buttons (or any particular number of such keys or buttons) need not be present, and the touch screen display 110 can serve as the primary or only user input device.
  • While FIG. 1 particularly shows the device 102 as including the touch screen display 110 and keys or buttons 112, these features are only intended to be examples of components/features on the device 102, and in other embodiments the device 102 need not include one or more of these features and/or can include other features in addition to or instead of them.
  • The device 102 is intended to be representative of a variety of devices including, for example, cellular telephones, personal digital assistants (PDAs), smart phones, or other handheld or portable electronic devices.
  • The device can also be a headset (e.g., a Bluetooth headset), MP3 player, battery-powered device, watch device (e.g., a wristwatch) or other wearable device, radio, navigation device, laptop or notebook computer, netbook, pager, PMP (personal media player), DVR (digital video recorder), gaming device, camera, e-reader, e-book, tablet device, navigation device with a video-capable screen, multimedia docking station, or other device.
  • Embodiments of the present disclosure are intended to be applicable to any of a variety of electronic devices that are capable of or configured to receive voice input or other sound inputs that are indicative or representative of vocalized information.
  • FIG. 2 shows internal components of the device 102 of FIG. 1, in accordance with an embodiment of the disclosure.
  • The device 102 includes one or more wireless transceivers 202, a computing processor 204 (e.g., a microprocessor, microcomputer, application-specific integrated circuit, digital signal processor, etc.), a memory 206, one or more output devices 208, and one or more input devices 210.
  • The device 102 can further include a component interface 212 to provide a direct connection to auxiliary components or accessories for additional or enhanced functionality.
  • The device 102 may also include a power supply 214, such as a battery, for providing power to the other internal components while enabling the mobile device to be portable.
  • The device 102 additionally includes one or more sensors 228. All of the components of the device 102 can be coupled to one another, and be in communication with one another, by way of one or more internal communication links 232 (e.g., an internal bus).
  • The wireless transceivers 202 particularly include a cellular transceiver 203 and a wireless local area network (WLAN) transceiver 205.
  • The cellular transceiver 203 is configured to conduct cellular communications, such as 3G, 4G, or 4G-LTE, vis-à-vis cell towers (not shown), albeit in other embodiments the cellular transceiver 203 can be configured to utilize any of a variety of other cellular-based communication technologies such as analog communications (using AMPS), digital communications (using CDMA, TDMA, GSM, iDEN, GPRS, EDGE, etc.), and/or next generation communications (using UMTS, WCDMA, LTE, IEEE 802.16, etc.) or variants thereof.
  • The WLAN transceiver 205 is configured to conduct communications in accordance with the IEEE 802.11 (a, b, g, or n) standard with access points.
  • The WLAN transceiver 205 can instead (or in addition) conduct other types of communications commonly understood as being encompassed within WLAN communications, such as some types of peer-to-peer (e.g., Wi-Fi Peer-to-Peer) communications.
  • The Wi-Fi transceiver 205 can be replaced or supplemented with one or more other wireless transceivers configured for non-cellular wireless communications including, for example, wireless transceivers employing ad hoc communication technologies such as HomeRF (radio frequency), Home Node B (3G femtocell), Bluetooth, and/or other wireless communication technologies such as infrared technology.
  • Although in the present embodiment the device 102 has two of the wireless transceivers 202 (that is, the transceivers 203 and 205), the present disclosure is intended to encompass numerous embodiments in which any arbitrary number of wireless transceivers employing any arbitrary number of communication technologies are present.
  • The device 102 is capable of communicating with any of a variety of other devices or systems (not shown) including, for example, other mobile devices, web servers, cell towers, access points, other remote devices, etc.
  • Wireless communication between the device 102 and any arbitrary number of other devices or systems can thereby be achieved.
  • Operation of the wireless transceivers 202 in conjunction with other internal components of the device 102 can take a variety of forms.
  • Operation of the wireless transceivers 202 can proceed in a manner in which, upon reception of wireless signals, the internal components of the device 102 detect communication signals and the transceivers 202 demodulate the communication signals to recover incoming information, such as voice and/or data, transmitted by the wireless signals.
  • The computing processor 204 formats the incoming information for the one or more output devices 208.
  • The computing processor 204 formats outgoing information, which can but need not be activated by the input devices 210, and conveys the outgoing information to one or more of the wireless transceivers 202 for modulation so as to provide modulated communication signals to be transmitted.
  • The output and input devices 208 and 210 of the device 102 can include a variety of visual, audio, and/or mechanical components.
  • The output device(s) 208 can include one or more visual output devices 216 such as a liquid crystal display and/or light emitting diode indicator, one or more audio output devices 218 such as a speaker, alarm, and/or buzzer, and/or one or more mechanical output devices 220 such as a vibrating mechanism.
  • The visual output devices 216 can, among other things, also include a video screen.
  • The input device(s) 210 can include one or more visual input devices 222 such as an optical sensor (for example, a camera lens and photosensor), one or more audio input devices 224 such as the microphone 108 of FIG. 1 (or, further for example, a microphone of a Bluetooth headset), and/or one or more mechanical input devices 226 such as a flip sensor, keyboard, keypad, selection button, navigation cluster, touch pad, capacitive sensor, motion sensor, and/or switch.
  • Operations that can actuate one or more of the input devices 210 can include not only the physical pressing/actuation of buttons or other actuators, but can also include, for example, opening the mobile device, unlocking the device, moving the device to actuate a motion, moving the device to actuate a location positioning system, and operating the device.
  • The device 102 also can include one or more of various types of sensors 228, as well as a sensor hub to manage one or more functions of the sensors.
  • The sensors 228 may include, for example, proximity sensors (e.g., a light detecting sensor, an ultrasound transceiver, or an infrared transceiver), touch sensors, altitude sensors, and one or more location circuits/components that can include, for example, a Global Positioning System (GPS) receiver, a triangulation receiver, an accelerometer, a tilt sensor, a gyroscope, or any other information collecting device that can identify a current location or user-device interface (carry mode) of the device 102.
  • Although in the present embodiment the sensors 228 are considered to be distinct from the input devices 210, in other embodiments it is possible that one or more of the input devices can also be considered to constitute one or more of the sensors (and vice versa). Additionally, although in the present embodiment the input devices 210 are shown to be distinct from the output devices 208, it should be recognized that in some embodiments one or more devices serve both as input device(s) and output device(s). In particular, in the present embodiment in which the device 102 includes the touch screen display 110, the touch screen display can be considered to constitute both a visual output device and a mechanical input device (by contrast, the keys or buttons 112 are merely mechanical input devices).
  • The memory 206 can encompass one or more memory devices of any of a variety of forms (e.g., read-only memory, random access memory, static random access memory, dynamic random access memory, etc.), and can be used by the computing processor 204 to store and retrieve data.
  • The memory 206 can be integrated with the computing processor 204 in a single device (e.g., a processing device including memory or processor-in-memory (PIM)), albeit such a single device will still typically have distinct portions/sections that perform the different processing and memory functions and that can be considered separate devices.
  • The memory 206 of the device 102 can be supplemented or replaced by other memory(s) located elsewhere apart from the device 102 and, in such embodiments, the device 102 can be in communication with or access such other memory device(s) by way of any of various communications techniques, for example, wireless communications afforded by the wireless transceivers 202, or connections via the component interface 212.
  • The data that is stored by the memory 206 can include, but need not be limited to, operating systems, programs (applications), modules, and informational data.
  • Each operating system includes executable code that controls basic functions of the device 102, such as interaction among the various components included among the internal components of the device 102, communication with external devices via the wireless transceivers 202 and/or the component interface 212, and storage and retrieval of programs and data, to and from the memory 206.
  • Each program includes executable code that utilizes an operating system to provide more specific functionality, such as file system service and handling of protected and unprotected data stored in the memory 206.
  • Such programs can include, among other things, programming for enabling the device 102 to perform a process such as the process for speech recognition shown in FIG. 3 and discussed further below.
  • As for informational data, this is non-executable code or information that can be referenced and/or manipulated by an operating system or program for performing functions of the device 102.
  • A configuration for the electronic device 102 will now be described.
  • Stored in the memory 206 of the electronic device 102 are a VR model database 308, an utterance database 309, and a noise database 310, all of which are accessible to the computing processor 204, the audio input device 224 (e.g., microphones), and the audio output device 218 (e.g., a speaker).
  • The VR model database 308 contains data that associates sounds with speech phonemes or commands or both.
  • The utterance database 309 contains samples of speech utterances recorded from the user.
  • The noise database 310 contains noise samples that are recorded from different environments, digitally generated, or both.
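  • As a loose sketch, the three stores might be organized as follows; the field names are assumptions, since the disclosure only requires that utterances, noises, and model data be separately stored and retrievable.

```python
from dataclasses import dataclass, field

@dataclass
class NoiseSample:
    environment: str            # e.g., "street", "babble", "car interior"
    pcm: bytes                  # recorded or digitally generated audio
    digitally_generated: bool = False

@dataclass
class UtteranceSample:
    pcm: bytes                  # the user's recorded speech
    environment: str = "quiet"  # use case / noise environment of capture
    target: str = ""            # phoneme sequence or command, when known

@dataclass
class VRModelDatabase:
    # maps (phoneme or command, noise environment) -> model parameters
    models: dict = field(default_factory=dict)
```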
  • The device 102 is capable of accessing a network such as the Internet. While the figure shows direct coupling of components such as the audio input device 224 and the audio output device 218, the connection to the computing processor 204 may be through other components or circuitry in the device. Additionally, utterances and noise that the device 102 captures may be stored temporarily in the memory 206, or more persistently in the utterance database 309 and noise database 310, respectively. Whether stored temporarily or not, the utterances and noises can subsequently be accessed by the computing processor 204. The computing processor 204 may also reside external to the electronic device 102, such as on a server on the Internet.
  • The computing processor 204 executes a speech recognition engine 305, which may be resident in the memory 206 and which has access to the noise database 310, the utterance database 309, and the VR model database 308.
  • One or more of the noise database 310, the utterance database 309, and the VR model database 308, as well as the speech recognition engine 305, may instead be stored on and executed by a remotely located server 301.
  • The procedure 400 shown in FIG. 4 is a passive training procedure that updates and improves the VR model database 308 in a way that is transparent to the user, since it does not require the user's conscious interaction to augment the model.
  • The procedure 400 starts with the electronic device 102 in an STT command session, during which the speech recognition engine 305 is in a mode in which it interprets utterances as commands rather than as words that are to be converted into text.
  • At step 402, the electronic device 102 records an utterance of the user's speech, including the natural background noise.
  • The recorded utterance and noise may be stored in the utterance database 309 and noise database 310 for future use.
  • The speech recognition engine 305 then determines whether the utterance is an STT command. In doing so, the speech recognition engine 305 determines the most likely candidate STT command given the utterance. The speech recognition engine 305 assigns a confidence score to that candidate and, if the confidence score is above a predetermined threshold, deems the utterance to be an STT command. Among the factors influencing the confidence score is the methodology used in performing the training. If the utterance is determined not to be an STT command, then the process returns to step 402. If it is determined to be an STT command, the electronic device 102 performs a function based on the STT command at step 406.
  • At step 408, the electronic device 102 determines whether the function performed is a valid operation. If so, then at step 410, the electronic device 102 trains the VR model database 308 by, for example, associating the user's utterances with the command. This process, executed during normal operation, allows the electronic device 102 to update the original VR model database 308 to reflect actual usage in multiple environments, which naturally include the noise inherent in those environments. The device 102 may also use previously-recorded utterances from the utterance database 309 and previously-recorded noise from the noise database 310 during this training process.
  • A "No" response at step 408 results in the device 102 asking the user to enter the text for the command they wish to execute, in step 411. This text and the utterance captured in step 402 are then used to train and update the VR model database 308.
  • The procedure 500 shown in FIG. 5 is a procedure in which the user knowingly interacts with the electronic device 102.
  • The procedure 500 begins at step 502, at which the electronic device 102 records an utterance, e.g., by converting it into digital data and storing it as a digital file. The storage location can be volatile memory or more persistent memory (e.g., the utterance database 309).
  • At step 504, the electronic device 102 retrieves data of a noise sample from the noise database 310 (e.g., restaurant noise). The electronic device 102 may select the noise sample.
  • The electronic device 102 digitally combines the noise sample and the utterance.
  • The electronic device 102 trains the VR model database 308 using the combined noise sample and utterance.
  • The electronic device 102 then updates the VR model database 308.
  • The electronic device 102 determines whether there are any more noise samples with which to train the VR model database 308. If there are none, then the process ends. If there are, then the process loops back to step 504, at which the electronic device 102 retrieves another noise sample from the noise database 310.
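  • Combined with the earlier mixing sketch, the FIG. 5 flow reduces to roughly the following loop; the default SNR target and the database interfaces are assumptions.

```python
def directed_training(device, noise_db, vr_model_db, snr_db: float = 10.0):
    """FIG. 5-style training: record one utterance, then reuse it against
    every stored noise sample instead of re-prompting the user."""
    utterance = device.record_utterance()             # step 502
    for noise in noise_db.all_samples():              # step 504 and the loop back
        noisy = mix_at_snr(utterance, noise, snr_db)  # digital combination
        vr_model_db.train_on(noisy)                   # train, then update
```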
  • The procedure 600 shown in FIG. 6 begins at step 602, at which the electronic device 102 prompts a user for an utterance.
  • At step 604, the electronic device 102 plays a noise sample from the noise database 310 via the speaker 306.
  • The electronic device 102 carries out step 606 at the same time as step 604.
  • At step 606, the electronic device 102 records the user's utterance along with the played noise sample.
  • The electronic device 102 stores the acoustically combined noise sample and utterance in volatile memory or in the noise database 310 and the utterance database 309.
  • The electronic device 102 trains the VR model database 308 using the combined noise sample and utterance.
  • The electronic device 102 then updates the VR model database 308.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

An electronic device (102) digitally combines a single voice input with each of a series of noise samples. Each noise sample is taken from a different audio environment (for example, street noise, babble, interior car noise). The voice input and noise sample combinations are used to train a voice recognition model database (308) without the user (104) having to repeat the voice input in each of the different environments. In a variant, the electronic device (102) transmits the user's voice input to a server (301) that maintains and trains the voice recognition model database (308).
EP14725344.7A 2013-05-06 2014-04-23 Method and apparatus for training a voice recognition model database Withdrawn EP2994907A2 (fr)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201361819985P 2013-05-06 2013-05-06
US14/094,875 US9275638B2 (en) 2013-03-12 2013-12-03 Method and apparatus for training a voice recognition model database
PCT/US2014/035117 WO2014182453A2 (fr) 2013-05-06 2014-04-23 Method and apparatus for training a voice recognition model database

Publications (1)

Publication Number Publication Date
EP2994907A2 (fr) 2016-03-16

Family

ID=51867838

Family Applications (1)

Application Number Title Priority Date Filing Date
EP14725344.7A Withdrawn EP2994907A2 (fr) 2013-05-06 2014-04-23 Procédé et appareil d'apprentissage d'une base de données de modèles de reconnaissance vocale

Country Status (3)

Country Link
EP (1) EP2994907A2 (fr)
CN (1) CN105580071B (fr)
WO (1) WO2014182453A2 (fr)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110232909A (zh) * 2018-03-02 2019-09-13 北京搜狗科技发展有限公司 Audio processing method, apparatus, device, and readable storage medium
CN109192216A (zh) * 2018-08-08 2019-01-11 联智科技(天津)有限责任公司 Simulation-based acquisition method and apparatus for voiceprint recognition training data sets
KR20200033707A (ko) * 2018-09-20 2020-03-30 삼성전자주식회사 Electronic device and method for providing or obtaining training data thereof
CN109545196B (zh) * 2018-12-29 2022-11-29 深圳市科迈爱康科技有限公司 Speech recognition method, apparatus, and computer-readable storage medium
CN109545195B (zh) * 2018-12-29 2023-02-21 深圳市科迈爱康科技有限公司 Companion robot and control method therefor
CN110544469B (zh) * 2019-09-04 2022-04-19 秒针信息技术有限公司 Training method and apparatus for a speech recognition model, storage medium, and electronic apparatus
CN110808030B (zh) * 2019-11-22 2021-01-22 珠海格力电器股份有限公司 Voice wake-up method, system, storage medium, and electronic device
CN111128141B (zh) * 2019-12-31 2022-04-19 思必驰科技股份有限公司 Audio recognition decoding method and apparatus
CN111369979B (zh) * 2020-02-26 2023-12-19 广州市百果园信息技术有限公司 Training sample acquisition method, apparatus, device, and computer storage medium
CN113099353A (zh) * 2021-04-21 2021-07-09 浙江吉利控股集团有限公司 Integrated microphone, seat belt, and steering wheel for a vehicle, and vehicle

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4590692B2 (ja) * 2000-06-28 2010-12-01 パナソニック株式会社 Acoustic model creation apparatus and method
US6556971B1 (en) * 2000-09-01 2003-04-29 Snap-On Technologies, Inc. Computer-implemented speech recognition system training
US6876966B1 (en) * 2000-10-16 2005-04-05 Microsoft Corporation Pattern recognition training method and apparatus using inserted noise followed by noise reduction
US6889189B2 (en) * 2003-09-26 2005-05-03 Matsushita Electric Industrial Co., Ltd. Speech recognizer performance in car and home applications utilizing novel multiple microphone configurations
US20060149693A1 (en) * 2005-01-04 2006-07-06 Isao Otsuka Enhanced classification using training data refinement and classifier updating
US8762143B2 (en) * 2007-05-29 2014-06-24 At&T Intellectual Property Ii, L.P. Method and apparatus for identifying acoustic background environments based on time and speed to enhance automatic speech recognition
US8234111B2 (en) * 2010-06-14 2012-07-31 Google Inc. Speech and noise models for speech recognition
TWI442384B (zh) * 2011-07-26 2014-06-21 Ind Tech Res Inst Microphone-array-based speech recognition system and method
CN102426837B (zh) * 2011-12-30 2013-10-16 中国农业科学院农业信息研究所 Robust method for mobile-device speech recognition in agricultural field data collection

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GIBAK KIM ET AL: "Improving Speech Intelligibility in Noise Using Environment-Optimized Algorithms", IEEE TRANSACTIONS ON AUDIO, SPEECH AND LANGUAGE PROCESSING, IEEE, vol. 18, no. 8, 1 November 2010 (2010-11-01), pages 2080 - 2090, XP011300614, ISSN: 1558-7916, DOI: 10.1109/TASL.2010.2041116 *
See also references of WO2014182453A2 *

Also Published As

Publication number Publication date
CN105580071B (zh) 2020-08-21
WO2014182453A3 (fr) 2014-12-31
CN105580071A (zh) 2016-05-11
WO2014182453A2 (fr) 2014-11-13

Similar Documents

Publication Publication Date Title
US9275638B2 (en) Method and apparatus for training a voice recognition model database
US11676581B2 (en) Method and apparatus for evaluating trigger phrase enrollment
CN105580071B (zh) Method and apparatus for training a voice recognition model database
US11557310B2 (en) Voice trigger for a digital assistant
US20200279563A1 (en) Method and apparatus for executing voice command in electronic device
US9542947B2 (en) Method and apparatus including parallell processes for voice recognition
US9418651B2 (en) Method and apparatus for mitigating false accepts of trigger phrases
JP2019117623A (ja) 音声対話方法、装置、デバイス及び記憶媒体
US9570076B2 (en) Method and system for voice recognition employing multiple voice-recognition techniques
US20140244273A1 (en) Voice-controlled communication connections
JP6844608B2 (ja) 音声処理装置および音声処理方法
US20140278392A1 (en) Method and Apparatus for Pre-Processing Audio Signals
US20210110838A1 (en) Acoustic aware voice user interface

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20151112

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20190417

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Effective date: 20221031

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230524