WO2019169685A1 - Voice processing method, apparatus and electronic device - Google Patents

Voice processing method, apparatus and electronic device (语音处理方法、装置和电子设备) Download PDF

Info

Publication number
WO2019169685A1
WO2019169685A1 (PCT/CN2018/082036)
Authority
WO
WIPO (PCT)
Prior art keywords
zero-crossing rate, voice, speech, voiced
Prior art date
Application number
PCT/CN2018/082036
Other languages
English (en)
French (fr)
Inventor
安黄彬
Original Assignee
深圳市沃特沃德股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市沃特沃德股份有限公司
Publication of WO2019169685A1 publication Critical patent/WO2019169685A1/zh

Links

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/02: Feature extraction for speech recognition; Selection of recognition unit
    • G10L15/08: Speech classification or search
    • G10L15/14: Speech classification or search using statistical models, e.g. Hidden Markov Models [HMMs]
    • G10L2015/088: Word spotting
    • G10L21/00: Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02: Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208: Noise filtering
    • G10L25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00
    • G10L25/03: Speech or voice analysis techniques characterised by the type of extracted parameters
    • G10L25/78: Detection of presence or absence of voice signals
    • G10L25/93: Discriminating between voiced and unvoiced parts of speech signals

Definitions

  • The present invention relates to the field of electronic technologies, and in particular to a voice processing method, apparatus, and electronic device.
  • Voice wake-up technology is a branch of artificial intelligence.
  • the voice wake-up technology has a wide range of applications, such as robots, mobile phones, wearable devices, smart homes, and automobiles. Many devices with voice recognition use voice wake-up technology as the beginning of human-machine interaction.
  • Voice wake-up means that when a user speaks a specific voice command, the device switches from the sleep state to the working state and gives a specified response.
  • The purpose of wake-up technology is that the user can operate the device entirely by voice, hands-free; at the same time, with such a wake-up mechanism the device does not need to stay in a working state at all times, which greatly saves energy.
  • The key to voice wake-up is keyword matching. Currently, keyword matching first performs voice activity detection (VAD) on the sound signal to extract a voice signal, and then uses the voice signal for keyword matching to judge whether it contains the wake-up keyword. Because voice activity detection is imperfect, noise and unvoiced sound may remain at the head, tail, and middle of the effective voice (i.e., voiced sound), and unvoiced sound is complex and variable; this increases the system's computation load and power consumption, and interference such as unvoiced sound and noise degrades the robustness of the matching feature parameters, and hence the accuracy of keyword matching.
  • the main object of the present invention is to provide a voice processing method, apparatus and electronic device, which aim to reduce system power consumption and improve the accuracy of keyword matching.
  • An embodiment of the present invention provides a voice processing method, where the method includes the following steps: performing voice activity detection on a sound signal, and extracting a voice signal from the sound signal; performing voiced sound detection on the voice signal, and extracting voiced segments from the voice signal; calculating a zero-crossing rate feature parameter of the voiced segments; and performing keyword matching using the zero-crossing rate feature parameter.
  • Embodiments of the present invention simultaneously provide a voice processing device, where the device includes:
  • a first detecting module configured to perform voice activity detection on the sound signal, and extract a voice signal from the sound signal
  • a second detecting module configured to perform voiced sound detection on the voice signal, and extract a voiced sound segment from the voice signal
  • a calculation module configured to calculate a zero-crossing rate characteristic parameter of the voiced segment
  • a matching module configured to perform keyword matching by using the zero-crossing rate feature parameter.
  • Embodiments of the present invention also provide an electronic device including a memory, a processor, and at least one application stored in the memory and configured to be executed by the processor, the application being configured to perform the aforementioned voice processing method.
  • In the voice processing method provided by the embodiments of the present invention, voiced segments are extracted from the voice signal, their zero-crossing rate feature parameter is calculated, and keyword matching is performed with that parameter. Interference such as unvoiced sound and noise is thereby filtered out of the voice signal, and keyword matching is performed only on the effective voice (the voiced segments). On one hand this greatly reduces the amount of feature-parameter computation and effectively lowers system power consumption; on the other hand it improves the robustness of the feature parameters, and hence the accuracy of keyword matching.
  • Moreover, compared with feature parameters such as LPC, PLP, LPCC, and MFCC used in the prior art, the zero-crossing rate feature parameter adopted in the embodiments requires less computation, further lowering system power consumption; and the embodiments use a Gaussian mixture model for keyword matching, further improving its accuracy.
  • FIG. 1 is a flow chart of an embodiment of a voice processing method of the present invention
  • FIG. 2 is a schematic diagram of voice activity detection of a sound signal in an embodiment of the present invention
  • FIG. 3 is a schematic diagram of correcting a voice activity detection result in an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a voiced sound segment extracted from a voice signal in an embodiment of the present invention.
  • FIG. 5 is a block diagram showing an embodiment of a voice processing device according to the present invention.
  • FIG. 6 is a block diagram of a second detection module of FIG. 5;
  • FIG. 7 is another block diagram of the second detecting module of FIG. 5;
  • FIG. 8 is a block diagram of a computing module of FIG. 5;
  • FIG. 9 is a block diagram of the matching module of FIG. 5;
  • FIG. 10 is a block diagram of the determination unit of FIG. 9.
  • The terms "terminal" and "terminal device" used herein include both devices having only a wireless signal receiver without transmitting capability, and devices having receiving and transmitting hardware capable of two-way communication over a two-way communication link. Such devices may include: cellular or other communication devices with a single-line display, a multi-line display, or no multi-line display; PCS (Personal Communications Service) devices, which may combine voice, data processing, fax, and/or data communication capabilities; PDAs (Personal Digital Assistant), which may include a radio-frequency receiver, a pager, Internet/intranet access, a web browser, a notepad, a calendar, and/or a GPS (Global Positioning System) receiver; and conventional laptop and/or palmtop computers or other devices that have and/or include a radio-frequency receiver.
  • The "terminal" or "terminal device" may be portable, transportable, installed in a vehicle (aviation, sea, and/or land), or adapted and/or configured to operate locally and/or to operate in a distributed fashion at any other location on Earth and/or in space.
  • The "terminal" or "terminal device" may also be a communication terminal, an Internet terminal, or a music/video playback terminal, for example a PDA, a MID (Mobile Internet Device), and/or a mobile phone with music/video playback functions, or a device such as a smart TV or a set-top box.
  • Referring to FIG. 1, an embodiment of the voice processing method according to the present invention is provided.
  • the method includes the following steps:
  • S11: The electronic device collects a sound signal through a microphone or receives a sound signal sent by an external device, performs voice activity detection on the sound signal, and extracts a voice signal from it.
  • the electronic device may be a terminal device such as a mobile phone, a tablet, a personal computer, a notebook computer, or the like, or may be an electronic device such as a wearable device, a smart home device, an in-vehicle device, or a robot.
  • In the embodiments of the present invention, the electronic device may perform voice activity detection on the sound signal based on the zero-crossing rate, preferably combining the zero-crossing rate with short-time energy, where the threshold of the zero-crossing rate is a first threshold.
  • The zero-crossing rate here refers to the short-time zero-crossing rate, which can be regarded as a simple measure of signal frequency and is a feature parameter in the time-domain analysis of voice signals. A zero crossing means that the signal passes through the zero value, and the zero-crossing rate is the number of times the signal passes through zero per unit time. For a continuous voice signal with a time axis, one can observe where the time-domain waveform crosses the horizontal axis. For a discrete-time sequence, a zero crossing means that adjacent sample values change sign, and the zero-crossing rate is the rate of such sign changes. For a speech signal, it is the number of times the waveform crosses the horizontal axis (zero level) within one frame of speech, which can be computed from the number of sign changes between adjacent samples.
  • Short-time energy and the zero-crossing rate can both be used for voice activity detection, mainly to locate the start and end points of silent segments and voice segments. Short-time energy is effective when background noise is low, and the zero-crossing rate is effective when background noise is high, but the two parameters usually perform better in combination.
  • Optionally, when performing voice activity detection based on the zero-crossing rate, for two adjacent sampling points tmp1 and tmp2 in a sound frame of the sound signal, when tmp1*tmp2<0 and |tmp1-tmp2|>T1 are satisfied simultaneously, the electronic device determines that the sound frame has crossed zero once and counts the zero-crossing rate of the sound frame accordingly, where T1 is the first threshold. The electronic device then extracts from the sound signal the sound frames whose zero-crossing rate is greater than a preset value as the voice signal, or filters out the sound frames whose zero-crossing rate is less than or equal to the preset value to obtain the voice signal. The preset value can be set according to actual needs.
  • Optionally, for adjacent sampling-point pairs tmp1 and tmp2 in the sound signal, when tmp1*tmp2<0 and |tmp1-tmp2|>T1 are satisfied simultaneously, the electronic device judges the zero-crossing rate to be 1, and otherwise 0, where T1 is the first threshold. The electronic device then extracts from the sound signal the data segments corresponding to all sampling-point pairs with zero-crossing rate 1 as the voice signal, or filters out the data segments corresponding to pairs with zero-crossing rate 0 to obtain the voice signal.
  • the obtained speech signal includes unvoiced and voiced sounds, and may also include noise of the head and tail portions, which is related to the speech duration parameter and the mute duration parameter set by the speech activity detection algorithm.
  • Further, before step S11, the electronic device may filter the sound signal to remove components outside the voice band; the voice band is preferably 200-3400 Hz.
  • Further, after filtering the sound signal and before step S11, the electronic device may perform noise reduction on the sound signal to reduce noise within the 200-3400 Hz band.
  • Further, after the noise reduction and before step S11, the electronic device may apply pre-emphasis to the sound signal so that unvoiced and voiced sounds can be better distinguished later.
  • FIG. 2 is a schematic diagram of voice activity detection, in which the horizontal axis is time and the vertical axis is the amplitude of the sound signal; the portion between the two line segments is the result of the voice activity detection of step S11, i.e., the detected voice signal.
  • S12: Perform voiced sound detection on the voice signal, and extract voiced segments from the voice signal.
  • The voice signal obtained after voice activity detection includes not only effective voice (i.e., voiced sound) but also some noise and unvoiced sound. Noise has a high zero-crossing rate and low short-time energy; the unvoiced spectrum contains more high-frequency components, so its zero-crossing rate is relatively high, while the voiced spectrum is mostly concentrated below 3 kHz, so its zero-crossing rate is low. Analysis of a large amount of experimental data shows that, for a specific person and a specific keyword, the zero-crossing rate of the voiced sound is basically stable, whereas that of the unvoiced sound is not.
  • In view of this, the electronic device may perform voiced sound detection on the voice signal based on the zero-crossing rate and extract the voiced segments, where the threshold of the zero-crossing rate is a second threshold, and the second threshold is greater than the first threshold. Optionally, the electronic device extracts from the voice signal the voice frames whose zero-crossing rate is greater than a preset value to form the voiced segments. The preset value can be set according to actual needs. The second threshold T2 is greater than the aforementioned first threshold T1, and is preferably 8%-15% (e.g., 10%) of the average amplitude of the voice signal.
  • For example, voiced sound detection may use the following formulas:
  • signs = (tmp1 .* tmp2) < 0;
  • diffs = |tmp1 - tmp2| > T2;
  • zcr = signs .* diffs;
  • where tmp1 and tmp2 are adjacent sampling-point pairs in the voice signal; signs marks the positions where a zero crossing occurs: corresponding elements of tmp1 and tmp2 are multiplied (.* denotes the element-wise product of two vectors), and signs is 1 where the product is less than 0, otherwise 0; diffs is the point-wise amplitude-difference mask: where the absolute difference between tmp1 and tmp2 exceeds the second threshold T2, the value of diffs is 1, otherwise 0; zcr is the point-wise zero-crossing indicator: where both signs and diffs are 1, zcr is 1, otherwise 0. In this way the zero-crossing rates of unvoiced sound and noise are all set to zero, and only the zero-crossing rate of voice (voiced sound) is retained. The second threshold T2 may be 8%-20% of the average amplitude of the detected voice signal; for example, with an average amplitude of 0.2 and a ratio of 10%, T2 = 0.2 x 10% = 0.02.
  • FIG. 3 is a schematic diagram of the voice signal after the voice activity detection result has been corrected; it can be seen from FIG. 3 that the unvoiced portions at both ends of the voice signal shown in FIG. 2 have been filtered out.
  • FIG. 4 is a schematic diagram of the voiced segments extracted from the voice signal; it can be seen from FIG. 4 that the unvoiced portions between the voiced sounds in the voice signal shown in FIG. 3 have been filtered out.
  • S13: Calculate the zero-crossing rate feature parameter of the voiced segment. The electronic device first splits the voiced segment into at least two voice frames, the overlap between two adjacent voice frames preferably being half the frame length; it then splits each voice frame into at least two subframes, calculates the average zero-crossing rate of each subframe in each voice frame, assembles the average zero-crossing rates of all subframes of a voice frame into that frame's feature vector, and uses the feature vectors of all voice frames in the voiced segment as the zero-crossing rate feature parameter of the voiced segment.
  • For example, the voiced segment is framed at 480 samples per frame with an inter-frame overlap of 240 samples. Each voice frame is then split into 6 subframes (80 samples each) and the average zero-crossing rate of each subframe is calculated, so one voice frame contains 6 average zero-crossing rates, which form the feature vector of that frame, expressed by the formula:
  • fea(j) = (1/80) * Σ_{k=80(j-1)+1}^{80j} zero_cross(k), j = 1, 2, ..., 6
  • where fea(j) is the average zero-crossing rate of the j-th subframe and zero_cross(k) is the zero-crossing indicator of the k-th sampling point, giving the frame's final feature vector fea_vector = [fea(1), fea(2), ..., fea(6)].
  • The feature vectors fea_vector of all voice frames in the voiced segment are computed, yielding the zero-crossing rate feature parameter of the voiced segment.
  • S14: Perform keyword matching using the zero-crossing rate feature parameter of the voiced segment.
  • the electronic device inputs the zero-crossing rate characteristic parameter into a Gaussian Mixture Model (GMM) to perform a matching degree evaluation, and determines whether the matching is successful according to the evaluation result.
  • the aforementioned Gaussian mixture model is an acoustic parameter model trained using a keyword sound sample.
  • For example, keyword sound samples from about 500 speakers can be collected for Gaussian mixture model training; that is, the keyword sound samples are processed by the aforementioned steps S11-S13 to obtain zero-crossing rate feature parameters, which are input into the training module of the electronic device for Gaussian mixture model training.
  • In the embodiments of the present invention, when judging whether matching succeeds according to the evaluation result, the electronic device first obtains the evaluation score output by the Gaussian mixture model for the feature vector of each voice frame in the voiced segment, then calculates the average of the evaluation scores of all feature vectors, compares the average with a threshold, and determines that matching succeeds when the average is greater than or equal to the threshold; otherwise matching fails.
  • In other embodiments, the electronic device may instead select the minimum, maximum, or median of the evaluation scores to compare with the threshold, and determine that matching succeeds when the comparison result is greater than or equal to the threshold.
  • Since the embodiments of the present invention calculate feature parameters only for the effective voice, i.e., the voiced segments, and use those parameters for keyword matching, on one hand the amount of feature-parameter computation is greatly reduced and system power consumption is effectively lowered; on the other hand interference such as unvoiced sound and noise is removed from the voice signal, improving the robustness of the feature parameters and hence the accuracy of keyword matching.
  • Moreover, the zero-crossing rate feature parameter requires less computation than feature parameters such as LPC, PLP, LPCC, and MFCC, further reducing system power consumption; and the Gaussian mixture model used for keyword matching further improves matching accuracy.
  • The voice processing method of the embodiments of the present invention can be applied to scenarios such as device wake-up and device unlocking. When applied to device wake-up, the wake-up module of the electronic device wakes the device when the keyword match succeeds; when applied to device unlocking, the unlocking module of the electronic device unlocks it when the keyword match succeeds.
  • The voice processing method of the embodiments of the present invention extracts voiced segments from the voice signal, calculates their zero-crossing rate feature parameter, and performs keyword matching with it, thereby filtering out interference such as unvoiced sound and noise; keyword matching is performed only on the effective voice (the voiced segments), which on one hand greatly reduces feature-parameter computation and effectively lowers system power consumption, and on the other hand improves the robustness of the feature parameters and hence the accuracy of keyword matching.
  • Moreover, the zero-crossing rate feature parameter requires less computation, further reducing system power consumption; the Gaussian mixture model further improves matching accuracy; and all feature-parameter computation is performed in the time domain, effectively avoiding complex frequency-domain computation.
  • Referring to FIG. 5, an embodiment of the voice processing device of the present invention includes a first detecting module 10, a second detecting module 20, a calculation module 30, and a matching module 40, where: the first detecting module 10 is configured to perform voice activity detection on the sound signal and extract the voice signal from it; the second detecting module 20 is configured to perform voiced sound detection on the voice signal and extract the voiced segments; the calculation module 30 is configured to calculate the zero-crossing rate feature parameter of the voiced segments; and the matching module 40 is configured to perform keyword matching using the zero-crossing rate feature parameter.
  • the first detecting module 10 is configured to perform voice activity detection on the sound signal based on the zero-crossing rate, and preferably the zero-crossing rate is combined with the short-time energy, wherein the threshold of the zero-crossing rate is the first Threshold.
  • Optionally, when performing voice activity detection based on the zero-crossing rate, for two adjacent sampling points tmp1 and tmp2 in a sound frame of the sound signal, when tmp1*tmp2<0 and |tmp1-tmp2|>T1 are satisfied simultaneously, the first detecting module 10 determines that the sound frame has crossed zero once and counts the zero-crossing rate accordingly, where T1 is the first threshold; the first detecting module 10 then extracts sound frames whose zero-crossing rate is greater than a preset value as the voice signal, or filters out sound frames whose zero-crossing rate is less than or equal to the preset value to obtain the voice signal. The preset value can be set according to actual needs.
  • Optionally, for adjacent sampling-point pairs tmp1 and tmp2 in the sound signal, when tmp1*tmp2<0 and |tmp1-tmp2|>T1 are satisfied simultaneously, the first detecting module 10 judges the zero-crossing rate to be 1, and otherwise 0, where T1 is the first threshold; the first detecting module 10 then extracts from the sound signal the data segments corresponding to all sampling-point pairs with zero-crossing rate 1 as the voice signal, or filters out the data segments corresponding to pairs with zero-crossing rate 0 to obtain the voice signal.
  • the obtained speech signal includes unvoiced and voiced sounds, and may also include noise of the head and tail portions, which is related to the speech duration parameter and the mute duration parameter set by the speech activity detection algorithm.
  • Further, before voice activity detection, the voice processing device may filter the sound signal to remove components outside the voice band; the voice band is preferably 200-3400 Hz.
  • Further, after filtering the sound signal and before voice activity detection, the voice processing device may perform noise reduction on the sound signal to reduce noise within the 200-3400 Hz band.
  • Further, after the noise reduction and before voice activity detection, the voice processing device may apply pre-emphasis to the sound signal so that unvoiced and voiced sounds can be better distinguished later.
  • The second detecting module 20 is configured to perform voiced sound detection on the voice signal based on the zero-crossing rate, where the threshold of the zero-crossing rate is the second threshold, and the second threshold is greater than the first threshold.
  • Optionally, as shown in FIG. 6, the second detecting module 20 includes a statistics unit 21 and a first extracting unit 22, where: the statistics unit 21 is configured, for two adjacent sampling points tmp1 and tmp2 in a voice frame of the voice signal, to determine that the voice frame has crossed zero once when tmp1*tmp2<0 and |tmp1-tmp2|>T2 are satisfied simultaneously, and to count the zero-crossing rate of the voice frame accordingly, where T2 is the second threshold.
  • The first extracting unit 22 is configured to extract from the voice signal the voice frames whose zero-crossing rate is greater than a preset value to form voiced segments.
  • The preset value can be set according to actual needs. The second threshold T2 is greater than the aforementioned first threshold T1, and is preferably 8%-15% (e.g., 10%) of the average amplitude of the voice signal.
  • Optionally, as shown in FIG. 7, the second detecting module 20 includes a judging unit 23 and a second extracting unit 24, where: the judging unit 23 is configured, for adjacent sampling-point pairs tmp1 and tmp2 in the voice signal, to judge the zero-crossing rate to be 1 when tmp1*tmp2<0 and |tmp1-tmp2|>T2 are satisfied simultaneously, and otherwise 0, where T2 is the second threshold; the second extracting unit 24 is configured to extract from the voice signal the data segments corresponding to all sampling-point pairs with zero-crossing rate 1 to form voiced segments.
  • For example, the second detecting module 20 performs voiced sound detection using the following formulas:
  • signs = (tmp1 .* tmp2) < 0;
  • diffs = |tmp1 - tmp2| > T2;
  • zcr = signs .* diffs;
  • where tmp1 and tmp2 are adjacent sampling-point pairs in the voice signal; signs marks the positions where a zero crossing occurs: corresponding elements of tmp1 and tmp2 are multiplied (.* denotes the element-wise product of two vectors), and signs is 1 where the product is less than 0, otherwise 0; diffs is the point-wise amplitude-difference mask: where the absolute difference between tmp1 and tmp2 exceeds the second threshold T2, the value of diffs is 1, otherwise 0; zcr is the point-wise zero-crossing indicator: where both signs and diffs are 1, zcr is 1, otherwise 0. In this way the zero-crossing rates of unvoiced sound and noise are all set to zero, and only the zero-crossing rate of voice (voiced sound) is retained. The second threshold T2 may be 8%-20% of the average amplitude of the detected voice signal; for example, with an average amplitude of 0.2 and a ratio of 10%, T2 = 0.2 x 10% = 0.02.
  • After the voiced segments are extracted, the calculation module 30 calculates their zero-crossing rate feature parameter. As shown in FIG. 8, the calculation module 30 includes a first splitting unit 31, a second splitting unit 32, a calculating unit 33, and a combining unit 34, where: the first splitting unit 31 is configured to split the voiced segment into at least two voice frames; the second splitting unit 32 is configured to split each voice frame into at least two subframes; the calculating unit 33 is configured to calculate the average zero-crossing rate of each subframe in each voice frame; and the combining unit 34 is configured to assemble the average zero-crossing rates of all subframes of a voice frame into the frame's feature vector, the feature vectors of all voice frames in the voiced segment serving as the zero-crossing rate feature parameter of the voiced segment.
  • For example, the first splitting unit 31 divides the voiced segment into frames of 480 samples each with an inter-frame overlap of 240 samples; the second splitting unit 32 splits each voice frame into six subframes; the calculating unit 33 calculates the average zero-crossing rate of each subframe, so one voice frame contains six average zero-crossing rates; and the combining unit 34 assembles these six values into the feature vector of the voice frame, as in the formula given for step S13 above.
  • The calculation module 30 finally computes the feature vectors fea_vector of all voice frames in the voiced segment, i.e., the zero-crossing rate feature parameter of the voiced segment.
  • After the zero-crossing rate feature parameter is obtained, the matching module 40 performs keyword matching with it. As shown in FIG. 9, the matching module 40 includes an input unit 41 and a judging unit 42, where: the input unit 41 is configured to input the zero-crossing rate feature parameter into the Gaussian mixture model for matching-degree evaluation; the judging unit 42 is configured to judge whether matching succeeds according to the evaluation result.
  • the aforementioned Gaussian mixture model is an acoustic parameter model trained using a keyword sound sample.
  • Keyword sound samples from about 500 speakers can be collected for Gaussian mixture model training; that is, the first detecting module 10, the second detecting module 20, and the calculation module 30 process the keyword sound samples to obtain zero-crossing rate feature parameters, which are input into the training module of the voice processing device for Gaussian mixture model training.
  • As shown in FIG. 10, the judging unit 42 includes an obtaining subunit 421, a calculating subunit 422, a judging subunit 423, and a determining subunit 424, where: the obtaining subunit 421 is configured to obtain the evaluation score output by the Gaussian mixture model for the feature vector of each voice frame in the voiced segment; the calculating subunit 422 is configured to calculate the average of the evaluation scores of all feature vectors; the judging subunit 423 is configured to judge whether the average is greater than or equal to a threshold; and the determining subunit 424 is configured to determine that matching succeeds when the average is greater than or equal to the threshold.
  • In other embodiments, the judging subunit 423 may instead select the minimum, maximum, or median of the evaluation scores to compare with the threshold, with the determining subunit 424 determining that matching succeeds when the comparison result is greater than or equal to the threshold.
  • The voice processing device of the embodiments of the present invention can be applied to scenarios such as device wake-up and device unlocking. When applied to device wake-up, the device further includes a wake-up module configured to wake the device when the keyword match succeeds; when applied to device unlocking, the device further includes an unlocking module configured to unlock the device when the keyword match succeeds.
  • The voice processing device of the embodiments of the present invention extracts voiced segments from the voice signal, calculates their zero-crossing rate feature parameter, and performs keyword matching with it, thereby filtering out interference such as unvoiced sound and noise; keyword matching is performed only on the effective voice, which greatly reduces feature-parameter computation and effectively lowers system power consumption on one hand, and improves the robustness of the feature parameters and hence the accuracy of keyword matching on the other.
  • Moreover, the zero-crossing rate feature parameter requires less computation, further reducing system power consumption, while the Gaussian mixture model used for keyword matching further improves its accuracy.
  • The present invention also proposes an electronic device comprising a memory, a processor, and at least one application stored in the memory and configured to be executed by the processor, the application being configured to perform the aforementioned voice processing method.
  • The voice processing method comprises the following steps: performing voice activity detection on a sound signal and extracting a voice signal from the sound signal; performing voiced sound detection on the voice signal and extracting voiced segments from the voice signal; calculating the zero-crossing rate feature parameter of the voiced segments; and performing keyword matching using the zero-crossing rate feature parameter.
  • the voice processing method described in this embodiment is the voice processing method involved in the foregoing embodiment of the present invention, and details are not described herein again.
  • the present invention includes apparatus related to performing one or more of the operations described herein.
  • These devices may be specially designed and manufactured for the required purposes, or may also include known devices in a general purpose computer.
  • These devices have computer programs stored therein that are selectively activated or reconfigured.
  • Such computer programs may be stored in a device-readable (e.g., computer-readable) medium or in any type of medium suitable for storing electronic instructions and coupled to a bus, including but not limited to any type of disk (including floppy disks, hard disks, optical disks, CD-ROMs, and magneto-optical disks), ROM (Read-Only Memory), RAM (Random Access Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash memory, magnetic cards, or optical cards. That is, a readable medium includes any medium in which a device (e.g., a computer) stores or transmits information in a readable form.
  • Each block of these structural diagrams and/or block diagrams and/or flow diagrams, and combinations of blocks in them, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, or another programmable data-processing method, so that the schemes specified in the block or blocks of the structural diagrams and/or block diagrams and/or flow diagrams are executed by that processor.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Probability & Statistics with Applications (AREA)
  • Quality & Reliability (AREA)
  • Telephone Function (AREA)

Abstract

The present invention discloses a voice processing method, apparatus, and electronic device. The method includes the following steps: performing voice activity detection on a sound signal and extracting a voice signal from it; performing voiced sound detection on the voice signal and extracting voiced segments; calculating the zero-crossing rate feature parameter of the voiced segments; and performing keyword matching using the zero-crossing rate feature parameter. Interference such as unvoiced sound and noise is thereby filtered out of the voice signal, and keyword matching is performed only on the effective voice (the voiced segments), which on one hand greatly reduces the amount of feature-parameter computation and effectively lowers system power consumption, and on the other hand improves the robustness of the feature parameters and hence the accuracy of keyword matching. Moreover, the zero-crossing rate feature parameter adopted in the embodiments of the present invention requires less computation, further lowering system power consumption, while the embodiments use a Gaussian mixture model for keyword matching, further improving its accuracy.

Description

Voice processing method, apparatus and electronic device

Technical Field

[0001] The present invention relates to the field of electronic technology, and in particular to a voice processing method, apparatus, and electronic device.

Background
[0002] Voice wake-up technology is a branch of artificial intelligence with a wide range of applications, for example in robots, mobile phones, wearable devices, smart homes, and vehicles. Many devices with voice recognition use voice wake-up technology as the start of human-machine interaction.

[0003] Voice wake-up means that when a user speaks a specific voice command, the device switches from the sleep state to the working state and gives a specified response. The value of wake-up technology is that the user can operate the device entirely by voice, hands-free; at the same time, with such a wake-up mechanism the device does not need to stay in a working state at all times, which greatly saves energy.

[0004] The key to voice wake-up is keyword matching. Currently, keyword matching first performs voice activity detection (VAD) on the sound signal to extract a voice signal, and then uses the voice signal for keyword matching to judge whether it contains the wake-up keyword.

[0005] Because voice activity detection is imperfect, noise and unvoiced sound may exist at the head, tail, and middle of the effective voice (i.e., voiced sound), and unvoiced sound is complex and variable. This makes the system's computation load large and increases its power consumption. Meanwhile, interference such as unvoiced sound and noise adversely affects the robustness of the matching feature parameters, and hence the accuracy of keyword matching.
Summary of the Invention

Technical Problem

[0006] The main object of the present invention is to provide a voice processing method, apparatus, and electronic device, aiming to reduce system power consumption and improve the accuracy of keyword matching.

Solution to the Problem

Technical Solution

[0007] To achieve the above object, an embodiment of the present invention provides a voice processing method, the method including the following steps:

[0008] performing voice activity detection on a sound signal, and extracting a voice signal from the sound signal;

[0009] performing voiced sound detection on the voice signal, and extracting voiced segments from the voice signal;

[0010] calculating a zero-crossing rate feature parameter of the voiced segments;

[0011] performing keyword matching using the zero-crossing rate feature parameter.
[0012] An embodiment of the present invention also provides a voice processing device, the device including:

[0013] a first detecting module, configured to perform voice activity detection on a sound signal and extract a voice signal from the sound signal;

[0014] a second detecting module, configured to perform voiced sound detection on the voice signal and extract voiced segments from the voice signal;

[0015] a calculation module, configured to calculate a zero-crossing rate feature parameter of the voiced segments;

[0016] a matching module, configured to perform keyword matching using the zero-crossing rate feature parameter.

[0017] An embodiment of the present invention further provides an electronic device, which includes a memory, a processor, and at least one application stored in the memory and configured to be executed by the processor, the application being configured to perform the aforementioned voice processing method.
[0018] In the voice processing method provided by the embodiments of the present invention, voiced segments are extracted from the voice signal, their zero-crossing rate feature parameter is calculated, and keyword matching is performed with that parameter. Interference such as unvoiced sound and noise is thereby filtered out of the voice signal, and keyword matching is performed only on the effective voice (the voiced segments), which on one hand greatly reduces the amount of feature-parameter computation and effectively lowers system power consumption, and on the other hand improves the robustness of the feature parameters and hence the accuracy of keyword matching.

Advantageous Effects of the Invention

Advantageous Effects

[0019] Moreover, compared with feature parameters such as LPC, PLP, LPCC, and MFCC used in the prior art, the zero-crossing rate feature parameter adopted in the embodiments of the present invention requires less computation, further lowering system power consumption; meanwhile the embodiments use a Gaussian mixture model for keyword matching, further improving the accuracy of keyword matching.

Brief Description of the Drawings
Description of the Drawings

[0020] FIG. 1 is a flowchart of an embodiment of the voice processing method of the present invention;

[0021] FIG. 2 is a schematic diagram of voice activity detection performed on a sound signal in an embodiment of the present invention;

[0022] FIG. 3 is a schematic diagram of the corrected voice activity detection result in an embodiment of the present invention;

[0023] FIG. 4 is a schematic diagram of the voiced segments extracted from the voice signal in an embodiment of the present invention;

[0024] FIG. 5 is a block diagram of an embodiment of the voice processing device of the present invention;

[0025] FIG. 6 is a block diagram of the second detecting module of FIG. 5;

[0026] FIG. 7 is another block diagram of the second detecting module of FIG. 5;

[0027] FIG. 8 is a block diagram of the calculation module of FIG. 5;

[0028] FIG. 9 is a block diagram of the matching module of FIG. 5;

[0029] FIG. 10 is a block diagram of the judging unit of FIG. 9.

[0030] The realization of the objects, functional features, and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.

Best Mode for Carrying Out the Invention
[0031] It should be understood that the specific embodiments described here are intended only to explain the present invention and are not intended to limit it.

[0032] Embodiments of the present invention are described in detail below, examples of which are shown in the accompanying drawings, where identical or similar reference numerals throughout denote identical or similar elements or elements with identical or similar functions. The embodiments described below with reference to the drawings are exemplary, intended only to explain the present invention, and cannot be construed as limiting it.

[0033] Those skilled in the art will understand that, unless specifically stated, the singular forms "a", "an", "the", and "said" used here may also include the plural forms. It should be further understood that the word "comprising" used in the specification of the present invention means the presence of the stated features, integers, steps, operations, elements, and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It should be understood that when an element is said to be "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intermediate elements may be present. In addition, "connected" or "coupled" as used here may include a wireless connection or wireless coupling. The phrase "and/or" as used here includes all or any unit and all combinations of one or more of the associated listed items.

[0034] Those skilled in the art will understand that, unless otherwise defined, all terms used here (including technical and scientific terms) have the same meaning as commonly understood by those of ordinary skill in the art to which the present invention belongs. It should also be understood that terms such as those defined in general dictionaries should be understood to have meanings consistent with their meaning in the context of the prior art and, unless specifically defined as here, will not be interpreted with idealized or overly formal meanings.

[0035] Those skilled in the art will understand that the "terminal" and "terminal device" used here include both devices having only a wireless signal receiver without transmitting capability and devices having receiving and transmitting hardware capable of two-way communication over a two-way communication link. Such devices may include: cellular or other communication devices with a single-line display, a multi-line display, or no multi-line display; PCS (Personal Communications Service) devices, which may combine voice, data processing, fax, and/or data communication capabilities; PDAs (Personal Digital Assistant), which may include a radio-frequency receiver, a pager, Internet/intranet access, a web browser, a notepad, a calendar, and/or a GPS (Global Positioning System) receiver; and conventional laptop and/or palmtop computers or other devices that have and/or include a radio-frequency receiver. The "terminal" and "terminal device" used here may be portable, transportable, installed in a vehicle (aviation, sea, and/or land), or adapted and/or configured to operate locally and/or in a distributed fashion at any other location on Earth and/or in space. The "terminal" and "terminal device" used here may also be a communication terminal, an Internet terminal, or a music/video playback terminal, for example a PDA, a MID (Mobile Internet Device), and/or a mobile phone with music/video playback functions, or a device such as a smart TV or a set-top box.
[0036] Referring to FIG. 1, an embodiment of the voice processing method of the present invention is provided, the method including the following steps:

[0037] S11: Perform voice activity detection on a sound signal, and extract a voice signal from the sound signal.

[0038] In the embodiments of the present invention, the electronic device collects a sound signal through a microphone or receives a sound signal sent by an external device, performs voice activity detection on the sound signal, and extracts a voice signal from it. The electronic device may be a terminal device such as a mobile phone, tablet, personal computer, or notebook computer, or an electronic device such as a wearable device, smart home device, in-vehicle device, or robot.

[0039] In the embodiments of the present invention, the electronic device may perform voice activity detection on the sound signal based on the zero-crossing rate, preferably combining the zero-crossing rate with short-time energy, where the threshold of the zero-crossing rate is a first threshold.
[0040] The zero-crossing rate here refers to the short-time zero-crossing rate, which can be regarded as a simple measure of signal frequency and is a feature parameter in the time-domain analysis of voice signals. A zero crossing means that the signal passes through the zero value, and the zero-crossing rate is the number of times the signal passes through zero per unit time. For a continuous voice signal with a time axis, one can observe where the time-domain waveform crosses the horizontal axis. For a discrete-time sequence, a zero crossing means that adjacent sample values change sign, and the zero-crossing rate is the rate of such sign changes. For a speech signal, it is the number of times the waveform crosses the horizontal axis (zero level) within one frame of speech, which can be computed from the number of sign changes between adjacent samples.

[0041] Short-time energy and the zero-crossing rate can both be used for voice activity detection, mainly to locate the start and end points of silent segments and voice segments. Short-time energy is effective when background noise is low, and the zero-crossing rate is effective when background noise is high, but the two parameters usually perform better in combination.
[0042] Optionally, when performing voice activity detection based on the zero-crossing rate, for two adjacent sampling points tmp1 and tmp2 in a sound frame of the sound signal, when tmp1*tmp2<0 and |tmp1-tmp2|>T1 are satisfied simultaneously, the electronic device determines that the sound frame has crossed zero once and counts the zero-crossing rate of the sound frame accordingly, where T1 is the first threshold; the electronic device then extracts from the sound signal the sound frames whose zero-crossing rate is greater than a preset value as the voice signal, or filters out the sound frames whose zero-crossing rate is less than or equal to the preset value to obtain the voice signal. The preset value can be set according to actual needs.

[0043] Optionally, for adjacent sampling-point pairs tmp1 and tmp2 in the sound signal, when tmp1*tmp2<0 and |tmp1-tmp2|>T1 are satisfied simultaneously, the electronic device judges the zero-crossing rate to be 1, and otherwise 0, where T1 is the first threshold; the electronic device then extracts from the sound signal the data segments corresponding to all sampling-point pairs with zero-crossing rate 1 as the voice signal, or filters out the data segments corresponding to pairs with zero-crossing rate 0 to obtain the voice signal.
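As an illustration of the frame-based variant of [0042], the following Python sketch counts thresholded zero crossings per frame and keeps the frames that exceed a preset rate (the function names, frame length, and use of NumPy are our own assumptions for illustration; the patent prescribes no implementation):

```python
import numpy as np

def frame_zcr(frame: np.ndarray, t1: float) -> float:
    """Zero-crossing rate of one sound frame: a crossing is counted only when
    adjacent samples tmp1, tmp2 satisfy tmp1*tmp2 < 0 and |tmp1 - tmp2| > t1."""
    tmp1, tmp2 = frame[:-1], frame[1:]
    crossings = (tmp1 * tmp2 < 0) & (np.abs(tmp1 - tmp2) > t1)
    return crossings.sum() / len(frame)

def vad_by_zcr(signal: np.ndarray, frame_len: int, t1: float,
               preset: float) -> np.ndarray:
    """Keep the sound frames whose zero-crossing rate exceeds a preset value;
    the concatenation of the kept frames is the extracted voice signal."""
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    kept = [f for f in frames if frame_zcr(f, t1) > preset]
    return np.concatenate(kept) if kept else np.array([])
```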
[0044] The obtained voice signal contains unvoiced and voiced sound, and may also contain noise at the head and tail, which depends on the voice-duration and silence-duration parameters set by the voice activity detection algorithm.

[0045] Further, before step S11, the electronic device may filter the sound signal to remove components outside the voice band; the voice band is preferably 200-3400 Hz.

[0046] Further, after filtering the sound signal and before step S11, the electronic device may perform noise reduction on the sound signal to reduce noise within the 200-3400 Hz band.

[0047] Further, after the noise reduction and before step S11, the electronic device may apply pre-emphasis to the sound signal so that unvoiced and voiced sounds can be better distinguished later (see the sketch below).
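The patent does not specify the pre-emphasis filter. A minimal sketch, assuming the conventional first-order high-pass y[n] = x[n] - alpha*x[n-1] with alpha = 0.97 (both the filter form and the coefficient are our assumptions, not taken from the patent):

```python
import numpy as np

def pre_emphasis(x: np.ndarray, alpha: float = 0.97) -> np.ndarray:
    """Boost high frequencies so unvoiced (high-frequency) and voiced
    (low-frequency) sounds separate more cleanly in later stages.
    alpha = 0.97 is a conventional choice, not taken from the patent."""
    return np.append(x[0], x[1:] - alpha * x[:-1])
```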
[0048] FIG. 2 is a schematic diagram of voice activity detection, in which the horizontal axis is time and the vertical axis is the amplitude of the sound signal; the portion between the two line segments is the result of the voice activity detection of step S11, i.e., the detected voice signal.

[0049] S12: Perform voiced sound detection on the voice signal, and extract voiced segments from the voice signal.

[0050] The voice signal obtained after voice activity detection includes not only effective voice (i.e., voiced sound) but also some noise and unvoiced sound. Noise has a high zero-crossing rate and low short-time energy; the unvoiced spectrum contains more high-frequency components, so its zero-crossing rate is relatively high, while the voiced spectrum is mostly concentrated below 3 kHz, so its zero-crossing rate is low. Analysis of a large amount of experimental data shows that, for a specific person and a specific keyword, the zero-crossing rate of the voiced sound is basically stable, whereas that of the unvoiced sound is not.

[0051] In view of this, in the embodiments of the present invention, the electronic device may perform voiced sound detection on the voice signal based on the zero-crossing rate and extract the voiced segments, where the threshold of the zero-crossing rate is a second threshold, and the second threshold is greater than the first threshold.

[0052] Optionally, when performing voiced sound detection based on the zero-crossing rate, for two adjacent sampling points tmp1 and tmp2 in a voice frame of the voice signal, when tmp1*tmp2<0 and |tmp1-tmp2|>T2 are satisfied simultaneously, the voice frame is determined to have crossed zero once, and the zero-crossing rate of the voice frame is counted accordingly, where T2 is the second threshold; the electronic device then extracts from the voice signal the voice frames whose zero-crossing rate is greater than a preset value to form voiced segments. The preset value can be set according to actual needs. The second threshold T2 is greater than the aforementioned first threshold T1, and is preferably 8%-15% (e.g., 10%) of the average amplitude of the voice signal.

[0053] Optionally, for adjacent sampling-point pairs tmp1 and tmp2 in the voice signal, when tmp1*tmp2<0 and |tmp1-tmp2|>T2 are satisfied simultaneously, the zero-crossing rate is judged to be 1, and otherwise 0, where T2 is the second threshold; the electronic device then extracts from the voice signal the data segments corresponding to all sampling-point pairs with zero-crossing rate 1 to form voiced segments.
[0054] For example, voiced sound detection uses the following formulas:

[0055] signs = (tmp1 .* tmp2) < 0;

[0056] diffs = |tmp1 - tmp2| > T2;

[0057] zcr = signs .* diffs;

[0058] where signs marks the positions where a zero crossing occurs, and tmp1 and tmp2 are adjacent sampling-point pairs in the voice signal: corresponding elements of tmp1 and tmp2 are multiplied (.* denotes the element-wise product of two vectors), and signs is 1 where the product is less than 0, otherwise 0; diffs is the point-wise amplitude-difference mask: where the absolute difference between tmp1 and tmp2 exceeds the second threshold T2, the value of diffs is 1, otherwise 0; zcr is the point-wise zero-crossing indicator: where both signs and diffs are 1, zcr is 1, otherwise 0. In this way the zero-crossing rates of unvoiced sound and noise are all set to zero, and only the zero-crossing rate of voice (voiced sound) is retained.

[0059] The second threshold T2 may be 8%-20% of the average amplitude of the detected voice signal; for example, with an average amplitude of 0.2 and a ratio of 10%, T2 = 0.2 x 10% = 0.02. A sketch of this point-wise detection follows.
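A minimal NumPy rendering of the signs/diffs/zcr masks of [0054]-[0058] (the function names and the default T2 ratio are our own; the patent gives only the formulas):

```python
import numpy as np

def voiced_mask(speech: np.ndarray, t2_ratio: float = 0.10) -> np.ndarray:
    """Point-wise voiced indicator: T2 is taken as a fraction (e.g. 10%)
    of the average amplitude of the detected voice signal."""
    t2 = t2_ratio * np.mean(np.abs(speech))
    tmp1, tmp2 = speech[:-1], speech[1:]
    signs = (tmp1 * tmp2) < 0           # a zero crossing occurred here
    diffs = np.abs(tmp1 - tmp2) > t2    # amplitude jump exceeds T2
    return signs & diffs                # zcr: 1 only for voiced-like crossings

def extract_voiced(speech: np.ndarray) -> np.ndarray:
    """Keep the data segments corresponding to sampling-point pairs whose
    zcr indicator is 1; unvoiced sound and noise drop out."""
    zcr = voiced_mask(speech)
    keep = np.zeros(len(speech), dtype=bool)
    keep[:-1] |= zcr                    # first sample of each crossing pair
    keep[1:] |= zcr                     # second sample of each crossing pair
    return speech[keep]
```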
[0060] FIG. 3 is a schematic diagram of the voice signal after the voice activity detection result has been corrected; it can be seen from FIG. 3 that the unvoiced portions at both ends of the voice signal shown in FIG. 2 have been filtered out.

[0061] FIG. 4 is a schematic diagram of the voiced segments extracted from the voice signal; it can be seen from FIG. 4 that the unvoiced portions between the voiced sounds in the voice signal shown in FIG. 3 have been filtered out.

[0062] S13: Calculate the zero-crossing rate feature parameter of the voiced segment.

[0063] In the embodiments of the present invention, the electronic device first splits the voiced segment into at least two voice frames, the overlap between two adjacent voice frames preferably being half the frame length; it then splits each voice frame into at least two subframes, calculates the average zero-crossing rate of each subframe in each voice frame, assembles the average zero-crossing rates of all subframes of a voice frame into that frame's feature vector, and uses the feature vectors of all voice frames in the voiced segment as the zero-crossing rate feature parameter of the voiced segment.
[0064] For example, the voiced segment is framed at 480 samples per frame with an inter-frame overlap of 240 samples. Each voice frame is then split into 6 subframes (80 samples each) and the average zero-crossing rate of each subframe is calculated, so one voice frame contains 6 average zero-crossing rates, which form the feature vector of that frame, expressed by the formula:

[0065] fea(j) = (1/80) * Σ_{k=80(j-1)+1}^{80j} zero_cross(k)

[0066] where j = 1, 2, ..., 6, fea(j) is the average zero-crossing rate of the j-th subframe, and zero_cross(k) is the zero-crossing indicator of the k-th sampling point. From this, the final feature vector fea_vector of the voice frame is obtained:

[0067] fea_vector = [fea(1), fea(2), ..., fea(6)]

[0068] Finally, the feature vectors fea_vector of all voice frames in the voiced segment are computed, yielding the zero-crossing rate feature parameter of the voiced segment.
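The framing and subframe averaging of [0064]-[0068], as a sketch (the constants come from the example above; the helper name is ours):

```python
import numpy as np

FRAME_LEN, HOP, N_SUB = 480, 240, 6  # 480-sample frames, 240 overlap, 6 subframes

def zcr_features(zcr_points: np.ndarray) -> np.ndarray:
    """From the point-wise zero-crossing indicators of one voiced segment,
    build one 6-dimensional fea_vector (subframe mean ZCRs) per voice frame."""
    feats = []
    for start in range(0, len(zcr_points) - FRAME_LEN + 1, HOP):
        frame = zcr_points[start:start + FRAME_LEN]
        subframes = frame.reshape(N_SUB, FRAME_LEN // N_SUB)  # 6 x 80
        feats.append(subframes.mean(axis=1))                  # fea(1..6)
    return np.array(feats)   # shape: (number of voice frames, 6)
```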
[0069] S14: Perform keyword matching using the zero-crossing rate feature parameter of the voiced segment.

[0070] In the embodiments of the present invention, the electronic device inputs the zero-crossing rate feature parameter into a Gaussian mixture model (GMM) for matching-degree evaluation, and judges whether matching succeeds according to the evaluation result.

[0071] The aforementioned Gaussian mixture model is an acoustic parameter model trained with keyword sound samples. Keyword sound samples from about 500 speakers can be collected for Gaussian mixture model training; that is, the keyword sound samples are processed by the aforementioned steps S11-S13 to obtain zero-crossing rate feature parameters, which are input into the training module of the electronic device for Gaussian mixture model training.
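A training sketch for [0071], assuming scikit-learn's GaussianMixture; the component count and covariance type are our assumptions, since the patent does not specify the model size:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_keyword_gmm(fea_vectors: np.ndarray,
                      n_components: int = 8) -> GaussianMixture:
    """Fit a GMM on the 6-dimensional fea_vectors pooled from the keyword
    samples of many speakers (the patent suggests about 500)."""
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="diag", random_state=0)
    gmm.fit(fea_vectors)      # rows: one fea_vector per voice frame
    return gmm
```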
[0072] In the embodiments of the present invention, when judging whether matching succeeds according to the evaluation result, the electronic device first obtains the evaluation score output by the Gaussian mixture model for the feature vector of each voice frame in the voiced segment, then calculates the average of the evaluation scores of all feature vectors, compares the average with a threshold, and determines that matching succeeds when the average is greater than or equal to the threshold; otherwise matching fails.

[0073] In other embodiments, the electronic device may instead select the minimum, maximum, or median of the evaluation scores to compare with the threshold, and determine that matching succeeds when the comparison result is greater than or equal to the threshold.
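The decision rule of [0072]-[0073] then reduces to a few lines (the reducer parameter covers the mean/min/max/median variants; the threshold value itself must be tuned, as the patent gives none):

```python
import numpy as np

def keyword_match(gmm, fea_vectors: np.ndarray, threshold: float,
                  reducer=np.mean) -> bool:
    """Score each voice frame's fea_vector with the trained GMM, reduce the
    per-frame scores (mean by default), and compare with the threshold."""
    scores = gmm.score_samples(fea_vectors)   # one evaluation score per frame
    return reducer(scores) >= threshold
```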
[0074] Since the embodiments of the present invention calculate feature parameters only for the effective voice, i.e., the voiced segments, and use those parameters for keyword matching, on one hand the amount of feature-parameter computation is greatly reduced and system power consumption is effectively lowered; on the other hand interference such as unvoiced sound and noise is filtered out of the voice signal, improving the robustness of the feature parameters and hence the accuracy of keyword matching.

[0075] Moreover, compared with feature parameters such as LPC, PLP, LPCC, and MFCC used in the prior art, the zero-crossing rate feature parameter adopted in the embodiments requires less computation, further lowering system power consumption; meanwhile the Gaussian mixture model used for keyword matching further improves its accuracy.

[0076] The voice processing method of the embodiments of the present invention can be applied to scenarios such as device wake-up and device unlocking. When applied to device wake-up, the wake-up module of the electronic device wakes the device when the keyword match succeeds; when applied to device unlocking, the unlocking module of the electronic device unlocks it when the keyword match succeeds.

[0077] The voice processing method of the embodiments of the present invention extracts voiced segments from the voice signal, calculates their zero-crossing rate feature parameter, and performs keyword matching with it, thereby filtering out interference such as unvoiced sound and noise; keyword matching is performed only on the effective voice (the voiced segments), which on one hand greatly reduces feature-parameter computation and effectively lowers system power consumption, and on the other hand improves the robustness of the feature parameters and hence the accuracy of keyword matching.

[0078] Moreover, compared with feature parameters such as LPC, PLP, LPCC, and MFCC used in the prior art, the zero-crossing rate feature parameter requires less computation, further lowering system power consumption; the Gaussian mixture model further improves matching accuracy; and all feature-parameter computation is performed in the time domain, effectively avoiding complex frequency-domain computation.
[0079] Referring to FIG. 5, an embodiment of the voice processing device of the present invention is provided. The device includes a first detecting module 10, a second detecting module 20, a calculation module 30, and a matching module 40, where: the first detecting module 10 is configured to perform voice activity detection on the sound signal and extract the voice signal from it; the second detecting module 20 is configured to perform voiced sound detection on the voice signal and extract the voiced segments; the calculation module 30 is configured to calculate the zero-crossing rate feature parameter of the voiced segments; and the matching module 40 is configured to perform keyword matching using the zero-crossing rate feature parameter.

[0080] In the embodiments of the present invention, the first detecting module 10 is configured to perform voice activity detection on the sound signal based on the zero-crossing rate, preferably combining the zero-crossing rate with short-time energy, where the threshold of the zero-crossing rate is a first threshold.

[0081] Optionally, when performing voice activity detection based on the zero-crossing rate, for two adjacent sampling points tmp1 and tmp2 in a sound frame of the sound signal, when tmp1*tmp2<0 and |tmp1-tmp2|>T1 are satisfied simultaneously, the first detecting module 10 determines that the sound frame has crossed zero once and counts the zero-crossing rate of the sound frame accordingly, where T1 is the first threshold; the first detecting module 10 then extracts from the sound signal the sound frames whose zero-crossing rate is greater than a preset value as the voice signal, or filters out the sound frames whose zero-crossing rate is less than or equal to the preset value to obtain the voice signal. The preset value can be set according to actual needs.

[0082] Optionally, for adjacent sampling-point pairs tmp1 and tmp2 in the sound signal, when tmp1*tmp2<0 and |tmp1-tmp2|>T1 are satisfied simultaneously, the first detecting module 10 judges the zero-crossing rate to be 1, and otherwise 0, where T1 is the first threshold; the first detecting module 10 then extracts from the sound signal the data segments corresponding to all sampling-point pairs with zero-crossing rate 1 as the voice signal, or filters out the data segments corresponding to pairs with zero-crossing rate 0 to obtain the voice signal.

[0083] The obtained voice signal contains unvoiced and voiced sound, and may also contain noise at the head and tail, which depends on the voice-duration and silence-duration parameters set by the voice activity detection algorithm.

[0084] Further, before voice activity detection, the voice processing device may filter the sound signal to remove components outside the voice band; the voice band is preferably 200-3400 Hz.

[0085] Further, after filtering the sound signal and before voice activity detection, the voice processing device may perform noise reduction on the sound signal to reduce noise within the 200-3400 Hz band.

[0086] Further, after the noise reduction and before voice activity detection, the voice processing device may apply pre-emphasis to the sound signal so that unvoiced and voiced sounds can be better distinguished later.

[0087] In the embodiments of the present invention, the second detecting module 20 is configured to perform voiced sound detection on the voice signal based on the zero-crossing rate, where the threshold of the zero-crossing rate is a second threshold, and the second threshold is greater than the first threshold.
[0088] Optionally, as shown in FIG. 6, the second detecting module 20 includes a statistics unit 21 and a first extracting unit 22, where: the statistics unit 21 is configured, for two adjacent sampling points tmp1 and tmp2 in a voice frame of the voice signal, to determine that the voice frame has crossed zero once when tmp1*tmp2<0 and |tmp1-tmp2|>T2 are satisfied simultaneously, and to count the zero-crossing rate of the voice frame accordingly, where T2 is the second threshold; the first extracting unit 22 is configured to extract from the voice signal the voice frames whose zero-crossing rate is greater than a preset value to form voiced segments.

[0089] The preset value can be set according to actual needs. The second threshold T2 is greater than the aforementioned first threshold T1, and is preferably 8%-15% (e.g., 10%) of the average amplitude of the voice signal.

[0090] Optionally, as shown in FIG. 7, the second detecting module 20 includes a judging unit 23 and a second extracting unit 24, where: the judging unit 23 is configured, for adjacent sampling-point pairs tmp1 and tmp2 in the voice signal, to judge the zero-crossing rate to be 1 when tmp1*tmp2<0 and |tmp1-tmp2|>T2 are satisfied simultaneously, and otherwise 0, where T2 is the second threshold; the second extracting unit 24 is configured to extract from the voice signal the data segments corresponding to all sampling-point pairs with zero-crossing rate 1 to form voiced segments.

[0091] For example, the second detecting module 20 performs voiced sound detection using the following formulas:

[0092] signs = (tmp1 .* tmp2) < 0;

[0093] diffs = |tmp1 - tmp2| > T2;

[0094] zcr = signs .* diffs;

[0095] where signs marks the positions where a zero crossing occurs, and tmp1 and tmp2 are adjacent sampling-point pairs in the voice signal: corresponding elements of tmp1 and tmp2 are multiplied (.* denotes the element-wise product of two vectors), and signs is 1 where the product is less than 0, otherwise 0; diffs is the point-wise amplitude-difference mask: where the absolute difference between tmp1 and tmp2 exceeds the second threshold T2, the value of diffs is 1, otherwise 0; zcr is the point-wise zero-crossing indicator: where both signs and diffs are 1, zcr is 1, otherwise 0. In this way the zero-crossing rates of unvoiced sound and noise are all set to zero, and only the zero-crossing rate of voice (voiced sound) is retained.

[0096] The second threshold T2 may be 8%-20% of the average amplitude of the detected voice signal; for example, with an average amplitude of 0.2 and a ratio of 10%, T2 = 0.2 x 10% = 0.02.
[0097] After the voiced segments are extracted, the calculation module 30 calculates their zero-crossing rate feature parameter. In the embodiments of the present invention, as shown in FIG. 8, the calculation module 30 includes a first splitting unit 31, a second splitting unit 32, a calculating unit 33, and a combining unit 34, where: the first splitting unit 31 is configured to split the voiced segment into at least two voice frames; the second splitting unit 32 is configured to split each voice frame into at least two subframes; the calculating unit 33 is configured to calculate the average zero-crossing rate of each subframe in each voice frame; and the combining unit 34 is configured to assemble the average zero-crossing rates of all subframes of a voice frame into the frame's feature vector, the feature vectors of all voice frames in the voiced segment serving as the zero-crossing rate feature parameter of the voiced segment.

[0098] For example, the first splitting unit 31 divides the voiced segment into frames of 480 samples each with an inter-frame overlap of 240 samples. The second splitting unit 32 then splits each voice frame into six subframes, and the calculating unit 33 calculates the average zero-crossing rate of each subframe, so one voice frame contains six average zero-crossing rates, which the combining unit 34 assembles into the feature vector of the voice frame, expressed by the formula:

[0099] fea(j) = (1/80) * Σ_{k=80(j-1)+1}^{80j} zero_cross(k)

[0100] where j = 1, 2, ..., 6, fea(j) is the average zero-crossing rate of the j-th subframe, and zero_cross(k) is the zero-crossing indicator of the k-th sampling point. From this, the final feature vector fea_vector of the voice frame is obtained:

[0101] fea_vector = [fea(1), fea(2), ..., fea(6)]

[0102] Finally, the calculation module 30 computes the feature vectors fea_vector of all voice frames in the voiced segment, i.e., the zero-crossing rate feature parameter of the voiced segment.
[0103] After the zero-crossing rate feature parameter is obtained, the matching module 40 performs keyword matching with it. In the embodiments of the present invention, as shown in FIG. 9, the matching module 40 includes an input unit 41 and a judging unit 42, where: the input unit 41 is configured to input the zero-crossing rate feature parameter into the Gaussian mixture model for matching-degree evaluation; the judging unit 42 is configured to judge whether matching succeeds according to the evaluation result.

[0104] The aforementioned Gaussian mixture model is an acoustic parameter model trained with keyword sound samples. Keyword sound samples from about 500 speakers can be collected for Gaussian mixture model training; that is, the first detecting module 10, the second detecting module 20, and the calculation module 30 process the keyword sound samples to obtain zero-crossing rate feature parameters, which are input into the training module of the voice processing device for Gaussian mixture model training.

[0105] In the embodiments of the present invention, as shown in FIG. 10, the judging unit 42 includes an obtaining subunit 421, a calculating subunit 422, a judging subunit 423, and a determining subunit 424, where: the obtaining subunit 421 is configured to obtain the evaluation score output by the Gaussian mixture model for the feature vector of each voice frame in the voiced segment; the calculating subunit 422 is configured to calculate the average of the evaluation scores of all feature vectors; the judging subunit 423 is configured to judge whether the average is greater than or equal to a threshold; and the determining subunit 424 is configured to determine that matching succeeds when the average is greater than or equal to the threshold.

[0106] In other embodiments, the judging subunit 423 may instead select the minimum, maximum, or median of the evaluation scores to compare with the threshold, with the determining subunit 424 determining that matching succeeds when the comparison result is greater than or equal to the threshold.

[0107] The voice processing device of the embodiments of the present invention can be applied to scenarios such as device wake-up and device unlocking. When applied to device wake-up, the device further includes a wake-up module configured to wake the device when the keyword match succeeds; when applied to device unlocking, the device further includes an unlocking module configured to unlock the device when the keyword match succeeds.

[0108] The voice processing device of the embodiments of the present invention extracts voiced segments from the voice signal, calculates their zero-crossing rate feature parameter, and performs keyword matching with it, thereby filtering out interference such as unvoiced sound and noise; keyword matching is performed only on the effective voice (the voiced segments), which on one hand greatly reduces feature-parameter computation and effectively lowers system power consumption, and on the other hand improves the robustness of the feature parameters and hence the accuracy of keyword matching.

[0109] Moreover, compared with feature parameters such as LPC, PLP, LPCC, and MFCC used in the prior art, the zero-crossing rate feature parameter requires less computation, further lowering system power consumption; meanwhile the Gaussian mixture model used for keyword matching further improves its accuracy.
[0110] The present invention also proposes an electronic device, which includes a memory, a processor, and at least one application stored in the memory and configured to be executed by the processor, the application being configured to perform the voice processing method. The voice processing method includes the following steps: performing voice activity detection on a sound signal and extracting a voice signal from the sound signal; performing voiced sound detection on the voice signal and extracting voiced segments from the voice signal; calculating the zero-crossing rate feature parameter of the voiced segments; and performing keyword matching using the zero-crossing rate feature parameter. The voice processing method described in this embodiment is the voice processing method involved in the foregoing embodiments of the present invention and is not described again here.

[0111] Those skilled in the art will appreciate that the present invention includes devices for performing one or more of the operations described in this application. These devices may be specially designed and manufactured for the required purposes, or may include known devices in general-purpose computers. These devices have computer programs stored in them that are selectively activated or reconfigured. Such computer programs may be stored in a device-readable (e.g., computer-readable) medium or in any type of medium suitable for storing electronic instructions and coupled to a bus, including but not limited to any type of disk (including floppy disks, hard disks, optical disks, CD-ROMs, and magneto-optical disks), ROM (Read-Only Memory), RAM (Random Access Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash memory, magnetic cards, or optical cards. That is, a readable medium includes any medium in which a device (e.g., a computer) stores or transmits information in a readable form.

[0112] Those skilled in the art will appreciate that each block of these structural diagrams and/or block diagrams and/or flow diagrams, and combinations of blocks in them, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, or another programmable data-processing method for implementation, so that the schemes specified in the block or blocks of the structural diagrams and/or block diagrams and/or flow diagrams disclosed by the present invention are executed by that processor.

[0113] Those skilled in the art will appreciate that the steps, measures, and schemes in the various operations, methods, and flows already discussed in the present invention may be alternated, changed, combined, or deleted. Further, other steps, measures, and schemes in the various operations, methods, and flows already discussed in the present invention may also be alternated, changed, rearranged, decomposed, combined, or deleted. Further, steps, measures, and schemes in the prior art corresponding to the various operations, methods, and flows disclosed in the present invention may also be alternated, changed, rearranged, decomposed, combined, or deleted.

[0114] The above are only preferred embodiments of the present invention and do not thereby limit its patent scope. Any equivalent structural or flow transformation made using the contents of the specification and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present invention.

Claims

[Claim 1] A voice processing method, characterized by comprising the following steps:
performing voice activity detection on a sound signal, and extracting a voice signal from the sound signal;
performing voiced sound detection on the voice signal, and extracting voiced segments from the voice signal;
calculating a zero-crossing rate feature parameter of the voiced segments;
performing keyword matching using the zero-crossing rate feature parameter.

[Claim 2] The voice processing method according to claim 1, characterized in that the step of calculating the zero-crossing rate feature parameter from the voiced segment comprises:
splitting the voiced segment into at least two voice frames;
splitting each voice frame into at least two subframes;
calculating the average zero-crossing rate of each subframe in each voice frame;
assembling the average zero-crossing rates of all subframes in each voice frame into the feature vector of that voice frame, and using the feature vectors of all voice frames in the voiced segment as the zero-crossing rate feature parameter of the voiced segment.

[Claim 3] The voice processing method according to claim 2, characterized in that the inter-frame overlap of two adjacent voice frames is half the length of the voice frame.

[Claim 4] The voice processing method according to claim 2, characterized in that the step of performing keyword matching using the zero-crossing rate feature parameter comprises:
inputting the zero-crossing rate feature parameter into a Gaussian mixture model for matching-degree evaluation, the Gaussian mixture model being an acoustic parameter model trained with keyword sound samples;
judging whether matching succeeds according to the evaluation result.

[Claim 5] The voice processing method according to claim 4, characterized in that the step of judging whether matching succeeds according to the evaluation result comprises:
obtaining the evaluation score output by the Gaussian mixture model for the feature vector of each voice frame in the voiced segment;
calculating the average of the evaluation scores of all feature vectors;
judging whether the average is greater than or equal to a threshold;
determining that matching succeeds when the average is greater than or equal to the threshold.

[Claim 6] The voice processing method according to claim 1, characterized in that:
the step of performing voice activity detection on the sound signal comprises: performing voice activity detection on the sound signal based on a zero-crossing rate, a threshold of the zero-crossing rate being a first threshold;
the step of performing voiced sound detection on the voice signal comprises: performing voiced sound detection on the voice signal based on a zero-crossing rate, a threshold of the zero-crossing rate being a second threshold, the second threshold being greater than the first threshold.

[Claim 7] The voice processing method according to claim 6, characterized in that the second threshold is 8%-15% of the average amplitude of the voice signal.

[Claim 8] The voice processing method according to claim 6, characterized in that the step of performing voiced sound detection on the voice signal and extracting voiced segments from the voice signal comprises:
for two adjacent sampling points tmp1 and tmp2 in a voice frame of the voice signal, when tmp1*tmp2<0 and |tmp1-tmp2|>T2 are satisfied simultaneously, determining that the voice frame has crossed zero once, and counting the zero-crossing rate of the voice frame accordingly, where T2 is the second threshold;
extracting from the voice signal the voice frames whose zero-crossing rate is greater than a preset value to form voiced segments.

[Claim 9] The voice processing method according to claim 6, characterized in that the step of performing voiced sound detection on the voice signal and extracting voiced segments from the voice signal comprises:
for adjacent sampling-point pairs tmp1 and tmp2 in the voice signal, when tmp1*tmp2<0 and |tmp1-tmp2|>T2 are satisfied simultaneously, judging the zero-crossing rate to be 1, and otherwise 0, where T2 is the second threshold;
extracting from the voice signal the data segments corresponding to all sampling-point pairs with zero-crossing rate 1 to form voiced segments.

[Claim 10] The voice processing method according to claim 1, characterized by further comprising, after the step of performing keyword matching using the zero-crossing rate feature parameter: waking up the device when the keyword match succeeds.

[Claim 11] A voice processing device, characterized by comprising:
a first detecting module, configured to perform voice activity detection on a sound signal and extract a voice signal from the sound signal;
a second detecting module, configured to perform voiced sound detection on the voice signal and extract voiced segments from the voice signal;
a calculation module, configured to calculate a zero-crossing rate feature parameter of the voiced segments;
a matching module, configured to perform keyword matching using the zero-crossing rate feature parameter.

[Claim 12] The voice processing device according to claim 11, characterized in that the calculation module comprises:
a first splitting unit, configured to split the voiced segment into at least two voice frames;
a second splitting unit, configured to split each voice frame into at least two subframes;
a calculating unit, configured to calculate the average zero-crossing rate of each subframe in each voice frame;
a combining unit, configured to assemble the average zero-crossing rates of all subframes in each voice frame into the feature vector of that voice frame, and to use the feature vectors of all voice frames in the voiced segment as the zero-crossing rate feature parameter of the voiced segment.

[Claim 13] The voice processing device according to claim 12, characterized in that the inter-frame overlap of two adjacent voice frames is half the length of the voice frame.

[Claim 14] The voice processing device according to claim 12, characterized in that the matching module comprises:
an input unit, configured to input the zero-crossing rate feature parameter into a Gaussian mixture model for matching-degree evaluation, the Gaussian mixture model being an acoustic parameter model trained with keyword sound samples;
a judging unit, configured to judge whether matching succeeds according to the evaluation result.

[Claim 15] The voice processing device according to claim 14, characterized in that the judging unit comprises:
an obtaining subunit, configured to obtain the evaluation score output by the Gaussian mixture model for the feature vector of each voice frame in the voiced segment;
a calculating subunit, configured to calculate the average of the evaluation scores of all feature vectors;
a judging subunit, configured to judge whether the average is greater than or equal to a threshold;
a determining subunit, configured to determine that matching succeeds when the average is greater than or equal to the threshold.

[Claim 16] The voice processing device according to claim 11, characterized in that:
the first detecting module is configured to perform voice activity detection on the sound signal based on a zero-crossing rate, a threshold of the zero-crossing rate being a first threshold;
the second detecting module is configured to perform voiced sound detection on the voice signal based on a zero-crossing rate, a threshold of the zero-crossing rate being a second threshold, the second threshold being greater than the first threshold.

[Claim 17] The voice processing device according to claim 16, characterized in that the second threshold is 8%-15% of the average amplitude of the voice signal.

[Claim 18] The voice processing device according to claim 16, characterized in that the second detecting module comprises:
a statistics unit, configured, for two adjacent sampling points tmp1 and tmp2 in a voice frame of the voice signal, to determine that the voice frame has crossed zero once when tmp1*tmp2<0 and |tmp1-tmp2|>T2 are satisfied simultaneously, and to count the zero-crossing rate of the voice frame accordingly, where T2 is the second threshold;
a first extracting unit, configured to extract from the voice signal the voice frames whose zero-crossing rate is greater than a preset value to form voiced segments.

[Claim 19] The voice processing device according to claim 16, characterized in that the second detecting module comprises:
a judging unit, configured, for adjacent sampling-point pairs tmp1 and tmp2 in the voice signal, to judge the zero-crossing rate to be 1 when tmp1*tmp2<0 and |tmp1-tmp2|>T2 are satisfied simultaneously, and otherwise 0, where T2 is the second threshold;
a second extracting unit, configured to extract from the voice signal the data segments corresponding to all sampling-point pairs with zero-crossing rate 1 to form voiced segments.

[Claim 20] An electronic device, comprising a memory, a processor, and at least one application stored in the memory and configured to be executed by the processor, characterized in that the application is configured to perform the voice processing method according to any one of claims 1 to 10.
PCT/CN2018/082036 2018-03-06 2018-04-04 Voice processing method, apparatus and electronic device WO2019169685A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810184535.7 2018-03-06
CN201810184535.7A CN108711437A (zh) 2018-03-06 2018-03-06 Voice processing method and apparatus

Publications (1)

Publication Number Publication Date
WO2019169685A1 (zh) 2019-09-12

Family

ID=63866292

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/082036 WO2019169685A1 (zh) 2018-03-06 2018-04-04 Voice processing method, apparatus and electronic device

Country Status (2)

Country Link
CN (1) CN108711437A (zh)
WO (1) WO2019169685A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019169551A1 * 2018-03-06 2019-09-12 深圳市沃特沃德股份有限公司 Voice processing method, apparatus and electronic device
CN111696564B (zh) * 2020-06-05 2023-08-18 北京搜狗科技发展有限公司 Voice processing method, apparatus and medium
CN112735469B (zh) * 2020-10-28 2024-05-17 西安电子科技大学 Low-memory voice keyword detection method, system, medium, device and terminal

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103943104A (zh) * 2014-04-15 2014-07-23 海信集团有限公司 Voice information recognition method and terminal device
CN104700843A (zh) * 2015-02-05 2015-06-10 海信集团有限公司 Age identification method and device
CN105721651A (zh) * 2016-01-19 2016-06-29 海信集团有限公司 Voice dialing method and device
US20170294188A1 (en) * 2016-04-12 2017-10-12 Fujitsu Limited Apparatus, method for voice recognition, and non-transitory computer-readable storage medium
CN107610715A (zh) * 2017-10-10 2018-01-19 昆明理工大学 Similarity calculation method based on multiple sound features

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100952457B1 (ko) * 2008-02-19 2010-04-13 연세대학교 산학협력단 Apparatus and method for signal discrimination, and apparatus and method for music signal extraction
CN101308653A (zh) * 2008-07-17 2008-11-19 安徽科大讯飞信息科技股份有限公司 Endpoint detection method applied to a speech recognition system
CN106328168B (zh) * 2016-08-30 2019-10-18 成都普创通信技术股份有限公司 Voice signal similarity detection method
CN106328125B (zh) * 2016-10-28 2023-08-04 许昌学院 Henan dialect speech recognition system
CN106601234A (zh) * 2016-11-16 2017-04-26 华南理工大学 Implementation method of a place-name voice modeling system for goods sorting
CN107274911A (zh) * 2017-05-03 2017-10-20 昆明理工大学 Similarity analysis method based on sound features
CN107045870B (zh) * 2017-05-23 2020-06-26 南京理工大学 Voice signal endpoint detection method based on feature-value coding

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103943104A (zh) * 2014-04-15 2014-07-23 海信集团有限公司 Voice information recognition method and terminal device
CN104700843A (zh) * 2015-02-05 2015-06-10 海信集团有限公司 Age identification method and device
CN105721651A (zh) * 2016-01-19 2016-06-29 海信集团有限公司 Voice dialing method and device
US20170294188A1 (en) * 2016-04-12 2017-10-12 Fujitsu Limited Apparatus, method for voice recognition, and non-transitory computer-readable storage medium
CN107610715A (zh) * 2017-10-10 2018-01-19 昆明理工大学 Similarity calculation method based on multiple sound features

Also Published As

Publication number Publication date
CN108711437A (zh) 2018-10-26

Similar Documents

Publication Publication Date Title
CN111816218B (zh) 2024-12-31 Voice endpoint detection method, apparatus, device and storage medium
US9775113B2 (en) 2017-09-26 Voice wakeup detecting device with digital microphone and associated method
CN103236260B (zh) 2015-08-12 Speech recognition system
WO2020181824A1 (zh) 2020-09-17 Voiceprint recognition method, apparatus and device, and computer-readable storage medium
CN108597505B (zh) 2021-02-09 Speech recognition method, apparatus and terminal device
CN108922541B (zh) 2023-11-03 Multi-dimensional feature parameter voiceprint recognition method based on DTW and GMM models
JP2004527006A (ja) 2004-09-02 System and method for transmitting voice-active status in a distributed speech recognition system
CN109584896A (zh) 2019-04-05 Voice chip and electronic device
CN105206271A (zh) 2015-12-30 Voice wake-up method for a smart device and system implementing the method
EP1569422A2 (en) 2005-08-31 Method and apparatus for multi-sensory speech enhancement on a mobile device
US20120303369A1 (en) 2012-11-29 Energy-Efficient Unobtrusive Identification of a Speaker
WO2021082572A1 (zh) 2021-05-06 Wake-up model generation method, and smart terminal wake-up method and apparatus
WO2015161240A2 (en) 2015-10-22 Speaker verification
CN109524011A (zh) 2019-03-26 Refrigerator wake-up method and apparatus based on voiceprint recognition
WO2019169685A1 (zh) 2019-09-12 Voice processing method, apparatus and electronic device
CN105679312A (zh) 2016-06-15 Voice feature processing method for voiceprint recognition in a noisy environment
CN108447506A (zh) 2018-08-24 Voice processing method and voice processing apparatus
WO2019075829A1 (zh) 2019-04-25 Voice translation method and apparatus, and translation device
WO2023030235A1 (zh) 2023-03-09 Target audio output method and system, readable storage medium, and electronic apparatus
CN109215634A (zh) 2019-01-15 Method and system for multi-word voice control of an on/off device
CN108091340B (zh) 2021-03-16 Voiceprint recognition method, voiceprint recognition system and computer-readable storage medium
CN104732972A (zh) 2015-06-24 HMM voiceprint recognition sign-in method and system based on grouped statistics
CN100541609C (zh) 2009-09-16 Method and apparatus for open-loop pitch search
WO2019071723A1 (zh) 2019-04-18 Voice translation method and apparatus, and translation machine
WO2019051668A1 (zh) 2019-03-21 Startup control method and startup control system for a smart terminal

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18909052

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18909052

Country of ref document: EP

Kind code of ref document: A1