WO2017114201A1 - Method and apparatus for performing a set operation - Google Patents

Method and apparatus for performing a set operation

Info

Publication number
WO2017114201A1
WO2017114201A1 PCT/CN2016/110671 CN2016110671W
Authority
WO
WIPO (PCT)
Prior art keywords
neural network
speech signal
network model
phoneme
acoustic characteristics
Prior art date
Application number
PCT/CN2016/110671
Other languages
English (en)
French (fr)
Inventor
王志铭
李宏言
Original Assignee
阿里巴巴集团控股有限公司
王志铭
李宏言
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 阿里巴巴集团控股有限公司, 王志铭, 李宏言
Publication of WO2017114201A1

Links

Images

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/06 Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L 15/063 Training
    • G10L 15/08 Speech classification or search
    • G10L 15/16 Speech classification or search using artificial neural networks

Definitions

  • The present application relates to the field of computer technology, and in particular to a method and an apparatus for performing a set operation.
  • Thanks to its contact-free control characteristics, voice wake-up technology has been widely adopted, enabling users to conveniently start and control devices that have a voice wake-up function.
  • To implement voice wake-up, a specific wake-up word must be preset in the device, and the corresponding pronunciation phonemes are determined from the wake-up word and a pronunciation dictionary (a pronunciation phoneme, or simply phoneme, is the smallest speech unit of a pronounced syllable of the wake-up word).
  • When the user speaks the wake-up word near the device, the device collects the speech signal emitted by the user and, based on its acoustic features, judges whether they match the phonemes of the wake-up word, so as to determine whether the user spoke the wake-up word; if so, the device performs a self-wake operation, such as starting automatically or switching from sleep to an active state.
  • In the prior art, a Hidden Markov Model (HMM) is usually used to implement the above judgment; specifically, HMMs for the wake-up word and for non-wake-up words are preloaded in the voice wake-up module.
  • After receiving the speech signal emitted by the user, the module uses the Viterbi algorithm to decode the signal frame by frame down to the phoneme level, and finally judges from the decoded result whether the speech acoustic features of the user's signal match the phonemes of the wake-up word, thereby determining whether the user spoke the wake-up word.
  • The drawback of this prior art is that frame-by-frame Viterbi decoding of the user's speech signal involves dynamic-programming computation with an extremely large computational load, so the whole voice wake-up process consumes considerable processing resources.
  • With similar methods, the acoustic features of the speech signal corresponding to a set word can also trigger the device to perform set operations other than self-waking (such as emitting a designated signal or placing a call), where the same problem may arise.
  • A set word is the general term for the character(s) or word(s) whose corresponding speech-signal acoustic features are used to trigger the device to perform a set operation.
  • The wake-up word mentioned above is one kind of set word.
  • The embodiments of the present application provide a method for performing a set operation, to solve the prior-art problem that triggering a device to perform a set operation consumes substantial processing resources.
  • The embodiments of the present application further provide an apparatus for performing a set operation, to solve the same prior-art problem.
  • In the method, the samples used to train the neural network model include at least acoustic-feature samples of speech signals corresponding to the set word;
  • In the apparatus, a neural network module is configured to input the obtained acoustic features of the speech signal into the trained neural network model, where the samples used to train the model include at least acoustic-feature samples of speech signals corresponding to the set word;
  • A judgment module is configured to judge, from the probabilities output by the trained neural network model that the acoustic features correspond to the phonemes corresponding to the set word, whether to perform the set operation.
  • The solutions provided by the embodiments of the present application can reduce the processing resources consumed by the set-operation process.
  • FIG. 1 is a flowchart of the execution process of a set operation according to an embodiment of the present application;
  • FIG. 2 is a schematic diagram of a neural network model provided by an embodiment of the present application;
  • FIGS. 3a and 3b are schematic diagrams of statistics over the phonemes corresponding to the wake-up word, based on the output of the neural network model, according to an embodiment of the present application;
  • FIG. 4 is a schematic structural diagram of an apparatus for performing a set operation according to an embodiment of the present application.
  • As noted above, decoding the speech signal frame by frame to the phoneme level with the Viterbi algorithm requires substantial computing resources; especially for devices with voice wake-up, such as smart speakers and smart-home devices, a large computational load not only increases the device's workload but also its energy consumption, reducing the device's operating efficiency.
  • Because the neural network model has strong feature-learning ability and a lightweight computational structure, it suits the various kinds of devices with a voice wake-up function found in practical applications.
  • On this basis, the present application proposes the execution process of a set operation shown in FIG. 1, which specifically includes the following steps:
  • When a user triggers a set operation by voice on a device with a voice wake-up function (hereinafter a "voice device"), the user usually needs to speak the set word, and the sound of the user speaking the set word is the speech signal the user emits; accordingly, the voice device can receive that speech signal. Any speech signal a voice device receives can be considered to require recognition, so as to determine whether the user spoke the set word.
  • The set operations include, but are not limited to, voice-triggered wake-up operations, call operations, multimedia control operations, and the like.
  • The set words in the present application include, but are not limited to, preset passwords for voice triggering, such as wake-up words, call instruction words, and control instruction words (in some cases a set word may contain only a single Chinese character or word).
  • After the voice device receives the speech signal emitted by the user, it extracts the corresponding acoustic features from the signal in order to recognize it.
  • The acoustic features of the speech signal described in the embodiments of the present application may specifically be acoustic features of the speech signal extracted from it in units of frames.
  • The extraction of acoustic features can be implemented by a chip with a voice pickup function carried in the voice device; more specifically, it can be performed by the voice wake-up module in the voice device, which does not limit the present application.
  • The samples used to train the neural network model include at least acoustic-feature samples of speech signals corresponding to the set word.
  • The neural network model has a small computational order of magnitude and accurate results, making it suitable for different devices. Considering that, in practical applications, a deep neural network (DNN), with its strong feature-learning ability and ease of training, adapts well to speech-recognition scenarios, the embodiments of the present application may use a trained deep neural network.
  • In practice, the trained neural network model of the present application can be provided by the device supplier; that is, the voice-device supplier makes the trained neural network model part of the voice wake-up module and embeds the module, on a chip or in a processor, into the voice device.
  • To ensure the accuracy of the output of the trained neural network model, training samples of a certain scale can be used during training, so as to optimize and refine the neural network model.
  • The training samples usually include acoustic-feature samples of speech signals corresponding to the set word.
  • Of course, not all speech signals received by the voice device correspond to set words; therefore, to distinguish non-set words, the training samples may in general also include acoustic-feature samples of speech signals of non-set words.
  • The output of the trained neural network model includes at least the probability that the acoustic features of the speech signal correspond to the phonemes corresponding to the set word.
  • After the neural network model is generated, the previously obtained acoustic features of the speech signal (e.g., speech feature vectors) can be input into the neural network model for computation to obtain the corresponding output.
  • As one manner in practical scenarios, all acoustic features corresponding to the set word can be obtained first and then input together into the above neural network model.
  • As another manner, considering that the speech signal emitted by the user is a time-series signal, the acquired acoustic features can be input into the above neural network model continuously in temporal order (that is, input as they are acquired).
  • Which of these two manners of inputting the acoustic features is used can be chosen according to the needs of the actual application and does not limit the present application.
  • The probability that an acoustic feature corresponds to a phoneme corresponding to the set word is the probability that the acoustic feature matches that phoneme. Understandably, the larger the probability, the more likely the acoustic feature is that of the correctly pronounced set word; conversely, the smaller the probability, the less likely.
  • Performing the set operation here refers to waking the voice device to be woken by means of voice wake-up.
  • For example, if the execution subject of the method provided by the embodiments is the device itself, performing the set operation means waking the device itself.
  • Of course, the method provided by the embodiments is also applicable to scenarios in which one device wakes another.
  • For a given acoustic feature, the neural network model can compute and output, from the input acoustic feature, the probability distribution of that feature over different phonemes (including the phonemes corresponding to the set word and other phonemes); from this distribution, the phoneme that best matches the acoustic feature can be determined, namely the phoneme corresponding to the maximum probability in the distribution.
  • A history window is a certain duration of the speech signal; a speech signal of that duration is generally considered to contain enough acoustic features.
  • For example, take the set word "启动" ("start") in Chinese, whose pronunciation contains the four phonemes "q", "i3", "d", "ong4" (the digits denoting tones). The device inputs the obtained acoustic features into the trained neural network model, which computes the probability distribution of the phonemes each acoustic feature may represent, e.g., the probability of each of the phonemes "q", "i3", "d", "ong4"; each acoustic feature is mapped to the phoneme with the highest probability, thereby yielding the matching phoneme for every acoustic feature. On this basis, within one history window it is determined whether the speech signal corresponds in sequence to the four phonemes "q", "i3", "d", "ong4"; if so, the speech signal corresponds to the set word "start".
  • Such an approach can determine whether the phonemes corresponding to the acoustic features of the speech signal are the phonemes of the set word, and thus whether the user spoke the set word, thereby judging whether to perform the set operation.
  • The neural network model determines the probability that the obtained acoustic features correspond to the phonemes corresponding to the set word, and whether to perform the wake-up operation is then decided from that probability. Since determining the probability with a neural network does not consume as many resources as decoding the speech signal frame by frame to the phoneme level with the Viterbi algorithm, the solution provided by the embodiments of the present application, compared with the prior art, reduces the processing resources consumed by the set-operation process.
  • Before the set operation is performed, the device is usually in an inactive state such as sleep or shutdown (at this point only the voice wake-up module in the device is monitoring); after the user speaks the set word and it is verified, the voice wake-up module brings the device into the active state. Therefore, in the present application, before the acoustic features of the speech signal are obtained, the method further includes: judging whether a speech signal exists by performing voice activity detection (VAD), and, when the judgment is yes, performing step S101, i.e., obtaining the acoustic features of the speech signal.
  • Obtaining the acoustic features of the speech signal includes: obtaining the acoustic features from speech-signal frames. That is, the above acoustic features are usually obtained by extraction from the speech signal, and the accuracy of this extraction influences the generalization prediction of the subsequent neural network model and has a major impact on the accuracy of wake-up recognition.
  • The process of extracting acoustic features from the speech signal is described in detail below.
  • In the feature-extraction stage, the features of each frame of the speech signal are typically sampled within a fixed-size time window. For example, the time length of the collection window is set to 25 ms and the collection period to 10 ms; that is, after the device receives the speech signal to be recognized, a 25 ms window is sampled every 10 ms.
  • Sampling yields the raw features of the speech signal; after further feature extraction, acoustic features of a fixed dimension (assume N; the value of N depends on the feature-extraction method used in practice and is not specifically limited here) and with a certain discriminative power are obtained. Commonly used speech acoustic features include filter-bank (FBank) features, Mel-frequency cepstral coefficients (MFCC), and perceptual linear prediction (PLP) features.
  • Each speech-signal frame here may also be referred to as a per-frame speech feature vector.
  • Because speech is a time-series signal and context frames are correlated, the speech-signal frames can be arranged in their order on the time axis and the per-frame speech feature vectors spliced one by one, yielding a combined acoustic feature of the speech signal.
  • Obtaining the acoustic features from the speech-signal frames includes: for each reference frame among the speech-signal frames in turn, executing: acquiring the acoustic features of a first number of speech-signal frames arranged on the time axis before the reference frame, and the acoustic features of a second number of speech-signal frames arranged on the time axis after it; the acquired acoustic features are then spliced to obtain the acoustic features of the speech signal.
  • A reference frame generally refers to the speech-signal frame currently sampled by the voice device.
  • The voice device samples multiple times, so multiple reference frames are generated throughout the process.
  • The second number may be smaller than the first number. The spliced acoustic feature can be regarded as the acoustic feature of the corresponding reference frame, and the timestamp mentioned below can be the relative temporal order of the corresponding reference frame within the speech signal, i.e., the position of that reference frame on the time axis.
  • The current frame (that is, the reference frame) is generally spliced with the left L frames and right R frames of its context, forming a feature vector of size (L+1+R)*N (the digit "1" standing for the current frame itself) as the input to the deep neural network model. Usually L>R, i.e., the numbers of left and right frames are asymmetric.
  • With the current frame as the reference frame, the current frame together with its preceding 30 frames and following 10 frames may be selected and spliced, forming an acoustic feature composed of 41 frames (including the current frame itself) as the input to the input layer of the deep neural network.
  • The above is a detailed description of the acoustic features of the speech signal in the present application. After the above acoustic features are obtained, they are input into the trained neural network model for computation. The neural network model in this application may be a deep neural network model, whose structure is, for example, as shown in FIG. 2.
  • The deep neural network model has three parts: an input layer, hidden layers, and an output layer.
  • The speech feature vector is input from the input layer to the hidden layers for computational processing.
  • Each hidden layer includes 128 or 256 nodes (also called neurons), each provided with a corresponding activation function that implements its specific computation; as one option in the embodiments of the present application, rectified linear units (ReLU) are used as the activation function of the hidden-layer nodes, and a softmax regression function is set in the output layer to normalize the output of the hidden layers.
  • After the deep neural network model is built, it is trained. The above deep neural network model is trained in the following manner:
  • Convergence of the deep neural network model means that the maximum probability value in the probability distribution output by the network corresponds to the correctly pronounced phoneme of the speech-signal acoustic-feature samples.
  • The training samples are input into the deep neural network model, so that the model forward-propagates the features of the input samples up to the output layer; the error is computed with a preset objective function (generally based on the cross-entropy criterion) and back-propagated from the output layer through the deep neural network model, and the weights of the model are adjusted layer by layer according to the error.
  • The trained deep neural network can be embedded, by way of a chip, into the corresponding device for application.
  • Regarding the application of the deep neural network model in embedded devices: on the one hand, a lightweight model is needed, i.e., the number of hidden layers in the neural network and the number of nodes per hidden layer must be limited, so a deep neural network model of appropriate scale suffices; on the other hand, the computation of the model should be performance-optimized for the specific platform using an optimized instruction set (such as NEON on the ARM platform) to meet real-time requirements.
  • The number of output-layer nodes of the trained deep neural network model corresponds to the number of phonemes of the set word plus one "Garbage" node; that is, assuming the set word is "start" from the example above, corresponding to 4 phonemes, the output layer of the trained deep neural network model has 5 nodes.
  • The "Garbage" node corresponds to phonemes other than the set word's phonemes, i.e., to phonemes different from those of the set word.
  • The training can be based on a large-vocabulary continuous speech recognition (LVCSR) system, with each frame feature in the samples force-aligned to the phoneme level.
  • The training samples may generally include positive samples (containing the set word) and negative samples (containing no set word).
  • A set word whose pronunciation begins with a vowel (or contains a vowel) is generally selected; such set words sound full, which helps reduce the wake-up system's false-rejection rate.
  • The set word of the training samples may be, for example, "大白, 你好" ("Big White, hello"), whose corresponding phonemes are: d, a4, b, ai2, n, i3, h, ao3.
  • The set words exemplified here are merely examples and do not limit the present application; in practical applications they can be extended by analogy to other useful set words.
  • After training with the above sample data, a converged and optimized deep neural network model is obtained, which can map speech acoustic features to the correct phonemes with maximum probability.
  • In addition, to bring the topology of the neural network model to an optimal state, transfer learning can be used: a DNN of suitable topology is trained with large-scale Internet speech data and used as the initial values of the parameters of the target deep neural network (mainly the layers other than the output layer).
  • The benefit of this is a more robust "feature representation" and avoidance of local optima during training; the idea of "transfer learning" makes good use of the powerful "feature learning" capability of deep neural networks. Of course, this does not limit the present application.
  • In actual use, the device can receive the speech signal emitted by the user and input the corresponding acoustic features into the trained neural network model, so that after computation the model outputs the probabilities that the phonemes corresponding to the set word match the respective acoustic features.
  • Judging whether to perform the wake-up operation from these probabilities includes: determining the maximum likelihood probability among the probabilities, output by the neural network model, that each acoustic feature corresponds to a phoneme corresponding to the set word; determining the mapping between each obtained maximum likelihood probability and the corresponding phoneme; and judging, from the mappings and a confidence threshold, whether to perform the wake-up operation.
  • The neural network model outputs a probability distribution for each acoustic feature; the distribution reflects the various possibilities that the acoustic feature matches the phonemes corresponding to the set word. Clearly, for any acoustic feature, the maximum value in its distribution (i.e., the maximum likelihood probability) indicates the set-word phoneme that the feature is most likely to match; hence, in the above step of the present application, the maximum likelihood probability among each acoustic feature's probabilities over the set word's phonemes is determined.
  • Judging from the mappings and the confidence threshold whether to perform the wake-up operation specifically includes: for the phoneme(s) corresponding to each set word, counting the number of maximum likelihood probabilities mapped to that phoneme as the phoneme's confidence; judging whether the confidence of every phoneme exceeds the confidence threshold; if so, performing the set operation; otherwise, not performing it.
  • The acoustic features of the speech signal can be input into the voice wake-up module's neural network model for computation, obtaining the probability distribution of the phonemes each acoustic feature may represent; the neural network model maps each acoustic feature to the phoneme with the highest probability, so that the phoneme statistics of the per-frame acoustic features within one history window can be tallied to determine whether the speech signal corresponds to the set word.
  • The neural-network-based manner of computation used in the present application can effectively reduce the computational order of magnitude and the processing resources consumed; at the same time, the neural network model is easy to train, which effectively improves its applicability.
  • Assume the preset wake-up word is "大白, 你好"; its phonemes, called standard phonemes, are: d, a4, b, ai2, n, i3, h, ao3.
  • To represent each phoneme's probability distribution intuitively, a graphic such as a histogram can be used.
  • Taking a histogram as the example, a histogram bar is established, via the above deep neural network model, for each phoneme and for the "Garbage" node. Each phoneme (including the "Garbage" node) corresponds to one histogram bar; the height of every phoneme's bar is zero in FIG. 3a, since the speech-signal recognition process has not yet been performed.
  • The height of a histogram bar reflects the statistic of acoustic features mapped to that phoneme; this statistic can be regarded as the phoneme's confidence.
  • The voice wake-up module in the voice wake-up device receives the speech signal to be recognized.
  • The speech-signal detection operation is typically performed by the VAD module before the voice wake-up module runs, in order to detect whether a speech signal exists (as distinct from silence).
  • Once a speech signal is detected, the voice wake-up system begins to work, i.e., performs computational processing with the neural network model.
  • The voice wake-up module inputs the acoustic features obtained from the speech signal emitted by the user (including acoustic features obtained by splicing several frames' speech feature vectors in the manner described above) into the deep neural network model for forward-propagation computation.
  • A "block computation" manner may also be adopted here: the speech feature vectors of several consecutive speech-signal frames (forming one active window) are input into the deep neural network model simultaneously, followed by matrix computation. This does not limit the present application.
  • The values output by the output layer of the deep neural network model represent the probability distribution over the corresponding phonemes given a speech feature vector. Clearly, the probability that the pronunciation phonemes of the wake-up word cover non-"Garbage" nodes is larger. The phoneme corresponding to the maximum likelihood probability of the output layer is taken, its histogram bar is incremented by one unit, and the corresponding timestamp (in units of frames) is recorded.
  • Within one history window, each bar's share of coverage can be regarded as the confidence of each phoneme.
  • The confidence threshold may be preset; for example, after the deep neural network training is completed, cross-validation experiments on a validation set can yield the confidence threshold.
  • The role of the confidence threshold is: for a given speech signal, once the histograms of the pronunciation phonemes of the wake-up word corresponding to that signal have been determined by the procedure described above, the histograms and the confidence threshold are used to judge whether the bar height (i.e., confidence) of every pronunciation phoneme of the wake-up word exceeds the confidence threshold; if so, it can be determined that the speech signal corresponds to the wake-up word, and the corresponding voice wake-up operation can be performed.
  • Each time a histogram bar is incremented by one unit, the voice wake-up device records the corresponding timestamp.
  • The timestamp is in units of frames and represents the relative temporal order, within the speech signal, of the speech-signal frame to which the speech acoustic feature belongs, i.e., the position of that frame on the time axis. If timestamp X is recorded when a unit is added to the histogram for a speech acoustic feature, the timestamp indicates that the speech-signal frame to which the feature belongs is the Xth frame.
  • From the timestamps, the positions on the time axis of the speech-signal frames to which different speech acoustic features belong can be determined. It can be seen that if the speech signal to be recognized indeed contains the wake-up word "大白, 你好", then, in the histogram shown in FIG. 3b, the timestamps recorded for the bars from "d" through "ao3" should be monotonically increasing.
  • If timestamps are introduced as a condition for deciding whether to perform the wake-up operation, the speech signal is considered to correspond to the wake-up word, and the wake-up operation is performed, only when the bar heights from "d" through "ao3" all exceed the confidence threshold and the timestamps recorded for "d" through "ao3" are judged to be monotonically increasing.
  • Introducing timestamps as a condition for the wake-up decision suits scenarios that require each character of the wake-up word to be pronounced in sequence before the wake-up operation is performed.
  • The above content is not limited to the voice wake-up operation and applies equally to voice-triggered set operations in other scenarios; no further details are given here.
  • The embodiments of the present application further provide an apparatus for performing a set operation, as shown in FIG. 4.
  • The apparatus for performing a set operation includes an acquisition module 401, a neural network module 402, and a judgment module 403, where:
  • the acquisition module 401 is configured to obtain acoustic features of a speech signal;
  • the neural network module 402 is configured to input the obtained acoustic features of the speech signal into the trained neural network model, where the samples used to train the model include at least acoustic-feature samples of speech signals corresponding to the set word;
  • the judgment module 403 is configured to judge, from the probabilities output by the trained neural network model that the acoustic features correspond to the phonemes corresponding to the set word, whether to perform the set operation.
  • The acquisition module 401 is specifically configured to obtain the acoustic features of the speech signal from speech-signal frames.
  • More specifically, the acquisition module 401 is configured to take the currently sampled speech-signal frame as the reference frame and, starting from the first frame after the first number of speech-signal frames, to execute frame by frame for each subsequent frame: acquiring the acoustic features of the first number of frames arranged on the time axis before the reference frame and of the second number of frames arranged after it, and splicing the acquired acoustic features to obtain the acoustic features of the speech signal.
  • The apparatus further includes: a voice activity detection module 404, configured to judge, before the acoustic features of the speech signal are obtained, whether a speech signal exists by performing voice activity detection (VAD), and to obtain the acoustic features when the judgment is yes.
  • The neural network module 402 is specifically configured to train the neural network model in the following manner: determining the number of output-layer nodes of the deep neural network to be trained according to the number of phoneme samples corresponding to the set word, and iterating the forward-propagation, error back-propagation, and layer-by-layer weight-adjustment steps described above until the model converges.
  • The judgment module 403 is specifically configured to determine the maximum likelihood probability among the probabilities, output by the neural network model, that each acoustic feature corresponds to a phoneme corresponding to the set word; to determine the mapping between each obtained maximum likelihood probability and the corresponding phoneme; and to judge, from the mappings and the confidence threshold, whether to perform the wake-up operation.
  • More specifically, the judgment module 403 is configured, for the phoneme(s) corresponding to each set word, to count the number of maximum likelihood probabilities mapped to that phoneme as the phoneme's confidence, and to judge whether the confidence of every phoneme exceeds the confidence threshold; if so, the set operation is performed; otherwise, it is not performed.
  • a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • The memory may include non-persistent memory in computer-readable media, in forms such as random-access memory (RAM) and/or non-volatile memory, e.g., read-only memory (ROM) or flash memory (flash RAM).
  • Memory is an example of a computer readable medium.
  • Computer-readable media include permanent and non-permanent, removable and non-removable media; information storage can be implemented by any method or technology.
  • The information may be computer-readable instructions, data structures, program modules, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact-disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission media, which can be used to store information accessible to a computing device.
  • As defined herein, computer-readable media exclude transitory media, such as modulated data signals and carrier waves.
  • The embodiments of the present application can be provided as a method, a system, or a computer program product.
  • The present application can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware.
  • The application can take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.

Abstract

A method and an apparatus for performing a set operation. The method includes: obtaining acoustic features of a speech signal (S101); inputting the obtained acoustic features into a trained neural network model, where the samples used to train the neural network model include at least acoustic-feature samples of speech signals corresponding to a set word (S102); and judging, from the probabilities output by the trained neural network model that the acoustic features correspond to the phonemes corresponding to the set word, whether to perform the set operation (S103). Computing with a neural network model can effectively reduce the computational order of magnitude and the processing resources consumed.

Description

Method and apparatus for performing a set operation

Technical Field

The present application relates to the field of computer technology, and in particular to a method and an apparatus for performing a set operation.
Background
With the development of information technology, voice wake-up technology, thanks to its contact-free control characteristics, lets users conveniently start and control devices that have a voice wake-up function, and has therefore been widely applied.

To implement voice wake-up of a device, a specific wake-up word must be preset in the device, and the corresponding pronunciation phonemes are determined from the wake-up word and a pronunciation dictionary (a pronunciation phoneme, or simply phoneme, is the smallest speech unit of a pronounced syllable of the wake-up word). In actual use, when the user speaks the wake-up word within a certain range of the device, the device collects the speech signal emitted by the user and, based on the acoustic features of the speech signal, judges whether those acoustic features match the phonemes of the wake-up word, so as to determine whether the user spoke the wake-up word; if so, the device performs a self-wake operation, such as starting automatically or switching from a sleep state to an active state.

In the prior art, for devices with a voice wake-up function, a Hidden Markov Model (HMM) is usually used to implement the above judgment, specifically: HMMs for the wake-up word and for non-wake-up words are preloaded in the voice wake-up module; after the speech signal emitted by the user is received, the Viterbi algorithm is used to decode the speech signal frame by frame down to the phoneme level; finally, from the decoded result, whether the speech acoustic features of the user's signal match the phonemes of the wake-up word is judged, thereby determining whether the user spoke the wake-up word.

The drawback of this prior art is that frame-by-frame Viterbi decoding of the speech signal emitted by the user involves dynamic-programming computation with an extremely large computational load, causing the whole voice wake-up process to consume substantial processing resources.

Similarly, when similar methods are used so that the acoustic features of the speech signal corresponding to a set word trigger the device to perform set operations other than self-waking (such as emitting a designated signal or placing a call), the same problem may arise. A set word is the general term for the character(s) or word(s) whose corresponding speech-signal acoustic features are used to trigger the device to perform a set operation; the wake-up word described above is one kind of set word.
Summary
The embodiments of the present application provide a method for performing a set operation, to solve the prior-art problem that the process of triggering a device to perform a set operation consumes substantial processing resources.

The embodiments of the present application further provide an apparatus for performing a set operation, to solve the prior-art problem that the process of triggering a device to perform a set operation consumes substantial processing resources.

The method for performing a set operation provided by the embodiments of the present application includes:

obtaining acoustic features of a speech signal;

inputting the obtained acoustic features of the speech signal into a trained neural network model, where the samples used to train the neural network model include at least acoustic-feature samples of speech signals corresponding to a set word;

judging, from the probabilities output by the trained neural network model that the acoustic features correspond to the phonemes corresponding to the set word, whether to perform the set operation.
The apparatus for performing a set operation provided by the embodiments of the present application includes:

an acquisition module, configured to obtain acoustic features of a speech signal;

a neural network module, configured to input the obtained acoustic features of the speech signal into a trained neural network model, where the samples used to train the neural network model include at least acoustic-feature samples of speech signals corresponding to a set word; and

a judgment module, configured to judge, from the probabilities output by the trained neural network model that the acoustic features correspond to the phonemes corresponding to the set word, whether to perform the set operation.

With at least one of the above solutions provided by the embodiments of the present application, a neural network model is used to determine the probability that the obtained acoustic features correspond to the phonemes corresponding to the set word, and whether to perform the set operation is then decided from that probability. Compared with decoding the speech signal frame by frame to the phoneme level with the Viterbi algorithm, determining the probability with a neural network does not consume as many resources; therefore, compared with the prior art, the solutions provided by the embodiments of the present application reduce the processing resources consumed by the set-operation process.
Brief Description of the Drawings
The drawings described here are provided for further understanding of the present application and constitute a part of it; the illustrative embodiments of the present application and their descriptions serve to explain the application and do not unduly limit it. In the drawings:

FIG. 1 is the execution process of a set operation provided by an embodiment of the present application;

FIG. 2 is a schematic diagram of a neural network model provided by an embodiment of the present application;

FIGS. 3a and 3b are schematic diagrams, provided by an embodiment of the present application, of statistics over the phonemes corresponding to the wake-up word based on the output of the neural network model;

FIG. 4 is a schematic structural diagram of an apparatus for performing a set operation provided by an embodiment of the present application.
Detailed Description
To make the purposes, technical solutions, and advantages of the present application clearer, the technical solutions are described below clearly and completely with reference to specific embodiments of the present application and the corresponding drawings. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by a person of ordinary skill in the art, based on the embodiments herein and without creative effort, fall within the protection scope of the present application.

As stated above, decoding a speech signal frame by frame to the phoneme level with the Viterbi algorithm consumes a great deal of computing resources; especially for devices with a voice wake-up function, such as smart speakers and smart-home devices, a large computational load not only increases the device's workload but also increases its energy consumption, reducing the device's operating efficiency. By contrast, the neural network model, with its strong feature-learning ability and lightweight computational structure, suits the various kinds of devices with a voice wake-up function found in practical applications.

On this basis, the present application proposes the execution process of a set operation shown in FIG. 1, which specifically includes the following steps:
S101: obtain acoustic features of a speech signal.

In a practical scenario, when a user triggers a set operation by voice on a device with a voice wake-up function (hereinafter a "voice device"), the user usually needs to speak the set word, and the sound of the user speaking the set word is the speech signal the user emits. Accordingly, the voice device can receive the speech signal emitted by the user. For a voice device, any speech signal it receives can be considered to require recognition, so as to determine whether the user spoke the set word.

It should be noted here that, in the present application, set operations include but are not limited to wake-up operations, call operations, multimedia control operations, and the like, triggered by voice. The set words in the present application include but are not limited to wake-up words, call instruction words, control instruction words, and other preset passwords used for voice triggering (in some cases a set word may contain only a single Chinese character or word).

After the voice device receives the speech signal emitted by the user, it extracts the corresponding acoustic features from the signal in order to recognize it. The acoustic features of the speech signal described in the embodiments of the present application may specifically be acoustic features of the speech signal extracted from it in units of frames.

Of course, for the speech signal, the extraction of acoustic features can be implemented by a chip with a voice pickup function carried in the voice device. More specifically, the extraction can be performed by the voice wake-up module in the voice device; this does not limit the present application. Once the voice device has obtained the above acoustic features, it can process them computationally, that is, perform step S102 below.
S102: input the obtained acoustic features of the speech signal into the trained neural network model.

The samples used to train the neural network model include at least acoustic-feature samples of speech signals corresponding to the set word.

The neural network model has a small computational order of magnitude and accurate results, making it suitable for different devices. Considering that, in practical applications, a deep neural network (DNN), with its strong feature-learning ability and ease of training, adapts well to speech-recognition scenarios, the embodiments of the present application may specifically use a trained deep neural network.

In practical scenarios, the trained neural network model of the present application can be provided by the device supplier; that is, the voice-device supplier makes the trained neural network model part of the voice wake-up module and embeds the module, on a chip or in a processor, into the voice device. Of course, this is only an illustrative description of how the neural network model is deployed and does not limit the present application.

To ensure the accuracy of the output of the trained neural network model, training samples of a certain scale can be used during training, so as to optimize and refine the neural network model. The training samples usually include acoustic-feature samples of speech signals corresponding to the set word; of course, not all speech signals received by the voice device correspond to set words, so to distinguish non-set words the training samples may in practice also include acoustic-feature samples of speech signals of non-set words.

In the embodiments of the present application, the output of the trained neural network model includes at least the probability that the acoustic features of the speech signal correspond to the phonemes corresponding to the set word.

After the neural network model is generated, the previously obtained acoustic features (e.g., speech feature vectors) can be input into it for computation to obtain the corresponding output. It should be noted here that, as one manner in practical scenarios, all acoustic features corresponding to the set word can be obtained first and then input together into the above neural network model; as another manner, considering that the speech signal emitted by the user is a time-series signal, the acquired acoustic features can be input into the above neural network model continuously in temporal order (that is, input as they are acquired). Which of these two manners is used can be chosen according to the needs of the actual application and does not limit the present application.
S103: judge, from the probabilities output by the trained neural network model that the acoustic features correspond to the phonemes corresponding to the set word, whether to perform the set operation.

The probability that an acoustic feature corresponds to a phoneme corresponding to the set word is the probability that the acoustic feature matches that phoneme. Understandably, the larger the probability, the more likely the acoustic feature is that of the correctly pronounced set word; conversely, the smaller the probability, the less likely.

Performing the set operation refers to waking the voice device to be woken by means of voice wake-up. For example, if the execution subject of the method provided by the embodiments is the device itself, performing the set operation means waking the device itself. Of course, the method provided by the embodiments is also applicable to scenarios in which one device wakes another.

In the embodiments of the present application, for a given acoustic feature, the neural network model can compute and output, from the input acoustic feature, the probability distribution of that feature over different phonemes (including the phonemes corresponding to the set word and other phonemes); from the output distribution, the phoneme that best matches the acoustic feature can be determined from among the different phonemes, namely the phoneme corresponding to the maximum probability in the distribution.

By analogy, the best-matching phoneme and the corresponding probability can be tallied for every acoustic feature extracted from a speech signal whose length is one history window; further, based on these best-matching phonemes and their probabilities, whether the speech signal corresponds to the set word can be determined. It should be noted that a history window is a certain duration of the speech signal; a speech signal of that duration is generally considered to contain enough acoustic features.

The following example illustrates the specific implementation of the above:

Take the set word "启动" ("start") in Chinese as an example: its pronunciation contains the four phonemes "q", "i3", "d", "ong4", where the digits 3 and 4 denote different tones, i.e., "i3" means the "i" sound is pronounced in the third tone and, similarly, "ong4" means the "ong" sound is pronounced in the fourth tone. In actual use, the device inputs the obtained acoustic features into the trained neural network model, which computes the probability distribution of the phonemes each acoustic feature may represent, e.g., the probability of each of the phonemes "q", "i3", "d", "ong4", and maps each acoustic feature to the phoneme with the highest probability, thereby obtaining the matching phoneme for every acoustic feature. On this basis, within one history window it is determined whether the speech signal corresponds in sequence to the four phonemes "q", "i3", "d", "ong4"; if so, the speech signal corresponds to the set word "start".
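For illustration only, the mapping-and-matching step of this example can be sketched as follows, assuming one probability row per acoustic feature over the four phonemes plus a garbage class (the helper name phones_in_order and the class layout are illustrative assumptions, not taken from the disclosure):

    import numpy as np

    PHONEMES = ["q", "i3", "d", "ong4", "garbage"]   # assumed output classes; the last is the garbage class
    WAKE_PHONES = ["q", "i3", "d", "ong4"]           # phonemes of the set word "start"

    def phones_in_order(probs: np.ndarray) -> bool:
        """probs: (T, 5) matrix, one probability row per acoustic feature in the
        history window. Map each feature to its highest-probability phoneme, then
        check that the set word's phonemes appear as an in-order subsequence."""
        best = [PHONEMES[i] for i in probs.argmax(axis=1)]
        it = iter(best)
        return all(p in it for p in WAKE_PHONES)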
As the example shows, this approach can determine whether the phonemes corresponding to the acoustic features of the speech signal are the phonemes of the set word, and thus whether the user spoke the set word, thereby judging whether to perform the set operation.

Through the above steps, a neural network model is used to determine the probability that the obtained acoustic features correspond to the phonemes corresponding to the set word, and whether to perform the wake-up operation is then decided from that probability. Since determining the probability with a neural network does not consume as many resources as decoding the speech signal frame by frame to the phoneme level with the Viterbi algorithm, the solution provided by the embodiments of the present application, compared with the prior art, reduces the processing resources consumed by the set-operation process.
Regarding the above steps, it should be noted that before the set operation is performed the device is usually in an inactive state such as sleep or shutdown (at this point only the voice wake-up module in the device is monitoring); after the user speaks the set word and passes verification, the voice wake-up module brings the device into the active state. Therefore, in the present application, before the acoustic features of the speech signal are obtained, the method further includes: judging whether a speech signal exists by performing voice activity detection (VAD), and, when the judgment is yes, performing step S101, i.e., obtaining the acoustic features of the speech signal.
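The disclosure does not specify a VAD algorithm; purely as an assumed placeholder, a short-term-energy gate of the kind commonly used for this check might look like the following (the threshold value is arbitrary):

    import numpy as np

    def vad(frame: np.ndarray, energy_thresh: float = 1e-4) -> bool:
        """Crude energy-based voice activity detection over one audio frame:
        report that speech may be present when the mean squared amplitude
        exceeds the threshold."""
        return float(np.mean(frame.astype(np.float64) ** 2)) > energy_thresh

    # Step S101 (feature extraction) runs only when the gate fires:
    # if vad(frame):
    #     features = extract(frame)   # extract() is a hypothetical routine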
In practical applications, for step S101, obtaining the acoustic features of the speech signal includes: obtaining the acoustic features from speech-signal frames. That is, the above acoustic features are usually obtained by extraction from the speech signal, and the accuracy of this extraction influences the generalization prediction of the subsequent neural network model and has a major impact on the accuracy of wake-up recognition. The extraction process is described in detail below.

In the feature-extraction stage, the features of each frame of the speech signal are generally sampled within a fixed-size time window. For example, as one option in the embodiments of the present application, the time length of the collection window is set to 25 ms and the collection period to 10 ms; that is, after the device receives the speech signal to be recognized, it samples one 25 ms window every 10 ms.

In the above example, sampling yields the raw features of the speech signal; after further feature extraction, acoustic features of a fixed dimension (assume N; the value of N depends on the feature-extraction method used in practice and is not specifically limited here) and with a certain discriminative power are obtained. In the embodiments of the present application, commonly used speech acoustic features include filter-bank (FBank) features, Mel-frequency cepstral coefficients (MFCC), perceptual linear prediction (PLP) features, and so on.
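A sketch of the framing described here (25 ms window, 10 ms period) follows; the 16 kHz sample rate and the log-spectral feature are assumptions standing in for the FBank/MFCC/PLP front ends named above:

    import numpy as np

    SR = 16000                  # assumed sample rate
    WIN = int(0.025 * SR)       # 25 ms collection window -> 400 samples
    HOP = int(0.010 * SR)       # 10 ms collection period -> 160 samples

    def frames(signal: np.ndarray) -> np.ndarray:
        """Slice the signal (assumed at least WIN samples long) into
        overlapping 25 ms frames taken every 10 ms."""
        count = 1 + (len(signal) - WIN) // HOP
        return np.stack([signal[i * HOP : i * HOP + WIN] for i in range(count)])

    def log_spectral_features(signal: np.ndarray, n_dims: int = 40) -> np.ndarray:
        """Toy N-dimensional per-frame feature: log power of the first n_dims
        FFT bins of each Hamming-windowed frame, giving a (frames, N) matrix."""
        f = frames(signal) * np.hamming(WIN)
        power = np.abs(np.fft.rfft(f, axis=1)) ** 2
        return np.log(power[:, :n_dims] + 1e-10)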
Through this extraction process, speech-signal frames containing N-dimensional acoustic features are obtained (in the present application, each such speech-signal frame may also be called a per-frame speech feature vector). It should also be noted that, because speech is a time-series signal and context frames are correlated, once the above per-frame feature vectors are obtained they can be spliced one by one, in the order of the frames on the time axis, to obtain a combined acoustic feature of the speech signal.

Specifically, obtaining the acoustic features from the speech-signal frames includes: for each reference frame among the speech-signal frames in turn, executing: acquiring the acoustic features of a first number of speech-signal frames arranged on the time axis before the reference frame, and the acoustic features of a second number of speech-signal frames arranged on the time axis after it; and splicing the acquired acoustic features to obtain the acoustic features of the speech signal.

A reference frame generally refers to the speech-signal frame currently sampled by the voice device; for a continuous speech signal the device samples many times, so multiple reference frames arise over the whole process.

In this embodiment, the second number may be smaller than the first number. The spliced acoustic feature can be regarded as the acoustic feature of the corresponding reference frame, and the timestamp mentioned below can be the relative temporal order of the corresponding reference frame within the speech signal, i.e., the position of that reference frame on the time axis.

That is, to improve the generalization and prediction ability of the deep neural network model, the current frame (i.e., the reference frame) is generally spliced with the L frames of left context and the R frames of right context, forming a feature vector of size (L+1+R)*N (the digit "1" standing for the current frame itself) as the input to the deep neural network model. Usually L>R, i.e., the numbers of left and right frames are asymmetric. Asymmetric left and right context frames are used here because streaming audio has a delayed-decoding problem, and asymmetric context frames minimize or avoid the impact of delayed decoding.

For example, in the embodiments of the present application, with the current frame as the reference frame, the current frame together with its preceding 30 frames and following 10 frames can be selected and spliced, forming an acoustic feature composed of 41 frames (including the current frame itself) as the input to the input layer of the deep neural network.
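This splicing step, with L=30 and R=10, can be sketched as follows (padding the edges by repeating the first and last frames is an assumption; the disclosure does not define edge handling):

    import numpy as np

    def splice(feats: np.ndarray, left: int = 30, right: int = 10) -> np.ndarray:
        """feats: (T, N) per-frame feature vectors. Returns a (T, (left+1+right)*N)
        matrix in which each reference frame is concatenated with its left and
        right context frames."""
        T = len(feats)
        padded = np.concatenate([np.repeat(feats[:1], left, axis=0),
                                 feats,
                                 np.repeat(feats[-1:], right, axis=0)])
        return np.stack([padded[t : t + left + 1 + right].reshape(-1) for t in range(T)])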
The above is a detailed description of the acoustic features of the speech signal in the present application. Once the acoustic features are obtained, they are input into the trained neural network model for computation. The neural network model in the present application may be a deep neural network model whose structure is, for example, as shown in FIG. 2.

In FIG. 2, the deep neural network model has three parts: an input layer, hidden layers, and an output layer. The speech feature vector is input from the input layer to the hidden layers for computational processing. Each hidden layer includes 128 or 256 nodes (also called neurons), each provided with a corresponding activation function that implements its specific computation; as one option in the embodiments of the present application, rectified linear units (ReLU) are used as the activation function of the hidden-layer nodes, and a softmax regression function is set in the output layer to normalize the output of the hidden layers.
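The corresponding forward computation can be sketched as below; the ReLU hidden layers and softmax output layer are from the text, while the weight and bias containers are placeholders:

    import numpy as np

    def relu(x: np.ndarray) -> np.ndarray:
        return np.maximum(0.0, x)

    def softmax(x: np.ndarray) -> np.ndarray:
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    def forward(x: np.ndarray, weights: list, biases: list) -> np.ndarray:
        """x: (batch, (L+1+R)*N) spliced features; weights/biases: one pair per
        layer. Hidden layers (e.g. 128 or 256 nodes each) apply ReLU; the output
        layer applies softmax, yielding one probability per set-word phoneme
        plus the Garbage node."""
        h = x
        for W, b in zip(weights[:-1], biases[:-1]):
            h = relu(h @ W + b)
        return softmax(h @ weights[-1] + biases[-1])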
After the above deep neural network model is built, the model is trained. In the present application, the above deep neural network model is trained in the following manner:

The number of output-layer nodes of the deep neural network to be trained is determined according to the number of phoneme samples corresponding to the set word, and the following steps are executed in a loop until the deep neural network model converges (convergence of the model means that the maximum probability value in the probability distribution output by the deep neural network corresponds to the correctly pronounced phoneme of the speech-signal acoustic-feature samples):

The training samples are input into the deep neural network model, so that the model forward-propagates the features of the input samples up to the output layer; the error is computed with a preset objective function (generally based on the cross-entropy criterion) and back-propagated from the output layer through the deep neural network model, and the weights of the model are adjusted layer by layer according to the error.

When the algorithm converges, the error present in the deep neural network model is minimized.
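A minimal sketch of one such training step, assuming a single hidden layer and plain gradient descent (deeper models repeat the same chain rule layer by layer; only the cross-entropy criterion and layer-by-layer weight adjustment come from the text):

    import numpy as np

    def train_step(x, y, W1, b1, W2, b2, lr=0.01):
        """x: (B, D) spliced feature samples; y: (B,) integer phoneme labels from
        forced alignment. Forward-propagate, compute the cross-entropy error,
        back-propagate it from the output layer, and adjust the weights."""
        h = np.maximum(0.0, x @ W1 + b1)                        # ReLU hidden layer
        logits = h @ W2 + b2
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)                       # softmax output
        loss = -np.log(p[np.arange(len(y)), y] + 1e-12).mean()  # cross-entropy
        g = p.copy()
        g[np.arange(len(y)), y] -= 1.0
        g /= len(y)                                             # dloss/dlogits
        dW2, db2 = h.T @ g, g.sum(axis=0)
        dh = g @ W2.T
        dh[h <= 0.0] = 0.0                                      # ReLU gradient
        dW1, db1 = x.T @ dh, dh.sum(axis=0)
        for P, dP in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
            P -= lr * dP                                        # in-place update
        return loss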
After these steps, the trained deep neural network can be embedded, by way of a chip, into the corresponding device for application. Regarding the application of the deep neural network model in embedded devices, it should be noted that, on the one hand, a lightweight model is needed in the application, i.e., the number of hidden layers in the neural network and the number of nodes per hidden layer must be limited, so a deep neural network model of appropriate scale suffices; on the other hand, the computation of the model should be performance-optimized for the specific platform using an optimized instruction set (e.g., NEON on the ARM platform) to meet real-time requirements.

In the present application, the number of output-layer nodes of the trained deep neural network model corresponds to the number of phonemes of the set word plus one "Garbage" node; that is, assuming the set word is "start" from the example above, corresponding to 4 phonemes, the output layer of the trained deep neural network model has 5 nodes. The "Garbage" node corresponds to phonemes other than the set word's phonemes, i.e., to phonemes different from those of the set word.

To accurately obtain the phonemes corresponding to the set word as well as the other phonemes that do not match them, during training each frame feature in the training samples can be force-aligned to the phoneme level based on a large-vocabulary continuous speech recognition (LVCSR) system.

The training samples may generally include positive samples (containing the set word) and negative samples (containing no set word). In the embodiments of the present application, a set word whose pronunciation begins with a vowel (or contains a vowel) is generally selected; such set words sound full, which helps reduce the wake-up system's false-rejection rate. In view of this, the set word of the training samples may be, for example, "大白, 你好" ("Big White, hello"), whose corresponding phonemes are: d, a4, b, ai2, n, i3, h, ao3. The set word exemplified here is only an example and does not limit the present application; in practical applications it can be extended by analogy to other useful set words.

After training with the above sample data, a converged and optimized deep neural network model is obtained, which can map speech acoustic features to the correct phonemes with maximum probability.

In addition, to bring the topology of the neural network model to an optimal state, transfer learning can be used: a DNN of suitable topology is trained with large-scale Internet speech data and used as the initial values of the parameters of the target deep neural network (mainly the layers other than the output layer). The benefit of this is a more robust "feature representation" and avoidance of local optima during training. The idea of "transfer learning" makes good use of the powerful "feature learning" capability of deep neural networks. Of course, this does not limit the present application.

Through the above, the trained neural network model of the present application is obtained and can be put to actual use. Practical usage scenarios are described below.
In actual use, the device can receive the speech signal emitted by the user, and input the acoustic features corresponding to that signal into the trained neural network model, so that after computation the model outputs the probabilities that the phonemes corresponding to the set word match the respective acoustic features, from which whether to perform the set operation is judged.

Specifically, judging, from the probabilities output by the trained neural network model that the acoustic features correspond to the phonemes corresponding to the set word, whether to perform the wake-up operation includes: determining the maximum likelihood probability among the probabilities, output by the neural network model, that each acoustic feature corresponds to a phoneme corresponding to the set word; determining the mapping between each obtained maximum likelihood probability and the corresponding phoneme; and judging, from the mappings and a confidence threshold, whether to perform the wake-up operation.

It should be noted here that after the acoustic features pass through the computation of the above neural network model, the model outputs a probability distribution for each acoustic feature; the distribution reflects the various possibilities that the acoustic feature matches the phonemes corresponding to the set word. Clearly, for any acoustic feature, the maximum value in its distribution (i.e., the maximum likelihood probability) indicates the set-word phoneme that the feature is most likely to match; therefore, in the above step of the present application, the maximum likelihood probability among each acoustic feature's probabilities over the set word's phonemes is determined.

In addition, in the above step, judging from the mappings and the confidence threshold whether to perform the wake-up operation specifically includes: for the phoneme(s) corresponding to each set word, counting the number of maximum likelihood probabilities mapped to that phoneme as the phoneme's confidence; judging whether the confidence of every phoneme exceeds the confidence threshold; if so, performing the set operation; otherwise, not performing it.

So far, in the present application, once the voice device has obtained the acoustic features, it can input them into the neural network model of the voice wake-up module for computation, obtaining the probability distribution of the phonemes each acoustic feature may represent; moreover, the neural network model maps each acoustic feature to the phoneme with the highest probability, so that the phoneme statistics of the per-frame acoustic features within one history window can be tallied to determine whether the speech signal corresponds to the set word. The neural-network-based manner of computation adopted in the present application can effectively reduce the computational order of magnitude and the processing resources consumed; at the same time, the neural network model is easy to train, which effectively improves its applicability.
To clearly explain the above execution of a set operation, a scenario in which the set word is a wake-up word and the set operation is a wake-up operation for a voice device is described in detail below:

In this scenario, assume the wake-up word preset in the voice device is "大白, 你好"; the standard phonemes corresponding to this wake-up word (to distinguish them from the phonemes of the phrase the user speaks during recognition, the phonemes of the preset wake-up word are called standard phonemes) are: d, a4, b, ai2, n, i3, h, ao3.

First, to represent each phoneme's probability distribution intuitively, a graphic such as a histogram can be used; this example uses a histogram, i.e., a histogram bar is established, via the above deep neural network model, for each phoneme and for the "Garbage" node. As shown in FIG. 3a, each phoneme (including the "Garbage" node) corresponds to one histogram bar (since the speech-signal recognition process has not yet been performed, the height of every phoneme's bar in FIG. 3a is zero); the height of a bar reflects the statistic of acoustic features mapped to that phoneme. This statistic can be regarded as the phoneme's confidence.

Afterwards, the voice wake-up module in the voice wake-up device receives the speech signal to be recognized. Typically, before the voice wake-up module runs, the detection of the speech signal is performed by a VAD module, the purpose being to detect whether a speech signal exists (as distinct from silence). Once a speech signal is detected, the voice wake-up system starts working, i.e., performs computation with the neural network model.
During the computation of the deep neural network model, the voice wake-up module inputs the acoustic features obtained from the speech signal emitted by the user (including acoustic features obtained by splicing several frames' speech feature vectors in the manner described above) into the deep neural network model for forward-propagation computation. To improve computational efficiency, "block computation" can also be used here, i.e., the speech feature vectors of several consecutive speech-signal frames (forming one active window) are input into the deep neural network model simultaneously and then processed as a matrix computation. Of course, this does not limit the present application.
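The "block computation" described here amounts to batching consecutive spliced frames so that each layer's work becomes one matrix multiplication; a sketch follows (the block size is arbitrary, and model stands for any forward function such as the earlier sketch):

    import numpy as np

    def block_forward(spliced: np.ndarray, model, block: int = 16) -> np.ndarray:
        """spliced: (T, D) spliced feature vectors of consecutive frames (one
        active window); model: a callable mapping a (b, D) matrix to (b, K)
        probabilities. Frames are pushed through the network block by block."""
        outputs = [model(spliced[i : i + block])
                   for i in range(0, len(spliced), block)]
        return np.concatenate(outputs)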
The values output by the output layer of the deep neural network model represent the probability distribution over the corresponding phonemes given a speech feature vector. Clearly, the probability that the pronunciation phonemes of the wake-up word cover non-"Garbage" nodes is larger. The phoneme corresponding to the maximum likelihood probability of the output layer is taken, its histogram bar is incremented by one unit, and the corresponding timestamp (in units of frames) is recorded.

Specifically, suppose that, for the speech feature vector of some speech-signal frame, the pronunciation phoneme with the maximum output-layer probability is the wake-up-word phoneme "d"; then, in the histogram shown in FIG. 3a, the height of the bar corresponding to the standard phoneme "d" increases by one unit. If instead the pronunciation phoneme with the maximum output-layer probability is not any pronunciation phoneme of the wake-up word, the bar corresponding to "Garbage" increases by one unit, indicating that the speech feature vector of this frame does not correspond to any pronunciation phoneme of the wake-up word. Proceeding in this manner finally yields a histogram like that shown in FIG. 3b.

Within one history window, each bar's share of coverage can be regarded as the confidence of each phoneme. In the embodiments of the present application, a confidence threshold can be preset; for example, after the deep neural network training is completed, cross-validation experiments on a validation set can yield the confidence threshold. The role of the confidence threshold is as follows: for a given speech signal, once the histograms of the pronunciation phonemes of the wake-up word corresponding to that signal have been determined by the procedure described above, the histograms and the confidence threshold can be used to judge whether the bar height (i.e., confidence) of every pronunciation phoneme of the wake-up word exceeds the confidence threshold; if so, it can be determined that the speech signal is the one corresponding to the wake-up word, and the corresponding voice wake-up operation can be performed.
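A sketch of the histogram statistic and the threshold test (treating a bar's coverage share as its height divided by the window length is an assumed reading; the threshold itself would come from the cross-validation experiments described above):

    import numpy as np

    PHONES = ["d", "a4", "b", "ai2", "n", "i3", "h", "ao3", "Garbage"]

    def wake_decision(probs: np.ndarray, threshold: float) -> bool:
        """probs: (T, 9) output rows over one history window. Each frame
        increments the bar of its maximum-likelihood phoneme; the wake-up fires
        only when every wake-word phoneme's coverage share (its confidence)
        exceeds the confidence threshold."""
        hist = np.zeros(len(PHONES))
        for row in probs:
            hist[int(row.argmax())] += 1.0
        confidence = hist / len(probs)                    # coverage share per bar
        return bool(np.all(confidence[:-1] > threshold))  # Garbage bar excluded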
It should further be noted that every time a bar grows by one unit, the voice wake-up device records the corresponding timestamp. The timestamp is in units of frames and represents the relative temporal order, within the speech signal, of the speech-signal frame to which the speech acoustic feature belongs, i.e., the position of that frame on the time axis. If, when a unit is added to the histogram for a speech acoustic feature, the recorded timestamp is X, the timestamp indicates that the speech-signal frame to which the feature belongs is the Xth frame. From the timestamps, the positions on the time axis of the speech-signal frames to which different speech acoustic features belong can be determined. It can be seen that if the speech signal to be recognized indeed contains the wake-up word "大白, 你好", then, in the histogram shown in FIG. 3b, the timestamps recorded for the bars from "d" through "ao3" should be monotonically increasing.

In practical applications, if timestamps are introduced as a condition for deciding whether to perform the wake-up operation, the speech signal is considered to be the one corresponding to the wake-up word, and the wake-up operation is performed, only when the bar heights from "d" through "ao3" all exceed the confidence threshold and, according to the recorded timestamps, the timestamps corresponding to the bars from "d" through "ao3" are judged to be monotonically increasing.

Introducing timestamps as a condition for the wake-up decision suits scenarios that require each character of the wake-up word to be pronounced in sequence before the wake-up operation can be performed.
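One way to realize the timestamp condition is sketched below, redeclaring PHONES from the previous sketch for self-containment (requiring each phoneme's earliest hit to follow the previous phoneme's latest hit is an assumed strict reading of "monotonically increasing"):

    import numpy as np

    PHONES = ["d", "a4", "b", "ai2", "n", "i3", "h", "ao3", "Garbage"]

    def timestamps_monotonic(probs: np.ndarray) -> bool:
        """Record the frame indices (timestamps) at which each wake-word
        phoneme's bar is incremented, then check that the timestamps increase
        monotonically from "d" through "ao3"; Garbage hits are ignored."""
        stamps = {p: [] for p in PHONES[:-1]}
        for t, row in enumerate(probs):
            p = PHONES[int(row.argmax())]
            if p != "Garbage":
                stamps[p].append(t)
        last = -1
        for p in PHONES[:-1]:              # expected pronunciation order
            if not stamps[p] or min(stamps[p]) <= last:
                return False
            last = max(stamps[p])
        return True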
In practical applications, the above content is not limited to the voice wake-up operation; it applies equally to voice-triggered set operations in other scenarios. No further details are given here.

The above is the execution method of a set operation provided by the embodiments of the present application. Based on the same idea, the embodiments of the present application further provide an apparatus for performing a set operation, as shown in FIG. 4.
In FIG. 4, the apparatus for performing a set operation includes an acquisition module 401, a neural network module 402, and a judgment module 403, where:

the acquisition module 401 is configured to obtain acoustic features of a speech signal;

the neural network module 402 is configured to input the obtained acoustic features of the speech signal into a trained neural network model, where the samples used to train the neural network model include at least acoustic-feature samples of speech signals corresponding to the set word;

the judgment module 403 is configured to judge, from the probabilities output by the trained neural network model that the acoustic features correspond to the phonemes corresponding to the set word, whether to perform the set operation.

The acquisition module 401 is specifically configured to obtain the acoustic features of the speech signal from speech-signal frames.

More specifically, the acquisition module 401 is configured to take the currently sampled speech-signal frame as the reference frame and, starting from the first frame after the first number of speech-signal frames, to execute frame by frame for each subsequent speech-signal frame: acquiring the acoustic features of the first number of speech-signal frames arranged on the time axis before the reference frame, and the acoustic features of the second number of speech-signal frames arranged on the time axis after it, and splicing the acquired acoustic features to obtain the acoustic features of the speech signal.

In the above, the second number is smaller than the first number.

In addition, the apparatus further includes: a voice activity detection module 404, configured to judge, before the acoustic features of the speech signal are obtained, whether a speech signal exists by performing voice activity detection (VAD), and to obtain the acoustic features when the judgment is yes.
In the embodiments of the present application, the neural network module 402 is specifically configured to train the neural network model in the following manner: determining the number of output-layer nodes of the deep neural network to be trained according to the number of phoneme samples corresponding to the set word;

and executing the following steps in a loop until the maximum probability value in the probability distribution, output by the deep neural network to be trained, of the phonemes corresponding to the set word's acoustic-feature samples is the correctly pronounced phoneme of those samples: inputting the training samples into the deep neural network to be trained, so that it forward-propagates the features of the input samples up to the output layer; computing the error with a preset objective function; back-propagating the error from the output layer through the deep neural network model; and adjusting the weights of the deep neural network model layer by layer according to the error.

On the basis of the training completed by the above neural network module 402, the judgment module 403 is specifically configured to determine the maximum likelihood probability among the probabilities, output by the neural network model, that each acoustic feature corresponds to a phoneme corresponding to the set word; to determine the mapping between each obtained maximum likelihood probability and the corresponding phoneme; and to judge, from the mappings and the confidence threshold, whether to perform the wake-up operation.

More specifically, the judgment module 403 is configured, for the phoneme(s) corresponding to each set word, to count the number of maximum likelihood probabilities mapped to that phoneme as the phoneme's confidence, and to judge whether the confidence of every phoneme exceeds the confidence threshold; if so, the set operation is performed; otherwise, it is not performed.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.

The memory may include non-persistent memory in computer-readable media, in forms such as random-access memory (RAM) and/or non-volatile memory, e.g., read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.

Computer-readable media include permanent and non-permanent, removable and non-removable media, and can implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact-disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission media, which can be used to store information accessible to a computing device. As defined herein, computer-readable media exclude transitory media, such as modulated data signals and carrier waves.

It should also be noted that the terms "comprise", "include", and any variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or further includes elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes that element.

Those skilled in the art should understand that the embodiments of the present application can be provided as a method, a system, or a computer program product. Therefore, the present application can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present application can take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.

The above are merely embodiments of the present application and are not intended to limit it. Those skilled in the art may make various modifications and variations to the present application; any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall be included within the scope of the claims of the present application.

Claims (16)

  1. A method for performing a set operation, characterized by comprising:
    obtaining acoustic features of a speech signal;
    inputting the obtained acoustic features of the speech signal into a trained neural network model, wherein the samples used to train the neural network model include at least acoustic-feature samples of speech signals corresponding to a set word;
    judging, from the probabilities output by the trained neural network model that the acoustic features correspond to the phonemes corresponding to the set word, whether to perform the set operation.
  2. The method according to claim 1, characterized in that obtaining acoustic features of a speech signal specifically comprises:
    obtaining the acoustic features of the speech signal from speech-signal frames.
  3. The method according to claim 2, characterized in that obtaining the acoustic features of the speech signal from speech-signal frames comprises:
    for each reference frame among the speech-signal frames in turn, executing: acquiring the acoustic features of a first number of speech-signal frames arranged on the time axis before the reference frame, and the acoustic features of a second number of speech-signal frames arranged on the time axis after the reference frame;
    splicing the acquired acoustic features to obtain the acoustic features of the speech signal.
  4. The method according to claim 3, characterized in that the second number is smaller than the first number.
  5. The method according to claim 1, characterized in that, before the acoustic features of the speech signal are obtained, the method further comprises:
    judging whether a speech signal exists by performing voice activity detection (VAD);
    when the judgment is yes, obtaining the acoustic features of the speech signal.
  6. The method according to claim 1, characterized in that the neural network model is trained in the following manner:
    determining the number of output-layer nodes of the deep neural network to be trained according to the number of phoneme samples corresponding to the set word;
    executing the following steps in a loop until the maximum probability value in the probability distribution output by the deep neural network to be trained corresponds to the correctly pronounced phoneme of the speech-signal acoustic-feature samples:
    inputting the training samples into the deep neural network to be trained, so that it forward-propagates the features of the input samples up to the output layer; computing the error with a preset objective function; back-propagating the error from the output layer through the deep neural network model; and adjusting the weights of the deep neural network model layer by layer according to the error.
  7. The method according to claim 1, characterized in that judging, from the probabilities output by the trained neural network model that the acoustic features correspond to the phonemes corresponding to the set word, whether to perform the set operation comprises:
    determining the maximum likelihood probability among the probabilities, output by the neural network model, that each acoustic feature corresponds to a phoneme corresponding to the set word;
    determining the mapping between each obtained maximum likelihood probability and the corresponding phoneme;
    judging, from the mappings and a confidence threshold, whether to perform the set operation.
  8. The method according to claim 7, characterized in that judging, from the mappings and the confidence threshold, whether to perform the set operation specifically comprises:
    for the phoneme(s) corresponding to each set word, counting the number of maximum likelihood probabilities mapped to the phoneme as that phoneme's confidence;
    judging whether the confidence of every phoneme exceeds the confidence threshold;
    if so, performing the set operation;
    otherwise, not performing the set operation.
  9. An apparatus for performing a set operation, characterized by comprising:
    an acquisition module, configured to obtain acoustic features of a speech signal;
    a neural network module, configured to input the obtained acoustic features of the speech signal into a trained neural network model, wherein the samples used to train the neural network model include at least acoustic-feature samples of speech signals corresponding to a set word;
    a judgment module, configured to judge, from the probabilities output by the trained neural network model that the acoustic features correspond to the phonemes corresponding to the set word, whether to perform the set operation.
  10. The apparatus according to claim 9, characterized in that the acquisition module is specifically configured to obtain the acoustic features of the speech signal from speech-signal frames.
  11. The apparatus according to claim 10, characterized in that the acquisition module is specifically configured, for each reference frame among the speech-signal frames in turn, to execute: acquiring the acoustic features of a first number of speech-signal frames arranged on the time axis before the reference frame, and the acoustic features of a second number of speech-signal frames arranged on the time axis after the reference frame;
    and to splice the acquired acoustic features to obtain the acoustic features of the speech signal.
  12. The apparatus according to claim 11, characterized in that the second number is smaller than the first number.
  13. The apparatus according to claim 9, characterized in that the apparatus further comprises: a voice activity detection module, configured to judge, before the acoustic features of the speech signal are obtained, whether a speech signal exists by performing voice activity detection (VAD), and to obtain the acoustic features when the judgment is yes.
  14. The apparatus according to claim 9, characterized in that the neural network module is specifically configured to train the neural network model in the following manner: determining the number of output-layer nodes of the deep neural network to be trained according to the number of phoneme samples corresponding to the set word;
    executing the following steps in a loop until the maximum probability value in the probability distribution output by the deep neural network to be trained corresponds to the correctly pronounced phoneme of the speech-signal acoustic-feature samples: inputting the training samples into the deep neural network to be trained, so that it forward-propagates the features of the input samples up to the output layer; computing the error with a preset objective function; back-propagating the error from the output layer through the deep neural network model; and adjusting the weights of the deep neural network model layer by layer according to the error.
  15. The apparatus according to claim 9, characterized in that the judgment module is specifically configured to determine the maximum likelihood probability among the probabilities, output by the neural network model, that each acoustic feature corresponds to a phoneme corresponding to the set word; to determine the mapping between each obtained maximum likelihood probability and the corresponding phoneme; and to judge, from the mappings and the confidence threshold, whether to perform the set operation.
  16. The apparatus according to claim 9, characterized in that the judgment module is specifically configured, for the phoneme(s) corresponding to each set word, to count the number of maximum likelihood probabilities mapped to the phoneme as that phoneme's confidence, and to judge whether the confidence of every phoneme exceeds the confidence threshold; if so, the set operation is performed; otherwise, it is not performed.
PCT/CN2016/110671 2015-12-31 2016-12-19 Method and apparatus for performing a set operation WO2017114201A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201511029741.3A CN106940998B (zh) 2015-12-31 2015-12-31 Method and apparatus for performing a set operation
CN201511029741.3 2015-12-31

Publications (1)

Publication Number Publication Date
WO2017114201A1 (zh)

Family

ID=59224454

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/110671 WO2017114201A1 (zh) 2016-12-19 Method and apparatus for performing a set operation

Country Status (2)

Country Link
CN (1) CN106940998B (zh)
WO (1) WO2017114201A1 (zh)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109615066A (zh) * 2019-01-30 2019-04-12 新疆爱华盈通信息技术有限公司 一种针对neon优化的卷积神经网络的裁剪方法
CN110556099A (zh) * 2019-09-12 2019-12-10 出门问问信息科技有限公司 一种命令词控制方法及设备
CN110622176A (zh) * 2017-11-15 2019-12-27 谷歌有限责任公司 视频分区
CN110619871A (zh) * 2018-06-20 2019-12-27 阿里巴巴集团控股有限公司 语音唤醒检测方法、装置、设备以及存储介质
CN110782898A (zh) * 2018-07-12 2020-02-11 北京搜狗科技发展有限公司 端到端语音唤醒方法、装置及计算机设备
CN111128134A (zh) * 2018-10-11 2020-05-08 阿里巴巴集团控股有限公司 声学模型训练方法和语音唤醒方法、装置及电子设备
CN111816160A (zh) * 2020-07-28 2020-10-23 苏州思必驰信息科技有限公司 普通话和粤语混合语音识别模型训练方法及系统
CN111862963A (zh) * 2019-04-12 2020-10-30 阿里巴巴集团控股有限公司 语音唤醒方法、装置和设备
CN112259089A (zh) * 2019-07-04 2021-01-22 阿里巴巴集团控股有限公司 语音识别方法及装置
CN112668310A (zh) * 2020-12-17 2021-04-16 杭州国芯科技股份有限公司 一种语音深度神经网络模型输出音素概率的方法
CN112751633A (zh) * 2020-10-26 2021-05-04 中国人民解放军63891部队 一种基于多尺度窗口滑动的宽带频谱检测方法
CN113053377A (zh) * 2021-03-23 2021-06-29 南京地平线机器人技术有限公司 语音唤醒方法和装置、计算机可读存储介质、电子设备
CN113593527A (zh) * 2021-08-02 2021-11-02 北京有竹居网络技术有限公司 一种生成声学特征、语音模型训练、语音识别方法及装置
CN115101063A (zh) * 2022-08-23 2022-09-23 深圳市友杰智新科技有限公司 低算力语音识别方法、装置、设备及介质
CN111862963B (zh) * 2019-04-12 2024-05-10 阿里巴巴集团控股有限公司 语音唤醒方法、装置和设备

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107507621B (zh) * 2017-07-28 2021-06-22 维沃移动通信有限公司 Noise suppression method and mobile terminal
US20190114543A1 (en) * 2017-10-12 2019-04-18 British Cayman Islands Intelligo Technology Inc. Local learning system in artificial intelligence device
CN109754789B (zh) * 2017-11-07 2021-06-08 北京国双科技有限公司 Speech phoneme recognition method and apparatus
CN110444195B (zh) * 2018-01-31 2021-12-14 腾讯科技(深圳)有限公司 Speech keyword recognition method and apparatus
CN108763920A (zh) * 2018-05-23 2018-11-06 四川大学 Password strength evaluation model based on ensemble learning
CN108766420B (zh) * 2018-05-31 2021-04-02 中国联合网络通信集团有限公司 Wake-up word generation method and apparatus for a voice interaction device
CN108711429B (zh) * 2018-06-08 2021-04-02 Oppo广东移动通信有限公司 Electronic device and device control method
CN108766461B (zh) * 2018-07-17 2021-01-26 厦门美图之家科技有限公司 Audio feature extraction method and apparatus
CN109036412A (zh) * 2018-09-17 2018-12-18 苏州奇梦者网络科技有限公司 Voice wake-up method and system
CN110969805A (zh) * 2018-09-30 2020-04-07 杭州海康威视数字技术股份有限公司 Security detection method, apparatus and system
CN109358543B (zh) * 2018-10-23 2020-12-01 南京迈瑞生物医疗电子有限公司 Operating room control system and method, computer device, and storage medium
KR20200059054A (ko) * 2018-11-20 2020-05-28 삼성전자주식회사 Electronic device for processing user utterance, and control method of the electronic device
CN110033785A (zh) * 2019-03-27 2019-07-19 深圳市中电数通智慧安全科技股份有限公司 Call-for-help recognition method and apparatus, readable storage medium, and terminal device
CN112185425A (zh) * 2019-07-05 2021-01-05 阿里巴巴集团控股有限公司 Audio signal processing method, apparatus, device and storage medium
CN110751958A (zh) * 2019-09-25 2020-02-04 电子科技大学 Noise reduction method based on an RCED network
CN111145748B (zh) * 2019-12-30 2022-09-30 广州视源电子科技股份有限公司 Audio recognition confidence determination method, apparatus, device and storage medium
CN112750425B (zh) * 2020-01-22 2023-11-03 腾讯科技(深圳)有限公司 Speech recognition method and apparatus, computer device, and computer-readable storage medium
CN113744732A (zh) * 2020-05-28 2021-12-03 阿里巴巴集团控股有限公司 Device wake-up method and apparatus, and storytelling machine
CN111785256A (zh) * 2020-06-28 2020-10-16 北京三快在线科技有限公司 Acoustic model training method and apparatus, electronic device, and storage medium
CN112509568A (zh) * 2020-11-26 2021-03-16 北京华捷艾米科技有限公司 Voice wake-up method and apparatus
CN112735463A (zh) * 2020-12-16 2021-04-30 杭州小伴熊科技有限公司 AI correction method and apparatus for audio playback delay
CN114783438B (zh) * 2022-06-17 2022-09-27 深圳市友杰智新科技有限公司 Adaptive decoding method and apparatus, computer device, and storage medium

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7072837B2 (en) * 2001-03-16 2006-07-04 International Business Machines Corporation Method for processing initially recognized speech in a speech recognition session
US7203643B2 (en) * 2001-06-14 2007-04-10 Qualcomm Incorporated Method and apparatus for transmitting speech activity in distributed voice recognition systems
KR100449912B1 (ko) * 2002-02-20 2004-09-22 대한민국 Post-processing method for keyword detection in a speech recognition system
US7092883B1 (en) * 2002-03-29 2006-08-15 At&T Generating confidence scores from word lattices
US8959019B2 (en) * 2002-10-31 2015-02-17 Promptu Systems Corporation Efficient empirical determination, computation, and use of acoustic confusability measures
JP4843987B2 (ja) * 2005-04-05 2011-12-21 ソニー株式会社 Information processing apparatus, information processing method, and program
JP4827721B2 (ja) * 2006-12-26 2011-11-30 ニュアンス コミュニケーションズ,インコーポレイテッド Utterance segmentation method, apparatus, and program
US20110311144A1 (en) * 2010-06-17 2011-12-22 Microsoft Corporation Rgb/depth camera for improving speech recognition
EP2736042A1 (en) * 2012-11-23 2014-05-28 Samsung Electronics Co., Ltd Apparatus and method for constructing multilingual acoustic model and computer readable recording medium for storing program for performing the method
CN102945673A (zh) * 2012-11-24 2013-02-27 安徽科大讯飞信息科技股份有限公司 Continuous speech recognition method with a dynamically changing voice command range
CN103971686B (zh) * 2013-01-30 2015-06-10 腾讯科技(深圳)有限公司 Automatic speech recognition method and system
CN103971685B (zh) * 2013-01-30 2015-06-10 腾讯科技(深圳)有限公司 Voice command recognition method and system
US9721561B2 (en) * 2013-12-05 2017-08-01 Nuance Communications, Inc. Method and apparatus for speech recognition using neural networks with speaker adaptation
CN104751842B (zh) * 2013-12-31 2019-11-15 科大讯飞股份有限公司 Optimization method and system for a deep neural network

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060136207A1 (en) * 2004-12-21 2006-06-22 Electronics And Telecommunications Research Institute Two stage utterance verification device and method thereof in speech recognition system
CN103117060A (zh) * 2013-01-18 2013-05-22 中国科学院声学研究所 Modeling method and modeling system for acoustic models for speech recognition
CN104681036A (zh) * 2014-11-20 2015-06-03 苏州驰声信息科技有限公司 Language audio detection system and method
CN104575490A (zh) * 2014-12-30 2015-04-29 苏州驰声信息科技有限公司 Spoken pronunciation evaluation method based on a deep neural network posterior probability algorithm
CN105070288A (zh) * 2015-07-02 2015-11-18 百度在线网络技术(北京)有限公司 In-vehicle voice command recognition method and apparatus
CN105096939A (zh) * 2015-07-08 2015-11-25 百度在线网络技术(北京)有限公司 Voice wake-up method and apparatus

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110622176B (zh) * 2017-11-15 2023-07-25 谷歌有限责任公司 Video partitioning
CN110622176A (zh) * 2017-11-15 2019-12-27 谷歌有限责任公司 Video partitioning
CN110619871A (zh) * 2018-06-20 2019-12-27 阿里巴巴集团控股有限公司 Voice wake-up detection method, apparatus, device, and storage medium
CN110782898B (zh) * 2018-07-12 2024-01-09 北京搜狗科技发展有限公司 End-to-end voice wake-up method and apparatus, and computer device
CN110782898A (zh) * 2018-07-12 2020-02-11 北京搜狗科技发展有限公司 End-to-end voice wake-up method and apparatus, and computer device
CN111128134B (zh) * 2018-10-11 2023-06-06 阿里巴巴集团控股有限公司 Acoustic model training method, voice wake-up method and apparatus, and electronic device
CN111128134A (zh) * 2018-10-11 2020-05-08 阿里巴巴集团控股有限公司 Acoustic model training method, voice wake-up method and apparatus, and electronic device
CN109615066A (zh) * 2019-01-30 2019-04-12 新疆爱华盈通信息技术有限公司 Pruning method for convolutional neural networks optimized for NEON
CN111862963A (zh) * 2019-04-12 2020-10-30 阿里巴巴集团控股有限公司 Voice wake-up method, apparatus, and device
CN111862963B (zh) * 2019-04-12 2024-05-10 阿里巴巴集团控股有限公司 Voice wake-up method, apparatus, and device
CN112259089A (zh) * 2019-07-04 2021-01-22 阿里巴巴集团控股有限公司 Speech recognition method and apparatus
CN110556099A (zh) * 2019-09-12 2019-12-10 出门问问信息科技有限公司 Command word control method and device
CN111816160A (zh) * 2020-07-28 2020-10-23 苏州思必驰信息科技有限公司 Training method and system for a mixed Mandarin-Cantonese speech recognition model
CN112751633A (zh) * 2020-10-26 2021-05-04 中国人民解放军63891部队 Wideband spectrum detection method based on multi-scale window sliding
CN112668310A (zh) * 2020-12-17 2021-04-16 杭州国芯科技股份有限公司 Method for outputting phoneme probabilities from a speech deep neural network model
CN112668310B (zh) * 2020-12-17 2023-07-04 杭州国芯科技股份有限公司 Method for outputting phoneme probabilities from a speech deep neural network model
CN113053377A (zh) * 2021-03-23 2021-06-29 南京地平线机器人技术有限公司 Voice wake-up method and apparatus, computer-readable storage medium, and electronic device
CN113593527A (zh) * 2021-08-02 2021-11-02 北京有竹居网络技术有限公司 Method and apparatus for acoustic feature generation, speech model training, and speech recognition
CN113593527B (zh) * 2021-08-02 2024-02-20 北京有竹居网络技术有限公司 Method and apparatus for acoustic feature generation, speech model training, and speech recognition
CN115101063B (zh) * 2022-08-23 2023-01-06 深圳市友杰智新科技有限公司 Low-compute speech recognition method, apparatus, device, and medium
CN115101063A (zh) * 2022-08-23 2022-09-23 深圳市友杰智新科技有限公司 Low-compute speech recognition method, apparatus, device, and medium

Also Published As

Publication number Publication date
CN106940998A (zh) 2017-07-11
CN106940998B (zh) 2021-04-16

Similar Documents

Publication Publication Date Title
WO2017114201A1 (zh) Method and apparatus for performing a setting operation
US11699433B2 (en) Dynamic wakeword detection
US10510340B1 (en) Dynamic wakeword detection
CN110364143B (zh) Voice wake-up method and apparatus, and intelligent electronic device therefor
US11657832B2 (en) User presence detection
US11915699B2 (en) Account association with device
US11361763B1 (en) Detecting system-directed speech
US9754584B2 (en) User specified keyword spotting using neural network feature extractor
CN108320733B (zh) Speech data processing method and apparatus, storage medium, and electronic device
JP7336537B2 (ja) Combined endpointing and automatic speech recognition
US7693713B2 (en) Speech models generated using competitive training, asymmetric training, and data boosting
US10872599B1 (en) Wakeword training
US11069352B1 (en) Media presence detection
US11205420B1 (en) Speech processing using a recurrent neural network
JP2019211749A (ja) Method and apparatus for detecting the start and end points of speech, computer device, and program
JP2023089116A (ja) End-to-end streaming keyword spotting
US11386887B1 (en) Natural language processing using context
US20230162728A1 (en) Wakeword detection using a neural network
US20240029739A1 (en) Sensitive data control
US11557292B1 (en) Speech command verification
US11769491B1 (en) Performing utterance detection using convolution
US11763806B1 (en) Speaker recognition adaptation
US11437043B1 (en) Presence data determination and utilization
TWI776799B (zh) Method and apparatus for performing a setting operation
KR20220129034A (ko) Small-footprint multi-channel keyword spotting

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 16880990; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: PCT application non-entry in European phase (Ref document number: 16880990; Country of ref document: EP; Kind code of ref document: A1)