CN112712801A - Voice wake-up method and device, electronic equipment and storage medium - Google Patents
- Publication number: CN112712801A
- Application number: CN202011474857.9A
- Authority: CN (China)
- Prior art keywords: model, awakening, training, adaptive, wake
- Legal status: Granted (assumed; Google has not performed a legal analysis)
Classifications
- G10L15/22: Speech recognition; procedures used during a speech recognition process, e.g. man-machine dialogue
- G06N3/045: Computing arrangements based on biological models; neural networks; architecture, e.g. interconnection topology; combinations of networks
- G10L15/063: Speech recognition; creation of reference templates; training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
Abstract
The application discloses a voice wake-up method and apparatus, an electronic device, and a storage medium. The method comprises the following steps: training an initial wake-up model based on first training sample data to obtain a seed wake-up model; initializing an adaptive wake-up model from the seed wake-up model, and training the adaptive wake-up model based on second training sample data, where the second training sample data comprises speech recognition data and wake-up word data; and performing keyword detection on data to be detected with the trained adaptive wake-up model to implement voice wake-up. By obtaining the seed wake-up model, deriving the adaptive wake-up model from it, and training the adaptive wake-up model on samples that contain both speech recognition data and wake-up word data, overfitting during model training is avoided, and wake-up accuracy improves when the trained adaptive model is used for voice wake-up.
Description
Technical Field
The embodiment of the disclosure relates to the technical field of data processing, and in particular, to a voice wake-up method and apparatus, an electronic device, and a storage medium.
Background
Voice wake-up is one of the main applications of keyword detection: a user starts the whole voice interaction process with a wake-up word. At present, real-time detection is mainly implemented with a Recurrent Neural Network Transducer (RNN-T) model, which consists of three parts: an encoder network (Encoder), a prediction network (Prediction Network), and a joint network (Joint Network).
When the RNN-T model is applied to voice wake-up, wake-up word training data and speech recognition training data are usually mixed directly to train the model, and the trained model is then used for voice wake-up. However, the prediction network in the RNN-T plays a role similar to the language model in speech recognition: it predicts the next word from the previous word. Because the training data contains many wake-up word utterances whose corresponding texts are all identical, the prediction network overfits, which causes frequent false wake-ups when the model is used for voice wake-up and degrades wake-up accuracy.
Disclosure of Invention
Embodiments of the present disclosure provide a voice wake-up method and apparatus, an electronic device, and a storage medium, to implement voice wake-up through keyword detection.
In a first aspect, an embodiment of the present disclosure provides a voice wake-up method, including:
training an initial wake-up model based on first training sample data to obtain a seed wake-up model, where the first training sample data comprises speech recognition data;
initializing the seed wake-up model to obtain an adaptive wake-up model, and training the adaptive wake-up model based on second training sample data, where the second training sample data comprises speech recognition data and wake-up word data; and
performing keyword detection on data to be detected with the trained adaptive wake-up model to implement voice wake-up.
In a second aspect, an embodiment of the present disclosure further provides a voice wake-up apparatus, where the apparatus includes:
a seed wake-up model obtaining module, configured to train an initial wake-up model based on first training sample data to obtain a seed wake-up model, where the first training sample data comprises speech recognition data;
an adaptive wake-up model training module, configured to initialize the seed wake-up model to obtain an adaptive wake-up model and train the adaptive wake-up model based on second training sample data, where the second training sample data comprises speech recognition data and wake-up word data; and
a voice wake-up module, configured to perform keyword detection on data to be detected with the trained adaptive wake-up model to implement voice wake-up.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, where the electronic device includes:
one or more processors;
a storage device for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement a method according to any embodiment of the present disclosure.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable storage medium, on which a computer program is stored, which when executed by a processor, implements a method according to any of the embodiments of the present disclosure.
In the embodiments of the present disclosure, a seed wake-up model is obtained, an adaptive wake-up model is derived from it, and the adaptive wake-up model is trained on samples that contain both speech recognition data and wake-up word data. This avoids overfitting during model training, and wake-up accuracy improves when the trained adaptive model is used for voice wake-up.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1A is a flowchart of a voice wake-up method according to an embodiment of the disclosure;
fig. 1B is a schematic structural diagram of an adaptive wake-up model according to an embodiment of the disclosure;
fig. 2 is a flowchart of a voice wake-up method according to a second embodiment of the disclosure;
fig. 3 is a schematic structural diagram of a voice wake-up apparatus according to a third embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Example one
Fig. 1A is a flowchart of a voice wake-up method provided in an embodiment of the present disclosure. The embodiment is applicable to end-to-end voice wake-up. The method may be executed by the voice wake-up apparatus provided in the embodiments of the present disclosure; the apparatus may be implemented in software and/or hardware and is generally integrated in a computer device.
as shown in fig. 1A, the method in the embodiments of the present disclosure may include the following steps:
step S101, training the initial awakening model based on first training sample data to obtain a seed awakening model, wherein the first training sample data comprises voice recognition data.
Optionally, the wake-up model is a Recurrent Neural Network Transducer (RNN-T) model. The RNN-T model includes an encoder network (Encoder), a prediction network (Prediction Network), and a joint network (Joint Network), with the joint network connected to both the encoder network and the prediction network. The input of the encoder network is the acoustic feature, the input of the prediction network is the previously predicted symbol (text), and the output of the whole RNN-T model is the probability distribution of the current symbol.
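To make this structure concrete, below is a minimal PyTorch-style sketch of an RNN-T with encoder, prediction, and joint networks. The layer types and sizes (LSTM encoders, hidden width 320, vocabulary 500) are illustrative assumptions, not the patent's specification.

```python
import torch
import torch.nn as nn

class RNNT(nn.Module):
    def __init__(self, feat_dim=80, vocab_size=500, hidden=320):
        super().__init__()
        # Encoder network: consumes acoustic features frame by frame.
        self.encoder = nn.LSTM(feat_dim, hidden, num_layers=2, batch_first=True)
        # Prediction network: consumes previously predicted symbols,
        # behaving like a language model over the output text.
        self.embed = nn.Embedding(vocab_size, hidden)
        self.predictor = nn.LSTM(hidden, hidden, num_layers=1, batch_first=True)
        # Joint network: combines encoder and prediction outputs into a
        # distribution over the current symbol.
        self.joint = nn.Sequential(
            nn.Linear(2 * hidden, hidden),
            nn.Tanh(),
            nn.Linear(hidden, vocab_size),
        )

    def forward(self, feats, prev_symbols):
        enc, _ = self.encoder(feats)                         # (B, T, H)
        pred, _ = self.predictor(self.embed(prev_symbols))   # (B, U, H)
        t, u = enc.size(1), pred.size(1)
        enc = enc.unsqueeze(2).expand(-1, -1, u, -1)         # (B, T, U, H)
        pred = pred.unsqueeze(1).expand(-1, t, -1, -1)       # (B, T, U, H)
        return self.joint(torch.cat([enc, pred], dim=-1))    # (B, T, U, V) logits
```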
Optionally, training the initial wake-up model based on the first training sample data to obtain the seed wake-up model may include: acquiring first initial sample data; expanding the first initial sample data to obtain the first training sample data; and training the initial RNN-T model based on the first training sample data to obtain the seed RNN-T model.
Specifically, in this embodiment, first initial sample data is acquired. It contains a small amount of speech recognition data, where speech recognition data refers to speech of arbitrary content together with the corresponding text. Means such as room impulse response convolution, speed perturbation, and noise addition may be applied to the first initial sample data to expand its diversity, yielding first training sample data with richer content.
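As an illustration of the expansion step, here is a hedged sketch of the three augmentations named above. The helper inputs (rir, noise) and the SNR handling are assumptions; the patent does not fix the tooling.

```python
# A sketch of data expansion via room impulse response, speed perturbation,
# and noise addition, assuming numpy/scipy; parameters are illustrative.
import numpy as np
from scipy.signal import fftconvolve, resample

def augment(wave, sr, rir=None, noise=None, speed=1.0, snr_db=15.0):
    out = wave.astype(np.float32)
    if rir is not None:                      # room impulse response convolution
        out = fftconvolve(out, rir)[: len(out)]
    if speed != 1.0:                         # speed perturbation by resampling
        out = resample(out, int(len(out) / speed))
    if noise is not None:                    # additive noise at a target SNR
        noise = np.resize(noise, len(out)).astype(np.float32)
        gain = np.sqrt((out ** 2).mean()
                       / ((noise ** 2).mean() * 10 ** (snr_db / 10) + 1e-9))
        out = out + gain * noise
    return out
```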
It should be noted that the initial RNN-T model is trained on the first training sample data obtained after expansion to produce the seed RNN-T model, whose network parameters are better optimized than those of the initial RNN-T model.
Step S102, initializing the seed wake-up model to obtain an adaptive wake-up model, and training the adaptive wake-up model based on second training sample data, where the second training sample data includes speech recognition data and wake-up word data.
Optionally, initializing the seed wake-up model to obtain the adaptive wake-up model may include: adding a feed-forward neural network (FFNN) on the basis of the seed RNN-T model, where the FFNN is connected to the encoder network; taking the seed RNN-T model as a first branch and the FFNN together with the encoder network as a second branch; and obtaining the adaptive wake-up model from the first branch and the second branch.
Specifically, Fig. 1B shows the structure of the adaptive wake-up model in this embodiment. It is obtained by adding a feed-forward neural network (FFNN) to the seed RNN-T model: the seed RNN-T model on the left of the figure forms the first branch, and the FFNN plus the encoder on the right forms the second branch. The two branches share one structural part, namely the encoder network.
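Below is a sketch of the two-branch adaptive model, reusing the RNNT sketch above. Placing a CTC-style FFNN head on the shared encoder is an assumption consistent with the loss in equation (1) later in this section.

```python
import torch.nn as nn

class AdaptiveWakeModel(nn.Module):
    def __init__(self, rnnt: RNNT, vocab_size=500, hidden=320):
        super().__init__()
        self.rnnt = rnnt                        # first branch: whole RNN-T
        # Second branch: FFNN on top of the shared encoder output.
        self.ffnn = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, vocab_size),
        )

    def forward(self, feats, prev_symbols):
        enc, _ = self.rnnt.encoder(feats)       # shared encoder output
        # First-branch logits (the encoder is run once more inside
        # self.rnnt; kept simple for clarity).
        rnnt_logits = self.rnnt(feats, prev_symbols)
        ffnn_logits = self.ffnn(enc)            # second-branch per-frame logits
        return rnnt_logits, ffnn_logits
```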
Optionally, training the adaptive wake-up model based on second training sample data includes: acquiring second initial sample data; expanding the second initial sample data to obtain second training sample data; training the adaptive wake-up model based on the second training sample data.
Specifically, when the adaptive wake-up model is trained, second initial sample data is first acquired. It contains a small amount of speech recognition data and wake-up word data. Although the first initial sample data also contains speech recognition data, only the data format is the same; the content of the speech recognition data differs. Wake-up word data refers to the speech and text corresponding to the keyword. The second initial sample data is expanded in roughly the same way as the first: room impulse response convolution, speed perturbation, noise addition, and similar means increase its diversity, yielding richer second training sample data. After expansion, the data forms contained in the second training sample data are still speech recognition data and wake-up word data; only the data types are more diverse.
Optionally, training the adaptive wake-up model based on the second training sample data includes: training the first branch on the speech recognition data in the second training sample to obtain a first loss function result; training the second branch on the speech recognition data and the wake-up word data in the second training sample to obtain a second loss function result; and determining the weighted sum of the first loss function result and the second loss function result, and determining that the adaptive model training is complete when the weighted loss sum is smaller than a preset loss threshold.
In this embodiment, the two branches of the adaptive wake-up model correspond to different loss functions and are trained on different data from the second training sample: the first branch mainly uses the speech recognition data, while the second branch uses both the speech recognition data and the wake-up word data. Because the wake-up word data does not pass through the prediction network during training, the prediction network no longer overfits to the identical wake-up word texts. During adaptive model training, the following formula (1) may be used as the loss function of the whole model:
L_MT = α · L_RNN-T + β · L_CTC    (1)
where L_MT denotes the loss function of the whole adaptive model, L_RNN-T the loss function corresponding to the first branch, L_CTC the loss function corresponding to the second branch, α the weight coefficient of the first branch, and β the weight coefficient of the second branch.
The first branch is trained on the speech recognition data to obtain the first loss function result, and the second branch on the speech recognition data and the wake-up word data to obtain the second loss function result. With weight coefficient α for the first branch and β for the second branch, the weighted loss sum of the whole model is obtained and compared with the preset loss threshold; if the weighted sum is smaller than the threshold, the adaptive model training is determined to be complete.
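A hedged sketch of the joint objective from equation (1) follows. The torchaudio RNN-T loss and PyTorch CTC loss are stand-ins for the branch losses, and the weights alpha/beta are hypothetical hyperparameters.

```python
import torch.nn.functional as F
from torchaudio.functional import rnnt_loss

def multitask_loss(rnnt_logits, ffnn_logits, targets, feat_lens, target_lens,
                   alpha=0.7, beta=0.3):
    # First branch: RNN-T loss over (B, T, U, V) logits; torchaudio expects
    # int32 targets and lengths.
    l_rnnt = rnnt_loss(rnnt_logits, targets, feat_lens, target_lens)
    # Second branch: CTC loss over per-frame FFNN logits, shape (T, B, V).
    log_probs = F.log_softmax(ffnn_logits, dim=-1).transpose(0, 1)
    l_ctc = F.ctc_loss(log_probs, targets, feat_lens, target_lens)
    return alpha * l_rnnt + beta * l_ctc    # equation (1)
```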
Step S103, performing keyword detection on the data to be detected with the trained adaptive wake-up model to implement voice wake-up.
Optionally, performing keyword detection on the data to be detected with the trained adaptive wake-up model to implement voice wake-up may include: inputting the data to be detected into the trained adaptive wake-up model and obtaining a first prediction probability value from the first branch and a second prediction probability value from the second branch; determining the probability weighted sum of the first prediction probability value and the second prediction probability value; and, when the probability weighted sum is greater than a preset probability threshold, taking the symbol corresponding to the probability weighted sum as the keyword for voice wake-up.
Specifically, because the wake-up word data never passes through the prediction network during training, the overfitting caused by the identical wake-up word texts is avoided, so the trained adaptive wake-up model is more robust. When the trained adaptive wake-up model performs keyword detection on the data to be detected, its two branches both produce predictions: the first branch outputs a first prediction probability value and the second branch a second prediction probability value. The two values are weighted by the coefficient of each branch to obtain the probability weighted sum, and when this sum is greater than the preset threshold, the corresponding symbol is taken as the keyword for voice wake-up.
It should be noted that, when the trained adaptive wake-up model is used for keyword detection, the probability weighting computed from the prediction probability values of the two branches makes the detection result more accurate and improves wake-up accuracy.
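A sketch of the two-branch decision rule follows, with hypothetical branch weights and threshold; reducing each branch to a peak per-symbol probability is a simplification of the detection described above.

```python
import torch

def detect_keyword(model, feats, prev_symbols, alpha=0.7, beta=0.3, threshold=0.8):
    # Assumes batch size 1; alpha, beta, and threshold are illustrative.
    with torch.no_grad():
        rnnt_logits, ffnn_logits = model(feats, prev_symbols)
        p1 = rnnt_logits.softmax(dim=-1).amax(dim=(1, 2))   # (B, V) first-branch peaks
        p2 = ffnn_logits.softmax(dim=-1).amax(dim=1)        # (B, V) second-branch peaks
    score = alpha * p1 + beta * p2                          # probability weighted sum
    best = score.max(dim=-1)
    # Emit the symbol only when its weighted probability clears the threshold.
    return best.indices.item() if best.values.item() > threshold else None
```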
In the embodiments of the present disclosure, a seed wake-up model is obtained, an adaptive wake-up model is derived from it, and the adaptive wake-up model is trained on samples that contain both speech recognition data and wake-up word data. This avoids overfitting during model training, and wake-up accuracy improves when the trained adaptive model is used for voice wake-up.
Example two
Fig. 2 is a flowchart of a voice wake-up method provided in the second embodiment of the present disclosure. This embodiment may be combined with the alternatives in the foregoing embodiment. In the second embodiment, after keyword detection is performed on the data to be detected with the trained adaptive wake-up model to implement voice wake-up, the method further includes: detecting the voice wake-up result.
As shown in fig. 2, the method of the embodiment of the present disclosure specifically includes:
step S201, training the initial awakening model based on first training sample data to obtain a seed awakening model, wherein the first training sample data comprises voice recognition data.
Optionally, the wake-up model includes a recurrent neural Network (RNN-T) model, the RNN-T model includes an Encoder Network, which may be represented by a symbol Encoder, a Prediction Network may be represented by a symbol Prediction Network, and a Joint Network, which may be represented by a symbol Joint Network, and the Joint Network is connected to the Encoder Network and the Prediction Network, respectively. Where the input to the encoder network is the acoustic signature, the input to the prediction network is the last predicted symbol (text message), and the output of the entire RNN-T model is the probability distribution of the current symbol.
Optionally, training the initial wake-up model based on the first training sample data to obtain a seed wake-up model, which may include: acquiring first initial sample data; expanding the first initial sample data to obtain first training sample data; and training the initial RNN-T model based on first training sample data to obtain a seed RNN-T model.
Step S202, initializing the seed awakening model to obtain a self-adaptive awakening model, and training the self-adaptive awakening model based on second training sample data, wherein the second training sample data comprises voice recognition data and awakening word data.
Optionally, initializing the seed wake-up model to obtain the adaptive wake-up model may include: adding a forward neural FFNN network on the basis of the seed RNN-T model, wherein the FFNN network is connected with an encoder network; taking the seed RNN-T model as a first branch, and taking the FFNN network and the encoder network as a second branch; an adaptive wake-up model is obtained from the first branch and the second branch.
Optionally, training the adaptive wake-up model based on second training sample data includes: acquiring second initial sample data; expanding the second initial sample data to obtain second training sample data; training the adaptive wake-up model based on the second training sample data.
Optionally, training the adaptive wake-up model based on the second training sample data includes: training the first branch on the speech recognition data in the second training sample to obtain a first loss function result; training the second branch on the speech recognition data and the wake-up word data in the second training sample to obtain a second loss function result; and determining the weighted sum of the first and second loss function results, and determining that the adaptive model training is complete when the weighted loss sum is smaller than a preset loss threshold.
Step S203, performing keyword detection on the data to be detected with the trained adaptive wake-up model to implement voice wake-up.
Optionally, performing keyword detection on the data to be detected with the trained adaptive wake-up model to implement voice wake-up may include: inputting the data to be detected into the trained adaptive wake-up model and obtaining a first prediction probability value from the first branch and a second prediction probability value from the second branch; determining the probability weighted sum of the first prediction probability value and the second prediction probability value; and, when the probability weighted sum is greater than a preset probability threshold, taking the symbol corresponding to the probability weighted sum as the keyword for voice wake-up.
Step S204, detecting the voice awakening result.
Specifically, in this embodiment, after the trained adaptive wake-up model performs keyword detection on the data to be detected to implement voice wake-up, the voice wake-up result needs to be detected, i.e., whether the device starts the voice interaction process in response to the keyword. For example, suppose the keyword is "ABAB". When the user is determined to have uttered the speech corresponding to this keyword, it is checked whether the device interacts accordingly, e.g., whether it gives a voice response asking for an instruction. If the device starts the voice interaction process, the voice wake-up result is determined to be accurate; otherwise, the voice wake-up is determined to have failed.
It should be noted that, when the voice wake-up result is determined to have failed, the failure may be caused by a hardware fault of the device itself or by inaccurate sample data during the training of the adaptive wake-up model. An alarm prompt may be given when the voice wake-up result fails; the prompt may take voice or text form, and its specific form is not limited in this embodiment. The alarm prompt reminds the user to maintain the device or adjust the voice wake-up process as soon as possible, ensuring the accuracy of voice wake-up.
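A minimal sketch of this check-and-alarm flow, assuming hypothetical hooks device_responded() and alert(); the patent does not specify an interface.

```python
def verify_wakeup(device_responded, alert):
    # Did the device start the voice interaction process after the keyword?
    if device_responded():
        return True                 # wake-up result is accurate
    # Failure: a hardware fault or inaccurate training samples are both possible.
    alert("Voice wake-up failed: check the device or adjust the wake-up process.")
    return False
```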
In the embodiment of the disclosure, a seed wake-up model is obtained, an adaptive wake-up model is derived from it, and the adaptive wake-up model is trained on samples that contain both speech recognition data and wake-up word data, so overfitting during model training is avoided and wake-up accuracy improves when the trained adaptive model is used for voice wake-up. In addition, the voice wake-up result is detected, and an alarm prompt is given when voice wake-up is determined to have failed, prompting the user to maintain the device or adjust the voice wake-up process in time and ensuring the accuracy of voice wake-up.
EXAMPLE III
Fig. 3 is a schematic structural diagram of a voice wake-up apparatus according to a third embodiment of the present disclosure. The apparatus may be implemented in software and/or hardware and may generally be integrated in an electronic device performing the method. As shown in fig. 3, the apparatus may include:
a seed wake-up model obtaining module 310, configured to train the initial wake-up model based on first training sample data to obtain a seed wake-up model, where the first training sample data includes speech recognition data;
the adaptive wake-up model training module 320 is configured to initialize the seed wake-up model to obtain an adaptive wake-up model, and train the adaptive wake-up model based on second training sample data, where the second training sample data includes speech recognition data and wake-up word data.
And the voice awakening module 330 is configured to perform keyword detection on the data to be detected by using the trained adaptive awakening model, so as to implement voice awakening.
In the embodiments of the present disclosure, a seed wake-up model is obtained, an adaptive wake-up model is derived from it, and the adaptive wake-up model is trained on samples that contain both speech recognition data and wake-up word data. This avoids overfitting during model training, and wake-up accuracy improves when the trained adaptive model is used for voice wake-up.
Optionally, on the basis of the above technical solution, the wake-up model includes a Recurrent Neural Network Transducer (RNN-T) model, and the RNN-T model includes an encoder network, a prediction network, and a joint network, with the joint network connected to the encoder network and the prediction network, respectively.
Optionally, on the basis of the above technical scheme, the seed wake-up model obtaining module is configured to acquire first initial sample data;
expanding the first initial sample data to obtain first training sample data;
and training the initial RNN-T model based on first training sample data to obtain a seed RNN-T model.
Optionally, on the basis of the above technical solution, the adaptive wake-up model training module includes an adaptive wake-up obtaining sub-module, configured to add a feed-forward neural network (FFNN) on the basis of the seed RNN-T model, where the FFNN is connected to the encoder network;
taking the seed RNN-T model as a first branch, and taking the FFNN network and the encoder network as a second branch;
an adaptive wake-up model is obtained from the first branch and the second branch.
Optionally, on the basis of the above technical scheme, the adaptive wake-up model training module includes an adaptive wake-up model training submodule configured to obtain second initial sample data;
expanding the second initial sample data to obtain second training sample data;
training the adaptive wake-up model based on the second training sample data.
Optionally, on the basis of the above technical scheme, the adaptive wake-up model training sub-module is further configured to train the first branch based on the speech recognition data in the second training sample to obtain a first loss function result;
training the second branch based on the speech recognition data and the wake-up word data in the second training sample to obtain a second loss function result;
and determining the weighted sum of the loss functions of the first loss function result and the second loss function result, and determining that the adaptive model training is finished when the weighted sum of the loss functions is smaller than a preset loss threshold value.
Optionally, on the basis of the above technical scheme, the voice wake-up module is configured to input the data to be detected into the trained adaptive wake-up model and obtain a first prediction probability value of the first branch and a second prediction probability value of the second branch, respectively;
determining a probability weighted sum of the first prediction probability value and the second prediction probability value;
and, when the probability weighted sum is greater than a preset probability threshold, taking the symbol corresponding to the probability weighted sum as the keyword for voice wake-up.
The voice wake-up device provided by the embodiment of the present disclosure is similar to the voice wake-up method provided by the embodiments, and technical details that are not described in detail in the embodiment of the present disclosure may be referred to the embodiments, and the embodiment of the present disclosure has the same beneficial effects as the embodiments.
Example four
Referring now to FIG. 4, a block diagram of an electronic device 400 suitable for use in implementing embodiments of the present disclosure is shown. The electronic device in the embodiment of the present disclosure may be a device corresponding to a backend service platform of an application program, and may also be a mobile terminal device installed with an application program client. In particular, the electronic device may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle-mounted terminal (e.g., a car navigation terminal), etc., and a stationary terminal such as a digital TV, a desktop computer, etc. The electronic device shown in fig. 4 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 4, electronic device 400 may include a processing device (e.g., central processing unit, graphics processor, etc.) 401 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 402 or a program loaded from a storage device 408 into a Random Access Memory (RAM) 403. In the RAM 403, various programs and data necessary for the operation of the electronic device 400 are also stored. The processing device 401, the ROM 402, and the RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to bus 404.
Generally, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 407 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 408 including, for example, tape, hard disk, etc.; and a communication device 409. The communication means 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data. While fig. 4 illustrates an electronic device 400 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 409, or from the storage device 408, or from the ROM 402. The computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure when executed by the processing device 401.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: train an initial wake-up model based on first training sample data to obtain a seed wake-up model, where the first training sample data includes speech recognition data; initialize the seed wake-up model to obtain an adaptive wake-up model, and train the adaptive wake-up model based on second training sample data, where the second training sample data includes speech recognition data and wake-up word data; and perform keyword detection on data to be detected with the trained adaptive wake-up model to implement voice wake-up.
Computer program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of an element does not in some cases constitute a limitation on the element itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, [ example 1 ] there is provided a voice wake-up method comprising:
training an initial wake-up model based on first training sample data to obtain a seed wake-up model, wherein the first training sample data comprises speech recognition data;
initializing the seed wake-up model to obtain an adaptive wake-up model, and training the adaptive wake-up model based on second training sample data, wherein the second training sample data comprises speech recognition data and wake-up word data;
and performing keyword detection on data to be detected with the trained adaptive wake-up model to implement voice wake-up.
According to one or more embodiments of the present disclosure, [ example 2 ] there is provided the method of example 1, wherein the wake-up model comprises a Recurrent Neural Network Transducer (RNN-T) model, the RNN-T model comprising an encoder network, a prediction network, and a joint network, and the joint network being connected with the encoder network and the prediction network, respectively.
According to one or more embodiments of the present disclosure, [ example 3 ] there is provided the method of example 2, the training an initial wake-up model based on first training sample data to obtain a seed wake-up model, comprising:
acquiring first initial sample data;
expanding the first initial sample data to obtain the first training sample data;
and training the initial RNN-T model based on the first training sample data to obtain a seed RNN-T model.
According to one or more embodiments of the present disclosure, [ example 4 ] there is provided the method of example 2, the initializing the seed wake-up model to obtain an adaptive wake-up model comprising:
adding a feed-forward neural network (FFNN) on the basis of the seed RNN-T model, wherein the FFNN is connected with the encoder network;
taking the seed RNN-T model as a first branch, and taking the FFNN and the encoder network as a second branch;
and obtaining the adaptive wake-up model according to the first branch and the second branch.
According to one or more embodiments of the present disclosure, [ example 5 ] there is provided the method of example 4, the training the adaptive wake-up model based on second training sample data, comprising:
acquiring second initial sample data;
expanding the second initial sample data to obtain second training sample data;
training the adaptive wake-up model based on the second training sample data.
According to one or more embodiments of the present disclosure, [ example 6 ] there is provided the method of example 5, the training the adaptive wake-up model based on the second training sample data, comprising:
training the first branch based on speech recognition data in a second training sample to obtain a first loss function result;
training the second branch based on speech recognition data and wake-up word data in a second training sample to obtain a second loss function result;
and determining a weighted sum of the loss functions of the first loss function result and the second loss function result, and determining that the adaptive model training is completed when the weighted sum of the loss functions is smaller than a preset loss threshold.
According to one or more embodiments of the present disclosure, [ example 7 ] there is provided the method of example 1, the performing keyword detection on data to be detected by using the trained adaptive wake-up model to implement voice wake-up comprising:
inputting the data to be detected into the trained adaptive wake-up model, and respectively obtaining a first prediction probability value of the first branch and a second prediction probability value of the second branch;
determining a probability weighted sum of the first prediction probability value and the second prediction probability value;
and when the probability weighted sum is greater than a preset probability threshold, taking the symbol corresponding to the probability weighted sum as the keyword for voice wake-up.
According to one or more embodiments of the present disclosure, [ example 8 ] there is provided a voice wake-up apparatus comprising:
a seed wake-up model obtaining module, configured to train an initial wake-up model based on first training sample data to obtain a seed wake-up model, wherein the first training sample data comprises speech recognition data;
an adaptive wake-up model training module, configured to initialize the seed wake-up model to obtain an adaptive wake-up model and train the adaptive wake-up model based on second training sample data, wherein the second training sample data comprises speech recognition data and wake-up word data;
and a voice wake-up module, configured to perform keyword detection on data to be detected with the trained adaptive wake-up model to implement voice wake-up.
According to one or more embodiments of the present disclosure, [ example 9 ] there is provided the apparatus of example 8, wherein the wake-up model comprises a Recurrent Neural Network Transducer (RNN-T) model, the RNN-T model comprising an encoder network, a prediction network, and a joint network, and the joint network being connected with the encoder network and the prediction network, respectively.
According to one or more embodiments of the present disclosure, [ example 10 ] there is provided the apparatus of example 9, a seed wake-up model obtaining module to obtain first initial sample data;
expanding the first initial sample data to obtain the first training sample data;
and training the initial RNN-T model based on the first training sample data to obtain a seed RNN-T model.
According to one or more embodiments of the present disclosure, [ example 11 ] there is provided the apparatus of example 9, the adaptive wake-up model training module comprising an adaptive wake-up acquisition sub-module configured to add a feed-forward neural network (FFNN) on the basis of the seed RNN-T model, wherein the FFNN is connected to the encoder network;
taking the seed RNN-T model as a first branch, and taking the FFNN and the encoder network as a second branch;
and obtaining the adaptive wake-up model according to the first branch and the second branch.
According to one or more embodiments of the present disclosure, [ example 12 ] there is provided the apparatus of example 11, the adaptive wake-up model training module comprising an adaptive wake-up model training sub-module for obtaining second initial sample data;
expanding the second initial sample data to obtain second training sample data;
training the adaptive wake-up model based on the second training sample data.
According to one or more embodiments of the present disclosure, [ example 13 ] there is provided the apparatus of example 12, the adaptive wake-up model training sub-module further configured to train the first branch based on speech recognition data in a second training sample to obtain a first loss function result;
training the second branch based on speech recognition data and wake-up word data in a second training sample to obtain a second loss function result;
and determining a weighted sum of the loss functions of the first loss function result and the second loss function result, and determining that the adaptive model training is completed when the weighted sum of the loss functions is smaller than a preset loss threshold.
According to one or more embodiments of the present disclosure, [ example 14 ] there is provided the apparatus of example 8, the voice wake-up module being configured to input the data to be detected into the trained adaptive wake-up model and obtain a first prediction probability value of the first branch and a second prediction probability value of the second branch, respectively;
determining a probability weighted sum of the first prediction probability value and the second prediction probability value;
and when the probability weighted sum is greater than a preset probability threshold, taking the symbol corresponding to the probability weighted sum as the keyword for voice wake-up.
According to one or more embodiments of the present disclosure, [ example 15 ] there is provided an electronic device comprising:
one or more processors;
a storage device for storing one or more programs,
when executed by the one or more processors, cause the one or more processors to implement the method of any of examples 1-7.
According to one or more embodiments of the present disclosure, [ example 16 ] there is provided a storage medium containing computer executable instructions, having a computer program stored thereon, characterized in that the program, when executed by a processor, implements the method as in any of examples 1-7.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to the particular combination of features described above, but also encompasses other embodiments in which any combination of the features described above or their equivalents does not depart from the spirit of the disclosure. For example, the above features and (but not limited to) the features disclosed in this disclosure having similar functions are replaced with each other to form the technical solution.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
Claims (10)
1. A voice wake-up method, comprising:
training an initial wake-up model based on first training sample data to obtain a seed wake-up model, wherein the first training sample data comprises speech recognition data;
initializing the seed wake-up model to obtain an adaptive wake-up model, and training the adaptive wake-up model based on second training sample data, wherein the second training sample data comprises speech recognition data and wake-up word data;
and performing keyword detection on data to be detected with the trained adaptive wake-up model to implement voice wake-up.
2. The method of claim 1, wherein the wake-up model comprises a recurrent neural network transducer (RNN-T) model, wherein the RNN-T model comprises an encoder network, a prediction network and a joint network, and the joint network is connected to the encoder network and the prediction network respectively.
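For orientation, the transducer topology recited in claim 2 can be pictured with a minimal PyTorch sketch such as the one below; the layer types, sizes and module names are illustrative assumptions rather than details taken from the disclosure.

```python
import torch
import torch.nn as nn

class RNNTransducer(nn.Module):
    """Minimal RNN-T: encoder network + prediction network -> joint network."""

    def __init__(self, n_feats=80, n_symbols=128, hidden=256):
        super().__init__()
        # Encoder network: consumes acoustic feature frames.
        self.encoder = nn.LSTM(n_feats, hidden, num_layers=2, batch_first=True)
        # Prediction network: consumes previously emitted symbols.
        self.embed = nn.Embedding(n_symbols, hidden)
        self.predictor = nn.LSTM(hidden, hidden, num_layers=1, batch_first=True)
        # Joint network: connected to both the encoder and prediction networks.
        self.joint = nn.Sequential(
            nn.Linear(2 * hidden, hidden),
            nn.Tanh(),
            nn.Linear(hidden, n_symbols),
        )

    def forward(self, feats, symbols):
        enc, _ = self.encoder(feats)                   # (B, T, H)
        pred, _ = self.predictor(self.embed(symbols))  # (B, U, H)
        t, u = enc.size(1), pred.size(1)
        # Pair every encoder frame with every prediction step.
        joint_in = torch.cat(
            [enc.unsqueeze(2).expand(-1, -1, u, -1),
             pred.unsqueeze(1).expand(-1, t, -1, -1)],
            dim=-1,
        )
        return self.joint(joint_in)                    # (B, T, U, n_symbols)
```

Called as `RNNTransducer()(torch.randn(1, 50, 80), torch.zeros(1, 10, dtype=torch.long))`, the sketch yields a `(1, 50, 10, 128)` lattice of symbol logits.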
3. The method of claim 2, wherein training the initial wake-up model based on the first training sample data to obtain a seed wake-up model comprises:
acquiring first initial sample data;
expanding the first initial sample data to obtain the first training sample data;
and training the initial RNN-T model based on the first training sample data to obtain a seed RNN-T model.
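Claim 3 leaves the expansion step unspecified; the snippet below assumes, purely for illustration, two common audio augmentations (speed perturbation via resampling and additive Gaussian noise) as one plausible way to expand the initial sample data.

```python
import numpy as np

def expand_samples(waveforms, rates=(0.9, 1.1), noise_std=0.005, seed=0):
    """Hypothetical expansion of initial sample data into training data."""
    rng = np.random.default_rng(seed)
    expanded = []
    for wav in waveforms:
        wav = np.asarray(wav, dtype=np.float32)
        expanded.append(wav)                          # keep the original
        for rate in rates:                            # speed perturbation
            idx = np.arange(0, len(wav) - 1, rate)
            expanded.append(np.interp(idx, np.arange(len(wav)), wav))
        expanded.append(wav + rng.normal(0.0, noise_std, len(wav)))  # noise
    return expanded
```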
4. The method of claim 3, wherein initializing the seed wake-up model to obtain an adaptive wake-up model comprises:
adding a feed-forward neural network (FFNN) on the basis of the seed RNN-T model, wherein the FFNN is connected to the encoder network;
taking the seed RNN-T model as a first branch, and taking the FFNN and the encoder network as a second branch;
and obtaining the adaptive wake-up model according to the first branch and the second branch.
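Read together with claim 2, the two-branch structure of claim 4 can be sketched as follows, reusing the `RNNTransducer` sketch above; the FFNN depth and width are assumptions, and only the branch layout comes from the claim.

```python
import torch.nn as nn

class AdaptiveWakeModel(nn.Module):
    """First branch: the seed RNN-T. Second branch: an FFNN head on the
    shared encoder network, scoring symbols frame by frame."""

    def __init__(self, seed_rnnt, hidden=256, n_symbols=128):
        super().__init__()
        self.rnnt = seed_rnnt                    # first branch (seed RNN-T)
        self.ffnn = nn.Sequential(               # second branch (FFNN head)
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_symbols),
        )

    def forward(self, feats, symbols):
        rnnt_logits = self.rnnt(feats, symbols)  # branch 1 output
        enc, _ = self.rnnt.encoder(feats)        # shared encoder features
        ffnn_logits = self.ffnn(enc)             # branch 2 output per frame
        return rnnt_logits, ffnn_logits
```

Initializing from the seed model this way lets the wake-word branch start from encoder weights already trained on voice recognition data.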
5. The method according to claim 4, wherein training the adaptive wake-up model based on second training sample data comprises:
acquiring second initial sample data;
expanding the second initial sample data to obtain second training sample data;
training the adaptive wake-up model based on the second training sample data.
6. The method according to claim 5, wherein training the adaptive wake-up model based on the second training sample data comprises:
training the first branch based on the voice recognition data in the second training sample data to obtain a first loss function result;
training the second branch based on the voice recognition data and the wake-up word data in the second training sample data to obtain a second loss function result;
and determining a weighted sum of the first loss function result and the second loss function result, and determining that training of the adaptive wake-up model is completed when the weighted loss sum is smaller than a preset loss threshold.
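As a concrete reading of claim 6, the stopping rule might look like the sketch below; the branch loss values are assumed to come from a transducer loss (first branch) and a keyword classification loss (second branch), and the weights and threshold are placeholder values, since the claim fixes only the weighted-sum criterion.

```python
def weighted_loss(loss1, loss2, w1=0.5, w2=0.5):
    # Weighted sum of the first and second loss function results.
    return w1 * loss1 + w2 * loss2

def training_complete(loss1, loss2, threshold=0.1, w1=0.5, w2=0.5):
    # Training of the adaptive wake-up model is deemed complete once the
    # weighted loss sum drops below the preset loss threshold.
    return float(weighted_loss(loss1, loss2, w1, w2)) < threshold
```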
7. The method according to claim 1, wherein performing keyword detection on the data to be detected by using the trained adaptive wake-up model to realize voice wake-up comprises:
inputting the data to be detected into the trained adaptive wake-up model, and obtaining a first prediction probability value of the first branch and a second prediction probability value of the second branch respectively;
determining a probability weighted sum of the first prediction probability value and the second prediction probability value;
and when the probability weighted sum is larger than a preset probability threshold, taking the symbol corresponding to the probability weighted sum as the keyword for voice wake-up.
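A minimal sketch of the detection rule in claim 7, assuming the branch outputs have already been turned into per-symbol probability vectors; the weights, the threshold value and the use of an argmax over symbols are assumptions.

```python
import torch

def detect_keyword(p1, p2, w1=0.5, w2=0.5, threshold=0.8):
    """p1, p2: per-symbol probability vectors from branch 1 and branch 2."""
    combined = w1 * p1 + w2 * p2             # probability weighted sum
    score, symbol = combined.max(dim=-1)     # best-scoring symbol
    if score.item() > threshold:             # preset probability threshold
        return int(symbol)                   # taken as the wake-up keyword
    return None                              # no wake-up triggered
```

For example, with `p1 = torch.tensor([0.1, 0.9])` and `p2 = torch.tensor([0.2, 0.8])`, the weighted sum for symbol 1 is 0.85, which exceeds 0.8, so symbol 1 triggers the wake-up.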
8. A voice wake-up apparatus, comprising:
the seed wake-up model obtaining module is used for training an initial wake-up model based on first training sample data to obtain a seed wake-up model, wherein the first training sample data comprises voice recognition data;
the adaptive wake-up model training module is used for initializing the seed wake-up model to obtain an adaptive wake-up model and training the adaptive wake-up model based on second training sample data, wherein the second training sample data comprises voice recognition data and wake-up word data;
and the voice wake-up module is used for performing keyword detection on data to be detected by using the trained adaptive wake-up model to realize voice wake-up.
9. An electronic device, characterized in that the electronic device comprises:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
10. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-7.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011474857.9A CN112712801B (en) | 2020-12-14 | 2020-12-14 | Voice wakeup method and device, electronic equipment and storage medium |
PCT/CN2021/135387 WO2022127620A1 (en) | 2020-12-14 | 2021-12-03 | Voice wake-up method and apparatus, electronic device, and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011474857.9A CN112712801B (en) | 2020-12-14 | 2020-12-14 | Voice wakeup method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112712801A true CN112712801A (en) | 2021-04-27 |
CN112712801B CN112712801B (en) | 2024-02-02 |
Family
ID=75542087
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011474857.9A Active CN112712801B (en) | 2020-12-14 | 2020-12-14 | Voice wakeup method and device, electronic equipment and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112712801B (en) |
WO (1) | WO2022127620A1 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115064160B (en) * | 2022-08-16 | 2022-11-22 | 阿里巴巴(中国)有限公司 | Voice wake-up method and device |
CN117079653A (en) * | 2023-10-11 | 2023-11-17 | 荣耀终端有限公司 | Speech recognition method, training method, device and medium for speech recognition model |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110035215A1 (en) * | 2007-08-28 | 2011-02-10 | Haim Sompolinsky | Method, device and system for speech recognition |
CN111312222B (en) * | 2020-02-13 | 2023-09-12 | 北京声智科技有限公司 | Awakening and voice recognition model training method and device |
CN111640426A (en) * | 2020-06-10 | 2020-09-08 | 北京百度网讯科技有限公司 | Method and apparatus for outputting information |
CN112712801B (en) * | 2020-12-14 | 2024-02-02 | 北京有竹居网络技术有限公司 | Voice wakeup method and device, electronic equipment and storage medium |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160189706A1 (en) * | 2014-12-30 | 2016-06-30 | Broadcom Corporation | Isolated word training and detection |
US20190287526A1 (en) * | 2016-11-10 | 2019-09-19 | Nuance Communications, Inc. | Techniques for language independent wake-up word detection |
CN107123417A (en) * | 2017-05-16 | 2017-09-01 | 上海交通大学 | Optimization method and system are waken up based on the customized voice that distinctive is trained |
US20200105256A1 (en) * | 2018-09-28 | 2020-04-02 | Sonos, Inc. | Systems and methods for selective wake word detection using neural network models |
CN110491373A (en) * | 2019-08-19 | 2019-11-22 | Oppo广东移动通信有限公司 | Model training method, device, storage medium and electronic equipment |
CN111508481A (en) * | 2020-04-24 | 2020-08-07 | 展讯通信(上海)有限公司 | Training method and device of voice awakening model, electronic equipment and storage medium |
CN111667818A (en) * | 2020-05-27 | 2020-09-15 | 北京声智科技有限公司 | Method and device for training awakening model |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022127620A1 (en) * | 2020-12-14 | 2022-06-23 | 北京有竹居网络技术有限公司 | Voice wake-up method and apparatus, electronic device, and storage medium |
CN113593546A (en) * | 2021-06-25 | 2021-11-02 | 青岛海尔科技有限公司 | Terminal device awakening method and device, storage medium and electronic device |
CN113593546B (en) * | 2021-06-25 | 2023-09-15 | 青岛海尔科技有限公司 | Terminal equipment awakening method and device, storage medium and electronic device |
CN116682432A (en) * | 2022-09-23 | 2023-09-01 | 荣耀终端有限公司 | Speech recognition method, electronic device and readable medium |
CN116682432B (en) * | 2022-09-23 | 2024-05-31 | 荣耀终端有限公司 | Speech recognition method, electronic device and readable medium |
Also Published As
Publication number | Publication date |
---|---|
WO2022127620A1 (en) | 2022-06-23 |
CN112712801B (en) | 2024-02-02 |
Similar Documents
Publication | Title
---|---
CN112966712B (en) | Language model training method and device, electronic equipment and computer readable medium
CN112712801B (en) | Voice wakeup method and device, electronic equipment and storage medium
CN112634876B (en) | Speech recognition method, device, storage medium and electronic equipment
CN113327598B (en) | Model training method, voice recognition method, device, medium and equipment
CN111968647B (en) | Voice recognition method, device, medium and electronic equipment
CN112509562B (en) | Method, apparatus, electronic device and medium for text post-processing
CN111597825B (en) | Voice translation method and device, readable medium and electronic equipment
CN113488050B (en) | Voice wakeup method and device, storage medium and electronic equipment
CN110009101B (en) | Method and apparatus for generating a quantized neural network
CN112380876B (en) | Translation method, device, equipment and medium based on multilingual machine translation model
CN111968648B (en) | Voice recognition method and device, readable medium and electronic equipment
CN112309384B (en) | Voice recognition method, device, electronic equipment and medium
CN115908640A (en) | Method and device for generating image, readable medium and electronic equipment
CN112562633A (en) | Singing synthesis method and device, electronic equipment and storage medium
CN116072108A (en) | Model generation method, voice recognition method, device, medium and equipment
CN113051933B (en) | Model training method, text semantic similarity determination method, device and equipment
CN114765025A (en) | Method for generating and recognizing speech recognition model, device, medium and equipment
CN112380883B (en) | Model training method, machine translation method, device, equipment and storage medium
US20240221525A1 (en) | Content output method and apparatus, computer-readable medium, and electronic device
CN111680754A (en) | Image classification method and device, electronic equipment and computer-readable storage medium
CN112669816A (en) | Model training method, speech recognition method, device, medium and equipment
CN110852043A (en) | Text transcription method, device, equipment and storage medium
CN115374320B (en) | Text matching method and device, electronic equipment and computer medium
CN115565607B (en) | Method, device, readable medium and electronic equipment for determining protein information
CN112417151B (en) | Method for generating classification model, text relationship classification method and device
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant