WO2022121188A1 - Keyword detection method, apparatus, device, and storage medium - Google Patents
Keyword detection method, apparatus, device, and storage medium
- Publication number
- WO2022121188A1 (PCT/CN2021/084545; application CN2021084545W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- loss function
- function
- task
- loss
- value
- Prior art date
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/02—Feature extraction for speech recognition; Selection of recognition unit
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/18—Speech classification or search using natural language modelling
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/03—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
- G10L25/24—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters the extracted parameters being the cepstrum
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Definitions
- This application relates to the field of neural networks for artificial intelligence.
- current intelligent voice assistants are merely keyword detection systems: they respond to any user-machine dialogue without identifying the user's identity characteristics.
- to add speaker identification, an additional model such as a voiceprint recognition model needs to be trained separately; that is, the keyword detection and speaker recognition tasks must be modeled separately, which increases model computation and feedback latency and makes the system unsuitable for deployment on small smart devices.
- a keyword detection method is proposed.
- the keyword detection network includes a first fully connected layer and a second fully connected layer connected in parallel, and the method includes:
- the first probability is the probability corresponding to the current user identification
- a second aspect of the present application provides a keyword detection device, the device deploys a keyword detection network, the keyword detection network includes a first fully connected layer and a second fully connected layer connected in parallel, and the device includes:
- the first acquisition module is used to acquire the speech sentence to be detected input by the current user
- an extraction module used for extracting the speech feature parameters corresponding to the speech sentence to be detected
- a first input module for inputting the speech feature parameters into the keyword detection network
- a first judgment module configured to judge whether the first probability output by the first fully connected layer is higher than a preset probability threshold, wherein the first probability is the probability corresponding to the current user identification
- a determination module configured to determine the keyword of the speech sentence to be detected according to the second probability output by the second fully connected layer if the first probability is higher than the preset probability threshold, wherein the second probability is the probability corresponding to keyword recognition.
- a third aspect of the present application provides a computer device, comprising a memory and a processor, wherein the memory stores a computer program and the processor implements the steps of the above keyword detection method when executing the computer program; the keyword detection network includes a first fully connected layer and a second fully connected layer connected in parallel, and the method includes: acquiring a speech sentence to be detected input by a current user; extracting speech feature parameters corresponding to the speech sentence to be detected; inputting the speech feature parameters into the keyword detection network; judging whether the first probability output by the first fully connected layer is higher than a preset probability threshold, wherein the first probability is the probability corresponding to current-user identity recognition; if so, determining the keyword of the speech sentence to be detected according to the second probability output by the second fully connected layer, wherein the second probability is the probability corresponding to keyword recognition.
- the present application also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the above keyword detection method are implemented. The keyword detection network includes a first fully connected layer and a second fully connected layer connected in parallel, and the method includes: acquiring a speech sentence to be detected input by a current user; extracting speech feature parameters corresponding to the speech sentence to be detected; inputting the speech feature parameters into the keyword detection network; judging whether the first probability output by the first fully connected layer is higher than a preset probability threshold, wherein the first probability is the probability corresponding to current-user identity recognition; if so, determining the keyword of the speech sentence to be detected according to the second probability output by the second fully connected layer, wherein the second probability is the probability corresponding to keyword recognition.
- FIG. 1 is a schematic flowchart of a keyword detection method according to an embodiment of the present application.
- FIG. 2 is a schematic structural diagram of a keyword detection device according to an embodiment of the present application.
- FIG. 3 is a schematic diagram of an internal structure of a computer device according to an embodiment of the present application.
- the keyword detection network includes a first fully connected layer and a second fully connected layer connected in parallel, and the method includes:
- S4 Determine whether the first probability output by the first fully-connected layer is higher than a preset probability threshold, where the first probability is the probability corresponding to the current user identification;
- the keyword detection network of the embodiment of the present application includes a feature input layer, a multi-layer residual structure, a convolutional layer, a batch normalization layer, an average pooling layer, and a first fully connected layer and a second fully connected layer connected in parallel after the average pooling layer.
- the above speech sentence to be detected undergoes operations such as pre-emphasis, framing, and windowing, and the MFCCs (Mel-Frequency Cepstral Coefficients) of the speech sentence to be detected are extracted as speech feature parameters.
- the above MFCC features are 40-dimensional, the frame shift is 10 ms, and the frame length is 30 ms; a Hamming window is applied during framing to smooth the edge signal of each frame.
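The framing-and-windowing step described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; a 16 kHz sample rate is an assumption (the patent does not state one), and the remaining MFCC steps (pre-emphasis, FFT, mel filterbank, DCT) are omitted.

```python
import math

def frame_and_window(signal, sample_rate=16000, frame_ms=30, shift_ms=10):
    """Split a waveform into overlapping frames (30 ms length, 10 ms shift,
    as described above) and apply a Hamming window to each frame."""
    frame_len = int(sample_rate * frame_ms / 1000)   # 480 samples at 16 kHz
    shift = int(sample_rate * shift_ms / 1000)       # 160 samples at 16 kHz
    # Hamming window coefficients, tapering the frame edges toward 0.08
    hamming = [0.54 - 0.46 * math.cos(2 * math.pi * n / (frame_len - 1))
               for n in range(frame_len)]
    frames = []
    start = 0
    while start + frame_len <= len(signal):
        frames.append([s * w for s, w in
                       zip(signal[start:start + frame_len], hamming)])
        start += shift
    return frames
```

Each frame would then be transformed into a 40-dimensional MFCC vector before being fed to the network.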
- the above multi-layer residual structure includes 6 residual layers; each residual layer includes two sequentially connected data processing units, and each data processing unit consists of a convolutional layer followed by a batch normalization layer, where each convolutional layer uses 3×3 convolution kernels and the number of convolution kernels is 45.
- the residual layers use atrous (dilated) convolution to increase the receptive field, with a dilation rate that depends on the layer index l. Because there are 6 residual layers and each residual layer has two convolutional layers, there are 12 convolutional layers in total, so l takes 12 values; the dilation rate of the convolutional layer after the last residual layer is set to (16, 16).
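The effect of dilation on receptive field can be checked with a short calculation. Each stride-1 convolution with kernel size k and dilation d adds (k − 1)·d positions to the receptive field. The dilation schedule below is purely illustrative (a doubling pattern capped at 16); the patent's exact per-layer rates are not fully recoverable from the text.

```python
def receptive_field(kernel_size, dilations):
    """1-D receptive field of a stack of stride-1 convolutions:
    each layer adds (kernel_size - 1) * dilation input positions."""
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

# Illustrative schedule for the 12 convolutional layers (2 per residual
# layer): dilation doubles every two layers, capped at 16.
dilations = [min(2 ** (l // 2), 16) for l in range(12)]
```

With 3×3 kernels, the dilated stack sees a far wider context than 12 undilated layers would, which is the stated motivation for atrous convolution here.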
- This application realizes two task processing channels by connecting the first fully connected layer and the second fully connected layer in parallel after the average pooling layer, and the first channel corresponding to the first fully connected layer is used for the first task.
- the loss function in the first channel is set as the sigmoid function to realize the identification of whether the current user is the target user; by setting the loss function in the second channel as the softmax function, the identification of keywords is realized.
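The two parallel output channels can be sketched with plain activation functions: a sigmoid producing a single speaker-verification probability, and a softmax producing a distribution over keywords. This is a minimal sketch of the output stage only; the logits would come from the shared fully connected layers.

```python
import math

def sigmoid(z):
    """First channel: probability that the current user is the target user."""
    return 1.0 / (1.0 + math.exp(-z))

def softmax(zs):
    """Second channel: probability distribution over the keyword classes."""
    m = max(zs)                       # subtract max for numerical stability
    exps = [math.exp(z - m) for z in zs]
    total = sum(exps)
    return [e / total for e in exps]

def two_head_outputs(speaker_logit, keyword_logits):
    """Run both heads on their respective logits from the shared backbone."""
    return sigmoid(speaker_logit), softmax(keyword_logits)
```

Everything upstream of these two heads (residual layers, pooling) is shared, which is what keeps the parameter and computation overhead of the second task small.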
- Based on the same set of training data and the same feature-processing pipeline, this application connects two task channels constrained by different loss functions in parallel and designs a reasonable training schedule, so that the parameter count of the network model executing both tasks grows only slightly: the two tasks share computation and are implemented in the same network model.
- step S4 of judging whether the first probability output by the first fully connected layer is higher than a preset probability threshold includes:
- S41 Calculate the probability that the current user is the target user according to a specified calculation method, wherein the specified calculation method is P(S_u|X) = 1 - P(S_e|X);
- a task channel output by the first fully connected layer is additionally designed in parallel, and the loss function of this task channel is set to the sigmoid function, so as to obtain the conditional probability P(S_u|X) = 1 - P(S_e|X), where P(S_u|X) represents the probability that the current user is the target user and P(S_e|X) represents the probability that the current user is not the target user.
- the network part responsible for feature calculation, including the feature input layer, multi-layer residual structure, convolutional layer, batch normalization layer, and average pooling layer, shares parameters with the keyword recognition task to reduce computation and memory.
- the output probability of the above sigmoid function is a probability value ranging from 0 to 1; only when P(S_u|X) is higher than the preset probability threshold is the current user judged to be the target user.
- the above-mentioned preset probability threshold is, for example, 0.9 or above.
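The gating logic described in steps S4-S5 can be sketched as follows: keyword recognition is consulted only when the speaker-verification probability clears the threshold. The 0.9 default reflects the example threshold above; the function names are illustrative.

```python
def detect(speaker_prob, keyword_probs, keywords, threshold=0.9):
    """Gate keyword recognition on speaker identity: only when the first
    channel's probability exceeds the threshold is the second channel's
    highest-probability keyword returned."""
    if speaker_prob <= threshold:
        return None                    # not the target user: ignore command
    best = max(range(len(keyword_probs)), key=lambda i: keyword_probs[i])
    return keywords[best]
```

A non-target speaker's command is thus dropped before any keyword decision is made, which is how the device responds only to the specific person.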
- the first fully-connected layer corresponds to the output channel of the first task
- the second fully-connected layer corresponds to the output channel of the second task
- the speech sentence to be detected input by the current user is obtained.
- the keyword detection network in the embodiment of the present application is a multi-task model.
- the loss functions corresponding to the two tasks are combined into a total loss function by setting weights, which constrains the parameter adjustment of the multi-task model during training.
- a dynamic adjustment of the two loss weights is designed to balance the training and learning progress of the two tasks, so that the parameters finally learned by the multi-task model achieve better recognition accuracy on both tasks.
- the cardinality of the set represents how many elements there are in the set, and the number of elements represents the number of tasks.
- the embodiments of the present application perform data augmentation on the training data, thereby improving the robustness of the keyword detection network.
- the data augmentation includes, but is not limited to, randomly time-shifting the training data, randomly adding noise, regenerating part of the training data in each round of training, and so on.
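Two of the augmentations listed above, random time shift and additive noise, can be sketched as below. The shift range and noise level are illustrative assumptions; the patent gives no specific values.

```python
import random

def augment(signal, max_shift=160, noise_std=0.005, rng=None):
    """Randomly time-shift the waveform (zero-padding the vacated end)
    and add Gaussian noise. max_shift and noise_std are illustrative."""
    rng = rng or random.Random(0)
    shift = rng.randint(-max_shift, max_shift)
    if shift >= 0:
        shifted = [0.0] * shift + signal[:len(signal) - shift]
    else:
        shifted = signal[-shift:] + [0.0] * (-shift)
    return [s + rng.gauss(0.0, noise_std) for s in shifted]
```

Regenerating part of the training data each round would amount to re-running such augmentation with fresh randomness every epoch.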
- step S12 of acquiring the function value of the first loss function corresponding to the first task and the function value of the second loss function corresponding to the second task in real time includes:
- S121 Obtain the current predicted value of the sigmoid function corresponding to the first task, and the preset first real value, and obtain the current predicted value of the softmax function corresponding to the second task, and the preset second real value;
- S122 Calculate the first loss function value according to the current predicted value of the sigmoid function and the preset first real value, and calculate the second loss function value according to the current predicted value of the softmax function and the preset second real value.
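The two per-task loss values of step S122 correspond to the standard losses paired with these activations: binary cross-entropy for the sigmoid head and categorical cross-entropy for the softmax head. A minimal sketch (the epsilon guard against log(0) is an implementation convenience, not from the patent):

```python
import math

def bce_loss(p, y):
    """Binary cross-entropy for the sigmoid (speaker) head.
    p: predicted probability in (0, 1); y: true label, 0 or 1."""
    eps = 1e-12
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

def ce_loss(dist, label):
    """Categorical cross-entropy for the softmax (keyword) head.
    dist: predicted distribution; label: index of the true keyword."""
    eps = 1e-12
    return -math.log(dist[label] + eps)
```

Each value measures the gap between the predicted and true values, which is what drives backpropagation as described in the next point.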
- two tasks are trained on one model architecture at the same time, and the parameter adjustment of the model architecture is simultaneously constrained by the loss functions corresponding to the two tasks respectively.
- the loss function value represents the gap between the predicted value and the true value, thereby constraining the parameter adjustment of the model architecture through backpropagation.
- the step S13 of adjusting in real time the loss weights corresponding to the first loss function and the second loss function in the total loss function includes:
- S131 Calculate the difference between the function value of the first loss function and the function value of the second loss function
- the loss weight of that task's loss function in the total loss function is increased, so that the parameter adjustment of the current keyword detection network's model architecture is biased more toward that task.
- the higher the training accuracy of a task, the lower its corresponding loss weight.
- step S14 of judging whether the total loss function reaches a preset condition includes:
- S142 Calculate the average training accuracy corresponding to the current moment of the first task according to the first training accuracy and the second training accuracy;
- S144 Calculate the loss weight of the second task according to the calculation method of the loss weight of the first task
- S145 Obtain the total loss function according to the loss weight of the first task, the first loss function, the loss weight of the second task, and the second loss function.
- the above two loss weights obtained from the respective training accuracies are normalized, so that the sum of the loss weights of the two loss functions in the total loss function equals the total number of tasks; in this embodiment of the present application, the sum of the loss weights of the two loss functions equals 2.
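The stated properties, higher training accuracy yields a lower weight, and the weights are normalized to sum to the number of tasks (2 here), can be illustrated as below. The exact weighting formula used in the patent is not recoverable from the text, so the inverse-accuracy form is an assumption that merely satisfies those properties.

```python
def loss_weights(accuracies):
    """Give each task a weight that decreases as its training accuracy
    rises, then normalize so the weights sum to the number of tasks."""
    raw = [1.0 - a for a in accuracies]   # lower accuracy -> larger raw weight
    total = sum(raw)
    n = len(accuracies)
    if total == 0:                        # all tasks perfect: equal weights
        return [1.0] * n
    return [n * r / total for r in raw]
```

The total loss would then be w1 * loss1 + w2 * loss2, recomputed as the per-task accuracies evolve during training.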
- the terminal receiving the to-be-detected speech sentence is an intelligent device, and after the step S5 of determining the keyword of the to-be-detected speech sentence according to the second probability output by the second fully connected layer, it includes:
- the embodiments of the present application take deploying a keyword detection network on smart devices to recognize voice commands of specific people as an example.
- the smart devices include but are not limited to small human-interaction devices such as smart phones, smart speakers, smart computers, and robot vacuums. By simultaneously identifying the identity of the target person and the keywords in the voice command the target person issues, keyword recognition and command execution are performed only for that specific person.
- a keyword detection device deploys a keyword detection network
- the keyword detection network includes a first fully connected layer and a second fully connected layer that are connected in parallel
- the device includes:
- the first obtaining module 1 is used to obtain the speech sentence to be detected input by the current user;
- Extraction module 2 for extracting the speech feature parameters corresponding to the speech sentence to be detected
- the first input module 3 is used to input the speech feature parameter into the keyword detection network
- a first judgment module 4 configured to judge whether the first probability output by the first fully connected layer is higher than a preset probability threshold, wherein the first probability is the probability corresponding to the current user identification;
- a determination module 5 configured to determine the keyword of the speech sentence to be detected according to the second probability output by the second fully connected layer if the first probability is higher than the preset probability threshold, wherein the second probability is the probability corresponding to keyword recognition.
- judgment module 4 includes:
- a first calculation unit configured to calculate the probability that the current user is the target user according to a specified calculation method, wherein the specified calculation method is P(S_u|X) = 1 - P(S_e|X), and the probability P(S_u|X) that the current user is the target user is used as the first probability;
- a first judging unit configured to judge whether P(S_u|X) is higher than the preset probability threshold;
- a determination unit configured to determine that the first probability output by the first fully connected layer is higher than the preset probability threshold if it is higher than the preset probability threshold.
- the first fully connected layer corresponds to the output channel of the first task
- the second fully connected layer corresponds to the output channel of the second task
- the keyword detection device includes:
- the second input module is used to input the speech feature parameters corresponding to each training data into the keyword detection network for training;
- a second acquisition module configured to acquire, in real time, the function value of the first loss function corresponding to the first task, and the function value of the second loss function corresponding to the second task;
- an adjustment module configured to adjust in real time the loss weights corresponding to the first loss function and the second loss function in the total loss function according to the numerical relationship between the function value of the first loss function and the function value of the second loss function;
- a second judgment module configured to judge whether the total loss function reaches a preset condition
- a determination module configured to determine that the training of the keyword detection network is completed if a preset condition is reached, and to fix the parameters of the keyword detection network.
- the second acquisition module includes:
- the obtaining unit is used to obtain the current predicted value of the sigmoid function corresponding to the first task and the preset first real value, and to obtain the current predicted value of the softmax function corresponding to the second task and the preset second real value;
- the second calculation unit is configured to calculate the first loss function value according to the current predicted value of the sigmoid function and the preset first real value, and to calculate the second loss function value according to the current predicted value of the softmax function and the preset second real value.
- the adjustment module includes:
- a third calculation unit configured to calculate the difference between the function value of the first loss function and the function value of the second loss function
- a second judging unit configured to judge whether the difference is greater than zero
- the increasing unit is used to increase the first loss weight corresponding to the first loss function in the total loss function if the difference is greater than zero, and to decrease the second loss weight corresponding to the second loss function in the total loss function.
- the keyword detection device includes:
- a third acquisition module configured to acquire the first training accuracy of the first task corresponding to the current moment, and the second training accuracy of the first task corresponding to the previous moment adjacent to the current moment;
- a first calculation module configured to calculate the average training accuracy corresponding to the current moment of the first task according to the first training accuracy and the second training accuracy;
- a third calculation module configured to calculate the loss weight of the second task according to the calculation method of the loss weight of the first task
- the obtaining module is configured to obtain the total loss function according to the loss weight of the first task, the first loss function, the loss weight of the second task, and the second loss function.
- the terminal receiving the to-be-detected speech sentence is an intelligent device
- the keyword detection device includes:
- a fourth acquiring module configured to acquire manipulation instruction information corresponding to the keyword, wherein the manipulation instruction information includes a running link of the manipulation instruction;
- an operation module configured to run the manipulation instruction on the intelligent device according to the operation link to obtain an operation result
- the feedback module is used for feeding back the running result to the display terminal of the smart device.
- an embodiment of the present application further provides a computer device.
- the computer device may be a server, and its internal structure may be as shown in FIG. 3 .
- the computer device includes a processor, a memory, a network interface, and a database connected by a system bus, where the processor of the computer device is used to provide computing and control capabilities.
- the memory of the computer device includes a non-volatile storage medium, an internal memory.
- the nonvolatile storage medium stores an operating system, a computer program, and a database.
- the memory provides an environment for the execution of the operating system and computer programs in the non-volatile storage medium.
- the database of the computer equipment is used to store all the data required for the keyword detection process.
- the network interface of the computer device is used to communicate with an external terminal through a network connection.
- the computer program when executed by the processor, implements the keyword detection method.
- the above processor executes the above keyword detection method
- the keyword detection network includes a first fully connected layer and a second fully connected layer that are connected in parallel
- the method includes: acquiring a speech sentence to be detected input by a current user; extracting speech feature parameters corresponding to the speech sentence to be detected; inputting the speech feature parameters into the keyword detection network; judging whether the first probability output by the first fully connected layer is higher than a preset probability threshold, wherein the first probability is the probability corresponding to current-user identity recognition; if so, determining the keyword of the speech sentence to be detected according to the second probability output by the second fully connected layer, wherein the second probability is the probability corresponding to keyword recognition.
- the above computer device, by setting two different loss functions to separately constrain the task channels corresponding to the different fully connected layers, realizes multi-task operation and shared computation in the same network model, thereby achieving low device-memory requirements and reduced computing time and battery power consumption; it can reduce model computation and feedback latency, meets embedded devices' requirement for a small number of model parameters, and is suitable for deployment on small smart devices.
- the step of the processor judging whether the first probability output by the first fully connected layer is higher than a preset probability threshold includes: calculating the probability that the current user is the target user according to a specified calculation method, wherein the specified calculation method is P(S_u|X) = 1 - P(S_e|X), P(S_u|X) represents the probability that the current user is the target user, and P(S_e|X) represents the probability that the current user is not the target user.
- the first fully-connected layer in the keyword detection network corresponds to the output channel of the first task
- the second fully-connected layer corresponds to the output channel of the second task
- before the step of the processor acquiring the speech sentence to be detected input by the current user, the method includes: inputting the speech feature parameters corresponding to each training data into the keyword detection network for training; acquiring in real time the function value of the first loss function corresponding to the first task and the function value of the second loss function corresponding to the second task; and adjusting in real time, according to the numerical relationship between the function value of the first loss function and the function value of the second loss function, the loss weights corresponding to the first loss function and the second loss function in the total loss function.
- the step of the processor acquiring in real time the function value of the first loss function corresponding to the first task and the function value of the second loss function corresponding to the second task includes: obtaining the current predicted value of the sigmoid function corresponding to the first task and the preset first real value, and obtaining the current predicted value of the softmax function corresponding to the second task and the preset second real value; calculating the first loss function value according to the current predicted value of the sigmoid function and the preset first real value, and calculating the second loss function value according to the current predicted value of the softmax function and the preset second real value.
- the step of the above processor adjusting in real time the loss weights corresponding to the first loss function and the second loss function in the total loss function, according to the numerical relationship between the function value of the first loss function and the function value of the second loss function, includes: calculating the difference between the function value of the first loss function and the function value of the second loss function; judging whether the difference is greater than zero; if so, increasing the first loss weight corresponding to the first loss function in the total loss function and decreasing the second loss weight corresponding to the second loss function in the total loss function.
- the terminal receiving the speech sentence to be detected is an intelligent device, and after the processor determines the keyword of the speech sentence to be detected according to the second probability output by the second fully connected layer, the method includes: acquiring the manipulation instruction information corresponding to the keyword, wherein the manipulation instruction information includes the running link of the manipulation instruction; running the manipulation instruction on the smart device according to the running link to obtain a running result; and feeding the running result back to the display terminal of the smart device.
- FIG. 3 is only a block diagram of a partial structure related to the solution of the present application, and does not constitute a limitation on the computer equipment to which the solution of the present application is applied.
- An embodiment of the present application further provides a computer-readable storage medium, which may be non-volatile or volatile and on which a computer program is stored; when the computer program is executed by a processor, the above keyword detection method is implemented. The keyword detection network includes a first fully connected layer and a second fully connected layer connected in parallel, and the method includes: acquiring a speech sentence to be detected input by a current user; extracting speech feature parameters corresponding to the speech sentence to be detected; inputting the speech feature parameters into the keyword detection network; judging whether the first probability output by the first fully connected layer is higher than a preset probability threshold, wherein the first probability is the probability corresponding to current-user identity recognition; if so, determining the keyword of the speech sentence to be detected according to the second probability output by the second fully connected layer, wherein the second probability is the probability corresponding to keyword recognition.
- the above computer-readable storage medium, by setting two different loss functions to separately constrain the task channels corresponding to the different fully connected layers, realizes multi-task operation and shared computation in the same network model, thereby achieving low device-memory requirements and reduced computing time and battery power consumption; it can reduce model computation and feedback latency, meets embedded devices' requirement for a small number of model parameters, and is suitable for deployment on small smart devices.
- the step of the processor judging whether the first probability output by the first fully connected layer is higher than a preset probability threshold includes: calculating the probability that the current user is the target user according to a specified calculation method, wherein the specified calculation method is P(S_u|X) = 1 - P(S_e|X), P(S_u|X) represents the probability that the current user is the target user, and P(S_e|X) represents the probability that the current user is not the target user.
- In the keyword detection network, the first fully connected layer corresponds to the output channel of the first task, and the second fully connected layer corresponds to the output channel of the second task. Before the step in which the processor acquires the speech sentence to be detected input by the current user, the method includes: inputting the speech feature parameters corresponding to each piece of training data into the keyword detection network for training; acquiring, in real time, the function value of the first loss function corresponding to the first task and the function value of the second loss function corresponding to the second task; adjusting, in real time, the loss weights corresponding to the first loss function and the second loss function in the total loss function according to the numerical relationship between the two function values; judging whether the total loss function meets a preset condition; and if so, determining that training of the keyword detection network is complete and fixing the parameters of the keyword detection network.
- The step in which the processor acquires, in real time, the function value of the first loss function corresponding to the first task and the function value of the second loss function corresponding to the second task includes: acquiring the current predicted value of the sigmoid function corresponding to the first task and the preset first ground-truth value, and acquiring the current predicted value of the softmax function corresponding to the second task and the preset second ground-truth value; calculating the first loss function value according to the current predicted value of the sigmoid function and the preset first ground-truth value, and calculating the second loss function value according to the current predicted value of the softmax function and the preset second ground-truth value.
- The step in which the above processor adjusts, in real time, the loss weights corresponding to the first loss function and the second loss function in the total loss function according to the numerical relationship between their function values includes: calculating the difference between the function value of the first loss function and the function value of the second loss function; judging whether the difference is greater than zero; and if so, increasing the first loss weight corresponding to the first loss function in the total loss function and decreasing the second loss weight corresponding to the second loss function in the total loss function.
- The terminal receiving the speech sentence to be detected is a smart device. After the processor determines the keyword of the speech sentence to be detected according to the second probability output by the second fully connected layer, the method includes: acquiring the control instruction information corresponding to the keyword, where the control instruction information includes the execution link of the control instruction; running the control instruction on the smart device according to the execution link to obtain a running result; and feeding the running result back to the display terminal of the smart device.
- Nonvolatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
- Volatile memory may include random access memory (RAM) or external cache memory.
- RAM is available in various forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double-rate SDRAM (SSRSDRAM), enhanced SDRAM (ESDRAM), synchronous Link (Synchlink) DRAM (SLDRAM), memory bus (Rambus) direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM), etc.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Artificial Intelligence (AREA)
- Signal Processing (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Telephonic Communication Services (AREA)
- Machine Translation (AREA)
Abstract
A keyword detection method, comprising: acquiring a speech sentence to be detected input by the current user (S1); extracting the speech feature parameters corresponding to the speech sentence to be detected (S2); inputting the speech feature parameters into a keyword detection network (S3); judging whether the first probability output by a first fully connected layer is higher than a preset probability threshold (S4); and if so, determining the keyword of the speech sentence to be detected according to the second probability output by a second fully connected layer (S5). By setting two different loss functions to constrain the task channels corresponding to the two fully connected layers, the method allows multiple tasks to run in the same network model and share computation, which lowers the memory requirement on the device and reduces computing time and battery consumption.
Description
This application claims priority to the Chinese patent application filed with the China National Intellectual Property Administration on December 11, 2020, with application number 202011462771.4 and invention title "Keyword detection method, apparatus, device and storage medium", the entire contents of which are incorporated herein by reference.
This application relates to the field of neural networks in artificial intelligence.
With the development of artificial intelligence technology, more and more smart devices are equipped with intelligent voice assistants that enable spoken dialogue between users and machines. The inventors found that current intelligent voice assistants are merely keyword detection systems: they respond to any user's dialogue with the machine without identifying the user. Even systems with special requirements for user identification mostly train an additional model, such as a voiceprint recognition model; that is, the keyword detection task and the speaker recognition task are modeled separately, which increases the amount of model computation and the feedback delay and makes them unsuitable for joint deployment on small smart devices.
Existing keyword detection and speaker recognition tasks cannot be implemented with a single model, which leads to the technical problems of heavy computation and delayed feedback.
In a first aspect of the present application, a keyword detection method is proposed. The keyword detection network includes a first fully connected layer and a second fully connected layer connected in parallel, and the method includes:
Acquiring a speech sentence to be detected input by the current user;
Extracting the speech feature parameters corresponding to the speech sentence to be detected;
Inputting the speech feature parameters into the keyword detection network;
Judging whether the first probability output by the first fully connected layer is higher than a preset probability threshold, where the first probability is the probability corresponding to identification of the current user;
If so, determining the keyword of the speech sentence to be detected according to the second probability output by the second fully connected layer, where the second probability is the probability corresponding to keyword recognition.
A second aspect of the present application provides a keyword detection apparatus. The apparatus deploys a keyword detection network that includes a first fully connected layer and a second fully connected layer connected in parallel, and the apparatus includes:
A first acquisition module, configured to acquire a speech sentence to be detected input by the current user;
An extraction module, configured to extract the speech feature parameters corresponding to the speech sentence to be detected;
A first input module, configured to input the speech feature parameters into the keyword detection network;
A first judgment module, configured to judge whether the first probability output by the first fully connected layer is higher than a preset probability threshold, where the first probability is the probability corresponding to identification of the current user;
A determination module, configured to, if the first probability is higher than the preset probability threshold, determine the keyword of the speech sentence to be detected according to the second probability output by the second fully connected layer, where the second probability is the probability corresponding to keyword recognition.
A third aspect of the present application provides a computer device, including a memory and a processor. The memory stores a computer program, and when executing the computer program the processor implements the steps of the above keyword detection method, in which the keyword detection network includes a first fully connected layer and a second fully connected layer connected in parallel. The method includes: acquiring a speech sentence to be detected input by the current user; extracting the speech feature parameters corresponding to the speech sentence to be detected; inputting the speech feature parameters into the keyword detection network; judging whether the first probability output by the first fully connected layer is higher than a preset probability threshold, where the first probability is the probability corresponding to identification of the current user; and if so, determining the keyword of the speech sentence to be detected according to the second probability output by the second fully connected layer, where the second probability is the probability corresponding to keyword recognition. The present application further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the steps of the above keyword detection method, with the same keyword detection network and the same steps as described above.
By setting two different loss functions to constrain the task channels corresponding to the two fully connected layers, the present application allows multiple tasks to run in the same network model and share computation. This lowers the memory requirement on the device, reduces computing time and battery consumption, cuts the amount of model computation and the feedback delay, and meets the small-parameter-count requirement of embedded devices, making the method suitable for deployment on small smart devices.
Fig. 1 is a schematic flowchart of a keyword detection method according to an embodiment of the present application;
Fig. 2 is a schematic structural diagram of a keyword detection apparatus according to an embodiment of the present application;
Fig. 3 is a schematic diagram of the internal structure of a computer device according to an embodiment of the present application.
In order to make the objectives, technical solutions and advantages of the present application clearer, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present application and are not intended to limit it.
Referring to Fig. 1, in the keyword detection method of an embodiment of the present application, the keyword detection network includes a first fully connected layer and a second fully connected layer connected in parallel, and the method includes:
S1: acquiring a speech sentence to be detected input by the current user;
S2: extracting the speech feature parameters corresponding to the speech sentence to be detected;
S3: inputting the speech feature parameters into the keyword detection network;
S4: judging whether the first probability output by the first fully connected layer is higher than a preset probability threshold, where the first probability is the probability corresponding to identification of the current user;
S5: if so, determining the keyword of the speech sentence to be detected according to the second probability output by the second fully connected layer, where the second probability is the probability corresponding to keyword recognition.
The keyword detection network of this embodiment includes a feature input layer, a multi-layer residual structure, a convolutional layer, a batch normalization layer and an average pooling layer connected in sequence, followed by a first fully connected layer and a second fully connected layer connected in parallel after the average pooling layer. The speech sentence to be detected is pre-emphasized, framed and windowed, and its MFCC (Mel-frequency cepstrum coefficients) features are extracted as the speech feature parameters. The MFCC features are 40-dimensional, with a frame shift of 10 ms and a frame length of 30 ms; a Hamming window is used for windowing and framing to smooth the edge signal of each frame. The multi-layer residual structure includes 6 residual layers, each containing two data processing units connected in sequence, and each data processing unit consists of a convolutional layer followed by a batch normalization layer, where the convolution kernel is 3*3 and the number of kernels is 45. The convolutional layers in the residual layers use dilated convolution to enlarge the receptive field, with a dilation rate that depends on the index l of the convolutional layer.
Since there are 6 residual layers and each residual layer has two convolutional layers, there are 12 convolutional layers in total, so l takes 12 values; the dilation rate of the convolutional layer after the last residual layer is set to (16, 16).
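The framing parameters described above (30 ms frames, 10 ms shift, Hamming window, with pre-emphasis before windowing) can be sketched as follows. This is an illustrative pre-processing fragment, not the patent's implementation: the 16 kHz sample rate and the 0.97 pre-emphasis coefficient are assumed values, and the subsequent filterbank/DCT steps that produce the 40-dimensional MFCC vectors are omitted.

```python
import numpy as np

def frame_signal(signal, sample_rate=16000, frame_len_ms=30, frame_shift_ms=10,
                 pre_emphasis=0.97):
    """Pre-emphasize and split a waveform into overlapping Hamming-windowed frames.

    Frame length 30 ms and shift 10 ms follow the text; the pre-emphasis
    coefficient 0.97 is a common default, not taken from the patent.
    """
    # Pre-emphasis: y[t] = x[t] - a * x[t-1]
    emphasized = np.append(signal[0], signal[1:] - pre_emphasis * signal[:-1])

    frame_len = int(sample_rate * frame_len_ms / 1000)      # 480 samples at 16 kHz
    frame_shift = int(sample_rate * frame_shift_ms / 1000)  # 160 samples

    num_frames = 1 + max(0, (len(emphasized) - frame_len) // frame_shift)
    window = np.hamming(frame_len)  # smooths the edge signal of each frame

    frames = np.stack([
        emphasized[i * frame_shift: i * frame_shift + frame_len] * window
        for i in range(num_frames)
    ])
    return frames

# One second of audio at 16 kHz -> 1 + (16000 - 480) // 160 = 98 frames of 480 samples
frames = frame_signal(np.random.randn(16000))
print(frames.shape)  # (98, 480)
```

Each windowed frame would then be mapped to a 40-dimensional MFCC vector before entering the feature input layer.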
By connecting the first fully connected layer and the second fully connected layer in parallel after the average pooling layer, the present application implements two task-processing channels. The first channel, corresponding to the first fully connected layer, is used for the first task: by setting the loss function of the first channel on a sigmoid output, the network identifies whether the current user is the target user; by setting the loss function of the second channel on a softmax output, the network recognizes the keyword. Based on the same set of training data and the same feature-processing pipeline, the present application connects in parallel two task channels constrained by two different loss functions and, through a reasonably designed training procedure, keeps the parameter growth of the network that executes both tasks small, so that the two tasks share computation and are implemented in a single network model.
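The two parallel heads described above can be sketched as a plain forward pass over the shared pooled feature. This is a minimal numpy illustration, not the patent's network: the 45-dimensional pooled feature (matching the kernel count mentioned earlier), the 10 keyword classes, and the random weights are assumptions; the patent only fixes the two parallel fully connected heads with sigmoid and softmax outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

feat_dim, num_keywords = 45, 10  # illustrative sizes

W_spk, b_spk = rng.normal(size=(1, feat_dim)), np.zeros(1)                      # head 1: speaker
W_kw, b_kw = rng.normal(size=(num_keywords, feat_dim)), np.zeros(num_keywords)  # head 2: keyword

pooled = rng.normal(size=feat_dim)  # stands in for the average-pooling output

p_target = sigmoid(W_spk @ pooled + b_spk)[0]  # first probability, P(S_u|X)
p_keyword = softmax(W_kw @ pooled + b_kw)      # second probability over keywords

THRESHOLD = 0.9  # the text suggests a preset threshold of 0.9 or above
if p_target > THRESHOLD:
    print("keyword:", int(np.argmax(p_keyword)))
else:
    print("not the target user; keyword ignored")
```

The keyword head is consulted only when the speaker head clears the threshold, mirroring steps S4 and S5.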
By setting two different loss functions to constrain the task channels corresponding to the two fully connected layers, the present application allows multiple tasks to run in the same network model and share computation. This lowers the memory requirement on the device, reduces computing time and battery consumption, cuts the amount of model computation and the feedback delay, and meets the small-parameter-count requirement of embedded devices, making the method suitable for deployment on small smart devices.
Further, the step S4 of judging whether the first probability output by the first fully connected layer is higher than the preset probability threshold includes:
S41: calculating the probability that the current user is the target user according to a specified calculation method, where the specified calculation method is P(S_u|X) = 1 - P(S_e|X), P(S_u|X) represents the probability that the current user is the target user, and P(S_e|X) represents the probability that the current user is not the target user;
S42: taking the probability P(S_u|X) that the current user is the target user as the first probability;
S43: judging whether P(S_u|X) is higher than the preset probability threshold;
S44: if so, determining that the first probability output by the first fully connected layer is higher than the preset probability threshold.
In this embodiment, so that the deep residual layers of the keyword detection network can not only perform keyword detection but also detect whether the keyword was spoken by the target user, an additional parallel task channel output by the first fully connected layer is designed, and the loss function of this channel is set on a sigmoid output, yielding the conditional probability P(S_u|X) = 1 - P(S_e|X), where P(S_u|X) represents the probability that the current user is the target user and P(S_e|X) the probability that the current user is not the target user. The part of the network responsible for feature computation, including the feature input layer, the multi-layer residual structure, the convolutional layer, the batch normalization layer and the average pooling layer, shares parameters with the keyword recognition task, reducing computation and memory. The sigmoid function outputs a probability value between 0 and 1, and only when P(S_u|X) exceeds the preset probability threshold is keyword detection considered to have been triggered by the target user. The preset probability threshold is, for example, 0.9 or above.
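The decision rule above reduces to one line of arithmetic; as a minimal sketch (the 0.9 default follows the example threshold in the text, and the function name is ours):

```python
def is_target_user(p_not_target: float, threshold: float = 0.9) -> bool:
    """Decide whether the current user is the target user.

    Uses P(S_u|X) = 1 - P(S_e|X) and compares against the preset threshold.
    """
    p_target = 1.0 - p_not_target
    return p_target > threshold

print(is_target_user(0.05))  # P(S_u|X) = 0.95 > 0.9 -> True
print(is_target_user(0.20))  # P(S_u|X) = 0.80 <= 0.9 -> False
```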
Further, in the keyword detection network the first fully connected layer corresponds to the output channel of the first task and the second fully connected layer corresponds to the output channel of the second task, and before the step S1 of acquiring the speech sentence to be detected input by the current user, the method includes:
S11: inputting the speech feature parameters corresponding to each piece of training data into the keyword detection network for training;
S12: acquiring, in real time, the function value of the first loss function corresponding to the first task and the function value of the second loss function corresponding to the second task;
S13: adjusting, in real time, the loss weights corresponding to the first loss function and the second loss function in the total loss function according to the numerical relationship between the two function values;
S14: judging whether the total loss function meets a preset condition;
S15: if so, determining that training of the keyword detection network is complete and fixing the parameters of the keyword detection network.
The keyword detection network of this embodiment is a multi-task model. So that each task achieves good prediction accuracy, during training the loss functions of the two tasks are combined into a total loss function through weights, constraining the parameter updates of the multi-task model. To speed up convergence of the total loss function, the two loss weights are adjusted dynamically to balance the learning progress of the two tasks, so that the parameters finally learned by the multi-task model give good recognition accuracy on both tasks.
In this embodiment, the keyword detection task and the target speaker detection task are denoted T_1 and T_2 respectively, and T = {T_1, T_2} is the set of all tasks. Let λ_j(i) and L_j(i) be the loss weight and loss function of the j-th task in the i-th training round; the total loss function in the i-th round is then
L(i) = Σ_{j=1}^{|T|} λ_j(i) · L_j(i),
where |T| is the number of elements in the set, i.e. the number of tasks. The preset condition includes the training accuracy of each task reaching a preset requirement, or the accuracies of the keyword recognition task and the target speaker detection task not being significantly degraded by each other's coexistence. Practice shows that when
Σ_{j=1}^{|T|} λ_j(i) = |T|
holds, each task can be executed accurately while the amount of computation is reduced. When the sum of the weights equals the total number of tasks, 2, and the two tasks are balanced, the loss weight corresponding to each task is 1.
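The weighted combination rule above can be sketched directly; this is an illustration of the formula, with the weights assumed to be normalized so that they sum to |T|:

```python
def total_loss(losses, weights):
    """Weighted multi-task loss L(i) = sum_j lambda_j(i) * L_j(i)."""
    assert len(losses) == len(weights)
    return sum(w * l for w, l in zip(weights, losses))

# Two tasks with equal weights (lambda_1 = lambda_2 = 1, summing to |T| = 2)
print(total_loss([0.4, 0.6], [1.0, 1.0]))  # 1.0
```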
To improve the training effect, this embodiment augments the training data so as to improve the robustness of the keyword detection network. The data augmentation includes, but is not limited to, randomly time-shifting the training data, adding random noise to the training data, and regenerating part of the training data in each training round.
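The first two augmentations named above (random time shift and random added noise) can be sketched as follows; the shift range, the noise standard deviation, and the use of a circular shift are illustrative assumptions, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(waveform, max_shift=1600, noise_std=0.005):
    """Random time shift plus additive Gaussian noise on a waveform."""
    shift = rng.integers(-max_shift, max_shift + 1)
    shifted = np.roll(waveform, shift)  # circular shift as a simple time translation
    noise = rng.normal(0.0, noise_std, size=waveform.shape)
    return shifted + noise

wave = rng.normal(size=16000)
aug = augment(wave)
print(aug.shape)  # (16000,)
```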
Further, the step S12 of acquiring, in real time, the function value of the first loss function corresponding to the first task and the function value of the second loss function corresponding to the second task includes:
S121: acquiring the current predicted value of the sigmoid function corresponding to the first task and the preset first ground-truth value, and acquiring the current predicted value of the softmax function corresponding to the second task and the preset second ground-truth value;
S122: calculating the first loss function value according to the current predicted value of the sigmoid function and the preset first ground-truth value, and calculating the second loss function value according to the current predicted value of the softmax function and the preset second ground-truth value.
In this embodiment the two tasks are trained simultaneously on one model architecture, whose parameter updates are constrained by the loss functions of both tasks at the same time. By acquiring the values of the two loss functions in real time during training, it is determined which function's constraint currently dominates. A loss value expresses the gap between the predicted value and the ground truth, and thereby constrains the parameter updates of the model architecture through backpropagation.
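The two per-task losses can be sketched with the standard pairings: binary cross-entropy on the sigmoid output and cross-entropy on the softmax output. The patent names the output functions but not the exact loss formulas, so these pairings are our assumption:

```python
import math

def bce_loss(p_pred, y_true):
    """Binary cross-entropy on a sigmoid output (first task, speaker check)."""
    eps = 1e-12
    return -(y_true * math.log(p_pred + eps) + (1 - y_true) * math.log(1 - p_pred + eps))

def ce_loss(p_pred, class_idx):
    """Cross-entropy on a softmax output (second task, keyword recognition)."""
    eps = 1e-12
    return -math.log(p_pred[class_idx] + eps)

l1 = bce_loss(0.9, 1)             # speaker head predicts 0.9, ground truth "target user"
l2 = ce_loss([0.1, 0.7, 0.2], 1)  # keyword head, true class index 1
print(round(l1, 4), round(l2, 4))
```

The two scalars l1 and l2 are the function values that the training loop reads in real time.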
Further, the step S13 of adjusting, in real time, the loss weights corresponding to the first loss function and the second loss function in the total loss function according to the numerical relationship between their function values includes:
S131: calculating the difference between the function value of the first loss function and the function value of the second loss function;
S132: judging whether the difference is greater than zero;
S133: if so, increasing the first loss weight corresponding to the first loss function in the total loss function and decreasing the second loss weight corresponding to the second loss function in the total loss function.
In this embodiment, the task with the larger loss value is considered to be farther from its training target and harder to train, so the loss weight of that task's loss function in the total loss function is increased, biasing the parameters of the current keyword detection network toward that task. The higher a task's training accuracy, the lower its corresponding loss weight. By adjusting the total loss function step by step in this staircase fashion, parameters on which both tasks perform well are finally obtained.
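Steps S131 through S133 amount to shifting weight toward the lagging task. In this sketch, the step size and the constant-sum update are assumptions; the text only states that the lagging task's weight is increased and the other decreased:

```python
def adjust_weights(loss1, loss2, w1, w2, step=0.1):
    """Shift weight toward the task with the larger loss, keeping w1 + w2 fixed."""
    if loss1 - loss2 > 0:      # first task lags: raise its weight
        w1, w2 = w1 + step, w2 - step
    elif loss2 - loss1 > 0:    # second task lags: raise its weight
        w1, w2 = w1 - step, w2 + step
    return w1, w2

print(adjust_weights(0.8, 0.3, 1.0, 1.0))  # task 1 lags, so it gets more weight
```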
Further, before the step S14 of judging whether the total loss function meets the preset condition, the method includes:
S141: acquiring the first training accuracy of the first task at the current time, and the second training accuracy of the first task at the immediately preceding time;
S142: calculating the average training accuracy of the first task at the current time according to the first training accuracy and the second training accuracy;
S143: calculating the loss weight of the first task from the average training accuracy according to a specified function, where the specified function is λ_j(i) = -(1 - k_j(i)) · log(k_j(i)), and k_j(i) denotes the average training accuracy of the j-th task in the i-th training round;
S144: calculating the loss weight of the second task in the same way as the loss weight of the first task;
S145: obtaining the total loss function from the loss weight of the first task, the first loss function, the loss weight of the second task and the second loss function.
In this embodiment the loss weight of each loss function is tied to the training accuracy of its task and is expressed as λ_j(i) = -(1 - k_j(i)) · log(k_j(i)), where k_j(i) is the average training accuracy obtained by a moving average. The moving average means that the smoothed accuracy at the current time is a weighted average of the raw accuracy at the current time and the smoothed accuracy at the previous time, for example X̄(t) = alpha · X̄(t-1) + (1 - alpha) · X(t), where X(t) is the training accuracy at the current time, X̄(t-1) is the smoothed accuracy at the previous time, and alpha is the smoothing weight.
To make the adjustment range of the loss weights of the first and second loss functions easy to control, this embodiment normalizes the two loss weights obtained from the respective training accuracies so that the sum of the two loss weights in the total loss function equals the total number of tasks, i.e. so that
Σ_{j=1}^{|T|} λ_j(i) = |T|
holds. In this embodiment, this means the sum of the two loss weights equals 2.
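The weight formula, the moving average, and the normalization to the task count can be sketched together. The alpha value and the example accuracies are illustrative; the λ_j(i) formula and the sum-to-|T| normalization follow the text:

```python
import math

def loss_weight(avg_acc):
    """lambda_j(i) = -(1 - k_j(i)) * log(k_j(i)) from the text."""
    return -(1.0 - avg_acc) * math.log(avg_acc)

def smoothed_accuracy(prev_smoothed, raw_acc, alpha=0.9):
    """Moving average of the training accuracy; alpha is an assumed value."""
    return alpha * prev_smoothed + (1.0 - alpha) * raw_acc

def normalized_weights(acc1, acc2):
    """Normalize the two raw weights so they sum to the number of tasks (2)."""
    w1, w2 = loss_weight(acc1), loss_weight(acc2)
    scale = 2.0 / (w1 + w2)
    return w1 * scale, w2 * scale

w1, w2 = normalized_weights(0.7, 0.9)
print(round(w1, 3), round(w2, 3), round(w1 + w2, 3))  # weights sum to 2.0
```

Note that the more accurate task (0.9) ends up with the smaller weight, matching the staircase adjustment described above.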
Further, the terminal receiving the speech sentence to be detected is a smart device, and after the step S5 of determining the keyword of the speech sentence to be detected according to the second probability output by the second fully connected layer, the method includes:
S6: acquiring the control instruction information corresponding to the keyword, where the control instruction information includes the execution link of the control instruction;
S7: running the control instruction on the smart device according to the execution link to obtain a running result;
S8: feeding the running result back to the display terminal of the smart device.
This embodiment takes as an example deploying the keyword detection network on a smart device to recognize a specific person's voice commands. The smart devices include, but are not limited to, small human-interaction devices such as smartphones, smart speakers, smart computers and robot vacuums. By simultaneously identifying the target person and the keywords in the voice command issued by the target person, keyword recognition and command execution are carried out only for the specific person.
Referring to Fig. 2, the keyword detection apparatus of an embodiment of the present application deploys a keyword detection network that includes a first fully connected layer and a second fully connected layer connected in parallel, and the apparatus includes:
A first acquisition module 1, configured to acquire a speech sentence to be detected input by the current user;
An extraction module 2, configured to extract the speech feature parameters corresponding to the speech sentence to be detected;
A first input module 3, configured to input the speech feature parameters into the keyword detection network;
A first judgment module 4, configured to judge whether the first probability output by the first fully connected layer is higher than a preset probability threshold, where the first probability is the probability corresponding to identification of the current user;
A determination module 5, configured to, if the first probability is higher than the preset probability threshold, determine the keyword of the speech sentence to be detected according to the second probability output by the second fully connected layer, where the second probability is the probability corresponding to keyword recognition.
The explanations given for the method embodiments apply to the corresponding parts of the apparatus embodiments and are not repeated here.
Further, the judgment module 4 includes:
A first calculation unit, configured to calculate the probability that the current user is the target user according to a specified calculation method, where the specified calculation method is P(S_u|X) = 1 - P(S_e|X), P(S_u|X) represents the probability that the current user is the target user, and P(S_e|X) represents the probability that the current user is not the target user;
A taking unit, configured to take the probability P(S_u|X) that the current user is the target user as the first probability;
A first judgment unit, configured to judge whether P(S_u|X) is higher than the preset probability threshold;
A determination unit, configured to, if it is higher than the preset probability threshold, determine that the first probability output by the first fully connected layer is higher than the preset probability threshold.
Further, in the keyword detection network the first fully connected layer corresponds to the output channel of the first task and the second fully connected layer corresponds to the output channel of the second task, and the keyword detection apparatus includes:
A second input module, configured to input the speech feature parameters corresponding to each piece of training data into the keyword detection network for training;
A second acquisition module, configured to acquire, in real time, the function value of the first loss function corresponding to the first task and the function value of the second loss function corresponding to the second task;
An adjustment module, configured to adjust, in real time, the loss weights corresponding to the first loss function and the second loss function in the total loss function according to the numerical relationship between the two function values;
A second judgment module, configured to judge whether the total loss function meets a preset condition;
A determination module, configured to, if the preset condition is met, determine that training of the keyword detection network is complete and fix the parameters of the keyword detection network.
Further, the second acquisition module includes:
An acquisition unit, configured to acquire the current predicted value of the sigmoid function corresponding to the first task and the preset first ground-truth value, and to acquire the current predicted value of the softmax function corresponding to the second task and the preset second ground-truth value;
A second calculation unit, configured to calculate the first loss function value according to the current predicted value of the sigmoid function and the preset first ground-truth value, and to calculate the second loss function value according to the current predicted value of the softmax function and the preset second ground-truth value.
Further, the adjustment module includes:
A third calculation unit, configured to calculate the difference between the function value of the first loss function and the function value of the second loss function;
A second judgment unit, configured to judge whether the difference is greater than zero;
An increasing unit, configured to, if the difference is greater than zero, increase the first loss weight corresponding to the first loss function in the total loss function and decrease the second loss weight corresponding to the second loss function in the total loss function.
Further, the keyword detection apparatus includes:
A third acquisition module, configured to acquire the first training accuracy of the first task at the current time, and the second training accuracy of the first task at the immediately preceding time;
A first calculation module, configured to calculate the average training accuracy of the first task at the current time according to the first training accuracy and the second training accuracy;
A second calculation module, configured to calculate the loss weight of the first task from the average training accuracy according to a specified function, where the specified function is λ_j(i) = -(1 - k_j(i)) · log(k_j(i)), and k_j(i) denotes the average training accuracy of the j-th task in the i-th training round;
A third calculation module, configured to calculate the loss weight of the second task in the same way as the loss weight of the first task;
An obtaining module, configured to obtain the total loss function from the loss weight of the first task, the first loss function, the loss weight of the second task and the second loss function.
Further, the terminal receiving the speech sentence to be detected is a smart device, and the keyword detection apparatus includes:
A fourth acquisition module, configured to acquire the control instruction information corresponding to the keyword, where the control instruction information includes the execution link of the control instruction;
A running module, configured to run the control instruction on the smart device according to the execution link to obtain a running result;
A feedback module, configured to feed the running result back to the display terminal of the smart device.
Referring to Fig. 3, an embodiment of the present application further provides a computer device, which may be a server and whose internal structure may be as shown in Fig. 3. The computer device includes a processor, a memory, a network interface and a database connected through a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program and a database. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the computer device stores all the data needed by the keyword detection process. The network interface of the computer device communicates with external terminals through a network connection. When executed by the processor, the computer program implements the keyword detection method.
The processor executes the above keyword detection method, in which the keyword detection network includes a first fully connected layer and a second fully connected layer connected in parallel. The method includes: acquiring a speech sentence to be detected input by the current user; extracting the speech feature parameters corresponding to the speech sentence to be detected; inputting the speech feature parameters into the keyword detection network; judging whether the first probability output by the first fully connected layer is higher than a preset probability threshold, where the first probability is the probability corresponding to identification of the current user; and if so, determining the keyword of the speech sentence to be detected according to the second probability output by the second fully connected layer, where the second probability is the probability corresponding to keyword recognition.
By setting two different loss functions to constrain the task channels corresponding to the two fully connected layers, the above computer device allows multiple tasks to run in the same network model and share computation. This lowers the memory requirement on the device, reduces computing time and battery consumption, cuts the amount of model computation and the feedback delay, and meets the small-parameter-count requirement of embedded devices, making it suitable for deployment on small smart devices.
In one embodiment, the step in which the processor judges whether the first probability output by the first fully connected layer is higher than the preset probability threshold includes: calculating the probability that the current user is the target user according to a specified calculation method, where the specified calculation method is P(S_u|X) = 1 - P(S_e|X), P(S_u|X) represents the probability that the current user is the target user, and P(S_e|X) represents the probability that the current user is not the target user; taking the probability P(S_u|X) that the current user is the target user as the first probability; judging whether P(S_u|X) is higher than the preset probability threshold; and if so, determining that the first probability output by the first fully connected layer is higher than the preset probability threshold.
In one embodiment, in the keyword detection network the first fully connected layer corresponds to the output channel of the first task and the second fully connected layer corresponds to the output channel of the second task, and before the step in which the processor acquires the speech sentence to be detected input by the current user, the method includes: inputting the speech feature parameters corresponding to each piece of training data into the keyword detection network for training; acquiring, in real time, the function value of the first loss function corresponding to the first task and the function value of the second loss function corresponding to the second task; adjusting, in real time, the loss weights corresponding to the first loss function and the second loss function in the total loss function according to the numerical relationship between the two function values; judging whether the total loss function meets a preset condition; and if so, determining that training of the keyword detection network is complete and fixing the parameters of the keyword detection network.
In one embodiment, the step in which the processor acquires, in real time, the function value of the first loss function corresponding to the first task and the function value of the second loss function corresponding to the second task includes: acquiring the current predicted value of the sigmoid function corresponding to the first task and the preset first ground-truth value, and acquiring the current predicted value of the softmax function corresponding to the second task and the preset second ground-truth value; and calculating the first loss function value according to the current predicted value of the sigmoid function and the preset first ground-truth value, and calculating the second loss function value according to the current predicted value of the softmax function and the preset second ground-truth value.
In one embodiment, the step in which the processor adjusts, in real time, the loss weights corresponding to the first loss function and the second loss function in the total loss function according to the numerical relationship between their function values includes: calculating the difference between the function value of the first loss function and the function value of the second loss function; judging whether the difference is greater than zero; and if so, increasing the first loss weight corresponding to the first loss function in the total loss function and decreasing the second loss weight corresponding to the second loss function in the total loss function.
In one embodiment, before the step in which the processor judges whether the total loss function meets the preset condition, the method includes: acquiring the first training accuracy of the first task at the current time, and the second training accuracy of the first task at the immediately preceding time; calculating the average training accuracy of the first task at the current time according to the first training accuracy and the second training accuracy; calculating the loss weight of the first task from the average training accuracy according to a specified function, where the specified function is λ_j(i) = -(1 - k_j(i)) · log(k_j(i)), and k_j(i) denotes the average training accuracy of the j-th task in the i-th training round; calculating the loss weight of the second task in the same way as the loss weight of the first task; and obtaining the total loss function from the loss weight of the first task, the first loss function, the loss weight of the second task and the second loss function.
In one embodiment, the terminal receiving the speech sentence to be detected is a smart device, and after the step in which the processor determines the keyword of the speech sentence to be detected according to the second probability output by the second fully connected layer, the method includes: acquiring the control instruction information corresponding to the keyword, where the control instruction information includes the execution link of the control instruction; running the control instruction on the smart device according to the execution link to obtain a running result; and feeding the running result back to the display terminal of the smart device.
Those skilled in the art can understand that the structure shown in Fig. 3 is only a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution is applied.
An embodiment of the present application further provides a computer-readable storage medium, which may be non-volatile or volatile and on which a computer program is stored. When executed by a processor, the computer program implements the keyword detection method, in which the keyword detection network includes a first fully connected layer and a second fully connected layer connected in parallel. The method includes: acquiring a speech sentence to be detected input by the current user; extracting the speech feature parameters corresponding to the speech sentence to be detected; inputting the speech feature parameters into the keyword detection network; judging whether the first probability output by the first fully connected layer is higher than a preset probability threshold, where the first probability is the probability corresponding to identification of the current user; and if so, determining the keyword of the speech sentence to be detected according to the second probability output by the second fully connected layer, where the second probability is the probability corresponding to keyword recognition.
By setting two different loss functions to constrain the task channels corresponding to the two fully connected layers, the above computer-readable storage medium allows multiple tasks to run in the same network model and share computation. This lowers the memory requirement on the device, reduces computing time and battery consumption, cuts the amount of model computation and the feedback delay, and meets the small-parameter-count requirement of embedded devices, making it suitable for deployment on small smart devices.
In one embodiment, the step in which the processor judges whether the first probability output by the first fully connected layer is higher than the preset probability threshold includes: calculating the probability that the current user is the target user according to a specified calculation method, where the specified calculation method is P(S_u|X) = 1 - P(S_e|X), P(S_u|X) represents the probability that the current user is the target user, and P(S_e|X) represents the probability that the current user is not the target user; taking the probability P(S_u|X) that the current user is the target user as the first probability; judging whether P(S_u|X) is higher than the preset probability threshold; and if so, determining that the first probability output by the first fully connected layer is higher than the preset probability threshold.
In one embodiment, in the keyword detection network the first fully connected layer corresponds to the output channel of the first task and the second fully connected layer corresponds to the output channel of the second task, and before the step in which the processor acquires the speech sentence to be detected input by the current user, the method includes: inputting the speech feature parameters corresponding to each piece of training data into the keyword detection network for training; acquiring, in real time, the function value of the first loss function corresponding to the first task and the function value of the second loss function corresponding to the second task; adjusting, in real time, the loss weights corresponding to the first loss function and the second loss function in the total loss function according to the numerical relationship between the two function values; judging whether the total loss function meets a preset condition; and if so, determining that training of the keyword detection network is complete and fixing the parameters of the keyword detection network.
In one embodiment, the step in which the processor acquires, in real time, the function value of the first loss function corresponding to the first task and the function value of the second loss function corresponding to the second task includes: acquiring the current predicted value of the sigmoid function corresponding to the first task and the preset first ground-truth value, and acquiring the current predicted value of the softmax function corresponding to the second task and the preset second ground-truth value; and calculating the first loss function value according to the current predicted value of the sigmoid function and the preset first ground-truth value, and calculating the second loss function value according to the current predicted value of the softmax function and the preset second ground-truth value.
In one embodiment, the step in which the processor adjusts, in real time, the loss weights corresponding to the first loss function and the second loss function in the total loss function according to the numerical relationship between their function values includes: calculating the difference between the function value of the first loss function and the function value of the second loss function; judging whether the difference is greater than zero; and if so, increasing the first loss weight corresponding to the first loss function in the total loss function and decreasing the second loss weight corresponding to the second loss function in the total loss function.
In one embodiment, before the step in which the processor judges whether the total loss function meets the preset condition, the method includes: acquiring the first training accuracy of the first task at the current time, and the second training accuracy of the first task at the immediately preceding time; calculating the average training accuracy of the first task at the current time according to the first training accuracy and the second training accuracy; calculating the loss weight of the first task from the average training accuracy according to a specified function, where the specified function is λ_j(i) = -(1 - k_j(i)) · log(k_j(i)), and k_j(i) denotes the average training accuracy of the j-th task in the i-th training round; calculating the loss weight of the second task in the same way as the loss weight of the first task; and obtaining the total loss function from the loss weight of the first task, the first loss function, the loss weight of the second task and the second loss function.
In one embodiment, the terminal receiving the speech sentence to be detected is a smart device, and after the step in which the processor determines the keyword of the speech sentence to be detected according to the second probability output by the second fully connected layer, the method includes: acquiring the control instruction information corresponding to the keyword, where the control instruction information includes the execution link of the control instruction; running the control instruction on the smart device according to the execution link to obtain a running result; and feeding the running result back to the display terminal of the smart device.
A person of ordinary skill in the art can understand that all or part of the procedures in the methods of the above embodiments can be completed by instructing the relevant hardware through a computer program, which can be stored in a non-volatile computer-readable storage medium; when executed, the computer program may include the procedures of the embodiments of the above methods. Any reference to memory, storage, databases or other media provided in the present application and used in the embodiments may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in various forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
It should be noted that, in this document, the terms "comprise" and "include" and any other variants thereof are intended to cover a non-exclusive inclusion, so that a process, apparatus, article or method that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, apparatus, article or method. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, apparatus, article or method that includes that element.
The above are only preferred embodiments of the present application and do not limit its patent scope. Any equivalent structural or procedural transformation made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present application.
Claims (20)
- A keyword detection method, wherein the keyword detection network includes a first fully connected layer and a second fully connected layer connected in parallel, and the method comprises: acquiring a speech sentence to be detected input by the current user; extracting the speech feature parameters corresponding to the speech sentence to be detected; inputting the speech feature parameters into the keyword detection network; judging whether the first probability output by the first fully connected layer is higher than a preset probability threshold, wherein the first probability is the probability corresponding to identification of the current user; and if so, determining the keyword of the speech sentence to be detected according to the second probability output by the second fully connected layer, wherein the second probability is the probability corresponding to keyword recognition.
- The keyword detection method according to claim 1, wherein the step of judging whether the first probability output by the first fully connected layer is higher than the preset probability threshold comprises: calculating the probability that the current user is the target user according to a specified calculation method, wherein the specified calculation method is P(S_u|X) = 1 - P(S_e|X), P(S_u|X) represents the probability that the current user is the target user, and P(S_e|X) represents the probability that the current user is not the target user; taking the probability P(S_u|X) that the current user is the target user as the first probability; judging whether P(S_u|X) is higher than the preset probability threshold; and if so, determining that the first probability output by the first fully connected layer is higher than the preset probability threshold.
- The keyword detection method according to claim 1, wherein in the keyword detection network the first fully connected layer corresponds to the output channel of the first task and the second fully connected layer corresponds to the output channel of the second task, and before the step of acquiring the speech sentence to be detected input by the current user, the method comprises: inputting the speech feature parameters corresponding to each piece of training data into the keyword detection network for training; acquiring, in real time, the function value of the first loss function corresponding to the first task and the function value of the second loss function corresponding to the second task; adjusting, in real time, the loss weights corresponding to the first loss function and the second loss function in the total loss function according to the numerical relationship between the two function values; judging whether the total loss function meets a preset condition; and if so, determining that training of the keyword detection network is complete and fixing the parameters of the keyword detection network.
- The keyword detection method according to claim 3, wherein the step of acquiring, in real time, the function value of the first loss function corresponding to the first task and the function value of the second loss function corresponding to the second task comprises: acquiring the current predicted value of the sigmoid function corresponding to the first task and the preset first ground-truth value, and acquiring the current predicted value of the softmax function corresponding to the second task and the preset second ground-truth value; and calculating the first loss function value according to the current predicted value of the sigmoid function and the preset first ground-truth value, and calculating the second loss function value according to the current predicted value of the softmax function and the preset second ground-truth value.
- The keyword detection method according to claim 3, wherein the step of adjusting, in real time, the loss weights corresponding to the first loss function and the second loss function in the total loss function according to the numerical relationship between their function values comprises: calculating the difference between the function value of the first loss function and the function value of the second loss function; judging whether the difference is greater than zero; and if so, increasing the first loss weight corresponding to the first loss function in the total loss function and decreasing the second loss weight corresponding to the second loss function in the total loss function.
- The keyword detection method according to claim 5, wherein before the step of judging whether the total loss function meets the preset condition, the method comprises: acquiring the first training accuracy of the first task at the current time, and the second training accuracy of the first task at the immediately preceding time; calculating the average training accuracy of the first task at the current time according to the first training accuracy and the second training accuracy; calculating the loss weight of the first task from the average training accuracy according to a specified function, wherein the specified function is λ_j(i) = -(1 - k_j(i)) · log(k_j(i)), and k_j(i) denotes the average training accuracy of the j-th task in the i-th training round; calculating the loss weight of the second task in the same way as the loss weight of the first task; and obtaining the total loss function from the loss weight of the first task, the first loss function, the loss weight of the second task and the second loss function.
- The keyword detection method according to claim 1, wherein the terminal receiving the speech sentence to be detected is a smart device, and after the step of determining the keyword of the speech sentence to be detected according to the second probability output by the second fully connected layer, the method comprises: acquiring the control instruction information corresponding to the keyword, wherein the control instruction information includes the execution link of the control instruction; running the control instruction on the smart device according to the execution link to obtain a running result; and feeding the running result back to the display terminal of the smart device.
- A keyword detection apparatus, wherein the apparatus deploys a keyword detection network that includes a first fully connected layer and a second fully connected layer connected in parallel, and the apparatus comprises: a first acquisition module, configured to acquire a speech sentence to be detected input by the current user; an extraction module, configured to extract the speech feature parameters corresponding to the speech sentence to be detected; a first input module, configured to input the speech feature parameters into the keyword detection network; a first judgment module, configured to judge whether the first probability output by the first fully connected layer is higher than a preset probability threshold, wherein the first probability is the probability corresponding to identification of the current user; and a determination module, configured to, if the first probability is higher than the preset probability threshold, determine the keyword of the speech sentence to be detected according to the second probability output by the second fully connected layer, wherein the second probability is the probability corresponding to keyword recognition.
- A computer device, comprising a memory and a processor, wherein the memory stores a computer program, and when executing the computer program the processor implements a keyword detection method, wherein the keyword detection network includes a first fully connected layer and a second fully connected layer connected in parallel, and the method comprises: acquiring a speech sentence to be detected input by the current user; extracting the speech feature parameters corresponding to the speech sentence to be detected; inputting the speech feature parameters into the keyword detection network; judging whether the first probability output by the first fully connected layer is higher than a preset probability threshold, wherein the first probability is the probability corresponding to identification of the current user; and if so, determining the keyword of the speech sentence to be detected according to the second probability output by the second fully connected layer, wherein the second probability is the probability corresponding to keyword recognition.
- The computer device according to claim 9, wherein in the keyword detection network the first fully connected layer corresponds to the output channel of the first task and the second fully connected layer corresponds to the output channel of the second task, and before the step of acquiring the speech sentence to be detected input by the current user, the method comprises: inputting the speech feature parameters corresponding to each piece of training data into the keyword detection network for training; acquiring, in real time, the function value of the first loss function corresponding to the first task and the function value of the second loss function corresponding to the second task; adjusting, in real time, the loss weights corresponding to the first loss function and the second loss function in the total loss function according to the numerical relationship between the two function values; judging whether the total loss function meets a preset condition; and if so, determining that training of the keyword detection network is complete and fixing the parameters of the keyword detection network.
- The computer device according to claim 9, wherein in the keyword detection network the first fully connected layer corresponds to the output channel of the first task and the second fully connected layer corresponds to the output channel of the second task, and before the step of acquiring the speech sentence to be detected input by the current user, the method comprises: inputting the speech feature parameters corresponding to each piece of training data into the keyword detection network for training; acquiring, in real time, the function value of the first loss function corresponding to the first task and the function value of the second loss function corresponding to the second task; adjusting, in real time, the loss weights corresponding to the first loss function and the second loss function in the total loss function according to the numerical relationship between the two function values; judging whether the total loss function meets a preset condition; and if so, determining that training of the keyword detection network is complete and fixing the parameters of the keyword detection network.
- The computer device according to claim 11, wherein the step of acquiring, in real time, the function value of the first loss function corresponding to the first task and the function value of the second loss function corresponding to the second task comprises: acquiring the current predicted value of the sigmoid function corresponding to the first task and the preset first ground-truth value, and acquiring the current predicted value of the softmax function corresponding to the second task and the preset second ground-truth value; and calculating the first loss function value according to the current predicted value of the sigmoid function and the preset first ground-truth value, and calculating the second loss function value according to the current predicted value of the softmax function and the preset second ground-truth value.
- The computer device according to claim 11, wherein the step of adjusting, in real time, the loss weights corresponding to the first loss function and the second loss function in the total loss function according to the numerical relationship between their function values comprises: calculating the difference between the function value of the first loss function and the function value of the second loss function; judging whether the difference is greater than zero; and if so, increasing the first loss weight corresponding to the first loss function in the total loss function and decreasing the second loss weight corresponding to the second loss function in the total loss function.
- The computer device according to claim 13, wherein before the step of judging whether the total loss function meets the preset condition, the method comprises: acquiring the first training accuracy of the first task at the current time, and the second training accuracy of the first task at the immediately preceding time; calculating the average training accuracy of the first task at the current time according to the first training accuracy and the second training accuracy; calculating the loss weight of the first task from the average training accuracy according to a specified function, wherein the specified function is λ_j(i) = -(1 - k_j(i)) · log(k_j(i)), and k_j(i) denotes the average training accuracy of the j-th task in the i-th training round; calculating the loss weight of the second task in the same way as the loss weight of the first task; and obtaining the total loss function from the loss weight of the first task, the first loss function, the loss weight of the second task and the second loss function.
- The computer device according to claim 9, wherein the terminal receiving the speech sentence to be detected is a smart device, and after the step of determining the keyword of the speech sentence to be detected according to the second probability output by the second fully connected layer, the method comprises: acquiring the control instruction information corresponding to the keyword, wherein the control instruction information includes the execution link of the control instruction; running the control instruction on the smart device according to the execution link to obtain a running result; and feeding the running result back to the display terminal of the smart device.
- A computer-readable storage medium on which a computer program is stored, wherein, when executed by a processor, the computer program implements a keyword detection method, wherein the keyword detection network includes a first fully connected layer and a second fully connected layer connected in parallel, and the method comprises: acquiring a speech sentence to be detected input by the current user; extracting the speech feature parameters corresponding to the speech sentence to be detected; inputting the speech feature parameters into the keyword detection network; judging whether the first probability output by the first fully connected layer is higher than a preset probability threshold, wherein the first probability is the probability corresponding to identification of the current user; and if so, determining the keyword of the speech sentence to be detected according to the second probability output by the second fully connected layer, wherein the second probability is the probability corresponding to keyword recognition.
- The computer-readable storage medium according to claim 16, wherein in the keyword detection network the first fully connected layer corresponds to the output channel of the first task and the second fully connected layer corresponds to the output channel of the second task, and before the step of acquiring the speech sentence to be detected input by the current user, the method comprises: inputting the speech feature parameters corresponding to each piece of training data into the keyword detection network for training; acquiring, in real time, the function value of the first loss function corresponding to the first task and the function value of the second loss function corresponding to the second task; adjusting, in real time, the loss weights corresponding to the first loss function and the second loss function in the total loss function according to the numerical relationship between the two function values; judging whether the total loss function meets a preset condition; and if so, determining that training of the keyword detection network is complete and fixing the parameters of the keyword detection network.
- The computer-readable storage medium according to claim 16, wherein in the keyword detection network the first fully connected layer corresponds to the output channel of the first task and the second fully connected layer corresponds to the output channel of the second task, and before the step of acquiring the speech sentence to be detected input by the current user, the method comprises: inputting the speech feature parameters corresponding to each piece of training data into the keyword detection network for training; acquiring, in real time, the function value of the first loss function corresponding to the first task and the function value of the second loss function corresponding to the second task; adjusting, in real time, the loss weights corresponding to the first loss function and the second loss function in the total loss function according to the numerical relationship between the two function values; judging whether the total loss function meets a preset condition; and if so, determining that training of the keyword detection network is complete and fixing the parameters of the keyword detection network.
- The computer-readable storage medium according to claim 18, wherein the step of acquiring, in real time, the function value of the first loss function corresponding to the first task and the function value of the second loss function corresponding to the second task comprises: acquiring the current predicted value of the sigmoid function corresponding to the first task and the preset first ground-truth value, and acquiring the current predicted value of the softmax function corresponding to the second task and the preset second ground-truth value; and calculating the first loss function value according to the current predicted value of the sigmoid function and the preset first ground-truth value, and calculating the second loss function value according to the current predicted value of the softmax function and the preset second ground-truth value.
- The computer-readable storage medium according to claim 18, wherein the step of adjusting, in real time, the loss weights corresponding to the first loss function and the second loss function in the total loss function according to the numerical relationship between their function values comprises: calculating the difference between the function value of the first loss function and the function value of the second loss function; judging whether the difference is greater than zero; and if so, increasing the first loss weight corresponding to the first loss function in the total loss function and decreasing the second loss weight corresponding to the second loss function in the total loss function.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011462771.4 | 2020-12-11 | ||
CN202011462771.4A CN112634870B (zh) | 2020-12-11 | 2020-12-11 | 关键词检测方法、装置、设备和存储介质 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022121188A1 true WO2022121188A1 (zh) | 2022-06-16 |
Family
ID=75312406
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/084545 WO2022121188A1 (zh) | 2020-12-11 | 2021-03-31 | 关键词检测方法、装置、设备和存储介质 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112634870B (zh) |
WO (1) | WO2022121188A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116805253A (zh) * | 2023-08-18 | 2023-09-26 | 腾讯科技(深圳)有限公司 | 干预增益预测方法、装置、存储介质及计算机设备 |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113408718B (zh) * | 2021-06-07 | 2024-05-31 | 厦门美图之家科技有限公司 | 设备处理器选择方法、系统、终端设备及存储介质 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10147442B1 (en) * | 2015-09-29 | 2018-12-04 | Amazon Technologies, Inc. | Robust neural network acoustic model with side task prediction of reference signals |
CN110246490A (zh) * | 2019-06-26 | 2019-09-17 | 合肥讯飞数码科技有限公司 | 语音关键词检测方法及相关装置 |
CN111223489A (zh) * | 2019-12-20 | 2020-06-02 | 厦门快商通科技股份有限公司 | 一种基于Attention注意力机制的特定关键词识别方法及系统 |
CN111276125A (zh) * | 2020-02-11 | 2020-06-12 | 华南师范大学 | 一种面向边缘计算的轻量级语音关键词识别方法 |
CN111798840A (zh) * | 2020-07-16 | 2020-10-20 | 中移在线服务有限公司 | 语音关键词识别方法和装置 |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5326169B2 (ja) * | 2009-05-13 | 2013-10-30 | 株式会社日立製作所 | 音声データ検索システム及び音声データ検索方法 |
JP6679898B2 (ja) * | 2015-11-24 | 2020-04-15 | 富士通株式会社 | キーワード検出装置、キーワード検出方法及びキーワード検出用コンピュータプログラム |
CN110444193B (zh) * | 2018-01-31 | 2021-12-14 | 腾讯科技(深圳)有限公司 | 语音关键词的识别方法和装置 |
CN110767214A (zh) * | 2018-07-27 | 2020-02-07 | 杭州海康威视数字技术股份有限公司 | 语音识别方法及其装置和语音识别系统 |
CN111429912B (zh) * | 2020-03-17 | 2023-02-10 | 厦门快商通科技股份有限公司 | 关键词检测方法、系统、移动终端及存储介质 |
-
2020
- 2020-12-11 CN CN202011462771.4A patent/CN112634870B/zh active Active
-
2021
- 2021-03-31 WO PCT/CN2021/084545 patent/WO2022121188A1/zh active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10147442B1 (en) * | 2015-09-29 | 2018-12-04 | Amazon Technologies, Inc. | Robust neural network acoustic model with side task prediction of reference signals |
CN110246490A (zh) * | 2019-06-26 | 2019-09-17 | 合肥讯飞数码科技有限公司 | 语音关键词检测方法及相关装置 |
CN111223489A (zh) * | 2019-12-20 | 2020-06-02 | 厦门快商通科技股份有限公司 | 一种基于Attention注意力机制的特定关键词识别方法及系统 |
CN111276125A (zh) * | 2020-02-11 | 2020-06-12 | 华南师范大学 | 一种面向边缘计算的轻量级语音关键词识别方法 |
CN111798840A (zh) * | 2020-07-16 | 2020-10-20 | 中移在线服务有限公司 | 语音关键词识别方法和装置 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116805253A (zh) * | 2023-08-18 | 2023-09-26 | 腾讯科技(深圳)有限公司 | 干预增益预测方法、装置、存储介质及计算机设备 |
CN116805253B (zh) * | 2023-08-18 | 2023-11-24 | 腾讯科技(深圳)有限公司 | 干预增益预测方法、装置、存储介质及计算机设备 |
Also Published As
Publication number | Publication date |
---|---|
CN112634870A (zh) | 2021-04-09 |
CN112634870B (zh) | 2023-05-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11450312B2 (en) | Speech recognition method, apparatus, and device, and storage medium | |
KR102072782B1 (ko) | 심층 신경망을 사용한 단-대-단 화자 인식 | |
CN110444193B (zh) | 语音关键词的识别方法和装置 | |
US9818431B2 (en) | Multi-speaker speech separation | |
US10008209B1 (en) | Computer-implemented systems and methods for speaker recognition using a neural network | |
WO2021184902A1 (zh) | 图像分类方法、装置、及其训练方法、装置、设备、介质 | |
US10580432B2 (en) | Speech recognition using connectionist temporal classification | |
CN111081230B (zh) | 语音识别方法和设备 | |
JP2023089116A (ja) | エンドツーエンドストリーミングキーワードスポッティング | |
CN112183107B (zh) | 音频的处理方法和装置 | |
WO2022121188A1 (zh) | 关键词检测方法、装置、设备和存储介质 | |
KR20190136578A (ko) | 음성 인식 방법 및 장치 | |
CN113129900A (zh) | 一种声纹提取模型构建方法、声纹识别方法及其相关设备 | |
CN113345464B (zh) | 语音提取方法、系统、设备及存储介质 | |
CN116312494A (zh) | 语音活动检测方法、装置、电子设备及可读存储介质 | |
Kadyan et al. | Developing in-vehicular noise robust children ASR system using Tandem-NN-based acoustic modelling | |
CN114756662A (zh) | 基于多模态输入的任务特定文本生成 | |
US12014728B2 (en) | Dynamic combination of acoustic model states | |
Namburi | Speaker Recognition Based on Mutated Monarch Butterfly Optimization Configured Artificial Neural Network | |
CN114765028A (zh) | 声纹识别方法、装置、终端设备及计算机可读存储介质 | |
Pedalanka et al. | An Enhanced Deep Neural Network-Based Approach for Speaker Recognition Using Triumvirate Euphemism Strategy | |
TWI795173B (zh) | 多語言語音辨識系統、方法及電腦可讀媒介 | |
Segarceanu et al. | Evaluation of deep learning techniques for acoustic environmental events detection | |
US20240105206A1 (en) | Seamless customization of machine learning models | |
CN115273832A (zh) | 唤醒优化模型的训练方法、唤醒优化的方法和相关设备 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21901904 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21901904 Country of ref document: EP Kind code of ref document: A1 |