CN110415686A - Speech processing method, apparatus, medium, and electronic device - Google Patents

Speech processing method, apparatus, medium, and electronic device

Info

Publication number
CN110415686A
Authority
CN
China
Prior art keywords
speech
voice
voice signal
signal
acoustic model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910741367.1A
Other languages
Chinese (zh)
Other versions
CN110415686B (en)
Inventor
吴渤
于蒙
陈联武
金明杰
苏丹
俞栋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201910741367.1A priority Critical patent/CN110415686B/en
Publication of CN110415686A publication Critical patent/CN110415686A/en
Application granted granted Critical
Publication of CN110415686B publication Critical patent/CN110415686B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/02Feature extraction for speech recognition; Selection of recognition unit
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/20Speech recognition techniques specially adapted for robustness in adverse environments, e.g. in noise, of stress induced speech
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02Speech enhancement, e.g. noise reduction or echo cancellation
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/03Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/03Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
    • G10L25/18Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters the extracted parameters being spectral information of each sub-band
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/93Discriminating between voiced and unvoiced parts of speech signals
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/02Feature extraction for speech recognition; Selection of recognition unit
    • G10L2015/025Phonemes, fenemes or fenones being the recognition units

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Quality & Reliability (AREA)
  • Telephonic Communication Services (AREA)
  • Machine Translation (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

This application discloses a speech processing method, apparatus, medium, and electronic device, relating to artificial intelligence speech technology. The speech processing method includes: obtaining a spectrum of a speech signal; performing input feature extraction on the spectrum of the speech signal, and inputting the extracted input features into a convolutional layer of an acoustic model, where the acoustic model is used to recognize the speech signal as phonemes; extracting convolution features from the input features of the acoustic model based on the convolutional layer, and outputting them to the LSTM layer of the acoustic model; and obtaining the bottleneck feature of the target phoneme based on the output of the LSTM layer of the acoustic model. The technical solution of the embodiments of the present application can effectively extract the bottleneck feature of a phoneme, thereby improving the speech enhancement effect and the speech recognition rate.

Description

Speech processing method, apparatus, medium, and electronic device
This application is a divisional application of the application filed on May 21, 2019 with application number 201910425255.5 and entitled "Speech processing method, recognition method and device, system and electronic equipment".
Technical Field
The present application relates to the field of speech processing, and in particular, to a speech processing method, apparatus, medium, and electronic device.
Background
Key technologies of Speech Technology include Automatic Speech Recognition (ASR) technology, Speech Synthesis (Text To Speech, TTS) technology, and voiceprint recognition technology. Enabling computers to listen, see, speak, and feel is the development direction of future human-computer interaction, and speech has become one of the most promising human-computer interaction modes of the future. In many application scenarios in the field of speech processing, both speech enhancement and speech recognition play a crucial role. For example, in smart home scenes such as smart speakers, speech picked up by the smart speaker may be subjected to speech enhancement to improve speech quality, thereby facilitating better subsequent speech recognition.
However, the inventor finds that in the existing speech enhancement process, because the energy of the unvoiced part in the speech is weak and is very similar to noise in the speech spectrum structure, the speech enhancement effect on the unvoiced part is not ideal, and further, the recognition rate of speech recognition is not high, especially the recognition rate for the unvoiced part in the speech is not high. In the speech enhancement process, the bottleneck feature of the phoneme needs to be considered, so how to effectively extract the bottleneck feature of the phoneme to improve the speech enhancement effect, and thus improving the speech recognition rate becomes a technical problem to be solved urgently.
Disclosure of Invention
Embodiments of the present application provide a speech processing method, apparatus, medium, and electronic device, so that a bottleneck feature of a phoneme can be effectively extracted at least to a certain extent, and a speech enhancement effect and a speech recognition rate can be further improved.
The technical solutions adopted by the present application are as follows:
according to an aspect of the present application, a speech processing method includes: acquiring a frequency spectrum of a voice signal; performing input feature extraction on the frequency spectrum of the voice signal, and inputting the extracted input features into a convolution layer of an acoustic model, wherein the acoustic model is used for recognizing the voice signal as a phoneme; extracting convolution characteristics from input characteristics of the acoustic model based on the convolution layer of the acoustic model, and outputting the convolution characteristics to an LSTM layer of the acoustic model; and obtaining the bottleneck characteristic of the target phoneme based on the output of the LSTM layer of the acoustic model.
According to an aspect of the present application, a speech processing apparatus includes: the acquisition module is used for acquiring the frequency spectrum of the voice signal; the first feature extraction module is used for performing input feature extraction on the frequency spectrum of the voice signal and inputting the extracted input features into a convolutional layer of an acoustic model, wherein the acoustic model is used for recognizing the voice signal as phonemes; the second feature extraction module is used for extracting convolution features from the input features of the acoustic model based on the convolution layer of the acoustic model and outputting the convolution features to the LSTM layer of the acoustic model; and the third feature extraction module is used for obtaining the bottleneck feature of the target phoneme based on the output of the LSTM layer of the acoustic model.
According to an aspect of the application, an electronic device comprises a processor and a memory, the memory having stored thereon computer-readable instructions, which, when executed by the processor, implement a speech processing method as described above.
According to an aspect of the application, a storage medium has stored thereon a computer program which, when executed by a processor, implements a speech processing method as described above.
In the technical solutions provided in some embodiments of the present application, a spectrum of a speech signal is obtained, input features of the spectrum of the speech signal are extracted, the extracted input features are input into a convolutional layer of an acoustic model, the convolutional layer is based on the acoustic model, convolution features are extracted from the input features of the acoustic model and output to an LSTM layer of the acoustic model, and a bottleneck feature of a target phoneme is obtained based on the output of the LSTM layer of the acoustic model, so that the bottleneck feature of the phoneme can be effectively extracted through the acoustic model, and then the bottleneck feature of the phoneme can be used as a supplement to a magnitude spectrum feature corresponding to the speech signal, thereby improving a speech enhancement effect and further achieving a purpose of improving a recognition rate of speech recognition.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
FIG. 1 is a schematic illustration of an implementation environment according to the present application.
Fig. 2 is a block diagram illustrating a hardware configuration of an electronic device according to an example embodiment.
FIG. 3 is a flow diagram illustrating a method of speech processing according to an example embodiment.
FIG. 4 is a flow diagram of one embodiment of step 350 of the corresponding embodiment of FIG. 3.
FIG. 5 is a flow chart of step 350 of the corresponding embodiment of FIG. 3 in another embodiment.
FIG. 6 is a flowchart of one embodiment of step 352 of the corresponding embodiment of FIG. 5.
Fig. 7 is a flow chart of one embodiment of step 370 in the corresponding embodiment of fig. 3.
FIG. 8 is a flow diagram illustrating another method of speech processing according to an example embodiment.
FIG. 9 is a flow chart of one embodiment of step 410 of the corresponding embodiment of FIG. 8.
FIG. 10 is a flowchart of one embodiment of step 373 of the corresponding embodiment of FIG. 7.
FIG. 11 is a schematic diagram of a system architecture in which speech enhancement models and acoustic models are fused with each other according to the present application.
FIG. 12 is a flowchart of one embodiment of step 3735 in the corresponding embodiment of FIG. 10.
FIG. 13 is a flow diagram illustrating a method of speech recognition according to an example embodiment.
Fig. 14 is a schematic diagram of a system architecture in which speech enhancement and speech recognition supplement each other in the corresponding embodiment of fig. 13.
FIG. 15 is a block diagram illustrating a speech processing apparatus according to an example embodiment.
FIG. 16 is a block diagram illustrating a speech recognition apparatus according to an example embodiment.
FIG. 17 is a block diagram illustrating an electronic device in accordance with an example embodiment.
While certain embodiments of the present application have been illustrated by the accompanying drawings and described in detail below, such drawings and description are not intended to limit the scope of the inventive concepts in any manner, but are rather intended to explain the concepts of the present application to those skilled in the art by reference to the particular embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
With the research and progress of artificial intelligence technology, the artificial intelligence technology is developed and applied in a plurality of fields, such as common smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, unmanned driving, automatic driving, unmanned aerial vehicles, robots, smart medical care, smart customer service, and the like.
The scheme provided by the embodiment of the application relates to an artificial intelligent voice technology and the like, and is specifically explained by the following embodiment:
fig. 1 is a schematic diagram of an implementation environment involved with a speech processing method.
Taking a smart home scene such as a smart speaker as an example, as shown in fig. 1(a), the implementation environment includes a user 110 and a smart home, for example, the smart home is a smart speaker 130.
When user 110 wants smart sound box 130 to play song A, user 110 may issue the request by voice input, for example: "Smart sound box 130, play song A", thereby sending a voice signal carrying a playing instruction for song A to smart sound box 130.
Smart sound box 130 may receive the voice signal, so that after performing voice recognition on the voice signal, it can learn the specific content of the playing instruction issued by user 110.
In order to improve the recognition rate of speech recognition, before performing speech recognition, smart sound box 130 may further perform speech enhancement processing on the speech signal to improve the speech quality, so as to improve the recognition rate of subsequent speech recognition.
Specifically, based on the amplitude spectrum feature corresponding to the speech signal and the bottleneck feature of the phoneme, the speech enhancement processing is performed on the speech signal, so that the smart sound box 130 can effectively distinguish the unvoiced portion and the noise in the speech, and further the quality of the enhanced speech is improved, and the recognition rate of speech recognition is effectively improved.
Of course, in other application scenarios, the speech enhancement processing and the speech recognition may also be performed separately, for example, the electronic device 150 is configured to perform the speech enhancement processing on the speech signal, transmit the enhanced speech to the electronic device 170, and perform the speech recognition on the enhanced speech by the electronic device 170, so as to obtain the speech recognition result and feed the speech recognition result back to the electronic device 150, as shown in fig. 1 (b).
The electronic device 150 is configured with a voice pickup component, for example, the voice pickup component is a microphone, and the electronic device 150 may be a smart speaker, a smart phone, a tablet computer, a notebook computer, a palm computer, a personal digital assistant, a portable wearable device, and the like.
The electronic device 170 is configured with a communication interface, for example, the communication interface is a wired or wireless network interface, and the electronic device 170 may be a desktop computer, a server, or the like, so as to establish a communication connection between the electronic device 150 and the electronic device 170, and further, data transmission between the electronic device 150 and the electronic device 170 is realized through the established communication connection, for example, the transmitted data includes, but is not limited to, enhanced voice, a voice recognition result, and the like.
Fig. 2 is a block diagram illustrating a hardware configuration of an electronic device according to an example embodiment.
It should be noted that this electronic device is only an example adapted to the application and should not be considered as providing any limitation to the scope of use of the application. Nor should such electronic device be interpreted as requiring reliance on, or necessity of, one or more components of the exemplary electronic device 200 illustrated in fig. 2.
The hardware structure of the electronic device 200 may have a large difference due to the difference of configuration or performance, as shown in fig. 2, the electronic device 200 includes: a power supply 210, an interface 230, at least one memory 250, and at least one Central Processing Unit (CPU) 270.
Specifically, the power supply 210 is used to provide operating voltages for various hardware devices on the electronic device 200.
The interface 230 includes at least one input/output interface 235 for receiving external signals, for example, enabling smart sound box 130 in the implementation environment shown in fig. 1 to pick up voice signals.
Of course, in other examples of the present application, the interface 230 may further include at least one wired or wireless network interface 231, at least one serial-to-parallel conversion interface 233, and at least one USB interface 237, etc., as shown in fig. 2, which is not limited thereto.
The storage 250 is used as a carrier for resource storage, and may be a read-only memory, a random access memory, a magnetic disk or an optical disk, etc., and the resources stored thereon include an operating system 251, an application 253, data 255, etc., and the storage manner may be a transient storage or a permanent storage.
The operating system 251 is used for managing and controlling hardware devices and application programs 253 on the electronic device 200 to implement operations and processing of the mass data 255 in the memory 250 by the central processing unit 270, and may be Windows Server™, Mac OS X™, Unix, Linux, FreeBSD™, or the like.
The application 253 is a computer program that performs at least one specific task on the operating system 251, and may include at least one module (not shown in fig. 2), each of which may contain a series of computer-readable instructions for the electronic device 200. For example, the speech processing apparatus can be considered as an application 253 deployed on an electronic device.
The data 255 may be a photograph, a picture, or the like stored in a magnetic disk, or may be a voice signal, or the like, and is stored in the memory 250.
The central processor 270 may include one or more processors and is configured to communicate with the memory 250 through at least one communication bus to read computer-readable instructions stored in the memory 250, and further implement operations and processing of the mass data 255 in the memory 250. The speech processing method is accomplished, for example, by central processor 270 reading a series of computer readable instructions stored in memory 250.
Furthermore, the present application can be implemented by hardware circuits or by hardware circuits in combination with software, and therefore, the implementation of the present application is not limited to any specific hardware circuits, software, or a combination of the two.
Referring to fig. 3, in an exemplary embodiment, a speech processing method is applied to an electronic device, for example, the electronic device is the smart sound box 130 of the implementation environment shown in fig. 1, and the structure of the electronic device may be as shown in fig. 2.
The voice processing method can be executed by the electronic equipment, and can also be understood as being executed by a voice processing device deployed in the electronic equipment. In the following method embodiments, for convenience of description, the main execution subject of each step is described as an electronic device, but the method is not limited thereto.
The speech processing method may include the steps of:
step 310, a voice signal is obtained.
First, in this embodiment, the voice signal is collected in real time by a voice pickup component configured in the electronic device, for example, the voice pickup component is a microphone.
As described above, in smart home scenes such as a smart speaker, the voice signal is issued by the user to the smart home device, such as a smart speaker, by way of voice input, and the smart home device can then collect the voice signal in real time through its voice pickup component.
Alternatively, in an instant messaging scene, the instant messaging client provides a speech-to-text function; in this case the voice signal is issued by the user to a terminal device such as a smart phone by way of voice input, and the terminal device can collect the voice signal in real time by means of its voice pickup component.
It should be noted that the speech signal may be an original speech signal containing no noise or a noisy speech signal containing noise, which is not limited in this embodiment.
Secondly, it can be understood that after the voice signal is collected in real time by the voice pickup component, the electronic device may store the collected voice signal in consideration of processing performance. For example, the voice signal is stored in a memory.
Therefore, regarding the acquisition of the voice signal, the voice signal may be one collected in real time, so that relevant processing is performed on it in real time, or one collected during a historical time period, so that relevant processing is performed when there are few processing tasks or at the instruction of an operator, which is not limited in this embodiment.
In other words, the acquired voice signal may be derived from a voice signal acquired in real time or from a voice signal stored in advance.
After the electronic device acquires the voice signal, correlation processing may be performed on the voice signal, for example, the correlation processing includes voice enhancement processing, voice recognition, and the like.
Step 330, converting the speech signal from time domain to frequency domain to obtain the frequency spectrum of the speech signal.
It should be understood that, in the speech enhancement process, in order to conveniently characterize voiced parts, unvoiced parts and noise in speech, for example, to uniquely characterize a speech signal on a speech spectrum structure through amplitude spectrum features, first, a time-frequency transform needs to be performed on the speech signal, that is, the speech signal is converted from a time domain to a frequency domain to obtain a frequency spectrum of the speech signal, and then a subsequent speech enhancement process is performed based on the frequency spectrum of the speech signal, for example, amplitude spectrum features are obtained by extracting the frequency spectrum of the speech signal.
Specifically, Short Time Fourier Transform (STFT) processing is performed on the speech signal to obtain a spectrum of the speech signal.
That is, x(k, f) = STFT(x(t)),
where x(t) represents the speech signal and STFT represents the short-time Fourier transform algorithm;
x(k, f) denotes the spectrum of the speech signal, and k and f denote the frame and frequency indices in the spectrum, respectively.
Certainly, in other embodiments, the time-frequency Transformation may also be implemented by Fast Fourier Transform (FFT) processing, and this embodiment is not limited to this specifically.
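As an informal illustration (not part of the original disclosure), the time-frequency transformation above can be sketched in Python; the sampling rate, frame length, and hop size below are assumed values, since the patent does not specify them:

```python
import numpy as np
import librosa

def speech_spectrum(x: np.ndarray, n_fft: int = 512, hop_length: int = 256) -> np.ndarray:
    """Convert a time-domain speech signal x(t) into its spectrum x(k, f) via STFT.

    Returns a complex matrix whose rows are frames k and columns are frequencies f.
    n_fft and hop_length are illustrative assumptions.
    """
    # librosa.stft returns shape (1 + n_fft // 2, num_frames); transpose so rows are frames k
    return librosa.stft(x, n_fft=n_fft, hop_length=hop_length).T

# usage sketch: 1 second of a 16 kHz test tone
x_t = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000).astype(np.float32)
x_kf = speech_spectrum(x_t)  # complex spectrum x(k, f)
```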
Step 350, extracting a magnitude spectrum feature from the frequency spectrum of the voice signal, and taking the output of one network layer in the acoustic model as a bottleneck feature of the target phoneme based on the acoustic model for identifying the voice signal as the target phoneme.
First, a magnitude spectrum feature (LPS) is used to uniquely characterize a speech signal on a speech spectrum structure, so that a voiced part in speech can be effectively distinguished on the speech spectrum structure by speech enhancement processing based on the magnitude spectrum feature, and the speech quality of the voiced part in speech can be effectively improved.
Specifically, the amplitude spectrum feature is extracted from the frequency spectrum of the voice signal.
However, the inventor also realizes that, on one hand, the unvoiced part in the speech is relatively weak in energy and very similar to noise in speech spectrum structure, and the unvoiced part and the noise in the speech cannot be effectively distinguished based on the amplitude spectrum characteristics, and the unvoiced part is often treated as noise, so that the speech enhancement effect on the unvoiced part in the speech is not ideal, that is, the speech enhancement on the unvoiced part in the speech is not obvious; on the other hand, since the speech signal also involves other factors such as phase, phoneme, speaker, acoustic environment, etc., the feature of only using the magnitude spectrum as a single dimension is not enough to accurately represent the speech signal, which will limit the speech enhancement effect and further affect the recognition rate of speech recognition.
For this reason, in the present embodiment, a bottleneck (bottleneck) feature of the target phoneme is introduced into the speech enhancement process as a supplement to the magnitude spectrum feature. The target phoneme is obtained by performing voice recognition on a voice signal based on an acoustic model.
Specifically, the bottleneck characteristic of the target phoneme is output by one of the network layers in the acoustic model in the process of recognizing the speech signal as the target phoneme by the acoustic model.
Optionally, the network layer in the acoustic model that outputs the bottleneck feature of the target phoneme is an LSTM (Long Short-Term Memory) layer.
Of course, in other embodiments, the feature of the speech enhancement processing may also be a feature different from other dimensions of the amplitude spectrum feature, such as any one or more of a phase feature, an acoustic environment feature, and a speaker feature, which is not limited in this embodiment.
And 370, performing voice enhancement processing on the voice signal according to the amplitude spectrum characteristic and the bottleneck characteristic of the target phoneme to obtain an enhanced voice signal.
That is, the features introduced in the speech enhancement process include not only the magnitude spectrum feature but also the bottleneck feature of the phoneme. Because the bottleneck characteristic is related to the phoneme, on one hand, the unvoiced part and the noise in the voice can be effectively distinguished, and the voice enhancement effect of the unvoiced part is further improved, so that the contribution of the unvoiced part on the quality and intelligibility of the voice signal is fully guaranteed, and on the other hand, because the phoneme relates to the speaking content of a speaker, and the speaking content is the final target of voice recognition, the voice recognition rate is favorably improved subsequently.
Through the process, the voice signals are represented by the features of different dimensionalities based on the magnitude spectrum features and the bottleneck features of the phonemes, the objective index of voice enhancement processing is improved, the voice enhancement effect of the voice signals is further improved, and the recognition rate of subsequent voice recognition is improved.
It is noted that objective indicators of speech enhancement processing include, but are not limited to: PESQ (Perceptual Evaluation of Speech Quality), SNR (signal-to-noise ratio), STOI (Short-Time Objective Intelligibility), and the like, evaluated under different signal-to-noise ratios, noise types, and reverberant environments.
Referring to fig. 4, in an exemplary embodiment, the step 350 of obtaining the magnitude spectrum feature from the spectrum extraction of the speech signal may include the following steps:
step 351, performing a squaring operation on the frequency spectrum of the voice signal.
And 353, carrying out log taking operation on the operation result to obtain the amplitude spectrum characteristic.
Specifically, LPS = log|x(k, f)|²,
where LPS represents the amplitude spectrum feature, x(k, f) represents the spectrum of the speech signal, and k and f represent the frame and frequency indices in the spectrum, respectively.
Under the effect of the embodiment, the extraction of the amplitude spectrum characteristic is realized, and the speech enhancement processing based on the amplitude spectrum characteristic is realized.
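A minimal sketch of the squaring and log operations in steps 351 and 353; the small numerical floor eps is an added assumption to avoid taking the log of zero:

```python
import numpy as np

def log_power_spectrum(x_kf: np.ndarray, eps: float = 1e-10) -> np.ndarray:
    """Amplitude spectrum feature LPS = log|x(k, f)|^2.

    eps is an illustrative numerical floor, not specified in the patent.
    """
    return np.log(np.abs(x_kf) ** 2 + eps)

# usage sketch, continuing from the STFT sketch above
# lps = log_power_spectrum(x_kf)
```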
Referring to fig. 5, in an exemplary embodiment, the step 350 of outputting one of the network layers in the acoustic model as the bottleneck feature of the target phoneme based on the acoustic model that identifies the speech signal as the target phoneme may include the following steps:
step 352, performing input feature extraction on the frequency spectrum of the speech signal, and inputting the extracted input feature into the convolution layer of the acoustic model.
Step 354, extracting convolution characteristics from the input characteristics of the acoustic model based on the convolution layer of the acoustic model, and outputting the convolution characteristics to the LSTM layer of the acoustic model.
As shown in fig. 12, the model topology of the acoustic model includes: input layer, convolutional layer (CNN network), LSTM layer, full connectivity layer, activation function (Softmax) layer, output layer.
The input layer extracts input features of the acoustic model from the frequency spectrum of the voice signal and transmits the input features to the convolutional layer.
And a convolution layer for extracting convolution characteristics from the input characteristics of the acoustic model and transmitting the convolution characteristics to the LSTM layer.
And the LSTM layer comprises a plurality of network nodes, local feature extraction is carried out on the convolution features based on different network nodes in the LSTM layer, and the local features extracted by each network node are transmitted to the full connection layer.
And the full connection layer fuses the local features based on the forward propagation of the local features extracted by each network node to obtain global features, and transmits the global features to the activation function layer.
And activating the function layer, and performing phoneme classification prediction on the global features based on the phoneme posterior probability to obtain phoneme classifications corresponding to the voice signals, namely the target phonemes.
And the output layer outputs the target phoneme as a voice recognition result.
Thus, based on the acoustic model, the speech signal can be recognized as a target phoneme.
Step 356, obtaining the bottleneck feature of the target phoneme based on the output of the LSTM layer of the acoustic model.
In the above process, the output of the LSTM layer is denoted p(k, m), where m represents the number of network nodes included in the LSTM layer of the acoustic model. Since the amplitude spectrum feature is LPS = log|x(k, f)|², where f denotes the frequency index in the spectrum, the inventors have appreciated that the feature dimension of log|x(k, f)|² is comparable to that of p(k, m); the two can be regarded as representations of the speech signal in different dimensions, so that the output of the LSTM layer is a good complement to the amplitude spectrum feature.
Based on this, in the present embodiment, in the process of recognizing the speech signal as the target phoneme by the acoustic model, the output of the LSTM layer in the acoustic model is used as the bottleneck feature of the target phoneme to introduce the speech enhancement processing.
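The following PyTorch sketch mirrors the acoustic model topology described above (convolutional layer, LSTM layer, fully connected layer, Softmax layer) and exposes the LSTM output p(k, m) as the bottleneck feature. All layer sizes, kernel sizes, and the number of phoneme classes are illustrative assumptions, not values taken from the patent:

```python
import torch
import torch.nn as nn

class AcousticModel(nn.Module):
    """CNN + LSTM acoustic model; the LSTM output p(k, m) serves as the bottleneck feature."""

    def __init__(self, feat_dim: int = 120, lstm_units: int = 256, num_phonemes: int = 218):
        super().__init__()
        self.conv = nn.Sequential(                      # convolutional layer (CNN network)
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(32 * feat_dim, lstm_units, batch_first=True)  # LSTM layer
        self.fc = nn.Linear(lstm_units, num_phonemes)   # fully connected layer
        self.softmax = nn.Softmax(dim=-1)               # activation function (Softmax) layer

    def forward(self, feats: torch.Tensor):
        # feats: (batch, frames, feat_dim) spliced [F(k), Y(k), Z(k)] input features
        b, k, d = feats.shape
        conv_out = self.conv(feats.unsqueeze(1))        # (batch, 32, frames, feat_dim)
        conv_out = conv_out.permute(0, 2, 1, 3).reshape(b, k, -1)
        bottleneck, _ = self.lstm(conv_out)             # p(k, m): bottleneck feature per frame
        posteriors = self.softmax(self.fc(bottleneck))  # phoneme posterior probabilities
        return posteriors, bottleneck

# usage sketch
model = AcousticModel()
dummy = torch.randn(1, 100, 120)     # 100 frames of 120-dim input features (assumed dims)
post, p_km = model(dummy)            # p_km is the bottleneck feature p(k, m)
```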
Further, referring to fig. 6, in an exemplary embodiment, step 352 may include the following steps:
step 3521, the Fbank feature, the first order difference and the second order difference of the voice signal are respectively calculated according to the frequency spectrum of the voice signal.
Specifically, the calculation formula is as follows:
F(k) = Fbank[x(k, f)],
Y(k) = x(k+1, f) - x(k, f),
Z(k) = Y(k+1) - Y(k) = x(k+2, f) - 2×x(k+1, f) + x(k, f).
where F(k) represents the Fbank feature of the voice signal, Fbank represents the Fbank feature extraction algorithm,
Y(k) represents the first-order difference of the voice signal, Z(k) represents the second-order difference of the voice signal,
and x(k, f) denotes the spectrum of the voice signal, with k and f denoting the frame and frequency indices in the spectrum, respectively.
Step 3523, performing splicing operation on the Fbank features, the first order difference and the second order difference of the voice signals to obtain the input features of the acoustic model.
Based on the foregoing, [F(k), Y(k), Z(k)] represents the input features of the acoustic model.
Step 3525, input the input features of the acoustic model into the convolutional layer of the acoustic model.
Under the cooperation of the above embodiments, the extraction of the bottleneck feature of the phoneme is realized, and further, the speech enhancement processing based on the bottleneck feature of the phoneme is realized.
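For illustration, steps 3521 to 3523 might be sketched as follows. The mel filterbank parameters are assumptions, and, to keep the sketch runnable with real-valued features, the first- and second-order differences are taken here on the Fbank features rather than directly on the spectrum frames as written in the formulas above:

```python
import numpy as np
import librosa

def acoustic_input_features(x_kf: np.ndarray, sr: int = 16000, n_fft: int = 512,
                            n_mels: int = 40) -> np.ndarray:
    """Build the spliced input features [F(k), Y(k), Z(k)] of the acoustic model.

    F(k) is an Fbank (log-mel) feature; Y(k) and Z(k) are first- and second-order
    frame differences, computed here on the Fbank features as an interpretation.
    sr, n_fft and n_mels are illustrative assumptions.
    """
    power = np.abs(x_kf) ** 2                                        # (frames, freq bins)
    mel_fb = librosa.filters.mel(sr=sr, n_fft=n_fft, n_mels=n_mels)  # (n_mels, 1 + n_fft//2)
    fbank = np.log(power @ mel_fb.T + 1e-10)                         # F(k)
    delta1 = np.diff(fbank, n=1, axis=0, append=fbank[-1:])          # first-order difference Y(k)
    delta2 = np.diff(delta1, n=1, axis=0, append=delta1[-1:])        # second-order difference Z(k)
    return np.concatenate([fbank, delta1, delta2], axis=1)           # spliced input features
```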
Referring to fig. 7, in an exemplary embodiment, step 370 may include the steps of:
and step 371, splicing the magnitude spectrum characteristic and the bottleneck characteristic of the target phoneme to obtain the input characteristic of the speech enhancement model.
After the magnitude spectrum feature and the bottleneck feature of the target phoneme are obtained, the magnitude spectrum feature and the bottleneck feature can be spliced to serve as the input feature of the speech enhancement model.
Specifically, [log|x(k, f)|², p(k, m)] represents the input features of the speech enhancement model,
where log|x(k, f)|² represents the amplitude spectrum feature, x(k, f) represents the spectrum of the speech signal, and k and f represent the frame and frequency indices in the spectrum, respectively;
p(k, m) represents the bottleneck feature of the target phoneme, and m represents the number of network nodes contained in the LSTM layer of the acoustic model.
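A one-line sketch of the splicing operation in step 371, assuming the amplitude spectrum feature and the bottleneck feature are aligned frame by frame:

```python
import numpy as np

def enhancement_input(lps: np.ndarray, p_km: np.ndarray) -> np.ndarray:
    """Input feature of the speech enhancement model: [log|x(k, f)|^2, p(k, m)].

    lps has shape (frames, freq_bins) and p_km has shape (frames, m); both shapes are
    assumptions carried over from the earlier sketches.
    """
    return np.concatenate([lps, p_km], axis=1)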
Step 373, performing speech enhancement processing on the input feature based on the speech enhancement model constructed by the neural network to obtain the enhanced speech signal.
The speech enhancement model, which is essentially based on a neural network, establishes a mathematical mapping between the speech signal and the enhanced speech signal. Then, after obtaining the speech signal, an enhanced speech signal can be derived from the speech signal based on the mathematical mapping provided by the speech enhancement model.
Regarding the generation of the speech enhancement model, specifically, the neural network is trained according to training samples to obtain the speech enhancement model. Wherein the training samples comprise an original voice signal containing no noise and a noisy voice signal generated by carrying a noise signal by the original voice signal.
The original speech signal in the training sample may come from real-time collection by the voice pickup component configured in the electronic device, or from a recording made by an operator with a recording component (e.g., a recorder), which is not limited herein. As shown in fig. 8, in an implementation of an embodiment, the training process may include the following steps:
step 410, obtaining the input features and the output target of the neural network according to the original voice signal and the noisy voice signal in the training sample.
The input characteristic of the neural network refers to a magnitude spectrum characteristic corresponding to a noisy speech signal.
The output target of the neural network is related to the frequency spectrum of the original voice signal and the frequency spectrum of the voice signal with noise.
And 430, combining the parameters of the neural network, and constructing a convergence function according to the input characteristics and the output target of the neural network.
Wherein the convergence function includes, but is not limited to: a maximum expectation function, a loss function, etc.
Based on this, training is essentially to perform iterative optimization on parameters of the neural network through a training sample, so that the convergence function meets the convergence condition, and the mathematical mapping relation between the input features and the output target is optimized.
And step 450, when the parameters of the neural network make the convergence function converge, converging the neural network to obtain the voice enhancement model.
The convergence function is taken as an example for explanation.
And randomly initializing parameters of the neural network, and calculating a loss value of the loss function by combining a first input characteristic and a first output target of the neural network.
If the loss value of the loss function indicates that the loss function converges, i.e. the loss value of the loss function reaches a minimum, the speech enhancement model is obtained by neural network convergence.
Otherwise, if the loss value of the loss function indicates that the loss function is not converged, that is, the loss value of the loss function does not reach the minimum, the parameters of the neural network are updated, and the loss value of the reconstructed loss function is continuously calculated by combining the next input feature and the next output target of the neural network until the loss value of the loss function reaches the minimum.
It is worth mentioning that if the number of iterations reaches the iteration threshold before the loss value of the loss function reaches the minimum, updating of the parameters of the neural network is stopped, so as to ensure training efficiency.
Then, when the loss function converges and meets the precision requirement, the training is completed, and thus the speech enhancement model is obtained, so that the speech enhancement model has the capability of performing speech enhancement on the speech signal.
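A compressed sketch of the training procedure in steps 410 through 450 is given below. The mean-squared-error loss, the Adam optimizer, and the concrete convergence and iteration thresholds are illustrative assumptions; the patent only requires that the convergence function converge or that an iteration threshold be reached:

```python
import torch
import torch.nn as nn

def train_enhancement_model(model: nn.Module, dataloader, max_iters: int = 10000,
                            tol: float = 1e-4) -> nn.Module:
    """Iteratively optimize the neural network parameters until the loss converges.

    Each batch yields (input_features, output_target): the amplitude spectrum feature
    LPS' of the noisy speech and the complex-mask target re(k, f) + j*im(k, f),
    assumed here to be stacked as two real channels. MSE loss and Adam are assumptions.
    """
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    prev_loss, iters = float("inf"), 0
    for epoch in range(1000):                       # outer bound is an arbitrary safeguard
        for input_features, output_target in dataloader:
            prediction = model(input_features)
            loss = loss_fn(prediction, output_target)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            iters += 1
            # stop when the loss converges or the iteration threshold is reached
            if abs(prev_loss - loss.item()) < tol or iters >= max_iters:
                return model
            prev_loss = loss.item()
    return model
```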
Referring to fig. 9, in an exemplary embodiment, step 410 may include the steps of:
step 411, respectively converting the original speech signal and the noisy speech signal from time domain to frequency domain.
Specifically, s(k, f) = STFT(s(t)) and x'(k, f) = STFT(x'(t)),
where s(t) represents the original speech signal, x'(t) represents the noisy speech signal, and STFT represents the short-time Fourier transform algorithm;
s(k, f) represents the spectrum of the original speech signal, x'(k, f) represents the spectrum of the noisy speech signal, and k and f represent the frame and frequency indices in the spectrum, respectively.
Step 413, extracting a magnitude spectrum feature from the frequency spectrum of the noisy speech signal, and using the magnitude spectrum feature as an input feature of the neural network.
Specifically, the frequency spectrum of the noisy speech signal is squared.
A log operation is then performed on the operation result to obtain the amplitude spectrum feature, which is used as the input feature of the neural network.
That is, LPS' = log|x'(k, f)|²,
where LPS' represents the input feature of the neural network, i.e. the amplitude spectrum feature, x'(k, f) represents the spectrum of the noisy speech signal, and k and f represent the frame and frequency indices in the spectrum, respectively.
Step 415, performing quotient calculation between the frequency spectrum of the original voice signal and the frequency spectrum of the noisy voice signal, and using the calculation result as an output target of the neural network.
Specifically, s(k, f) / x'(k, f) = re(k, f) + j × im(k, f),
where s(k, f) represents the spectrum of the original speech signal and x'(k, f) represents the spectrum of the noisy speech signal;
re(k, f) represents the real part mask of the output target and im(k, f) represents the imaginary part mask of the output target, i.e., re(k, f) + j × im(k, f) represents the output target of the neural network.
Under the action of the embodiment, the input characteristic LPS' and the output target re (k, f) + j × im (k, f) of the neural network are obtained, so that the training of the neural network is realized, and when the training is finished, the speech enhancement model is obtained through convergence of the neural network.
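The construction of the input feature LPS' and the output target re(k, f) + j × im(k, f) in steps 411 to 415 might be sketched as follows; the STFT parameters and the small numerical floor in the division are assumptions:

```python
import numpy as np
import librosa

def training_pair(s_t: np.ndarray, x_noisy_t: np.ndarray, n_fft: int = 512,
                  hop_length: int = 256, eps: float = 1e-10):
    """Build the neural-network input feature LPS' and output target re(k, f) + j*im(k, f).

    s_t is the clean speech s(t); x_noisy_t is the noisy speech x'(t).
    """
    s_kf = librosa.stft(s_t, n_fft=n_fft, hop_length=hop_length).T        # s(k, f)
    x_kf = librosa.stft(x_noisy_t, n_fft=n_fft, hop_length=hop_length).T  # x'(k, f)
    lps_noisy = np.log(np.abs(x_kf) ** 2 + eps)     # LPS' = log|x'(k, f)|^2
    mask = s_kf / (x_kf + eps)                      # re(k, f) + j*im(k, f)
    return lps_noisy, mask
```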
It can also be understood that the speech enhancement model essentially constructs the optimal mathematical mapping relationship between the input features LPS and the output targets r̂e(k, f) + j × îm(k, f). It should be noted that the input features LPS and the output targets r̂e(k, f) + j × îm(k, f) are the input features and output targets of the speech enhancement model, distinct from the input features LPS' and output targets re(k, f) + j × im(k, f) of the neural network.
At this time, after the speech enhancement model is constructed based on the neural network, speech enhancement processing may be further performed on the speech signal x(t) based on the speech enhancement model to obtain an enhanced speech signal ŝ(t). That is, step 373 is executed to perform speech enhancement processing on the input features of the speech enhancement model to obtain the enhanced speech signal.
The generation process of the enhanced speech signal ŝ(t) is described in detail below.
Referring to FIG. 10, in an exemplary embodiment, step 373 may include the steps of:
step 3731, inputting the input features of the speech enhancement model into the LSTM layer of the speech enhancement model, and extracting the local features.
Step 3733, inputting the extracted local features into the full connection layer of the speech enhancement model, and performing fusion of the local features to obtain an output target of the speech enhancement model.
Specifically, as shown in fig. 11, the model topology of the speech enhancement model includes: input layer, LSTM layer, full connection layer, output layer.
The input layer splices the amplitude spectrum feature extracted from the frequency spectrum of the voice signal and the bottleneck feature of the target phoneme output by the LSTM layer in the acoustic model to obtain the input feature of the voice enhancement model, and transmits the input feature to the LSTM layer.
And the LSTM layer comprises a plurality of network nodes, local feature extraction is carried out on the input features of the voice enhancement model based on different network nodes in the LSTM layer, and the local features extracted by each network node are transmitted to the full connection layer.
And the full connection layer fuses the local features based on the forward propagation of the local features extracted by each network node to obtain the global features, namely the output target of the voice enhancement model.
And the output layer outputs the output target of the voice enhancement model as an enhanced voice signal, namely, step 3735 is executed.
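In the same spirit as the acoustic-model sketch above, the speech enhancement model's LSTM and fully connected topology might look like the following; the layer sizes are assumptions, and the output is arranged as two channels per frequency bin for the real part mask and the imaginary part mask:

```python
import torch
import torch.nn as nn

class SpeechEnhancementModel(nn.Module):
    """LSTM + fully connected enhancement model mapping [LPS, p(k, m)] to a complex mask."""

    def __init__(self, input_dim: int = 257 + 256, lstm_units: int = 512, freq_bins: int = 257):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, lstm_units, batch_first=True)  # LSTM layer
        self.fc = nn.Linear(lstm_units, 2 * freq_bins)                # fully connected layer

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, frames, input_dim) spliced [log|x(k, f)|^2, p(k, m)]
        local, _ = self.lstm(feats)          # local feature extraction per frame
        mask = self.fc(local)                # fused features -> output target
        # reshape to (batch, frames, freq_bins, 2): real and imaginary part masks
        return mask.view(feats.size(0), feats.size(1), -1, 2)
```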
Step 3735, obtain the enhanced speech signal from the output target of the speech enhancement model.
Specifically, as shown in fig. 12, in an implementation of an embodiment, step 3735 may include the following steps:
step 3735a, the output target of the speech enhancement model is multiplied by the frequency spectrum of the speech signal to obtain the frequency spectrum of the enhanced speech signal.
Step 3735c, inverse short-time fourier transform processing is performed on the frequency spectrum of the enhanced speech signal to obtain the enhanced speech signal.
As shown in FIG. 11, assume that the output target of the speech enhancement model is represented as r̂e(k, f) + j × îm(k, f), where r̂e(k, f) represents the real part mask of the output target and îm(k, f) represents the imaginary part mask of the output target.
At this time, ŝ(k, f) = [r̂e(k, f) + j × îm(k, f)] × x(k, f),
where ŝ(k, f) represents the spectrum of the enhanced speech signal and x(k, f) represents the spectrum of the speech signal.
Finally, inverse short-time Fourier transform processing is performed on the spectrum ŝ(k, f) of the enhanced speech signal to obtain the enhanced speech signal ŝ(t) = iSTFT(ŝ(k, f)),
where ŝ(t) represents the enhanced speech signal and iSTFT represents the inverse short-time Fourier transform algorithm.
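Steps 3735a and 3735c amount to a complex multiplication followed by an inverse STFT; a minimal sketch, assuming the same STFT parameters as in the earlier sketches:

```python
import numpy as np
import librosa

def apply_mask_and_reconstruct(mask_kf: np.ndarray, x_kf: np.ndarray,
                               hop_length: int = 256) -> np.ndarray:
    """Multiply the complex mask with the speech spectrum (step 3735a), then
    inverse-STFT back to the time domain (step 3735c).

    hop_length is an assumption and must match the forward STFT.
    """
    s_hat_kf = mask_kf * x_kf                       # spectrum of the enhanced speech signal
    # librosa.istft expects shape (freq bins, frames), hence the transpose
    return librosa.istft(s_hat_kf.T, hop_length=hop_length)
```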
In the implementation process, the speech enhancement of the speech enhancement model based on the neural network is realized, the speech quality is effectively improved, and the subsequent speech recognition is facilitated.
In addition, based on phoneme perception, the bottleneck characteristic of the phoneme is introduced into the speech enhancement processing, so that the electronic equipment can more effectively distinguish the unvoiced part and the noise in the speech, the speech enhancement effect of the unvoiced part is improved, and the improvement of the recognition rate of subsequent speech recognition is further facilitated.
Referring to fig. 13, in an exemplary embodiment, a speech recognition method is applied to an electronic device, for example, the electronic device is the smart sound box 130 of the implementation environment shown in fig. 1, and the structure of the electronic device may be as shown in fig. 2.
The speech recognition method may be performed by an electronic device and may include the steps of:
step 710, acquiring a voice signal.
Step 730, in the process that the acoustic model recognizes the speech signal as the first target phoneme, the output of one of the network layers in the acoustic model is used as the bottleneck feature of the first target phoneme.
And 750, performing voice enhancement processing on the voice signal according to the voice signal and the bottleneck characteristic of the first target phoneme to obtain an enhanced voice signal.
Step 770, inputting the enhanced speech signal to the acoustic model for speech recognition to obtain a second target phoneme.
As shown in fig. 14, in the speech enhancement process, on one hand, a bottleneck feature of the first target phoneme is introduced to implement speech enhancement based on phoneme perception, that is, speech recognition is utilized to assist speech enhancement.
On the other hand, in the speech recognition process, speech recognition is performed again based on the enhanced speech signal, thereby obtaining a second target phoneme as the speech recognition result. The second target phoneme is recognized with higher accuracy than the first target phoneme; that is, speech enhancement is used to assist speech recognition.
Through the above process, speech enhancement and speech recognition complement each other to form a beneficial iterative process, which improves both the speech enhancement effect and its robustness and, in terms of speech recognition performance, further effectively improves the recognition rate of speech recognition.
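Putting the pieces together, the enhance-then-recognize flow of steps 710 to 770 could be wired up as below. This reuses the helper sketches defined earlier in this description and is purely illustrative; all function and model names are assumptions:

```python
import numpy as np
import torch

def recognize_with_enhancement(x_t: np.ndarray, acoustic_model, enhancement_model):
    """Sketch of the mutually assisting pipeline: recognition assists enhancement,
    then the enhanced speech is recognized again (reuses earlier helper sketches)."""
    x_kf = speech_spectrum(x_t)                                   # spectrum of the speech signal
    feats = torch.from_numpy(acoustic_input_features(x_kf)).float().unsqueeze(0)
    _, p_km = acoustic_model(feats)                               # bottleneck feature of the first target phoneme
    lps = log_power_spectrum(x_kf)                                # amplitude spectrum feature
    enh_in = np.concatenate([lps, p_km.squeeze(0).detach().numpy()], axis=1)
    enh_in = torch.from_numpy(enh_in).float().unsqueeze(0)
    mask = enhancement_model(enh_in).squeeze(0).detach().numpy()  # (frames, freq bins, 2)
    complex_mask = mask[..., 0] + 1j * mask[..., 1]               # real + j * imaginary part masks
    s_hat_t = apply_mask_and_reconstruct(complex_mask, x_kf)      # enhanced speech signal
    s_hat_kf = speech_spectrum(s_hat_t)
    feats_2 = torch.from_numpy(acoustic_input_features(s_hat_kf)).float().unsqueeze(0)
    posteriors, _ = acoustic_model(feats_2)                       # second-pass phoneme posteriors
    return posteriors
```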
The following are embodiments of the apparatus of the present application, which may be used to perform the speech processing method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to method embodiments of the speech processing method of the present application.
Referring to fig. 15, in an exemplary embodiment, a speech processing apparatus 900 includes, but is not limited to: a voice acquisition module 910, a spectrum acquisition module 930, an input feature acquisition module 950, and a voice enhancement module 970.
The voice acquiring module 910 is configured to acquire a voice signal.
A spectrum obtaining module 930, configured to convert the voice signal from a time domain to a frequency domain to obtain a spectrum of the voice signal.
An input feature obtaining module 950, configured to extract a magnitude spectrum feature from a spectrum of the speech signal, and output one of the network layers in the acoustic model as a bottleneck feature of the target phoneme based on the acoustic model that identifies the speech signal as the target phoneme.
And the speech enhancement module 970 is configured to perform speech enhancement processing on the speech signal according to the amplitude spectrum feature and the bottleneck feature of the target phoneme to obtain an enhanced speech signal.
In an exemplary embodiment, the speech processing apparatus 900 is further configured to implement the following functions, including but not limited to:
and performing input feature extraction on the frequency spectrum of the voice signal, and inputting the extracted input feature into the convolution layer of the acoustic model.
And extracting convolution characteristics from the input characteristics of the acoustic model based on the convolution layer of the acoustic model, and outputting the convolution characteristics to the LSTM layer of the acoustic model.
And obtaining the bottleneck characteristic of the target phoneme based on the output of the LSTM layer of the acoustic model.
In an exemplary embodiment, the speech processing apparatus 900 is further configured to implement the following functions, including but not limited to:
and respectively calculating the Fbank characteristic, the first-order difference and the second-order difference of the voice signal according to the frequency spectrum of the voice signal.
And splicing the Fbank characteristic, the first order difference and the second order difference of the voice signal to obtain the input characteristic of the acoustic model.
Inputting input features of the acoustic model to a convolutional layer of the acoustic model.
In an exemplary embodiment, the speech processing apparatus 900 is further configured to implement the following functions, including but not limited to:
and carrying out short-time Fourier transform processing on the voice signal to obtain the frequency spectrum of the voice signal.
In an exemplary embodiment, the speech processing apparatus 900 is further configured to implement the following functions, including but not limited to:
wherein the frequency spectrum of the speech signal is squared.
And carrying out log operation on the operation result to obtain the amplitude spectrum characteristic.
In an exemplary embodiment, the speech processing apparatus 900 is further configured to implement the following functions, including but not limited to:
and splicing the magnitude spectrum characteristic and the bottleneck characteristic of the target phoneme to obtain the input characteristic of the speech enhancement model.
And performing voice enhancement processing on the input features based on a voice enhancement model constructed by the neural network to obtain the enhanced voice signal.
In an exemplary embodiment, the speech processing apparatus 900 is further configured to implement the following functions, including but not limited to:
the neural network is trained according to training samples to obtain the speech enhancement model, wherein the training samples comprise original speech signals and noisy speech signals generated by the original speech signals carrying noise signals.
In an exemplary embodiment, the speech processing apparatus 900 is further configured to implement the following functions, including but not limited to:
and obtaining the input characteristics and the output target of the neural network according to the original voice signal and the noisy voice signal in the training sample.
And combining the parameters of the neural network, and constructing a convergence function according to the input characteristics and the output target of the neural network.
And when the parameters of the neural network make the convergence function converge, converging the neural network to obtain the voice enhancement model.
In an exemplary embodiment, the speech processing apparatus 900 is further configured to implement the following functions, including but not limited to:
and converting the original voice signal and the voice signal with noise from a time domain to a frequency domain respectively.
And extracting the frequency spectrum of the voice signal with the noise to obtain a magnitude spectrum characteristic which is used as an input characteristic of the neural network.
And carrying out quotient calculation between the frequency spectrum of the original voice signal and the frequency spectrum of the voice signal with noise, and taking a calculation result as an output target of the neural network.
In an exemplary embodiment, the speech processing apparatus 900 is further configured to implement the following functions, including but not limited to:
and inputting the input features of the voice enhancement model into an LSTM layer of the voice enhancement model, and extracting local features.
And inputting the extracted local features into a full connection layer of the voice enhancement model, and fusing the local features to obtain an output target of the voice enhancement model.
The enhanced speech signal is derived from an output target of the speech enhancement model.
In an exemplary embodiment, the speech processing apparatus 900 is further configured to implement the following functions, including but not limited to:
and multiplying the output target of the voice enhancement model and the frequency spectrum of the voice signal to obtain the frequency spectrum of the enhanced voice signal.
And carrying out inverse short-time Fourier transform processing on the frequency spectrum of the enhanced voice signal to obtain the enhanced voice signal.
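For illustration, the two steps can be sketched as below; the window settings must mirror the analysis transform and are the same assumptions used in the earlier sketches.

from scipy.signal import istft

def reconstruct_enhanced_signal(output_target, speech_spectrum, sample_rate=16000):
    # Multiply the model output by the spectrum of the speech signal, then
    # apply an inverse short-time Fourier transform.
    enhanced_spectrum = output_target * speech_spectrum
    _, enhanced_signal = istft(enhanced_spectrum, fs=sample_rate, window="hann",
                               nperseg=400, noverlap=240)
    return enhanced_signal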
It should be noted that the division into the above functional modules is only an example of how the speech processing apparatus provided in the foregoing embodiment performs speech processing; in practical applications, the functions may be assigned to different functional modules as needed, that is, the internal structure of the speech processing apparatus may be divided into different functional modules to complete all or part of the functions described above.
In addition, the speech processing apparatus and the speech processing method provided by the above embodiments belong to the same concept; the specific manner in which each module operates has been described in detail in the method embodiments and is not repeated here.
Referring to FIG. 16, in an exemplary embodiment, a speech recognition device 1100 includes, but is not limited to: a speech acquisition module 1110, a bottleneck feature acquisition module 1130, a speech enhancement module 1150, and a speech recognition module 1170.
The voice acquiring module 1110 is configured to acquire a voice signal.
A bottleneck characteristic obtaining module 1130, configured to take an output of one of the network layers in the acoustic model as a bottleneck characteristic of the first target phoneme while the acoustic model recognizes the speech signal as the first target phoneme.
A speech enhancement module 1150, configured to perform speech enhancement processing on the speech signal according to the speech signal and the bottleneck characteristic of the first target phoneme, so as to obtain an enhanced speech signal.
And the speech recognition module 1170 is configured to input the enhanced speech signal to the acoustic model for speech recognition, so as to obtain a second target phoneme.
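As a high-level sketch of how these four modules interact, the function below chains them; the four callables are hypothetical stand-ins for the modules rather than names defined by the embodiment.

def recognize_with_enhancement(speech_signal, to_spectrum, get_bottleneck,
                               enhance, recognize):
    spectrum = to_spectrum(speech_signal)
    bottleneck = get_bottleneck(spectrum)      # first pass through the acoustic model
    enhanced_signal = enhance(spectrum, bottleneck)
    return recognize(enhanced_signal)          # second pass yields the second target phoneme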
In an exemplary embodiment, the speech recognition apparatus 1100 is further configured to perform functions including, but not limited to:
and performing first input feature extraction on the frequency spectrum of the voice signal, and inputting the extracted first input feature into the convolution layer of the acoustic model.
And extracting first convolution characteristics from the first input characteristics of the acoustic model based on the convolution layer of the acoustic model, and outputting the first convolution characteristics to the LSTM layer of the acoustic model.
And obtaining the bottleneck characteristic of the first target phoneme based on the output of the LSTM layer of the acoustic model.
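One way to illustrate taking an intermediate layer's output as the bottleneck characteristic is a forward hook on the acoustic model's LSTM layer, as sketched below; the helper name and the assumption that the layer is a torch.nn.LSTM module are choices made for the example.

import torch

def bottleneck_from_lstm(acoustic_model, lstm_layer, first_input_feature):
    captured = {}

    def hook(module, module_input, module_output):
        # torch.nn.LSTM returns (output, (h_n, c_n)); keep the per-frame output.
        captured["bottleneck"] = (module_output[0]
                                  if isinstance(module_output, tuple)
                                  else module_output)

    handle = lstm_layer.register_forward_hook(hook)
    with torch.no_grad():
        acoustic_model(first_input_feature)    # recognition pass
    handle.remove()
    return captured["bottleneck"]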
In an exemplary embodiment, the speech recognition apparatus 1100 is further configured to perform functions including, but not limited to:
and respectively calculating the Fbank characteristic, the first-order difference and the second-order difference of the voice signal according to the frequency spectrum of the voice signal.
And splicing the Fbank characteristic, the first order difference and the second order difference of the voice signal to obtain a first input characteristic of the acoustic model.
And inputting the first input characteristic of the acoustic model into the convolution layer of the acoustic model.
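For illustration, the Fbank characteristic and its first- and second-order differences can be computed and spliced as below; the frame settings, the 40 mel bands and the use of librosa are assumptions made for the example.

import numpy as np
import librosa

def first_input_characteristic(speech_signal, sample_rate=16000, n_mels=40):
    mel = librosa.feature.melspectrogram(y=speech_signal, sr=sample_rate,
                                         n_fft=400, hop_length=160, n_mels=n_mels)
    fbank = np.log(mel + 1e-10)                       # Fbank (log mel) features
    delta1 = librosa.feature.delta(fbank, order=1)    # first-order difference
    delta2 = librosa.feature.delta(fbank, order=2)    # second-order difference
    return np.concatenate([fbank, delta1, delta2], axis=0).T   # frames x (3 * n_mels)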
In an exemplary embodiment, the speech recognition apparatus 1100 is further configured to perform functions including, but not limited to:
and carrying out short-time Fourier transform processing on the voice signal to obtain the frequency spectrum of the voice signal.
In an exemplary embodiment, the speech recognition apparatus 1100 is further configured to perform functions including, but not limited to:
squaring the frequency spectrum of the speech signal, and carrying out a log operation on the squared result to obtain the amplitude spectrum characteristic;
and performing voice enhancement processing on the voice signal according to the amplitude spectrum characteristic and the bottleneck characteristic of the first target phoneme to obtain the enhanced voice signal.
In an exemplary embodiment, the speech recognition apparatus 1100 is further configured to perform functions including, but not limited to:
and splicing the magnitude spectrum characteristic and the bottleneck characteristic of the first target phoneme to obtain the input characteristic of the speech enhancement model.
And performing voice enhancement processing on the input features based on a voice enhancement model constructed by the neural network to obtain the enhanced voice signal.
In an exemplary embodiment, the speech recognition apparatus 1100 is further configured to perform functions including, but not limited to:
the neural network is trained according to a first training sample to obtain the speech enhancement model, wherein the first training sample comprises an original speech signal and a noisy speech signal generated by adding a noise signal to the original speech signal.
In an exemplary embodiment, the speech recognition apparatus 1100 is further configured to perform functions including, but not limited to:
and obtaining the input characteristics and the output target of the neural network according to the original voice signal and the noisy voice signal in the first training sample.
And combining the parameters of the neural network, and constructing a first convergence function according to the input characteristics and the output target of the neural network.
When the parameters of the neural network make the first convergence function converge, the converged neural network is taken as the speech enhancement model.
In an exemplary embodiment, the speech recognition apparatus 1100 is further configured to perform functions including, but not limited to:
and converting the original voice signal and the voice signal with noise from a time domain to a frequency domain respectively.
And extracting the frequency spectrum of the voice signal with the noise to obtain a magnitude spectrum characteristic which is used as an input characteristic of the neural network.
And carrying out quotient calculation between the frequency spectrum of the original voice signal and the frequency spectrum of the voice signal with noise, and taking a calculation result as an output target of the neural network.
In an exemplary embodiment, the speech recognition apparatus 1100 is further configured to perform functions including, but not limited to:
and inputting the input features of the voice enhancement model into an LSTM layer of the voice enhancement model, and extracting local features.
And inputting the extracted local features into a full connection layer of the voice enhancement model, and fusing the local features to obtain an output target of the voice enhancement model.
The enhanced speech signal is derived from an output target of the speech enhancement model.
In an exemplary embodiment, the speech recognition apparatus 1100 is further configured to perform functions including, but not limited to:
and multiplying the output target of the voice enhancement model and the frequency spectrum of the voice signal to obtain the frequency spectrum of the enhanced voice signal.
And carrying out inverse short-time Fourier transform processing on the frequency spectrum of the enhanced voice signal to obtain the enhanced voice signal.
In an exemplary embodiment, the speech recognition apparatus 1100 is further configured to perform functions including, but not limited to:
performing time-frequency transformation on the enhanced voice signal to obtain a frequency spectrum of the enhanced voice signal;
and performing second input feature extraction on the frequency spectrum of the enhanced voice signal, and inputting the extracted second input feature into the convolution layer of the acoustic model.
And extracting second convolution characteristics from second input characteristics of the acoustic model based on the convolution layer of the acoustic model, and outputting the second convolution characteristics to the LSTM layer of the acoustic model.
And based on a plurality of network nodes contained in an LSTM layer in the acoustic model, performing local feature extraction on the second convolution features, and transmitting the local features extracted by each network node to a full connection layer.
And based on the full connection layer of the acoustic model, performing forward propagation and local feature fusion on the local features extracted by each network node to obtain global features, and transmitting the global features to an activation function layer.
And performing phoneme classification prediction on the global features based on an activation function layer of the acoustic model to obtain phoneme classifications corresponding to the enhanced speech signal, wherein the phoneme classifications are used as the second target phonemes.
Outputting the second target phoneme as a speech recognition result based on an output layer of the acoustic model.
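The layer sequence just described (convolution, LSTM, full connection, activation function) can be sketched as a single model; every dimension, including the number of phoneme classes and the 120-dimensional input that would correspond to 40 Fbank bands plus their two differences, is an illustrative assumption.

import torch
import torch.nn as nn

class AcousticModelSketch(nn.Module):
    def __init__(self, feature_dim=120, channels=32, hidden_dim=256, num_phonemes=218):
        super().__init__()
        self.conv = nn.Conv2d(1, channels, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(channels * feature_dim, hidden_dim, batch_first=True)
        self.fully_connected = nn.Linear(hidden_dim, num_phonemes)

    def forward(self, input_feature):                  # (batch, frames, feature_dim)
        x = self.conv(input_feature.unsqueeze(1))      # convolution features
        x = x.permute(0, 2, 1, 3).flatten(2)           # back to (batch, frames, dim)
        local_features, _ = self.lstm(x)               # local feature extraction
        global_features = self.fully_connected(local_features)   # feature fusion
        return torch.softmax(global_features, dim=-1)  # phoneme classification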
In an exemplary embodiment, the speech recognition apparatus 1100 is further configured to perform functions including, but not limited to:
and training a machine learning model according to a second training sample to obtain the acoustic model, wherein the second training sample is a voice signal subjected to phoneme labeling.
In an exemplary embodiment, the speech recognition apparatus 1100 is further configured to perform functions including, but not limited to:
performing time-frequency transformation on the second training sample to obtain a frequency spectrum of the second training sample;
and extracting the spectrum of the second training sample to obtain the training characteristics of the acoustic model.
And combining the parameters of the machine learning model, and constructing a second convergence function according to the training characteristics of the second training sample and the labeled phonemes.
When the parameters of the machine learning model make the second convergence function converge, the converged machine learning model is taken as the acoustic model.
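As one concrete reading of the second convergence function, shown only as a sketch, frame-level cross entropy between the model's phoneme posteriors and the labeled phonemes can be used; the specific loss is an assumption, since the embodiment only requires some convergence function over the labeled samples.

import torch
import torch.nn as nn

def second_convergence_function(acoustic_model, training_features, phoneme_labels):
    # phoneme_labels: (batch, frames) tensor of integer phoneme indices.
    posteriors = acoustic_model(training_features)     # (batch, frames, classes)
    log_probs = torch.log(posteriors + 1e-10)
    return nn.NLLLoss()(log_probs.flatten(0, 1), phoneme_labels.flatten())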
It should be noted that the division into the above functional modules is only an example of how the speech recognition apparatus provided in the foregoing embodiment performs speech recognition; in practical applications, the functions may be assigned to different functional modules as needed, that is, the internal structure of the speech recognition apparatus may be divided into different functional modules to complete all or part of the functions described above.
In addition, the speech recognition apparatus provided by the above embodiment and the speech recognition method belong to the same concept; the specific manner in which each module operates has been described in detail in the method embodiment and is not repeated here.
Referring to fig. 17, in an exemplary embodiment, an electronic device 1000 includes at least one processor 1001, at least one memory 1002, and at least one communication bus 1003.
Wherein the memory 1002 has computer readable instructions stored thereon, the processor 1001 reads the computer readable instructions stored in the memory 1002 through the communication bus 1003.
The computer readable instructions, when executed by the processor 1001, implement the speech processing method or the speech recognition method in the above embodiments.
In an exemplary embodiment, a storage medium has a computer program stored thereon, and the computer program realizes the voice processing method or the voice recognition method in the above embodiments when executed by a processor.
The above description covers only preferred exemplary embodiments of the present application and is not intended to limit its embodiments. Those skilled in the art can readily make various changes and modifications according to the main concept and spirit of the present application, so the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (15)

1. A method of speech processing, comprising:
acquiring a frequency spectrum of a voice signal;
performing input feature extraction on the frequency spectrum of the voice signal, and inputting the extracted input features into a convolution layer of an acoustic model, wherein the acoustic model is used for recognizing the voice signal as a phoneme;
extracting convolution characteristics from input characteristics of the acoustic model based on the convolution layer of the acoustic model, and outputting the convolution characteristics to an LSTM layer of the acoustic model;
and obtaining the bottleneck characteristic of the target phoneme based on the output of the LSTM layer of the acoustic model.
2. The method of claim 1, wherein the performing input feature extraction on the spectrum of the speech signal, and inputting the extracted input features into the convolutional layer of the acoustic model comprises:
respectively calculating Fbank characteristics, first-order difference and second-order difference of the voice signals according to the frequency spectrums of the voice signals;
splicing the Fbank characteristics, the first order difference and the second order difference of the voice signals to obtain the input characteristics of the acoustic model;
inputting input features of the acoustic model to a convolutional layer of the acoustic model.
3. The method of claim 1, wherein the obtaining the spectrum of the speech signal comprises:
acquiring a voice signal;
and converting the voice signal from a time domain to a frequency domain to obtain a frequency spectrum of the voice signal.
4. The method of claim 1, wherein the converting the speech signal from the time domain to the frequency domain to obtain the spectrum of the speech signal comprises:
and carrying out short-time Fourier transform processing on the voice signal to obtain the frequency spectrum of the voice signal.
5. The method of claim 1, further comprising:
extracting the frequency spectrum of the voice signal to obtain a magnitude spectrum characteristic;
and performing voice enhancement processing on the voice signal according to the amplitude spectrum characteristic and the bottleneck characteristic of the target phoneme to obtain an enhanced voice signal.
6. The method of claim 5, wherein extracting the magnitude spectrum feature from the spectrum of the speech signal comprises:
carrying out squaring operation on the frequency spectrum of the voice signal;
and carrying out log operation on the operation result to obtain the amplitude spectrum characteristic.
7. The method of claim 6, wherein the log operation is performed on the operation result by the following formula to obtain the amplitude spectrum characteristic:
LPS = log|x(k, f)|²
wherein LPS represents the magnitude spectral feature, x (k, f) represents a spectrum of the speech signal, k represents an index of a frame in the spectrum, and f represents an index of a frequency in the spectrum.
8. The method of claim 5, wherein said performing speech enhancement processing on said speech signal according to said magnitude spectrum feature and said bottleneck feature of said target phoneme to obtain an enhanced speech signal comprises:
splicing the magnitude spectrum characteristic and the bottleneck characteristic of the target phoneme to obtain an input characteristic of a voice enhancement model;
and performing voice enhancement processing on the input features based on a voice enhancement model constructed by the neural network to obtain the enhanced voice signal.
9. The method of claim 8, wherein the method further comprises: training the neural network according to a training sample to obtain the speech enhancement model, wherein the training sample comprises an original speech signal and a noisy speech signal generated by adding a noise signal to the original speech signal;
the training the neural network according to the training sample to obtain the speech enhancement model comprises:
obtaining the input characteristics and the output target of the neural network according to the original voice signal and the voice signal with noise in the training sample;
combining the parameters of the neural network, and constructing a convergence function according to the input characteristics and the output target of the neural network;
and when the parameters of the neural network make the convergence function converge, taking the converged neural network as the speech enhancement model.
10. The method of claim 9, wherein obtaining input features and output targets for the neural network from the original speech signal and the noisy speech signal in the training samples comprises:
respectively converting the original voice signal and the voice signal with noise from a time domain to a frequency domain;
extracting the frequency spectrum of the voice signal with the noise to obtain a magnitude spectrum characteristic which is used as an input characteristic of the neural network;
and carrying out quotient calculation between the frequency spectrum of the original voice signal and the frequency spectrum of the voice signal with noise, and taking a calculation result as an output target of the neural network.
11. The method of claim 8, wherein the performing speech enhancement processing on the input feature based on the speech enhancement model constructed by the neural network to obtain the enhanced speech signal comprises:
inputting the input features of the voice enhancement model into an LSTM layer of the voice enhancement model, and extracting local features;
inputting the extracted local features into a full connection layer of the voice enhancement model, and fusing the local features to obtain an output target of the voice enhancement model;
the enhanced speech signal is derived from an output target of the speech enhancement model.
12. The method of claim 11, wherein said deriving the enhanced speech signal from an output target of the speech enhancement model comprises:
multiplying the output target of the voice enhancement model and the frequency spectrum of the voice signal to obtain the frequency spectrum of the enhanced voice signal;
and carrying out inverse short-time Fourier transform processing on the frequency spectrum of the enhanced voice signal to obtain the enhanced voice signal.
13. A speech processing apparatus, comprising:
the acquisition module is used for acquiring the frequency spectrum of the voice signal;
the first feature extraction module is used for performing input feature extraction on the frequency spectrum of the voice signal and inputting the extracted input features into a convolutional layer of an acoustic model, wherein the acoustic model is used for recognizing the voice signal as phonemes;
the second feature extraction module is used for extracting convolution features from the input features of the acoustic model based on the convolution layer of the acoustic model and outputting the convolution features to the LSTM layer of the acoustic model;
and the third feature extraction module is used for obtaining the bottleneck feature of the target phoneme based on the output of the LSTM layer of the acoustic model.
14. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 12.
15. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to carry out the method of any one of claims 1 to 12.
CN201910741367.1A 2019-05-21 2019-05-21 Voice processing method, device, medium and electronic equipment Active CN110415686B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910741367.1A CN110415686B (en) 2019-05-21 2019-05-21 Voice processing method, device, medium and electronic equipment

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910425255.5A CN110223680B (en) 2019-05-21 2019-05-21 Voice processing method, voice recognition device, voice recognition system and electronic equipment
CN201910741367.1A CN110415686B (en) 2019-05-21 2019-05-21 Voice processing method, device, medium and electronic equipment

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201910425255.5A Division CN110223680B (en) 2019-05-21 2019-05-21 Voice processing method, voice recognition device, voice recognition system and electronic equipment

Publications (2)

Publication Number Publication Date
CN110415686A true CN110415686A (en) 2019-11-05
CN110415686B CN110415686B (en) 2021-08-17

Family

ID=67821539

Family Applications (3)

Application Number Title Priority Date Filing Date
CN201910741794.XA Active CN110415687B (en) 2019-05-21 2019-05-21 Voice processing method, device, medium and electronic equipment
CN201910425255.5A Active CN110223680B (en) 2019-05-21 2019-05-21 Voice processing method, voice recognition device, voice recognition system and electronic equipment
CN201910741367.1A Active CN110415686B (en) 2019-05-21 2019-05-21 Voice processing method, device, medium and electronic equipment

Family Applications Before (2)

Application Number Title Priority Date Filing Date
CN201910741794.XA Active CN110415687B (en) 2019-05-21 2019-05-21 Voice processing method, device, medium and electronic equipment
CN201910425255.5A Active CN110223680B (en) 2019-05-21 2019-05-21 Voice processing method, voice recognition device, voice recognition system and electronic equipment

Country Status (1)

Country Link
CN (3) CN110415687B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111948622A (en) * 2020-08-07 2020-11-17 哈尔滨工程大学 Linear frequency modulation radar signal TOA estimation algorithm based on parallel CNN-LSTM
CN112201265A (en) * 2020-12-07 2021-01-08 成都启英泰伦科技有限公司 LSTM voice enhancement method based on psychoacoustic model
CN112750425A (en) * 2020-01-22 2021-05-04 腾讯科技(深圳)有限公司 Speech recognition method, speech recognition device, computer equipment and computer readable storage medium
CN112992126A (en) * 2021-04-22 2021-06-18 北京远鉴信息技术有限公司 Voice authenticity verification method and device, electronic equipment and readable storage medium
CN113178192A (en) * 2021-04-30 2021-07-27 平安科技(深圳)有限公司 Training method, device and equipment of speech recognition model and storage medium
CN115512693A (en) * 2021-06-23 2022-12-23 中移(杭州)信息技术有限公司 Audio recognition method, acoustic model training method, device and storage medium

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110808061B (en) * 2019-11-11 2022-03-15 广州国音智能科技有限公司 Voice separation method and device, mobile terminal and computer readable storage medium
CN110930995B (en) * 2019-11-26 2022-02-11 中国南方电网有限责任公司 Voice recognition model applied to power industry
CN111144347B (en) * 2019-12-30 2023-06-27 腾讯科技(深圳)有限公司 Data processing method, device, platform and storage medium
CN111261145B (en) * 2020-01-15 2022-08-23 腾讯科技(深圳)有限公司 Voice processing device, equipment and training method thereof
CN113763976B (en) * 2020-06-05 2023-12-22 北京有竹居网络技术有限公司 Noise reduction method and device for audio signal, readable medium and electronic equipment
WO2021248364A1 (en) * 2020-06-10 2021-12-16 深圳市大疆创新科技有限公司 Audio recording method and apparatus for unmanned aerial vehicle, chip, unmanned aerial vehicle, and system
CN111696532B (en) * 2020-06-17 2023-08-18 北京达佳互联信息技术有限公司 Speech recognition method, device, electronic equipment and storage medium
CN111986653B (en) * 2020-08-06 2024-06-25 杭州海康威视数字技术股份有限公司 Voice intention recognition method, device and equipment
CN113571063B (en) * 2021-02-02 2024-06-04 腾讯科技(深圳)有限公司 Speech signal recognition method and device, electronic equipment and storage medium
CN113823312B (en) * 2021-02-19 2023-11-07 北京沃东天骏信息技术有限公司 Speech enhancement model generation method and device, and speech enhancement method and device
CN112820300B (en) * 2021-02-25 2023-12-19 北京小米松果电子有限公司 Audio processing method and device, terminal and storage medium
CN113096682B (en) * 2021-03-20 2023-08-29 杭州知存智能科技有限公司 Real-time voice noise reduction method and device based on mask time domain decoder
CN113345461B (en) * 2021-04-26 2024-07-09 北京搜狗科技发展有限公司 Voice processing method and device for voice processing
CN114299977B (en) * 2021-11-30 2022-11-25 北京百度网讯科技有限公司 Method and device for processing reverberation voice, electronic equipment and storage medium
CN117219107B (en) * 2023-11-08 2024-01-30 腾讯科技(深圳)有限公司 Training method, device, equipment and storage medium of echo cancellation model

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9721559B2 (en) * 2015-04-17 2017-08-01 International Business Machines Corporation Data augmentation method based on stochastic feature mapping for automatic speech recognition
US9805305B2 (en) * 2015-08-07 2017-10-31 Yahoo Holdings, Inc. Boosted deep convolutional neural networks (CNNs)
US9693139B1 * 2016-03-30 2017-06-27 Ford Global Technologies, LLC Systems and methods for electronic sound enhancement tuning
JP6612796B2 (en) * 2017-02-10 2019-11-27 日本電信電話株式会社 Acoustic model learning device, speech recognition device, acoustic model learning method, speech recognition method, acoustic model learning program, and speech recognition program
RU2745298C1 (en) * 2017-10-27 2021-03-23 Фраунхофер-Гезелльшафт Цур Фердерунг Дер Ангевандтен Форшунг Е.Ф. Device, method, or computer program for generating an extended-band audio signal using a neural network processor
CN108694951B (en) * 2018-05-22 2020-05-22 华南理工大学 Speaker identification method based on multi-stream hierarchical fusion transformation characteristics and long-and-short time memory network
CN109346087B (en) * 2018-09-17 2023-11-10 平安科技(深圳)有限公司 Noise robust speaker verification method and apparatus against bottleneck characteristics of a network
CN109147810B * 2018-09-30 2019-11-26 百度在线网络技术(北京)有限公司 Establish the method, apparatus, equipment and computer storage medium of speech enhancement network
CN109671446B (en) * 2019-02-20 2020-07-14 西华大学 Deep learning voice enhancement method based on absolute auditory threshold

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170256254A1 (en) * 2016-03-04 2017-09-07 Microsoft Technology Licensing, Llc Modular deep learning model
CN107705801A (en) * 2016-08-05 2018-02-16 中国科学院自动化研究所 The training method and Speech bandwidth extension method of Speech bandwidth extension model
CN108170686A (en) * 2017-12-29 2018-06-15 科大讯飞股份有限公司 Text interpretation method and device
CN108417207A (en) * 2018-01-19 2018-08-17 苏州思必驰信息科技有限公司 A kind of depth mixing generation network self-adapting method and system
CN109192199A (en) * 2018-06-30 2019-01-11 中国人民解放军战略支援部队信息工程大学 A kind of data processing method of combination bottleneck characteristic acoustic model
US10210860B1 (en) * 2018-07-27 2019-02-19 Deepgram, Inc. Augmented generalized deep learning with special vocabulary

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ATSUNORI OGAWA et al.: "Robust Example Search Using Bottleneck Features for Example-based Speech Enhancement", INTERSPEECH 2016 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112750425A (en) * 2020-01-22 2021-05-04 腾讯科技(深圳)有限公司 Speech recognition method, speech recognition device, computer equipment and computer readable storage medium
CN112750425B (en) * 2020-01-22 2023-11-03 腾讯科技(深圳)有限公司 Speech recognition method, device, computer equipment and computer readable storage medium
CN111948622A (en) * 2020-08-07 2020-11-17 哈尔滨工程大学 Linear frequency modulation radar signal TOA estimation algorithm based on parallel CNN-LSTM
CN112201265A (en) * 2020-12-07 2021-01-08 成都启英泰伦科技有限公司 LSTM voice enhancement method based on psychoacoustic model
CN112992126A (en) * 2021-04-22 2021-06-18 北京远鉴信息技术有限公司 Voice authenticity verification method and device, electronic equipment and readable storage medium
CN113178192A (en) * 2021-04-30 2021-07-27 平安科技(深圳)有限公司 Training method, device and equipment of speech recognition model and storage medium
CN113178192B (en) * 2021-04-30 2024-05-24 平安科技(深圳)有限公司 Training method, device, equipment and storage medium of voice recognition model
CN115512693A (en) * 2021-06-23 2022-12-23 中移(杭州)信息技术有限公司 Audio recognition method, acoustic model training method, device and storage medium

Also Published As

Publication number Publication date
CN110415686B (en) 2021-08-17
CN110223680A (en) 2019-09-10
CN110223680B (en) 2021-06-29
CN110415687A (en) 2019-11-05
CN110415687B (en) 2021-04-13

Similar Documents

Publication Publication Date Title
CN110415686B (en) Voice processing method, device, medium and electronic equipment
Li et al. Two heads are better than one: A two-stage complex spectral mapping approach for monaural speech enhancement
JP7034339B2 (en) Audio signal processing system and how to convert the input audio signal
CN110600017B (en) Training method of voice processing model, voice recognition method, system and device
JP6903611B2 (en) Signal generators, signal generators, signal generators and programs
WO2022033327A1 (en) Video generation method and apparatus, generation model training method and apparatus, and medium and device
US11355097B2 (en) Sample-efficient adaptive text-to-speech
WO2017191249A1 (en) Speech enhancement and audio event detection for an environment with non-stationary noise
CN108922543B (en) Model base establishing method, voice recognition method, device, equipment and medium
WO2023116660A2 (en) Model training and tone conversion method and apparatus, device, and medium
EP4207195A1 (en) Speech separation method, electronic device, chip and computer-readable storage medium
WO2023030235A1 (en) Target audio output method and system, readable storage medium, and electronic apparatus
WO2024055752A1 (en) Speech synthesis model training method, speech synthesis method, and related apparatuses
EP4172987A1 (en) Speech enhancement
CN117059068A (en) Speech processing method, device, storage medium and computer equipment
WO2024114303A1 (en) Phoneme recognition method and apparatus, electronic device and storage medium
CN114065720A (en) Conference summary generation method and device, storage medium and electronic equipment
WO2021159734A1 (en) Data processing method and apparatus, device, and medium
JP2017191531A (en) Communication system, server, and communication method
CN111415662A (en) Method, apparatus, device and medium for generating video
US20230186943A1 (en) Voice activity detection method and apparatus, and storage medium
CN113314101A (en) Voice processing method and device, electronic equipment and storage medium
KR101862352B1 (en) Front-end processor for speech recognition, and apparatus and method of speech recognition using the same
CN112489678A (en) Scene recognition method and device based on channel characteristics
CN116741193B (en) Training method and device for voice enhancement network, storage medium and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant