WO2023005729A1 - Speech information processing method and apparatus, and electronic device


Info

Publication number
WO2023005729A1
Authority
WO
WIPO (PCT)
Prior art keywords
model
translation
speech information
speech
acoustic feature
Application number
PCT/CN2022/106426
Other languages
French (fr)
Chinese (zh)
Inventor
朱耀明
董倩倩
王明轩
李磊
Original Assignee
北京有竹居网络技术有限公司
Application filed by 北京有竹居网络技术有限公司
Publication of WO2023005729A1

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/005: Language recognition
    • G10L15/02: Feature extraction for speech recognition; Selection of recognition unit
    • G10L15/06: Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L15/063: Training
    • G10L15/08: Speech classification or search
    • G10L15/18: Speech classification or search using natural language modelling
    • G10L15/183: Speech classification or search using natural language modelling using context dependencies, e.g. language models

Definitions

  • the present disclosure relates to the technical field of artificial intelligence, and in particular to a voice information processing method, device and electronic equipment.
  • Embodiments of the present disclosure provide a voice information processing method, device and electronic equipment.
  • In a first aspect, an embodiment of the present disclosure provides a speech information processing method, including: acquiring first acoustic feature information of at least one frame of speech information to be translated; determining, under streaming speech recognition, whether the first acoustic feature information corresponds to complete semantics; and, in response to the determination result being yes, performing a translation operation on the first acoustic feature information to obtain a corresponding translation result.
  • In a second aspect, an embodiment of the present disclosure provides a speech information processing model, including an acoustic model, a semantic recognition model, and a translation model. The acoustic model is configured to receive, in the streaming speech recognition mode, at least one frame of speech information to be translated, and to extract first acoustic feature information of the at least one frame of speech information to be translated. The semantic recognition model is configured to receive, in the streaming speech recognition mode, the at least one frame of first acoustic feature information, and to determine whether the at least one frame of first acoustic feature information corresponds to complete semantics. The translation model is configured to determine, in the streaming speech recognition mode, a translation result of the first acoustic feature information.
  • In a third aspect, an embodiment of the present disclosure provides a training method for a speech information processing model, applied to the speech information processing model described in the second aspect, the speech information processing model including an acoustic model, a semantic recognition model, and a translation model. The method includes: acquiring a training sample set, where the training sample set includes a plurality of training sample pairs, and a training sample pair includes original speech information in a first language and a translation result, in a second language, corresponding to the original speech information; inputting the original speech information in the training sample pairs into the acoustic model after initial training, using the translation result as the output of the translation model, and training the speech information processing model to obtain a trained speech information processing model.
  • In a fourth aspect, an embodiment of the present disclosure provides a speech information processing device, including: an acquisition unit configured to acquire at least one frame of first acoustic feature information of speech information to be translated; a determination unit configured to determine, under streaming speech recognition, whether the at least one frame of first acoustic feature information satisfies a preset translation condition; and a translation unit configured to perform, in response to the determination result being yes, a translation operation on the first acoustic feature information to obtain a corresponding translation result.
  • In a fifth aspect, an embodiment of the present disclosure provides a training device for a speech information processing model, including: an acquisition unit configured to acquire at least one frame of first acoustic feature information of speech information to be translated; a determination unit configured to determine, under streaming speech recognition, whether the at least one frame of first acoustic feature information satisfies a preset translation condition; and a translation unit configured to perform, in response to the determination result being yes, a translation operation on the first acoustic feature information to obtain a corresponding translation result.
  • In a sixth aspect, an embodiment of the present disclosure provides an electronic device, including: one or more processors; and a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the speech information processing method described in the first aspect, or the training method of a speech information processing model described in the third aspect.
  • In a seventh aspect, an embodiment of the present disclosure provides a computer-readable medium on which a computer program is stored, where the program, when executed by a processor, implements the speech information processing method described in the first aspect, or the training method of a speech information processing model described in the third aspect.
  • FIG. 1 is a flowchart of an embodiment of a voice information processing method according to the present disclosure
  • FIG. 2 is a flow chart of another embodiment of the voice information processing method according to the present disclosure.
  • Fig. 3 shows a schematic diagram of the continuous integrate-and-fire (CIF) module processing acoustic feature information in the embodiment shown in Fig. 2;
  • Fig. 4 shows a schematic structural diagram of a speech information processing model according to the present disclosure
  • Fig. 5 shows a schematic flowchart of a training method of a speech information processing model according to the present disclosure
  • FIG. 6 is a schematic structural diagram of an embodiment of a speech information processing device according to the present disclosure.
  • FIG. 7 is a schematic structural diagram of an embodiment of a training device for a speech information processing model according to the present disclosure.
  • FIG. 8 is an exemplary system architecture in which the voice information processing method and the voice information processing device according to an embodiment of the present disclosure can be applied;
  • Fig. 9 is a schematic diagram of a basic structure of an electronic device provided according to an embodiment of the present disclosure.
  • the term “comprise” and its variations are open-ended, i.e., “including but not limited to”.
  • the term “based on” is “based at least in part on”.
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one further embodiment”; the term “some embodiments” means “at least some embodiments.” Relevant definitions of other terms will be given in the description below.
  • FIG. 1 shows the flow of an embodiment of the speech information processing method according to the present disclosure.
  • the speech information processing method comprises the following steps:
  • Step 101: acquire first acoustic feature information of at least one frame of speech information to be translated.
  • the voice information to be translated may be voice information in the first language.
  • the speech information to be translated may be the currently collected speech information of the speaker, or the pre-stored speech information of the speaker.
  • the first language here may be any language, such as English, Chinese, French, etc.
  • the translation result may correspond to the target language.
  • the target language can be, for example, any other language other than the first language.
  • Speech information may include sequences of words.
  • Various methods may be used to perform feature extraction on the above speech information to obtain the acoustic features of the speech information.
  • the acoustic features of the speech information here can be extracted from the logarithmic-mel spectrogram of the speech.
  • acoustic features can be extracted frame by frame from the speech information to be translated.
  • each audio frame may include multiple sampling points for discretizing a continuous audio signal, for example, an audio frame may include 1024 sampling points.
  • Each audio frame may correspond to an acoustic feature sequence.
  • the acoustic feature sequence of an audio frame may be composed of the acoustic features of its sampling points, and the acoustic features of each sampling point may include dimensions such as amplitude, phase, frequency, and correlation.
  • the first acoustic feature information of the at least one frame of speech information to be translated may include an acoustic feature sequence corresponding to the at least one frame of speech information to be translated.
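  • The disclosure does not prescribe a particular extraction pipeline. Purely as an illustration, a log-mel front end over 1024-sample analysis windows might look like the following sketch (librosa and every parameter value here are assumptions, not part of the patent):

```python
# Hypothetical front end: frame-level log-mel features for the speech
# information to be translated. librosa and all parameters are assumed.
import numpy as np
import librosa

def logmel_features(audio: np.ndarray, sr: int = 16000,
                    n_fft: int = 1024, hop: int = 256,
                    n_mels: int = 80) -> np.ndarray:
    """Return a (num_frames, n_mels) matrix of log-mel features."""
    mel = librosa.feature.melspectrogram(
        y=audio, sr=sr, n_fft=n_fft, hop_length=hop, n_mels=n_mels)
    # Log compression; the small floor avoids log(0).
    return np.log(mel + 1e-6).T

# Each row corresponds to one analysis window (n_fft samples, matching
# the 1024-sampling-point frame example above).
```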
  • Step 102: under streaming speech recognition, determine whether the first acoustic feature information corresponds to complete semantics.
  • the end-to-end speech translation involved in the present disclosure may include a streaming speech recognition mode and a non-streaming speech recognition mode.
  • The non-streaming speech translation mode refers to a translation mode in which all of the speech audio to be translated is received at one time, after which the translated text is generated.
  • The streaming speech translation mode refers to a translation mode in which translation is carried out while the speech stream is still being received.
  • When the current speech translation mode is the streaming speech recognition mode, it may be determined whether the at least one frame of first acoustic feature information satisfies a preset translation condition.
  • the preset translation condition here includes that at least one frame of first acoustic feature information corresponds to complete semantics.
  • If the at least one frame of first acoustic feature information corresponds to complete semantics, go to step 103. Otherwise, continue to obtain the acoustic feature sequence of at least one subsequent frame of speech information to be translated, and append that acoustic feature sequence to the acoustic feature sequence of the at least one frame already obtained, so as to obtain updated first acoustic feature information.
  • the first acoustic feature information is then translated.
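  • Putting steps 101 to 103 together, the streaming control flow can be pictured roughly as in the sketch below; extract_features, has_complete_semantics, and translate are hypothetical stand-ins for the acoustic model, the semantic recognition model, and the translation model discussed in this disclosure:

```python
# Sketch of the streaming decision loop (steps 101-103). The three
# callables are hypothetical stand-ins, not APIs defined by the patent.
from typing import Callable, Iterable, Iterator, List

def streaming_translate(
    frames: Iterable[list],                          # incoming audio frames
    extract_features: Callable[[list], list],        # acoustic model
    has_complete_semantics: Callable[[list], bool],  # semantic recognition model
    translate: Callable[[list], str],                # translation model
) -> Iterator[str]:
    buffer: List = []                                # accumulated feature sequence
    for frame in frames:
        buffer.extend(extract_features(frame))       # step 101
        if has_complete_semantics(buffer):           # step 102
            yield translate(buffer)                  # step 103
            buffer = []                              # start a new semantic unit
    if buffer:                                       # flush trailing audio
        yield translate(buffer)
```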
  • the above step 102 may include:
  • the above-mentioned first acoustic feature information is input into the pre-trained preset semantic recognition model, and the preset semantic recognition model is used to determine whether the first acoustic feature information corresponds to complete semantics.
  • the above-mentioned semantic recognition model may be various machine learning models, such as a convolutional neural network model.
  • the above machine learning model may be, for example, a Continuous Integrate-and-Fire (CIF) model.
  • the CIF model includes a second encoder.
  • The CIF model can compress the first acoustic feature information obtained in step 101, and judge whether the compressed first acoustic feature information has complete semantics.
  • Step 103: in response to the determination result being yes, perform a translation operation on the first acoustic feature information to obtain a corresponding translation result.
  • When the translation operation is performed on the first acoustic feature information, the expression, in the target language, of the semantics corresponding to the first acoustic feature information may be determined, so as to obtain the translation result.
  • the above-mentioned translation result may be a translation result in speech form, or a translation result in text form.
  • The speech information processing method acquires first acoustic feature information of at least one frame of speech information to be translated; determines, under streaming speech recognition, whether the first acoustic feature information corresponds to complete semantics; and, in response to the determination result being yes, performs a translation operation on the first acoustic feature information to obtain a corresponding translation result. In this way, in streaming translation, speech information to be translated that carries complete semantics is automatically identified and then translated.
  • Streaming speech recognition in the related art intercepts speech information at a fixed length or a fixed number of words and extracts the feature sequence of the intercepted speech for translation.
  • Because the speech is intercepted at a fixed length or a fixed number of words, the intercepted source-language speech to be translated may not carry complete semantics, so the resulting target-language translation may fail to reflect the original semantics, and the translation quality suffers.
  • Because the solution of this embodiment translates speech information that has been determined to carry complete semantics, a more accurate translation result can be obtained, improving the accuracy of the translation result.
  • Because this solution can translate the speech information to be translated as soon as it is determined to carry complete semantics, there is no need to wait until a fixed-length time window ends before translating; therefore, the output delay of the translation result can be reduced.
  • the above voice information processing method further includes the following steps:
  • Step 104: under non-streaming speech recognition, receive multiple frames of speech information to be translated until an end-of-input instruction of the speech information is detected, obtain second acoustic feature information of the multiple frames of speech information to be translated, and perform a translation operation on the second acoustic feature information to obtain a corresponding translation result.
  • the second acoustic feature information corresponding to all the speech information to be translated may be determined after receiving all the speech information to be translated.
  • The second acoustic feature information may include multiple feature sequences, which can form a word vector matrix. The word vector matrix is then analyzed and processed, and a translation operation is performed on the processed word vector matrix to obtain the translation result corresponding to all of the speech information to be translated.
  • the translation result may be a translation result in speech form or a translation result in text form.
  • all speech information to be translated is translated in a non-streaming speech recognition mode.
  • In this way, both a streaming translation mode and a non-streaming translation mode are provided, and the corresponding streaming or non-streaming translation is performed according to the translation mode selected by the user, so that a single set of translation solutions covers both streaming speech translation and non-streaming speech translation.
  • FIG. 2 shows a flow chart of another embodiment of the voice information processing method according to the present disclosure. As shown in Figure 2, the method includes the following steps:
  • Step 201: input at least one frame of speech information to be processed into a pre-trained acoustic model to obtain the first acoustic feature information.
  • the above-mentioned acoustic model may be various machine learning models, such as a recurrent neural network model and the like.
  • the aforementioned machine learning model may be a pre-trained machine learning model. This machine learning model can convert the input speech information into a sequence of features.
  • the above-mentioned acoustic model may be a masked acoustic model (Masked Acoustic Model, MAM).
  • A masked acoustic model may include an encoder and a prediction head.
  • During training, sample audio data can be selected as input, and the sample encoding corresponding to the sample audio data is used as the expected output.
  • The MAM can select 15% of the input audio frames and mask them, and the model predicts the masked frames from the surrounding context.
  • The prediction head contains a two-layer feed-forward network.
  • A preset loss function (such as an L1 loss) can be used when training the MAM, minimizing the gap between the predicted vectors of the 15% masked frames and the real frame vectors.
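  • As a rough sketch of this masking objective (PyTorch is used here purely as an illustration; the module shapes and the zeroing of masked frames are assumptions, not the patent's prescription), one training step might compute:

```python
# Illustrative masked-acoustic-model (MAM) loss: mask ~15% of the frames,
# predict them from context, and penalize only the masked positions.
import torch
import torch.nn as nn

def mam_loss(encoder: nn.Module, prediction_head: nn.Module,
             feats: torch.Tensor, mask_ratio: float = 0.15) -> torch.Tensor:
    """feats: (batch, frames, dim) acoustic features."""
    mask = torch.rand(feats.shape[:2]) < mask_ratio    # choose ~15% of frames
    corrupted = feats.masked_fill(mask.unsqueeze(-1), 0.0)
    hidden = encoder(corrupted)                        # predict from context
    pred = prediction_head(hidden)                     # e.g. two-layer feed-forward
    # L1 loss between predicted and real vectors at the masked frames only.
    return nn.functional.l1_loss(pred[mask], feats[mask])
```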
  • Acoustic feature information may include, but not limited to, amplitude, phase, frequency, and correlation of each dimension.
  • the above acoustic feature information also includes feature information of human vocalizations. The above acoustic feature information corresponding to the same utterance by different people may be different.
  • Step 202: input the first acoustic feature information into a pre-trained preset semantic recognition model, and use the preset semantic recognition model to determine whether the first acoustic feature information corresponds to complete semantics.
  • the first acoustic feature information output in step 201 can be input to the pre-trained preset semantic recognition model.
  • the first acoustic feature information is compressed by a preset semantic recognition model, and it is judged whether the compressed first acoustic feature information corresponds to complete semantics.
  • the aforementioned preset semantic recognition model may include, for example, a Continuous Integrate-and-Fire (CIF) module.
  • the CIF module can compress and align the first acoustic feature information output in step 201 .
  • the CIF may divide multiple feature values in the first acoustic feature information into two parts, one part is used for the current compression process, and the other part is used for the next compression process.
  • the acoustic feature information includes an acoustic feature sequence and a weight corresponding to each feature vector in the acoustic feature sequence.
  • the weight can represent the amount of information contained in the feature vector.
  • For example, the acoustic feature sequence may be [h1, h2, h3, h4, h5, ...], and the corresponding weight sequence may be [α1, α2, α3, α4, α5, ...].
  • The multiple feature vectors can be divided into two parts, with the first part used in the current compression process.
  • Each compression process can integrate two or more feature vectors into a new feature vector.
  • The feature vectors used in a compression process are taken in their original sequence order.
  • the weight ⁇ 2 of the feature vector h 2 is split into ⁇ 21 and ⁇ 22 .
  • ⁇ 4 is split into ⁇ 41 and ⁇ 42 .
  • the sum of the weights (including weight components) corresponding to each feature vector is 1 during this compression. For example, when the sum of the weight ⁇ 1 of the feature vector h1 and the weight component ⁇ 21 of the feature vector h2 is 1 , it can be determined that the object of this compression is the component of the feature vector h1 and the feature vector h2 .
  • the CIF module can determine whether the first acoustic feature information corresponds to complete semantics, and compress the first acoustic feature information corresponding to complete semantics.
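  • The weight-accumulation mechanics above can be rendered as a minimal numpy sketch of the integrate-and-fire rule (the threshold of 1.0 and the splitting of boundary weights follow the h/α example above; everything else, including the function name, is an assumption):

```python
# Minimal CIF sketch: integrate weighted feature vectors until the weights
# sum to 1.0, splitting the boundary weight (e.g. alpha2 -> alpha21 + alpha22),
# then fire one compressed vector. A single weight above the threshold is
# not handled in this simplified sketch.
import numpy as np

def cif_compress(h: np.ndarray, alpha: np.ndarray, threshold: float = 1.0):
    fired, acc_vec, acc_w = [], np.zeros(h.shape[1]), 0.0
    for vec, w in zip(h, alpha):
        if acc_w + w < threshold:                # keep integrating
            acc_vec += w * vec
            acc_w += w
        else:                                    # boundary: split the weight
            w1 = threshold - acc_w               # part used by this compression
            fired.append(acc_vec + w1 * vec)     # emit one compressed vector
            acc_vec = (w - w1) * vec             # remainder starts the next unit
            acc_w = w - w1
    out = np.stack(fired) if fired else np.empty((0, h.shape[1]))
    return out, acc_vec, acc_w   # acc_w < 1.0 means semantics not yet complete
```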
  • Step 203: in response to the determination result being yes, input the acoustic feature information into the pre-trained translation model to obtain a translation result corresponding to the acoustic feature information.
  • The translation model mentioned above may be, for example, any of various machine learning models, such as a hidden Markov model.
  • the above-mentioned translation model may be a transformer model.
  • the transformer model consists of an encoder and a decoder, receiving the word vector matrix output by the CIF module and completing the translation.
  • The process by which the above transformer model translates the word vector matrix output by the CIF module into the translation result in the target language can be the same as the process by which an existing transformer model translates a word vector matrix, and will not be described again here.
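  • For illustration only, the encoder-decoder interface could be exercised as below with the standard torch.nn.Transformer (the sizes and the random stand-in tensors are assumptions; the patent does not prescribe them):

```python
# Illustrative encoder-decoder transformer consuming the compressed
# vectors emitted by the CIF module. All dimensions are assumed.
import torch
import torch.nn as nn

d_model = 256
model = nn.Transformer(d_model=d_model, nhead=4,
                       num_encoder_layers=2, num_decoder_layers=2,
                       batch_first=True)
src = torch.randn(1, 7, d_model)   # 7 compressed acoustic vectors from CIF
tgt = torch.randn(1, 5, d_model)   # embeddings of the target-language prefix
out = model(src, tgt)              # (1, 5, d_model); project to the vocabulary
                                   # to obtain the next target token
```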
  • In the non-streaming mode, the above step 201 may include: receiving multiple frames of speech information to be translated until an end-of-input instruction of the speech information is detected, and inputting the received multiple frames of speech information to be translated into the pre-trained acoustic model to obtain second acoustic feature information of the multiple frames of speech information to be translated.
  • the above speech information processing method further includes step 204, inputting the second acoustic feature information into the preset semantic recognition model, compressing and aligning the second acoustic feature information, and obtaining the compressed second acoustic feature information.
  • Step 205: input the compressed second acoustic feature information into the pre-trained translation model to obtain a translation result corresponding to the second acoustic feature information.
  • The speech information processing method provided in this embodiment can improve the speed and accuracy of end-to-end speech translation by using the preset semantic recognition model to identify first acoustic feature information with complete semantics and using the translation model to obtain the translation result of the first acoustic feature information.
  • The speech information processing method provided in this embodiment supports both streaming speech translation and non-streaming speech translation.
  • the speech information processing model includes: an acoustic model 401 , a semantic recognition model 402 and a translation model 403 .
  • the above speech information processing model can provide translation mode options.
  • the translation mode selections include streaming speech recognition mode and non-streaming speech recognition mode.
  • the user can select a translation mode.
  • the speech information processing model works in a streaming speech recognition mode or a non-streaming speech recognition mode.
  • the speech information in the source language to be translated can be input into the speech information processing model, so that the speech information in the source language can be translated by the speech information processing model.
  • the acoustic model 401 is configured to receive at least one frame of speech information to be translated, and extract first acoustic feature information of the at least one frame of speech information to be translated.
  • the semantic recognition model 402 is configured to receive the at least one frame of first acoustic feature information, and determine whether the at least one frame of first acoustic feature information corresponds to complete semantics.
  • the semantic recognition model may include a Continuous Integrate-and-Fire (CIF) module.
  • the CIF module can perform semantic recognition on the first acoustic feature information, and compress and align the first acoustic feature information.
  • the functions completed by the CIF module can refer to the process shown in FIG. 3 .
  • the translation model 403 is configured to: when it is determined that the first acoustic feature information corresponds to complete semantics, determine a translation result corresponding to the first acoustic feature information; in the non-streaming mode, determine a translation result corresponding to the second acoustic feature information.
  • In the non-streaming mode, the acoustic model 401 is configured to receive multiple frames of speech information to be translated until an end-of-input instruction of the speech information is detected, and to extract second acoustic feature information of the multiple frames of speech information to be translated.
  • the semantic recognition model 402 is used for: compressing and aligning the second acoustic feature information;
  • the translation model 403 is used for: determining the translation result of the second acoustic feature information.
  • FIG. 5 shows the training method of the speech information processing model provided by the present disclosure.
  • Speech information processing models include acoustic models, semantic recognition models and translation models. As shown in Figure 5, the method includes the following steps.
  • Step 501: acquire a training sample set, where the training sample set includes a plurality of training sample pairs, and a training sample pair includes original speech information in a first language and a sample translation result, in a second language, corresponding to the original speech information.
  • Step 502: input the original speech information in the training sample pairs into the acoustic model after initial training, use the translation result as the output of the translation model, and train the speech information processing model to obtain the trained speech information processing model.
  • the aforementioned acoustic model may be a pre-trained model.
  • the above-mentioned acoustic model may be a recurrent neural network model or the like.
  • the above-mentioned acoustic model may be a masked acoustic model.
  • the speech information processing model may be trained using the second loss function and the third loss function.
  • the second loss function here may be a quality loss function
  • the third loss function may be a cross-entropy loss function.
  • The trigger for ending the training may be that the sum of the above-mentioned second loss function and third loss function is minimized.
  • Alternatively, the trigger for ending the training may be that the number of training iterations reaches a preset required number.
  • the above step 502 includes the following substeps:
  • The original speech information is input into the acoustic model after initial training, and the semantic recognition model is trained using the sample encoding of the sample translation result and the first loss function, to obtain the trained semantic recognition model.
  • the first loss function here may be a quality loss function.
  • the above semantic recognition model may include a Continuous Integrate-and-Fire (CIF) module.
  • The semantic recognition model can be trained first, and the speech information processing model is then trained as a whole; this can reduce the number of overall training iterations of the speech information processing model.
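  • The two-stage schedule just described might be organized as in the following sketch (module boundaries, loss callables, and step counts are all assumptions based on the description above):

```python
# Sketch of two-stage training: first the semantic recognition (CIF) model
# with the first loss, then the whole pipeline with the sum of the second
# and third losses. Every module and loss here is an assumed stand-in.
import torch

def train(acoustic, cif, translator, loader,
          first_loss, second_loss, third_loss,
          stage1_steps=1_000, stage2_steps=10_000):
    opt1 = torch.optim.Adam(cif.parameters())
    for _, (speech, target) in zip(range(stage1_steps), loader):
        opt1.zero_grad()
        loss = first_loss(cif(acoustic(speech)), target)   # stage 1: CIF only
        loss.backward()
        opt1.step()

    params = (list(acoustic.parameters()) + list(cif.parameters())
              + list(translator.parameters()))
    opt2 = torch.optim.Adam(params)
    for _, (speech, target) in zip(range(stage2_steps), loader):
        opt2.zero_grad()
        pred = translator(cif(acoustic(speech)))
        loss = second_loss(pred, target) + third_loss(pred, target)  # stage 2
        loss.backward()
        opt2.step()   # stop when the summed loss is minimized
```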
  • the speech information processing model obtained through the above training can work in either the streaming speech recognition mode or the non-streaming speech recognition mode.
  • The present disclosure provides some embodiments of speech information processing devices, which correspond to the method embodiments shown in FIG. 1; the devices can be specifically applied to various electronic devices.
  • the speech information processing apparatus of this embodiment includes: an acquisition unit 601 , a determination unit 602 , and a translation unit 603 .
  • the acquiring unit 601 is configured to acquire at least one frame of the first acoustic feature information of the speech information to be translated;
  • the determining unit 602 is configured to determine, under streaming speech recognition, whether the at least one frame of first acoustic feature information satisfies a preset translation condition;
  • the translation unit 603 is configured to perform a translation operation on the first acoustic feature information to obtain a corresponding translation result in response to the determination result being yes.
  • The specific processing of the acquisition unit 601, the determination unit 602, and the translation unit 603 of the speech information processing device, and the technical effects they bring, can refer to the related descriptions of step 101, step 102, and step 103 in the embodiment corresponding to FIG. 1, and will not be repeated here.
  • the obtaining unit 601 is further configured to: input at least one frame of speech information to be processed into a pre-trained acoustic model to obtain the first acoustic feature information.
  • the acoustic model includes a masked acoustic model.
  • the determining unit 602 is further configured to: input the first acoustic feature information into a pre-trained preset semantic recognition model, and use the preset semantic recognition model to determine whether the first acoustic feature information corresponds to complete semantics.
  • the preset semantic recognition model includes a continuous integrate-and-fire (CIF) module.
  • the speech information processing device further includes a non-streaming speech information processing unit (not shown in the figure), which is configured to: under non-streaming speech recognition, receive multiple frames of speech information to be translated until an end-of-input instruction of the speech information is detected, obtain second acoustic feature information of the multiple frames of speech information to be translated, and perform a translation operation on the second acoustic feature information to obtain a corresponding translation result.
  • the translating operation includes: inputting acoustic feature information into a pre-trained translation model to obtain a translation result corresponding to the acoustic feature information.
  • the present disclosure provides an embodiment of a speech information processing model training device, which corresponds to the method embodiment shown in FIG. 5 ,
  • the device can be specifically applied to various electronic devices.
  • the apparatus for training a speech information processing model in this embodiment includes: a sample acquisition unit 701 and a training unit 702 .
  • the sample obtaining unit 701 is used to obtain a training sample set
  • the training sample set includes a plurality of training sample pairs
  • the training sample pairs include original speech information in a first language and the translation result, in a second language, corresponding to the original speech information;
  • the training unit 702 is configured to input the original speech information in the training sample pairs into the acoustic model after initial training, use the translation result as the output of the translation model, and train the speech information processing model to obtain the trained speech information processing model.
  • the specific processing of the sample acquisition unit 701 and the training unit 702 of the speech information processing model training device and the technical effects brought about by them can refer to the relevant descriptions of step 501 and step 502 in the embodiment corresponding to FIG. 5 , which will not be repeated here.
  • the above training unit 702 also includes a first training subunit (not shown in the figure), which is configured to: obtain the sample encoding corresponding to the translation result; input the original speech information into the acoustic model after initial training; and train the semantic recognition model using the sample encoding of the translation result and the first loss function, to obtain the trained semantic recognition model.
  • the training unit 702 is further configured to: use the second loss function and the third loss function to train the speech information processing model to obtain a trained speech information processing model.
  • The speech information processing method, device, and electronic equipment provided by the embodiments of the present disclosure acquire first acoustic feature information of at least one frame of speech information to be translated; determine, under streaming speech recognition, whether the first acoustic feature information corresponds to complete semantics; and, in response to the determination result being yes, perform a translation operation on the first acoustic feature information to obtain a corresponding translation result. This realizes the automatic identification, in streaming translation, of speech information to be translated that carries complete semantics. Compared with translating audio intercepted at fixed time intervals, translating speech information with complete semantics yields more accurate translation results, improving the accuracy of the translation result. In addition, because this solution can translate the speech information as soon as it is determined to carry complete semantics, there is no need to wait until a fixed slicing period ends before translating, so the output delay of the translation result can be reduced.
  • FIG. 8 shows an exemplary system architecture in which the voice information processing method or the voice information processing apparatus according to an embodiment of the present disclosure can be applied.
  • the system architecture may include terminal devices 801 , 802 , and 803 , a network 804 , and a server 805 .
  • the network 804 is used as a medium for providing communication links between the terminal devices 801 , 802 , 803 and the server 805 .
  • Network 804 may include various connection types, such as wires, wireless communication links, or fiber optic cables, among others.
  • the terminal devices 801, 802, 803 can interact with the server 805 through the network 804 to receive or send messages and the like.
  • Various client applications, such as voice information collection applications, may be installed on the terminal devices 801, 802, and 803.
  • the client applications in the terminal devices 801, 802, and 803 can receive user instructions and complete corresponding functions according to the user instructions, such as sending the collected voice information to the server.
  • Terminal devices 801, 802, and 803 may be hardware or software.
  • the terminal devices 801, 802, and 803 may be various electronic devices that have display screens and support web browsing, including but not limited to smartphones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop portable computers, desktop computers, and the like.
  • If the terminal devices 801, 802, and 803 are software, they can be installed in the electronic devices listed above. They can be implemented as multiple pieces of software or software modules (such as software or software modules for providing distributed services), or as a single piece of software or software module. No specific limitation is made here.
  • the server 805 may be a server that provides various services, such as analyzing voice information sent by the terminal devices 801, 802, and 803 to obtain analysis results (translation results). And send the translation results to the terminal devices 801, 802, 803.
  • the voice information processing method provided by the embodiment of the present disclosure may be executed by a server, and correspondingly, the voice information processing apparatus may be set in the server 805 .
  • the voice information processing method can also be executed by a terminal device, and correspondingly, the voice information processing apparatus can be set in the terminal devices 801 , 802 , and 803 .
  • The numbers of terminal devices, networks, and servers in FIG. 8 are only illustrative; there can be any number of terminal devices, networks, and servers according to implementation needs.
  • FIG. 9 shows a schematic structural diagram of an electronic device (such as the server or terminal device in FIG. 8 ) suitable for implementing the embodiments of the present disclosure.
  • The terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (such as car navigation terminals), as well as fixed terminals such as digital TVs and desktop computers.
  • the electronic device shown in FIG. 9 is only an example, and should not limit the functions and application scope of the embodiments of the present disclosure.
  • An electronic device may include a processing device (such as a central processing unit or a graphics processing unit) 901, which may execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 902 or a program loaded from a storage device 908 into a random access memory (RAM) 903. The RAM 903 also stores various programs and data necessary for the operation of the electronic device 900.
  • the processing device 901, ROM 902, and RAM 903 are connected to each other through a bus 904.
  • An input/output (I/O) interface 905 is also connected to the bus 904 .
  • The following devices can be connected to the I/O interface 905: input devices 906 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, and gyroscope; output devices 907 including, for example, a liquid crystal display (LCD), speakers, and vibrators; storage devices 908 including, for example, magnetic tapes and hard disks; and a communication device 909.
  • the communication means 909 may allow the electronic device to communicate with other devices wirelessly or by wire to exchange data. While FIG. 9 shows an electronic device having various means, it is to be understood that implementing or having all of the means shown is not a requirement. More or fewer means may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product, which includes a computer program carried on a non-transitory computer readable medium, where the computer program includes program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from a network via communication means 909, or from storage means 908, or from ROM 902.
  • When the computer program is executed by the processing device 901, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are performed.
  • the above-mentioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the above two.
  • a computer readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to, electrical connections with one or more wires, portable computer diskettes, hard disks, random access memory (RAM), read-only memory (ROM), erasable Programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), optical storage device, magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted by any appropriate medium, including but not limited to wires, optical cables, RF (radio frequency), etc., or any suitable combination of the above.
  • The client and the server can communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication (e.g., a communication network) in any form or medium.
  • Examples of communication networks include local area networks (“LANs”), wide area networks (“WANs”), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any networks currently known or developed in the future.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device, or may exist independently without being incorporated into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs, and when the above-mentioned one or more programs are executed by the electronic device, the electronic device: acquires the first acoustic feature information of at least one frame of speech information to be translated; In the voice recognition mode, it is determined whether the first acoustic feature information corresponds to complete semantics; in response to the determination result being yes, a translation operation is performed on the first acoustic feature information to obtain a corresponding translation result. or
  • the training sample set includes a plurality of training sample pairs, the training sample pairs include the original speech information of the first language and the translation result corresponding to the original speech information of the second language; the original training sample pairs Speech information is input into the acoustic model after initial training, and the translation result is used as an output of the translation model to train the speech information processing model to obtain a trained speech information processing model.
  • Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • The remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
  • each block in a flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • The units involved in the embodiments described in the present disclosure may be implemented by software or by hardware. The name of a unit does not, under certain circumstances, constitute a limitation of the unit itself.
  • For example, and without limitation, exemplary types of hardware logic components that may be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), and Complex Programmable Logic Devices (CPLDs).
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • a machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatus, or devices, or any suitable combination of the foregoing.
  • More specific examples of machine-readable storage media would include one or more wire-based electrical connections, portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, compact disk read-only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.


Abstract

A speech information processing method. The method comprises: acquiring first acoustic feature information of at least one frame of speech information to be translated (101); under streaming speech recognition, determining whether the first acoustic feature information corresponds to complete semantics (102); and in response to a determination result being yes, performing a translation operation on the first acoustic feature information, so as to obtain a corresponding translation result (103). Therefore, the accuracy of a translation result can be improved, and the output delay of the translation result can be reduced. The present application further relates to a speech information processing apparatus, a speech information processing model, a speech information processing model training method, a speech information processing model training apparatus, an electronic device, and a computer-readable medium.

Description

Speech information processing method, device and electronic equipment
Cross-Reference to Related Applications
This application claims priority to the Chinese patent application with application number 202110860672.X, filed on July 28, 2021 and entitled "Speech Information Processing Method, Device and Electronic Equipment", the entire content of which is incorporated by reference into this application.
Technical Field
The present disclosure relates to the technical field of artificial intelligence, and in particular to a speech information processing method, device and electronic equipment.
Background
This section is intended to provide a background or context for the implementations of the invention recited in the claims. The descriptions herein are not admitted to be prior art by inclusion in this section.
Speech translation (ST) aims to translate source-language speech into target-language text, and is widely used in scenarios such as conference speeches, business meetings, cross-border customer service, and overseas travel.
Traditional speech translation models usually first use a speech recognition model to convert speech into source-language text, and then use a machine translation model to translate the recognized source-language text into the target language.
Summary
This Summary is provided to introduce, in a simplified form, concepts that are described in detail in the Detailed Description below. This Summary is not intended to identify key features or essential features of the claimed technical solution, nor is it intended to be used to limit the scope of the claimed technical solution.
Embodiments of the present disclosure provide a speech information processing method, device and electronic equipment.
In a first aspect, an embodiment of the present disclosure provides a speech information processing method, including: acquiring first acoustic feature information of at least one frame of speech information to be translated; determining, under streaming speech recognition, whether the first acoustic feature information corresponds to complete semantics; and, in response to the determination result being yes, performing a translation operation on the first acoustic feature information to obtain a corresponding translation result.
In a second aspect, an embodiment of the present disclosure provides a speech information processing model, including an acoustic model, a semantic recognition model, and a translation model. The acoustic model is configured to receive, in the streaming speech recognition mode, at least one frame of speech information to be translated, and to extract first acoustic feature information of the at least one frame of speech information to be translated. The semantic recognition model is configured to receive, in the streaming speech recognition mode, the at least one frame of first acoustic feature information, and to determine whether the at least one frame of first acoustic feature information corresponds to complete semantics. The translation model is configured to determine, in the streaming speech recognition mode, a translation result of the first acoustic feature information.
In a third aspect, an embodiment of the present disclosure provides a training method for a speech information processing model, applied to the speech information processing model described in the second aspect, the speech information processing model including an acoustic model, a semantic recognition model, and a translation model. The method includes: acquiring a training sample set, where the training sample set includes a plurality of training sample pairs, and a training sample pair includes original speech information in a first language and a translation result, in a second language, corresponding to the original speech information; inputting the original speech information in the training sample pairs into the acoustic model after initial training, using the translation result as the output of the translation model, and training the speech information processing model to obtain a trained speech information processing model.
In a fourth aspect, an embodiment of the present disclosure provides a speech information processing device, including: an acquisition unit configured to acquire at least one frame of first acoustic feature information of speech information to be translated; a determination unit configured to determine, under streaming speech recognition, whether the at least one frame of first acoustic feature information satisfies a preset translation condition; and a translation unit configured to perform, in response to the determination result being yes, a translation operation on the first acoustic feature information to obtain a corresponding translation result.
In a fifth aspect, an embodiment of the present disclosure provides a training device for a speech information processing model, including: an acquisition unit configured to acquire at least one frame of first acoustic feature information of speech information to be translated; a determination unit configured to determine, under streaming speech recognition, whether the at least one frame of first acoustic feature information satisfies a preset translation condition; and a translation unit configured to perform, in response to the determination result being yes, a translation operation on the first acoustic feature information to obtain a corresponding translation result.
In a sixth aspect, an embodiment of the present disclosure provides an electronic device, including: one or more processors; and a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the speech information processing method described in the first aspect, or the training method of a speech information processing model described in the third aspect.
In a seventh aspect, an embodiment of the present disclosure provides a computer-readable medium on which a computer program is stored, where the program, when executed by a processor, implements the speech information processing method described in the first aspect, or the training method of a speech information processing model described in the third aspect.
Brief Description of the Drawings
The above and other features, advantages, and aspects of the embodiments of the present disclosure will become more apparent with reference to the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numerals denote the same or similar elements. It should be understood that the drawings are schematic and that parts and elements are not necessarily drawn to scale.
FIG. 1 is a flowchart of an embodiment of a speech information processing method according to the present disclosure;

FIG. 2 is a flowchart of another embodiment of the speech information processing method according to the present disclosure;

FIG. 3 is a schematic diagram of the continuous integrate-and-fire module in the embodiment shown in FIG. 2 processing acoustic feature information;

FIG. 4 is a schematic structural diagram of a speech information processing model according to the present disclosure;

FIG. 5 is a schematic flowchart of a training method for a speech information processing model according to the present disclosure;

FIG. 6 is a schematic structural diagram of an embodiment of a speech information processing apparatus according to the present disclosure;

FIG. 7 is a schematic structural diagram of an embodiment of a training apparatus for a speech information processing model according to the present disclosure;

FIG. 8 is an exemplary system architecture to which the speech information processing method and speech information processing apparatus of an embodiment of the present disclosure can be applied;

FIG. 9 is a schematic diagram of the basic structure of an electronic device provided according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for exemplary purposes only and are not intended to limit the scope of protection of the present disclosure.

It should be understood that the steps described in the method embodiments of the present disclosure may be performed in different orders and/or in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.

As used herein, the term "include" and its variants are open-ended, meaning "including but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one further embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.

It should be noted that concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different apparatuses, modules, or units, and are not used to limit the order of, or interdependence between, the functions performed by these apparatuses, modules, or units.

It should be noted that the modifiers "a", "an", and "a plurality of" mentioned in the present disclosure are illustrative rather than restrictive, and those skilled in the art should understand that, unless the context clearly indicates otherwise, they should be understood as "one or more".

The names of messages or information exchanged between multiple apparatuses in the embodiments of the present disclosure are for illustrative purposes only and are not used to limit the scope of these messages or information.
Recently, end-to-end translation methods have been applied to both non-streaming and streaming speech translation. The inventors found that some schemes applying the end-to-end translation method to streaming translation slice the source audio at fixed time intervals and treat each speech slice as one translation token. In real environments, however, the length of speech is variable, so such end-to-end speech translation either introduces latency or causes translation errors. To solve the above problems, the present disclosure proposes the following scheme. Please refer to FIG. 1, which shows the flow of an embodiment of the speech information processing method according to the present disclosure. As shown in FIG. 1, the speech information processing method includes the following steps:
Step 101: acquire first acoustic feature information of at least one frame of speech information to be translated.
An end-to-end speech recognition model can map audio directly to characters or words. The above speech information to be translated may be speech information in a first language. The speech information to be translated may be currently collected speech of a speaker, or pre-stored speech of a speaker. The first language here may be any language, such as English, Chinese, or French. The translation result may correspond to a target language, which may be, for example, any language other than the first language.

The speech information may include a word sequence. Various methods may be used to perform feature extraction on the above speech information to obtain acoustic features of the speech information. The acoustic features of the speech information here may be extracted from the log-Mel spectrogram of the speech.

In practice, acoustic features may be extracted from the speech information to be translated frame by frame. In some application scenarios, each audio frame may include multiple sampling points obtained by discretizing a continuous audio signal; for example, one audio frame may include 1024 sampling points. Each audio frame may correspond to an acoustic feature sequence. The acoustic feature sequence of an audio frame may be composed of the acoustic features of each sampling point, and the acoustic features of each sampling point may include amplitude, phase, frequency, and correlations across dimensions.

The first acoustic feature information of the above at least one frame of speech information to be translated may include the acoustic feature sequence corresponding to the at least one frame of speech information to be translated.
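As a concrete illustration of such a front end (not part of the claimed method), the following sketch computes frame-wise log-Mel features with torchaudio; the 1024-sample frame mirrors the example above, while the hop length, Mel-bin count, and sampling rate are assumptions:

```python
import torch
import torchaudio

# Frame-based log-Mel feature extraction: each analysis frame covers
# 1024 samples, and every frame yields one 80-dimensional feature vector.
def extract_log_mel(waveform: torch.Tensor, sample_rate: int = 16000) -> torch.Tensor:
    mel = torchaudio.transforms.MelSpectrogram(
        sample_rate=sample_rate,
        n_fft=1024,      # one analysis frame of 1024 sampling points
        hop_length=512,  # assumed 50% overlap between frames
        n_mels=80,       # assumed Mel-bin count, common in speech front ends
    )(waveform)
    return torch.log(mel + 1e-6)  # log compression -> log-Mel spectrogram

# waveform: (channels, num_samples) -> output: (channels, 80, num_frames)
```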
Step 102: under streaming speech recognition, determine whether the first acoustic feature information corresponds to complete semantics.

The end-to-end speech translation involved in the present disclosure may include a streaming speech recognition mode and a non-streaming speech recognition mode.

Generally, the non-streaming speech translation mode refers to a translation mode in which all speech audio to be translated is received at once before the translated text is generated. The streaming speech translation mode refers to a translation mode in which translation is completed while the speech stream is being received.

If the current speech translation mode is the streaming speech recognition mode, it may be determined whether the at least one frame of first acoustic feature information satisfies a preset translation condition.

The preset translation condition here includes that the at least one frame of first acoustic feature information corresponds to complete semantics.
If the at least one frame of first acoustic feature information corresponds to complete semantics, the method proceeds to step 103. Otherwise, the acoustic feature sequence of at least one subsequent frame of speech information to be translated continues to be acquired and is appended to the acoustic feature sequence of the previous at least one frame of speech information to be translated, to obtain updated first acoustic feature information.

That is, the first acoustic feature information is translated only when the feature sequence included in the at least one frame of first acoustic feature information corresponds to one complete semantic unit.
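Purely to illustrate this accumulate-until-complete behavior, the following sketch shows one possible control loop; the three model objects and their interfaces (extract, is_complete, translate) are hypothetical stand-ins, not the modules defined in this disclosure:

```python
# Hypothetical streaming loop: buffer frame features until the semantic
# model reports complete semantics, then translate and reset the buffer.
def streaming_translate(frames, acoustic_model, semantic_model, translation_model):
    buffered = []  # first acoustic feature information accumulated so far
    for frame in frames:  # frames of speech information to be translated
        buffered.append(acoustic_model.extract(frame))
        if semantic_model.is_complete(buffered):  # complete semantics reached
            yield translation_model.translate(buffered)
            buffered = []  # start accumulating the next semantic unit
```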
In some optional implementations, the above step 102 may include:

inputting the above first acoustic feature information into a pre-trained preset semantic recognition model, and using the preset semantic recognition model to determine whether the first acoustic feature information corresponds to complete semantics.
The above semantic recognition model may be any of various machine learning models, such as a convolutional neural network model. In some application scenarios, the above machine learning model may be, for example, a Continuous Integrate-and-Fire (CIF) model. The CIF model includes a second encoder. The CIF model can compress the first acoustic feature information obtained in step 101, and can judge whether the compressed first acoustic feature information has complete semantics.
Step 103: in response to the determination result being yes, perform a translation operation on the first acoustic feature information to obtain a corresponding translation result.

Here, performing the translation operation on the first acoustic feature information may include determining, according to the semantics corresponding to the first acoustic feature information, the expression of those semantics in the target language, thereby obtaining the translation result.

The above translation result may be a translation result in speech form or a translation result in text form.
The speech information processing method provided in this embodiment acquires first acoustic feature information of at least one frame of speech information to be translated; determines, under streaming speech recognition, whether the first acoustic feature information corresponds to complete semantics; and, in response to the determination result being yes, performs a translation operation on the first acoustic feature information to obtain a corresponding translation result. In this way, under streaming speech recognition, translation is performed once the first acoustic feature information of the at least one frame of speech information to be translated corresponds to complete semantics; that is, in streaming translation, speech information to be translated that has complete semantics is automatically determined and then translated. In the related art, streaming speech recognition intercepts speech information by fixed duration or fixed word count and extracts its feature sequence for translation. Because the source-language speech intercepted by fixed duration or fixed word count may not have complete semantics, the resulting target-language translation may fail to reflect the original semantics of the speech to be translated, yielding poor translation results. Since the scheme of this embodiment translates speech information that has complete semantics, it can obtain more accurate translation results and improves the accuracy of the translation results.
In addition, since this scheme can translate as soon as speech information with complete semantics is determined, there is no need to wait until the fixed time period ends before translating. The output latency of the translation results can therefore be reduced.
In some optional implementations of this embodiment, the above speech information processing method further includes the following step:

Step 104: under non-streaming speech recognition, receive multiple frames of speech information to be translated until an input end instruction for the speech information is detected, acquire second acoustic feature information of the multiple frames of speech information to be translated, and perform a translation operation on the second acoustic feature information to obtain a corresponding translation result.
In these optional implementations, in the non-streaming speech recognition mode, the second acoustic feature information corresponding to all of the speech information to be translated may be determined after all of the speech information to be translated has been received. The second acoustic feature information may include multiple feature sequences, which may form a word vector matrix. The word vector matrix is then analyzed and processed, and a translation operation is performed on the processed word vector matrix to obtain the translation result corresponding to all of the speech information to be translated. The translation result may be in speech form or in text form.

These optional implementations realize translation of all of the speech information to be translated in the non-streaming speech recognition mode.

That is, the same translation scheme provides both a streaming translation mode and a non-streaming translation mode, and performs the corresponding streaming or non-streaming translation under the translation mode selected by the user, thereby accommodating both streaming and non-streaming speech translation with a single scheme.
Please refer to FIG. 2, which shows a flowchart of another embodiment of the speech information processing method according to the present disclosure. As shown in FIG. 2, the method includes the following steps:

Step 201: input at least one frame of speech information to be processed into a pre-trained acoustic model to obtain the first acoustic feature information.

The above acoustic model may be any of various machine learning models, such as a recurrent neural network model. The above machine learning model may be a pre-trained machine learning model that converts input speech information into a feature sequence.
In some application scenarios, the above acoustic model may be a Masked Acoustic Model (MAM). The masked acoustic model may include an encoder and a prediction head. When training the MAM, sample audio data may be selected as the input, and the sample encoding corresponding to the sample audio data may be used as the output. During training, the MAM may select 15% of the input audio frames and mask them, and the model predicts the masked frames according to the context of the training text. The prediction head contains a two-layer feed-forward network. When training the MAM, a preset loss function (for example, an L1 loss) may be used to minimize the gap between the vectors predicted for the 15% of masked frames and the real frame vectors.
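A minimal sketch of the masking step and L1 objective described above might look as follows; the encoder and prediction head are placeholder modules, and zeroing whole frames is one assumed way to realize the masking:

```python
import torch
import torch.nn.functional as F

# Illustrative masked-acoustic-model (MAM) training step: mask 15% of the
# input frames, predict them from context, and apply an L1 loss on the
# masked positions only.
def mam_loss(frames: torch.Tensor, encoder, prediction_head) -> torch.Tensor:
    # frames: (batch, time, feat)
    mask = torch.rand(frames.shape[:2], device=frames.device) < 0.15
    masked = frames.masked_fill(mask.unsqueeze(-1), 0.0)  # hide 15% of frames
    predicted = prediction_head(encoder(masked))          # reconstruct them
    # L1 gap between predicted vectors and real vectors of the masked frames.
    return F.l1_loss(predicted[mask], frames[mask])
```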
Acoustic feature information may include, but is not limited to, amplitude, phase, frequency, and correlations across dimensions. In addition to features of the information itself, the above acoustic feature information also includes feature information of the human voice; the acoustic feature information corresponding to the same utterance may therefore differ between different speakers.
Step 202: input the first acoustic feature information into a pre-trained preset semantic recognition model, and use the preset semantic recognition model to determine whether the first acoustic feature information corresponds to complete semantics.

In the streaming translation mode, the first acoustic feature information output in step 201 may be input into the pre-trained preset semantic recognition model. The preset semantic recognition model compresses the first acoustic feature information and judges whether the compressed first acoustic feature information corresponds to complete semantics.

The above preset semantic recognition model may include, for example, a Continuous Integrate-and-Fire (CIF) module. The CIF module can compress and align the first acoustic feature information output in step 201. When compressing the first acoustic feature information, the CIF module may divide the feature values in the first acoustic feature information into two parts, one part used for the current compression process and the other part used for the next compression process.
Please refer to FIG. 3, which shows a schematic diagram of the CIF module processing acoustic feature information. The acoustic feature information includes an acoustic feature sequence and a weight corresponding to each feature vector in the sequence; a weight represents the amount of information contained in its feature vector. As shown in FIG. 3, the acoustic feature sequence may be [h_1, h_2, h_3, h_4, h_5, ...], and the corresponding weight sequence may be [α_1, α_2, α_3, α_4, α_5, ...]. The feature vectors may be divided into two parts, the first part being used for the current compression. Each compression may integrate two or more feature vectors into one new feature vector; the feature vectors involved in a compression are arranged in their original order. As shown in FIG. 3, the weight α_2 of feature vector h_2 is split into α_21 and α_22, and α_4 is split into α_41 and α_42. In each compression, the sum of the weights (including split weight components) of the participating feature vectors is 1. For example, when the sum of the weight α_1 of feature vector h_1 and the weight component α_21 of feature vector h_2 equals 1, the objects of this compression are determined to be feature vector h_1 and the corresponding component of feature vector h_2. As shown in FIG. 3, the results of the two compressions are: c_1 = α_1 × h_1 + α_21 × h_2 and c_2 = α_22 × h_2 + α_3 × h_3 + α_41 × h_4, where α_22 + α_3 + α_41 = 1. When the weights of several feature vectors (including the weight components of split weights) sum to 1, these feature vectors can be considered to have complete semantics, and they can be integrated.
Through the above process, the CIF module can determine whether the first acoustic feature information corresponds to complete semantics, and compresses the first acoustic feature information that corresponds to complete semantics.
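For illustration only, a minimal integrate-and-fire routine matching the weight-accumulation behavior of FIG. 3 might look like the following sketch; the firing threshold of 1.0 follows the description above, and the simplifying assumption that each individual weight is below the threshold is made for this example:

```python
import torch

# Minimal sketch of continuous integrate-and-fire (CIF) compression.
# Weights are accumulated until they sum to the threshold (1.0); the
# integrated vectors then "fire" as one compressed feature c_k, and the
# leftover weight component (e.g. alpha_22 in FIG. 3) is carried over.
def cif_compress(h: torch.Tensor, alpha: torch.Tensor, threshold: float = 1.0):
    outputs = []
    acc_weight = 0.0
    acc_vec = torch.zeros_like(h[0])
    for h_t, a_t in zip(h, alpha):        # h: (T, D), alpha: (T,)
        a_t = float(a_t)
        if acc_weight + a_t < threshold:  # not yet one complete semantic unit
            acc_vec = acc_vec + a_t * h_t
            acc_weight += a_t
        else:                             # split the weight and fire
            part = threshold - acc_weight           # e.g. alpha_21
            outputs.append(acc_vec + part * h_t)    # e.g. c_1
            acc_vec = (a_t - part) * h_t            # remainder, e.g. alpha_22
            acc_weight = a_t - part
    return torch.stack(outputs) if outputs else h.new_zeros(0, h.shape[1])
```

On the toy sequence of FIG. 3, this loop would emit c_1 = α_1 × h_1 + α_21 × h_2 first and then continue integrating from the remainder α_22 × h_2, matching the two compression results given above.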
Step 203: in response to the determination result being yes, input the acoustic feature information into a pre-trained translation model to obtain the translation result corresponding to the acoustic feature information.

The above translation model may be, for example, any of various neural network models, such as a hidden Markov model. Preferably, the above translation model may be a Transformer model. The Transformer model consists of an encoder and a decoder; it receives the word vector matrix output by the CIF module and completes the translation. The process by which the above Transformer model translates the word vector matrix output by the CIF module into a translation result in the target language may be the same as the process by which an existing Transformer model translates a word vector matrix, and is not described again here.
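As a rough sketch of such an encoder-decoder setup consuming the compressed (word-vector) matrix from the CIF module, the following is one possible form; the dimensions and vocabulary size are assumptions, and this is not the specific model of the disclosure:

```python
import torch.nn as nn

# Assumed encoder-decoder Transformer: the CIF output serves as the source
# sequence, and target-language tokens are decoded against it.
class TranslationModel(nn.Module):
    def __init__(self, d_model: int = 256, vocab_size: int = 32000):
        super().__init__()
        self.transformer = nn.Transformer(d_model=d_model, batch_first=True)
        self.embed = nn.Embedding(vocab_size, d_model)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, cif_features, target_tokens):
        # cif_features: (batch, units, d_model); target_tokens: (batch, len)
        decoded = self.transformer(cif_features, self.embed(target_tokens))
        return self.out(decoded)  # logits over the target vocabulary
```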
In addition, in some optional implementations, under non-streaming speech recognition, the above step 201 may include: receiving multiple frames of speech information to be translated until an input end instruction for the speech information is detected, and inputting the received multiple frames of speech information to be translated into the pre-trained acoustic model to obtain second acoustic feature information of the multiple frames of speech information to be translated. The above speech information processing method then further includes step 204: inputting the second acoustic feature information into the preset semantic recognition model, and compressing and aligning the second acoustic feature information to obtain compressed second acoustic feature information; and step 205: inputting the compressed second acoustic feature information into the pre-trained translation model to obtain the translation result corresponding to the second acoustic feature information.

The speech information processing method provided in this embodiment uses the preset semantic recognition model to determine first acoustic feature information that has complete semantics, and uses the translation model to obtain the translation result of the first acoustic feature information, which can improve the speed and accuracy of end-to-end speech translation.

In addition, the speech information processing method provided in this embodiment accomplishes both streaming speech translation and non-streaming speech translation.
Please refer to FIG. 4, which shows a speech information processing model according to the present disclosure. As shown in FIG. 4, the speech information processing model includes an acoustic model 401, a semantic recognition model 402, and a translation model 403.

The above speech information processing model may provide a translation mode option, which includes a streaming speech recognition mode and a non-streaming speech recognition mode. When using the above speech information processing model for end-to-end speech translation, the user may select a translation mode. According to the user's selection of the translation mode option, the speech information processing model works in the streaming speech recognition mode or the non-streaming speech recognition mode. After the translation mode option is selected, the source-language speech information to be translated can be input into the speech information processing model, so that the model translates the source-language speech information.
In the streaming speech recognition mode, the acoustic model 401 is configured to receive at least one frame of speech information to be translated and extract first acoustic feature information of the at least one frame of speech information to be translated.

The semantic recognition model 402 is configured to receive the at least one frame of first acoustic feature information and determine whether the at least one frame of first acoustic feature information corresponds to complete semantics.

The semantic recognition model may include a Continuous Integrate-and-Fire (CIF) module. The CIF module can perform semantic recognition on the first acoustic feature information, and compress and align the first acoustic feature information. For the functions performed by the CIF module, reference may be made to the process shown in FIG. 3.

The translation model 403 is configured to determine, when it is determined that the first acoustic feature information corresponds to complete semantics, the translation result corresponding to the first acoustic feature information. In the non-streaming mode, it determines the translation result corresponding to the second acoustic feature information.

In addition, when the speech information processing model works in the non-streaming speech recognition mode, the acoustic model 401 is configured to receive multiple frames of speech information to be translated until an input end instruction for the speech information is detected, and to extract second acoustic feature information of the multiple frames of speech information to be translated; the semantic recognition model 402 is configured to compress and align the second acoustic feature information; and the translation model 403 is configured to determine the translation result of the second acoustic feature information.

For the functions performed by each model included in the speech information processing model in the speech information processing method, reference may be made to the description of the embodiment shown in FIG. 2, which is not repeated here.
Please refer to FIG. 5, which shows the training method for the speech information processing model provided by the present disclosure. The speech information processing model includes an acoustic model, a semantic recognition model, and a translation model. As shown in FIG. 5, the method includes the following steps.

Step 501: acquire a training sample set, the training sample set including a plurality of training sample pairs, each training sample pair including original speech information in a first language and a sample translation result of the original speech information in a second language.

Step 502: input the original speech information of a training sample pair into the initially trained acoustic model, take the translation result as the output of the translation model, and train the speech information processing model to obtain a trained speech information processing model.

In this embodiment, the above acoustic model may be a pre-trained model, such as a recurrent neural network model. Preferably, the above acoustic model may be a masked acoustic model.
In the above training process, a second loss function and a third loss function may be used to train the speech information processing model. The second loss function here may be a quality loss function, and the third loss function may be a cross-entropy loss function. In some application scenarios, the end of training may be triggered when the sum of the above second loss function and third loss function is minimized. In other application scenarios, the end of training may be triggered when the number of training iterations reaches a preset number.
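As an illustration of combining the two objectives, a joint training loss might be computed as follows; the auxiliary term produced by the semantic-recognition (CIF) branch and the weighting factor are assumptions for the example:

```python
import torch.nn.functional as F

# Sketch: translation cross-entropy (the third loss function) plus an
# auxiliary loss from the semantic-recognition branch (the second loss
# function); training could stop when their sum is minimized.
def joint_loss(logits, targets, aux_loss, aux_weight: float = 1.0):
    # logits: (batch, len, vocab); targets: (batch, len) token ids
    ce = F.cross_entropy(logits.transpose(1, 2), targets)
    return ce + aux_weight * aux_loss
```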
In some optional implementations, the above step 502 includes the following sub-steps:

First, acquire the sample encoding corresponding to the sample translation result.

Second, input the original speech information into the initially trained acoustic model, and train the semantic recognition model using the sample encoding of the sample translation result and a first loss function, to obtain the trained semantic recognition model.

The first loss function here may be a quality loss function.

The above semantic recognition model may include a Continuous Integrate-and-Fire (CIF) module.

In these optional implementations, the semantic recognition model may be trained first, and the speech information processing model may then be trained as a whole. This can reduce the overall number of training iterations of the speech information processing model.

The speech information processing model obtained through the above training can work in either the streaming speech recognition mode or the non-streaming speech recognition mode.
With further reference to FIG. 6, as an implementation of the methods shown in the above figures, the present disclosure provides some embodiments of a speech information processing apparatus. These apparatus embodiments correspond to the method embodiment shown in FIG. 1, and the apparatus may be specifically applied to various electronic devices.

As shown in FIG. 6, the speech information processing apparatus of this embodiment includes an acquisition unit 601, a determination unit 602, and a translation unit 603. The acquisition unit 601 is configured to acquire at least one frame of first acoustic feature information of speech information to be translated; the determination unit 602 is configured to determine, under streaming speech recognition, whether the at least one frame of first acoustic feature information satisfies a preset translation condition; and the translation unit 603 is configured to, in response to the determination result being yes, perform a translation operation on the first acoustic feature information to obtain a corresponding translation result.

In this embodiment, for the specific processing of the acquisition unit 601, determination unit 602, and translation unit 603 of the speech information processing apparatus and the technical effects brought thereby, reference may be made to the descriptions of step 101, step 102, and step 103 in the embodiment corresponding to FIG. 1, which are not repeated here.
In some optional implementations, the acquisition unit 601 is further configured to input at least one frame of speech information to be processed into a pre-trained acoustic model to obtain the first acoustic feature information.

In some optional implementations, the acoustic model includes a masked acoustic model.

In some optional implementations, the determination unit 602 is further configured to input the first acoustic feature information into a pre-trained preset semantic recognition model, and use the preset semantic recognition model to determine whether the first acoustic feature information corresponds to complete semantics.

In some optional implementations, the preset semantic recognition model includes a continuous integrate-and-fire module.

In some optional implementations, the speech information processing apparatus further includes a non-streaming speech information processing unit (not shown in the figure), configured to: under non-streaming speech recognition, receive multiple frames of speech information to be translated until an input end instruction for the speech information is detected, acquire second acoustic feature information of the multiple frames of speech information to be translated, and perform a translation operation on the second acoustic feature information to obtain a corresponding translation result.

In some optional implementations, the translation operation includes inputting the acoustic feature information into a pre-trained translation model to obtain the translation result corresponding to the acoustic feature information.
With further reference to FIG. 7, as an implementation of the method shown in FIG. 5 above, the present disclosure provides an embodiment of a training apparatus for a speech information processing model. This apparatus embodiment corresponds to the method embodiment shown in FIG. 5, and the apparatus may be specifically applied to various electronic devices.

As shown in FIG. 7, the training apparatus for the speech information processing model of this embodiment includes a sample acquisition unit 701 and a training unit 702. The sample acquisition unit 701 is configured to acquire a training sample set, the training sample set including a plurality of training sample pairs, each training sample pair including original speech information in a first language and a translation result of the original speech information in a second language; the training unit 702 is configured to input the original speech information of a training sample pair into the initially trained acoustic model, take the translation result as the output of the translation model, and train the speech information processing model to obtain a trained speech information processing model.

In this embodiment, for the specific processing of the sample acquisition unit 701 and the training unit 702 of the training apparatus for the speech information processing model and the technical effects brought thereby, reference may be made to the descriptions of step 501 and step 502 in the embodiment corresponding to FIG. 5, which are not repeated here.

In some optional implementations, the above training unit 702 further includes a first training subunit (not shown in the figure), configured to: acquire the sample encoding corresponding to the translation result; input the original speech information into the initially trained acoustic model; and train the semantic recognition model using the sample encoding of the translation result and the first loss function, to obtain the trained semantic recognition model.

In some optional implementations, the above training unit 702 is further configured to train the speech information processing model using the second loss function and the third loss function to obtain the trained speech information processing model.
The speech information processing method, apparatus, and electronic device provided by the embodiments of the present disclosure acquire first acoustic feature information of at least one frame of speech information to be translated; determine, under streaming speech recognition, whether the first acoustic feature information corresponds to complete semantics; and, in response to the determination result being yes, perform a translation operation on the first acoustic feature information to obtain a corresponding translation result. In streaming translation, speech information to be translated that has complete semantics is thus automatically determined and translated. Compared with intercepting audio information at fixed time intervals for translation, this can obtain more accurate translation results and improves the accuracy of the translation results. In addition, since this scheme translates once speech information with complete semantics has been determined, without waiting until the time period specified by the fixed slicing duration ends, the output latency of the translation results can be reduced.
Please refer to FIG. 8, which shows an exemplary system architecture to which the speech information processing method or the speech information processing apparatus of an embodiment of the present disclosure can be applied.

As shown in FIG. 8, the system architecture may include terminal devices 801, 802, and 803, a network 804, and a server 805. The network 804 is the medium used to provide communication links between the terminal devices 801, 802, 803 and the server 805. The network 804 may include various connection types, such as wired or wireless communication links or fiber optic cables.

The terminal devices 801, 802, and 803 may interact with the server 805 through the network 804 to receive or send messages and the like. Various client applications, such as speech information collection applications, may be installed on the terminal devices 801, 802, and 803. The client applications in the terminal devices 801, 802, and 803 may receive user instructions and perform corresponding functions according to the user instructions, for example, sending collected speech information to the server.

The terminal devices 801, 802, and 803 may be hardware or software. When the terminal devices 801, 802, and 803 are hardware, they may be various electronic devices that have a display screen and support web browsing, including but not limited to smartphones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop portable computers, desktop computers, and the like. When the terminal devices 801, 802, and 803 are software, they may be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (for example, software or software modules used to provide distributed services) or as a single piece of software or software module. No specific limitation is made here.

The server 805 may be a server that provides various services, for example, analyzing speech information sent by the terminal devices 801, 802, and 803 to obtain analysis results (translation results), and sending the translation results to the terminal devices 801, 802, and 803.

It should be noted that the speech information processing method provided by the embodiments of the present disclosure may be executed by a server, in which case the speech information processing apparatus may be arranged in the server 805. In addition, the speech information processing method may also be executed by a terminal device, in which case the speech information processing apparatus may be arranged in the terminal devices 801, 802, and 803.

It should be understood that the numbers of terminal devices, networks, and servers in FIG. 8 are merely illustrative. There may be any number of terminal devices, networks, and servers according to implementation needs.
Referring now to FIG. 9, it shows a schematic structural diagram of an electronic device (for example, the server or terminal device in FIG. 8) suitable for implementing the embodiments of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (for example, vehicle-mounted navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in FIG. 9 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.

As shown in FIG. 9, the electronic device may include a processing apparatus (for example, a central processing unit, a graphics processing unit, etc.) 901, which may perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 902 or a program loaded from a storage apparatus 908 into a random access memory (RAM) 903. In the RAM 903, various programs and data required for the operation of the electronic device 900 are also stored. The processing apparatus 901, the ROM 902, and the RAM 903 are connected to one another through a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904.

Generally, the following apparatuses may be connected to the I/O interface 905: input apparatuses 906 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; output apparatuses 907 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; storage apparatuses 908 including, for example, a magnetic tape and a hard disk; and a communication apparatus 909. The communication apparatus 909 may allow the electronic device to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 9 shows an electronic device having various apparatuses, it should be understood that it is not required to implement or have all of the apparatuses shown; more or fewer apparatuses may alternatively be implemented or provided.

In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication apparatus 909, or installed from the storage apparatus 908, or installed from the ROM 902. When the computer program is executed by the processing apparatus 901, the above functions defined in the methods of the embodiments of the present disclosure are executed.
It should be noted that the above computer-readable medium of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program, where the program may be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, which carries computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to: an electrical wire, an optical cable, RF (radio frequency), and the like, or any suitable combination of the above.

In some embodiments, the client and the server may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (for example, a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (for example, the Internet), and a peer-to-peer network (for example, an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
The above computer-readable medium may be contained in the above electronic device, or may exist separately without being assembled into the electronic device.

The above computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire first acoustic feature information of at least one frame of speech information to be translated; determine, under streaming speech recognition, whether the first acoustic feature information corresponds to complete semantics; and, in response to the determination result being yes, perform a translation operation on the first acoustic feature information to obtain a corresponding translation result. Alternatively, the one or more programs cause the electronic device to:

acquire a training sample set, the training sample set including a plurality of training sample pairs, each training sample pair including original speech information in a first language and a translation result of the original speech information in a second language; input the original speech information of a training sample pair into the initially trained acoustic model; take the translation result as the output of the translation model; and train the speech information processing model to obtain a trained speech information processing model.
可以以一种或多种程序设计语言或其组合来编写用于执行本公开的操作的计算机程序代码,上述程序设计语言包括但不限于面向对象的程序设计语言—诸如Java、Smalltalk、C++,还包括常规的过程式程序设计语言—诸如“C”语言或类似的程序设计语言。程序代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络——包括局域网(LAN)或广域网(WAN)—连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, or combinations thereof, including but not limited to object-oriented programming languages—such as Java, Smalltalk, C++, and Includes conventional procedural programming languages - such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (such as through an Internet Service Provider). Internet connection).
The flowcharts and block diagrams in the accompanying drawings illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments described in the present disclosure may be implemented by software or by hardware. The name of a unit does not, in certain circumstances, constitute a limitation on the unit itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, and without limitation, exemplary types of hardware logic components that may be used include: Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and so on.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by, or in connection with, an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of machine-readable storage media include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, random-access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The above description is merely a description of preferred embodiments of the present disclosure and of the technical principles applied. Those skilled in the art should understand that the scope of disclosure involved herein is not limited to technical solutions formed by the specific combinations of the above technical features; it should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the disclosed concept, for example, technical solutions formed by substituting the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
In addition, although the operations are depicted in a particular order, this should not be understood as requiring that these operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although the above discussion contains several specific implementation details, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments may also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment may also be implemented in multiple embodiments separately or in any suitable sub-combination.
Although the subject matter has been described in language specific to structural features and/or methodological logical acts, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are merely example forms of implementing the claims.

Claims (16)

  1. A speech information processing method, comprising:
    acquiring first acoustic feature information of at least one frame of speech information to be translated;
    under streaming speech recognition, determining whether the first acoustic feature information corresponds to complete semantics; and
    in response to the determination result being yes, performing a translation operation on the first acoustic feature information to obtain a corresponding translation result.
  2. The method according to claim 1, wherein acquiring the first acoustic feature information of at least one frame of speech information to be translated comprises:
    inputting at least one frame of speech information to be processed into a pre-trained acoustic model to obtain the first acoustic feature information.
  3. The method according to claim 2, wherein the acoustic model comprises a masked acoustic model.
  4. The method according to claim 1 or 2, wherein, under streaming speech recognition, determining whether the first acoustic feature information corresponds to complete semantics comprises:
    inputting the first acoustic feature information into a pre-trained preset semantic recognition model, and using the preset semantic recognition model to determine whether the first acoustic feature information corresponds to complete semantics.
  5. The method according to claim 4, wherein the preset semantic recognition model comprises a continuous integrate-and-fire module.
  6. The method according to claim 1, wherein the method further comprises:
    under non-streaming speech recognition, receiving multiple frames of speech information to be translated until an input-end instruction for the speech information is detected, acquiring second acoustic feature information of the multiple frames of speech information to be translated, and performing a translation operation on the second acoustic feature information to obtain a corresponding translation result.
  7. The method according to claim 1 or 6, wherein the translation operation comprises:
    inputting acoustic feature information into a pre-trained translation model to obtain a translation result corresponding to the acoustic feature information.
  8. A speech information processing model, comprising an acoustic model, a semantic recognition model, and a translation model, wherein:
    the acoustic model is configured to: in a streaming speech recognition mode, receive at least one frame of speech information to be translated, and extract first acoustic feature information of the at least one frame of speech information to be translated;
    the semantic recognition model is configured to: in the streaming speech recognition mode, receive the at least one frame of first acoustic feature information, and determine whether the at least one frame of first acoustic feature information corresponds to complete semantics; and
    the translation model is configured to determine a translation result of the first acoustic feature information in the streaming speech recognition mode.
  9. The model according to claim 8, wherein:
    the acoustic model is further configured to: in a non-streaming speech recognition mode, receive multiple frames of speech information to be translated until an input-end instruction for the speech information is detected, and extract second acoustic feature information of the multiple frames of speech information to be translated;
    the semantic recognition model is further configured to: in the non-streaming speech recognition mode, compress and align the second acoustic feature information; and
    the translation model is further configured to determine a translation result of the second acoustic feature information in the non-streaming speech recognition mode.
  10. A training method for a speech information processing model, applied to the speech information processing model according to claim 8 or 9, the speech information processing model comprising an acoustic model, a semantic recognition model, and a translation model, the method comprising:
    obtaining a training sample set, the training sample set including a plurality of training sample pairs, each training sample pair including original speech information in a first language and a translation result corresponding to the original speech information in a second language; and
    inputting the original speech information of a training sample pair into the initially trained acoustic model, taking the translation result as the output of the translation model, and training the speech information processing model to obtain a trained speech information processing model.
  11. The method according to claim 10, wherein inputting the original speech information of a training sample pair into the initially trained acoustic model, taking the translation result as the output of the translation model, and training the speech information processing model to obtain a trained speech information processing model comprises:
    obtaining a sample encoding corresponding to the translation result; and
    inputting the original speech information into the initially trained acoustic model, and training the semantic recognition model by using the sample encoding of the translation result and a first loss function to obtain the trained semantic recognition model.
  12. The method according to claim 10, wherein inputting the original speech information of a training sample pair into the initially trained acoustic model, taking the translation result as the output of the translation model, and training the speech information processing model to obtain a trained speech information processing model comprises:
    training the speech information processing model by using a second loss function and a third loss function to obtain the trained speech information processing model.
  13. A speech information processing apparatus, comprising:
    an acquisition unit, configured to acquire at least one frame of first acoustic feature information of speech information to be translated;
    a determination unit, configured to determine, under streaming speech recognition, whether the at least one frame of first acoustic feature information satisfies a preset translation condition; and
    a translation unit, configured to, in response to the determination result being yes, perform a translation operation on the first acoustic feature information to obtain a corresponding translation result.
  14. A training apparatus for a speech information processing model, applied to the speech information processing model according to claim 8 or 9, the speech information processing model comprising an acoustic model, a semantic recognition model, and a translation model, the apparatus comprising:
    a sample acquisition unit, configured to obtain a training sample set, the training sample set including a plurality of training sample pairs, each training sample pair including original speech information in a first language and a translation result corresponding to the original speech information in a second language; and
    a training unit, configured to input the original speech information of a training sample pair into the initially trained acoustic model, take the translation result as the output of the translation model, and train the speech information processing model to obtain a trained speech information processing model.
  15. An electronic device, comprising:
    one or more processors; and
    a storage apparatus, configured to store one or more programs,
    wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method according to any one of claims 1-7, or the method according to any one of claims 10-12.
  16. A computer-readable medium, on which a computer program is stored, wherein, when the program is executed by a processor, the method according to any one of claims 1-7, or the method according to any one of claims 10-12, is implemented.
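Claims 4 and 5 turn on a continuous integrate-and-fire (CIF) module judging when the accumulated acoustic prefix corresponds to complete semantics. The following is a minimal sketch of the CIF firing rule alone, assuming the per-frame weights have already been produced upstream by the semantic recognition model; the function name and the threshold value of 1.0 are illustrative, not fixed by the claims:

```python
def cif_fire_points(alphas, threshold=1.0):
    """Sketch of continuous integrate-and-fire: accumulate per-frame
    weights and record the frames at which the running total crosses
    the threshold, i.e. candidate complete-semantics boundaries."""
    fired, acc = [], 0.0
    for t, a in enumerate(alphas):   # alphas: per-frame weights, e.g. in [0, 1]
        acc += a
        if acc >= threshold:
            fired.append(t)          # fire: a semantic unit ends at frame t
            acc -= threshold         # carry the remainder into the next unit
    return fired

# Example: cif_fire_points([0.3, 0.4, 0.5, 0.2, 0.7]) returns [2, 4].
```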
PCT/CN2022/106426 2021-07-28 2022-07-19 Speech information processing method and apparatus, and electronic device WO2023005729A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110860672.XA CN113571044A (en) 2021-07-28 2021-07-28 Voice information processing method and device and electronic equipment
CN202110860672.X 2021-07-28

Publications (1)

Publication Number Publication Date
WO2023005729A1 true WO2023005729A1 (en) 2023-02-02

Family

ID=78168726

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/106426 WO2023005729A1 (en) 2021-07-28 2022-07-19 Speech information processing method and apparatus, and electronic device

Country Status (2)

Country Link
CN (1) CN113571044A (en)
WO (1) WO2023005729A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113571044A (en) * 2021-07-28 2021-10-29 北京有竹居网络技术有限公司 Voice information processing method and device and electronic equipment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101303692B (en) * 2008-06-19 2012-08-29 徐文和 All-purpose numeral semantic library for translation of mechanical language
US8311973B1 (en) * 2011-09-24 2012-11-13 Zadeh Lotfi A Methods and systems for applications for Z-numbers
WO2017088136A1 (en) * 2015-11-25 2017-06-01 华为技术有限公司 Translation method and terminal
CN109582982A (en) * 2018-12-17 2019-04-05 北京百度网讯科技有限公司 Method and apparatus for translated speech
CN109657252A (en) * 2018-12-25 2019-04-19 北京微播视界科技有限公司 Information processing method, device, electronic equipment and computer readable storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09146585A (en) * 1995-11-27 1997-06-06 Hitachi Ltd Voice recognition and translation system
CN108231062A (en) * 2018-01-12 2018-06-29 科大讯飞股份有限公司 A kind of voice translation method and device
CN110705317A (en) * 2019-08-28 2020-01-17 科大讯飞股份有限公司 Translation method and related device
CN112183120A (en) * 2020-09-18 2021-01-05 北京字节跳动网络技术有限公司 Speech translation method, device, equipment and storage medium
CN112530437A (en) * 2020-11-18 2021-03-19 北京百度网讯科技有限公司 Semantic recognition method, device, equipment and storage medium
CN112735417A (en) * 2020-12-29 2021-04-30 科大讯飞股份有限公司 Speech translation method, electronic device, computer-readable storage medium
CN112800782A (en) * 2021-01-29 2021-05-14 中国科学院自动化研究所 Text semantic feature fused voice translation method, system and equipment
CN113571044A (en) * 2021-07-28 2021-10-29 北京有竹居网络技术有限公司 Voice information processing method and device and electronic equipment

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116911323A (en) * 2023-09-13 2023-10-20 深圳市微克科技有限公司 Real-time translation method, system and medium of intelligent wearable device
CN116911323B (en) * 2023-09-13 2024-03-26 深圳市微克科技股份有限公司 Real-time translation method, system and medium of intelligent wearable device

Also Published As

Publication number Publication date
CN113571044A (en) 2021-10-29

Legal Events

Code  Title/Description
121   Ep: the epo has been informed by wipo that ep was designated in this application
      Ref document number: 22848334
      Country of ref document: EP
      Kind code of ref document: A1
NENP  Non-entry into the national phase
      Ref country code: DE