CN113221581A - Text translation method, device and storage medium
- Publication number: CN113221581A (application CN202110524501.XA)
- Authority: CN (China)
- Prior art keywords: text, translation, sample, translated, target
- Legal status: Granted (the legal status is an assumption by Google Patents and is not a legal conclusion)
Classifications
- G06F40/42 - Data-driven translation (G - Physics; G06 - Computing; Calculating or Counting; G06F - Electric digital data processing; G06F40/00 - Handling natural language data; G06F40/40 - Processing or translation of natural language)
- G06N3/04 - Architecture, e.g. interconnection topology (G06N - Computing arrangements based on specific computational models; G06N3/00 - Computing arrangements based on biological models; G06N3/02 - Neural networks)
- G06N3/08 - Learning methods (G06N3/02 - Neural networks)
Abstract
The disclosure relates to a text translation method, device and storage medium. The method obtains a first translation text corresponding to a text to be translated through a preset first translation model, and obtains a second translation text corresponding to the text to be translated through a preset second translation model, where the word arrangement order of the first translation text is different from the word arrangement order of the second translation text. The first translation text and the second translation text are spliced to obtain a target spliced text, and the target spliced text is used as the input of a preset third translation model to output the target translation corresponding to the text to be translated. In this way, high-quality translations can be obtained for texts to be translated of different language types, and the accuracy of the translation result can be effectively guaranteed.
Description
Technical Field
The present disclosure relates to the field of machine translation, and in particular, to a method, an apparatus, and a storage medium for text translation.
Background
NMT (Neural Machine Translation) is a technology for automatically translating a source language into another target language by a machine based on a neural network algorithm; it is currently the most mainstream machine translation method and has wide technical and industrial value.
At present, NMT produces translations of widely varying quality for different target languages, and cannot guarantee that a high-quality translation is obtained for every target language.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a method, an apparatus, and a storage medium for text translation.
According to a first aspect of embodiments of the present disclosure, there is provided a method of text translation, including:
acquiring a text to be translated;
taking the text to be translated as the input of a preset first translation model to obtain a first translation text corresponding to the text to be translated;
taking the text to be translated as the input of a preset second translation model to obtain a second translation text corresponding to the text to be translated; the character arrangement sequence of the first translation text is different from the character arrangement sequence of the second translation text;
splicing the first translation text and the second translation text to obtain a target spliced text;
and taking the target spliced text as the input of a preset third translation model, and outputting to obtain a target translation corresponding to the text to be translated.
Optionally, the word arrangement order of the second translated text is an inverted order of the word arrangement order of the first translated text.
Optionally, the first translation model is trained by:
obtaining a first sample text set, wherein the first sample text set comprises a plurality of first sample pairs, each first sample pair comprises a first sample text to be translated and a first target translation sample text, and the first target translation sample text is a translated text of the first sample text to be translated;
and carrying out model training on the first initial network model through the first sample text set to obtain the first translation model.
Optionally, the second translation model is trained by:
acquiring a second sample text set, wherein the second sample text set comprises a plurality of second sample pairs, each second sample pair comprises a second sample text to be translated and a second target translation sample text, and the second target translation sample text is a translated text of the second sample text to be translated;
arranging the character sequence of the second target translation sample text according to a preset sequence to obtain a third target translation sample text corresponding to each second sample text to be translated;
and performing model training on a second initial network model through a plurality of second sample texts to be translated and the third target translation sample texts corresponding to each second sample text to be translated to obtain the second translation model.
Optionally, the arranging the text sequences of the second target translation sample texts according to a preset sequence to obtain a third target translation sample text corresponding to each second sample text to be translated includes:
turning the character sequence of the second target translation sample text to obtain a second target translation sample with the inverted character sequence;
and taking the second target translation sample with the inverted word sequence as the third target translation sample text.
Optionally, the third translation model is trained by:
acquiring a third sample text set, wherein the third sample text set comprises a plurality of third sample pairs, each third sample pair comprises a third sample text to be translated and a fourth target translation sample text, and the fourth target translation sample text is a translated text of the third sample text to be translated;
taking each third sample text to be translated as the input of the first translation model and the second translation model respectively, so that the first translation model outputs a first translation text corresponding to the third sample text to be translated, and the second translation model outputs a second translation text corresponding to the third sample text to be translated, wherein the word arrangement sequence of the first translation text is different from the word arrangement sequence of the second translation text;
generating a target model training sample according to the first translation text, the second translation text and the fourth target translation sample text corresponding to the plurality of third sample texts to be translated;
and carrying out model training on a third initial network model through the target model training sample to obtain the third translation model.
Optionally, the generating a target model training sample according to the first translation text, the second translation text and the fourth target translation sample text corresponding to the plurality of third sample texts to be translated includes:
splicing the first translation text and the second translation text corresponding to each third sample text to be translated to obtain a spliced translation text corresponding to the third sample text to be translated;
and taking the spliced translation text and the fourth target translation sample text corresponding to the plurality of third sample texts to be translated as the target model training samples.
According to a second aspect of embodiments of the present disclosure, there is provided an apparatus for text translation, including:
the acquisition module is configured to acquire a text to be translated;
the first determining module is configured to take the text to be translated as an input of a preset first translation model to obtain a first translation text corresponding to the text to be translated;
the second determining module is configured to use the text to be translated as an input of a preset second translation model to obtain a second translation text corresponding to the text to be translated; wherein the first translation text and the second translation text have different word arrangement orders;
the third determining module is configured to splice the first translation text and the second translation text to obtain a target spliced text;
and the fourth determining module is configured to take the target spliced text as the input of a preset third translation model and output the target translation corresponding to the text to be translated.
Optionally, the word arrangement order of the second translated text is an inverted order of the word arrangement order of the first translated text.
Optionally, the first translation model is trained by:
obtaining a first sample text set, wherein the first sample text set comprises a plurality of first sample pairs, each first sample pair comprises a first sample text to be translated and a first target translation sample text, and the first target translation sample text is a translated text of the first sample text to be translated;
and carrying out model training on the first initial network model through the first sample text set to obtain the first translation model.
Optionally, the second translation model is trained by:
acquiring a second sample text set, wherein the second sample text set comprises a plurality of second sample pairs, each second sample pair comprises a second sample text to be translated and a second target translation sample text, and the second target translation sample text is a translated text of the second sample text to be translated;
arranging the character sequence of the second target translation sample text according to a preset sequence to obtain a third target translation sample text corresponding to each second sample text to be translated;
and performing model training on a second initial network model through a plurality of second sample texts to be translated and the third target translation sample texts corresponding to each second sample text to be translated to obtain the second translation model.
Optionally, the arranging the text sequences of the second target translation sample texts according to a preset sequence to obtain a third target translation sample text corresponding to each second sample text to be translated includes:
turning the character sequence of the second target translation sample text to obtain a second target translation sample with the inverted character sequence;
and taking the second target translation sample with the inverted word sequence as the third target translation sample text.
Optionally, the preset third translation model is obtained by training in the following manner:
acquiring a third sample text set, wherein the third sample text set comprises a plurality of third sample pairs, each third sample pair comprises a third sample text to be translated and a fourth target translation sample text, and the fourth target translation sample text is a translated text of the third sample text to be translated;
taking each third sample text to be translated as the input of the first translation model and the second translation model respectively, so that the first translation model outputs a first translation text corresponding to the third sample text to be translated, and the second translation model outputs a second translation text corresponding to the third sample text to be translated, wherein the word arrangement sequence of the first translation text is different from the word arrangement sequence of the second translation text;
generating a target model training sample according to the first translation text, the second translation text and the fourth target translation sample text corresponding to the plurality of third sample texts to be translated;
and carrying out model training on a third initial network model through the target model training sample to obtain the third translation model.
Optionally, the generating a target model training sample according to the first translation text, the second translation text and the fourth target translation sample text corresponding to the plurality of third sample texts to be translated includes:
splicing the first translation text and the second translation text corresponding to each third sample text to be translated to obtain a spliced translation text corresponding to the third sample text to be translated;
and taking the spliced translation text and the fourth target translation sample text corresponding to the plurality of third sample texts to be translated as the target model training samples.
According to a third aspect of the embodiments of the present disclosure, there is provided an apparatus for text translation, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the steps of the method of text translation provided in the first aspect above.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the method of text translation provided in the first aspect above.
The technical scheme provided by the embodiments of the disclosure can have the following beneficial effects: a first translation text corresponding to the text to be translated is obtained through a preset first translation model, and a second translation text corresponding to the text to be translated is obtained through a preset second translation model, where the word arrangement order of the first translation text is different from that of the second translation text. According to the first translation text and the second translation text, the target translation corresponding to the text to be translated is then obtained through a preset third translation model. In this way, high-quality translations can be obtained for texts to be translated of different language types, and the accuracy of the translation result can be effectively guaranteed.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow diagram illustrating a method of text translation in accordance with an exemplary embodiment;
FIG. 2 is a flowchart illustrating a method of model training according to an exemplary embodiment of the present disclosure;
FIG. 3 is a block diagram illustrating an apparatus for text translation in accordance with an exemplary embodiment;
FIG. 4 is a block diagram illustrating another apparatus for text translation in accordance with an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Before describing embodiments of the present disclosure in detail, an application scenario of the present disclosure is first described. The disclosure may be applied to machine translation scenarios, in which a machine translates from one source language into another target language. The mainstream machine translation method at present is neural machine translation, that is, a technology for automatically translating a source language into another target language based on a neural network algorithm. Current NMT models the target language sequence in either an L2R (Left-To-Right) or an R2L (Right-To-Left) manner to generate model training samples, and trains the model on those samples to generate translations that conform to reading habits. However, different languages have different linguistic characteristics: for example, Chinese follows a subject-predicate-object structure while Japanese follows a subject-object-predicate structure, so for Chinese-to-English translation L2R modeling is generally superior to R2L, while for a language such as Japanese R2L modeling is generally superior to L2R. In an actual NMT application, however, a single direction, either L2R or R2L, is often adopted to model all target languages, and no matter which direction is adopted, the translation quality for every target language cannot be guaranteed.
In the related art, there are also methods for obtaining a translation through both R2L and L2R modeling. In one such method, translation quality is improved through interaction between a decoder obtained through R2L modeling and a decoder obtained through L2R modeling. In another, the model includes a forward decoder and a backward decoder: the backward decoder generates an R2L translation, and the conventional forward decoder then interacts with the encoder, the R2L translation, and the hidden decoding states generated by the backward decoder to generate the final L2R translation; that is, the R2L translation is used as an intermediate translation to adjust the L2R translation. However, when translation quality is improved through interaction between the R2L decoder and the L2R decoder, during actual decoding each decoder can only see the partial translation the other has already generated, not the global translation, so translation quality still cannot be guaranteed; moreover, the interactive structure of the R2L and L2R decoders is complex, which is unfavorable for the generation efficiency of the model. In addition, due to differences between languages, L2R translation quality is better for some languages while R2L translation quality is better for others, so the method of adjusting the L2R translation with the R2L translation as an intermediate translation still cannot be applied to all target languages, and higher-quality translations cannot be guaranteed for different target languages. That is, for different target languages, the quality of translations obtained by current NMT differs greatly, and a high-quality translation cannot be guaranteed for every target language.
In order to overcome the technical problems in the related art, the present disclosure provides a method, an apparatus, and a storage medium for text translation. The method obtains a first translation text corresponding to a text to be translated through a preset first translation model, and obtains a second translation text corresponding to the text to be translated through a preset second translation model, where the word arrangement order of the first translation text is different from that of the second translation text. According to the first translation text and the second translation text, the target translation corresponding to the text to be translated is obtained through a preset third translation model. In this way, high-quality translations can be obtained for texts to be translated of different language types, and the accuracy of the translation result can be effectively guaranteed.
The embodiments of the present disclosure will be described in detail with reference to specific examples.
Fig. 1 is a flowchart illustrating a text translation method according to an exemplary embodiment. The method is used in a terminal and, as shown in Fig. 1, may include the following steps:
In step 101, a text to be translated is obtained.
The text to be translated may be a text of any language type, such as English, Japanese, French, or Chinese.
In step 102, the text to be translated is used as an input of a preset first translation model, so as to obtain a first translation text corresponding to the text to be translated.
The first translation text and the text to be translated belong to different language types.
In this step, the preset first translation model can be trained in the following manner: obtaining a first sample text set, wherein the first sample text set comprises a plurality of first sample pairs, each first sample pair comprises a first sample text to be translated and a first target translation sample text, and the first target translation sample text is a translated text of the first sample text to be translated; and performing model training on the first initial network model through the first sample text set to obtain the first translation model.
Illustratively, the first sample text set is formed by n bilingual sentence pairs and can be expressed as {(x_i, y_i)}, i = 1, ..., n, where x_i represents the first sample text to be translated (i.e., the source language sequence) and y_i represents the first target translation sample text (i.e., the target language sequence, the sequence of the language type to be translated into, which may be the L2R sequence of the target language or the R2L sequence of the target language), with n ≥ i ≥ 1 and n an integer. Taking the first sample text set {(x_i, y_i)} as training samples, the first initial network model is trained to obtain the preset first translation model, where the preset first translation model is an NMT (neural machine translation) model.
It should be noted that the first initial network model may be a neural network model, which may be any one of an encoder-decoder model, a CNN (convolutional neural network), an RNN (recurrent neural network), and a self-attention network in the prior art. Since there is no need to adjust the existing neural network model framework when training the first translation model, the generation efficiency of the first translation model can be effectively improved.
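As a minimal sketch of this first training stage: the sample set is held as plain (source, target) pairs and handed to an off-the-shelf NMT architecture. The NMTModel class, its fit()/translate() interface, and the Chinese example sentences are assumptions for illustration only; the patent does not prescribe any concrete API.

# First sample text set {(x_i, y_i)}: source sentences paired with their
# reference translations in natural (L2R) reading order.
first_sample_set = [
    ("我爱你", "I love you"),
    ("我在A公司工作", "I work in company A"),
]

class NMTModel:
    """Hypothetical stand-in for any standard NMT architecture
    (encoder-decoder, CNN, RNN, or self-attention)."""
    def fit(self, sample_pairs):
        # Standard supervised sequence-to-sequence training would go here;
        # no changes to the underlying model framework are required.
        self.pairs = dict(sample_pairs)
    def translate(self, text):
        # Placeholder lookup; a trained model would decode token by token.
        return self.pairs.get(text, "")

first_translation_model = NMTModel()
first_translation_model.fit(first_sample_set)
print(first_translation_model.translate("我爱你"))  # -> I love you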
In step 103, the text to be translated is used as an input of a preset second translation model, so as to obtain a second translation text corresponding to the text to be translated.
Wherein the word arrangement order of the first translation text is different from the word arrangement order of the second translation text; the second translation text and the text to be translated belong to different language types, while the second translation text and the first translation text belong to the same language type. A word may be a character or a character string: for example, when the language type is English, a word is the character string corresponding to an English word, and when the language type is Chinese, a word is a Chinese character.
For example, in a Chinese-to-English translation scenario, when the text to be translated is the Chinese sentence meaning "I love you", the word sequence corresponding to the first translated text is "I love you", and the word sequence corresponding to the second translated text may be "you love I".
In this step, the second translation model may be trained in the following manner: acquiring a second sample text set, wherein the second sample text set comprises a plurality of second sample pairs, each second sample pair comprises a second sample text to be translated and a second target translation sample text, and the second target translation sample text is a translated text of the second sample text to be translated; arranging the word sequence of each second target translation sample text according to a preset order to obtain a third target translation sample text corresponding to each second sample text to be translated; and performing model training on the second initial network model through the plurality of second sample texts to be translated and the third target translation sample text corresponding to each second sample text to be translated, to obtain the second translation model.
It should be noted that the second sample text set may be the same as or different from the first sample text set. When the second sample text set is the same as the first sample text set, the plurality of second sample texts to be translated are the same as the plurality of first sample texts to be translated, and the plurality of second target translation sample texts corresponding to them are the same as the plurality of first target translation sample texts corresponding to the plurality of first sample texts to be translated; each second target translation sample text is then reversed to obtain the corresponding third target translation sample text.
For example, if the second sample text set is the same as the first sample text set, the second sample text to be translated may be represented as x_i, where x_i represents a source language sequence, and the second target translation sample text may be represented as y_i, which is the L2R sequence of the target language or the R2L sequence of the target language. Assume here that y_i is the L2R sequence of the target language; for a sequence comprising t words, it can be expressed as y_i = (y_1, y_2, ..., y_t), where y_t denotes the t-th word. Reversing y_i, that is, reading the second target translation sample text of the L2R sequence from right to left, yields the R2L sequence corresponding to the L2R sequence as the third target translation sample text, which can be represented as y'_i = (y_t, y_{t-1}, ..., y_1). The plurality of second sample texts to be translated x_i and the corresponding third target translation sample texts y'_i are used as training data, which may be represented as {(x_i, y'_i)}, i = 1, ..., n, and the NMT model is trained on this data to obtain the second translation model. It should be noted that the second initial network model may be a neural network model, which may be any one of an encoder-decoder model, a convolutional neural network (CNN), a recurrent neural network (RNN), and a self-attention network in the prior art. Since there is no need to adjust the existing neural network model framework when training the second translation model, the generation efficiency of the second translation model can be effectively improved. In addition, because the first translation model and the second translation model are both independent NMT models, the generated L2R and R2L translations are independent of each other and do not interfere with each other, which helps ensure the accuracy of the first translation text and the second translation text.
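The reversal step above amounts to flipping the token order of each reference translation before training. A minimal sketch, assuming whitespace tokenization (a real system would reuse its own tokenizer):

def build_r2l_training_set(sample_pairs):
    """Turn (source, L2R target) pairs into (source, R2L target) pairs,
    i.e. produce the third target translation sample texts y'_i."""
    training_data = []
    for source, target in sample_pairs:
        tokens = target.split()  # assumed whitespace tokenization
        reversed_target = " ".join(reversed(tokens))  # (y_t, ..., y_1)
        training_data.append((source, reversed_target))
    return training_data

# Reusing the example pair from above:
print(build_r2l_training_set([("我爱你", "I love you")]))
# -> [('我爱你', 'you love I')]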
In step 104, the first translation text and the second translation text are spliced to obtain a target spliced text.
The embodiment of splicing the first translation text and the second translation text may be to splice the second translation text after the first translation text, or to splice the first translation text after the second translation text.
In step 105, the target spliced text is used as an input of a preset third translation model, and a target translation corresponding to the text to be translated is output.
For example, in a Chinese-to-English translation scenario, if the first translation text is "I love you" and the second translation text is "you love I", the target spliced text "I love you you love I" is generated and input into the preset third translation model, so that the preset third translation model outputs the target translation "I love you".
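A minimal sketch of this inference flow, running steps 101 to 105 end to end. The translate callables and the toy outputs are stand-ins for the three trained models; the patent does not fix any concrete interface:

def translate_text(source_text, first_translate, second_translate, third_translate):
    first = first_translate(source_text)    # step 102: e.g. L2R translation text
    second = second_translate(source_text)  # step 103: e.g. R2L translation text
    spliced = first + " " + second          # step 104: target spliced text
    return third_translate(spliced)         # step 105: target translation

# Toy stand-ins reproducing the example above:
print(translate_text(
    "我爱你",
    lambda s: "I love you",
    lambda s: "you love I",
    lambda s: "I love you",
))  # -> I love you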
It should be noted that, because the preset first translation model, the preset second translation model and the preset third translation model do not require adjusting the existing neural network model framework during training, the generation efficiency of the models can be effectively guaranteed. In addition, because the preset third translation model translates a target spliced text generated from the first translation text and the second translation text, a target translation that better conforms to reading habits can be obtained from translations with different word orders; for example, a final translation can be obtained from the spliced L2R and R2L translations. The method can therefore satisfy both the language translation requirements suited to the L2R modeling manner and those suited to the R2L modeling manner, so that higher-quality translations can be obtained for texts to be translated of different language types, and the accuracy of the translation result can be effectively guaranteed.
According to the above technical scheme, a first translation text corresponding to the text to be translated is obtained through a preset first translation model, and a second translation text corresponding to the text to be translated is obtained through a preset second translation model, where the word arrangement order of the first translation text is different from that of the second translation text. According to the first translation text and the second translation text, the target translation corresponding to the text to be translated is obtained through a preset third translation model. In this way, high-quality translations can be obtained for texts to be translated of different language types, and the accuracy of the translation result can be effectively guaranteed.
FIG. 2 is a flowchart illustrating a method of model training according to an exemplary embodiment of the present disclosure. Referring to FIG. 2, the third translation model shown in FIG. 1 may be obtained by the following method, which may include:
and S1, acquiring a third sample text set.
Wherein the third sample text set includes a plurality of third sample pairs, each third sample pair includes a third sample text to be translated and a fourth target translation sample text, and the fourth target translation sample text is a translated text of the third sample text to be translated.
It should be noted that the third sample text set may be the same as or different from the first sample text set or the second sample text set. For example, the third sample text set may also be represented as {(x_i, y_i)}, i = 1, ..., n, where x_i represents the third sample text to be translated (i.e., the source language sequence) and y_i represents the fourth target translation sample text (i.e., the target language sequence, which may be the L2R translation sequence of the target language or the R2L translation sequence of the target language), with n ≥ i ≥ 1 and n an integer.
S2, taking each of the third sample texts to be translated as an input of the first translation model and the second translation model, so that the first translation model outputs a first translation text corresponding to the third sample text to be translated, and the second translation model outputs a second translation text corresponding to the third sample text to be translated.
And the character arrangement sequence of the first translation text is different from the character arrangement sequence of the second translation text. For example, the first translation text is the L2R sequence of the target language corresponding to the third sample text to be translated, and the second translation text is the R2L sequence of the target language corresponding to the third sample text to be translated. The word may be a character or a character string, for example, in the case that the language type is english, the word is a character string corresponding to a word, and in the case that the language type is chinese, the word is a chinese character.
For example, if the third sample text to be translated is a Chinese text meaning "I work in company A", the fourth target translation sample text corresponding to it is "I work in company A". The third sample text to be translated is taken as the input of the preset first translation model, which outputs the first translation text "I work in company A", and as the input of the preset second translation model, which outputs the second translation text "A company in work I".
And S3, generating target model training samples according to the first translation texts, the second translation texts and the fourth target translation sample texts corresponding to the plurality of third sample texts to be translated.
In this step, one possible implementation may include:
splicing the first translation text and the second translation text corresponding to each third sample text to be translated to obtain a spliced translation text corresponding to the third sample text to be translated; and taking the spliced translation text and the fourth target translation sample text corresponding to the third sample texts to be translated as the target model training samples.
Illustratively, continuing the example shown in step S2 above, the first translation text "I work in company A" and the second translation text "A company in work I" are spliced to obtain the spliced translation text "I work in company A A company in work I" corresponding to the third sample text to be translated, and the spliced translation text together with the fourth target translation sample text "I work in company A" is taken as a set of data in the target model training sample. That is, the spliced translation texts and the fourth target translation sample texts corresponding to the plurality of third sample texts to be translated are used as the target model training samples.
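A sketch of this sample construction, under the same assumptions as the earlier snippets (whitespace-delimited splicing, translate callables standing in for the trained first and second models):

def build_third_model_training_set(third_sample_pairs, first_translate, second_translate):
    """Build (spliced translation, reference) pairs for training the third model.

    third_sample_pairs: (source, reference) pairs, i.e. the third sample texts
    to be translated and their fourth target translation sample texts.
    """
    samples = []
    for source, reference in third_sample_pairs:
        spliced = first_translate(source) + " " + second_translate(source)
        samples.append((spliced, reference))
    return samples

# Continuing the worked example with toy stand-ins for the trained models:
print(build_third_model_training_set(
    [("我在A公司工作", "I work in company A")],
    lambda s: "I work in company A",
    lambda s: "A company in work I",
))
# -> [('I work in company A A company in work I', 'I work in company A')]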
Optionally, in this step, another possible implementation may include:
and taking the first translation text, the second translation text and the fourth target translation sample text corresponding to the plurality of third sample texts to be translated as the target model training samples.
Illustratively, still continuing the example shown in step S2 above, the first translation text "I work in company A", the second translation text "A company in work I" and the fourth target translation sample text "I work in company A" are taken as a set of data in the target model training sample; that is, the first translation texts, the second translation texts and the fourth target translation sample texts corresponding to the plurality of third sample texts to be translated are used as the target model training samples.
And S4, performing model training on the third initial network model through the target model training sample to obtain the third translation model.
The third initial network model may be a neural network model. Because the third initial network model does not need to change the architecture of the existing neural network model and can be trained directly on the target model training sample, the model training efficiency can be effectively improved.
It should be noted that, because the first translation model, the second translation model and the third translation model do not need to adjust the existing neural network model framework, the generation efficiency of the models can be effectively guaranteed. In addition, the first translation model and the second translation model are independent NMT models, so the first translation text and the second translation text are independent of each other and do not interfere with each other, which helps ensure the accuracy of both models. Moreover, because the third translation model generates the final target translation from the target spliced text formed from the first translation text and the second translation text, a target translation that better conforms to reading habits can be obtained from translations with different word orders, which is favorable for improving translation quality. Since the third translation model can obtain the final translation from the spliced L2R and R2L translations, the language translation requirements of both the L2R modeling manner and the R2L modeling manner can be met, high-quality translations can be obtained for texts to be translated of different language types, and the accuracy of translation results can be effectively guaranteed.
According to the above technical scheme, a target model training sample is generated according to the first translation texts, the second translation texts and the fourth target translation sample texts corresponding to the plurality of third sample texts to be translated, and the third initial network model is trained on the target model training sample to obtain the third translation model. In this way, while model training efficiency is guaranteed, high-quality translations can be obtained for texts to be translated of different language types, and the accuracy of the translation result can be effectively guaranteed.
FIG. 3 is a block diagram illustrating an apparatus for text translation in accordance with an exemplary embodiment; referring to fig. 3, the text translation apparatus may include:
an obtaining module 301 configured to obtain a text to be translated;
a first determining module 302, configured to use the text to be translated as an input of a preset first translation model, to obtain a first translation text corresponding to the text to be translated;
the second determining module 303 is configured to use the text to be translated as an input of a preset second translation model to obtain a second translation text corresponding to the text to be translated; the character arrangement sequence of the first translation text is different from the character arrangement sequence of the second translation text;
a third determining module 304, configured to splice the first translation text and the second translation text to obtain a target spliced text;
and the fourth determining module 305 is configured to take the target spliced text as the input of a preset third translation model and output the target translation corresponding to the text to be translated.
According to the above technical scheme, a first translation text corresponding to the text to be translated is obtained through a preset first translation model, and a second translation text corresponding to the text to be translated is obtained through a preset second translation model, where the word arrangement order of the first translation text is different from that of the second translation text. According to the first translation text and the second translation text, the target translation corresponding to the text to be translated is obtained through a preset third translation model. In this way, high-quality translations can be obtained for texts to be translated of different language types, and the accuracy of the translation result can be effectively guaranteed.
Optionally, the word arrangement order of the second translated text is an inverted order of the word arrangement order of the first translated text.
Optionally, the first translation model is trained by:
obtaining a first sample text set, wherein the first sample text set comprises a plurality of first sample pairs, each first sample pair comprises a first sample text to be translated and a first target translation sample text, and the first target translation sample text is a translated text of the first sample text to be translated;
and performing model training on the first initial network model through the first sample text set to obtain the first translation model.
Optionally, the second translation model is trained by:
acquiring a second sample text set, wherein the second sample text set comprises a plurality of second sample pairs, each second sample pair comprises a second sample text to be translated and a second target translation sample text, and the second target translation sample text is a translated text of the second sample text to be translated;
arranging the character sequence of the second target translation sample text according to a preset sequence to obtain a third target translation sample text corresponding to each second sample text to be translated;
and performing model training on the second initial network model through a plurality of second sample texts to be translated and the third target translation sample text corresponding to each second sample text to be translated to obtain the second translation model.
Optionally, the arranging the text sequences of the second target translation sample texts according to a preset sequence to obtain a third target translation sample text corresponding to each second sample text to be translated includes:
turning the character sequence of the second target translation sample text to obtain a second target translation sample with the inverted character sequence;
and taking the second target translation sample with the inverted word sequence as the third target translation sample text.
Optionally, the third translation model is trained by:
acquiring a third sample text set, wherein the third sample text set comprises a plurality of third sample pairs, each third sample pair comprises a third sample text to be translated and a fourth target translation sample text, and the fourth target translation sample text is a translated text of the third sample text to be translated;
taking each third sample text to be translated as the input of the first translation model and the second translation model respectively, so that the first translation model outputs a first translation text corresponding to the third sample text to be translated, and the second translation model outputs a second translation text corresponding to the third sample text to be translated, wherein the word arrangement sequence of the first translation text is different from the word arrangement sequence of the second translation text;
generating a target model training sample according to the first translation text, the second translation text and the fourth target translation sample text corresponding to the plurality of third sample texts to be translated;
and carrying out model training on the third initial network model through the target model training sample to obtain the third translation model.
Optionally, the generating a target model training sample according to the first translation text, the second translation text and the fourth target translation sample text corresponding to a plurality of third sample texts to be translated includes:
splicing the first translation text and the second translation text corresponding to each third sample text to be translated to obtain a spliced translation text corresponding to the third sample text to be translated;
and taking the spliced translation text and the fourth target translation sample text corresponding to the third sample texts to be translated as the target model training samples.
According to the above technical scheme, a target model training sample is generated according to the first translation texts, the second translation texts and the fourth target translation sample texts corresponding to the plurality of third sample texts to be translated, and the third initial network model is trained on the target model training sample to obtain the third translation model. In this way, while model training efficiency is guaranteed, high-quality translations can be obtained for texts to be translated of different language types, and the accuracy of the translation result can be effectively guaranteed.
FIG. 4 is a block diagram illustrating another apparatus for text translation in accordance with an example embodiment. For example, the apparatus 400 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 4, the apparatus 400 may include one or more of the following components: a processing component 402, a memory 404, a power component 406, a multimedia component 408, an audio component 410, an input/output (I/O) interface 412, a sensor component 414, and a communication component 416.
The processing component 402 generally controls overall operation of the apparatus 400, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. Processing component 402 may include one or more processors 420 to execute instructions to perform all or a portion of the steps of the text translation methods described above. Further, the processing component 402 can include one or more modules that facilitate interaction between the processing component 402 and other components. For example, the processing component 402 can include a multimedia module to facilitate interaction between the multimedia component 408 and the processing component 402.
The memory 404 is configured to store various types of data to support operations at the apparatus 400. Examples of such data include instructions for any application or method operating on the device 400, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 404 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The multimedia component 408 includes a screen that provides an output interface between the device 400 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 408 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the apparatus 400 is in an operation mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 410 is configured to output and/or input audio signals. For example, audio component 410 includes a Microphone (MIC) configured to receive external audio signals when apparatus 400 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 404 or transmitted via the communication component 416. In some embodiments, audio component 410 also includes a speaker for outputting audio signals.
The I/O interface 412 provides an interface between the processing component 402 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 414 includes one or more sensors for providing status assessments of various aspects of the apparatus 400. For example, the sensor component 414 may detect an open/closed state of the apparatus 400 and the relative positioning of components, such as the display and keypad of the apparatus 400; it may also detect a change in the position of the apparatus 400 or a component of the apparatus 400, the presence or absence of user contact with the apparatus 400, the orientation or acceleration/deceleration of the apparatus 400, and a change in the temperature of the apparatus 400. The sensor component 414 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 414 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 414 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 416 is configured to facilitate wired or wireless communication between the apparatus 400 and other devices. The apparatus 400 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 416 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 416 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 400 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods of text translation.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 404 comprising instructions, executable by the processor 420 of the apparatus 400 to perform the above-described method of text translation is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (10)
1. A method of text translation, comprising:
acquiring a text to be translated;
taking the text to be translated as the input of a preset first translation model to obtain a first translation text corresponding to the text to be translated;
taking the text to be translated as the input of a preset second translation model to obtain a second translation text corresponding to the text to be translated; wherein the first translation text and the second translation text have different word arrangement orders;
splicing the first translation text and the second translation text to obtain a target spliced text;
and taking the target spliced text as the input of a preset third translation model, and outputting to obtain a target translation corresponding to the text to be translated.
2. The method of claim 1, wherein the second translated text has an order of words that is an inverse of the order of words of the first translated text.
3. The method of claim 1, wherein the first translation model is trained by:
obtaining a first sample text set, wherein the first sample text set comprises a plurality of first sample pairs, each first sample pair comprises a first sample text to be translated and a first target translation sample text, and the first target translation sample text is a translated text of the first sample text to be translated;
and carrying out model training on a first initial network model through the first sample text set to obtain the first translation model.
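A minimal training sketch for claim 3, assuming a placeholder Seq2SeqModel class whose train_step method stands in for one parameter update of any real translation network:

```python
# Hypothetical sketch of claim 3: train the first initial network model on
# (sample text to be translated, target translation sample text) pairs.

class Seq2SeqModel:
    """Placeholder for an initial network model; a real one would hold parameters."""
    def train_step(self, source: str, target: str) -> float:
        # A real implementation would compute a loss and update parameters here.
        return 0.0

first_sample_text_set = [
    ("first sample text to be translated 1", "first target translation sample text 1"),
    ("first sample text to be translated 2", "first target translation sample text 2"),
]

first_model = Seq2SeqModel()
for epoch in range(3):
    for sample_source, sample_target in first_sample_text_set:
        first_model.train_step(sample_source, sample_target)
```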
4. The method of claim 1, wherein the second translation model is trained by:
acquiring a second sample text set, wherein the second sample text set comprises a plurality of second sample pairs, each second sample pair comprises a second sample text to be translated and a second target translation sample text, and the second target translation sample text is a translated text of the second sample text to be translated;
rearranging the word sequence of the second target translation sample text according to a preset order to obtain a third target translation sample text corresponding to each second sample text to be translated;
and performing model training on a second initial network model through a plurality of second sample texts to be translated and the third target translation sample texts corresponding to each second sample text to be translated to obtain the second translation model.
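Claim 4's data preparation can be sketched the same way, reusing the placeholder Seq2SeqModel from the claim 3 sketch; the only addition is inverting each target's word sequence before training:

```python
def reverse_word_sequence(text: str) -> str:
    # Produce the third target translation sample text by inverting word order.
    return " ".join(reversed(text.split()))

second_sample_text_set = [
    ("second sample text to be translated 1", "second target translation one"),
    ("second sample text to be translated 2", "second target translation two"),
]

second_model = Seq2SeqModel()  # placeholder class from the claim 3 sketch
for sample_source, sample_target in second_sample_text_set:
    second_model.train_step(sample_source, reverse_word_sequence(sample_target))
```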
5. The method according to claim 4, wherein the rearranging of the word sequence of the second target translation sample text according to a preset order to obtain a third target translation sample text corresponding to each second sample text to be translated comprises:
reversing the word sequence of the second target translation sample text to obtain a second target translation sample with an inverted word sequence;
and taking the second target translation sample with the inverted word sequence as the third target translation sample text.
6. The method of claim 1, wherein the third translation model is trained by:
acquiring a third sample text set, wherein the third sample text set comprises a plurality of third sample pairs, each third sample pair comprises a third sample text to be translated and a fourth target translation sample text, and the fourth target translation sample text is a translated text of the third sample text to be translated;
taking each third sample text to be translated as the input of the first translation model and the second translation model respectively, so that the first translation model outputs a first translation text corresponding to the third sample text to be translated, and the second translation model outputs a second translation text corresponding to the third sample text to be translated, wherein the word arrangement sequence of the first translation text is different from the word arrangement sequence of the second translation text;
generating a target model training sample according to the first translation text, the second translation text and the fourth target translation sample text corresponding to the plurality of third sample texts to be translated;
and carrying out model training on a third initial network model through the target model training sample to obtain the third translation model.
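A sketch of claim 6's sample generation, assuming the trained first and second models are available as simple string-in/string-out callables (the stub definitions below are hypothetical stand-ins):

```python
# Hypothetical sketch of claim 6: run each third sample text through both
# trained models to obtain the two translation texts used as training data.

def first_translation_model(text: str) -> str:
    # Stand-in for the trained first translation model (normal word order).
    return "forward translation of " + text

def second_translation_model(text: str) -> str:
    # Stand-in for the trained second translation model (reversed word order).
    return " ".join(reversed(first_translation_model(text).split()))

third_sample_text_set = [
    ("third sample text to be translated 1", "fourth target translation sample text 1"),
    ("third sample text to be translated 2", "fourth target translation sample text 2"),
]

# Keep the two translation texts together with the fourth target translation.
generated_samples = []
for sample_source, fourth_target in third_sample_text_set:
    first_text = first_translation_model(sample_source)
    second_text = second_translation_model(sample_source)
    generated_samples.append((first_text, second_text, fourth_target))
```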
7. The method according to claim 6, wherein the generating of the target model training sample according to the first translation text, the second translation text and the fourth target translation sample text corresponding to the plurality of third sample texts to be translated comprises:
splicing the first translation text and the second translation text corresponding to each third sample text to be translated to obtain a spliced translation text corresponding to the third sample text to be translated;
and taking the spliced translation text and the fourth target translation sample text corresponding to the plurality of third sample texts to be translated as the target model training samples.
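Continuing that sketch (and reusing the placeholder Seq2SeqModel from the claim 3 sketch and generated_samples from the claim 6 sketch), claim 7's splicing step and claim 6's final training step might look like:

```python
# Hypothetical continuation: splice each pair of translation texts, then pair
# the spliced text with the fourth target translation sample text.
target_model_training_samples = [
    (first_text + " [SEP] " + second_text, fourth_target)  # "[SEP]" is an assumed separator
    for first_text, second_text, fourth_target in generated_samples
]

# Train the third initial network model on the target model training samples.
third_model = Seq2SeqModel()
for spliced_text, fourth_target in target_model_training_samples:
    third_model.train_step(spliced_text, fourth_target)
```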
8. An apparatus for text translation, comprising:
the acquisition module is configured to acquire a text to be translated;
the first determining module is configured to take the text to be translated as an input of a preset first translation model to obtain a first translation text corresponding to the text to be translated;
the second determining module is configured to use the text to be translated as an input of a preset second translation model to obtain a second translation text corresponding to the text to be translated; wherein the first translation text and the second translation text have different word arrangement orders;
the third determining module is configured to splice the first translation text and the second translation text to obtain a target spliced text;
and the fourth determining module is configured to take the target spliced text as the input of a preset third translation model and output a target translation corresponding to the text to be translated.
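As an illustration, the four modules of claim 8 map naturally onto a small class whose methods delegate to three injected models; the class and its method names are hypothetical, not the patent's implementation:

```python
# Hypothetical sketch mirroring claim 8's module structure.

class TextTranslationApparatus:
    def __init__(self, first_model, second_model, third_model):
        # Each model is any callable mapping source text to translation text.
        self.first_model = first_model
        self.second_model = second_model
        self.third_model = third_model

    def acquire(self, text: str) -> str:
        # Acquisition module: obtain the text to be translated.
        return text

    def translate(self, text_to_translate: str) -> str:
        text = self.acquire(text_to_translate)
        first_text = self.first_model(text)              # first determining module
        second_text = self.second_model(text)            # second determining module
        spliced = first_text + " [SEP] " + second_text   # third determining module (assumed separator)
        return self.third_model(spliced)                 # fourth determining module
```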
9. An apparatus for text translation, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to execute the executable instructions to implement the steps of the method of any one of claims 1 to 7.
10. A computer-readable storage medium, on which computer program instructions are stored, which program instructions, when executed by a processor, carry out the steps of the method according to any one of claims 1 to 7.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202110524501.XA CN113221581B (en) | 2021-05-13 | | Text translation method, device and storage medium
Publications (2)

Publication Number | Publication Date
---|---
CN113221581A (en) | 2021-08-06
CN113221581B (en) | 2024-11-05
Patent Citations (8)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
WO2010091674A2 (en) | 2009-02-16 | 2010-08-19 | Marius Gevers | Method and a system for translating a text from a first language into at least one further language, and a computer program product
US20200364412A1 (en) | 2018-05-10 | 2020-11-19 | Tencent Technology (Shenzhen) Company Limited | Translation model training method, sentence translation method, device, and storage medium
CN110175335A (en) | 2019-05-08 | 2019-08-27 | 北京百度网讯科技有限公司 | Training method and device of translation model
CN110532575A (en) | 2019-08-21 | 2019-12-03 | 语联网(武汉)信息技术有限公司 | Text interpretation method and device
CN112749569A (en) | 2019-10-29 | 2021-05-04 | 阿里巴巴集团控股有限公司 | Text translation method and device
CN111368560A (en) | 2020-02-28 | 2020-07-03 | 北京字节跳动网络技术有限公司 | Text translation method and device, electronic equipment and storage medium
CN112329392A (en) | 2020-11-05 | 2021-02-05 | 上海明略人工智能(集团)有限公司 | Target encoder construction method and device for bidirectional encoding
CN112733556A (en) | 2021-01-28 | 2021-04-30 | 何灏 | Synchronous interactive translation method and device, storage medium and computer equipment
Non-Patent Citations (4)

Title
---
WANG YAJUAN et al.: "Research of Uyghur-Chinese Machine Translation System Combination Based on Paraphrase Information", Computer Engineering, vol. 45, no. 4, 15 April 2019
ZHANG Jinchao; Aishan Wumaier; Maihemuti Maimaiti; LIU Qun: "Large-scale Uyghur-Chinese neural machine translation model based on multiple encoders and decoders", Journal of Chinese Information Processing, no. 09, 15 September 2018
LI Xia; MA Junteng; QIN Shihao: "Multimodal machine translation model incorporating image attention", Journal of Chinese Information Processing, no. 07, 15 July 2020
JIA Chengxun et al.: "Pseudo-parallel corpus generation for Chinese-Vietnamese neural machine translation based on a pivot language", Computer Engineering & Science, vol. 43, no. 3, March 2021
Similar Documents

Publication | Title
---|---
JP6918181B2 (en) | Machine translation model training methods, equipment and systems
CN107564526B (en) | Processing method, apparatus and machine-readable medium
CN111382748B (en) | Image translation method, device and storage medium
CN112036195A (en) | Machine translation method, device and storage medium
CN112037756A (en) | Voice processing method, apparatus and medium
CN113673261A (en) | Data generation method and device and readable storage medium
CN111104807B (en) | Data processing method and device and electronic equipment
CN111369978A (en) | Data processing method and device and data processing device
CN111090998A (en) | Sign language conversion method and device and sign language conversion device
CN109670025B (en) | Dialogue management method and device
CN109977424B (en) | Training method and device for machine translation model
CN112036174A (en) | Punctuation marking method and device
CN116543211A (en) | Image attribute editing method, device, electronic equipment and storage medium
CN110245358B (en) | Machine translation method and related device
CN109284510B (en) | Text processing method and system and text processing device
CN113923517B (en) | Background music generation method and device and electronic equipment
CN113221581B (en) | Text translation method, device and storage medium
CN114462410A (en) | Entity identification method, device, terminal and storage medium
CN113115104B (en) | Video processing method and device, electronic equipment and storage medium
CN110837741B (en) | Machine translation method, device and system
CN113221581A (en) | Text translation method, device and storage medium
CN108345590B (en) | Translation method, translation device, electronic equipment and storage medium
CN112905023A (en) | Input error correction method and device for input error correction
CN112613327B (en) | Information processing method and device
CN113723117B (en) | Translation model training method and device for translation model training
Legal Events

Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant