CN113919372A - Machine translation quality evaluation method, device and storage medium - Google Patents
Machine translation quality evaluation method, device and storage medium
- Publication number
- CN113919372A (application CN202010663296.0A)
- Authority
- CN
- China
- Prior art keywords
- statement
- pseudo
- monolingual
- target
- parallel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/40—Processing or translation of natural language
- G06F40/51—Translation evaluation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/40—Processing or translation of natural language
- G06F40/58—Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
Abstract
The present disclosure relates to the field of computer technology, and in particular to a machine translation quality evaluation method, apparatus, and storage medium. The method includes: generating a pseudo parallel corpus from a pre-configured original parallel corpus, where the pseudo parallel corpus includes a plurality of pseudo parallel sentence pairs, each pseudo parallel sentence pair includes a source monolingual sentence and a corresponding pseudo target monolingual sentence, and the similarity between the data distribution of the pseudo target monolingual sentences and the data distribution of real machine translations is greater than a similarity threshold; and training an original quality evaluation model on the pseudo parallel corpus to obtain a target quality evaluation model. According to the embodiments of the present disclosure, because the similarity between the data distribution of the pseudo target monolingual sentences and that of real machine translations exceeds the similarity threshold, training the original quality evaluation model on the pseudo parallel corpus substantially improves the evaluation quality of the resulting target quality evaluation model.
Description
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for evaluating machine translation quality, and a storage medium.
Background
Automatic machine translation quality evaluation falls into two directions. The first is evaluation with a reference translation: given a standard (reference) translation, the quality of a machine translation system's output is judged by comparing its similarity to the reference. The second is evaluation without a reference translation: the quality of a translation is judged directly from the source text and the system's output, without relying on any reference. The method described in this disclosure belongs to the second direction, reference-free machine translation quality evaluation.
In the related art, a common reference-free machine translation quality evaluation method proceeds as follows. In a pre-training stage, a feature extractor is trained on a large-scale original parallel corpus, and an additional quality evaluation model is trained on labeled translation quality evaluation data. When a quality evaluation task is executed, the pre-trained feature extractor maps the data to be evaluated into a set of high-dimensional representations, and the pre-trained quality evaluation model then scores the translation quality from these representations.
However, in the above method, because the data distribution of the original parallel corpus differs considerably from that of real translation quality evaluation data, the knowledge in the original parallel corpus cannot be fully transferred to the quality evaluation task, and the resulting machine translation quality evaluation performs poorly.
Disclosure of Invention
In view of the above, the present disclosure provides a method, an apparatus, and a storage medium for evaluating quality of machine translation.
The technical solution includes the following aspects:
according to an aspect of the present disclosure, there is provided a machine translation quality assessment method for use in a computer device, the method including:
generating a pseudo parallel corpus according to a pre-configured original parallel corpus, where the pseudo parallel corpus includes a plurality of pseudo parallel sentence pairs, each pseudo parallel sentence pair includes a source monolingual sentence and a corresponding pseudo target monolingual sentence, and the similarity between the data distribution of the pseudo target monolingual sentences and the data distribution of real machine translations is greater than a similarity threshold;
and training an original quality evaluation model according to the pseudo parallel corpus to obtain a target quality evaluation model, where the target quality evaluation model is used to evaluate the machine translation quality of sentence pairs to be evaluated.
In one possible implementation, the original parallel corpus includes a plurality of original parallel sentence pairs, each including a source monolingual sentence and a corresponding correct target monolingual sentence; and generating the pseudo parallel corpus according to the pre-configured original parallel corpus includes:
for each original parallel sentence pair in the original parallel corpus, generating the pseudo target monolingual sentence by invoking a generator model, where the pseudo target monolingual sentence differs from the correct target monolingual sentence;
and obtaining the pseudo parallel sentence pair from the source monolingual sentence and the pseudo target monolingual sentence.
In another possible implementation, generating the pseudo target monolingual sentence by invoking a generator model for each original parallel sentence pair in the original parallel corpus includes:
for each original parallel sentence pair in the original parallel corpus, encoding the source monolingual sentence by invoking the generator model;
masking at least one word in the correct target monolingual sentence;
and reconstructing the masked word(s) from the encoded source monolingual sentence and the masked correct target monolingual sentence to obtain the pseudo target monolingual sentence.
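The mask-and-reconstruct procedure above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: the trained generator's masked-language-model head is replaced by a hypothetical `fill_fn` callback (here a deliberately noisy toy lexicon), and tokenization is assumed to be whitespace-based.

```python
import random

# Hypothetical lexicon standing in for a trained generator's predictions;
# deliberately noisy so the "reconstructed" words can differ from the gold ones.
NOISY_LEXICON = {"cat": "dog", "sat": "sat", "mat": "rug"}

def toy_fill(source_tokens, target_tokens, pos):
    """Stand-in for the generator's masked-language-model prediction head."""
    return NOISY_LEXICON.get(target_tokens[pos], target_tokens[pos])

def make_pseudo_target(source_tokens, target_tokens, mask_ratio=0.3,
                       fill_fn=toy_fill, seed=0):
    """Mask a fraction of target words, then reconstruct them from the
    source and the masked target, yielding a pseudo target sentence with
    the same length and word positions as the correct target sentence."""
    rng = random.Random(seed)
    n_mask = max(1, int(len(target_tokens) * mask_ratio))
    positions = rng.sample(range(len(target_tokens)), n_mask)
    pseudo = list(target_tokens)
    for pos in positions:
        pseudo[pos] = fill_fn(source_tokens, pseudo, pos)
    return pseudo, sorted(positions)

pseudo, masked = make_pseudo_target(["le", "chat"], ["the", "cat", "sat"],
                                    mask_ratio=0.5)
```

Because words are replaced in place rather than inserted or deleted, the one-to-one positional correspondence between the pseudo target sentence and the correct target sentence is preserved, which the later labeling step relies on.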
In another possible implementation, the generator model is a self-encoding structure based on a source-text encoder and a masked language model encoder, and the words of the pseudo target monolingual sentence correspond one-to-one, by position, to the words of the correct target monolingual sentence.
In another possible implementation, after reconstructing the masked word(s) from the encoded source monolingual sentence and the masked correct target monolingual sentence to obtain the pseudo target monolingual sentence, the method further includes:
generating a labeling parameter corresponding to the pseudo target monolingual sentence according to the pseudo target monolingual sentence and the correct target monolingual sentence;
where the labeling parameter indicates whether each word in the pseudo target monolingual sentence was produced by machine translation, and/or the proportion of machine-translated words among the total number of words in the pseudo target monolingual sentence.
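Because the generator preserves positional correspondence, both forms of the labeling parameter can be derived by direct token comparison. A minimal sketch (the function name is illustrative, not from the patent):

```python
def label_pseudo_target(pseudo_tokens, correct_tokens):
    """Word-level labeling: tag 1 where the pseudo translation diverges from
    the correct target (i.e. the word was 'machine translated'), 0 elsewhere,
    plus the fraction of such words. Assumes one-to-one token alignment."""
    tags = [int(p != c) for p, c in zip(pseudo_tokens, correct_tokens)]
    ratio = sum(tags) / len(tags)
    return tags, ratio

tags, ratio = label_pseudo_target(["the", "dog", "sat"], ["the", "cat", "sat"])
# tags == [0, 1, 0]; ratio == 1/3
```

These tags and the ratio serve as the supervision signal for the discriminator training described below in the disclosure.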
In another possible implementation, training the original quality assessment model according to the pseudo parallel corpus to obtain the target quality assessment model includes:
for each pseudo parallel sentence pair in the pseudo parallel corpus, invoking a discriminator model to obtain a training result, where the discriminator model is based on a source-text encoder and a translation encoder;
comparing the training result with the labeling parameter of the pseudo parallel sentence pair to obtain a calculated loss, where the calculated loss indicates the error between the training result and the labeling parameter;
and training the target quality assessment model according to the calculated loss of each pseudo parallel sentence pair.
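The comparison between the discriminator's training result and the labeling parameter can be illustrated with a per-word binary cross-entropy, a common choice for word-level tags; the patent does not fix the loss function, so this is an assumed instantiation:

```python
import math

def qe_loss(word_probs, gold_tags, eps=1e-9):
    """Binary cross-entropy between the discriminator's per-word probabilities
    (probability that a word is machine-generated/wrong) and the gold tags
    taken from the labeling parameter."""
    total = 0.0
    for p, g in zip(word_probs, gold_tags):
        total += -(g * math.log(p + eps) + (1 - g) * math.log(1 - p + eps))
    return total / len(gold_tags)

# Discriminator is fairly confident and mostly right on this toy example:
loss = qe_loss([0.9, 0.1, 0.2], [1, 0, 0])
```

During training, this loss would be backpropagated through the source-text and translation encoders of the discriminator; lower loss means the training result agrees more closely with the labeling parameter.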
In another possible implementation, after training the original quality assessment model according to the pseudo parallel corpus to obtain the target quality assessment model, the method further includes:
obtaining a sentence pair to be evaluated, where the sentence pair to be evaluated includes a source monolingual sentence and a target monolingual sentence;
and invoking the trained target quality evaluation model on the sentence pair to be evaluated to obtain an evaluation result.
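At inference time, the trained model's word-level outputs can be aggregated into a sentence-level evaluation result. The patent leaves the exact form of the result open, so the aggregation below is a hypothetical example:

```python
def aggregate_evaluation(word_probs, threshold=0.5):
    """Turn per-word 'this word is wrong' probabilities from the trained
    model into word-level error tags and a sentence-level quality score
    in [0, 1] (higher is better)."""
    tags = [int(p >= threshold) for p in word_probs]
    quality = 1.0 - sum(tags) / len(tags)
    return tags, quality

tags, quality = aggregate_evaluation([0.1, 0.8, 0.3])
# tags == [0, 1, 0]; quality == 2/3
```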
According to another aspect of the present disclosure, there is provided a machine translation quality assessment apparatus for use in a computer device, the apparatus including:
the generating module is configured to generate a pseudo parallel corpus according to a pre-configured original parallel corpus, where the pseudo parallel corpus includes a plurality of pseudo parallel sentence pairs, each pseudo parallel sentence pair includes a source monolingual sentence and a corresponding pseudo target monolingual sentence, and the similarity between the data distribution of the pseudo target monolingual sentences and the data distribution of real machine translations is greater than a similarity threshold;
and the training module is configured to train an original quality evaluation model according to the pseudo parallel corpus to obtain a target quality evaluation model, where the target quality evaluation model is used to evaluate the machine translation quality of sentence pairs to be evaluated.
In one possible implementation, the original parallel corpus includes a plurality of original parallel sentence pairs, each including a source monolingual sentence and a corresponding correct target monolingual sentence; the generating module is further configured to:
for each original parallel sentence pair in the original parallel corpus, generate the pseudo target monolingual sentence by invoking a generator model, where the pseudo target monolingual sentence differs from the correct target monolingual sentence;
and obtain the pseudo parallel sentence pair from the source monolingual sentence and the pseudo target monolingual sentence.
In another possible implementation, the generating module is further configured to:
for each original parallel sentence pair in the original parallel corpus, encode the source monolingual sentence by invoking the generator model;
mask at least one word in the correct target monolingual sentence;
and reconstruct the masked word(s) from the encoded source monolingual sentence and the masked correct target monolingual sentence to obtain the pseudo target monolingual sentence.
In another possible implementation, the generator model is a self-encoding structure based on a source-text encoder and a masked language model encoder, and the words of the pseudo target monolingual sentence correspond one-to-one, by position, to the words of the correct target monolingual sentence.
In another possible implementation, the generating module is further configured to:
generate a labeling parameter corresponding to the pseudo target monolingual sentence according to the pseudo target monolingual sentence and the correct target monolingual sentence;
where the labeling parameter indicates whether each word in the pseudo target monolingual sentence was produced by machine translation, and/or the proportion of machine-translated words among the total number of words in the pseudo target monolingual sentence.
In another possible implementation, the training module is further configured to:
for each pseudo parallel sentence pair in the pseudo parallel corpus, invoke a discriminator model to obtain a training result, where the discriminator model is based on a source-text encoder and a translation encoder;
compare the training result with the labeling parameter of the pseudo parallel sentence pair to obtain a calculated loss, where the calculated loss indicates the error between the training result and the labeling parameter;
and train the target quality assessment model according to the calculated loss of each pseudo parallel sentence pair.
In another possible implementation, the apparatus further includes an acquisition module and a calling module;
the acquisition module is configured to obtain a sentence pair to be evaluated, where the sentence pair to be evaluated includes a source monolingual sentence and a target monolingual sentence;
and the calling module is configured to invoke the trained target quality evaluation model on the sentence pair to be evaluated to obtain an evaluation result.
According to another aspect of the present disclosure, there is provided a computer device including: a processor; a memory for storing processor-executable instructions;
wherein the processor is configured to:
generating a pseudo parallel corpus according to a pre-configured original parallel corpus, where the pseudo parallel corpus includes a plurality of pseudo parallel sentence pairs, each pseudo parallel sentence pair includes a source monolingual sentence and a corresponding pseudo target monolingual sentence, and the similarity between the data distribution of the pseudo target monolingual sentences and the data distribution of real machine translations is greater than a similarity threshold;
and training an original quality evaluation model according to the pseudo parallel corpus to obtain a target quality evaluation model, where the target quality evaluation model is used to evaluate the machine translation quality of sentence pairs to be evaluated.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
In summary, the computer device generates a pseudo parallel corpus from a pre-configured original parallel corpus, where the pseudo parallel corpus includes a plurality of pseudo parallel sentence pairs, each containing a source monolingual sentence and a corresponding pseudo target monolingual sentence; the original quality evaluation model is then trained on the pseudo parallel corpus to obtain a target quality evaluation model. Because the similarity between the data distribution of the pseudo target monolingual sentences and that of real machine translations is greater than the similarity threshold, the pseudo target monolingual sentences can be modeled in a manner close to the machine translation quality evaluation task, ensuring the effectiveness of the resulting target quality evaluation model when evaluating sentence pairs to be evaluated.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features, and aspects of the disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1 shows a schematic structural diagram of a computer device provided by an exemplary embodiment of the present disclosure;
FIG. 2 is a flowchart illustrating a method for evaluating machine translation quality provided by an exemplary embodiment of the present disclosure;
FIG. 3 is a schematic diagram illustrating a machine translation quality assessment method according to an exemplary embodiment of the present disclosure;
FIG. 4 is a flowchart illustrating a method for machine translation quality assessment according to another exemplary embodiment of the present disclosure;
FIG. 5 illustrates a schematic diagram of a generator model training process provided by an exemplary embodiment of the present disclosure;
FIG. 6 illustrates a schematic diagram of a generator model usage process provided by an exemplary embodiment of the present disclosure;
FIG. 7 is a schematic diagram illustrating a discriminant model training process provided by an exemplary embodiment of the present disclosure;
FIG. 8 is a flowchart illustrating a method for machine translation quality assessment according to another exemplary embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of a machine translation quality evaluation apparatus according to an exemplary embodiment of the present disclosure;
FIG. 10 is a block diagram illustrating an apparatus for performing a method for machine translation quality assessment in accordance with an exemplary embodiment;
fig. 11 is a block diagram illustrating an apparatus for performing a method for machine translation quality assessment in accordance with another exemplary embodiment.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
In the related art, a common reference-free machine translation quality evaluation method proceeds as follows. In a pre-training stage, a feature extractor is trained on a large-scale original parallel corpus, and an additional quality evaluation model is trained on labeled translation quality evaluation data. When a quality evaluation task is executed, the pre-trained feature extractor maps the data to be evaluated into a set of high-dimensional representations, and the pre-trained quality evaluation model then scores the translation quality from these representations.
This reference-free machine translation quality evaluation method can improve the performance of a quality evaluation model by introducing knowledge from a large-scale original parallel corpus when labeled quality evaluation data are scarce. However, the data distribution of the original parallel corpus differs considerably from that of real translation quality evaluation data: translations in the original parallel corpus are natural and correct, whereas translations in quality evaluation data are generated by a machine translation system and may contain errors. A feature extractor trained only on correct translations may fail to make correct predictions when faced with erroneous translations, so the knowledge in the original parallel corpus cannot be fully applied to the quality evaluation task.
In the related art, because of this large difference between the data distribution of the original parallel corpus and that of real translation quality evaluation data, the information needed for the quality evaluation task cannot be learned well from the original parallel corpus. In the embodiments of the present disclosure, the computer device generates a pseudo parallel corpus from the pre-configured original parallel corpus; the pseudo parallel corpus may include a large number of pseudo parallel sentence pairs and is easy to expand, so the scheme is sustainable. In addition, because the similarity between the data distribution of the pseudo target monolingual sentences and that of real machine translations is greater than the similarity threshold, training the original quality evaluation model on the pseudo parallel corpus substantially improves the evaluation quality of the resulting target quality evaluation model.
First, some terms related to the present disclosure are introduced.
Model fine-tuning (FT): retraining an already-trained model on a small amount of domain-specific data to improve its performance in that domain.
Parallel corpus: text in two or more languages whose sentences are aligned with and translations of one another.
Monolingual corpus: text written in a single language. Monolingual corpora are the basis for constructing parallel corpora.
Parallel sentence pair: a pair of mutually aligned, mutually translated sentences in different languages. A parallel sentence pair includes a source monolingual sentence and a target monolingual sentence; the source monolingual sentence is also called the source text, and the target monolingual sentence is the translation of the source monolingual sentence.
Parallel word pair: a pair of words in different languages that are translations of each other. A professional parallel word pair is a parallel word pair from a specialized field.
Word alignment: the operation of aligning words with the same meaning between the target monolingual sentences and the source monolingual sentences of the parallel corpus used to train a machine translation model. Because grammar, expression, and usage differ across languages, word alignment relations vary greatly across parallel corpora. Establishing the correspondence between the vocabularies of a parallel corpus can improve a model's translation quality.
BLEU (bilingual evaluation understudy): an automatic evaluation method for machine translation quality. BLEU is an algorithm that measures the similarity between machine-translated text and a reference translation; a larger BLEU score indicates higher machine translation quality.
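As a concrete illustration of the idea behind BLEU, the sketch below computes a simplified score from clipped unigram precision and a brevity penalty; real BLEU combines modified n-gram precisions up to 4-grams with a geometric mean, so this is only a toy stand-in:

```python
import math
from collections import Counter

def unigram_bleu(candidate, reference):
    """Clipped unigram precision times a brevity penalty. Simplified:
    full BLEU averages modified n-gram precisions for n = 1..4."""
    cand_counts, ref_counts = Counter(candidate), Counter(reference)
    # Clip each candidate word's count by its count in the reference,
    # so repeating a correct word cannot inflate the score.
    clipped = sum(min(n, ref_counts[w]) for w, n in cand_counts.items())
    precision = clipped / max(1, len(candidate))
    if len(candidate) > len(reference):
        brevity_penalty = 1.0
    else:
        brevity_penalty = math.exp(1 - len(reference) / max(1, len(candidate)))
    return brevity_penalty * precision

score = unigram_bleu(["the", "cat", "sat"], ["the", "cat", "sat", "down"])
```

Here all three candidate words match the reference (precision 1.0), but the candidate is shorter than the reference, so the brevity penalty lowers the score below 1.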
Original parallel corpus: a pre-configured bilingual parallel corpus, i.e., text written in two languages with mutually aligned, mutually translated sentences.
The two languages may be, for example, English and French, or English and German.
The original parallel corpus includes a plurality of original parallel sentence pairs, each including a source monolingual sentence and a corresponding correct target monolingual sentence. The source monolingual sentence is also called the source text, and the correct target monolingual sentence is a correct translation of the source monolingual sentence.
Pseudo parallel corpus: a pseudo parallel corpus derived from the original parallel corpus. It includes a plurality of pseudo parallel sentence pairs, each including a source monolingual sentence and a corresponding pseudo target monolingual sentence.
For an original parallel sentence pair, replacing at least one word in the correct target monolingual sentence yields a pseudo target monolingual sentence corresponding to the source monolingual sentence; that is, the pseudo parallel sentence pair consists of the source monolingual sentence and the pseudo target monolingual sentence. The source monolingual sentence is also called the source text, and the pseudo target monolingual sentence is a pseudo translation of the source monolingual sentence.
The similarity between the data distribution of the pseudo target monolingual sentences and the data distribution of real machine translations is greater than a similarity threshold.
Referring to fig. 1, a schematic structural diagram of a computer device according to an exemplary embodiment of the present disclosure is shown.
The computer device may be a terminal or a server. The terminal may be a tablet computer, a laptop computer, a desktop computer, or the like. The server may be a single server, a server cluster composed of multiple servers, or a cloud computing service center.
As shown in fig. 1, the computer device includes a processor 10, a memory 20, and a communication interface 30. Those skilled in the art will appreciate that the configuration shown in FIG. 1 is not intended to be limiting of the computer device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
the processor 10 is a control center of the computer device, connects various parts of the entire computer device using various interfaces and lines, and performs various functions of the computer device and processes data by operating or executing software programs and/or modules stored in the memory 20 and calling data stored in the memory 20, thereby performing overall control of the computer device. The processor 10 may be implemented by a CPU or a Graphics Processing Unit (GPU).
The memory 20 may be used to store software programs and modules. The processor 10 executes various functional applications and data processing by executing software programs and modules stored in the memory 20. The memory 20 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system 21, a virtual module, an application program required for at least one function (such as neural network model training, etc.), and the like; the storage data area may store data created according to use of the computer device, and the like. The Memory 20 may be implemented by any type of volatile or non-volatile Memory device or combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic Memory, flash Memory, magnetic disk or optical disk. Accordingly, the memory 20 may also include a memory controller to provide the processor 10 access to the memory 20.
Wherein the processor 10 is configured to perform the following functions: generating a pseudo parallel corpus according to a pre-configured original parallel corpus, wherein the pseudo parallel corpus comprises a plurality of pseudo parallel statement pairs, the pseudo parallel statement pairs comprise source monolingual statements and corresponding pseudo target monolingual statements, and the similarity degree of the data distribution of the pseudo target monolingual statements and the data distribution of the real machine translation is larger than a similarity threshold value; and training the original quality evaluation model according to the pseudo parallel corpus to obtain a target quality evaluation model, wherein the target quality evaluation model is used for evaluating the machine translation quality of the statement pair to be evaluated. The following describes a machine translation quality evaluation method provided by the embodiments of the present disclosure with exemplary embodiments.
Referring to fig. 2, a flowchart of a method for evaluating quality of machine translation according to an exemplary embodiment of the present disclosure is shown. The present embodiment is illustrated by applying the machine translation quality evaluation method to the computer apparatus shown in fig. 1. The machine translation quality evaluation method comprises the following steps:
The computer equipment obtains the pre-configured original parallel corpus and generates a pseudo parallel corpus according to the pre-configured original parallel corpus.
The original parallel corpus comprises a plurality of original parallel statement pairs, and the original parallel statement pairs comprise source monolingual statements and corresponding correct target monolingual statements. The pseudo parallel corpus includes a plurality of pseudo parallel statement pairs including a source monolingual statement and a corresponding pseudo target monolingual statement.
For the same source monolingual statement, the corresponding pseudo target monolingual statement is different from the corresponding correct target monolingual statement. Optionally, the pseudo target monolingual statement is a target monolingual statement obtained by replacing at least one word in the correct target monolingual statement.
The lengths of the pseudo target monolingual sentence and the correct target monolingual sentence are the same, and a plurality of words in the pseudo target monolingual sentence and a plurality of words in the correct target monolingual sentence have one-to-one correspondence in position.
The degree of similarity between the data distribution of the pseudo target monolingual sentence and the data distribution of the real machine translation is larger than a similarity threshold. Wherein, the similarity threshold is set by the computer device by default or is set by the user by self-definition. The embodiments of the present disclosure do not limit this.
Optionally, for the same source monolingual statement, the translation accuracy and/or the translation naturalness of the corresponding pseudo target monolingual statement in the pseudo parallel corpus is lower than that of the corresponding correct target monolingual statement in the original parallel corpus.
In order to improve the quality of machine translation evaluation, after the computer device obtains the pseudo parallel corpus, the original quality assessment model may be trained according to the plurality of pseudo parallel statement pairs included in the pseudo parallel corpus, so as to obtain a target quality assessment model. That is, the original quality assessment model is trained on the pseudo parallel corpus, so that the evaluation performance of the target quality assessment model obtained after training can be improved.
The target quality evaluation model is used for evaluating the machine translation quality of the statement pair to be evaluated. The statement pair to be evaluated includes a source monolingual statement and a target monolingual statement.
The target quality evaluation model is used for converting the input statement pair to be evaluated into an evaluation result, and the evaluation result is used for indicating the translation quality of the target monolingual statement in the statement pair to be evaluated.
The target quality evaluation model is used for representing the correlation between the statement pair to be evaluated and the evaluation result.
The target quality evaluation model is a preset mathematical model and comprises model coefficients between statement pairs to be evaluated and evaluation results.
To sum up, the embodiment of the present disclosure generates, by a computer device, a pseudo parallel corpus according to a preconfigured original parallel corpus, where the pseudo parallel corpus includes a plurality of pseudo parallel statement pairs, and the pseudo parallel statement pairs include a source monolingual statement and a corresponding pseudo target monolingual statement; the original quality evaluation model is trained according to the pseudo parallel corpus to obtain a target quality evaluation model, and because the similarity degree of the data distribution of the pseudo target monolingual sentence and the data distribution of the real machine translation is larger than the similarity threshold, the pseudo target monolingual sentence can be modeled in a mode close to a machine translation quality evaluation task, so that the effect of the subsequent target quality evaluation model on the machine translation quality evaluation of the sentence pair to be evaluated is ensured.
In addition, the related art is limited by the fact that the original parallel corpus is unlabeled: the bilingual sentence pair can be modeled only by predicting words in the bilingual sentence pair in the pre-training stage, and the task defined when the feature extractor is pre-trained is to predict each word in the translated text according to the context of the original text and the translated text. However, the goal of the machine translation quality evaluation task is to predict the quality of each word in the translated text, as well as the quality of the sentence as a whole, not to predict what each word is. Defining different training targets on the parallel corpus and on the translation quality assessment data may result in the feature extractor failing to obtain the features most suitable for the translation quality assessment task.
Similarly, due to the limitation of the data, the related art defines only a word-level prediction task, and when the model trains the feature extractor using the original parallel corpus, a sentence-level representation cannot be obtained. When the method is applied to a translation quality evaluation task, an additional quality evaluation model needs to be introduced so that the word-level high-dimensional representations can be combined into a sentence-level high-dimensional representation. This means that the quality assessment model has no sentence-level knowledge that can be learned from the parallel corpus.
In the embodiment of the disclosure, a translation task is defined on an original parallel corpus, a generator model is used for generating a pseudo target monolingual statement, and a marking parameter corresponding to the pseudo target monolingual statement is automatically generated; and then, providing the pseudo parallel corpus for a discriminator model for training, wherein a training task of the discriminator model is to judge whether each word in the pseudo target monolingual sentence is obtained by machine translation, and in order to obtain sentence-level expression, the training task of the discriminator also comprises the step of predicting the proportion of the number of the words obtained by machine translation in the pseudo target monolingual sentence to the total number of the words, so that the training task is closer to a subsequent translation quality evaluation task.
In an illustrative example, as shown in FIG. 3, the computer device trains a generator model 32 from an original parallel corpus 31, and after training of the generator model 32 is completed, the model parameters of the generator model 32 are fixed. The computer device generates a plurality of pseudo target monolingual statements using the trained generator model 32, resulting in a pseudo parallel corpus 33. The computer device trains a discriminator model 34 according to the pseudo parallel corpus 33 to obtain a target quality evaluation model 35, and calls the target quality evaluation model 35 to evaluate machine translation quality according to a statement pair 36 to be evaluated. The machine translation quality evaluation method provided by the embodiments of the present disclosure is further described below with exemplary embodiments.
Referring to fig. 4, a flowchart of a machine translation quality evaluation method according to another exemplary embodiment of the present disclosure is shown. This embodiment is illustrated by applying the method to the computer device shown in fig. 1. The method includes the following steps.
The original parallel corpus comprises a plurality of original parallel statement pairs, and the original parallel statement pairs comprise source monolingual statements and corresponding correct target monolingual statements. The pseudo parallel corpus includes a plurality of pseudo parallel statement pairs including a source monolingual statement and a corresponding pseudo target monolingual statement. The degree of similarity between the data distribution of the pseudo target monolingual sentence and the data distribution of the real machine translation is larger than a similarity threshold.
For each original parallel statement pair in the original parallel corpus, the computer device generates a pseudo target monolingual statement by calling a generator model, wherein the pseudo target monolingual statement is different from the correct target monolingual statement; and obtains a pseudo parallel statement pair according to the source monolingual statement and the pseudo target monolingual statement. In this way, the pseudo parallel statement pairs corresponding to the original parallel statement pairs, namely the pseudo parallel corpus, are obtained.
Optionally, for each original parallel statement pair in the original parallel corpus, encoding the source monolingual statement by calling a generator model; shielding at least one word in the correct target monolingual sentence; and reconstructing at least one shielded word according to the coded source monolingual statement and the shielded correct target monolingual statement to obtain a pseudo target monolingual statement.
The generator model is a self-encoding structure based on an original text encoder and a masked language model (MLM) encoder. For example, the original text encoder is a Transformer model encoder.
Optionally, the at least one word masked is determined randomly. That is, the computer device randomly determines and masks at least one word in the correct target monolingual sentence.
Optionally, the source monolingual statement is X = {x1, …, xm} and the corresponding correct target monolingual statement is Y = {y1, …, yn}, where m is the length of the source monolingual statement and n is the length of the correct target monolingual statement. The generator model includes an N-layer original text encoder and an N-layer masked language model encoder, where N is a positive integer (for example, N = 6). The computer device inputs the source monolingual statement into the original text encoder to obtain the hidden layer state of the source monolingual statement. The computer device randomly masks at least one word in the correct target monolingual statement; for example, the computer device uses a special [MASK] token to replace the word yt in the correct target monolingual statement, obtaining the masked correct target monolingual statement Ymask = {y1, …, [MASK], yt+1, …, yn}. The computer device inputs the masked correct target monolingual statement Ymask into the masked language model encoder for encoding, and acquires the information of the hidden layer state of the source monolingual statement through a cross-attention mechanism. The real word yt is predicted using the hidden layer representation at position t at the top of the masked language model encoder, and the masked word is reconstructed, thereby obtaining the pseudo target monolingual statement Y′.
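The masking step described above can be sketched in pure Python. This is an illustrative simplification only: the function name, mask ratio, and fixed random seed are assumptions for the example, and the reconstruction of the masked words would in practice be performed by the trained encoder stack, not shown here.

```python
import random

MASK = "[MASK]"

def mask_sentence(target_words, mask_ratio=0.25, seed=0):
    """Randomly replace at least one word in the correct target
    monolingual sentence Y with the special [MASK] token."""
    rng = random.Random(seed)
    n = len(target_words)
    k = max(1, round(n * mask_ratio))       # mask at least one word
    positions = sorted(rng.sample(range(n), k))
    masked = list(target_words)
    for t in positions:
        masked[t] = MASK                    # Y_mask keeps the original length n
    return masked, positions

# Correct target monolingual sentence Y (German, from the example in the text)
Y = ["druckt", "alle", "Objekte", "innerhalb", "des", "druckbaren", "Bereichs"]
Y_mask, positions = mask_sentence(Y)
```

The generator's masked language model encoder would then predict the real word at each masked position t from Y_mask together with the hidden layer state of the source statement X, yielding a pseudo target statement Y′ of the same length as Y.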
Before the computer device generates the pseudo target monolingual statement by calling the generator model, the computer device acquires the generator model after training.
The word reconstruction task is actually a multi-classification task. In the model training stage, the loss gradient of the word reconstruction task is back-propagated to update the model parameters of the generator model. After the training of the generator model is finished, all model parameters of the generator model are fixed. The computer device then generates a plurality of pseudo target monolingual statements using the trained generator model, thereby obtaining the pseudo parallel corpus.
It should be noted that the training process of the generator model can be analogous to the model using process described above, and is not described herein again.
Because the generator model has a self-encoding structure, the pseudo target monolingual statement and the correct target monolingual statement have the same length, and a one-to-one correspondence in position exists between the words in the pseudo target monolingual statement and the words in the correct target monolingual statement. With the help of this characteristic, the computer device can judge whether each word in the pseudo target monolingual statement is obtained through machine translation according to whether the word is equal to the word at the same position in the correct target monolingual statement, and can further obtain the proportion of the number of machine-translated words in the pseudo target monolingual statement to the total number of words. The data distribution of the obtained pseudo target monolingual statement is thus closer to that of a real machine translation, and meanwhile the marking parameters can be generated automatically.
Optionally, after the computer device reconstructs the masked at least one word according to the encoded source monolingual statement and the masked correct target monolingual statement to obtain the pseudo target monolingual statement, the method further includes: and generating marking parameters corresponding to the pseudo target monolingual sentences according to the pseudo target monolingual sentences and the correct target monolingual sentences. The marking parameters are used for indicating whether each word in the pseudo target monolingual sentence is obtained through machine translation or not, and/or the number of the words obtained through machine translation in the pseudo target monolingual sentence accounts for the proportion of the total number of the words, and the total number of the words is the total number of the words in the pseudo target monolingual sentence.
Optionally, the marking parameter comprises a first marking parameter and/or a second marking parameter. The first marking parameter is used for indicating whether each word in the pseudo target monolingual sentence is translated by a machine or not, and the second marking parameter comprises the proportion of the number of the words translated by the machine in the pseudo target monolingual sentence to the total number of the words.
Illustratively, the first marking parameter includes a numerical value corresponding to each word in the pseudo target monolingual statement. When the numerical value corresponding to a word is the first numerical value, it indicates that the word is obtained by machine translation; when the numerical value corresponding to a word is the second numerical value, it indicates that the word is not obtained by machine translation. For example, the first numerical value is 0 and the second numerical value is 1, which is not limited in the embodiments of the present disclosure.
Optionally, when the word in the pseudo target monolingual sentence is the same as the word in the correct target monolingual sentence at the same position, determining that the word in the pseudo target monolingual sentence is not translated by a machine; and when the word in the pseudo target monolingual sentence is different from the word at the same position in the correct target monolingual sentence, determining that the word in the pseudo target monolingual sentence is obtained by machine translation.
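Because the pseudo target statement and the correct target statement align position by position, the marking parameters can be derived by direct comparison. A minimal sketch under the convention above (0 = machine-translated, 1 = not machine-translated); the function name and the toy four-word sentences are hypothetical:

```python
def marking_parameters(pseudo_words, correct_words):
    """Derive the first marking parameter (per-word 0/1 flags) and the
    second marking parameter (proportion of machine-translated words)
    by position-wise comparison of the two equal-length sentences."""
    assert len(pseudo_words) == len(correct_words)
    # 1: word unchanged (not machine-translated); 0: word replaced (machine-translated)
    first = [1 if p == c else 0 for p, c in zip(pseudo_words, correct_words)]
    second = first.count(0) / len(first)  # share of machine-translated words
    return first, second

O_prime, q_prime = marking_parameters(["a", "x", "c", "d"], ["a", "b", "c", "d"])
```

Here one of four words differs, so the first marking parameter is [1, 0, 1, 1] and the second is 0.25, mirroring the q′ = 0.25 example in the text.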
In an illustrative example, taking the generation of a pseudo target monolingual sentence from an original parallel sentence pair as an example, the source language is English and the target language is German, and the training process of the generator model is shown in FIG. 5. The source monolingual statement is X = {prints, all, objects, within, the, printable, area, of, the, paper}, and the corresponding correct target monolingual statement is Y = {druckt, alle, Objekte, innerhalb, des, druckbaren, Bereichs}. After training of the generator model is finished, the model parameters of the generator model are fixed. As for use of the generator model, as shown in FIG. 6, the computer device generates a pseudo target monolingual statement Y′ using the trained generator model and automatically generates the marking parameters, where the marking parameters include a first marking parameter O′ = 11011101 and a second marking parameter q′ = 0.25.
Step 402, for each pseudo parallel statement pair in the pseudo parallel corpus, calling a discriminator model to obtain a training result, wherein the discriminator model is a model based on an original text encoder and a translated text encoder.
And the computer equipment takes the pseudo parallel corpus as a training sample set of the target quality evaluation model, and defines a discrimination task on the pseudo parallel corpus.
And for each pseudo parallel statement pair in the pseudo parallel linguistic data, calling a discriminator model by the computer equipment to obtain a training result.
The discriminator model is based on an original text encoder and a translated text encoder. Optionally, the discriminator model comprises a transformer model.
And for each pseudo parallel statement pair in the pseudo parallel corpus, inputting the source monolingual statement and the corresponding pseudo target monolingual statement into a discriminator model by the computer equipment, and outputting to obtain a training result.
For each pseudo parallel statement pair in the pseudo parallel corpus, the computer device calls the discriminator model to encode the source monolingual statement and the pseudo target monolingual statement, and performs a word-level binary classification task after encoding to obtain the training result.
Optionally, the training result includes a predicted flag parameter indicating whether each word in the predicted pseudo-target monolingual sentence is translated by a machine, and/or the number of words in the predicted pseudo-target monolingual sentence translated by the machine is a proportion of the total number of words. I.e. the training result comprises the predicted first labeling parameter and/or the second labeling parameter.
In an illustrative example, after the computer device generates the pseudo target monolingual statement Y′ based on the example shown in FIG. 6, the training process of the discriminator model is as shown in FIG. 7. The computer device trains the discriminator model based on the source monolingual statement X = {prints, all, objects, within, the, printable, area, of, the, paper} and the pseudo target monolingual statement Y′ to obtain a training result, where the training result includes a first marking parameter O′ = 11011101 and a second marking parameter q′ = 0.25.
Step 403, comparing the training result with the marking parameters corresponding to the pseudo parallel statement pair to obtain a calculation loss, wherein the calculation loss is used for indicating an error between the training result and the marking parameters.
And for each pseudo parallel statement pair in the pseudo parallel corpus, the computer equipment compares the training result with the marking parameters corresponding to the pseudo parallel statement pair to obtain the calculation loss. Alternatively, the computational loss is represented by cross entropy.
And step 404, training to obtain a target quality evaluation model according to the respective corresponding calculation loss of the plurality of pseudo parallel statements.
Optionally, the computer device obtains the target quality evaluation model by training with an error back propagation algorithm according to the respective computation loss of the plurality of pseudo parallel statements.
Optionally, the computer device determines a gradient direction of the target quality assessment model according to the computation loss through a back propagation algorithm, and updates the model parameters in the target quality assessment model layer by layer from an output layer of the target quality assessment model.
Optionally, the discriminator model includes an N-layer original text encoder and an N-layer translated text encoder, where N is a positive integer. The computer device converts the source monolingual statement X into a hidden layer state by calling the original text encoder; the computer device performs encoding according to the pseudo target monolingual statement and the hidden layer state of the source monolingual statement X by calling the translated text encoder; and a word-level binary classification task (namely, predicting the first marking parameter) and a sentence-level regression task (namely, predicting the second marking parameter) are performed at the top layer of the translated text encoder according to the hidden layer state of the pseudo target monolingual statement. The computer device uses the losses of the binary classification task and the regression task to perform gradient back-propagation and update the model parameters of the discriminator model.
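The combined pre-training loss can be sketched as a word-level binary cross-entropy (for the first marking parameter) plus a sentence-level regression term (for the second marking parameter). This is a hedged illustration: the weighting factor alpha and the use of squared error for the regression task are assumptions, as the text does not fix them.

```python
import math

def discriminator_loss(word_probs, word_labels, ratio_pred, ratio_label, alpha=1.0):
    """word_probs: predicted probability that each word is NOT machine-translated;
    word_labels: first marking parameter (1 = not machine-translated, 0 = machine-translated);
    ratio_pred / ratio_label: predicted and true second marking parameter."""
    eps = 1e-9  # numerical floor to keep log() finite
    # word-level binary classification loss (averaged cross-entropy)
    bce = -sum(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
               for p, y in zip(word_probs, word_labels)) / len(word_labels)
    # sentence-level regression loss on the machine-translated-word proportion
    mse = (ratio_pred - ratio_label) ** 2
    return bce + alpha * mse
```

With perfect predictions the loss approaches zero; during training its gradient is back-propagated through the translated text encoder and the original text encoder.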
The discriminator model is then fine-tuned on real translation quality assessment data, and the training mode of the discriminator model during fine-tuning is consistent with that during pre-training. Optionally, based on the trained target quality assessment model, the method further includes the following steps, as shown in fig. 8:
Optionally, when the computer device receives a machine translation quality evaluation instruction, an input statement pair to be evaluated is acquired, and the statement pair to be evaluated includes a source monolingual statement and a target monolingual statement.
And step 802, calling the trained target quality evaluation model according to the statement pair to be evaluated to obtain an evaluation result.
Optionally, the computer device calls the trained target quality evaluation model according to the statement pair to be evaluated to obtain an evaluation result; and displaying the evaluation result by the computer equipment.
The evaluation result is used for indicating whether each word in the target monolingual statement of the statement pair to be evaluated is obtained by machine translation, and/or the proportion of the number of machine-translated words in the target monolingual statement to the total number of words, where the total number of words is the total number of words in the target monolingual statement.
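At inference time, an evaluation result of this shape could be assembled from the model's per-word outputs, for example by thresholding. A sketch under the same assumed convention (0 = machine-translated); the threshold value and function name are illustrative, not specified by the text:

```python
def evaluation_result(word_probs, threshold=0.5):
    """Map per-word probabilities (probability that a word is NOT
    machine-translated) to per-word flags plus the sentence-level ratio."""
    flags = [1 if p >= threshold else 0 for p in word_probs]  # 0 = machine-translated
    ratio = flags.count(0) / len(flags)  # proportion of machine-translated words
    return flags, ratio

flags, ratio = evaluation_result([0.9, 0.2, 0.8, 0.95])
```

In this toy call one of four words falls below the threshold, so it is flagged as machine-translated and the sentence-level ratio is 0.25.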
In summary, the embodiments of the present disclosure provide a method for evaluating machine translation quality, in one aspect, a generator is defined on a large-scale original parallel corpus, and the original parallel corpus can be converted into a pseudo parallel corpus whose data distribution is closer to that of a real machine translation. In another aspect, it is proposed to use pseudo target monolingual sentences with one-to-one correspondence in position instead of machine-translated output translations that are traditionally generated from left to right, so that the pseudo target monolingual sentences can be automatically marked up with position information. In another aspect, the discriminant task is proposed to replace the generation task as a pre-training task, and the characteristics of the pseudo-parallel corpus are fully utilized, so that the training task of the discriminant model is closer to the target during pre-training and fine-tuning. In another aspect, the labeling parameter is further used to indicate a ratio of a number of machine-translated words to a total number of words in the pseudo-target monolingual sentence, such that the discriminator model can obtain a sentence-level representation during the pre-training phase.
At the application level, in one aspect, the machine translation quality evaluation method provided by the embodiments of the present disclosure is applicable to various language pairs, including but not limited to English-Russian, English-Chinese, and the like. In another aspect, the large-scale original parallel corpus and the real translation quality assessment corpus can be utilized simultaneously, and the originally separate two-stage task is converted into an integral task by introducing the pseudo parallel corpus. In another aspect, the target end naturally has bidirectional capability, no additional mark needs to be introduced, the parameter scale is small, and the model training efficiency is high. In another aspect, based on the structure of the transformer model, the knowledge of other pre-trained language models that are also based on the transformer model can be easily migrated in. In another aspect, the model is simple and direct and highly intelligible, and the machine translation quality evaluation effect of the target quality evaluation model is good.
The following are embodiments of the apparatus of the embodiments of the present disclosure, and for portions of the embodiments of the apparatus not described in detail, reference may be made to technical details disclosed in the above-mentioned method embodiments.
Referring to fig. 9, a schematic structural diagram of a machine translation quality evaluation apparatus according to an exemplary embodiment of the present disclosure is shown. The machine translation quality assessment apparatus may be implemented as all or part of a computer device by software, hardware, or a combination of both. The device includes: a generation module 910 and a training module 920.
A generating module 910, configured to generate a pseudo parallel corpus according to a preconfigured original parallel corpus, where the pseudo parallel corpus includes a plurality of pseudo parallel statement pairs, each pseudo parallel statement pair includes a source monolingual statement and a corresponding pseudo target monolingual statement, and a similarity degree between a data distribution of the pseudo target monolingual statement and a data distribution of a real machine translation is greater than a similarity threshold;
the training module 920 is configured to train the original quality assessment model according to the pseudo parallel corpus to obtain a target quality assessment model, where the target quality assessment model is used to perform machine translation quality assessment on a sentence pair to be assessed.
In one possible implementation, the original parallel corpus includes a plurality of original parallel sentence pairs, the original parallel sentence pairs including a source monolingual sentence and a corresponding correct target monolingual sentence; the generating module 910 is further configured to:
for each original parallel statement pair in the original parallel corpus, generating a pseudo target monolingual statement by calling a generator model, wherein the pseudo target monolingual statement is different from a correct target monolingual statement;
and obtaining a pseudo parallel statement pair according to the source monolingual statement and the pseudo target monolingual statement.
In another possible implementation manner, the generating module 910 is further configured to:
for each original parallel statement pair in the original parallel corpus, encoding a source monolingual statement by calling a generator model;
shielding at least one word in the correct target monolingual sentence;
and reconstructing at least one shielded word according to the coded source monolingual statement and the shielded correct target monolingual statement to obtain a pseudo target monolingual statement.
In another possible implementation manner, the generator model is a self-encoding structure based on the original text encoder and the masked language model encoder, and a one-to-one correspondence in position exists between the words in the pseudo target monolingual statement and the words in the correct target monolingual statement.
In another possible implementation manner, the generating module 910 is further configured to:
generating marking parameters corresponding to the pseudo target monolingual sentences according to the pseudo target monolingual sentences and the correct target monolingual sentences;
the marking parameters are used for indicating whether each word in the pseudo target monolingual sentence is obtained through machine translation or not, and/or the number of the words obtained through machine translation in the pseudo target monolingual sentence accounts for the proportion of the total number of the words, and the total number of the words is the total number of the words in the pseudo target monolingual sentence.
In another possible implementation manner, the training module 920 is further configured to:
calling a discriminator model to obtain a training result for each pseudo parallel statement pair in the pseudo parallel corpus, wherein the discriminator model is a model based on an original text encoder and a translated text encoder;
comparing the training result with the corresponding marking parameters of the pseudo parallel statement to obtain a calculation loss, wherein the calculation loss is used for indicating the error between the training result and the marking parameters;
and training to obtain a target quality evaluation model according to the respective corresponding calculation loss of the plurality of pseudo parallel statements.
In another possible implementation manner, the apparatus further includes an acquisition module and a calling module;
the system comprises an acquisition module, a judgment module and a comparison module, wherein the acquisition module is used for acquiring a statement pair to be evaluated, and the statement pair to be evaluated comprises a source monolingual statement and a target monolingual statement;
and the calling module is used for calling the trained target quality evaluation model according to the statement pair to be evaluated to obtain an evaluation result.
It should be noted that, when the apparatus provided in the foregoing embodiment implements its functions, only the division of the above functional modules is illustrated as an example. In practical applications, the above functions may be allocated to different functional modules according to actual needs; that is, the internal structure of the device is divided into different functional modules to complete all or part of the functions described above.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
An embodiment of the present disclosure further provides a computer device, where the computer device includes: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to implement the method in each of the above method embodiments.
The disclosed embodiments also provide a non-transitory computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the methods in the various method embodiments described above.
Fig. 10 is a block diagram illustrating an apparatus 1000 for performing a method for machine translation quality assessment in accordance with an exemplary embodiment. For example, the apparatus 1000 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 10, the apparatus 1000 may include one or more of the following components: processing component 1002, memory 1004, power component 1006, multimedia component 1008, audio component 1010, input/output (I/O) interface 1012, sensor component 1014, and communications component 1016.
The processing component 1002 generally controls the overall operation of the device 1000, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 1002 may include one or more processors 1020 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 1002 may include one or more modules that facilitate interaction between the processing component 1002 and other components. For example, the processing component 1002 may include a multimedia module to facilitate interaction between the multimedia component 1008 and the processing component 1002.
The memory 1004 is configured to store various types of data to support operations at the apparatus 1000. Examples of such data include instructions for any application or method operating on device 1000, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 1004 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 1006 provides power to the various components of the device 1000. The power components 1006 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 1000.
The multimedia component 1008 includes a screen that provides an output interface between the device 1000 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1008 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 1000 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 1010 is configured to output and/or input audio signals. For example, audio component 1010 includes a Microphone (MIC) configured to receive external audio signals when apparatus 1000 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 1004 or transmitted via the communication component 1016. In some embodiments, audio component 1010 also includes a speaker for outputting audio signals.
I/O interface 1012 provides an interface between processing component 1002 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 1014 includes one or more sensors for providing various aspects of status assessment for the device 1000. For example, sensor assembly 1014 may detect an open/closed state of device 1000, the relative positioning of components, such as a display and keypad of device 1000, the change in position of device 1000 or a component of device 1000, the presence or absence of user contact with device 1000, the orientation or acceleration/deceleration of device 1000, and the change in temperature of device 1000. The sensor assembly 1014 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 1014 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1014 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1016 is configured to facilitate communications between the apparatus 1000 and other devices in a wired or wireless manner. The device 1000 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 1016 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communications component 1016 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 1000 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1004, is also provided, including computer program instructions executable by the processor 1020 of the apparatus 1000 to perform the above-described methods.
Fig. 11 is a block diagram illustrating an apparatus 1100 for performing a method for machine translation quality assessment in accordance with another exemplary embodiment. For example, the apparatus 1100 may be provided as a server. Referring to fig. 11, the apparatus 1100 includes a processing component 1122 that further includes one or more processors and memory resources, represented by memory 1132, for storing instructions, such as application programs, executable by the processing component 1122. The application programs stored in memory 1132 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1122 is configured to execute instructions to perform the above-described method.
The apparatus 1100 may also include a power component 1126 configured to perform power management of the apparatus 1100, a wired or wireless network interface 1150 configured to connect the apparatus 1100 to a network, and an input/output (I/O) interface 1158. The apparatus 1100 may operate based on an operating system stored in the memory 1132, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1132, is also provided that includes computer program instructions executable by the processing component 1122 of the device 1100 to perform the methods described above.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA) may execute the computer-readable program instructions, personalizing the electronic circuitry with state information of the instructions in order to implement aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terms used herein were chosen in order to best explain the principles of the embodiments, the practical application, or technical improvements to the techniques in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (10)
1. A method for evaluating machine translation quality, for use in a computer device, the method comprising:
generating a pseudo parallel corpus according to a pre-configured original parallel corpus, wherein the pseudo parallel corpus comprises a plurality of pseudo parallel statement pairs, each pseudo parallel statement pair comprises a source monolingual statement and a corresponding pseudo target monolingual statement, and the degree of similarity between the data distribution of the pseudo target monolingual statement and the data distribution of a real machine translation is greater than a similarity threshold;
and training an original quality evaluation model according to the pseudo parallel corpus to obtain a target quality evaluation model, wherein the target quality evaluation model is used for evaluating the machine translation quality of the statement to be evaluated.
2. The method of claim 1, wherein the original parallel corpus comprises a plurality of original parallel sentence pairs, the original parallel sentence pairs comprising a source monolingual sentence and a corresponding correct target monolingual sentence; the generating of the pseudo parallel corpus according to the pre-configured original parallel corpus includes:
for each original parallel statement pair in the original parallel corpus, generating the pseudo target monolingual statement by calling a generator model, wherein the pseudo target monolingual statement is different from the correct target monolingual statement;
and obtaining the pseudo parallel statement pair according to the source monolingual statement and the pseudo target monolingual statement.
3. The method of claim 2, wherein said generating said pseudo target monolingual statement by invoking a generator model for each said pair of original parallel statements in said original parallel corpus comprises:
for each original parallel statement pair in the original parallel corpus, encoding the source monolingual statement by calling the generator model;
masking at least one word in the correct target monolingual statement;
and reconstructing the masked at least one word according to the encoded source monolingual statement and the masked correct target monolingual statement to obtain the pseudo target monolingual statement.
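The mask-and-reconstruct step described above can be sketched as follows. This is a self-contained illustration with a dummy reconstruction function and hypothetical names; in the patent the masked positions are filled by the generator model from the encoded source statement, which a toy cannot reproduce.

```python
import random

def make_pseudo_target(correct_target, reconstruct, mask_rate=0.3, seed=0):
    # Mask at least one word of the correct target statement, then fill
    # each masked position with a reconstructed word, so that the pseudo
    # target keeps a one-to-one positional correspondence with the
    # correct target statement.
    rng = random.Random(seed)
    words = correct_target.split()
    masked = [i for i in range(len(words)) if rng.random() < mask_rate]
    if not masked:  # claim requires at least one masked word
        masked = [rng.randrange(len(words))]
    pseudo = list(words)
    for i in masked:
        pseudo[i] = reconstruct(words, i)  # real system: generator output
    return pseudo, masked

# Dummy "reconstruction": marks the filled-in word so it is visible.
pseudo, masked = make_pseudo_target(
    "the cat sat on the mat", lambda ws, i: "<" + ws[i] + ">"
)
```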
4. The method of claim 3, wherein the generator model is a self-coding structure based on an original text encoder and a masked language model encoder, and wherein a plurality of words in the pseudo target monolingual sentence have a one-to-one correspondence in position with a plurality of words in the correct target monolingual sentence.
5. The method of claim 4, wherein after reconstructing the masked at least one word from the encoded source monolingual sentence and the masked correct target monolingual sentence to obtain the pseudo target monolingual sentence, the method further comprises:
generating a marking parameter corresponding to the pseudo target monolingual statement according to the pseudo target monolingual statement and the correct target monolingual statement;
the marking parameter is used for indicating whether each word in the pseudo target monolingual statement is obtained through machine translation, and/or the proportion of the number of machine-translated words to the total number of words in the pseudo target monolingual statement.
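Given the one-to-one positional correspondence of claim 4, generating the marking parameter reduces to a word-by-word comparison between the pseudo and correct target statements. A minimal sketch, with hypothetical names:

```python
def marking_parameters(pseudo, correct):
    # One tag per word: 1 where the pseudo target word differs from the
    # correct target word (treated as machine-translated), else 0;
    # plus the proportion of machine-translated words.
    assert len(pseudo) == len(correct)  # positions correspond one-to-one
    tags = [0 if p == c else 1 for p, c in zip(pseudo, correct)]
    proportion = sum(tags) / len(tags)
    return tags, proportion

tags, proportion = marking_parameters(
    ["the", "dog", "sat"],  # pseudo target monolingual statement
    ["the", "cat", "sat"],  # correct target monolingual statement
)
# tags == [0, 1, 0]: one of three words is treated as machine-translated
```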
6. The method according to claim 5, wherein the training of the original quality assessment model according to the pseudo-parallel corpus to obtain the target quality assessment model comprises:
calling a discriminator model to obtain a training result for each pseudo parallel statement pair in the pseudo parallel corpus, wherein the discriminator model is a model based on an original text encoder and a translated text encoder;
comparing the training result with the marking parameter corresponding to the pseudo parallel statement pair to obtain a calculation loss, wherein the calculation loss is used for indicating an error between the training result and the marking parameter;
and training to obtain the target quality evaluation model according to the calculation loss corresponding to each pseudo parallel statement pair.
7. The method according to any one of claims 1 to 6, wherein after the training of the original quality assessment model according to the pseudo-parallel corpus to obtain the target quality assessment model, the method further comprises:
obtaining a statement pair to be evaluated, wherein the statement pair to be evaluated comprises a source monolingual statement and a target monolingual statement;
and calling the trained target quality evaluation model to obtain an evaluation result according to the statement pair to be evaluated.
8. A machine translation quality assessment apparatus for use in a computer device, the apparatus comprising:
the generating module is used for generating a pseudo parallel corpus according to a pre-configured original parallel corpus, wherein the pseudo parallel corpus comprises a plurality of pseudo parallel statement pairs, each pseudo parallel statement pair comprises a source monolingual statement and a corresponding pseudo target monolingual statement, and the degree of similarity between the data distribution of the pseudo target monolingual statement and the data distribution of a real machine translation is greater than a similarity threshold;
and the training module is used for training an original quality evaluation model according to the pseudo parallel corpus to obtain a target quality evaluation model, and the target quality evaluation model is used for performing machine translation quality evaluation on the statement to be evaluated.
9. A computer device, characterized in that the computer device comprises: a processor; a memory for storing processor-executable instructions;
wherein the processor is configured to:
generating a pseudo parallel corpus according to a pre-configured original parallel corpus, wherein the pseudo parallel corpus comprises a plurality of pseudo parallel statement pairs, each pseudo parallel statement pair comprises a source monolingual statement and a corresponding pseudo target monolingual statement, and the degree of similarity between the data distribution of the pseudo target monolingual statement and the data distribution of a real machine translation is greater than a similarity threshold;
and training an original quality evaluation model according to the pseudo parallel corpus to obtain a target quality evaluation model, wherein the target quality evaluation model is used for evaluating the machine translation quality of the statement to be evaluated.
10. A non-transitory computer readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the method of any of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010663296.0A CN113919372A (en) | 2020-07-10 | 2020-07-10 | Machine translation quality evaluation method, device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010663296.0A CN113919372A (en) | 2020-07-10 | 2020-07-10 | Machine translation quality evaluation method, device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113919372A (en) | 2022-01-11 |
Family
ID=79232328
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010663296.0A Pending CN113919372A (en) | 2020-07-10 | 2020-07-10 | Machine translation quality evaluation method, device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113919372A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114757212A (en) * | 2022-03-30 | 2022-07-15 | 北京金山数字娱乐科技有限公司 | Translation model training method and device, electronic equipment and medium |
CN114970571A (en) * | 2022-06-23 | 2022-08-30 | 昆明理工大学 | Hantai pseudo parallel sentence pair generation method based on double discriminators |
2020-07-10: Application CN202010663296.0A filed (publication CN113919372A, status pending).
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111524521B (en) | Voiceprint extraction model training method, voiceprint recognition method, voiceprint extraction model training device and voiceprint recognition device | |
CN112347795A (en) | Machine translation quality evaluation method, device, equipment and medium | |
CN107564526B (en) | Processing method, apparatus and machine-readable medium | |
CN113761888A (en) | Text translation method and device, computer equipment and storage medium | |
US20240078385A1 (en) | Method and apparatus for generating text | |
CN113822076A (en) | Text generation method and device, computer equipment and storage medium | |
CN114065778A (en) | Chapter-level translation method, translation model training method and device | |
CN113673261A (en) | Data generation method and device and readable storage medium | |
CN116127062A (en) | Training method of pre-training language model, text emotion classification method and device | |
CN113919372A (en) | Machine translation quality evaluation method, device and storage medium | |
CN111414772B (en) | Machine translation method, device and medium | |
CN112735396A (en) | Speech recognition error correction method, device and storage medium | |
CN112199963A (en) | Text processing method and device and text processing device | |
CN110349577B (en) | Man-machine interaction method and device, storage medium and electronic equipment | |
CN112035651B (en) | Sentence completion method, sentence completion device and computer readable storage medium | |
CN116913278B (en) | Voice processing method, device, equipment and storage medium | |
CN108733657B (en) | Attention parameter correction method and device in neural machine translation and electronic equipment | |
CN113535969B (en) | Corpus expansion method, corpus expansion device, computer equipment and storage medium | |
CN111400443B (en) | Information processing method, device and storage medium | |
CN116108157B (en) | Method for training text generation model, text generation method and device | |
CN113239707A (en) | Text translation method, text translation device and storage medium | |
CN109460458B (en) | Prediction method and device for query rewriting intention | |
CN111506767A (en) | Song word filling processing method and device, electronic equipment and storage medium | |
CN109979435B (en) | Data processing method and device for data processing | |
CN112989819B (en) | Chinese text word segmentation method, device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||