CN110276082B - Translation processing method and device based on dynamic window - Google Patents

Translation processing method and device based on dynamic window

Info

Publication number
CN110276082B
CN110276082B (application CN201910490402.7A)
Authority
CN
China
Prior art keywords
target
word
window
source end
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910490402.7A
Other languages
Chinese (zh)
Other versions
CN110276082A (en)
Inventor
熊皓
张睿卿
张传强
何中军
吴华
李芝
王海峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201910490402.7A priority Critical patent/CN110276082B/en
Publication of CN110276082A publication Critical patent/CN110276082A/en
Application granted granted Critical
Publication of CN110276082B publication Critical patent/CN110276082B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/0485: Scrolling or panning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00: Handling natural language data
    • G06F 40/40: Processing or translation of natural language
    • G06F 40/58: Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation

Abstract

The invention provides a translation processing method and device based on a dynamic window. The method comprises the following steps: controlling a target window to slide over the input source end words according to preset window sliding parameters; performing similarity calculation between the translated target word and the target source end words in the current range of the target window; and performing speech synthesis according to the similarity calculation result to output a target translation. The translation delay of simultaneous interpretation is thereby reduced and translation efficiency is improved.

Description

Translation processing method and device based on dynamic window
Technical Field
The present invention relates to the field of speech processing technologies, and in particular, to a method and apparatus for translation processing based on a dynamic window.
Background
In a typical simultaneous interpretation pipeline, the speech signal to be translated is first recognized, and sentence boundary detection is then performed to segment the recognized text into sentences. Each segment is processed by a punctuation annotation model to form a complete, translatable sentence, and the target translation is generated by a machine translation engine.
However, many processing links lie between the presenter's speech and the final synthesized output, so the overall delay is large. For example, in a simultaneous interpretation scenario it is often necessary to wait for the presenter to pause before a sentence can be segmented; only after several seconds, or even more than ten seconds, of content has been received can a complete sentence be identified and a translation generated.
Disclosure of Invention
Therefore, a first object of the present invention is to provide a translation processing method based on a dynamic window, which reduces translation delay of simultaneous interpretation and improves translation efficiency.
A second object of the present invention is to provide a translation processing device based on a dynamic window.
A third object of the invention is to propose a computer device.
A fourth object of the present invention is to propose a computer readable storage medium.
An embodiment of a first aspect of the present invention provides a translation processing method based on a dynamic window, including the following steps: controlling a target window to slide over the input source end words according to preset window sliding parameters; performing similarity calculation between the translated target word and the target source end words in the current range of the target window; and performing speech synthesis according to the similarity calculation result to output a target translation.
In addition, the translation processing method based on the dynamic window provided by the embodiment of the invention has the following additional technical characteristics:
Optionally, before the controlling of the target window to slide over the input source end words according to the preset window sliding parameters, the method further includes: judging whether the length of the currently input source end words meets the initial length of the target window; and if it is determined that the length of the currently input source end words meets the initial length of the target window, translating the currently input source end words to generate target words.
Optionally, the method further comprises: calculating the alignment relation of each word according to an alignment method to obtain the sample source end words and sentences corresponding to each sample target word; and training the initial length of the target window according to the sample source end words and sentences corresponding to the sample target words.
Optionally, the controlling of the target window to slide over the input source end words according to the preset window sliding parameters includes: acquiring the current starting position and ending position of the target window; calculating state values for the starting position and the ending position according to a preset function and a preset threshold; and controlling the target window to slide over the input source end words according to the state values of the starting position and the ending position.
Optionally, before the controlling of the target window to slide over the input source end words according to the preset window sliding parameters, the method further includes: acquiring a pre-adjustment sequence according to the current starting position of the target window and the current position of the input source end word; and if it is determined, according to a pre-trained order-adjusting function, that the word semantic similarity between the current position of the input source end word and the corresponding words in the target window meets a preset condition, adjusting the word positions of the pre-adjustment sequence.
An embodiment of a second aspect of the present invention provides a translation processing device based on a dynamic window, including: a sliding module, configured to control the target window to slide over the input source end words according to preset window sliding parameters; a calculation module, configured to perform similarity calculation between the translated target word and the target source end words in the current range of the target window; and a synthesis module, configured to perform speech synthesis according to the similarity calculation result to output the target translation.
In addition, the translation processing device based on the dynamic window in the embodiment of the invention also has the following additional technical characteristics:
Optionally, the device further comprises: a judging module, configured to judge whether the length of the currently input source end words meets the initial length of the target window; and a generating module, configured to translate the currently input source end words to generate target words when the length of the currently input source end words meets the initial length of the target window.
Optionally, the sliding module includes: an acquisition unit, configured to acquire the current starting position and ending position of the target window; a calculating unit, configured to calculate state values for the starting position and the ending position according to a preset function and a preset threshold; and a control unit, configured to control the target window to slide over the input source end words according to the state values of the starting position and the ending position.
Optionally, the device further comprises: an acquisition module, configured to acquire a pre-adjustment sequence according to the current starting position of the target window and the current position of the input source end word; and an adjusting module, configured to adjust the word positions of the pre-adjustment sequence when it is determined, according to the pre-trained order-adjusting function, that the word semantic similarity between the current position of the input source end word and the corresponding words in the target window meets the preset condition.
An embodiment of a third aspect of the present invention provides a computer device, including a processor and a memory; wherein the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory, for implementing the dynamic window-based translation processing method as described in the embodiment of the first aspect.
An embodiment of a fourth aspect of the present invention proposes a computer readable storage medium, on which a computer program is stored, which program, when being executed by a processor, implements a dynamic window based translation processing method as described in the embodiment of the first aspect.
The technical solution provided by the embodiments of the invention has at least the following advantages:
the size of the attention window can be dynamically adjusted according to the presenter's content, translations are generated in real time, and the simultaneous interpretation delay is reduced.
Drawings
The foregoing and/or additional aspects and advantages of the invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a flow diagram of simultaneous interpretation in the prior art;
FIG. 2 is a flow diagram of a dynamic window based translation processing method according to one embodiment of the present invention;
FIG. 3 is a schematic diagram of attention computation of an attention mechanism according to one embodiment of the invention;
FIG. 4 is a schematic diagram of attention computation of an attention mechanism according to another embodiment of the invention;
FIG. 5 is a schematic diagram of dynamic changes to a dynamic window according to one embodiment of the invention;
FIG. 6 is a schematic diagram of a dynamic window based translation processing apparatus according to one embodiment of the present invention;
FIG. 7 is a schematic diagram of a dynamic window based translation processing apparatus according to another embodiment of the present invention;
FIG. 8 is a schematic diagram of a dynamic window based translation processing apparatus according to yet another embodiment of the present invention; and
FIG. 9 is a schematic structural diagram of a translation processing device based on a dynamic window according to still another embodiment of the present invention.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative and intended to explain the present invention and should not be construed as limiting the invention.
In view of the above background, in the conventional simultaneous interpretation process shown in FIG. 1, the speech must first be recognized; sentences are then segmented automatically based on pauses in the recognized speech; the positions of punctuation marks such as periods are identified after segmentation and punctuation is added at those positions; the punctuated text is machine translated; and speech is finally synthesized from the translated text. Moreover, speech recognition errors reduce the accuracy of the sentence segmentation module, and it may take tens of seconds before a segment is judged to be a complete sentence, so the translation delay is large.
In order to solve the technical problems, the invention provides a simultaneous interpretation device utilizing a dynamic window attention mechanism, which can dynamically slide an attention window to generate a target translation in real time without waiting for a presenter to speak a complete sentence.
Specifically, fig. 2 is a flowchart of a method for dynamic window-based translation processing according to an embodiment of the present invention, as shown in fig. 2, the method includes:
Step 101: control the target window to slide over the input source end words according to preset window sliding parameters.
The source words are the received words to be translated.
It can be appreciated that the embodiments of the present application adopt an end-to-end neural network translation model, in which the attention module plays a very important role in improving translation quality. In practical application, as shown in FIG. 3, the attention mechanism can recognize the similarity between the input source end sentence and the translated sentence and, based on that similarity, determine the important words to focus on when generating the target word. Referring to FIG. 3, gray values are used to indicate similarity: the higher the gray value, the more similar the word, the more likely it is the word that needs attention, and the larger the weight it should be given. By adjusting the attention paid to different words when generating the target word, the quality of the subsequently obtained target translation can be improved.
Of course, in the attention mechanism shown in FIG. 3, the whole sentence is still the basis of attention processing: the model must obtain all words of the source end sentence (a complete sentence) before the attention calculation. Obtaining the complete sentence is costly and easily produces high-latency translations. Therefore, as shown in FIG. 4, a slidable target window is introduced, and attention is calculated each time only over the source end words within the target window. Referring to FIG. 4, before each piece of the translation is generated, only the attention information of the source end words within the window needs to be calculated; the sentence boundaries of the source sentence do not need to be recognized, and the attention calculation does not need to treat all recognized content as one long sentence. Only the attention information within the current window needs to be calculated.
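As an illustration of the windowed attention described above, the following minimal Python sketch (not part of the original disclosure; the dot-product scoring and all variable names are assumptions) computes attention weights only over the source end words that currently fall inside the target window:

```python
import numpy as np

def softmax(x):
    x = x - np.max(x)              # numerical stability
    e = np.exp(x)
    return e / e.sum()

def windowed_attention(decoder_state, source_states, start, end):
    """Attend only to source positions in [start, end), not the whole sentence.

    decoder_state: (d,) vector for the target word being generated.
    source_states: (n, d) encoder states for all source end words received so far.
    start, end:    current boundaries of the sliding target window.
    """
    window = source_states[start:end]      # restrict attention to the window
    scores = window @ decoder_state        # dot-product similarity per source word
    weights = softmax(scores)              # attention distribution over the window
    context = weights @ window             # weighted context vector for decoding
    return weights, context
```

Because only end - start source positions are scored, a target word can be produced before the source sentence is complete, which is what removes the need to wait for sentence boundaries.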
It should be noted that the target window in the embodiments of the present invention is slidable, and its initial length may be obtained by training on experimental data.
As a possible implementation, a small data set can be annotated manually to indicate which source end words and sentences need to be examined when each target word is generated. During training, the alignment relation of each word is calculated based on an alignment method to obtain the sample source end words and sentences corresponding to each sample target word, and the initial length of the target window is trained from these pairs. The training process can use conventional training methods to determine which source end words and sentences each target word depends on, perform convergence training, and thereby determine the initial length.
In this example, it is determined whether the length of the currently input source end words meets the initial length of the target window; if it does, the condition for starting translation is triggered and the currently input source end words are translated to generate target words. Because the initial length covers fewer source end words and sentences than a whole sentence, translation can be triggered quickly and translation efficiency is improved.
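A possible sketch of how the initial window length could be estimated from the manually aligned samples and then used as the trigger condition described above (the alignment format and the averaging rule are assumptions made for illustration):

```python
def initial_window_length(alignments):
    """Estimate the initial window length from sample alignments.

    alignments: list of (target_index, source_indices) pairs indicating which
    source end words each sample target word depends on.
    """
    spans = [max(src) + 1 for _, src in alignments if src]  # rightmost source word needed
    return max(1, round(sum(spans) / len(spans))) if spans else 1

def ready_to_translate(source_words, init_len):
    """Trigger translation once enough source end words have been received."""
    return len(source_words) >= init_len
```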
In this example, after the initial length is determined, the window is dynamically adjusted as the translated target words change, so that the translated target words remain consistent with as few source end words and sentences as possible.
Specifically, the mechanism for controlling the sliding of the target window is as follows: obtain the current starting position and ending position of the target window; calculate state values for the starting position and the ending position according to a preset function and a preset threshold, where the state values determine whether the starting position and the ending position of the window slide; and control the target window to slide over the input source end words according to the state values of the starting position and the ending position. In this example, the sliding direction is to the right, and the state value may take any pre-specified format, for example 0 and 1, where 0 means do not slide to the right and 1 means slide to the right.
Based on this, as shown in FIG. 5, the target window may slide to the right only at the ending position, only at the starting position, or at both, depending on which source end words need to be examined when the target word is generated. This dependency can be learned by convergence training with a reinforcement learning (RL) method. After RL training, the state values of the starting position and the ending position are determined by comparing the output of the preset function with the preset threshold. As a possible implementation, the preset function is a Sigmoid function or a Bernoulli function; both compute the state value mainly based on whether the source end words covered between the starting position and the ending position can translate into a more accurate target word.
In this method, the starting position of the dynamic window is denoted s and the ending position e. The values of s and e are either 0 or 1, where 0 means do not slide to the right and 1 means slide to the right by one position. The values of s and e can be obtained by sampling from a Bernoulli distribution, or by using a Sigmoid function and judging whether the state value exceeds 0.5. The RL policy can be trained in two ways: policy gradient, in which a decision is made by sampling and the gradient is then computed by back-propagation; or imitation learning, in which a teacher agent is designed, the word alignment results generated by the teacher agent are used to generate the corresponding action sequences, and training is performed with supervision. These RL convergence training methods are available in the prior art and are not described in detail here.
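The boundary decisions can be sketched as follows. The Bernoulli-sampling branch corresponds to the training-time sampling mentioned above and the Sigmoid threshold of 0.5 to deterministic inference; the scalar score fed into the function is assumed to come from the trained policy network, which is not specified here:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def boundary_state(score, sample=False, rng=None):
    """Return 1 (slide this boundary right by one position) or 0 (keep it).

    score:  unnormalized scalar from the policy network for this boundary.
    sample: True -> draw from a Bernoulli distribution (training-time exploration);
            False -> threshold the Sigmoid output at 0.5 (inference).
    """
    p = sigmoid(score)
    if sample:
        rng = rng or np.random.default_rng()
        return int(rng.random() < p)
    return int(p > 0.5)

def slide_window(start, end, start_score, end_score, n_source):
    """Apply the state values s and e to the current window boundaries."""
    e = boundary_state(end_score)
    s = boundary_state(start_score)
    new_end = min(end + e, n_source)       # the end may not move past the received input
    new_start = min(start + s, new_end)    # the start may not overtake the end
    return new_start, new_end
```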
Step 102: perform similarity calculation between the translated target word and the target source end words in the current range of the target window.
The translated target words are the translation words corresponding to the source end words and sentences in the current target window, and the target source end words in the current range are the source end words and sentences contained in the current target window.
Specifically, similarity calculation is performed between the translated target word and the target source end words in the current range of the target window. The higher the similarity, the more clearly that source end word contributes to generating the translated target word, so when the translation is subsequently generated its weight is increased according to the degree of dependency, that is, the degree of similarity. For example, when the target source end words are "Gonna make it right", the translated target word is "want to be correct" while the reference translation is "want to be good"; the similarity of the source end word "right" is therefore clearly not very high, and the source end words with higher similarity are "make", "it" and "Gonna". As a possible implementation, the similarity can be determined based on context information and semantic similarity.
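A sketch of the similarity step using cosine similarity over word embeddings (the embeddings and the cosine measure are assumptions; the patent also allows context information to enter the calculation, which is omitted here):

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def window_similarities(target_vec, source_vecs, start, end):
    """Similarity between one translated target word and each source end word in the window.

    target_vec:  embedding of the translated target word.
    source_vecs: embeddings of all source end words received so far.
    Returns (source_position, similarity) pairs; higher values mark the source end
    words that should receive larger weight when the translation is generated.
    """
    return [(i, cosine(target_vec, source_vecs[i])) for i in range(start, end)]
```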
Step 103: perform speech synthesis according to the similarity calculation result to output the target translation.
Specifically, after the similarity is obtained, speech synthesis is performed according to the similarity calculation result to output the target translation; for example, the weight of a word with higher similarity is increased to generate the corresponding translation, and the speech corresponding to that translation is then synthesized.
Of course, the machine translation problem has a particular characteristic: the translated word string often depends on long-distance reordering, that is, the currently generated word may need to see a distant part of the source sentence. Especially in English translation scenarios, a word on which the translation depends may appear at the beginning of a long sentence, so a certain amount of pre-adjustment of the source sentence is needed.
In one embodiment of the present invention, a pre-adjustment sequence is obtained according to the current starting position of the target window and the current position of the input source end word. For example, as shown in FIG. 5, the starting position of the current window is the 16th source end word and the current position of the input source end word is the 18th source end word. Based on the semantic correspondence between the currently translated target word result and the source end words, and on the newly obtained 17th and 18th words, it is determined that the 18th word should be added to the current target window for translation. Specifically, a pre-trained order-adjusting function is used to determine whether the word semantic similarity between the current position of the input source end word and the corresponding words in the target window satisfies a preset condition, that is, whether the newly input source end word has a strong semantic relationship with the source end words in the target window; if so, the word positions of the pre-adjustment sequence are adjusted, for example the 18th word is added to the current target window and a word with a low semantic contribution is moved outside the window.
As a possible example, the pre-trained order-adjusting function may be the following formula (1), in which h_t is the semantic representation at the current time t; tanh is the tangent nonlinear transformation function; τ is the size of the order-adjusting window; T is the model size; σ is the Sigmoid function, whose value lies between 0 and 1; i is an integer variable taking values between 0 and 2τ; E is the embedded word vector representation; and w is the parameter to be learned.
Formula (1): reproduced as an image in the original publication (not available in the text).
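Since formula (1) is available only as an image, its exact form is not recoverable from the text. Purely as an assumption based on the variable descriptions above (h_t, E, w, τ, tanh and σ), a reordering score of this general shape might be sketched as follows:

```python
import numpy as np

def reorder_score(h_t, E, w, tau):
    """Hypothetical order-adjusting score; NOT the patent's actual formula (1).

    h_t: semantic representation at the current time step t.
    E:   embedding vectors for the 2*tau + 1 candidate positions around the window edge.
    w:   learned parameter vector.
    tau: half-size of the order-adjusting window.
    Returns a value in (0, 1); values above a preset threshold would trigger reordering.
    """
    scores = []
    for i in range(2 * tau + 1):                      # i ranges over 0 .. 2*tau
        z = np.tanh(np.concatenate([h_t, E[i]]) @ w)  # combine state and embedding
        scores.append(z)
    return float(1.0 / (1.0 + np.exp(-np.mean(scores))))  # Sigmoid -> value in (0, 1)
```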
In summary, the translation processing method based on the dynamic window can dynamically adjust the size of the attention window according to the presenter's content, generate translations in real time, and reduce the simultaneous interpretation delay.
In order to implement the above embodiments, the invention further provides a translation processing device based on a dynamic window.
FIG. 6 is a schematic structural diagram of a dynamic window based translation processing apparatus according to an embodiment of the present invention. As shown in FIG. 6, the dynamic window based translation processing apparatus includes: a sliding module 10, a calculation module 20 and a synthesis module 30, wherein
the sliding module 10 is configured to control the target window to slide over the input source end words according to preset window sliding parameters;
the calculation module 20 is configured to perform similarity calculation between the translated target word and the target source end words in the current range of the target window; and
the synthesis module 30 is configured to perform speech synthesis according to the similarity calculation result to output the target translation.
In one embodiment of the present invention, as shown in FIG. 7, the apparatus further comprises, in addition to the modules shown in FIG. 6: a judging module 40 and a generating module 50, wherein
the judging module 40 is configured to judge whether the length of the currently input source end words meets the initial length of the target window; and
the generating module 50 is configured to translate the currently input source end words to generate target words when it is determined that the length of the currently input source end words meets the initial length of the target window.
In one embodiment of the present invention, as shown in FIG. 8, the sliding module 10 includes, in addition to the structure shown in FIG. 6: an acquisition unit 11, a calculating unit 12 and a control unit 13, wherein
the acquisition unit 11 is configured to acquire the current starting position and ending position of the target window;
the calculating unit 12 is configured to calculate state values for the starting position and the ending position according to a preset function and a preset threshold; and
the control unit 13 is configured to control the target window to slide over the input source end words according to the state values of the starting position and the ending position.
In one embodiment of the present invention, as shown in FIG. 9, the apparatus further comprises, in addition to the modules shown in FIG. 6: an acquisition module 60 and an adjustment module 70, wherein
the acquisition module 60 is configured to acquire a pre-adjustment sequence according to the current starting position of the target window and the current position of the input source end word; and
the adjustment module 70 is configured to adjust the word positions of the pre-adjustment sequence when it is determined, according to the pre-trained order-adjusting function, that the word semantic similarity between the current position of the input source end word and the corresponding words in the target window meets a preset condition.
It should be noted that, the explanation of the translation processing method based on the dynamic window in the foregoing embodiment is also applicable to the translation processing device based on the dynamic window in this embodiment, and will not be repeated here.
In summary, the translation processing device based on the dynamic window in the embodiments of the invention can dynamically adjust the size of the attention window according to the presenter's content, generate translations in real time, and reduce the simultaneous interpretation delay.
In order to implement the above embodiment, the present invention also proposes a computer device including a processor and a memory; wherein the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory, for implementing the dynamic window-based translation processing method according to any one of the foregoing embodiments.
In order to implement the above embodiments, the present invention also proposes a computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, implements a dynamic window-based translation processing method according to any of the foregoing embodiments.
In the description of the present invention, it should be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", "axial", "radial", "circumferential", etc. indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings are merely for convenience in describing the present invention and simplifying the description, and do not indicate or imply that the device or element being referred to must have a specific orientation, be configured and operated in a specific orientation, and therefore should not be construed as limiting the present invention.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present invention, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise.
In the present invention, unless explicitly specified and limited otherwise, the terms "mounted," "connected," "secured," and the like are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally formed; can be mechanically or electrically connected; either directly or indirectly, through intermediaries, or both, may be in communication with each other or in interaction with each other, unless expressly defined otherwise. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
In the present invention, unless expressly stated or limited otherwise, a first feature "up" or "down" a second feature may be the first and second features in direct contact, or the first and second features in indirect contact via an intervening medium. Moreover, a first feature being "above," "over" and "on" a second feature may be a first feature being directly above or obliquely above the second feature, or simply indicating that the first feature is level higher than the second feature. The first feature being "under", "below" and "beneath" the second feature may be the first feature being directly under or obliquely below the second feature, or simply indicating that the first feature is less level than the second feature.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
While embodiments of the present invention have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the invention, and that variations, modifications, alternatives and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the invention.

Claims (11)

1. A translation processing method based on a dynamic window is characterized by comprising the following steps:
controlling a target window to slide in the input source words according to preset window sliding parameters, and calculating the attention information of the source words in the current target window range;
performing similarity calculation on the translated target words and target source end words in the current range of the target window, wherein the translated target words are translation words corresponding to source end sentences in the current target window, and the target source end words in the current range are source end sentences contained in the current target window;
and performing voice synthesis according to the similarity calculation result to output a target translation.
2. The method of claim 1, further comprising, prior to said controlling the sliding of the target window in the input source word according to the preset window sliding parameter:
judging whether the length of the currently input source end word meets the initial length of the target window or not;
and if the length of the current input source end word is known to meet the initial length of the target window, translating the current input source end word to generate a target word.
3. The method as recited in claim 2, further comprising:
calculating the alignment relation of each word according to the alignment method, and obtaining a sample source end word and sentence corresponding to a sample target word;
training the initial length of the target window according to the sample source end words and sentences corresponding to the sample target words.
4. The method of claim 1, wherein controlling the sliding of the target window in the input source word according to the preset window sliding parameter comprises:
acquiring the current starting position and ending position of the target window;
calculating state values of the starting position and the ending position according to a preset function and a preset threshold value;
and controlling the target window to slide in the input source end words according to the state values of the starting position and the ending position.
5. The method of claim 1, further comprising, prior to said controlling the sliding of the target window in the input source word according to the preset window sliding parameter:
acquiring a pre-adjustment sequence according to the current starting position of the target window and the current position of the input source end word;
and if the word semantic similarity between the current position of the input source word and the word corresponding to the target window meets the preset condition according to the pre-trained order-adjusting function, adjusting the word position of the pre-adjusting sequence.
6. A dynamic window based translation processing apparatus, comprising:
the sliding module is used for controlling the target window to slide in the input source end words according to preset window sliding parameters, and calculating the attention information of the source end words in the current target window range;
the calculation module is used for carrying out similarity calculation on the translated target words and target source end words in the current range of the target window, wherein the translated target words are translation words corresponding to source end sentences in the current target window, and the target source end words in the current range are source end sentences contained in the current target window;
and the synthesis module is used for carrying out voice synthesis according to the similarity calculation result to output the target translation.
7. The apparatus as recited in claim 6, further comprising:
the judging module is used for judging whether the length of the currently input source end word meets the initial length of the target window;
the generating module is used for translating the current input source end word to generate a target word when the length of the current input source end word meets the initial length of the target window.
8. The apparatus of claim 6, wherein the sliding module comprises:
the acquisition unit is used for acquiring the current starting position and ending position of the target window;
the calculating unit is used for calculating state values of the starting position and the ending position according to a preset function and a preset threshold value;
and the control unit is used for controlling the target window to slide in the input source end words according to the state values of the starting position and the ending position.
9. The apparatus as recited in claim 6, further comprising:
the acquisition module is used for acquiring a pre-adjustment sequence according to the current starting position of the target window and the current position of the input source end word;
and the adjusting module is used for adjusting the word positions of the pre-adjusting sequence when determining that the word semantic similarity between the current position of the input source word and the word corresponding to the target window meets the preset condition according to the pre-trained order adjusting function.
10. A computer device comprising a processor and a memory;
wherein the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory for implementing the dynamic window-based translation processing method according to any one of claims 1 to 5.
11. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements a dynamic window based translation processing method according to any of claims 1-5.
CN201910490402.7A 2019-06-06 2019-06-06 Translation processing method and device based on dynamic window Active CN110276082B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910490402.7A CN110276082B (en) 2019-06-06 2019-06-06 Translation processing method and device based on dynamic window

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910490402.7A CN110276082B (en) 2019-06-06 2019-06-06 Translation processing method and device based on dynamic window

Publications (2)

Publication Number Publication Date
CN110276082A CN110276082A (en) 2019-09-24
CN110276082B true CN110276082B (en) 2023-06-30

Family

ID=67962041

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910490402.7A Active CN110276082B (en) 2019-06-06 2019-06-06 Translation processing method and device based on dynamic window

Country Status (1)

Country Link
CN (1) CN110276082B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108304390A (en) * 2017-12-15 2018-07-20 腾讯科技(深圳)有限公司 Training method, interpretation method, device based on translation model and storage medium
CN108647214A (en) * 2018-03-29 2018-10-12 中国科学院自动化研究所 Coding/decoding method based on deep-neural-network translation model

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2916678B1 (en) * 2007-06-01 2021-07-16 Advanced Track & Trace PROCEDURE AND DEVICE FOR SECURING DOCUMENTS
CN103257798B (en) * 2012-02-17 2017-05-10 阿里巴巴集团控股有限公司 Window sliding method and window sliding device
CN104243970A (en) * 2013-11-14 2014-12-24 同济大学 3D drawn image objective quality evaluation method based on stereoscopic vision attention mechanism and structural similarity
CN107704453B (en) * 2017-10-23 2021-10-08 深圳市前海众兴科研有限公司 Character semantic analysis method, character semantic analysis terminal and storage medium
CN108132931B (en) * 2018-01-12 2021-06-25 鼎富智能科技有限公司 Text semantic matching method and device
CN108664632B (en) * 2018-05-15 2021-09-21 华南理工大学 Text emotion classification algorithm based on convolutional neural network and attention mechanism
CN109086869B (en) * 2018-07-16 2021-08-10 北京理工大学 Human body action prediction method based on attention mechanism
CN109145190B (en) * 2018-08-27 2021-07-30 安徽大学 Local citation recommendation method and system based on neural machine translation technology
CN109034378B (en) * 2018-09-04 2023-03-31 腾讯科技(深圳)有限公司 Network representation generation method and device of neural network, storage medium and equipment
CN111368565B (en) * 2018-09-05 2022-03-18 腾讯科技(深圳)有限公司 Text translation method, text translation device, storage medium and computer equipment
CN109344413B (en) * 2018-10-16 2022-05-20 北京百度网讯科技有限公司 Translation processing method, translation processing device, computer equipment and computer readable storage medium

Also Published As

Publication number Publication date
CN110276082A (en) 2019-09-24


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant