CN117313754B - Intelligent translation method, device and translator - Google Patents

Intelligent translation method, device and translator

Info

Publication number
CN117313754B
Authority
CN
China
Prior art keywords
target
characters
text
target text
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311577252.6A
Other languages
Chinese (zh)
Other versions
CN117313754A (en)
Inventor
车建波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Bepal Intelligent Technology Co ltd
Original Assignee
Shenzhen Bepal Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Bepal Intelligent Technology Co ltd
Priority to CN202311577252.6A
Publication of CN117313754A
Application granted
Publication of CN117313754B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00: Handling natural language data
    • G06F 40/40: Processing or translation of natural language
    • G06F 40/58: Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
    • G06F 16/00: Information retrieval; database structures therefor; file system structures therefor
    • G06F 16/30: Information retrieval of unstructured textual data
    • G06F 16/33: Querying
    • G06F 16/332: Query formulation
    • G06F 16/3329: Natural language query formulation or dialogue systems
    • G06F 16/3331: Query processing
    • G06F 16/334: Query execution
    • G06F 16/3343: Query execution using phonetics
    • G06F 16/3344: Query execution using natural language analysis
    • G06F 16/338: Presentation of query results
    • G06F 16/34: Browsing; visualisation therefor
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application discloses an intelligent translation method, device and translator in the technical field of translation. The device comprises an input unit, an output unit and a translation unit. The input unit acquires first target text, which is used to determine the source book of that text. The translation unit determines, from the first target text and first audio information, second audio information corresponding to preset text; predicts the ambiguous words to be recognized in the second target text as the initial third target text, based on the second target text before and after the second audio information; compares the third target text with third audio information according to the predicted text information; and outputs the resulting text information as fourth target text. The output unit renders the fourth target text as audio content, producing the corresponding translation result. The scheme improves the user's reading experience as well as the readability and coherence of the translation.

Description

Intelligent translation method, device and translator
Technical Field
The invention relates to the technical field of translation, and in particular to an intelligent translation method, an intelligent translation device and an intelligent translator.
Background
Extracurricular reading is an effective way to learn, but because of their limited vocabulary, children often meet characters they cannot read in extracurricular books that carry no pinyin annotations. When this happens they can only skip the word or ask a parent for help, which makes for a poor reading experience.
In the prior art, however, translators that assist learning handle context poorly. In particular, many Chinese characters are homophones with differing interpretations, and a single character can carry several definitions, so the resulting translation is of low quality and does little to improve the user's understanding of the text.
Disclosure of Invention
The embodiments of the present application provide an intelligent translation method, device and translator that reduce the impact of multiple word senses on translation quality in the prior art. Interaction content lets the translation receive user feedback and correction in real time, improving the readability and coherence of the translation.
An embodiment of the present application provides an intelligent translation device comprising an input unit, an output unit and a translation unit;
the input unit is used for acquiring first target text, the first target text being used to determine its source book;
the translation unit is used for determining, from the first target text and first audio information, second audio information corresponding to preset text; predicting the ambiguous words to be recognized in the second target text as the initial third target text, based on the second target text before and after the second audio information; comparing the third target text with third audio information according to the predicted text information; and outputting the resulting text information as fourth target text;
the output unit is used for outputting the fourth target text as audio content, thereby obtaining the corresponding translation result.
The intelligent translation device further comprises a recognition unit for recognizing the fourth target text output by the translation unit, determining the user's interaction information, and outputting the fifth target text according to the interaction information.
When the length of the second target text is determined, its set length is derived from the proportion of idioms appearing in the first target text;
when the second target text is determined, the third target text is derived from it: the correspondence between the second target text and the continuous information of the first target text is established on the basis of the second target text, and the text in the second target text that matches a common polysemous word becomes the current third target text.
When the fifth target text is acquired, its occurrences in the second target text are counted, and when the count exceeds a preset threshold the third target text is reconstructed on the basis of the fifth target text;
reconstructing the third target text on the basis of the fifth target text comprises:
splicing the fifth target text with the first word of the second target text; when the fifth target text can be spliced with the second target text, the fifth target text serves as the reconstructed third target text.
When the occurrences of the fifth target text number below the preset threshold, the identical portions of the fourth and fifth target texts are compared, and the characters they share serve as the reconstructed third target text.
The interaction information is divided into a plurality of regions of interest, and a first feature is acquired for each region of interest;
based on each first feature, a weight distribution over the interaction information is determined, and the first feature with the largest weight is selected as the first feature of the region of interest;
the interaction information is graded, and a second feature is acquired according to the grading feedback within it;
the fifth target text to output is determined according to the first feature and the second feature.
The recognition unit identifies the interaction information and acquires the fifth target text; when a second fifth target text is received, the upper limit on the length of the fifth target text is adjusted according to the length of the second target text, the upper limit being half the length of the second target text.
Adjusting the third target text according to the fifth target text comprises: when the fifth target text is continuous with the second target text, the preset-length text that follows the second target text and agrees with the preset length of the third target text becomes the adjusted third target text;
when the fifth target text is discontinuous with the second target text, the fifth target text is spliced word by word with the second target text, and any splice that forms a fixed combination becomes the adjusted third target text.
The first audio information is the audio of the user reading the current text after the first target text has been acquired; the second audio information is the audio related to the preset text within the audio the user produces when the preset text is acquired; the third audio information is the user's audio after translation is complete. The second target text is the book content corresponding to the second audio information; the third target text comprises the ambiguous words to be recognized in the second target text; the fourth target text expresses the meaning of the ambiguous words' content after context adjustment; and the fifth target text is the text in the interaction information that is related or identical to the translation information.
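For orientation, these entities can be collected into a small data model; a minimal Python sketch, in which every class and field name is shorthand of our own rather than terminology fixed by the patent:

```python
from dataclasses import dataclass, field


@dataclass
class TranslationContext:
    """Illustrative container for the texts and audio defined above."""
    first_target_text: str                # captured text; identifies the source book
    first_audio: bytes = b""              # user reading the current text aloud
    second_audio: bytes = b""             # user audio tied to the preset text
    third_audio: bytes = b""              # user audio after translation completes
    second_target_text: str = ""          # book content matching the second audio
    third_target_text: str = ""           # ambiguous words awaiting recognition
    fourth_target_text: str = ""          # ambiguous-word meaning after context adjustment
    fifth_target_texts: list[str] = field(default_factory=list)  # feedback text from interaction
```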
The technical solutions provided in the embodiments of the present application have at least the following technical effects or advantages:
the input unit, translation unit, output unit and recognition unit jointly process the acquired text information and gather the user's interaction information, so the translated content is corrected promptly once interaction information arrives, improving the translation quality and the readability and coherence of the translation.
Drawings
FIG. 1 is a schematic diagram of a first embodiment of an intelligent translation device;
FIG. 2 is a schematic diagram of a second embodiment of an intelligent translation device;
FIG. 3 is a flow chart of an intelligent translation method.
Description of the embodiments
So that the invention may be readily understood, a fuller description is given below with reference to the accompanying drawings, in which preferred embodiments are illustrated. The invention may, however, be embodied in many different forms and is not limited to the embodiments described herein; rather, these embodiments are provided so that this disclosure will be thorough and complete.
It should be noted that the terms "vertical", "horizontal", "upper", "lower", "left", "right", and the like are used herein for illustrative purposes only and do not represent the only embodiment.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs; the terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention; the term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
Example 1
When a user encounters a character, its sense is in practice determined by the context. Especially when translating ancient poems, dialects and the like, every character has several meanings, individual characters may have dozens, and even a sense estimated from the context can still be wrong. In such cases the translation is adjusted promptly according to the content the user selects, which assists the user's learning.
As shown in FIG. 1, the intelligent translation apparatus comprises an input unit, an output unit and a translation unit;
the input unit collects the first target text, which is used to determine its source book, that is, which book or ancient text the first target text comes from and what content it covers there.
First audio information is acquired according to the first target text; it is the audio of the user reading the current text after the first target text has been acquired.
The translation unit determines, from the first target text and the first audio information and once preset text has been acquired, the second audio information corresponding to the preset text; predicts the ambiguous words to be recognized in the second target text as the initial third target text, based on the second target text before and after the second audio information; compares the third target text with the third audio information according to the predicted text information; and outputs the resulting text information as the fourth target text. The preset text is determined from the source information of the first target text and may be specific sentences in books and ancient texts; it serves to identify which sentences are likely to present translation problems.
The output unit is used for outputting the fourth target text as audio content, so as to obtain a corresponding translation result.
Preferably, the second audio information is the audio related to the preset text, extracted from the audio the user produces when the preset text is acquired, and the book content corresponding to the second audio information serves as the second target text.
The fourth target text expresses the meaning of the ambiguous word's content after context adjustment.
Preferably, the initial third target text takes its combined-phrase form from the splicing of the second target text before and after it, its length matching the composition of a common idiom. The set length of the second target text is determined by the article source identified from the first target text: for classical prose the length of the third target text is 5, and for ancient poems it is adjusted to 3, 5 or 7 according to the poem form.
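As a sketch of this length rule (the source-type labels and the default for ordinary books are assumptions; only the lengths 5 and 3/5/7 come from the passage above):

```python
def third_target_length(source_type: str, poem_line_length: int = 5) -> int:
    """Set length of the initial third target text from the article source."""
    if source_type == "classical_prose":
        return 5                        # fixed length for classical prose
    if source_type == "ancient_poem":
        if poem_line_length in (3, 5, 7):
            return poem_line_length     # match the poem form's line length
        raise ValueError("ancient poem forms use 3-, 5- or 7-character lines")
    return 4                            # assumed default: common idiom length
```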
Preferably, when the target text is determined, the corresponding target features in the first target text are compared with preset features, special character combinations in the first target text are selected, and the expressed meaning of each combination is determined from the configuration of its context, so that the corresponding translation information is output.
When the text information is obtained, the paraphrase combinations of the target text are determined from the context and the character combination; the relation between the text and its context is then refined from the collected audio information, the pronunciation narrowing the range of the target text, and the finally recognized text content is fixed by the combined result of context and text. Further, after the corresponding translation information is obtained, the output form of the translation is determined by the user's interaction, improving the translation result.
Preferably, third audio information is obtained, the third audio information being the user's audio after translation is complete.
Preferably, the third audio information must satisfy a preset condition: the current user pronounces continuously, and the pronounced text belongs to the type of the target text.
Preferably, when the length of the second target text is determined, its set length is derived from the proportion of idioms appearing in the first target text.
Preferably, when the second target text is determined, the third target text is derived from it: the correspondence between the second target text and the continuous information of the first target text is established on the basis of the second target text, and the text in the second target text that matches a common polysemous word becomes the current third target text.
A list of the polysemous words appearing in the current article is then built by checking the continuous content of the second target text for common polysemous words, and from this list the third target text, the part of the current translation that is hard to recognize and translate, is obtained.
Preferably, after the third target text is obtained, the text within it that the user needs translated is determined from how the third target text was set; the text selected from the third target text is compared with the first and second target texts so that an appropriate paraphrase is chosen and the third target text is translated into the fourth target text.
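The passage does not fix how the appropriate paraphrase is chosen; one plausible reading is a context-overlap comparison against the first and second target texts, sketched below in Python (the lexicon shape and the overlap score are assumptions):

```python
def choose_paraphrase(third_target: str, context: str,
                      sense_lexicon: dict[str, list[str]]) -> str:
    """Pick the gloss of a polysemous word that best overlaps its context.

    sense_lexicon maps a word to its candidate glosses; the gloss sharing
    the most characters with the surrounding first and second target text
    is returned as the fourth target text.
    """
    glosses = sense_lexicon.get(third_target)
    if not glosses:
        return third_target             # unknown word: pass it through
    context_chars = set(context)
    return max(glosses, key=lambda g: len(set(g) & context_chars))
```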
In this embodiment, acquiring and processing the ambiguous words improves the reading experience and helps the user understand the translated text information.
Example 2
To improve the translation quality and the interaction between translation and user, as shown in FIG. 2, the invention further provides a recognition unit. It identifies the points of doubt that arise after translation and, by processing the user's interaction information, determines which translated content needs revision, improving the translation quality and the readability of the result.
Preferably, the intelligent translation device further comprises a recognition unit for recognizing the fourth target text output by the translation unit, determining the user's interaction information, and outputting the fifth target text according to the interaction information.
Specifically, the recognition unit obtains the fifth target text from the interaction information corresponding to the translation unit, the fifth target text being the text in the interaction information that is related or identical to the translation information. From the fifth target text it can be seen whether the translation information diverges anywhere once the current translated content has been determined; this data is recorded, and the translated content is adjusted at the next translation.
When the fifth target text is acquired, its occurrences in the second target text are counted, and when the count exceeds a preset threshold the third target text is reconstructed on the basis of the fifth target text. The more often the fifth target text occurs, the more complex the word senses that follow it, and passages left unclear will surface repeatedly in the user's feedback.
Preferably, reconstructing the third target text on the basis of the fifth target text comprises:
splicing the fifth target text with the first word of the second target text; when the fifth target text can be spliced with the second target text, the fifth target text serves as the reconstructed third target text.
Preferably, when the occurrences of the fifth target text number below the preset threshold, the identical portions of the fourth and fifth target texts are compared, and the characters they share serve as the reconstructed third target text.
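Taken together, the two branches can be sketched as follows; the threshold default and the fallback when the splice fails are assumptions, since the text leaves both open:

```python
def reconstruct_third_target(fifth: str, second: str, fourth: str,
                             current_third: str, threshold: int = 3) -> str:
    """Rebuild the third target text from the feedback (fifth) text."""
    if fifth and second and second.count(fifth) > threshold:
        # frequent feedback: splice with the first character of the second
        # target text; a splice that still occurs in the second target
        # text promotes the feedback text itself
        if (fifth + second[0]) in second:
            return fifth
        return current_third            # assumed fallback: keep current value
    # infrequent feedback: keep the characters that the fourth and fifth
    # target texts have in common
    shared = set(fourth) & set(fifth)
    return "".join(ch for ch in fourth if ch in shared)
```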
In this embodiment, the third target text is revised on the strength of the feedback carried by the interaction information, so that the translation of the polysemous words in the final output is corrected. The translation device can thus adjust its output to the user's feedback, improving the translation quality.
Example 3
To improve the handling of user interaction, the way the fifth target text is obtained is adjusted so that it reflects the user's feedback needs. Preferably, obtaining the fifth target text further comprises:
dividing the interaction information into a plurality of regions of interest and acquiring a first feature for each region of interest;
the first features are obtained by dividing the interaction information into regions of interest and extracting from each one. The regions are determined from the translation result and from the user's clicks on the machine interface while reviewing that result; they are uniform in size, each contains at least one first feature, and a first feature is information that readily draws the user's attention during interaction.
Preferably, the interaction information is decomposed into a set of topics, each topic is assigned a set of keywords, and each topic's keywords are the first features of its region of interest.
Based on each first feature, a weight distribution over the interaction information is determined, and the first feature with the largest weight is selected as the first feature of the region of interest.
Specifically, the weight is computed from the frequency and proportion of each first feature within the interaction information, first features that occur more often receiving higher weight;
the first features are then evaluated according to their weights, so that the keywords corresponding to them are adjusted, making the direct evaluation of each first feature more efficient; a sketch of this weighting follows.
The interaction information is graded, and a second feature is acquired from the grading feedback it contains.
The second feature is determined by the user's preferences and feedback level, extracted from the user's historical data and interaction behaviour.
Preferably, the second feature is obtained as follows:
historical interaction data of the interaction information is acquired, and the associations between the user's click behaviour and the regions of interest are extracted from it.
Click features and association features are then derived from those associations: the click features are the click count, click position, click time and so on, extracted from the click behaviour; the association features are extracted from the associated information within the regions of interest.
A feature weight is computed for each feature from the click and association features, the weights are normalized, and the feature with the largest weight is selected as the second feature.
The click and association features are processed by a neural network: each feature serves as an input, the difference between each output and the expected output is computed, and the feature weights are obtained one by one from the resulting error; the specific form of the network is not described further here.
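Since the network itself is left unspecified, the sketch below stands in for it with precomputed importances and shows only the normalize-and-select step (the feature names and numbers are illustrative):

```python
import numpy as np


def pick_second_feature(names: list[str], importances: np.ndarray) -> str:
    """Normalize error-derived feature weights and keep the largest.

    importances stands in for whatever the (unspecified) network derives
    from the click features and association features.
    """
    w = np.asarray(importances, dtype=float)
    total = w.sum()
    if total > 0:
        w = w / total                   # normalize to a distribution
    return names[int(np.argmax(w))]


# e.g. click count, click position, click time, region association
features = ["clicks", "position", "time", "association"]
print(pick_second_feature(features, np.array([4.0, 1.0, 0.5, 2.5])))  # -> clicks
```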
Specifically, data such as the user's click counts and click positions in the historical interaction data reveal how the user operates after translation, so the characters arising in translation can be distinguished by where and how often the user clicks. For example, identifying a click position locates the translated text and original text beneath it, and comparing the two exposes the problems that occurred when the polysemous word was translated, so that the translation approach is adjusted. The click count shows that the user is dissatisfied with certain translated content or that the output reads unsmoothly after translation. Meanwhile, acquiring the associated features of the regions of interest identifies, each time a problem appears, the region the problem is associated with, and the final translation of each region is adjusted accordingly, improving the final recognition result.
The fifth target text to output is determined from the first feature and the second feature.
In this step the first and second features fix the finally output fifth target text, and the fifth target text so obtained makes the target features clearer;
preferably, the recognition unit also judges whether the fifth target text satisfies a preset condition: the interaction information is identified by the recognition unit to obtain the fifth target text, and when a second fifth target text is received, the upper limit on the length of the fifth target text is adjusted according to the length of the second target text, that limit being half the length of the second target text.
The length of the fifth target text is capped because each translation handles a limited amount of text; the second target text is the text recognized when the second audio information is acquired, and setting the limit to half its length still yields enough text to assist in recognizing more content.
Preferably, the recognition unit also maintains paragraph tags: when it obtains interaction information for different paragraphs it sets a tag for each, and when a tag's information is updated it retrieves the stored fifth target text and transmits it to the translation unit, so that the third target text is adjusted according to the fifth target text.
When the fifth target text is continuous with the second target text, the preset-length text that follows the second target text and agrees with the preset length of the third target text becomes the adjusted third target text;
when the fifth target text is discontinuous with the second target text, the fifth target text is spliced word by word with the second target text, and any splice that forms a fixed combination becomes the adjusted third target text.
Each paragraph thus carries its own fifth target text. Within a paragraph, a fifth target text that reads continuously with the second target text marks content that is currently hard to translate correctly; a discontinuous one is a word that feedback shows to be error-prone, and if the second target text can combine with it, some sentences are evidently difficult to translate. A sketch of this adjustment is given below.
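One plausible reading of this per-paragraph adjustment, with the continuity test and the fixed-combination lookup as assumptions:

```python
def adjust_third_target(fifth: str, second: str, current_third: str,
                        paragraph: str, preset_len: int = 2,
                        fixed_combinations: frozenset = frozenset()) -> str:
    """Adjust the third target text from one paragraph's feedback text."""
    start = paragraph.find(second)
    if start >= 0 and paragraph.startswith(fifth, start + len(second)):
        # continuous: the preset-length span right after the second
        # target text becomes the adjusted third target text
        end = start + len(second)
        return paragraph[end:end + preset_len]
    # discontinuous: splice the feedback character by character against
    # the second target text; keep the first splice that is a known
    # fixed combination
    for ch in second:
        if (fifth + ch) in fixed_combinations:
            return fifth + ch
    return current_third                # no fixed combination: keep as-is
```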
Preferably, as shown in FIG. 3, the invention provides an intelligent translation method comprising:
step S1: collecting the user's first target text, and acquiring first audio information according to the first target text;
step S2: according to the first target text and the first audio information, and once preset text has been acquired, determining the second audio information corresponding to the preset text; and, according to the second target text before and after the second audio information, predicting the ambiguous words to be recognized in the second target text and outputting them as the initial third target text;
step S3: comparing the third target text with the third audio information according to the predicted text information, and outputting the resulting text information as the fourth target text;
step S4: determining the user's interaction information according to the fourth target text, and outputting the fifth target text according to the interaction information;
step S5: reconstructing the third target text on the basis of the fifth target text.
In this step the reconstructed third target text is output again, which adjusts the translated content, makes the translation more accurate, and reduces the influence of polysemous words on the translation.
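Steps S1 to S5 chain together as sketched below; the unit internals are passed in as callables because the method fixes the flow, not the implementations (all parameter and function names here are assumptions):

```python
from typing import Callable


def translate_once(first_target: str,
                   read_audio: Callable[[], bytes],
                   match_second_target: Callable[[str, bytes], str],
                   predict_ambiguous: Callable[[str], str],
                   translate: Callable[[str, str], str],
                   collect_feedback: Callable[[str], str]) -> str:
    """End-to-end sketch of method steps S1 to S5."""
    first_audio = read_audio()                                   # S1
    second_target = match_second_target(first_target, first_audio)
    third_target = predict_ambiguous(second_target)              # S2
    fourth_target = translate(third_target, second_target)       # S3
    fifth_target = collect_feedback(fourth_target)               # S4
    if fifth_target:                                             # S5
        # reconstruct the third target text from the feedback, then retranslate
        if fifth_target in second_target:
            third_target = fifth_target
        fourth_target = translate(third_target, second_target)
    return fourth_target
```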
Preferably, the invention provides an intelligent translator comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to implement the device according to any one of embodiments one, two and three.
The above are only preferred embodiments of the present invention and are not intended to limit it; those skilled in the art can make various modifications and variations. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall fall within its scope of protection.

Claims (4)

1. An intelligent translation device, comprising an input unit, an output unit and a translation unit, wherein:
the input unit is used for acquiring first target text, the first target text being used to determine its source book;
the translation unit is used for determining, from the first target text and first audio information, second audio information corresponding to preset text; predicting the ambiguous words to be recognized in the second target text as the initial third target text, based on the second target text before and after the second audio information; comparing the third target text with third audio information according to the predicted text information; and outputting the resulting text information as fourth target text;
the output unit is used for outputting the fourth target text as audio content so as to obtain the corresponding translation result;
the device further comprises a recognition unit for recognizing the fourth target text output by the translation unit, determining the user's interaction information, and outputting the fifth target text according to the interaction information;
when the fifth target text is acquired, its occurrences in the second target text are counted, and when the count exceeds a preset threshold the third target text is reconstructed on the basis of the fifth target text;
reconstructing the third target text on the basis of the fifth target text comprises:
splicing the fifth target text with the first word of the second target text, and, when the fifth target text can be spliced with the second target text, using the fifth target text as the reconstructed third target text;
when the occurrences of the fifth target text number below the preset threshold, comparing the identical portions of the fourth and fifth target texts and using the characters they share as the reconstructed third target text;
adjusting the third target text according to the fifth target text comprises: when the fifth target text is continuous with the second target text, the preset-length text that follows the second target text and agrees with the preset length of the third target text is the adjusted third target text;
when the fifth target text is discontinuous with the second target text, splicing the fifth target text word by word with the second target text, any splice that forms a fixed combination being the adjusted third target text;
wherein the first audio information is the audio of the user reading the current text after the first target text is acquired; the second audio information is the audio related to the preset text within the audio the user produces when the preset text is acquired; the third audio information is the user's audio after translation is complete; the second target text is the book content corresponding to the second audio information; the third target text comprises the ambiguous words to be recognized in the second target text; the fourth target text expresses the meaning of the ambiguous words' content after context adjustment; and the fifth target text is the text in the interaction information that is related or identical to the translation information.
2. The intelligent translation device according to claim 1, wherein, when the length of the second target text is determined, its set length is derived from the proportion of idioms appearing in the first target text;
and wherein, when the second target text is determined, the third target text is derived from it: the correspondence between the second target text and the continuous information of the first target text is established on the basis of the second target text, and the text in the second target text that matches a common polysemous word is the current third target text.
3. The intelligent translation device according to claim 1, wherein the interaction information is divided into a plurality of regions of interest, and a first feature is acquired for each region of interest;
a weight distribution over the interaction information is determined based on each first feature, and the first feature with the largest weight is selected as the first feature of the region of interest;
the interaction information is graded, and a second feature is acquired according to the grading feedback in the interaction information;
and the fifth target text to output is determined according to the first feature and the second feature.
4. The intelligent translation device according to claim 1, wherein the fifth target text is obtained by the recognition unit identifying the interaction information, and, when a second fifth target text is received, the upper limit on the length of the fifth target text is adjusted according to the length of the second target text, the upper limit being half the length of the second target text.
CN202311577252.6A 2023-11-24 2023-11-24 Intelligent translation method, device and translator Active CN117313754B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311577252.6A CN117313754B (en) 2023-11-24 2023-11-24 Intelligent translation method, device and translator

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311577252.6A CN117313754B (en) 2023-11-24 2023-11-24 Intelligent translation method, device and translator

Publications (2)

Publication Number Publication Date
CN117313754A CN117313754A (en) 2023-12-29
CN117313754B (en) 2024-01-30

Family

ID=89281353

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311577252.6A Active CN117313754B (en) 2023-11-24 2023-11-24 Intelligent translation method, device and translator

Country Status (1)

Country Link
CN (1) CN117313754B (en)


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8990066B2 (en) * 2012-01-31 2015-03-24 Microsoft Corporation Resolving out-of-vocabulary words during machine translation
US10540451B2 (en) * 2016-09-28 2020-01-21 International Business Machines Corporation Assisted language learning

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006163769A (en) * 2004-12-07 2006-06-22 Nec Corp User dictionary generating system, user dictionary generating device, user dictionary generating method and program
JP2018206356A (en) * 2017-06-08 2018-12-27 パナソニックIpマネジメント株式会社 Translation information providing method, translation information providing program, and translation information providing apparatus
KR20190141891A (en) * 2018-06-15 2019-12-26 부산외국어대학교 산학협력단 Method and Apparatus for Sentence Translation based on Word Sense Disambiguation and Word Translation Knowledge
CN110991196A (en) * 2019-12-18 2020-04-10 北京百度网讯科技有限公司 Translation method and device for polysemous words, electronic equipment and medium
CN112507736A (en) * 2020-12-21 2021-03-16 蜂后网络科技(深圳)有限公司 Real-time online social translation application system
WO2022133802A1 (en) * 2020-12-21 2022-06-30 蜂后网络科技(深圳)有限公司 Real-time online social translation application system
CN115273834A (en) * 2022-07-26 2022-11-01 深圳市东象设计有限公司 Translation machine and translation method
CN116052671A (en) * 2022-11-21 2023-05-02 深圳市东象设计有限公司 Intelligent translator and translation method

Also Published As

Publication number Publication date
CN117313754A (en) 2023-12-29


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant