CN113221582A - Translation processing method and device and translation processing device - Google Patents

Translation processing method and device and translation processing device

Info

Publication number
CN113221582A
CN113221582A (application CN202110477869.5A; granted as CN113221582B)
Authority
CN
China
Prior art keywords
text
boundary
target
unit
selected text
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110477869.5A
Other languages
Chinese (zh)
Other versions
CN113221582B (en)
Inventor
方菲
鲁涛
李质轩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sogou Technology Development Co Ltd
Original Assignee
Beijing Sogou Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sogou Technology Development Co Ltd filed Critical Beijing Sogou Technology Development Co Ltd
Priority to CN202110477869.5A
Publication of CN113221582A
Application granted
Publication of CN113221582B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/40Processing or translation of natural language
    • G06F40/58Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/0486Drag-and-drop
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/279Recognition of textual entities
    • G06F40/284Lexical analysis, e.g. tokenisation or collocates

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Machine Translation (AREA)

Abstract

An embodiment of the invention provides a translation processing method and apparatus, and a device for translation processing. The method comprises the following steps: identifying a target text unit in a selected text, where the selected text corresponds to a first language; and outputting a translation result of the selected text in a second language together with related information of the target text unit. On the basis of providing the translation result of the selected text, the embodiment of the invention can also provide the user with the related information of the target text units contained in the selected text. This expands the scope of translation processing, offers the user deeper and richer information about the selected text, and spares the user further manual query steps when the meaning of a target text unit is unclear, thereby reducing the user's operation cost and improving operation efficiency.

Description

Translation processing method and device and translation processing device
Technical Field
The present invention relates to the field of computer technology, and in particular to a translation processing method and apparatus, and a device for translation processing.
Background
With the continuous development of computer technology, mobile terminals such as smartphones have become increasingly powerful, and users can install various applications on them according to their needs to implement different functions.
For example, a translation application on the mobile terminal can realize the translation function anytime and anywhere, translating a source-language text into a target-language text and displaying the result, which is very convenient for the user.
However, the current text translation function only returns a translation result for the text selected by the user. The content provided in the translation result is limited; if the user wants deeper information about the text, the user must perform further queries manually, which makes the operation steps cumbersome and the operation inefficient.
Disclosure of Invention
The embodiment of the invention provides a translation processing method and device and a translation processing device, which can reduce the operation cost of a user and improve the operation efficiency of the user.
In order to solve the above problem, an embodiment of the present invention discloses a translation processing method, where the method includes:
identifying a target text unit in a selected text, wherein the selected text corresponds to a first language;
and outputting the translation result of the selected text corresponding to the second language and outputting the relevant information of the target text unit.
Optionally, before the identifying the target text unit in the selected text, the method further includes:
pre-establishing a template structure corresponding to a preset text unit;
comparing the selected text with the template structure, and judging whether the selected text contains a text structure matched with the template structure;
and if the selected text contains a text structure matched with the template structure, determining the matched text structure as the identified target text unit.
Optionally, the comparing the selected text with the template structure, and determining whether the selected text contains a text structure matched with the template structure, includes:
segmenting the selected text into words and labeling the part of speech of each segmented word to obtain a labeling result;
and comparing each segmented word and its part of speech in the labeling result with the words and parts of speech preset in each pre-established template structure, and determining a matched text structure.
Optionally, before the identifying the target text unit in the selected text, the method further includes:
when the duration of the trigger operation on the target area is detected to exceed the preset duration, determining a selected area according to the position of the trigger operation, and determining the text in the selected area as the selected text.
Optionally, after determining the selected region according to the position of the trigger operation, the method further includes:
displaying a first boundary and a second boundary of the selected area;
adjusting a position of the first boundary and/or the second boundary in response to a drag operation on the first boundary and/or the second boundary;
and after the dragging operation is detected to stop, determining an area between the adjusted first boundary and the second boundary as a selected area.
Optionally, after the outputting the information related to the target text unit, the method further includes:
and responding to the triggering operation of the relevant information of the target text unit, and jumping to a detail page corresponding to the target text unit.
Optionally, the information related to the target text unit includes at least one of: the original text of the target text unit in the selected text, the type of the target text unit, the original form (base form) of the target text unit, and the translation result of the target text unit in a second language.
On the other hand, the embodiment of the invention discloses a translation processing device, which comprises:
the translation recognition module is used for recognizing a target text unit in a selected text, and the selected text corresponds to a first language;
and the result output module is used for outputting the translation result of the selected text corresponding to the second language and outputting the related information of the target text unit.
Optionally, the apparatus further comprises:
the template establishing module is used for establishing a template structure corresponding to the preset text unit in advance;
the template comparison module is used for comparing the selected text with the template structure and judging whether the selected text contains a text structure matched with the template structure;
and the target determining module is used for determining the matched text structure as the identified target text unit if the selected text contains the text structure matched with the template structure.
Optionally, the template matching module includes:
the part-of-speech tagging submodule is used for segmenting the selected text into words and labeling the part of speech of each segmented word to obtain a labeling result;
and the part-of-speech comparison submodule is used for comparing each segmented word and its part of speech in the labeling result with the words and parts of speech preset in each pre-established template structure and determining a matched text structure.
Optionally, the apparatus further comprises:
and the trigger determining module is used for determining a selected area according to the position of the trigger operation when the duration of the trigger operation on the target area is detected to exceed the preset duration, and determining the text in the selected area as the selected text.
Optionally, the apparatus further comprises:
the boundary display module is used for displaying a first boundary and a second boundary of the selected area;
a boundary adjusting module, configured to adjust a position of the first boundary and/or the second boundary in response to a drag operation on the first boundary and/or the second boundary;
and the area determining module is used for determining an area between the adjusted first boundary and the second boundary as a selected area after the dragging operation is detected to be stopped.
Optionally, the apparatus further comprises:
and the page jump module is used for responding to the triggering operation of the relevant information of the target text unit and jumping to the detail page corresponding to the target text unit.
In yet another aspect, an embodiment of the present invention discloses an apparatus for translation processing, including a memory and one or more programs, where the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs including instructions for executing the translation processing method according to any of the foregoing method embodiments.
In yet another aspect, embodiments of the invention disclose a machine-readable medium having instructions stored thereon, which when executed by one or more processors, cause an apparatus to perform a translation processing method as described in one or more of the preceding.
The embodiment of the invention has the following advantages:
the embodiment of the invention can identify a target text unit in a selected text, where the selected text corresponds to a first language, and output the related information of the target text unit in addition to the translation result of the selected text in a second language. On the basis of providing the translation result of the selected text, the embodiment of the invention can also provide the user with the related information of the target text units contained in the selected text. This expands the scope of translation processing, offers the user deeper and richer information about the selected text, and spares the user further manual query steps when the meaning of a target text unit is unclear, thereby reducing the user's operation cost and improving operation efficiency.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present invention; those skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a flow diagram of the steps of one embodiment of a translation processing method of the present invention;
FIG. 2 is a block diagram of a translation processing apparatus according to an embodiment of the present invention;
FIG. 3 is a block diagram of an apparatus 800 for translation processing of the present invention;
fig. 4 is a schematic diagram of a server in some embodiments of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Method embodiment
Referring to fig. 1, a flowchart illustrating steps of an embodiment of a translation processing method according to the present invention is shown, and specifically may include the following steps:
step 101, identifying a target text unit in a selected text, wherein the selected text corresponds to a first language;
and 102, outputting a translation result of the selected text corresponding to the second language and outputting related information of the target text unit.
The translation processing method provided by the embodiment of the invention can be applied to an electronic device that includes a display screen, through which human-computer interaction can be realized. Such electronic devices include, but are not limited to: servers, smartphones, voice recorders, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptops, in-car computers, desktop computers, set-top boxes, smart TVs, wearable devices, and the like.
The selected text can be a text to be translated that the user selects through a selection operation; when a selected text is detected in the current interface, a word-capture translation function can be triggered to translate the selected text in real time and output the translation result. The selected text may be a sentence, a word, an article, and so on. After detecting the selected text, the embodiment of the invention translates it from the first language into the second language through a machine translation algorithm to obtain a translation result in the second language. The machine translation algorithm can be any existing machine translation algorithm, and the service can be provided by a cloud server or by a software algorithm built into the current electronic device. In addition, the embodiment of the invention identifies the selected text and judges whether it contains a target text unit; if so, it acquires the related information of the target text unit and outputs that related information in addition to the translation result of the selected text in the second language.
It should be noted that the first language and the second language are different languages. The embodiment of the invention does not limit the language categories of the first language and the second language; they may be preset, or determined automatically from the current application environment. The language categories may include Chinese, English, French, Italian, German, Portuguese, Japanese, Korean, and so on. In the examples below, the first language is English and the second language is Chinese; application scenarios for other languages are similar and may be referred to mutually.
In an optional embodiment of the invention, the method may further comprise: and determining the first language and the second language according to the language type of the application environment where the selected text is located.
In one example, the application environment in which the selected text is located may be an online translation application, and the source language currently set by the online translation application may be used as the first language, and the target language currently set by the online translation application may be used as the second language. For example, if the source language currently set by the online translation application is French and the target language is Chinese, it can be determined that the first language is French and the second language is Chinese.
In another example, the first language corresponding to the selected text may be obtained through automatic recognition, and if the language type of the application environment in which the selected text is located is different from the first language corresponding to the selected text, the language type of the application environment in which the selected text is located may be determined to be the second language. For example, if the application in which the selected text is located is a news browsing application, the first language corresponding to the selected text is english, and the language type of the news browsing application is chinese, it may be determined that the second language is chinese.
Of course, the above-mentioned manner for automatically determining the first language and the second language is only an application example of the present invention, and the embodiment of the present invention does not limit the manner for automatically determining the first language and the second language. The second language may also be determined, for example, based on a language class of the electronic device system.
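The language-selection logic described above can be sketched as follows. The function name, argument names, and the fallback order are illustrative assumptions, not the patent's actual implementation.

```python
# Hypothetical sketch of choosing the first (source) and second (target)
# language from the application environment, as described above.

def choose_languages(app_source_lang, app_target_lang, detected_lang=None):
    """Return (first_language, second_language).

    app_source_lang / app_target_lang: languages configured by the current
    application environment (e.g. an online translation app).
    detected_lang: language of the selected text from automatic recognition,
    if available.
    """
    if detected_lang is not None and detected_lang != app_source_lang:
        # e.g. English text selected inside a Chinese-language news app:
        # the environment's language becomes the second (target) language.
        return detected_lang, app_source_lang
    return app_source_lang, app_target_lang

# Online translation app configured French -> Chinese:
print(choose_languages("fr", "zh"))        # ('fr', 'zh')
# English text selected inside a Chinese-language news app:
print(choose_languages("zh", None, "en"))  # ('en', 'zh')
```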
In an embodiment of the present invention, the type of the target text unit includes, but is not limited to, fixed collocations and/or phrases. A fixed collocation is a fixed structure consisting of at least two fixed words; for example, "according to" and "be determined to" are two fixed collocations in English. A phrase, also called a word group, is a grammatical unit larger than a word but smaller than a sentence, composed of words combined according to grammatical rules. Further, phrases may be divided into continuous phrases and non-continuous phrases. A continuous phrase consists of at least two words with no intervening words; for example, "under the weather" (feeling unwell) is a continuous English phrase containing three consecutive words with nothing between them. A non-continuous phrase differs in that at least two of its words may have other words between them; for example, "take place" is a non-continuous English phrase, since words may appear between "take" and "place", as in "take sb's place" or "take sth's place".
In specific implementations, the type of the target text unit is not limited to phrases and fixed collocations; it may also include popular expressions, idioms, set word sequences whose meaning cannot be derived from grammar alone, and the like, and the types may be configured according to actual needs.
Through the embodiment of the invention, in the process of translating the selected text in real time, the translation result of the selected text can be provided for the user, the target text unit (such as phrases, fixed collocation and the like) in the selected text can be further automatically identified, and the related information of the target text unit is provided for the user.
In an optional embodiment of the present invention, the information related to the target text unit includes, but is not limited to, at least one of: the original text of the target text unit in the selected text, the type of the target text unit, the original form of the target text unit, and the translation result of the target text unit in the second language.
The embodiment of the invention does not limit the specific content of the related information of the target text unit. For example, the related information may include simple description information and/or detailed description information of the target text unit. The simple description information may include the original form of the target text unit, its translation result in the second language, a brief paraphrase, and the like. The detailed description information may include a detailed paraphrase, usage notes, example sentences, and the like.
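A minimal sketch of how such related information might be structured as a record is shown below; the field names are illustrative assumptions, not the patent's data model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TargetUnitInfo:
    original_text: str            # text as it appears in the selected text
    unit_type: str                # e.g. "fixed collocation" or "phrase"
    original_form: str            # base form, e.g. "take measures"
    translation: str              # translation result in the second language
    brief_gloss: Optional[str] = None   # simple description for the pop-up
    detail_url: Optional[str] = None    # detail page to jump to on trigger

info = TargetUnitInfo(
    original_text="take strong measures",
    unit_type="phrase",
    original_form="take measures",
    translation="采取措施",
)
print(info.original_form)  # take measures
```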
In one example, assume the selected text is "The government were urged to take strong measures against the violence". After detecting the selected text, the embodiment of the present invention translates it and outputs the translation result in the second language (assumed to be Chinese): "他们敦促政府采取有力措施打击暴力行为。" ("They urged the government to take strong measures against violent acts.") In addition, the embodiment identifies the selected text and recognizes that it contains two target text units: the fixed collocation "were urged to" (original form "be urged to") and the phrase "take strong measures" (original form "take measures"). Therefore, on the basis of outputting the translation result of the selected text in the second language, the related information of the target text units "were urged to" and "take strong measures" can also be output. The related information includes, but is not limited to, the original text of the target text unit in the selected text, the type of the target text unit, its original form, its translation result in the second language, and the like. For example, in this case, for the target text unit whose original text in the selected text is "were urged to", the type "fixed collocation", the original form "be urged to", and the corresponding translation result in the second language can be output; for the target text unit whose original text is "take strong measures", the type "phrase", the original form "take measures", and the translation result "采取措施" ("take measures") in the second language can be output.
It should be noted that, the embodiment of the present invention does not limit the output form of the translation result of the selected text. For example, a layer may be newly created, and the translation result of the selected text corresponding to the second language is displayed in the newly created layer. The parameters of the layer can be set by those skilled in the art or by users, and the parameters can include background color, size, transparency, and the like. Preferably, the background color of the layer may be different from the background color of the selected text, so as to highlight the translation result.
Of course, the embodiment of the present invention does not limit the output form of the related information of the target text unit. For example, the related information of the target text unit may be displayed in the same layer as the translation result of the selected text, or may be separately displayed in different layers.
In one example, the related information of the target text unit is separately displayed in one layer and is displayed at a related position of the selected text, where the related position may be a position near the position of the selected text, and the near position may be a position of a blank area, so as to avoid affecting normal display of other contents in the current interface.
Preferably, in order not to disturb the user's browsing of other content in the current interface, the embodiment of the present invention displays the related information of the target text unit in the form of a pop-up window that contains only simple description information. For example, in the above example, after identifying the target text units "were urged to" and "take strong measures" in the selected text "The government were urged to take strong measures against the violence", a pop-up window may be displayed in the current interface showing only simple description information, such as the original forms "be urged to" and "take measures" of the two target text units, to prompt the user that the selected text contains these two target text units.
In an optional embodiment of the present invention, after the outputting the information related to the target text unit, the method may further include: and responding to the triggering operation of the relevant information of the target text unit, and jumping to a detail page corresponding to the target text unit.
After the target text unit contained in the selected text is identified and the related information of the target text unit is output, if the user wants to further acquire more detailed information of the target text unit, the related information of the target text unit can be triggered, so that the current interface jumps to the detailed page corresponding to the target text unit. The details page corresponding to the target unit of text may be used to display detailed description information for the target unit of text, or the details page may be an explanation page in the online dictionary for the target unit of text.
The triggering operation includes, but is not limited to, a mouse click operation, such as a left single-click, left double-click, or right double-click, and a touch operation on the touch display screen of the electronic device. The touch operation may be implemented by a target object (e.g., a finger or a stylus) approaching or contacting the touch display screen of the electronic device. In one example, the touch operation may include: a click operation (including single-click, double-click, and triple-click), a long-press operation, a slide operation, and the like.
Further, the trigger operation may include a gesture operation, such as a gesture track; alternatively, the trigger operation may be a combination of a gesture operation and a device input operation other than a gesture. For example, the combined operation may combine a gesture operation with a key press on a function key of the electronic device; with a voice-control operation; with a touch operation; or with a fingerprint acquisition operation, and so on. Correspondingly, the device used for the input other than the gesture operation may be a function key, a touch screen, a fingerprint acquisition device, or a voice acquisition device of the electronic device.
On the basis of providing the translation result of the selected text, the embodiment of the invention can also provide the user with the related information of the target text units contained in the selected text. This expands the scope of translation processing, offers the user deeper and richer information about the selected text, and spares the user further manual query steps when the meaning of a target text unit is unclear, thereby reducing the user's operation cost and improving operation efficiency.
In an optional embodiment of the present invention, before the identifying the target text unit in the selected text in step 101, the method may further include:
step S11, pre-establishing a template structure corresponding to the preset text unit;
step S12, comparing the selected text with the template structure, and judging whether the selected text contains a text structure matched with the template structure;
and step S13, if the selected text contains a text structure matched with the template structure, determining the matched text structure as the identified target text unit.
The embodiment of the invention can pre-establish a template library, and the template library comprises template structures corresponding to all the preset text units. Wherein each preset text unit may correspond to at least one template structure. The type of the preset text unit includes, but is not limited to, preset phrases, fixed collocations, idioms, and the like.
The embodiment of the invention judges whether the selected text contains the text structure matched with the template structure or not by comparing the selected text with the template structure, and can determine that the selected text contains the target text unit under the condition that the selected text contains the text structure matched with the template structure.
It should be noted that the embodiment of the present invention does not limit the specific form of the template structure. In one example, the template structure of the preset text unit "be urged to" is pre-established as follows: "[be verb, any tense] urged to"; the template structure consists of a "be" verb in any tense, the past participle "urged" of the verb "urge", and the preposition "to". The original form of the preset text unit is "be urged to", and its type is a fixed collocation. When the selected text "The government were urged to take strong measures against the perils of violence" is compared against the template structure, because "were" in the text structure "were urged to" in the selected text is a "be" verb, the text structure consists of a "be" verb, the past participle "urged" of the verb "urge", and the preposition "to". Thus, the text structure "were urged to" matches the template structure "[be verb, any tense] urged to" and may be determined to be the identified target text unit.
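As an illustrative sketch only (not the patent's actual implementation), matching a fixed-collocation template such as "[be verb, any tense] urged to" against a selected text could be done token by token; the function name, the set of "be" forms, and whitespace tokenization are assumptions for the example:

```python
# Hypothetical sketch: match the template "[be verb, any tense] urged to".
# Any tense of "be" is accepted in the first slot; "urged" and "to" are literal.
BE_FORMS = {"am", "is", "are", "was", "were", "be", "been", "being"}

def find_be_urged_to(selected_text):
    """Return the matched text structure for the preset unit 'be urged to',
    or None if the selected text contains no matching structure."""
    tokens = selected_text.split()
    for i in range(len(tokens) - 2):
        if (tokens[i].lower() in BE_FORMS
                and tokens[i + 1].lower() == "urged"
                and tokens[i + 2].lower() == "to"):
            return " ".join(tokens[i:i + 3])
    return None

print(find_be_urged_to(
    "The government were urged to take strong measures against the perils of violence"))
# prints "were urged to"
```

A real system would of course match against every template in the template library rather than a single hard-coded pattern.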
Furthermore, the embodiment of the present invention may also pre-store the related information, such as the type, original form, and paraphrase, of the preset text unit corresponding to each template structure. For example, when the target text unit "were urged to" in the selected text is recognized as matching the preset text unit "be urged to", it may be determined that the type of the target text unit is a fixed collocation, that the original form is "be urged to", that the translation result corresponding to the second language is "promote", and other relevant information.
In another example, the template structure of the preset text unit "take measures" is pre-established as follows: "take [interval words of preset conditions] measures"; the template structure includes the verb "take" and the noun "measures", and there may be interval words satisfying preset conditions between them. In one example, the preset conditions are: the number of interval words is greater than or equal to zero, not greater than 30% of the sentence length, and not greater than 4. The original form of the preset text unit is "take measures", and its type is a phrase. When the selected text "The government were urged to take strong measures against the perils of violence" is compared against the template structure, because the text structure "take strong measures" includes the verb "take" and the noun "measures", with the interval word "strong" between them, the interval words satisfy the preset conditions. Therefore, it may be determined that the text structure "take strong measures" matches the template structure "take [interval words of preset conditions] measures", that "take strong measures" is the identified target text unit, that its type is a phrase, that its original form is "take measures", and that the translation result corresponding to the second language is "take measures".
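The interval-word conditions described above (at least zero interval words, at most 4, and at most 30% of the sentence length) can be sketched as a simple check; the function name and token-based representation are assumptions for illustration:

```python
def match_take_measures(tokens):
    """Hypothetical sketch of the template 'take [interval words] measures':
    the number of interval words between 'take' and 'measures' must be >= 0,
    no more than 4, and no more than 30% of the sentence length."""
    n = len(tokens)
    lowered = [t.lower() for t in tokens]
    for i, tok in enumerate(lowered):
        if tok != "take":
            continue
        for j in range(i + 1, n):
            if lowered[j] == "measures":
                gap = j - i - 1  # number of interval words
                if gap <= 4 and gap <= 0.3 * n:
                    return " ".join(tokens[i:j + 1])
    return None

sentence = ("The government were urged to take strong measures "
            "against the perils of violence").split()
print(match_take_measures(sentence))  # prints "take strong measures"
```

With one interval word ("strong") in a 13-word sentence, the gap satisfies both the absolute bound (1 <= 4) and the relative bound (1 <= 3.9), so the structure matches.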
In this example, the translation result of the selected text "The government were urged to take strong measures against the perils of violence" in the second language, e.g., "They urge the government to take forceful action against the violent actors.", may be output, and the related information of the two recognized target text units ("were urged to" and "take strong measures") may also be output; for example, their original forms "be urged to" and "take measures" may be output.
In an optional embodiment of the present invention, the comparing, in step S12, the selected text with the template structure, and determining whether the selected text includes a text structure matching the template structure, includes:
step S121, performing word segmentation on the selected text and labeling the part of speech of each word segmentation to obtain a labeling result;
and S122, comparing each participle and the part-of-speech of each participle in the labeling result with each participle and the part-of-speech of each participle preset in each template structure established in advance, and determining a matched text structure.
Further, before comparing the selected text with the pre-established template structures, the selected text may be segmented into words and each word's part of speech may be labeled. Taking the selected text "The government were urged to take strong measures against the perils of violence" as an example, the selected text is segmented and each word's part of speech is labeled, yielding the following labeling result: "The [definite article] government [noun] were [be verb, past tense] urged [verb, past participle] to [to] take [verb, present tense] strong [adjective] measures [noun] against [preposition] the [definite article] perils [noun] of [preposition] violence [noun]". Comparing the selected text with the pre-established template structures then means comparing each word and its part of speech in the labeling result with each word and part of speech preset in each pre-established template structure, to judge whether the labeled parts of speech in the selected text match those preset in the template structure and, where interval words exist, to further judge whether the interval words satisfy the preset conditions. When the words, the parts of speech, and the interval words in a certain text structure in the selected text all conform to a certain template structure, that text structure can be determined to match the template structure.
For example, in the above example, for the text unit "take strong measures" in the selected text, the words, their parts of speech, and the interval word contained therein all satisfy the requirements on words, parts of speech, and interval words in the template structure "take [interval words of preset conditions] measures" of the preset text unit "take measures"; therefore, "take strong measures" may be determined to be the target text unit.
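The word segmentation, part-of-speech labeling, and template comparison of steps S121 and S122 can be sketched as follows. This is illustrative only: a toy part-of-speech dictionary stands in for a real tagger (such as spaCy or NLTK), and the tag names, template representation, and function names are assumptions:

```python
# Hypothetical toy POS dictionary covering only the example sentence.
TOY_POS = {
    "the": "DET", "government": "NOUN", "were": "BE_VERB",
    "urged": "VERB_PAST_PART", "to": "TO", "take": "VERB",
    "strong": "ADJ", "measures": "NOUN", "against": "PREP",
    "perils": "NOUN", "of": "PREP", "violence": "NOUN",
}

def tag(selected_text):
    """Segment the selected text and label each word's part of speech (step S121)."""
    return [(w, TOY_POS.get(w.lower(), "X")) for w in selected_text.split()]

# Template for 'be urged to': any-word be-verb slot, then literal 'urged', 'to'.
TEMPLATE = [("*", "BE_VERB"), ("urged", "VERB_PAST_PART"), ("to", "TO")]

def match_template(tagged, template):
    """Compare each word and part of speech against the template (step S122)."""
    for i in range(len(tagged) - len(template) + 1):
        window = tagged[i:i + len(template)]
        if all((w == pw or pw == "*") and t == pt
               for (w, t), (pw, pt) in zip(window, template)):
            return " ".join(w for w, _ in window)
    return None

tagged = tag("The government were urged to take strong measures "
             "against the perils of violence")
print(match_template(tagged, TEMPLATE))  # prints "were urged to"
```

In practice the labeling result would be compared against every template in the library, and matching windows would be returned with the related information of their preset text units.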
It should be noted that the embodiment of the present invention does not limit the specific manner of identifying the target text unit in the selected text. In practical applications, the target text unit in the selected text may be identified in multiple ways. For example, a regular expression may be preset, and target text units in the selected text identified through the regular expression; alternatively, a recognition model for recognizing target text units in text may be trained in advance, and when the selected text is detected, the selected text may be input into the trained recognition model, which recognizes and outputs the target text units in the selected text.
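The regular-expression alternative mentioned above might be sketched as follows; the specific patterns are assumptions for illustration, not expressions given in the patent:

```python
import re

# One hypothetical pattern per preset text unit.
# 'be urged to': any tense of "be", then "urged to".
URGED_RE = re.compile(
    r"\b(?:am|is|are|was|were|be|been|being)\s+urged\s+to\b", re.IGNORECASE)
# 'take measures': up to 4 interval words between "take" and "measures".
MEASURES_RE = re.compile(
    r"\btake\s+(?:\w+\s+){0,4}?measures\b", re.IGNORECASE)

text = ("The government were urged to take strong measures "
        "against the perils of violence")
print(URGED_RE.search(text).group())     # prints "were urged to"
print(MEASURES_RE.search(text).group())  # prints "take strong measures"
```

Note that a plain regex cannot enforce the relative "30% of sentence length" condition by itself; that check would still be applied in code after a candidate match is found.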
The recognition model may be obtained by performing supervised training on an existing neural network using a large number of training samples and machine learning methods. It should be noted that the embodiment of the present invention does not limit the model structure or the training method of the recognition model. The recognition model may fuse multiple neural networks, including but not limited to at least one of, or a combination, superposition, or nesting of at least two of, the following: CNN (Convolutional Neural Network), LSTM (Long Short-Term Memory) network, RNN (Recurrent Neural Network), attention neural network, and the like.
In an optional embodiment of the present invention, before the identifying the target text unit in the selected text in step 101, the method may further include:
when the duration of the trigger operation on the target area is detected to exceed the preset duration, determining a selected area according to the position of the trigger operation, and determining the text in the selected area as the selected text.
It should be noted that, the embodiment of the present invention does not limit the manner of obtaining the selected text. For example, the display content in the display interface may be triggered by a mouse or a keyboard to select the text. Further, for the electronic device with the touch display screen, the display content in the touch display screen of the electronic device can be triggered through the target object to select the text.
In an example, the trigger operation may be a long-press operation. When it is detected that the duration of the trigger operation on the target area exceeds a preset duration (e.g., 2 seconds), the trigger operation may be considered to meet the condition for triggering the word-taking translation function, and that function may be triggered. Once the word-taking translation function is triggered, a selected area is determined according to the position of the trigger operation. Further, the selected area may be displayed distinctively, for example by changing its background color. Whether recognizable text content exists in the selected area is then detected through a preset algorithm strategy; if so, the text in the selected area is determined to be the selected text.
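The long-press trigger condition reduces to a simple duration check; the 2-second threshold comes from the example above, while the function and constant names are assumptions:

```python
PRESET_DURATION_SECONDS = 2.0  # example preset duration from the text

def should_trigger_word_taking(press_start, press_end):
    """Return True when the press duration on the target area exceeds the
    preset duration, i.e. the word-taking translation function should fire."""
    return (press_end - press_start) > PRESET_DURATION_SECONDS

print(should_trigger_word_taking(0.0, 2.5))  # prints True
print(should_trigger_word_taking(0.0, 1.0))  # prints False
```

A real implementation would receive these timestamps from the platform's touch or mouse event callbacks rather than as raw floats.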
It is understood that the trigger operation includes, but is not limited to, a long press operation, and may also include a mouse click operation, such as a left click, a left double click, a right double click, a sliding operation on a touch screen of the electronic device, or a combination of multiple operations.
In an optional embodiment of the present invention, after determining the selected area according to the position of the trigger operation, the method may further include:
step S21, displaying a first boundary and a second boundary of the selected area;
step S22, responding to the dragging operation of the first boundary and/or the second boundary, and adjusting the position of the first boundary and/or the second boundary;
and step S23, after the dragging operation is detected to stop, determining the area between the adjusted first boundary and the second boundary as a selected area.
In the embodiment of the invention, after the selected area is determined, the range of the selected area can be adjusted to increase or decrease the text content in the selected text. Further, the first boundary and the second boundary may be displayed in a form of a cursor, so as to prompt a user to adjust a range of the selected area by dragging the cursor corresponding to the first boundary and the second boundary.
The drag operation may be a move operation performed on the basis of a hold mouse click operation, or the drag operation may be a move operation performed on the basis of a hold target object press operation, or the like.
After the dragging operation is detected to stop, the area between the adjusted first boundary and the adjusted second boundary can be determined as a selected area, then the selected text in the selected area after the range is adjusted can be translated in real time, the target text unit can be identified, and the translation result of the selected text in the selected area after the range is adjusted, which corresponds to the second language, and the relevant information of the identified target text unit can be output.
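Determining the selected text between the adjusted first and second boundaries can be sketched as follows, assuming (hypothetically) that each boundary is a character offset into the displayed text; sorting the offsets makes the result independent of drag direction:

```python
def selected_text_between(full_text, first_boundary, second_boundary):
    """Return the text in the region between the two boundary cursors,
    regardless of which boundary was dragged past the other."""
    lo, hi = sorted((first_boundary, second_boundary))
    return full_text[lo:hi]

text = "The government were urged to take strong measures"
print(selected_text_between(text, 19, 4))  # prints "government were"
```

After each drag stop, the selected text recomputed this way would be re-translated and re-scanned for target text units, matching the real-time update behavior described above.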
By the embodiment of the invention, the selected text in the selected area can be translated in real time, the target text unit in the selected text can be automatically identified, the content of the selected text is automatically updated along with the change of the range of the selected area, and the translation result of the selected text and the identified target text unit are synchronously updated along with the update of the selected text.
In an alternative embodiment of the present invention, the selected text includes, but is not limited to, any one of the following: selected text obtained by executing a selection operation on text content in a page, text content obtained by performing text recognition on a selected picture, and selected text obtained by executing a selection operation on text content in a hyperlink.
The selected text can be any text in the current interface of the electronic device, for example, sent text, received text, or text being entered by the user via the keyboard in an instant messaging application interface, on-screen text, text in a translation results page, and the like.
In one example, a user is browsing news through a web browser in the electronic device, and when it is detected that the user presses a certain area in the page for more than a preset duration, the word-taking translation function is triggered. Specifically, the text in the long-press area of the user is determined as the selected text, and the original text and the translation result of the selected text are displayed, and the related information of the target text unit in the selected text is displayed.
By the embodiment of the invention, the user does not need to interrupt browsing of the current page to consult a dictionary or retrieve related information about the target text unit through a search engine. This can simplify the user's operation process and improve translation processing efficiency; and because the user does not need to quit the current application, the continuity of using the current application is maintained and the user experience is improved.
It should be noted that, the source of the selected text is not limited in the embodiments of the present application. The source of the selected text is not limited to the above-mentioned page text, picture, and hyperlink, and the selected text may also be derived from voice information. The voice information may be voice information stored in the electronic device, voice information transmitted by the electronic device, voice information received by the electronic device, and the like.
For example, the embodiment of the present invention may receive voice information input by a user in real time through a microphone of an electronic device, and may perform voice recognition on the voice information when detecting that the voice information is selected, and take text content obtained by the voice recognition as a selected text.
To sum up, the embodiment of the present invention can identify a target text unit in a selected text, where the selected text corresponds to a first language, and output related information of the target text unit on the basis of outputting a translation result of the selected text corresponding to a second language. On the basis of providing the translation result of the selected text for the user, the embodiment of the invention can also provide the user with the related information of the target text unit contained in the selected text, expand the translation processing range, provide the user with deeper and richer information in the selected text, avoid the need of further manual query operation steps because the user does not understand the meaning of the target text unit, reduce the user operation cost and improve the user operation efficiency.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Device embodiment
Referring to fig. 2, a block diagram of a translation processing apparatus according to an embodiment of the present invention is shown, where the apparatus may include:
the translation recognition module 201 is configured to recognize a target text unit in a selected text, where the selected text corresponds to a first language;
and the result output module 202 is configured to output a translation result of the selected text corresponding to the second language and output related information of the target text unit.
Optionally, the apparatus further comprises:
the template establishing module is used for establishing a template structure corresponding to the preset text unit in advance;
the template comparison module is used for comparing the selected text with the template structure and judging whether the selected text contains a text structure matched with the template structure;
and the target determining module is used for determining the matched text structure as the identified target text unit if the selected text contains the text structure matched with the template structure.
Optionally, the template comparison module includes:
the part-of-speech tagging submodule is used for segmenting words of the selected text and tagging the part of speech of each segmented word to obtain a tagging result;
and the part-of-speech comparison submodule is used for comparing each participle and the part-of-speech of the participle in the labeling result with each participle and the part-of-speech of the participle preset in each template structure established in advance and determining a matched text structure.
Optionally, the apparatus further comprises:
and the trigger determining module is used for determining a selected area according to the position of the trigger operation when the duration of the trigger operation on the target area is detected to exceed the preset duration, and determining the text in the selected area as the selected text.
Optionally, the apparatus further comprises:
the boundary display module is used for displaying a first boundary and a second boundary of the selected area;
a boundary adjusting module, configured to adjust a position of the first boundary and/or the second boundary in response to a drag operation on the first boundary and/or the second boundary;
and the area determining module is used for determining an area between the adjusted first boundary and the second boundary as a selected area after the dragging operation is detected to be stopped.
Optionally, the apparatus further comprises:
and the page jump module is used for responding to the triggering operation of the relevant information of the target text unit and jumping to the detail page corresponding to the target text unit.
Optionally, the apparatus further comprises:
and the language determining module is used for determining the first language and the second language according to the language type of the application environment where the selected text is located.
Optionally, the information related to the target text unit includes at least one of an original text of the target text unit in the selected text, a type of the target text unit, an original shape of the target text unit, and a translation result of the target text unit corresponding to a second language.
Optionally, the selected text includes any one of: selected text obtained by executing a selection operation on text content in a page, text content obtained by performing text recognition on a selected picture, and selected text obtained by executing a selection operation on text content in a hyperlink.
The embodiment of the invention can identify the target text unit in the selected text, the selected text corresponds to the first language, and the relevant information of the target text unit is also output on the basis of outputting the translation result of the selected text corresponding to the second language. On the basis of providing the translation result of the selected text for the user, the embodiment of the invention can also provide the user with the related information of the target text unit contained in the selected text, expand the translation processing range, provide the user with deeper and richer information in the selected text, avoid the need of further manual query operation steps because the user does not understand the meaning of the target text unit, reduce the user operation cost and improve the user operation efficiency.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
An embodiment of the present invention provides an apparatus for translation processing, comprising a memory, and one or more programs, wherein the one or more programs are stored in the memory, and the one or more programs configured to be executed by the one or more processors include instructions for: identifying a target text unit in a selected text, wherein the selected text corresponds to a first language; and outputting the translation result of the selected text corresponding to the second language and outputting the relevant information of the target text unit.
Optionally, prior to said identifying the target text unit in the selected text, the device is further configured to execute the one or more programs by the one or more processors including instructions for:
pre-establishing a template structure corresponding to a preset text unit;
comparing the selected text with the template structure, and judging whether the selected text contains a text structure matched with the template structure;
and if the selected text contains a text structure matched with the template structure, determining the matched text structure as the identified target text unit.
Optionally, the comparing the selected text with the template structure, and determining whether the selected text contains a text structure matched with the template structure, includes:
segmenting the selected text and labeling the part of speech of each segmented word to obtain a labeling result;
and comparing each participle and the part-of-speech of the participle in the labeling result with each participle and the part-of-speech of the participle preset in each template structure established in advance, and determining a matched text structure.
Optionally, prior to said identifying the target text unit in the selected text, the device is further configured to execute the one or more programs by the one or more processors including instructions for:
when the duration of the trigger operation on the target area is detected to exceed the preset duration, determining a selected area according to the position of the trigger operation, and determining the text in the selected area as the selected text.
Optionally, after determining the selected region according to the location of the triggering operation, the device is further configured to execute the one or more programs by one or more processors including instructions for:
displaying a first boundary and a second boundary of the selected area;
adjusting a position of the first boundary and/or the second boundary in response to a drag operation on the first boundary and/or the second boundary;
and after the dragging operation is detected to stop, determining an area between the adjusted first boundary and the second boundary as a selected area.
Optionally, after said outputting information about the target text unit, the device is further configured to execute the one or more programs by one or more processors including instructions for:
and responding to the triggering operation of the relevant information of the target text unit, and jumping to a detail page corresponding to the target text unit.
Optionally, the information related to the target text unit includes at least one of an original text of the target text unit in the selected text, a type of the target text unit, an original shape of the target text unit, and a translation result of the target text unit corresponding to a second language.
Fig. 3 is a block diagram illustrating an apparatus 800 for translation processing in accordance with an example embodiment. For example, the apparatus 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 3, the apparatus 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing elements 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation at the device 800. Examples of such data include instructions for any application or method operating on device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power components 806 provide power to the various components of device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 800.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 800 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 800 is in an operational mode, such as a call mode, a recording mode, and a voice information processing mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the device 800. For example, the sensor assembly 814 may detect the open/closed state of the device 800 and the relative positioning of components, such as the display and keypad of the apparatus 800; the sensor assembly 814 may also detect a change in the position of the apparatus 800 or a component of the apparatus 800, the presence or absence of user contact with the apparatus 800, the orientation or acceleration/deceleration of the apparatus 800, and a change in the temperature of the apparatus 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the apparatus 800 and other devices. The apparatus 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 804 comprising instructions, executable by the processor 820 of the device 800 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Fig. 4 is a schematic diagram of a server in some embodiments of the invention. The server 1900 may vary widely by configuration or performance and may include one or more Central Processing Units (CPUs) 1922 (e.g., one or more processors) and memory 1932, one or more storage media 1930 (e.g., one or more mass storage devices) storing applications 1942 or data 1944. Memory 1932 and storage medium 1930 can be, among other things, transient or persistent storage. The program stored in the storage medium 1930 may include one or more modules (not shown), each of which may include a series of instructions operating on a server. Still further, a central processor 1922 may be provided in communication with the storage medium 1930 to execute a series of instruction operations in the storage medium 1930 on the server 1900.
The server 1900 may also include one or more power supplies 1926, one or more wired or wireless network interfaces 1950, one or more input-output interfaces 1958, one or more keyboards 1956, and/or one or more operating systems 1941, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
A non-transitory computer-readable storage medium is also provided, in which instructions, when executed by a processor of an apparatus (a server or a terminal), enable the apparatus to perform the translation processing method shown in Fig. 1.
A non-transitory computer-readable storage medium is also provided, in which instructions, when executed by a processor of an apparatus (a server or a terminal), enable the apparatus to perform a translation processing method, the method comprising: identifying a target text unit in a selected text, wherein the selected text corresponds to a first language; and outputting a translation result of the selected text in a second language and outputting related information of the target text unit.
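For illustration only, the two steps of the method above (identifying a target text unit in the selected text, then outputting the translation together with the unit's related information) can be sketched as follows. The template list, the `translate` stub, and all names here are hypothetical examples, not the patent's actual implementation:

```python
import re

# Hypothetical templates for "target text units" (e.g., fixed collocations);
# the patent's real template structures are word/part-of-speech based.
TEMPLATES = {
    "look forward to": "phrase (verb collocation)",
    "in spite of": "phrase (preposition collocation)",
}

def identify_target_units(selected_text):
    """Return (original text, type) pairs for template matches in the text."""
    found = []
    for pattern, unit_type in TEMPLATES.items():
        if re.search(r"\b" + re.escape(pattern) + r"\b", selected_text):
            found.append((pattern, unit_type))
    return found

def translate(selected_text):
    # Stand-in for a real first-language -> second-language translation engine.
    return "<translation of: %s>" % selected_text

def process(selected_text):
    """Output the translation result plus related info for each target unit."""
    units = identify_target_units(selected_text)
    return {
        "translation": translate(selected_text),
        "related_info": [{"original": u, "type": t} for u, t in units],
    }

result = process("We look forward to your reply.")
```

In this sketch the "related information" carries only the original text and type of the unit; per claim 7, a real implementation could also attach the unit's base form and its own translation.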
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This invention is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.
The above description covers only preferred embodiments of the present invention and is not intended to limit the invention; any modifications, equivalent replacements, improvements, and the like made within the spirit and principles of the present invention shall be included within its scope of protection.
The translation processing method, translation processing apparatus, and device for translation processing provided by the present invention have been described in detail above. Specific examples are used herein to explain the principles and embodiments of the present invention, and the descriptions of the above embodiments are only intended to help in understanding the method and its core ideas. Meanwhile, for those skilled in the art, there may be variations in the specific embodiments and the scope of application according to the ideas of the present invention. In summary, the contents of this specification should not be construed as limiting the present invention.

Claims (15)

1. A translation processing method, characterized in that the method comprises:
identifying a target text unit in a selected text, wherein the selected text corresponds to a first language;
and outputting the translation result of the selected text corresponding to the second language and outputting the relevant information of the target text unit.
2. The method of claim 1, wherein prior to identifying the target text unit in the selected text, the method further comprises:
pre-establishing a template structure corresponding to a preset text unit;
comparing the selected text with the template structure, and judging whether the selected text contains a text structure matched with the template structure;
and if the selected text contains a text structure matched with the template structure, determining the matched text structure as the identified target text unit.
3. The method of claim 2, wherein comparing the selected text with the template structure to determine whether the selected text contains a text structure matching the template structure comprises:
segmenting the selected text into words and labeling the part of speech of each word segment to obtain a labeling result;
and comparing each word segment and its part of speech in the labeling result with each word segment and part of speech preset in each pre-established template structure, to determine a matched text structure.
4. The method of claim 1, wherein prior to identifying the target text unit in the selected text, the method further comprises:
when the duration of the trigger operation on the target area is detected to exceed the preset duration, determining a selected area according to the position of the trigger operation, and determining the text in the selected area as the selected text.
5. The method of claim 4, wherein after determining the selected region based on the location of the triggering operation, the method further comprises:
displaying a first boundary and a second boundary of the selected area;
adjusting a position of the first boundary and/or the second boundary in response to a drag operation on the first boundary and/or the second boundary;
and after the dragging operation is detected to stop, determining an area between the adjusted first boundary and the second boundary as a selected area.
6. The method of claim 1, wherein after outputting the information related to the target text unit, the method further comprises:
and responding to the triggering operation of the relevant information of the target text unit, and jumping to a detail page corresponding to the target text unit.
7. The method of claim 1, wherein the information related to the target text unit comprises at least one of: the original text of the target text unit in the selected text, the type of the target text unit, the base form of the target text unit, and a translation result of the target text unit in the second language.
8. A translation processing apparatus, characterized in that the apparatus comprises:
the translation recognition module is used for recognizing a target text unit in a selected text, and the selected text corresponds to a first language;
and the result output module is used for outputting the translation result of the selected text corresponding to the second language and outputting the related information of the target text unit.
9. The apparatus of claim 8, further comprising:
the template establishing module is used for establishing a template structure corresponding to the preset text unit in advance;
the template comparison module is used for comparing the selected text with the template structure and judging whether the selected text contains a text structure matched with the template structure;
and the target determining module is used for determining the matched text structure as the identified target text unit if the selected text contains the text structure matched with the template structure.
10. The apparatus of claim 9, wherein the template alignment module comprises:
the part-of-speech tagging submodule is used for segmenting the selected text into words and labeling the part of speech of each word segment to obtain a labeling result;
and the part-of-speech comparison submodule is used for comparing each word segment and its part of speech in the labeling result with each word segment and part of speech preset in each pre-established template structure, to determine a matched text structure.
11. The apparatus of claim 8, further comprising:
and the trigger determining module is used for determining a selected area according to the position of the trigger operation when the duration of the trigger operation on the target area is detected to exceed the preset duration, and determining the text in the selected area as the selected text.
12. The apparatus of claim 11, further comprising:
the boundary display module is used for displaying a first boundary and a second boundary of the selected area;
a boundary adjusting module, configured to adjust a position of the first boundary and/or the second boundary in response to a drag operation on the first boundary and/or the second boundary;
and the area determining module is used for determining an area between the adjusted first boundary and the second boundary as a selected area after the dragging operation is detected to be stopped.
13. The apparatus of claim 8, further comprising:
and the page jump module is used for responding to the triggering operation of the relevant information of the target text unit and jumping to the detail page corresponding to the target text unit.
14. An apparatus for translation processing, comprising a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs comprising instructions for performing the translation processing method according to any one of claims 1 to 7.
15. A machine-readable medium having stored thereon instructions, which when executed by one or more processors, cause an apparatus to perform the translation processing method of any of claims 1 to 7.
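Claims 2-3 (and 9-10) describe matching the segmented, part-of-speech-tagged text against pre-established template structures. A minimal sketch of that comparison follows, using whitespace segmentation, a toy tag dictionary, and an assumed template format of (word-or-wildcard, part-of-speech) pairs; none of this reflects the patent's actual template structures:

```python
# Toy part-of-speech dictionary; a real system would use a proper tagger.
POS = {"we": "pron", "look": "v", "forward": "adv", "to": "prep", "reply": "n"}

def segment_and_tag(text):
    """Whitespace segmentation plus dictionary POS lookup (the labeling result)."""
    words = text.lower().strip(".?!").split()
    return [(w, POS.get(w, "x")) for w in words]

# A template structure: (word, or None for a wildcard, required part of speech).
TEMPLATE = [("look", "v"), (None, "adv"), ("to", "prep")]

def match_template(tagged, template):
    """Return the matched word span if the template occurs in the tagged text."""
    n, m = len(tagged), len(template)
    for i in range(n - m + 1):
        window = tagged[i:i + m]
        if all((tw is None or tw == w) and tp == p
               for (w, p), (tw, tp) in zip(window, template)):
            return [w for w, _ in window]
    return None

tagged = segment_and_tag("We look forward to your reply.")
span = match_template(tagged, TEMPLATE)
```

The wildcard slot shows why the claims compare both the word segment and its part of speech: a template can fix some positions to literal words while constraining others only by part of speech.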
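Claims 4-5 (and 11-12) describe selecting text by a long press on a target area and then adjusting the selection by dragging its first and second boundaries. A schematic, UI-framework-free sketch of that state logic is below; the duration threshold, character-index boundaries, and word-snapping behavior are illustrative assumptions only:

```python
LONG_PRESS_MS = 500  # hypothetical value for the claims' "preset duration"

class Selection:
    """Tracks a selected region as [first, second) character indices."""

    def __init__(self, text):
        self.text = text
        self.first = self.second = None

    def on_press(self, position, duration_ms):
        # Long press: select the whitespace-delimited word at `position`.
        if duration_ms <= LONG_PRESS_MS:
            return
        start = self.text.rfind(" ", 0, position) + 1
        end = self.text.find(" ", position)
        self.first = start
        self.second = len(self.text) if end == -1 else end

    def on_drag(self, boundary, new_position):
        # Adjust the first or second boundary, clamped to the text and
        # prevented from crossing the opposite boundary.
        pos = max(0, min(len(self.text), new_position))
        if boundary == "first":
            self.first = min(pos, self.second)
        else:
            self.second = max(pos, self.first)

    def selected_text(self):
        return self.text[self.first:self.second]

sel = Selection("We look forward to your reply.")
sel.on_press(position=5, duration_ms=800)  # long press inside "look"
sel.on_drag("second", 18)                  # drag the second boundary rightwards
```

After the drag stops, the region between the adjusted boundaries is the selected area whose text is passed on for translation, matching the flow of claims 4-5.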
CN202110477869.5A 2021-04-29 2021-04-29 Translation processing method and device for translation processing Active CN113221582B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110477869.5A CN113221582B (en) 2021-04-29 2021-04-29 Translation processing method and device for translation processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110477869.5A CN113221582B (en) 2021-04-29 2021-04-29 Translation processing method and device for translation processing

Publications (2)

Publication Number Publication Date
CN113221582A true CN113221582A (en) 2021-08-06
CN113221582B CN113221582B (en) 2024-08-06

Family

ID=77090201

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110477869.5A Active CN113221582B (en) 2021-04-29 2021-04-29 Translation processing method and device for translation processing

Country Status (1)

Country Link
CN (1) CN113221582B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010056352A1 (en) * 2000-04-24 2001-12-27 Endong Xun Computer -aided reading system and method with cross-language reading wizard
US20020091509A1 (en) * 2001-01-02 2002-07-11 Yacov Zoarez Method and system for translating text
JP2003330926A (en) * 2002-05-14 2003-11-21 Nippon Telegr & Teleph Corp <Ntt> Translation method, device, and program
US20170011023A1 (en) * 2015-07-07 2017-01-12 Rima Ghannam System for Natural Language Understanding
CN108829686A (en) * 2018-05-30 2018-11-16 北京小米移动软件有限公司 Translation information display methods, device, equipment and storage medium
JP2018206356A (en) * 2017-06-08 2018-12-27 パナソニックIpマネジメント株式会社 Translation information providing method, translation information providing program, and translation information providing apparatus
CN109165389A (en) * 2018-07-23 2019-01-08 北京搜狗科技发展有限公司 A kind of data processing method, device and the device for data processing
CN112487157A (en) * 2019-09-12 2021-03-12 甲骨文国际公司 Template-based intent classification for chat robots


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
SI LI; JIA HUAN: "Research on Query Translation Disambiguation Methods and Techniques for Cross-Language Information Retrieval", Library Science Research, no. 20, 25 October 2015 (2015-10-25) *
SUN YUEHENG; DUAN NAN; HOU YUEXIAN: "Discontinuous Phrase Template Extraction and Its Application in Statistical Machine Translation", Computer Science, no. 10 *
LI YUJIAN; ZHONG YIXIN: "An English-Chinese Translation System Based on a General Template Matching and Replacement Method", Computer Engineering and Applications, no. 24, 15 December 2002 (2002-12-15) *

Also Published As

Publication number Publication date
CN113221582B (en) 2024-08-06

Similar Documents

Publication Publication Date Title
CN107436691B (en) Method, client, server and device for correcting errors of input method
CN107688399B (en) Input method and device and input device
CN111898388B (en) Video subtitle translation editing method and device, electronic equipment and storage medium
CN108304412B (en) Cross-language search method and device for cross-language search
CN108829686B (en) Translation information display method, device, equipment and storage medium
CN109471919B (en) Zero pronoun resolution method and device
CN107424612B (en) Processing method, apparatus and machine-readable medium
CN113343675A (en) Subtitle generating method and device for generating subtitles
CN107132927B (en) Input character recognition method and device for recognizing input characters
CN113033163B (en) Data processing method and device and electronic equipment
CN107784037B (en) Information processing method and device, and device for information processing
CN108628461B (en) Input method and device and method and device for updating word stock
CN110795014A (en) Data processing method and device and data processing device
CN112199032A (en) Expression recommendation method and device and electronic equipment
CN110858100B (en) Method and device for generating association candidate words
CN113221582B (en) Translation processing method and device for translation processing
CN111258691B (en) Input method interface processing method, device and medium
CN109388252B (en) Input method and device
CN110716653B (en) Method and device for determining association source
CN113534973B (en) Input method, device and device for inputting
CN112199033B (en) Voice input method and device and electronic equipment
CN113918030B (en) Handwriting input method and device for handwriting input
CN112668340B (en) Information processing method and device
CN111460836B (en) Data processing method and device for data processing
CN113918078A (en) Word-fetching method and device and word-fetching device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant